\section{Introduction}
A remarkable property of the Euclidean space of dimension at least
two, one that to a great extent compensates for the lack of partitions of
unity for holomorphic automorphisms, was discovered by Anders\'en and
Lempert in the early 1990s \cite{An90,AnLe92}; see also the work of
Forstneri\v{c} and Rosay \cite{FoRo93}. Since then, the theory of
Stein manifolds with a very large holomorphic automorphism group has been
called Anders\'en-Lempert theory.
The property was formalized by Varolin, who named it the density
property (DP). A Stein manifold $X$ has the DP if the Lie algebra
generated by complete holomorphic vector fields is dense (in the
compact-open topology) in the space of all holomorphic vector fields
on $X$. Recall that a vector field is called complete if its flow
exists for all complex time and all initial conditions.
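For instance, on $\CC^n$ the fields $\partial/\partial x_1$ and
$x_1\partial/\partial x_1$ are complete, with flows
$$(x_1,\ldots,x_n)\mapsto (x_1+t,x_2,\ldots,x_n)\quad\mbox{and}\quad
(x_1,\ldots,x_n)\mapsto (e^tx_1,x_2,\ldots,x_n)\,,$$
defined for all $t\in\CC$, while $x_1^2\partial/\partial x_1$ is not
complete, since its flow escapes to infinity in finite time.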
The DP allows one to construct (global) automorphisms of $X$ with
prescribed local properties. More precisely, any local phase flow on a
Runge domain in $X$ can be approximated by (global)
automorphisms. This has remarkable applications to geometric
questions in complex analysis; we refer the reader to the survey articles
\cite{Ro99,KK11,Ku14} and the recent book \cite{Forst11}. For smooth
affine algebraic varieties, the algebraic density property (ADP) was
also introduced by Varolin. The ADP implies the DP and is therefore
commonly used as a tool to prove the DP.
In this paper we generalize the ADP to not necessarily smooth affine
varieties relative to some closed subvariety containing the singular
locus, as follows: Let $X$ be an affine algebraic variety and let
$\Xsing$ be its singular locus. We also let $Y\subseteq X$ be an
algebraic subvariety of $X$ containing $\Xsing$ and let
$I=I(Y)\subseteq \CC[X]$ be the ideal of $Y$. Let $\VFalg(X,Y)$ be
the $\CC[X]$-module of vector fields vanishing on $Y$, i.e.,
$\VFalg(X,Y)=\{\partial \mid \partial(\CC[X])\subseteq I\}$. Let
$\Liealg(X,Y)$ be the Lie algebra generated by all the complete vector
fields in $\VFalg(X,Y)$.
\begin{definition} \label{ADP} %
We say that $X$ has the strong ADP relative to $Y$ if $\VFalg(X,Y) =
\Liealg(X,Y)$. Furthermore, we say that $X$ has the ADP relative to
$Y$ if there exists $\ell\geq 0$ such that $I^\ell\VFalg(X,Y)
\subseteq \Liealg(X,Y)$. With this definition, the ADP relative to
$Y$ with $\ell=0$ is just the strong ADP relative to $Y$. If we let
$Y=\Xsing$ we simply say that $X$ has the strong ADP or the ADP,
respectively.
\end{definition}
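For example, $\CC^n$ with $n\geq 2$ has the strong ADP: by the work of
Anders\'en and Lempert \cite{An90,AnLe92}, every algebraic vector field
on $\CC^n$ is a finite sum of complete algebraic vector fields (shear
and overshear fields).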
Except for the fact that we consider not necessarily smooth varieties,
the strong ADP is a version of Varolin's Definition~3.1 in
\cite{varolin01} of the DP for the Lie subalgebra of vector fields
vanishing on $Y$. For $\ell>0$, however, our property is slightly weaker
than Varolin's definition, since we generate the Lie subalgebra of
vector fields vanishing on $Y$ to order at least $\ell$ using complete
vector fields vanishing on $Y$ to possibly lower order than
$\ell$. Still, this version of the ADP has the same remarkable
consequences as Varolin's version of the ADP for the group of holomorphic
automorphisms of $X$ fixing $Y$ pointwise (see
Theorem~\ref{AL-Theorem}).
In this paper we investigate the ADP for toric varieties. Our first
main result is the following theorem (see Theorem~\ref{finalthm}).
\begin{theorem*}
Let $X$ be an affine toric variety of dimension at least two and let
$Y$ be a $\TT$-invariant closed subvariety of $X$ containing
$\Xsing$. Then $X$ has the ADP relative to $Y$ if and only if
$X\setminus Y\neq \TT$.
\end{theorem*}
Recall that every smooth affine toric variety is isomorphic to
$\CC^k\times (\CC^*)^{n-k}$. A special case of our theorem, where
$X=\CC^n$ and $Y$ is the union of up to $n-1$ coordinate hyperplanes,
has already been proven by Varolin \cite{varolin01}.
It is well known that every affine toric surface different from $\mathbb{C}^*\times\mathbb{C}$ or $\mathbb{C}^*\times\mathbb{C}^*$ is obtained as a
quotient of $\mathbb{C}^2$ by the action of a cyclic group. Let $d>e$ be
relatively prime positive integers. We denote by $V_{d,e}$ the toric
surface obtained as the quotient of $\mathbb{C}^2$ by the $\mathbb{Z}_d$-action
$\zeta\cdot(u,v) = (\zeta u, \zeta^e v)$, where $\zeta$ is a primitive
$d$-th root of unity. The following theorem is our second main result
(see Corollary~\ref{Z-ADP}).
\begin{theorem*}
$V_{d,e}$ has the strong ADP if and only if $e$ divides $d+1$ and
$e^2 \neq d + 1$.
\end{theorem*}
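To illustrate the statement: $V_{2,1}$ has the strong ADP, since $e=1$
divides $d+1=3$ and $e^2=1\neq 3$, whereas $V_{3,2}$ does not, since
there $e^2=4=d+1$.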
Furthermore, for every affine toric surface our methods allow us to
determine the values of $\ell$ from Definition~\ref{ADP} for which
$I^\ell\VFalg(X,\Xsing) \subseteq \Liealg(X,\Xsing)$. The main
ingredient in the proof of this theorem is an equivariant version of
Brunella's famous classification of complete algebraic vector fields
on the affine plane (see \cite{Br3}) or, equivalently, a classification
of complete algebraic vector fields on affine toric surfaces (see
Theorem~\ref{thmlist}). This result might be of independent interest.
\section{Vector fields and the algebraic density property}
In this section we establish a general method for proving the ADP
that we will later use to show the ADP for toric varieties.
\begin{definition}
Let $X$ be an affine algebraic variety and $Y$ be a subvariety
containing $\Xsing$.
\begin{enumerate}[$(i)$]
\item Let $\Aut(X,Y)$ be the subgroup of automorphisms of $X$
stabilizing $Y$. We say that $X$ is homogeneous with respect to
$Y$ if $\Aut(X,Y)$ acts transitively on $X\setminus Y$.
\item Let $x_0\in \Xreg$. A finite subset $M$ of the tangent
space $T_{x_0}X$ is called a generating set if the image of $M$
under the action of the isotropy group of $x_0$ in $\Aut(X,Y)$
generates the whole tangent space $T_{x_0}X$.
\end{enumerate}
\end{definition}
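For example, let $X=\CC^n$ and $Y=\emptyset$. Then $X$ is homogeneous
with respect to $Y$, and any single nonzero vector $v\in T_{x_0}\CC^n$
is a generating set: the isotropy group of $x_0=0$ in $\Aut(\CC^n)$
contains $GL_n(\CC)$, and the orbit of $v$ under $GL_n(\CC)$ spans
$T_{x_0}\CC^n$.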
The following is our main tool to establish the ADP for toric
varieties. It is a generalization of \cite[Theorem~1]{KaKu08}.
\begin{theorem}\label{thm}
Let $X$ be an algebraic variety homogeneous with respect to some
subvariety $Y\supseteq \Xsing$. Let also $L$ be a finitely generated
submodule of the $\CC[X]$-module $\VFalg(X,Y)$ of vector fields
vanishing on $Y$. Assume that $L\subseteq\Liealg(X,Y)$. If the fiber
of $L$ over some $x_0\in X\setminus Y$ contains a generating set,
then $X$ has the ADP relative to $Y$.
\end{theorem}
\begin{proof}
Let $\{\partial_i\}$ be a finite set of vector fields in $L$ such
that $\{\partial_i[x_0]\}$ is a generating set. Let now
$\{\beta_j\}\subseteq \Aut(X,Y)$ be a finite collection of
automorphisms fixing $x_0$ such that
$\{\beta_j^*(\partial_i)[x_0]\}$ span the tangent space at $x_0$.
Since a change of coordinates does not affect completeness of a vector
field, for $\beta\in \Aut(X,Y)$, the finitely generated module
$L_\beta=\beta^*(L)$ is again contained in $\Liealg(X,Y)$. By
replacing $L$ with $\bigoplus_{j} L_{\beta_j}$, we can assume that
$\{\partial_i[x_0]\}$ span the tangent space at $x_0$.
We let $A_1=\{x\in X\setminus Y\mid \spann(\partial_i[x])\neq
T_{x}X\}$. We also let $A_1=\bigcup A_1^j$ be the decomposition of
$A_1$ in irreducible components and we pick $x_j\in A_1^j$. Since
$X$ is homogeneous with respect to $Y$, we can choose $\alpha_j\in
\Aut(X,Y)$ sending $x_0$ to $x_j$. We also put
$\alpha_0=\operatorname{Id}$. Let now
$$A_2=\big\{x\in X\setminus
Y\mid \spann\{\alpha_j^*(\partial_i)[x]\mid \forall i,j\}\neq
T_{x}X\big\}\,.$$ %
By construction $\dim A_1>\dim A_2$ and so we can proceed by
induction on dimension to obtain a finite collection of
automorphisms $\alpha_j\in \Aut(X,Y)$ such that the collection
$\{\alpha_j^*(\partial_i)[x]\}$ span the tangent space at every
point $x\in X\setminus Y$.
We let $E=\bigoplus_{j} L_{\alpha_j}$. With the same argument as
before, $E$ is a finitely generated $\CC[X]$-submodule of
$\VFalg(X,Y)$ contained in $\Liealg(X,Y)$. By construction, we have
that the fiber of $\widetilde{E}:=\VFalg(X,Y)\big/E$ at every $x\in
X\setminus Y$ is trivial. Hence, the support of $\widetilde{E}$ is
contained in $Y$.
We define
$$J=\operatorname{Ann}_{\CC[X]}\widetilde{E}:=\left\{f\in \CC[X]\mid
fa=0\mbox{ for all } a\in \widetilde{E}\right\}\,.$$ %
By construction $J\widetilde{E}=0$. This yields
$J\VFalg(X,Y)\subseteq E$. Furthermore, by \cite[Ch. II Ex
5.6]{Har77} we have that $V(J)\subseteq Y$. Recall that $I$ is the
ideal of $Y$ and let $J'=J\cap I$ so that $V(J')=Y$. Let now $\{a_i\}$
be a finite set of generators of $I$. Since
$\operatorname{rad}(J')=I$, there exists $\ell_i$ such
that $a_i^{\ell_i}\in J'$ for all $i$. Letting
$\ell=1+\sum_i(\ell_i-1)$ we obtain
$$I^\ell\subseteq J'\subseteq J\quad\mbox{and so}\quad
I^\ell\VFalg(X,Y)\subseteq J\VFalg(X,Y)\subseteq E\subseteq
\Liealg(X,Y)\,.$$ %
Hence the theorem follows.
\end{proof}
\section{The algebraic density property for affine toric varieties}
We first recall the basic facts from toric geometry that will be
needed in this section. They can be found in any text about toric
geometry such as \cite{Fu93,Oda88,CLS}.
Let $M$ and $N$ be mutually dual lattices of rank $n$ with duality
pairing $M\times N\rightarrow \ZZ$, where $(m,p)\mapsto \langle
m,p\rangle=p(m)$. We also let $M_\QQ=M\otimes_\ZZ \QQ$ and
$N_\QQ=N\otimes_\ZZ \QQ$. We let $\TT$ be the algebraic torus
$\TT=\spec\CC[M]=N\otimes_\ZZ\CC^*$. A toric variety is a normal
variety endowed with an effective action of $\TT$ having an open
orbit. Since the $\TT$-action is effective, the open orbit can be
identified with $\TT$.
It is well known that affine toric varieties can be described by means
of strongly convex polyhedral cones (pointed cones) in the vector
space $N_\QQ$. Indeed, let $\sigma$ be a pointed cone in $N_\QQ$, then
$X_\sigma=\spec \CC[\sigma^\vee\cap M]$ is an affine toric variety and
every affine toric variety arises this way. Here $\CC[\sigma^\vee\cap
M]$ is the semigroup algebra $\CC[\sigma^\vee\cap M]=\bigoplus_{m\in
\sigma^\vee\cap M}\CC\chi^m$. In the following, we denote
$\sigma^\vee\cap M$ by $\sigma^\vee_ M$.
There is a one to one correspondence between the faces $\tau$ of the
cone $\sigma$ and the orbits $\OO(\tau)$ of the $\TT$-action on
$X_\sigma$ (usually called the Orbit-Cone correspondence). The
dimension of an orbit is given by $\dim \OO(\tau)=\rank N-\dim \tau$
and its closure is given by $\overline{\OO(\tau)}=\bigcup_\delta
\OO(\delta)$ where $\delta$ runs over all faces of $\sigma$ containing
$\tau$. The ideal $I(\tau)$ of an orbit closure $\overline{\OO(\tau)}$
is given by
$$I(\tau)=\bigoplus_{m\in \sigma^\vee_M\setminus
\tau^\bot}\CC\chi^m\,$$ %
where $\tau^\bot\subseteq M_\QQ$ is the orthogonal of
$\tau$. Furthermore, the ideal of $X\setminus \TT$ is
$$I(X\setminus \TT)=\bigoplus_{m\in (\relint\sigma^\vee)\cap
M}\CC\chi^m\,,$$ %
where $\relint$ denotes the relative interior.
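For illustration, let $n=2$ and let $\sigma\subseteq N_\QQ$ be the cone
spanned by $\rho_1=(1,0)$ and $\rho_2=(0,1)$. Then $\sigma^\vee$ is the
first quadrant of $M_\QQ$ and $X_\sigma=\spec\CC[x,y]=\CC^2$, where
$x=\chi^{(1,0)}$ and $y=\chi^{(0,1)}$. The face $\tau=\{0\}$ corresponds
to the open orbit $(\CC^*)^2$, the ray $\rho_1$ to the one-dimensional
orbit $\{x=0,\,y\neq 0\}$ with $I(\rho_1)=(x)$, and $\sigma$ itself to
the fixed point $(0,0)$.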
As usual, we identify a ray $\rho\subseteq\sigma$ with its primitive
vector. The set of all the rays of $\sigma$ is denoted by
$\sigma(1)$. A cone $\sigma$ is called smooth if $\sigma(1)$ is part
of a basis of the lattice $N$. Let $\tau\subseteq \sigma$ be any
face. The orbit $\OO(\tau)$ is contained in $\Xreg$ if and only if
$\tau$ is smooth.
Let now $e\in M$ and $p\in N$. The linear map
$$\partial_{e,p}:\CC[M]\rightarrow\CC[M],\quad \chi^m\mapsto
\langle m,p\rangle\cdot\chi^{m+e}$$ is a homogeneous derivation of the
algebra $\CC[M]$ and so it is a homogeneous vector field on
$\TT=\spec\CC[M]$. By the exponential map, the tangent
space of $\TT=N\otimes_\ZZ\CC^*$ at the identity $\mathfrak{e}\in \TT$
is isomorphic to $N\otimes_\ZZ\CC$ and the evaluation of the vector
field $\partial_{e,p}$ at the smooth point $\mathfrak{e}$ is
$\partial_{e,p}[\mathfrak{e}]=p$.
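Concretely, if $x_i=\chi^{\beta_i}$ are the coordinates on $\TT$
corresponding to a basis $\{\beta_i\}$ of $M$ and
$p=(p_1,\ldots,p_n)$ is written in the dual basis of $N$, then the
formula $\partial_{e,p}(\chi^m)=\langle m,p\rangle\chi^{m+e}$ shows that
$$\partial_{e,p}=\chi^e\cdot\sum_{i=1}^n p_i\, x_i\frac{\partial}{\partial x_i}\,,$$
which indeed evaluates to $p$ at $\mathfrak{e}=(1,\ldots,1)$.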
Let $\sigma\subseteq N_\QQ$ be a pointed cone. The following
proposition gives a description of all the homogeneous vector fields
on $X_\sigma$. The first statement of the following result can be
found in \cite{Dem70}. For the convenience of the reader we provide a
short argument.
\begin{proposition}
The homogeneous vector field $\partial_{e,p}$ on $\TT$ extends to a
homogeneous vector field in $X_\sigma$ if and only if
\begin{enumerate}[Type I:]
\item $e\in\sigma^\vee_M$, or
\item There exists $\rho_e\in \sigma(1)$ such that
\begin{enumerate}
\item $p\in \ZZ \rho_{e}$,
\item $\langle e,\rho_e\rangle=-1$, and
\item $\langle e,\rho\rangle\geq 0$ for all
$\rho\in\sigma(1)\setminus \{\rho_e\}$.
\end{enumerate}
\end{enumerate}
Furthermore, $\partial_{e,p}$ is locally nilpotent if and only if it
is of type II, and $\partial_{e,p}$ is semisimple if and only if it
is of type I and $e=0$.
\end{proposition}
\begin{proof}
The vector field $\partial_{e,p}$ extends to $X_\sigma$ if and only
if $\partial_{e,p}(\CC[\sigma^\vee_M])\subseteq
\CC[\sigma^\vee_M]$. Since $\CC[\sigma^\vee_M]$ is spanned by
$\chi^m$ for all $m\in \sigma^\vee_M$, it is enough to show that
$\partial_{e,p}(\chi^m)\in \CC[\sigma^\vee_M]$. In combinatorial
terms, this corresponds to the condition:
\begin{align}
\label{eq:1}
\mbox{For every } m\in \sigma^\vee_M\setminus p^\bot,
\mbox{ we have }
\langle m+e, \rho\rangle\geq 0\mbox{ for all } \rho\in
\sigma(1)\,.
\end{align}
Assume first that $p$ is not proportional to any $\rho\in
\sigma(1)$. Then for every $\rho\in \sigma(1)$ there exists $m\in
\sigma^\vee_M$ such that $\langle \rho,m\rangle=0$ and $\langle
p,m\rangle\neq 0$. Hence, \eqref{eq:1} implies that $\langle
\rho,e\rangle\geq 0$ and so $\partial_{e,p}$ is of type I.
Assume now that there exists $\rho_e\in \sigma(1)$ such that $p\in
\ZZ \rho_{e}$. With the same argument as above we can show that
$\langle \rho,e\rangle\geq 0$ for all $\rho\in
\sigma(1)\setminus\{\rho_e\}$. Let now $m\in \sigma^\vee_M$ such
that $\langle \rho_e, m\rangle=1$. Then \eqref{eq:1} implies that
$\langle \rho_e,m+e\rangle \geq 0$. This yields $\langle
\rho_e,e\rangle \geq -1$. If $\langle \rho_e,e\rangle=-1$ then
$\partial_{e,p}$ is of type II. If $\langle \rho_e,e\rangle>-1$ then
$\langle \rho_e,e\rangle\geq 0$ and $\partial_{e,p}$ is of type I.
\smallskip
To prove the second assertion, we let $\partial=\partial_{e,p}$ be a
homogeneous vector field. A straightforward computation shows that
\begin{align}
\label{eq:2}
\partial^{\ell+1}(\chi^m)=\langle m+\ell e,p\rangle
\cdot \partial^\ell(\chi^m)\cdot \chi^e\,.
\end{align}
Assume first that $\partial$ is of type I and that $e\in
\sigma^\vee_M\setminus \{0\}$. If $\langle e,p\rangle\neq 0$ then
\eqref{eq:2} yields
$$\partial^{\ell}(\chi^e)=\ell!\cdot\langle
e,p\rangle^\ell\cdot\chi^{(\ell+1) e}\neq 0\,,$$ and so $\partial$ is
not locally finite since $\spann\{\chi^{k e}\mid k\in \ZZ_{\geq
0}\}$ is not finite dimensional. If $\langle e,p\rangle=0$ then
let $m\in \sigma_M^\vee$ be such that $\langle m,p\rangle\neq 0$. In
this case \eqref{eq:2} implies
$$\partial^{\ell}(\chi^m)=\langle
m,p\rangle^\ell\cdot\chi^{m+\ell e}\neq 0\,,$$ and again $\partial$ is
not locally finite by a similar argument.
Assume now that $\partial$ is of type I and that $e=0$. The vector
field $\partial$ is the infinitesimal generator of the algebraic
$\CC^*$-action on $X_\sigma$ given by the $\ZZ$-grading on
$\CC[\sigma_M^\vee]$ induced by the degree function
$\deg(\chi^m)=\langle p,m\rangle$. Hence, the vector field
$\partial$ is semisimple.
Finally, assume that $\partial$ is of type II. For every $m\in
\sigma^\vee_M$ we let $\ell=\langle m,\rho_e\rangle$. Now,
$\partial_{e,p}$ is locally nilpotent since
$\partial_{e,p}^{\ell+1}(\chi^m)=0$ by \eqref{eq:2}.
\end{proof}
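For illustration, let $X_\sigma=\CC^2$ with $\sigma$ spanned by
$\rho_1=(1,0)$ and $\rho_2=(0,1)$, and let $x=\chi^{(1,0)}$,
$y=\chi^{(0,1)}$. For $e=(-1,k)$ with $k\geq 0$ and $p=\rho_1$ the
conditions of type II are satisfied, and indeed
$\partial_{e,p}=y^{k}\partial/\partial x$ is locally nilpotent. For
$e=0$ and $p=(1,0)$ we obtain the semisimple type I field
$x\partial/\partial x$.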
In the following corollary, we give an explicit description of the
homogeneous complete vector fields on an affine toric
variety.
\begin{corollary} \label{cor-gl-int}
The vector field $\partial_{e,p}$ is complete if and only
if it is of type II, or it is of type I and $\langle e,p\rangle=0$.
\end{corollary}
\begin{proof}
The vector fields of type II are locally nilpotent, hence
complete. In the following, we assume that $\partial=\partial_{e,p}$
is of type I. First, assume that $\langle e,p\rangle=0$. Then
$\partial=\chi^e\cdot \partial_{0,p}$ and since $\chi^e$ belongs to
the kernel of $\partial_{0,p}$, we have that $\partial$ is complete.
Assume now that $\langle p,e\rangle\neq 0$. Let $I$ be the ideal of
$X\setminus \TT$, i.e.,
$$I=\bigoplus_{m\in \relint(\sigma^\vee)\cap M}\CC\chi^m\,.$$
Since $e\in \sigma^\vee_M$, we have that $\partial(I)\subseteq I$.
Hence, $X\setminus \TT$ is invariant by $\partial_{e,p}$ and so
$\TT$ is also invariant by $\partial_{e,p}$. In the following, we
show that $\partial$ is not complete when restricted to $\TT$. Since
$\lambda\partial$, $\lambda\in \CC^*$ is complete if and only if
$\partial$ is complete, we will assume that $p$ is a primitive
vector in $N$ and $\langle e,p\rangle>0$.
Without loss of generality, we choose mutually dual bases of $N$ and
$M$ such that $p=(1,0,\ldots,0)$ and $e=(e_1,\ldots,e_n)$, with
$e_1>0$ and $n=\rank N$. We denote by $x_i=\chi^{\beta_i}$
the standard coordinates of the torus $\TT$, where $\{\beta_i\mid
i=1,\ldots, n\}$ is the basis of $M$. In these coordinates, the vector
field $\partial$ restricted to $\TT$ is given by
$$\partial=x_1^{e_1+1}x_2^{e_2}\cdots
x_n^{e_n}\frac{\partial}{\partial x_1}\,,$$ which is not complete on
$\TT$ since $e_1>0$. Indeed, the vector fields $x^n\partial/\partial x$ on $\mathbb{C}$ are not complete for $n\geq2$.
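For instance, for $n=2$ the flow of $x^2\partial/\partial x$ with initial
condition $x(0)=x_0\neq 0$ solves $\dot{x}=x^2$, so
$x(t)=x_0/(1-tx_0)$ blows up at the finite complex time $t=1/x_0$; the
same phenomenon occurs for every $n\geq 2$.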
\end{proof}
Remark that in Corollary~\ref{cor-gl-int} the complete vector fields of
type I are extensions of complete vector fields on the big torus $\TT$,
while the complete vector fields of type II are locally nilpotent and
their restrictions to $\TT$ are not complete. In the next lemma, we give
a criterion for a homogeneous vector field to vanish on an orbit closure.
\begin{lemma} \label{lm:vanish} Let $\partial_{e,p}$ be a non-zero
homogeneous vector field on $X_\sigma$ and let $\tau\subseteq
\sigma$ be a face. Then $\partial_{e,p}$ vanishes at the orbit
closure $\overline{\OO(\tau)}$ if and only if
\begin{enumerate}[Type I:]
\item $p\in\Span\tau$ or $\langle e,\rho\rangle>0$ for some $\rho\in
\tau(1)$.
\item $\langle e,\rho\rangle>0$ for some $\rho\in \tau(1)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The vector field $\partial_{e,p}$ does not vanish at the orbit
closure $\overline{\OO(\tau)}$ if and only if
$\partial_{e,p}\left(\CC[\sigma_M^\vee]\right)\not\subseteq
I(\tau)$. In combinatorial terms this happens if and only if
\begin{align}
\label{eq:4}
\mbox{there exists }m \in \sigma^\vee_M\setminus p^\bot \mbox{
such that } \langle m+e,\rho\rangle=0 \mbox{ for all } \rho\in
\tau(1)\,.
\end{align}
\noindent \textbf{Case of type I.} In this case, we have
$e\in\sigma^\vee_M$, so $\langle m+e,\rho\rangle=0$ for all
$\rho\in\tau(1)$ if and only if $\langle m,\rho\rangle=0$ and
$\langle e,\rho\rangle=0$ for all $\rho\in\tau(1)$. This is the case
if and only if $m\in\tau^\bot$ and $e\in\tau^\bot$. Such an
$m\in\sigma^\vee_M\setminus p^\bot$ exists if and only if
$\tau^\bot\not\subseteq p^\bot$, i.e., if and only if $p\notin \Span
\tau$. Hence, we conclude that $\partial_{e,p}$ does not vanish at
the orbit closure $\overline{\OO(\tau)}$ if and only if $p\notin
\Span \tau$ and $\langle e,\rho\rangle=0$ for all $\rho\in \tau(1)$.
\medskip\noindent \textbf{Case of type II.} In this case we have
that there exists $\rho_e\in \sigma(1)$ such that $p\in \ZZ
\rho_e\setminus \{0\}$,
$\langle e,\rho_e\rangle=-1$, and $\langle e,\rho\rangle\geq 0$ for
all $\rho\in \sigma(1)\setminus \{\rho_e\}$.
Assume first that $\rho_e\notin \tau(1)$. An argument similar to
case I yields that $\partial_{e,p}$ does not vanish at the orbit
closure $\overline{\OO(\tau)}$ if and only if $p\notin \Span \tau$
and $\langle e,\rho\rangle=0$ for all $\rho\in \tau(1)$. Since
$\rho_e\notin \tau(1)$, we have that $p\notin \Span \tau$ and so the
vector field $\partial_{e,p}$ does not vanish at the orbit closure
$\overline{\OO(\tau)}$ if and only if $\langle e,\rho\rangle=0$ for
all $\rho\in \tau(1)$.
Assume now that $\rho_e\in \tau(1)$. If there exists $\rho\in
\tau(1)$ such that $\langle e,\rho\rangle>0$, then $\langle
m+e,\rho\rangle>0$ for all $m\in \sigma^\vee_M$ and so
$\partial_{e,p}$ vanishes at the orbit $\overline{\OO(\tau)}$ by
\eqref{eq:4}. Assume $\langle e,\rho\rangle=0$ for all $\rho\in
\tau(1)\setminus \{\rho_e\}$ and let $m\in \sigma^\vee_M$ be such
that $\langle m,\rho_e\rangle=1$ and $\langle m,\rho\rangle=0$ for
all $\rho\in \tau(1)\setminus \{\rho_e\}$. We have $\langle
m,\rho_e\rangle\neq 0$ so $m\notin p^\bot$ and $\langle
m+e,\rho\rangle=0$ for all $\rho\in \tau(1)$. By \eqref{eq:4}, we
conclude that $\partial_{e,p}$ does not vanish at the orbit closure
$\overline{\OO(\tau)}$.
\end{proof}
\begin{remark}
The degree of a homogeneous locally nilpotent vector field (of type
II) is called a root of $\sigma$. The set of all roots of $\sigma$
is denoted by $\RR(\sigma)$. For a root $e\in\RR(\sigma)$, the ray
$\rho_e$ is called the distinguished ray of $e$, and the $\GA$-action
generated by the homogeneous locally nilpotent vector field
$\partial_{e,\rho_e}$ is denoted by $H_e$.
\end{remark}
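For example, for the cone $\sigma$ spanned by $(1,0)$ and $(0,1)$, so
that $X_\sigma=\CC^2$, the roots are $e=(-1,k)$ and $e=(k,-1)$ with
$k\geq 0$, with distinguished rays $(1,0)$ and $(0,1)$,
respectively. The corresponding $\GA$-actions $H_e$ are the shears
$(x,y)\mapsto (x+ty^k,y)$ and $(x,y)\mapsto (x,y+tx^k)$.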
In order to show the ADP for toric varieties, we need to show that
$X_\sigma$ is homogeneous with respect to some closed subvariety
$Y$. In \cite{AKZ10}, the authors prove that $X_\sigma$ is homogeneous
with respect to $\Xsing_\sigma$. In fact, they show that the group of
special automorphisms acts infinitely transitively on the complement of
$\Xsing_\sigma$. In the following, we show how their methods can
be applied to prove that $X_\sigma$ is homogeneous with respect to any
$\TT$-invariant closed subvariety $Y$.
\begin{proposition} \label{rel-hom}
Let $\sigma\subseteq N_\QQ$ be a pointed cone and let $Y$ be any
$\TT$-invariant closed subvariety of $X_\sigma$ containing
$\Xsing_\sigma$. Then $X_\sigma$ is homogeneous with respect to $Y$.
\end{proposition}
\begin{proof}
Using the $\TT$-action and the Orbit-Cone correspondence, to prove
the theorem it is enough to find, for every orbit $\OO(\tau)$ in
$\Xreg_\sigma$ different from the open orbit, an automorphism that
\begin{enumerate}[$(i)$]
\item sends a point $x$ in $\OO(\tau)$ into an
orbit of higher dimension, and
\item leaves stable every orbit not containing $\OO(\tau)$ in its
closure.
\end{enumerate}
Let $\rho_1,\ldots,\rho_\ell$ be the rays of $\tau$. In
\cite[Lemma~2.3]{AKZ10} and its proof, the authors show that for
every smooth orbit $\OO(\tau)$ there exists a root $e\in \RR(\sigma)$
such that
\begin{align}
\label{eq:3}
\langle \rho_1,e\rangle=-1,\ \langle \rho_2,e\rangle=\ldots=
\langle \rho_\ell,e\rangle=0,\mbox{ and } \langle \rho,e\rangle>0\
\mbox{ for all rays }\rho\notin\tau(1)\,.
\end{align}
Furthermore, they show that a generic automorphism $\alpha$ in the
$\GA$-action $H_e$ corresponding to the root $e$ satisfies $(i)$.
Let $\OO(\delta)$ be any orbit that does not contain $\OO(\tau)$ in
its closure. In combinatorial terms, this means that $\delta$ is a
face of $\sigma$ that is not contained in $\tau$. We claim that
$H_e$ leaves $\overline{\OO(\delta)}$ point-wise invariant and so
$\alpha$ satisfies $(ii)$ which proves the proposition.
In terms of the vector field $\partial_{e,\rho_e}$, our claim is
equivalent to the statement that $\partial_{e,\rho_e}$ vanishes at
$\overline{\OO(\delta)}$. Since $\delta$ is not contained in $\tau$,
there exists a ray $\rho$ of $\delta$ that is not a ray of
$\tau$. By \eqref{eq:3} we have $\langle e,\rho\rangle>0$. Now the
claim follows from Lemma \ref{lm:vanish}.
\end{proof}
For our next theorem we need the following lemma that follows by
direct computation.
\begin{lemma} \label{commutator} %
Let $\partial_{e_1,p_1}$ and $\partial_{e_2,p_2}$ be two homogeneous
vector fields. Then
$\left[\partial_{e_1,p_1},\partial_{e_2,p_2}\right]=\partial_{e,p}$,
where $p=p_1(e_2)\cdot p_2-p_2(e_1)\cdot p_1$ and $e=e_1+e_2$.
\end{lemma}
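As a check, on $\CC^2$ with $x=\chi^{(1,0)}$ and $y=\chi^{(0,1)}$, take
$\partial_{e_1,p_1}=y\partial/\partial x$ with $e_1=(-1,1)$,
$p_1=(1,0)$, and $\partial_{e_2,p_2}=x\partial/\partial y$ with
$e_2=(1,-1)$, $p_2=(0,1)$. The lemma gives $e=0$ and
$p=p_1(e_2)\cdot p_2-p_2(e_1)\cdot p_1=(-1,1)$, so
$$\left[y\frac{\partial}{\partial x},x\frac{\partial}{\partial y}\right]
=\partial_{0,(-1,1)}=-x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}\,,$$
in agreement with the direct computation.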
\begin{theorem} \label{finalthm}%
Let $X$ be an affine toric variety of dimension at least two and let
$Y$ be a $\TT$-invariant closed subvariety of $X$ containing
$\Xsing$. Then $X$ has the ADP relative to $Y$ if and only if
$X\setminus Y\neq \TT$.
\end{theorem}
\begin{proof}
Let $X=X_\sigma$ be the toric variety given by the pointed cone
$\sigma\subseteq N_\QQ$ and assume that $X_\sigma\setminus Y\neq \TT$. There is at
least one codimension-one $\TT$-orbit not contained in $Y$. Assume
it is $\OO(\rho_1)$ for some ray $\rho_1\in \sigma(1)$. Let $e_1$ be
a root with $\rho_1$ as distinguished ray. By \eqref{eq:3}, we can
assume that $\langle e_1,\rho\rangle>0$ for all $\rho\in
\sigma(1)\setminus \{\rho_1\}$. By Lemma~\ref{lm:vanish}, the
locally nilpotent vector field $\partial_{e_1,\rho_1}$ vanishes at
$Y$ and so $\partial_{e_1,\rho_1}\in \VFalg(X_\sigma,Y)$.
Letting $e_2,e_3\in \relint(\sigma^\vee)\cap M$ be such
that $e_3=e_1+e_2$, we let
$$L=\Span\left\{\partial_{e,p}\mid p\in N, e\in e_3+\sigma^\vee_M\right\}\,.$$
The set $L$ is contained in $\VFalg(X_\sigma,Y)$ since
$\partial_{e,p}\in L$ vanishes in $X_\sigma\setminus \TT$. In fact,
$L$ is a submodule of $\VFalg(X_\sigma,Y)$ since for every $m\in
\sigma^\vee_M$ and every $\partial_{e,p}\in L$, we have
$\chi^m\partial_{e,p}=\partial_{e+m,p}\in L$. Furthermore, the
fiber over the identity $\mathfrak{e}\in \TT\subseteq X_\sigma$ is
given by
\begin{align} \label{spanning}
L_\mathfrak{e}=\Span
\{\partial_{e,p}[\mathfrak{e}]\mid \partial_{e,p}\in
L\}=\Span\{p\mid \partial_{e,p}\in L\}=N\otimes_\ZZ
\CC=T_\mathfrak{e}X_\sigma\,,
\end{align} %
and so $L_\mathfrak{e}$ contains a generating set. We claim that
$L\subseteq \Liealg(X_\sigma,Y)$. Hence $X_\sigma$ has the ADP
relative to $Y$ by Theorem~\ref{thm} and Proposition~\ref{rel-hom}.
By Corollary~\ref{cor-gl-int}, the vector field $\partial_{e,p}$ is
complete if $\langle e,p\rangle=0$. Hence, to prove our
claim it is enough to show that for every $e\in e_3+\sigma^\vee_M$,
there exists $p\in N$ such that $\langle e,p\rangle\neq 0$ and
$\partial_{e,p}\in \Liealg(X_\sigma,Y)$.
Indeed, let $e_4=e-e_1$ and choose $p_4$ such that $\langle
e_4,p_4\rangle=0$ and $\langle e_1,p_4\rangle\neq 0$, which implies
that $\partial_{e_4,p_4}$ belongs to $\Liealg(X_\sigma,Y)$. This is
possible since $e_4$ lies in $\relint\sigma^\vee$ and $e_1$ is a
root of $\sigma$. By Lemma~\ref{commutator} we have
$$\left[\partial_{e_1,\rho_1},\partial_{e_4,p_4}\right]=
\partial_{e,p}\quad\mbox{where}\quad p=\rho_1(e_4)\cdot
p_4-p_4(e_1)\cdot \rho_1\,.$$ %
A routine computation shows that
$$\langle e,p\rangle=\
\langle e,\rho_1(e_4)\cdot p_4-p_4(e_1)\cdot \rho_1\rangle=\langle
e_1,p_4\rangle\neq 0\,,$$ proving the claim.
\medskip
Assume now that $X\setminus Y= \TT$. The converse of the theorem
follows from the fact that for all affine toric varieties $X$ and
all $\ell\in \ZZ_{>0}$ there is a vector field $\partial\in
I^\ell\VFalg(X,X\setminus \TT)\setminus \Liealg(X,X\setminus \TT)$,
where $I=I(X\setminus \TT)$. Indeed, Anders\'en \cite{Andersen00}
proved that any complete algebraic vector field on $\TT$
preserves the Haar form
$$\omega=\frac{dx_1}{x_1}\wedge\ldots\wedge \frac{dx_n}{x_n}\,.$$
Thus if we find $\partial$ in $I^\ell\VFalg(X,X\setminus \TT)$ whose
restriction to $\TT$ does not preserve $\omega$ we are done.
After a change of coordinates one can assume that
$(1,0,\ldots,0)\in\relint\sigma^\vee$. Then
$\partial=x_1^N\frac{\partial}{\partial x_1}$ is a regular vector
field on $X$ contained in $I^\ell\VFalg(X,X\setminus \TT)$ for $N$
big enough which does not preserve $\omega$.
\end{proof}
\begin{remark}
L\'arusson proved in \cite{La11,For13} that all smooth toric
varieties are Oka-Forstneri\v{c} manifolds, however it is still
unknown if they are elliptic, see \cite{Forst11,Ku14} for
definitions. The proof of Theorem~\ref{finalthm} can be adapted to
prove the following: every smooth quasi-affine toric variety is
elliptic (and thus an Oka-Forstneri\v{c} manifold). Indeed, the
torus $\TT$ is well known to be elliptic. Let $X_0$ be a smooth
quasi-affine toric variety different from $\TT$. Let also $X$ be an
affine toric variety such that $X_0\subseteq X$ is an equivariant
open embedding and let $Y=X\setminus X_0$. Now,
Proposition~\ref{rel-hom} and \eqref{spanning} imply that $X_0$ is
elliptic \cite[Example~5.5.13~(B)]{Forst11}.
\end{remark}
\section{Classification of complete vector fields on affine toric
surfaces}
In this section we classify all complete algebraic vector
fields on a given affine toric surface $X_\sigma$. The classification
works essentially the same as the
classification of complete vector fields on $\mathbb{C}^2$ done by
Brunella \cite{Br3}.
From now on we use the fact that each affine toric surface
different from $\mathbb{C}^*\times\mathbb{C}$ or $\mathbb{C}^*\times\mathbb{C}^*$ can be seen as the
quotient of $\mathbb{C}^2$ by the action of a cyclic group. Let $d$ be the
order of the group, let $e$ be coprime to $d$ with $0<e<d$, and
consider the action of $\mathbb{Z}_d$ given by $\zeta\cdot(u,v) = (\zeta u,
\zeta^e v)$, where $\zeta$ is a primitive $d$-th root of unity. We
obtain the projection $\pi: \mathbb{C}^2 \rightarrow \mathbb{C}^2/\mathbb{Z}_d =: V_{d,e}$
onto our toric surface, which is a covering of $V_{d,e}$
ramified only over the unique singular point. Clearly, each vector
field on $V_{d,e}$ pulls back to an invariant vector field on $\mathbb{C}^2$
via the fiber-wise isomorphism $D\pi$ on the tangent spaces. In
particular, a complete vector field on $V_{d,e}$ pulls back to an
invariant complete vector field on $\mathbb{C}^2$.
\begin{definition}
Let $f: \mathbb{C}^2 \rightarrow \mathbb{C}$ be a regular function on $\mathbb{C}^2$. The
function $f$ is called $\mathbb{Z}_d$-preserved if the fibers of $f$ are
sent to fibers of $f$ by the $\mathbb{Z}_d$-action. It is called
$\mathbb{Z}_d$-homogeneous of degree $[i]\in\mathbb{Z}_d$ if $\zeta^*f(u,v)=
f(\zeta\cdot(u,v)) = \zeta^if(u,v)$ for all $(u,v)\in\mathbb{C}^2$. Letting
$A_{[i]}$ denote the space of $\mathbb{Z}_d$-homogeneous polynomials of
degree $[i]$, we obtain a decomposition of the ring of regular
functions on $\mathbb{C}^2$ into $\mathbb{Z}_d$-homogeneous parts $\mathbb{C}[u,v] = A_{[0]}
\oplus \ldots \oplus A_{[d-1]}$. In particular, $A_{[0]}$ is the ring
of invariant functions $\mathbb{C}[u,v]^{\mathbb{Z}_d}=\mathbb{C}[V_{d,e}]$.
\end{definition}
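For instance, for $d=2$ and $e=1$ the action is
$\zeta\cdot(u,v)=(-u,-v)$, so $A_{[0]}=\mathbb{C}[u^2,uv,v^2]$ is spanned by the
monomials of even degree and $A_{[1]}$ by those of odd degree; here
$V_{2,1}$ is the quadric cone $\lbrace xy=z^2\rbrace\subset\mathbb{C}^3$.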
It is clear from the definition that $A_{[i]}$ is spanned by all
monomials $u^mv^n$ with $[m+en]=[i]\in\mathbb{Z}_d$. Clearly invariant vector
fields are of the form $f\partial/\partial u + g\partial/\partial v$
with $f\in A_{[1]}$ and $g\in A_{[e]}$. Moreover we have the following
easy lemma:
\begin{lemma} \label{lemhomog} Let $f: \mathbb{C}^2 \rightarrow \mathbb{C}$ be a
regular function. Then the following are equivalent:
\begin{enumerate}
\item $f$ is $\mathbb{Z}_d$-homogeneous,
\item $f$ is $\mathbb{Z}_d$-preserved with $f(0,0)=0$,
\item $f^{-1}(0)$ is $\mathbb{Z}_d$-invariant.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) implies (2) since if $f$ is constant on a curve then so is
$\zeta^*f$, and $f(0,0)=0$ follows directly from
the homogeneity. The fiber $f^{-1}(0)$ contains the $\mathbb{Z}_d$-fixed
point $(0,0)$, thus (3) follows from (2). If the zero fibers of $f$
and $\zeta^*f$ coincide then we have $\zeta^*f(u,v) = a\cdot
f(u,v)$ for some $a\in\mathbb{C}^*$. From $f(u,v)=(\zeta^d)^{*}f(u,v)=a^d f(u,v)$
we see that $a$ is a $d$-th root of unity and thus (3) implies (1).
\end{proof}
The following lemma is the crucial step in the classification of
invariant complete algebraic vector fields and hence of
complete algebraic vector fields on the toric variety
$V_{d,e}$. Recall that a rational first integral of a vector field is a rational function whose fibers are invariant under the flow of the vector field.
\begin{lemma}\label{lemhomfib}
Let $\partial$ be a $\mathbb{Z}_d$-invariant complete algebraic vector field
on $\mathbb{C}^2$. Then $\partial$ preserves either a $\mathbb{Z}_d$-homogeneous
fibration $f: \mathbb{C}^2 \rightarrow \mathbb{C}$ with general fibers $\mathbb{C}$ or
$\mathbb{C}^*$, or $\partial$ has a reduced rational first integral $g:\mathbb{C}^2
\dashrightarrow \mathbb{C}$.
\end{lemma}
\begin{proof}
By \cite{Br3} there is a fibration $f:\mathbb{C}^2\rightarrow\mathbb{C}$ with $\mathbb{C}$ or
$\mathbb{C}^*$ fibers which is preserved by the flow $\varphi^t$ of
$\partial$. We may assume that $f(0,0)=0$. If $f$ is
$\mathbb{Z}_d$-homogeneous then we are done. If $f$ is not
$\mathbb{Z}_d$-homogeneous then we construct a rational first integral. The
map $\varphi^t$ acts by multiplication by some $a_t$ on the set of
fibers of $f$ parametrized by $\mathbb{C}$, so we have
$f(\varphi^t(u,v))=a_tf(u,v)$ (indeed $(0,0)$ is a fixed point of
$\varphi^t$). Since $\partial$ is invariant, the same holds true for
$g(u,v)=f(\zeta\cdot(u,v))$, and hence the rational map $f/g$ is a
rational first integral for $\partial$. By Stein factorization,
$\partial$ has a reduced first integral. Recall that every rational
function $\mathbb{C}^2\dashrightarrow \mathbb{P}^1$ can be decomposed as
$F\circ\tilde f$, where $\tilde f:\mathbb{C}^2\dashrightarrow \mathbb{P}^1$ and
$F:\mathbb{P}^1\rightarrow \mathbb{P}^1$, such that the regular fibers of $\tilde f$
are connected, or equivalently $\tilde f$ is
reduced. This factorization is called the Stein factorization.
\end{proof}
The next step is the classification of $\mathbb{Z}_d$-homogeneous
fibrations with general fibers $\mathbb{C}$ or $\mathbb{C}^*$ and of rational first
integrals of invariant vector fields. The classification will be done
up to equivariant automorphisms of $\mathbb{C}^2$, which leads to a
classification of the vector fields on $V_{d,e}$ up to automorphisms of
$V_{d,e}$, since equivariant automorphisms clearly project down to
automorphisms of the quotient. Equivariant automorphisms of $\mathbb{C}^2$ are
given by invertible maps $(u,v)\mapsto (p(u,v),q(u,v))$ with $p\in
A_{[1]}$ and $q\in A_{[e]}$.
First we establish an equivariant version of the Abhyankar-Moh
Theorem. We provide a proof using the classical version of the
theorem; see \cite{ArZa} for a different proof.
\begin{lemma} \label{lemabhy} %
Let $\mathbb{C}\cong L\subset \mathbb{C}^2$ be a line which is invariant under the
group action. Then there is an equivariant automorphism of $\mathbb{C}^2$
mapping $L$ to $\lbrace u =0\rbrace$ or $\lbrace
v=0\rbrace$. Moreover a cross of two invariant lines can be mapped
to $\lbrace uv=0\rbrace$.
\end{lemma}
\begin{proof}
By the classical Abhyankar-Moh Theorem we know that $L$ is given by
a polynomial $p$ which is a component of an automorphism of $\mathbb{C}^2$.
In order to find the other component of the automorphism we have to
find an invariant section of the trivial line bundle given by
$p$. We start with an arbitrary trivialization and get an invariant
section taking the average over images of the zero section by the
group action. Each image is another section because the action sends
fibers of $p$ to fibers of $p$ since the zero fiber is invariant. We
denote the polynomial giving this invariant section by $q$. The map
given by $(p,q)$ is an automorphism of $\mathbb{C}^2$ since it is the
composition of the trivialization we started with and the map
$(u,v)\mapsto (u,v -s(u))$ where $s$ is the invariant
section. Because the zero sets of $p$ and $q$ are invariant they are
$\mathbb{Z}_d$-homogeneous by Lemma \ref{lemhomog} and since they are the
two components of an automorphism their homogeneity degrees coincide
with $[1]$ and $[e]$ so either $(p,q)$ or $(q,p)$ is an equivariant
automorphism and the claim follows. The second statement is trivial
since there we already have an invariant section by assumption.
\end{proof}
We get the following corollary as an immediate consequence, see also
\cite{FlKaZa}.
\begin{corollary} \label{c-fiber}
Let $f:\mathbb{C}^2\rightarrow\mathbb{C}$ be a $\mathbb{Z}_d$-homogeneous fibration with $\mathbb{C}$ fibers and $f(0,0)=0$. Then, up to equivariant automorphism of
$\mathbb{C}^2$, we have $f(u,v)=u$ or $f(u,v)=v$.
\end{corollary}
For the classification of $\mathbb{Z}_d$-homogeneous fibrations with $\mathbb{C}^*$
fibers we first state the non-equivariant version used in \cite{Br3};
see also \cite{Su}.
\begin{lemma}\label{lemcstarfib}
Let $f:\mathbb{C}^2\rightarrow\mathbb{C}$ be a fibration with $\mathbb{C}^*$ fibers. Then
$f$ has exactly one special fiber (say $f^{-1}(0)$), which is
isomorphic to $\mathbb{C}\cup\mathbb{C}^*$ or to $\lbrace xy=0\rbrace$, and up to
automorphism of $\mathbb{C}^2$ we have $f(x,y)=x^m(x^ly + p(x))^n$ or
$f(x,y)=x^my^n$ for coprime $m,n\in\mathbb{N}$ and a polynomial $p$ with $\deg p<l$, $l\geq1$, and
$p(0)\neq 0$.
\end{lemma}
The equivariant version of this lemma is given by the two following
lemmas.
\begin{lemma} \label{cstar-fiber-1}
Let $f:\mathbb{C}^2\rightarrow \mathbb{C}$ be a $\mathbb{Z}_d$-homogeneous fibration with
$\mathbb{C}^*$ fibers and $f^{-1}(0)\cong\mathbb{C}\cup\mathbb{C}^*$ then there are coprime
$m,n\in\mathbb{N}$ and an invariant polynomial $p$ with $\deg p <l\geq1$ and
$p(0)\neq 0$ such that up to equivariant automorphism
$f(u,v)=u^m(u^lv + p(u))^n$ with $[l+e]=[0]$ or
$f(u,v)=v^m(v^lu+p(v))^n$ with $[1+le]=[0]$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lemcstarfib} we know that there exists a not necessarily
equivariant automorphism $(x(u,v),y(u,v))$ such that $f(x,y)$ is as
in Lemma~\ref{lemcstarfib}. Clearly, the curve $\mathbb{C}\cong C \subset
f^{-1}(0)$ is invariant under the group action since it is the only
fiber component isomorphic to $\mathbb{C}$. By Lemma \ref{lemabhy} we may
assume that $C = \lbrace u=0\rbrace$ or $C=\lbrace v=0\rbrace$. In
the first case this implies that, up to equivariant automorphism,
$x(u,v)=au$ and $y(u,v)=bv + q(u)$ for some $a,b\in\mathbb{C}^*$ and
$q\in\mathbb{C}[u]$ and hence $f$ is of the form $(au)^m((au)^l(bv+q(u)) +
p(u))^n$ with $\deg p<l$. Since $f$ is $\mathbb{Z}_d$-homogeneous we have
$q\in A_{[e]}$ and $p\in A_{[l+e]}$ hence the map $(x(u,v),y(u,v))$
was equivariant after all and $f$ has the desired standard form up
to equivariant automorphism. The equality $[l+e]=[0]$ follows from
the fact that $p(0)\neq 0$. The case $C=\lbrace v=0\rbrace$ leads
similarly to the second possibility.
\end{proof}
\begin{lemma} \label{cstar-fiber-2} Let $f:\mathbb{C}^2\rightarrow \mathbb{C}$ be a
$\mathbb{Z}_d$-homogeneous fibration with $\mathbb{C}^*$ fibers and
$f^{-1}(0)\cong\lbrace uv=0\rbrace$ then there are coprime
$m,n\in\mathbb{N}$ such that $f(u,v)=u^mv^n$ up to equivariant
automorphism. If $d$ is divisible by 4 (say $d=4d'$) and $e=2d'+1$
then $f$ can also be of the form $f(u,v)=u^2-v^2$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemcstarfib} there is an automorphism
$(x(u,v),y(u,v))$ such that $f=x^my^n$. Clearly, the zero fiber $\lbrace
x(u,v)=0\rbrace \cup \lbrace y(u,v)=0 \rbrace$ is invariant under the
group action. If the two lines are invariant themselves, then by
Lemma \ref{lemabhy} we may assume that they coincide with $\lbrace
uv=0\rbrace$, hence $x(u,v)=au$ and $y(u,v)=bv$ for
some $a,b\in\mathbb{C}^*$, and we are done. If the two lines are interchanged
by the group action, then $d=2d_0$ is even and
$$ x(\zeta\cdot(u,v))= ay(u,v) \quad \mathrm{and}\quad
y(\zeta\cdot(u,v))=bx(u,v)$$ %
for some $a,b \in \mathbb{C}^*$. After rescaling we may assume that
$a=b$. The fibration $f=x^my^n$ is $\mathbb{Z}_d$-homogeneous, so
$x^my^n=\mathrm{const} \cdot y^mx^n$ and hence $m=n=1$. Moreover, we have
$x(u,v)=x(\zeta^d\cdot(u,v))=a^dx(u,v)$ and hence $a=\zeta^i$ for
some $i$. We see that the maps $P_\pm(u,v)=x(u,v)\pm y(u,v)$ are
$\mathbb{Z}_d$-homogeneous, and since they are the components of an
automorphism of $\mathbb{C}^2$ we may assume that the functions $P_\pm$
coincide with the functions $u$ and $v$. Altogether we have
$4f(u,v)=(u+v)(u-v)=u^2-v^2$, which is $\mathbb{Z}_d$-homogeneous
only if $2e=2$ or $2e=2d_0+2$. In the first case $(x(u,v),y(u,v))$ is
already equivariant, so only the latter case remains. Since $d$ is even and thus $e=d_0+1$ is odd, we have that $d_0=2d'$ is even.
\end{proof}
\begin{lemma}\label{lemfirstint}
Let $f: \mathbb{C}^2 \dashrightarrow \mathbb{P}^1$ be a reduced rational first
integral of an invariant complete vector field $\partial$ on $\mathbb{C}^2$
then up to equivariant automorphism of $\mathbb{C}^2$ and M\"obius transform
of $\mathbb{P}^1$ the rational function $f$ is a $\mathbb{Z}_d$-homogeneous
polynomial with $\mathbb{C}$ or $\mathbb{C}^*$ fibers or there are coprime
$m,n\in\mathbb{N}$ such that $f(u,v)=u^m/v^n$.
\end{lemma}
\begin{proof}
A general fiber of $f$ is an orbit closure of the flow of
$\partial$. Since $\partial$ is invariant, the set of orbits is preserved
by the $\mathbb{Z}_d$-action; hence general fibers of $f$ are mapped to
general fibers of $f$ by the action, and the action induces a
$\mathbb{Z}_d$-action on the base $\mathbb{P}^1$. Altogether this means that $f$ is
$\mathbb{Z}_d$-preserved. If $f$ is not surjective then $f$ can be seen as a
polynomial, which is $\mathbb{Z}_d$-homogeneous by Lemma \ref{lemhomog} and
has general fibers isomorphic to $\mathbb{C}$ or $\mathbb{C}^*$ since they are orbit
closures.
Now consider the case where $f$ is surjective. As mentioned in \cite{Br3}
and \cite{Su} such a first integral is always of the form
$f=x^m/y^n$ for some automorphism $(x(u,v),y(u,v))$. The
$\mathbb{Z}_d$-action on the base $\mathbb{P}^1$ is either trivial (and hence $f$ is
$\mathbb{Z}_d$-invariant) or it has exactly two fixed points (so two fibers
of $f$ are $\mathbb{Z}_d$-invariant). In both cases there are two invariant
fibers intersecting transversally (say the $0$- and the
$\infty$-fiber). Indeed if $m=n=1$ all fibers intersect transversely
and if $m\neq n$ all but one fiber intersect pairwise tangentially
so this fiber is clearly invariant and it intersects all other
fibers transversally. By Lemma \ref{lemabhy} we may assume that
these two fibers coincide with $\lbrace u = 0\rbrace$ and $\lbrace
v = 0\rbrace$, and hence $x(u,v)=au$ and $y(u,v)=bv$ or vice versa.
\end{proof}
\begin{theorem} \label{thmlist} Let $\partial$ be a complete algebraic
vector field on $\mathbb{C}^2$ which is invariant under the group action given
by $\zeta\cdot(u,v) = (\zeta u,\zeta^e v)$, where $\zeta$ is a
primitive $d$-th root of unity and $0<e<d$ is coprime to $d$. Then
$\partial$ has, up to equivariant automorphism of $\mathbb{C}^2$, one of the
forms in the following list.
\begin{enumerate}\itemsep8pt
\item [] \item \begin{enumerate} \itemsep8pt
\item $\displaystyle{ \partial=au\frac {\partial}{\partial u} +
(A(u^d)v+B(u^e))\frac{\partial}{\partial v}}$
\item $\displaystyle{ \partial=av\frac {\partial}{\partial v} + (A(v^d)u+B(v^{e'}))\frac{\partial}{\partial u}}$
\end{enumerate}
\item[]with $a\in\mathbb{C}$, $0<e'<d$ such that $[ee']=[1] \in \mathbb{Z}_d$
and $A,B\in \mathbb{C}[t]$.
\item \begin{enumerate}\itemsep8pt
\item $\displaystyle{\partial=av\frac{\partial}{\partial v} +
A(u^mv^n)\left[nu\frac{\partial}{\partial u} -
mv\frac{\partial}{\partial v}\right]}$
\item If $d=4d'$ and $e=2d'+1$ then we also have
\item[] $\displaystyle{\partial=a(u+v)\left(\frac{\partial}{\partial u}
+ \frac{\partial}{\partial v}\right) +
A((u^2-v^2)^{2d'})\left[u\frac{\partial}{\partial v} +
v\frac{\partial}{\partial u}\right]}$
\end{enumerate}
\item[]with $a\in\mathbb{C}$, $m,n\in\mathbb{N}$ with $[m+en]=[0]$ and
$A\in\mathbb{C}[t]$ .
\item There are $a\in\mathbb{C}$, $m,n,l\in \mathbb{N}$ with $[m]=[0]$, $p\in
A_{[0]}$, $\deg p< l$, $p(0)\neq 0$ and $A\in\mathbb{C}[t]$ with the
property that \[A(x^m(x^ly+p(x))^n)\cdot(mp(x)+nxp'(x))-ap(x) \in
x^l\cdot\mathbb{C}[x,y]\] such that
\item[] \begin{enumerate} \itemsep8pt
\item $\displaystyle{ \partial= a\left(v+\frac{p(u)}{u^l}\right)
\frac{\partial}{\partial v}+}$
\item[] \quad $\displaystyle{A(u^m(u^lv +
p(u))^n)\cdot\left[nu\frac{\partial}{\partial u} - \left((m+nl)v
+ \frac{mp(u)+nup'(u)}{u^l}\right)\frac{\partial}{\partial
v}\right]}$
\item[] with $[l+e]=[0]$.
\item $\displaystyle{ \partial= a\left(u+\frac{p(v)}{v^l}\right)
\frac{\partial}{\partial u} + }$
\item[] \quad $\displaystyle{A(v^m(v^lu +
p(v))^n)\cdot\left[nv\frac{\partial}{\partial v} - \left((m+nl)u
+ \frac{mp(v)+nvp'(v)}{v^l}\right)\frac{\partial}{\partial
u}\right]}$
\item[] with $[1+le]=[0]$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
By Lemma \ref{lemhomfib} we know that the flow of $\partial$ preserves the
fibers of a $\mathbb{Z}_d$-homogeneous $\mathbb{C}$- or $\mathbb{C}^*$-polynomial (which are
described in Corollary~\ref{c-fiber} and Lemmas~\ref{cstar-fiber-1}
and \ref{cstar-fiber-2}) or it has a rational first integral (which
may be assumed to be of the form $u^m/v^n$ by Lemma
\ref{lemfirstint}). Once we have a polynomial that is preserved by
the flow, we can check in Proposition 2 of \cite{Br3} what the vector
field looks like. Since the vector fields need to be
$\mathbb{Z}_d$-invariant, some extra conditions are required. In the case of
the rational first integral we have $\partial=nu\partial/\partial u +
mv\partial/\partial v$, which is already in the list.
\end{proof}
\section{The strong algebraic density property for affine toric
surfaces}
First we recall a variant of the ADP which was introduced in
\cite{KaKu14}.
\begin{definition}
Let $\Gamma$ be a group acting on a smooth affine algebraic variety
$X$. Then $X$ has the $\Gamma$-ADP if the Lie algebra of all
$\Gamma$-invariant algebraic vector fields coincides with the Lie
algebra generated by all $\Gamma$-invariant complete algebraic
vector fields.
\end{definition}
As in the previous section, let $d,e\in\mathbb{Z}$ be two coprime numbers with
$0<e<d$ and let $\zeta$ be a primitive $d$-th root of unity. Consider
again the $\mathbb{Z}_d$-action on $\mathbb{C}^2$ given by $\zeta\cdot(u,v)=(\zeta u,
\zeta^e v)$. Moreover, let $e'$ be the unique integer with $0<e'<d$ and
$ee'\equiv 1 \mod d$. It is clear that:
\begin{proposition} \label{surf-ADP}
$V_{d,e}$ has the strong ADP if and only if $\mathbb{C}^2$ has the
$\mathbb{Z}_d$-ADP.
\end{proposition}
Let us introduce the following subsets of $\ZZ^2$:
\begin{eqnarray*}
I & = & \lbrace (i,j)\in\ZZ_{\geq 0}^2: \ i+ej = 0 \ \mathrm{mod} \ d \rbrace,\\
J & = & \lbrace (i,j)\in I\setminus\lbrace(0,0)\rbrace: \ i<e \ \mathrm{and} \ j<e' \rbrace \subset I.
\end{eqnarray*}
\begin{lemma}
$|J| \leq1 \Leftrightarrow e \ \vert \ d+1$.
\end{lemma}
\begin{proof}
If $e=1$ then also $e'=1$ and thus $J=\emptyset$. If $e,e'>1$ then $
|J| \geq 1$ since $(e-1,e'-1)\in J$. Assume $ee'=d+1$, $i<e$ and
$j<e'$, then we have $i+je<e+d<2d$ and the equality $[i+je]=[0] \in
\mathbb{Z}_d$ implies $i+je=d$. Similarly we get $ie'+j=d$ and thus there is
a unique solution for $(i,j)$ and hence $|J| =1$. If $ee'\geq 2d+1$
then we get another solution of $[i+je]=[0]$ in $J$. Indeed, choose
$l\in\mathbb{N}$ such that $0<d-le<e$ then $(d-le,l)\neq(e-1,e'-1)$ lies in
$J$, since $le<d$ implies $0<l<e'-1$.
\end{proof}
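As an example, take $d=3$ and $e=2$, so that $e'=2$. The condition
$(i,j)\in J$ requires $0\leq i,j\leq 1$ and $[i+2j]=[0]\in\mathbb{Z}_3$, which
leaves only $(i,j)=(1,1)=(e-1,e'-1)$; hence $|J|=1$, in accordance with
$e \mid d+1$ and $ee'=4=d+1$.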
Let us introduce the following notation:
\begin{eqnarray*}
\mathrm{VF}^{(i,j)} & = & \left\lbrace u^i v^j \left(au\frac{\partial}{\partial u} + bv\frac{\partial}{\partial v}\right): \ a,b\in \mathbb{C} \right\rbrace,\\
\mathrm{CVF}^{(i,j)} & = & \left\lbrace au^i v^j \left(ju\frac{\partial}{\partial u} - iv\frac{\partial}{\partial v}\right): \ a\in \mathbb{C} \right\rbrace
\subset\mathrm{VF}^{(i,j)}, \\
\mathrm{LND}_u^k &=& \left\lbrace av^{ke'}\frac{\partial}{\partial u}: \ a\in\mathbb{C}\right\rbrace,\\
\mathrm{LND}_v^k& = &\left\lbrace au^{ke}\frac{\partial}{\partial v}: \ a\in\mathbb{C}\right\rbrace.
\end{eqnarray*}
Remark that $\mathrm{CVF}^{(i,j)}$ corresponds to the subset of
complete vector fields in $\mathrm{VF}^{(i,j)}$ by
Corollary~\ref{cor-gl-int}. We have the decomposition of
$\mathbb{Z}_d$-invariant vector fields in homogeneous vector fields given by:
\[
\mathrm{VF}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2) = \bigoplus_{(i,j)\in
I}\mathrm{VF}^{(i,j)} \oplus
\bigoplus_{k\in\mathbb{N}}\left(\mathrm{LND}_u^k \oplus
\mathrm{LND}_v^k\right).
\]
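To see why $\mathrm{CVF}^{(i,j)}$ consists precisely of the complete
fields in $\mathrm{VF}^{(i,j)}$ for $(i,j)\in I\setminus\{(0,0)\}$, note
that in the character notation of the previous sections we have
$u^iv^j(au\frac{\partial}{\partial u}+bv\frac{\partial}{\partial
v})=\partial_{e,p}$ with $e=(i,j)$ and $p=(a,b)$; by
Corollary~\ref{cor-gl-int} such a field is complete if and only if
$\langle e,p\rangle=ia+jb=0$, i.e., if and only if $(a,b)$ is
proportional to $(j,-i)$.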
We define the following subspace $S$ of
$\mathrm{VF}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$:
\[
S = \bigoplus_{(i,j)\in J}\mathrm{CVF}^{(i,j)} \oplus \bigoplus_{(i,j)\in I\setminus J}\mathrm{VF}^{(i,j)} \oplus
\bigoplus_{k\in\mathbb{N}}\left(\mathrm{LND}_u^k \oplus \mathrm{LND}_v^k\right)\,.
\]
The following is our main result in this section.
\begin{theorem} \label{thminvliealg} For the Lie algebra
$\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$ generated by all
$\mathbb{Z}_d$-invariant complete algebraic vector fields on $\mathbb{C}^2$ we have:
\[
\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)=
\begin{cases}
S & e = e'\\
S \oplus \langle \partial \rangle & e \neq e'
\end{cases}
\]
for any $\partial\in \mathrm{VF}^{(e-1,e'-1)}\setminus
\mathrm{CVF}^{(e-1,e'-1)}$. In particular the codimension of the
inclusion $\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2) \subseteq
\mathrm{VF}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$ is $|J|$ if $e=e'$ and
$|J| -1$ otherwise.
\end{theorem}
Remark that $\dim_\CC \mathrm{CVF}^{(i,j)}=1$ and
$\dim_\CC\mathrm{VF}^{(i,j)}=2$ as vector spaces. Hence, in the case
where $e\neq e'$ we have that $\mathrm{VF}^{(e-1,e'-1)}\subseteq
\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$. We postpone the proof of
this theorem to the end of this section.
The theorem immediately shows in which cases $\mathbb{C}^2$ has the $\mathbb{Z}_d$-ADP or,
equivalently, $V_{d,e}$ has the strong ADP. It also allows us to
determine, in each particular case, the values of $\ell$ from
Definition~\ref{ADP} for which $I^\ell\VFalg(X,\Xsing) \subseteq
\Liealg(X,\Xsing)$.
\begin{corollary} \label{Z-ADP} Let $V_{d,e}$ be a toric surface.
\begin{enumerate}[$(i)$]
\item $V_{d,e}$ has the strong ADP if and only if $e\
\vert \ d+1$ and $e^2 \neq d + 1$.
\item $V_{d,e}$ has the ADP and an upper bound for the minimal
$\ell$ such that $I^\ell\VFalg(X,\Xsing) \subseteq
\Liealg(X,\Xsing)$ is $e+e'-2$.
\end{enumerate}
\end{corollary}
The next lemma shows what happens when we take the Lie bracket of
two complete homogeneous vector fields.
\begin{lemma} \label{lembrackets} %
Let $\partial_1\in \mathrm{CVF}^{(i,j)}$, $\partial_2 \in
\mathrm{CVF}^{(i',j')}$, $\partial_3\in\mathrm{LND}_u^k$ and $\partial_4 \in
\mathrm{LND}_v^{k'}$, then
\begin{enumerate}[$(i)$]
\item $[\partial_1,\partial_2] \in \mathrm{CVF}^{(i+i',j+j')}$,
\item $[\partial_1,\partial_3] \in \mathrm{VF}^{(i-1,j+ke')}\setminus \mathrm{CVF}^{(i-1,j+ke')}$,
\item $[\partial_1,\partial_4] \in \mathrm{VF}^{(i+k'e,j-1)}\setminus
\mathrm{CVF}^{(i+k'e,j-1)}$,
\item $[\partial_3,\partial_4] \in \mathrm{VF}^{(k'e-1,ke'-1)}$. Furthermore,
$[\partial_3,\partial_4] \in \mathrm{CVF}^{(k'e-1,ke'-1)}$ if and only if
$ek'= e'k$.
\end{enumerate}
\end{lemma}
\begin{proof}
All four statements follow by direct computation using
Corollary~\ref{cor-gl-int} and Lemma~\ref{commutator}.
\end{proof}
The next two lemmas show
$\mathrm{Lie}(S)=\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$, each of
them showing one inclusion.
\begin{lemma}
$S\subset \mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$
\end{lemma}
\begin{proof}
Take $(i,j)\in I \setminus J$. For $(i,j)=(0,0)$ all elements of
$\mathrm{VF}^{(0,0)}$ are complete and invariant, hence belong to
$\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$; so assume
$(i,j)\neq(0,0)$. Then either $(i-e,j+1) \in I$ or
$(i+1,j-e')\in I$. In the first case pick $\partial\in
\mathrm{CVF}^{(i-e,j+1)}$ and $\delta\in\mathrm{LND}_v^1$; by Lemma
\ref{lembrackets} we have $[\partial,\delta]\in\mathrm{VF}^{(i,j)}\setminus
\mathrm{CVF}^{(i,j)}$ and thus $\mathrm{VF}^{(i,j)}\subset
\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$. The second case works
similarly.
\end{proof}
\begin{lemma}
$\lbrace${invariant complete algebraic vector
fields}$\rbrace \subset \mathrm{Lie}(S)$.
\end{lemma}
\begin{proof}
Let $L$ be the set of vector fields appearing in the list of
Theorem~\ref{thmlist}. We will first show that $L\subset S$. Let
$\partial \in L$ and let $\partial = \sum \partial_{i,j}$ be its
decomposition into homogeneous parts with respect to the standard
grading on $\mathbb{C}^2$. We directly see that all homogeneous parts of the
vector fields (1) and (2a) are complete. For the vector
fields (2b) and (3) we claim that $\partial_{i,j}=0$ whenever
$(i,j)\in J$. Indeed, assume that $\partial_{i,j}\neq 0$ with $(i,j)
\neq (0,0)$ and $\partial_{i,j}$ is not an LND. Then in case (2b) we
have $e=e'=2d'+1$, $i+j\geq 4d'$ and $i\neq j$ since for every
monomial $\mathfrak{m}$ of the polynomial $A$ we have
$\deg_u\mathfrak{m}-\deg_v\mathfrak{m}$ is a multiple of 4. Hence,
either $i>e$ or $j>e'$. In case (3a) under the same assumptions we
have $i> m+nl-l\geq m\geq d> e$. Similarly, in case (3b) we have $j>
m+ln -l\geq m\geq d> e'$.
\medskip
In order to conclude the proof we only need to show that for a
vector field $\delta \in \mathrm{Lie}(S)$ and an equivariant
automorphism $\phi$ the vector field $\phi_*\delta \in
\mathrm{Lie}(S)$. By Lemma 4.10 in \cite{ArZa} $\phi$ is a
composition of equivariant Jonqui\`eres automorphisms or more
precisely it is a composition of linear equivariant automorphisms
and flow maps of the vector fields $u^{ke}\partial/\partial v$ and $
v^{ke'} \partial/\partial u$ (which are contained in $S$). First we
show that for any linear automorphism $\phi$ we have $\phi_*\delta
\in \mathrm{Lie}(S)$. For $e=1$ this statement is true for obvious
reasons, indeed here we already have
$\mathrm{Lie}(S)=\mathrm{Lie}_{\mathrm{alg}}^{\mathbb{Z}_d}(\mathbb{C}^2)$. For $e
\neq 1$ all equivariant linear automorphisms are of the form
$(u,v)\mapsto (au,bv)$ so they act by homothety on homogeneous
vector fields of $\mathrm{Lie}(S)$. Now, if $\phi^t$ is the flow of
the LND $\partial$ then $\phi^t_*\delta \in$ Lie($\partial,\delta$)
for all $t$, since the Taylor expansion of $\phi^t_*\delta$ gives
$\phi^t_*\delta = \delta + t [\partial, \delta] + \frac{1}{2} t^2
[\partial, [\partial, \delta]] + \ldots + \frac{1}{n!} t^n
[\partial, [\partial, \ldots, [\partial, \delta]]\ldots]$, which is a finite sum
since $\partial$ is an LND and hence its flow is algebraic in
$t$. Since $\partial\in S$ the claim follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thminvliealg}]
It remains to show that $\mathrm{Lie}(S)=S$ if $e=e'$ and
$\mathrm{Lie}(S)=S\oplus \langle \partial \rangle$ if $e\neq e'$ for
any $\partial\in \mathrm{VF}^{(e-1,e'-1)}\setminus
\mathrm{CVF}^{(e-1,e'-1)}$. Let $(i,j)\in J$; we need to show
that $\mathrm{VF}^{(i,j)}\nsubseteq\mathrm{Lie}(S)$ unless $e\neq
e'$ and $(i,j)=(e-1,e'-1)$. Assume
$\mathrm{VF}^{(i,j)}\subset\mathrm{Lie}(S)$. Then Lemma
\ref{lembrackets} implies the existence of
$\partial\in\mathrm{VF}^{(i,j)}\setminus \mathrm{CVF}^{(i,j)}$ such
that $\partial=[\partial_1,\partial_2]$ for some
$\partial_1\in\mathrm{LND}_u^1$ and $\partial_2\in\mathrm{LND}_v^1$,
which forces $e\neq e'$ and $(i,j)=(e-1,e'-1)$.
\end{proof}
\section{Implications of the algebraic density property for the
holomorphic automorphism group}
We start with the obvious holomorphic version of Definition~\ref{ADP}.
Let $X$ be a Stein space and let $\Xsing$ be its singular locus. We
also let $Y\subseteq X$ be a closed analytic subvariety of $X$
containing $\Xsing$ and let $I_\mathrm{hol}=I(Y)\subseteq \OO(X)$ be the ideal of
$Y$. Let $\VFhol(X,Y)$ be the $\OO(X)$-module of holomorphic vector
fields vanishing on $Y$, i.e., $\VFhol(X,Y)=\{\partial
\mid \partial(\OO(X))\subseteq I_\mathrm{hol}\}$. Let $\Liehol(X,Y)$ be the Lie
algebra generated by all the complete vector fields in $\VFhol(X,Y)$.
\begin{definition} %
We say that $X$ has the strong density property (DP) relative to $Y$
if $\Liehol(X,Y)$ is dense in $\VFhol(X,Y)$ in the compact-open
topology. Furthermore, we say that $X$ has the DP relative to $Y$ if
there exists $\ell\geq 0$ such that $I_\mathrm{hol}^\ell\VFhol(X,Y)$ is
contained in the closure of $\Liehol(X,Y)$. With this definition,
the DP relative to $Y$ with $\ell=0$ is just the strong DP relative
to $Y$.
\end{definition}
\begin{proposition} \label{GAGA}
Let $X$ be an affine algebraic variety and let $Y$ be a subvariety
containing $\Xsing$. Then the ADP for $X$ relative to $Y$ implies
the DP for $X$ relative to $Y$.
\end{proposition}
\begin{proof}
The proposition follows from the fact that $I^\ell\VFalg(X,Y)$ is
dense in $I_\mathrm{hol}^\ell\VFhol(X,Y)$. Indeed, by Cartan's Theorem A, there
are finitely many global sections $s_1,\ldots, s_N$ of
$I^\ell\VFalg(X,Y)$ that generate the stalk at every point. A
standard application of Cartan's Theorem B implies that any
holomorphic section $s_h\in I_\mathrm{hol}^\ell\VFhol(X,Y)$ over an
$\OO(X)$-convex compact set $K\subseteq X$ can be written as
$s_h=f_1s_1+\ldots+f_Ns_N$ with $f_i\in \OO(K)$. By approximating
the functions $f_i$ by global functions in $\CC[X]$, we conclude that
$I^\ell\VFalg(X,Y)$ is dense in $I_\mathrm{hol}^\ell\VFhol(X,Y)$.
\end{proof}
\begin{theorem}[\bf Relative Anders\'en-Lempert
theorem] \label{AL-Theorem} %
Let $X$ be a Stein space with the DP relative to a
closed analytic subvariety $Y$ containing $\Xsing$. Let $\Omega$ be
an open subset of $X$. Suppose that $ \Phi : [0,1] \times \Omega \to
X$ is a $C^1$-smooth map such that
\begin{enumerate} [(i)]
\item $\Phi_t : \Omega \to X$ is holomorphic and injective for every
$ t\in [0,1]$,
\item $\Phi_0 : \Omega \to X$ is the natural embedding of $\Omega$
into $X$,
\item $\Phi_t (\Omega)$ is a Runge subset of $X$ for every $t\in
[0,1]$, and
\item \label{fixing} $\Phi_t (\Omega)$ fixes $Y$ up to order $\ell$, where $\ell$
  is such that $I_\mathrm{hol}^\ell\VFhol(X,Y)$ is contained in the closure of
  $\Liehol(X,Y)$.
\end{enumerate}
Then for each $\epsilon >0$ and every compact subset $K \subset
\Omega$ there is a continuous family $\alpha: [0, 1] \to
\Aut_{hol} (X)$ of holomorphic automorphisms of $X$ fixing $Y$
pointwise such that $$\alpha_0 = \mathrm{id} \quad {\rm and} \quad
\vert \alpha_t - \Phi_t \vert_K <\epsilon {\rm \ \ for\ every\ }
t \in [0,1].$$
\end{theorem}
Point $(iv)$ in the assumptions of the theorem means the following:
Consider the time dependent vector field
$V(x,t_0)=\left.\frac{d}{dt}\right|_{t=t_0}\Phi_t(\Phi_{t_0}^{-1}(x))$. The
isotopy $\Phi_t (\Omega)$ fixes $Y$ up to order $\ell$ if $V(x,t_0)$
is a section of $I_\mathrm{hol}^\ell\VFhol(X,Y)$ over $\Phi_{t_0}(\Omega)$ for
all $t_0$.
\begin{proof}[Sketch of proof]
The map $\Phi_{t_0}$ is the time-$t_0$ map of the time dependent
vector field $V(x,t)$. It can be approximated by dividing the time interval
into small pieces and integrating the time independent vector fields
over each piece. By assumption, each of those time independent
fields is a section in
$I_\mathrm{hol}^\ell\VFhol(X,Y)(\Phi_{t_0}(\Omega))$. Since the sheaf
$I_\mathrm{hol}^\ell\VFhol(X,Y)$ is coherent, a similar use of Theorems A and B
of Cartan as in the proof of Proposition~\ref{GAGA} leads to the
fact that these time independent vector fields in the Runge domain
$\Phi_{t_0}(\Omega)$ can be approximated by global vector fields in
$I_\mathrm{hol}^\ell\VFhol(X,Y)$. By assumption, these vector fields can be
approximated by Lie combinations of complete vector fields vanishing
in $Y$ (not necessarily in $I_\mathrm{hol}^\ell\VFhol(X,Y)$). Now the standard
use of Euler's method gives the desired conclusion.
\end{proof}
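In symbols, the approximation scheme in this sketch can be written as
$$\Phi_{t} \approx \phi^{t/N}_{V(\cdot,\, t_{N-1})} \circ \dots \circ \phi^{t/N}_{V(\cdot,\, t_{0})},
\qquad t_j = \frac{jt}{N},$$
where $\phi^{s}_{W}$ denotes the time-$s$ flow of the frozen (time independent) field $W$. This display is only a schematic restatement of the argument above, with the subdivision $t_0 < \dots < t_{N-1}$ being our choice of notation; each factor $\phi^{t/N}_{V(\cdot,\, t_j)}$ is in turn approximated by compositions of flows of complete vector fields furnished by the DP.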
\begin{remark}
If $Y\cap \Phi_t(\Omega)=\emptyset$ for all $t\in [0,1]$, then
condition~\eqref{fixing} in Theorem~\ref{AL-Theorem} is trivially
satisfied.
\end{remark}
\begin{corollary}
Any smooth point in an affine toric variety $X$ of dimension $n\geq
2$ different from the torus has an open neighborhood in the
Euclidean topology biholomorphic to $\CC^n$.
\end{corollary}
\begin{proof}
Let $x\in X$ be a smooth point. Take a Runge neighborhood $U$ of $x$ biholomorphic to
the unit ball sending $x$ to zero and let $\Phi_t$ be the map
$\left(1-\frac{t}{2}\right)z$ in the unit ball. Since $X$ has the DP
relative to $\Xsing$, Theorem~\ref{AL-Theorem} implies that these
contractions can be approximated by holomorphic automorphisms
$\alpha_t$ of $X$ (fixing $\Xsing$ pointwise). The automorphism
$\alpha_1$ has an attractive fixed point near $x$. The basin of
attraction of this point is biholomorphic to $\CC^n$
\cite{RoRu88}. Since the holomorphic automorphism group of $X$ is
transitive on $X\setminus \Xsing$, the claim follows.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}\label{sec1}
\label{sec:introduction}
Saccadic eye movement is one of the most significant features of the human visual system; it helps us to scan a scene with incredible celerity and robustness. During saccadic eye movements the eyes rapidly move from one point to another while interesting regions are detected simultaneously. Modelling and automatic detection of these salient regions, which essentially seek the attention of the human visual system, is currently a problem of considerable interest. It should be apparent that early detection of salient regions has numerous important applications. From scene understanding to rapid target detection, more or less every computer vision task can be aided by saliency prediction.
Previous approaches to saliency mapping can be divided into two groups: bottom up and top down. Bottom up approaches rely on processing of inherent features of the image (like contrast, edges, etc.) and do not depend on any a priori information, while top down, hierarchy-inspired methods assume that past experience and knowledge play an important role in driving attention.
\begin{figure}[t]
\begin{center}$
\begin{array}{ccc}
\includegraphics[width=1in]{forg.png}\hspace{-0.1em}
\label{fig:Stupendous} &
\includegraphics[width=1in]{fs1.png}\hspace{-0.1em} &\vspace{.6em}\\
\includegraphics[width=1in]{fs2.png}\hspace{-0.1em}&
\includegraphics[width=1in]{fd.png}\hspace{-0.1em}
\end{array}$
\end{center}
\vspace{-0.7em}
\caption{(a) Input image, (b) saliency map using the proposed method (with 3 feature channels), (c) saliency map using the proposed method (with 7 feature channels), (d) human fixation density map.}
\end{figure}
Bottom up saliency detection methods are generally termed low level methods as they mostly utilize low level features of the image. This group of methods can be further classified into biologically inspired approaches, purely computational techniques, or methods which lie more or less in the middle ground. Inspired by the “feature integration theory”~\cite{Treisman198097}, the saliency model proposed by Itti et al.~\cite{Ittikoch} is undoubtedly the most influential and significant work from the first category. This biologically inspired model segregates the input image into several simple feature maps, calculates center-surround differences for each map and finally combines them in a linear manner to produce the master saliency map. Bruce and Tsotsos~\cite{Bruce} proposed a saliency model which is based on self-information maximization of any region relative to its surrounding. Seo and Milanfar~\cite{Seo} also compared the center and surround using a “self-resemblance” measure. Murray et al.~\cite{Murray} proposed a method which utilizes a low level vision model of color perception. Zhang et al.~\cite{Zhang} computed saliency as a probability of target availability in a region. In 2005, Itti and Baldi~\cite{Itti2} proposed a “Bayesian surprise” based method which defines saliency as a measure of “surprise”. Completely or partially computational approaches are also common in the past literature.
In this paper we propose a Kalman filter based saliency detection mechanism which is motivated by two much-discussed biological phenomena: 1) deviation from visual expectation, or visual surprise~\cite{Expct1}~\cite{Expct2}~\cite{Itti2}~\cite{Itti3}, and 2) saccadic eye movement~\cite{Scd1}~\cite{Scd2}. Our algorithm shares the generalized notion of ‘surprise’ presented in previous work~\cite{Itti2}~\cite{Itti3} by Itti and Baldi, where they proposed a method for video saliency detection; however, using one of the most commonly used estimation techniques, the Kalman filter, we have developed a more compact and flexible method for calculating ‘surprise’ based saliency in a static image, and our model can be easily extended to the video case. Our algorithm has three main stages. First the color input image is split into low-level visual feature channels. Based on the choice of feature channels, we have implemented two variants of our model. The first one uses only three opponent color channels and the other one uses seven feature channels (as in the Itti-Koch model). Then for each channel an individual saliency map is generated using the Kalman filter algorithm, and lastly all of them are combined to produce the final map. To employ the Kalman filter model for saliency mapping we have assumed that the input image is a noise-corrupted measurement (perceived) signal and the true signal is an “expected image” which our visual system generates. So to produce the saliency map corresponding to a feature channel, we first estimate the expected counterpart of that specific channel using the Kalman filter. Then we simply calculate the difference between the expected and perceived feature channel and define saliency as the magnitude of the difference. The main contributions of our work are as follows:
1) A definition of ‘visual surprise’ in static image.
2) A bottom-up saliency model based on Kalman filter.
3) Evaluation of the proposed model on two popular benchmark data sets.
\section{\uppercase{Kalman filter based saliency detection}}
In this section we describe our model thoroughly along with the details of its implementation. The basic architecture of the proposed algorithm is shown in Fig.~\ref{model_f}.
\subsection{Definition of visual surprise in static image}
When humans (and many other animals) look at a scene, their eyes move and scan the scene in a rapid, jerk-like motion, and this is called ‘saccadic eye movement’. Between two rapid saccades, the eyes stop at fixation points. Naturally, these fixation points indicate the salient regions in a scene. Itti and Baldi~\cite{Itti2}~\cite{Itti3}, and also others~\cite{Expct1}, have already shown that our visual system encounters surprise in these regions, where our visual prediction (based on prior belief) differs most from the actual input. In their work Itti \& Baldi~\cite{Itti3} dealt with video data, where the pre and post frames can be treated as prior and posterior data respectively.
So visual surprise can be easily computed by calculating how much the posterior differs from the prior. But, unlike video data, there is no pre or post frame in a single static image. To tackle this problem, we move from one randomly selected block of an image to another, treating the former as the prior (${\omega}_{k}$) and the latter block as the posterior (${\omega}_{k+1}$). However, we don't compare the blocks directly to calculate visual surprise. Instead, we simulate a process where we learn an unknown relation between the prior space (${\omega}_{k}$) and its local statistics, and then using that relation we estimate the next region, or the posterior (${\omega}_{k+1}$). So the visual surprise of any particular pixel can be defined as:
\begin{equation}
\text{Surprise}=|\text{Estimated value}-\text{Actual input value}|
\end{equation}
We will term the entire estimated image the "visually expected image". In the next section, we present formal definitions of the "visually expected image" and its corresponding "saliency map".
When modelling visual surprise, we have also considered an intuitive hypothesis in our work: when we encounter more than a certain level of error in prediction, we become more ‘visually aware’ and rely less on our prior belief; the opposite occurs when the perceived input image continuously agrees with our expectation. In our model, 'visual awareness' decides to which extent our expectation gets modified by the posterior data. We can relate this intuitive idea to our daily experiences; for example, if a car suddenly comes in front of us when we are walking on a road, we will be surprised and for the rest of the path we will stay more cautious. We have also assumed that when we shift our eyes to a distant part of the scene, our reliance on prior belief reduces. So both distance and error in prediction control the trade-off between visual awareness and reliance on the prior. In Section 2.3 we describe how this scheme can be implemented by manipulating two design parameters of the Kalman filter.
\subsection{Definition of expected image and image saliency}
As we have already discussed, the heart of our algorithm is the generation of the “visually expected image”. To generate the visually expected image ${E}_{c}$ corresponding to the input image channel, say ${I}_{c}$, we have used a coarse construction policy, i.e., the expected image will be coarse in nature. To simulate this we have assumed that ${E}_{c}$ will consist of equally sized regions (in our case, blocks of dimension $m\times n$) and all pixels of any specific region/block will have the same value. So each uniform block can be defined by only one value. Let’s say ${M}_{k}$ is the value of all pixels in the $k$th block of ${E}_{c}$ and its input channel counterpart is the $k$th block ${\omega}_{k}$ (i.e., $\omega_{k}\subseteq I_{c}$). Now ${M}_{k}$ is defined as follows:
\begin{equation}
\begin{split}
{M}_{k}=\sum_{i=1}^{3}{e}_{ik}.\text{Local entropy}_{scale_{i}}+\sum_{i=1}^{2}{m}_{ik}.\text{Local mean}_{scale_{i}}+\sum_{i=1}^{2}{s}_{ik}.\text{Local standard deviation}_{scale_{i}}
\end{split}
\end{equation}
where ${M}_{k}$ is a linear combination of local statistics. Our first task is to estimate the values of the coefficients (i.e., ${e}_{i}$, ${m}_{i}$, ${s}_{i}$) of ${M}_{k}$, and using these we construct the coarse expected image ${E}_{c}$. For this estimation purpose we have used the Kalman filter. After constructing the expected image ${E}_{c}$ associated with ${I}_{c}$, the saliency map $S_{c}$ corresponding to ${I}_{c}$ can be computed as follows:
\begin{equation}
S_{c}=|I_{c}-E_{c}|
\end{equation}
After computing the saliency map for each channel, we combine them and apply a center bias to generate the final saliency map. However, before combining these channel saliency maps, we enhance them individually via ‘contrast stretching’ to make the salient regions more conspicuous.
Though we have defined ${M}_{k}$ as a linear combination of statistics, our model doesn't impose any restriction on the choice of ${M}_{k}$; ${M}_{k}$ could have been a simpler, or a more thoughtfully crafted nonlinear, combination of features.
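To make the channel-level computation concrete, the following minimal sketch (in Python) implements the definition $S_{c}=|I_{c}-E_{c}|$ together with the contrast stretching and linear channel combination described above; the function names and the stretching percentiles are our illustrative choices and not part of the model specification, and the center bias step is omitted.
\begin{verbatim}
import numpy as np

def contrast_stretch(s, lo_pct=1, hi_pct=99):
    # Linearly rescale so the chosen percentiles map to [0, 1];
    # values outside that range are clipped.
    lo, hi = np.percentile(s, [lo_pct, hi_pct])
    return np.clip((s - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def channel_saliency(I_c, E_c):
    # Saliency is the absolute deviation of the perceived
    # channel from the visually expected channel.
    return np.abs(I_c - E_c)

def combine_channels(channel_maps):
    # Enhance each channel map, then average into the master map.
    stretched = [contrast_stretch(s) for s in channel_maps]
    return np.mean(stretched, axis=0)
\end{verbatim}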
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]
{model2.png}
\end{center}
\caption{Flow diagram of the proposed model. This diagram represents the first variant of the proposed model, which initially splits the input image into three opponent color maps and uses them as feature channels for saliency detection. The second variant follows exactly the same framework, but uses seven feature channels (one intensity, two color and four orientation channels).}
\label{model_f}
\end{figure*}
\subsection{Kalman filter algorithm}
In our work we have assumed that the input image is the measurement signal and the image predicted by our visual system is the true signal. So using the Kalman filter algorithm~\cite{Kal1}~\cite{Kal2}~\cite{Kal3} we estimate the true signal using the block-to-block random traversal policy already described in the previous section. The Kalman filter is a very well-known estimator, so we just describe the main stages of the algorithm step by step below:
The state variable form of the states (in our case the coefficients) can be described as :
\begin{equation}
\texttt{x}_{k+1}=\texttt{F}_{k}\texttt{x}_{k}+\texttt{w}_{k}
\end{equation}
where,
$\texttt{x}_{k}=\begin{bmatrix}
\texttt{x}_{1k}&\texttt{x}_{2k}&\texttt{x}_{3k}&\texttt{x}_{4k}&\texttt{x}_{5k}&\texttt{x}_{6k}&\texttt{x}_{7k}
\end{bmatrix}^{T}$ is the state vector at $kth$ instant. The states are the coefficients of ${M}_{k}$ (i.e. $\texttt{x}_{1k}={e}_{1k},\texttt{x}_{2k}={e}_{2k},\texttt{x}_{3k}={e}_{3k},\texttt{x}_{4k}={m}_{1k},\texttt{x}_{5k}={m}_{2k},\texttt{x}_{6k}={s}_{1k},\texttt{x}_{7k}={s}_{2k}$).
$\texttt{F}_{k}={I}_{7}$ (the identity matrix of size 7) is the state transition matrix, and
$\texttt{w}_{k}$ is the process noise (zero mean, white Gaussian).
The observation signal can be represented using the following linear equation :
\begin{equation}
\texttt{z}_{k}=\texttt{H}_{k}\texttt{x}_{k}+\texttt{v}_{k}
\label{eq_m}
\end{equation}
where
$\texttt{z}_{k}$ is the measurement vector at the $k$th instant and $\texttt{v}_{k}$ is the measurement noise (zero mean, white Gaussian).
$\texttt{H}_{k}$ is a vector which defines the relation between the state and measurement vectors. As our model treats the original image as the measurement signal and any expected block can be represented by only a single value, we update the coefficients of $M_{k}$ using the mean value of all elements/pixels which belong to $\omega_{k}$, i.e., $\left \langle \omega_{k} \right \rangle$. So equation \ref{eq_m} can be rewritten as follows:
\begin{equation}
\left \langle \texttt{z}_{k} \right \rangle=\texttt{H}_{k}\texttt{x}_{k}+\texttt{v}_{k}.
\end{equation}
where $\texttt{z}_{k}= \omega_{k}$ and $\texttt{H}_{k}=\begin{bmatrix}
\texttt{h}_{1k}&\texttt{h}_{2k}&\texttt{h}_{3k}&\texttt{h}_{4k}&\texttt{h}_{5k}&\texttt{h}_{6k}&\texttt{h}_{7k}
\end{bmatrix}$ is a $1\times7$ dimensional vector which contains the local statistics (e.g., $\texttt{h}_{1k}=\text{Local entropy}_{scale_{1}}$).
Now the 7-state Kalman filter can be expressed using the following equations:
\begin{equation}
\hat{{\texttt{x}^{-}}_{k+1}}=\texttt{F}_{k}\hat{\texttt{x}_{k}}
\end{equation}
\begin{equation}
\texttt{P}^{-}_{k+1}=\texttt{F}_{k}\texttt{P}_{k}\texttt{F}^{T} _{k}+\texttt{Q}_{k}
\end{equation}
\begin{equation}
\texttt{K}_{k}=\texttt{P}^{-}_{k}\texttt{H}^{T}_{k}(\texttt{H}_{k}\texttt{P}^{-}_{k}\texttt{H}^{T}_{k}+\texttt{R}_{k} )^{-1}
\end{equation}
\begin{equation}
\hat{{\texttt{x}}_{k}}=\hat{{\texttt{x}^{-}}_{k}}+\texttt{K}_{k}(\texttt{z}_{k}-\texttt{H}_{k}\hat{{\texttt{x}^{-}}_{k}} )
\end{equation}
\begin{equation}
\texttt{P}_{k}=(\texttt{I}-\texttt{K}_{k}\texttt{H}_{k})\texttt{P}^{-}_{k}
\end{equation}
where $\texttt{Q}_{k}$ and $\texttt{R}_{k}$ are the process and measurement noise covariances corresponding to $\omega_{k}$, $\texttt{K}_{k}$ is the Kalman gain, and $\texttt{P}_{k}$ is the error covariance matrix. In our case the measurement update equation of the state vector has been slightly adjusted as shown below:
\begin{equation}
\hat{{\texttt{x}}_{k}}=\hat{{\texttt{x}^{-}}_{k}}+\texttt{K}_{k}(\left \langle \texttt{z}_{k} \right \rangle-\texttt{H}_{k}\hat{{\texttt{x}^{-}}_{k}} )
\end{equation}
where $\texttt{z}_{k}={\omega}_{k}$.
Now that we have presented the key equations of the Kalman filter, it is easy to see how we can simulate the method described in the previous section. The process noise covariance $\texttt{Q}_{k}$ controls how much our estimated value relies on the process (in our case, prior belief) and the measurement noise covariance $\texttt{R}_{k}$ controls how much our prediction is modulated by the measurement. So if we choose a high $\texttt{Q}_{k}$ and a low $\texttt{R}_{k}$, the prediction trusts the measurements more, and vice versa when we choose a low $\texttt{Q}_{k}$ and a high $\texttt{R}_{k}$. So when the error in prediction gets higher than a certain threshold value, or we move between two blocks which are far from each other, we increase $\texttt{Q}_{k}$ and decrease $\texttt{R}_{k}$; if both conditions are unsatisfied we choose a low $\texttt{Q}_{k}$ and a high $\texttt{R}_{k}$.
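A minimal sketch of one filter step, including the block-mean measurement update above and the switching between the two noise sets, might look as follows (Python; here $\texttt{H}$ is stored as a $1\times7$ row vector, and the error threshold of 10 is an illustrative assumption since the paper only states that a threshold is used):
\begin{verbatim}
import numpy as np

def kalman_step(x, P, H, z_mean, Q, R):
    # Time update with F = I.
    x_pred = x
    P_pred = P + Q
    # Measurement update against the block mean.
    S = H @ P_pred @ H.T + R        # innovation covariance (1 x 1)
    K = P_pred @ H.T / S            # Kalman gain (7 x 1)
    x_new = x_pred + (K * (z_mean - H @ x_pred)).ravel()
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

def pick_noise(err, jumped, thresh=10.0):
    # Set-I (trust measurements) on large error or a far jump;
    # otherwise set-II (trust the prior belief); see Table 1.
    if err > thresh or jumped:
        return 0.1 * np.eye(7), 1e-10   # Q_1, R_1
    return 1e-10 * np.eye(7), 0.1       # Q_2, R_2
\end{verbatim}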
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c}
\hline
$\texttt{P}_{0}$&$\texttt{x}_{0}$ & $\texttt{Q}_{1}$&$\texttt{R}_{1}$&$\texttt{Q}_{2}$&$\texttt{R}_{2}$\\
\hline\hline
${I}_{7}$& $\textbf{0}_{7\times1}$&$0.1\times$${I}_{7}$&${10}^{-10}$& ${10}^{ -10} \times{I}_{7}$&0.1\\
\hline
\end{tabular}
\end{center}
\caption{Kalman filter parameters used in our algorithm. $\texttt{P}_{0}$ and $\texttt{x}_{0}$ are respectively the initial error covariance matrix and the initial state vector. $\texttt{Q}_{1}$ and $\texttt{R}_{1}$ belong to set-I of the noise covariance matrices and $\texttt{Q}_{2}$ and $\texttt{R}_{2}$ belong to the second set.}
\label{table_p}
\end{table}
\subsection{Implementation details}
The first parameter we need to specify for our algorithm is the size of the blocks. To generate our results we have used blocks of size $25\times25$ (selected empirically); in all of our simulations, where we initially down-sampled the input image to 400$\times$300, this size provides satisfactory performance.
We have already shown that the function ${M}_{k}$ incorporates three local statistics at different scales. To employ this, we initially calculated two local standard deviation maps (over $3\times3$ and $5\times5$ neighborhoods), two local mean maps ($3\times3$ and $5\times5$ neighborhoods) and three local entropy maps ($5\times5$, $7\times7$ and $9\times9$ neighborhoods) associated with the input image. Then, when calculating ${M}_{k}$ for the $k$th block, we simply take the mean of the values from these feature maps over the region which the $k$th block specifies. Therefore the measurement vector $\texttt{H}_{k}$ contains the seven values corresponding to $\omega_{k}$ from these seven maps.
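A sketch of how these seven maps and the corresponding entries of $\texttt{H}_{k}$ could be computed is given below (Python with SciPy; the number of histogram bins and the assumption that the channel is normalized to $[0,1]$ are our illustrative choices):
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter, generic_filter

def local_mean(img, w):
    return uniform_filter(img, size=w)

def local_std(img, w):
    m = uniform_filter(img, size=w)
    m2 = uniform_filter(img * img, size=w)
    return np.sqrt(np.maximum(m2 - m * m, 0.0))

def local_entropy(img, w, bins=16):
    def ent(patch):
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return generic_filter(img, ent, size=w)

def measurement_vector(img, block):
    # Mean of each statistic map over the block gives the seven
    # entries of H_k (entropy at 5,7,9; mean at 3,5; std at 3,5).
    maps = [local_entropy(img, w) for w in (5, 7, 9)]
    maps += [local_mean(img, w) for w in (3, 5)]
    maps += [local_std(img, w) for w in (3, 5)]
    r0, r1, c0, c1 = block
    return np.array([m[r0:r1, c0:c1].mean() for m in maps])
\end{verbatim}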
\begin{figure}[t]
$
\begin{array}{ccccccccc}
\includegraphics[width=.7in]{ORG1.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D1.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU1.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M1.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S1.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B1.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc1.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT1.png}\\
\includegraphics[width=.7in]{ORG2.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D2.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU2.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M2.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S2.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B2.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc2.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT2.png}\\
\includegraphics[width=.7in]{ORG3.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D3.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU3.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M3.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S3.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B3.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc3.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT3.png}\\
\includegraphics[width=.7in]{ORG4.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D4.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU4.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M4.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S4.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B4.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc4.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT4.png}\\
\includegraphics[width=.7in]{ORG5.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D5.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU5.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M5.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S5.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B5.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc5.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT5.png}\\
\includegraphics[width=.7in]{ORG6.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D6.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU6.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M6.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S6.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B6.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc6.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT6.png}\\
\includegraphics[width=.7in]{ORG7.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D7.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU7.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M7.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S7.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B7.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc7.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT7.png}\\
\includegraphics[width=.7in]{ORG8.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{D8.png}\hspace{-0.4em} &
\includegraphics[width=.7in]{SU8.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{M8.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{S8.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{B8.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{rc8.png}\hspace{-0.4em}&
\includegraphics[width=.7in]{KT8.png}\\
\end{array}$
\vspace{-0.7em}
\caption{Sample results of our model along with human fixation density maps and results from other reference models. Column 1: original input image; Column 2: human fixation density map; Column 3: SUN~\cite{Zhang}; Column 4: CIWM~\cite{Murray}; Column 5: SR~\cite{Seo}; Column 6: AIM~\cite{Bruce}; Column 7: RCSS~\cite{rcss}; Column 8: saliency maps from our proposed KS-7.}
\label{visual_f}
\end{figure}
Usually, tuning of Kalman filter parameters is a challenging task, but in our case tuning is not necessary as we are interested in a coarse estimation of the expected image. Furthermore, as the expected image is a hypothetical signal and there is no precise definition of it, we cannot evaluate the error in its estimation. Table \ref{table_p} contains the values of the Kalman filter parameters which we have used. As we said earlier, we oscillate between two sets of values of $\texttt{R}_{k}$ and $\texttt{Q}_{k}$. When the error between ${M}_{k}$ and $\left \langle \omega_{k} \right \rangle$ goes above a certain threshold, or we move from one block to another which does not belong to the neighborhood of the former (assuming 4-connectivity), we use the values from set-I ($\texttt{Q}_{1}$ and $\texttt{R}_{1}$); otherwise we use set-II ($\texttt{Q}_{2}$ and $\texttt{R}_{2}$). The error between ${M}_{k}$ and $\omega_{k}$ is defined as follows:
\begin{equation}
Error_{k}=|M_{k}-\left \langle \omega_{k} \right \rangle|
\end{equation}
The random traversal strategy is not critical for our algorithm; we could traverse the blocks in any manner. But if we move from one block to another only in a nearest-neighbor sense (along any direction), this can sometimes slightly reduce the performance. Probably this is because continuously navigating along similar regions can lead to almost no changes in the coefficient values for a long time, and then if a region with even a little difference is introduced, it will produce more error. However, our algorithm uses a large block size and employs a coarse construction strategy, so performance is negligibly affected by the traversal strategy.
Note that if we don't move in a nearest-neighbor manner, the distance between blocks also decides how much our current expectation is modulated by the prior belief.
For the multi-scale implementation, along with the initially scaled input, we also produced saliency maps corresponding to the half and quarter resolution images, and then obtained the master saliency map by combining these three saliency maps generated at three scales.
\section{Experimental results}
We have evaluated our proposed algorithm against two benchmark datasets: 1) the Toronto dataset~\cite{bruce2} and 2) the MIT-300 dataset~\cite{juddreport}. The Toronto human fixation dataset, collected by Bruce and Tsotsos~\cite{bruce2}, is a well-known benchmark for the visual saliency detection task which contains 120 color images of equal dimensions ($681\times511$) and eye fixation records from 20 human observers. MIT-300 is a relatively new dataset which contains 300 benchmark images of varying dimensions. As already stated, in this work we have implemented two different variants of our model. We will term the first implementation, which uses only three color channels, "KS-3", and the other one, which utilizes seven feature channels, "KS-7".
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
Model& AUC-Judd&AUC-Borji&CC&SIM&NSS\\
\hline\hline
AIM~\cite{Bruce} &0.79&0.77&0.33&0.38&0.83 \\
SUN~\cite{Zhang}&0.69&0.68&0.26&0.36&0.76 \\
SR~\cite{Seo}& 0.77&0.76&0.40&0.41&1.10\\
CIWM~\cite{Murray}&0.75&0.74&0.35&0.38&0.96\\
RCSS~\cite{rcss}&0.78&0.76&0.43&\textbf{0.44}&1.16\\
KS-3 (proposed)& 0.79&0.78&0.44&0.42&1.20\\
KS-7 (proposed)& \textbf{0.83}&\textbf{0.82}&\textbf{0.53}&\textbf{0.44}&\textbf{1.42}\\
\hline
\end{tabular}
\label{table_r}
\end{center}
\caption{Quantitative comparison between the proposed Kalman filter based method and other methods when predicting human eye fixations on Toronto data set~\cite{bruce2}. }
\end{table}
\subsection{Evaluation metrics}
For quantitative evaluation we have used five standard metrics, namely AUC-Judd~\cite{juddreport}~\cite{riche}, AUC-Borji~\cite{riche}, Correlation Coefficient (CC)~\cite{riche}, Similarity measure (SIM)~\cite{juddreport}~\cite{riche} and Normalized Scanpath Saliency (NSS)~\cite{nss}~\cite{riche}. AUC-Judd and AUC-Borji are both area under the ROC (receiver operating characteristic) curve based metrics which convert the saliency maps into binary maps and treat them as classifiers.
The third metric, CC or correlation coefficient, is a linear measure which can be calculated as below:
\begin{equation}
CC=\frac{cov(S,F)}{\sigma _{S}\ast \sigma _{F}}
\end{equation}
where $S$ and $F$ are saliency map and human fixation map respectively.
The output range of the CC metric is between $-1$ and $+1$. $|\text{CC output}| = 1$ denotes that a perfect linear relationship exists between the ground truth fixation density map and the saliency map. The similarity metric (SIM) compares the fixation map and the saliency map when they are viewed as normalized distributions. The similarity measure between normalized distributions $S_{n}$ and $F_{n}$ is given by:
\begin{equation}
SIM= \sum_{x=1}^{N}min(S_{n}(x),F_{n}(x))
\end{equation}
where $\sum_{x=1}^{N}S_{n}(x)=1$ and $\sum_{x=1}^{N}F_{n}(x)=1$.
A similarity score of 1 denotes that the two distributions are identical. The last metric we used for evaluation is the Normalized Scanpath Saliency, or NSS. This quantitative measure was proposed by Peters and Itti~\cite{nss} in 2005. The overall NSS score of a saliency map is given by:
\begin{equation}
NSS=\frac{1}{N}\sum_{x=1}^{N}S_{n}(x)
\end{equation}
where $S_{n}$ and $N$ denote the normalized saliency map and the total number of human eye fixations respectively.
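The three distribution-based scores can be computed directly from their definitions; a compact sketch follows (Python, assuming the saliency and fixation maps already share a common size and that fixations are given as pixel coordinates):
\begin{verbatim}
import numpy as np

def cc(sal, fix):
    # Linear correlation coefficient between the two maps.
    s = (sal - sal.mean()) / (sal.std() + 1e-12)
    f = (fix - fix.mean()) / (fix.std() + 1e-12)
    return float(np.mean(s * f))

def sim(sal, fix):
    # Histogram intersection of the two maps viewed as distributions.
    s = sal / (sal.sum() + 1e-12)
    f = fix / (fix.sum() + 1e-12)
    return float(np.minimum(s, f).sum())

def nss(sal, fix_points):
    # Mean of the standardized saliency map at fixation locations.
    s = (sal - sal.mean()) / (sal.std() + 1e-12)
    return float(np.mean([s[r, c] for r, c in fix_points]))
\end{verbatim}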
\subsection{Performance on Toronto data set}
On the Toronto dataset, we have compared our results with five other saliency models: 1) SUN~\cite{Zhang}, 2) the information maximization model (AIM)~\cite{Bruce}, 3) the self-resemblance model (SR)~\cite{Seo}, 4) the chromatic induction wavelet model (CIWM)~\cite{Murray} and 5) the random center surround model (RCSS)~\cite{rcss}. Performance has been compared with the other methods both quantitatively (Table 2) and visually (Fig.~\ref{visual_f}).
From Table 2, we can see that the 7-channel variant (KS-7) of the proposed approach outperformed the other models on all metrics by a wide margin; only the random center surround saliency model gave similar performance in terms of similarity score. The 3-channel variant of our algorithm, KS-3, also achieved state-of-the-art performance on all metrics. As we can see from the example images, our method is less susceptible to edges than Zhang et al. (SUN) and Bruce \& Tsotsos (AIM). Though CIWM and the self-resemblance model (SR) sometimes demonstrated better edge suppression, these models tended to include large non-salient regions. When the contrast between the salient region and the background is relatively low (e.g., sample image 7 in Figure 3), only the Bruce-Tsotsos model and our model performed well. Zhang's method (SUN) mainly highlighted edges for most of the images. From visual inspection, it seems that the RCSS model is more appropriate for salient object detection than for eye fixation prediction; RCSS also gave very poor performance for both high entropy images and low contrast images. Qualitative comparison among the results from different models also suggests that a large part of our success can be attributed to a significant reduction in false detections.
In Figure 4, we show the ROC (receiver operating characteristic) curves for CIWM~\cite{Murray}, RCSS~\cite{rcss} and the proposed method. As we can see from the plots, KS-7 demonstrates greater efficacy than the other models. The ROC curves of RCSS and KS-3 are close to each other, while the performance of CIWM is inferior to the other three models.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3 in,height=2.8in]
{ROC_latest2.png}
\end{center}
\caption{ROC curves for both versions of our model, CIWM~\cite{Murray} and the RCSS model~\cite{rcss}.}
\label{fig:short}
\end{figure}
\subsection{Performance on MIT-300 data set}
As the ground truth fixation maps for the MIT-300 images are not publicly available, we compared our model only quantitatively with the other approaches on this dataset. In addition to the five models used for comparison on the Toronto dataset, we have assessed our model's performance on MIT-300 against seven other state-of-the-art methods: CNN-VLM~\cite{cnn}, the multiple kernel learning model (MKL)~\cite{mkl}, context aware saliency (CAS)~\cite{context}, generalized nonlocal mean saliency (GNMS)~\cite{gnms}, NARFI saliency (NARFI)~\cite{narfi}, sampled template collation (STC)~\cite{stc} and the LGS model~\cite{lgs}. In Table 3, we present the quantitative performance of various models on the MIT-300 dataset; these results clearly demonstrate the superiority of our Kalman-based method, which outperformed all other approaches on the AUC-Judd, AUC-Borji and CC metrics. On the SIM and NSS metrics the proposed approach (KS-7) also achieved top scores along with the RCSS and CNN-VLM models. Despite being a completely low-level model, our method performed better in an overall manner than the two learning based approaches, CNN-VLM and MKL. The proposed approach also gave significantly better saliency predictions than the context aware saliency model (CAS), which uses higher level feature detectors (such as a face detector) as well as low-level detectors.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
Model& AUC-Judd&AUC-Borji&CC&SIM&NSS\\
\hline\hline
CNN-VLM~\cite{cnn}& 0.79&\textbf{0.79}&0.44&0.43&\textbf{1.18} \\
MKL~\cite{mkl}& 0.78&0.78&0.42&0.42&1.08 \\
CAS~\cite{context}& 0.74&0.73&0.36&0.43&0.95\\
LGS~\cite{lgs}& 0.76&0.76&0.39&0.42&1.02\\
GNMS~\cite{gnms}& 0.74&0.67&0.34&0.42&0.97\\
NARFI~\cite{narfi} & 0.73&0.61&0.31&0.38&0.83\\
STC~\cite{stc}& 0.79&0.78&0.40&0.39&0.97 \\
RCSS~\cite{rcss}& 0.75&0.74&0.38&\textbf{0.44}&0.95\\
CIWM~\cite{Murray} & 0.70&0.69&0.27&0.38&0.73 \\
SUN~\cite{Zhang}&0.67&0.66&0.25&0.38&0.68\\
AIM~\cite{Bruce}&0.77&0.75&0.31&0.40&0.79\\
SR~\cite{Seo}&0.71&0.69&0.31&0.41&0.83\\
KS-7 (proposed)&\textbf{ 0.80}&\textbf{0.79}&\textbf{0.46}&\textbf{0.44}&\textbf{1.18} \\
\hline
\end{tabular}
\label{table_r2}
\end{center}
\caption{Quantitative comparison between the proposed Kalman filter based method and other methods when predicting human eye fixations on MIT-300 data set~\cite{juddreport}. }
\end{table}
\section{Conclusion}
In this paper we have presented a Kalman filter based saliency detection method which generates a “visually expected scene” and, based on that, builds a saliency map. We have developed our model around the notion of “visual surprise” and it can be extended easily to video data, where instead of traversing the spatial domain, we progress through the time domain. Our proposed model also provides a great deal of flexibility, as anybody can use their own definition of the function ${M}_{k}$ combining multiple features. We have evaluated two different implementations of our model using two popular benchmark datasets and compared our results with various other established algorithms. Experiments have shown that the proposed model performs considerably better than several other existing methods.
In the future we would like to explore pre-attentive segmentation models for the initial segmentation instead of just dividing the image into blocks of uniform size in an ad hoc manner. Optimization of the model implementation also demands detailed investigation, as we have not attempted it yet. Finally, we would like to examine whether performance can be improved by introducing a non-linear combination of local statistics instead of a linear one.
\section{INTRODUCTION}
Increasingly often, researchers are confronted with monitoring the states of nodes in large computer, social, or power networks where these states dynamically change due to viruses, rumors, or failures that propagate according to the graph topology \cite{cohen-2003-91,Eubank_2004,newman-2002-66}. This class of network dynamics has been extensively modeled as a percolation phenomenon, where nodes on a graph can randomly ``infect'' their neighbors.
Percolation across networks has a rich history in the field of statistical physics, computer science, and mathematical epidemiology. Here, researchers are typically confronted with a network, or a distribution over the network topology, and extract fixed point attractors of node configurations, thresholds for phase transitions in node states, or distributions of node state configurations \cite{draief-2006,chak08,newman-2005-95}. In the field of fault detection, the nodes or edges can ``fail'', and the goal is to activate a subset of sensors in the network which yield high quality measurements that identify these failures \cite{Zheng05}. While the former field of research concerns itself with extracting \textit{offline} statistics about properties of the percolation phenomenon on networks, devoid of any measurements, the latter field addresses \textit{online} measurement selection tasks.
Here, we propose a methodology that actively tracks a causal Markov process across a complex network (such as the one in Figure \ref{sf}), where measurements are adaptively selected. We extract conditions such that the updated posterior probability of all nodes ``infected'' is driven to one in the limit of large observation time. In other words, we derive conditions for the existence of an epidemic threshold on the updated posterior distribution over the states.
The proposed percolation threshold should more accurately reflect the true conditions that cause a phase transition in a network, e.g., node status changing from healthy/normal to infected/failed, than traditional thresholds derived from conditions on predictive distributions which are devoid of observations or controls.
Since most practical networks of interest are large, such as electrical grids, it is usually infeasible to sample all nodes continuously, as such measurements are either expensive or bandwidth is limited. Given these, or other resource constraints, we present an information theoretic sampling strategy that selectively targets specific nodes that will yield the largest information gain, and thus, better detection performance.
The proposed sampling strategy balances the trade-off between trusting the \textit{predictions} from the known model dynamics (from percolation theory) and expending precious resources to select a set of nodes for measurement.
We present the adaptive measurement selection problem and give two tractable approximations to this subset selection problem, based upon the joint and marginal posterior distributions respectively. A set of decomposable Bayesian filtering equations is presented for this adaptive sampling framework, and the necessary tractable inference algorithms for complex networks are discussed. We present analytical worst-case bounds on our adaptive sampling performance, which can serve as sampling heuristics for the activation of sensors or for trusting predictions generated from previous measurements.
To the author's knowledge, this is the first attempt to extract a percolation threshold of an actively monitored network using the updated posterior distribution instead of the observation independent predictive distributions.
\section{PROBLEM FORMULATION}
The objective of actively monitoring the $n$ node network is to recursively update the posterior distribution of each hidden node state given various measurements. Specifically, the next set of $m$ measurement actions (nodes to sample), $m \ll n$, at the next discrete time are chosen such that they yield the highest quality of \textit{information} about the $n$ hidden states. The condition $m\ll n$ reflects the reality of fixed resource constraints, where typically only a small subset of nodes in a large network can be observed at any one time.
Here, the hidden states are discrete random variables that correspond to the states encoded by the percolation process on the graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ represents the set of nodes and $\mathcal{E}$ the set of edges. Formally, we will assume a state-space representation of a discrete time, finite state, partially observed Markov decision process (POMDP). Here,
\begin{equation}\label{jointZ}
\textbf{Z}_k = \{Z_k^1,\dots,Z_k^n\}
\end{equation}
represents the joint hidden states, e.g., healthy or infected
\begin{equation}\label{jointY}
\textbf{Y}_k = \{\textbf{Y}_k^{(1)},\dots,\textbf{Y}_k^{(m)}\}
\end{equation}
represents the $m$ observed measurements obtained at time $k$, e.g., biological assays or {\tt PING}ing an IP address, and
\begin{equation}\label{A}
\textbf{a}_k = \{a^1_k,\dots,a^m_k\}
\end{equation}
represents the $m$ actions taken at time $k$, i.e., which nodes to sample. Here, $\textbf{Y}_k^{(j)}$ is a continuous/categorical-valued vector of measurements induced by action $a^j_k$, $a^j_k \in \mathcal{A}$, where $\mathcal{A} = \{1,\dots,n\}$ is the set of all $n$ individuals in the graph, and $Z_k^i \in \{0,1, \dots, r\}$. Since the topology of $\mathcal{G}$ encodes the direction of ``flow'' for the process, the state equations may be modeled as a decomposable partially observed Markov process:
\begin{eqnarray}\label{state_eqs}
\textbf{Y}^i_k & =& \textbf{f}( Z^i_k ) + \textbf{w}^i_k\label{Y} \\
Z^i_k & =& h\left( Z^i_{k-1}, \{ Z^j_{k-1} \}_{j \in \eta(i) } \right)\label{Z}.
\end{eqnarray}
Here, $\eta(i) = \{j: \mathcal{E}\left(\mathcal{V}_i,\mathcal{V}_j \right) \neq \emptyset \}$ is the neighborhood of $i$, $\textbf{f}( Z^i_k )$ is a non-random vector-valued function, $\textbf{w}^i_k$ is measurement noise, and $h\left( Z^i_{k-1}, \{ Z^j_{k-1} \}_{j \in \eta(i) } \right)$ is a stochastic equation encoding the transition dynamics of the Markov process (see Figure \ref{2tbn} for a two node graphical model representation).
\begin{figure}[h!]
\centering
\includegraphics[width=1.75in]{2TBNpp.pdf}
\caption{Partially observed Markov structure for $i$ and $j$ with $\mathcal{E}\left(\mathcal{V}_i,\mathcal{V}_j\right) \neq \emptyset$}
\label{2tbn}
\end{figure}
\subsection{BAYESIAN FILTERING}
In our proposed framework for actively monitoring the hidden node states in the network, the posterior distribution is the sufficient statistic for inferring these states. The general recursion for updating the joint posterior probability given all past and present observations is given by the standard Bayes update formula:
\begin{equation}\label{bayes}
p(\textbf{Z}_k | \textbf{Y}_0^k) = \frac{ f( \textbf{Y}_k | \textbf{Z}_k ) }{ g( \textbf{Y}_k | \textbf{Y}_0^{k-1} ) } p( \textbf{Z}_k | \textbf{Y}^{k-1}_0 )
\end{equation}
with
\begin{equation}\label{intract}
p( \textbf{Z}_k | \textbf{Y}^{k-1}_0 ) = \hspace{-6mm} \sum_{ \textbf{z} \in \{0,1,\dots,r\}^n } \hspace{-4mm} p( \textbf{Z}_k | \textbf{Z}_{k-1} = \textbf{z}) p( \textbf{Z}_{k-1} = \textbf{z} | \textbf{Y}^{k-1}_0).
\end{equation}
The Chapman-Kolmogorov equations provide the connection between the posterior update (\ref{intract}) and the distribution resulting from the standard percolation equations: in the former, the updates are conditioned on past observations, while in the latter, the updates do not depend on observations.
The local interactions in the graph $\mathcal{G}$ imply the following conditional independence assumptions:
\begin{equation}\label{likelihood}
f( \textbf{Y}_k | \textbf{Z}_k) = \prod_{i=1}^n f( \textbf{Y}^i_k | Z^i_k).
\end{equation}
\begin{equation}\label{trans}
p( \textbf{Z}_k | \textbf{Z}_{k-1}) = \prod_{i=1}^n p\left( Z^i_k | Z^i_{k-1}, \{ Z^j_{k-1} \}_{j \in \eta(i)}\right)
\end{equation}
where the likelihood term is defined in (\ref{Y}) and the transition dynamics are defined in (\ref{Z}). This decomposable structure allows the belief state (posterior excluding time $k$ observations) update, for the $i^{th}$ node in $\mathcal{G}$, to be written as:
\begin{equation}\label{marginal}
p(Z^i_k|\textbf{Y}_0^{k-1}) = \hspace{-7mm} \sum_{\textbf{z} \in \{0,1,\dots,r\}^{| \text{pa} |}} \hspace{-7mm} p(Z^i_k | \textbf{Z}^{ \text{pa} }_{k-1} = \textbf{z})p(\textbf{Z}^{ \text{pa} }_{k-1} = \textbf{z} |\textbf{Y}_0^{k-1})
\end{equation}
with the parent set $\text{pa} = \{ \eta(i), i\}$. Unfortunately, for highly connected nodes in $\mathcal{G}$, this marginal update becomes intractable and must be approximated \cite{Doucet:2000bv, Ng02factoredparticles, Mackay_2002_information}.
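For concreteness, a fully factored approximation of (\ref{marginal}) can be sketched for the two-state SIS dynamics used later in Section 3 (Python; treating the parent states as independent given past observations is an assumption of this approximation, not of the model):
\begin{verbatim}
import numpy as np

def sis_belief_predict(p, A, beta, gamma):
    # p[i] = posterior probability that node i is infected at k-1;
    # A is the adjacency matrix of G. Under the independence
    # assumption, node i escapes infection from all neighbors with
    # probability prod_j (1 - beta * p[j]).
    p = np.asarray(p, dtype=float)
    p_next = np.empty(len(p))
    for i in range(len(p)):
        neigh = np.nonzero(A[i])[0]
        escape = np.prod(1.0 - beta * p[neigh])
        p_next[i] = (1.0 - gamma) * p[i] + (1.0 - p[i]) * (1.0 - escape)
    return p_next
\end{verbatim}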
\subsection{INFORMATION THEORETIC ADAPTIVE SAMPLING}
In most real world situations, acquiring measurements from all $n$ nodes at any time $k$ is unrealistic, and thus, a sampling policy must be exploited for measuring a subset of nodes \cite{hero_book_07,Zheng05}. Since we are concerned with monitoring the states of the nodes in the network, an appropriate reward is the expected information gain between the \textit{updated} posterior, $p_k = p(\textbf{Z}_k | \{ \textbf{Y}^{i}_k\}_{i \in \textbf{a}_k}, \textbf{Y}_0^{k-1})$, and the belief state, $p_{k|k-1} = p(\textbf{Z}_k | \textbf{Y}_0^{k-1})$:
\begin{equation}\label{IG}
\textbf{a}_k = \mbox{argmax}_{\textbf{a} \subset \mathcal{A} }\mathbb{E}\left[ \mathcal{D}_{\alpha}\left( \{ \textbf{Y}^{i}_k \}_{i \in \textbf{a}} \right) | \textbf{Y}^{k-1}_0 \right]
\end{equation}
\begin{equation}\label{div}
\mathcal{D}_{\alpha}\left( \{ \textbf{Y}^{i}_k \}_{i \in \textbf{a}} \right) = \mathcal{D}_{\alpha}\left( p_k || p_{k|k-1} \right), \ 0 < \alpha < 1
\end{equation}
with $\alpha$-Divergence
\begin{equation}\label{alpha_div}
\mathcal{D}_{\alpha}(p || q) = \frac{1}{\alpha-1} \mbox{log} \left( \mathbb{E}_q \left[ \left(p/q\right)^{\alpha} \right] \right)
\end{equation}
for distributions $p$ and $q$ with identical support.
The reward in (\ref{IG}) has been widely applied to multi-target, multi-sensor tracking for many problems, including sensor management and surveillance \cite{hero_book_07, ecs16801}. Note that $\lim_{\alpha \rightarrow 1}\mathcal{D}_{\alpha}( p || q ) \rightarrow \mathcal{D}_{KL}( p || q)$, where $\mathcal{D}_{KL}( p ||q)$ is the Kullback-Leibler divergence between $p$ and $q$. The expectation in (\ref{IG}) is taken with respect to the conditional distribution $g(\textbf{Y}_k|\textbf{Y}_0^{k-1})$ of the observations given the previous measurements $\textbf{Y}_0^{k-1}$ and actions $\textbf{a}_k$. In practice, the expected information divergence in (\ref{IG}) must be evaluated via Monte Carlo methods. Also, the maximization in (\ref{IG}) requires enumeration over all $\binom{n}{m}$ actions (for subsets of size $m$), and therefore we must resort to greedy approximations. We propose incrementally constructing the set of actions at time $k$, $\textbf{a}_k$, for $j = 1,\dots,m$, according to:
\begin{equation}\label{approxIG}
a^j_k = \mbox{argmax}_{i \in \mathcal{A} \setminus \textbf{a}_k }\mathbb{E}\left[ \mathcal{D}_{\alpha}\left( \textbf{Y}^i_k, \{ \textbf{Y}^{j}_k \}_{j \in \textbf{a}_k} \right) | \textbf{Y}^{k-1}_0 \right].
\end{equation}
Both (\ref{IG}) and (\ref{approxIG}) select the nodes to sample which yield maximal divergence between the percolation prediction distribution (belief state) and the updated posterior distribution, averaged over all possible observations. Thus (\ref{IG}) provides a metric to assess whether to trust the predictor and defer actions until a future time, or to take action, sample a node, and update the posterior.
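A Monte Carlo sketch of this greedy selection, using the per-node marginal posteriors and a scalar Gaussian sensor model $f(\textbf{Y}^i_k|Z^i_k=z)=\mathcal{N}(\mu_z,\sigma^2)$, is given below (Python; the sensor model and its parameters are illustrative assumptions, and ranking nodes by marginal gain further simplifies the incremental rule in (\ref{approxIG})):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def alpha_div_bernoulli(p, q, alpha=0.5):
    # Alpha-divergence between Bernoulli(p) and Bernoulli(q):
    # log(p^a q^(1-a) + (1-p)^a (1-q)^(1-a)) / (a - 1).
    m = p**alpha * q**(1 - alpha) + (1 - p)**alpha * (1 - q)**(1 - alpha)
    return np.log(m) / (alpha - 1)

def expected_gain(p_i, mu0=0.0, mu1=1.0, sigma=0.5, n_mc=500, rng=None):
    # Monte Carlo estimate of the expected divergence for node i.
    rng = rng or np.random.default_rng(0)
    z = rng.random(n_mc) < p_i                    # simulate hidden state
    y = rng.normal(np.where(z, mu1, mu0), sigma)  # simulate observations
    f1, f0 = norm.pdf(y, mu1, sigma), norm.pdf(y, mu0, sigma)
    post = f1 * p_i / (f1 * p_i + f0 * (1 - p_i))
    return float(np.mean(alpha_div_bernoulli(post, p_i)))

def greedy_select(belief, m):
    # Rank nodes by expected marginal information gain; take top m.
    gains = np.array([expected_gain(p) for p in belief])
    return list(np.argsort(gains)[-m:][::-1])
\end{verbatim}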
\subsubsection{Lower Bound on Expected $\alpha$-Divergence}
Since the expected $\alpha$-divergence in (\ref{IG}) is not closed form, we could resort to numerical methods for estimating this quantity. Alternatively, one can specify an analytical lower bound that can be used in lieu of numerically computing the expected information gain in (\ref{IG}) or (\ref{approxIG}).
We begin by noting that the expected divergence between the updated posterior and the predictive distribution (conditioned on previous observations) differ only through the measurement update factor, $f_k/g_{k|k-1}$ ((\ref{IG}) re-written):
\vspace{-7mm}
\begin{gather}
\mathbb{E}_{g_{k|k-1}} \left[ \mathcal{D}_{\alpha} \left( p_k || p_{k|k-1} \right) \right] \nonumber\\ = \mathbb{E}_{g_{k|k-1}} \left[ \frac{1}{\alpha-1} \mbox{log} \ \mathbb{E}_{p_{k|k-1}} \left[ \left( \frac{ f_k }{g_{k|k-1}} \right)^{\alpha} \right] \right]\label{alphaDiv}.
\end{gather}
\vspace{-4mm}
So, if there is significant overlap between the likelihood distributions of the observations, the expected divergence will tend to zero, implying that there is not much value-added in taking measurements, and thus, it is sufficient to use the percolation predictions for inferring the states.
It would be convenient to interchange the order of the conditional expectations in (\ref{alphaDiv}). It is easily seen that Jensen's inequality yields the following lower bound for the expected information gain
\begin{gather}\label{performbound}
\mathbb{E}_{g_{k|k-1}} \left[ \mathcal{D}_{\alpha} \left( p_k || p_{k|k-1} \right) \right] \nonumber\\ \ge \frac{1}{\alpha-1} \mbox{log} \ \mathbb{E}_{p_{k|k-1}} \left[ \mathbb{E}_{g_{k|k-1}} \left[ \left( \frac{ f_k }{g_{k|k-1}} \right)^{\alpha} \right] \right].
\end{gather}
Here, the inner conditional expectation can be obtained from $\mathcal{D}_{\alpha}\left(f_k||g_{k|k-1} \right)$, which has a closed form for common distributions (e.g., multivariate Gaussians) \cite{hero_book_07}.
\section{ASYMPTOTIC ANALYSIS OF MARGINAL POSTERIOR}
For tracking the percolation process across $\mathcal{G}$, we have discussed recursive updating of the belief state. However, computing these updates exactly is in general intractable. For the remainder of the paper, we will use (\ref{Y}) and (\ref{Z}) to directly update the marginal posterior distribution using the following matrix representation:
\begin{equation}\label{marg_update}
\textbf{p}_k(z) = \textbf{D}_k(z) \textbf{p}_{k|k-1}(z)
\end{equation}
with updated marginal posterior $\textbf{p}_k(z) = [p_{1,k}(z), \dots,p_{n,k}(z)]^T$ with $p_{i,k}(z) = p(Z^i_k=z| \textbf{Y}^i_k, \textbf{Y}^{k-1}_0)$, $\textbf{D}_k(z) = \text{diag} \left( f_{i,k}^{(z)} / g_{i,k|k-1} \right) $, and marginal belief state $\textbf{p}_{k|k-1}(z) = [p_{1,k|k-1}(z), \dots,p_{n,k|k-1}(z)]^T$ with $p_{i,k|k-1}(z) = p(Z^i_k=z | \textbf{Y}^{k-1}_0)$.
Note that for $i \notin \textbf{a}_k$, $\left( \textbf{D}_k(z) \right)_{i,i} = 1$, and $p_{i,k}(z) = p_{i,k|k-1}(z)$. Given that we can find an efficient way of updating $\textbf{p}_{k|k-1}(z)$, according to the transition dynamics (\ref{Z}), we can solve a modified version of (\ref{approxIG}), for $j=1,\dots,m$:
\begin{equation}\label{IG_marg}
a^j_k = \mbox{argmax}_{i \in \mathcal{A} \setminus \textbf{a}_k }\mathbb{E}\left[ \mathcal{D}_{\alpha}\left( \textbf{Y}^i_k \right) | \textbf{Y}^{k-1}_0 \right]
\end{equation}
\begin{equation}\label{div}
\mathcal{D}_{\alpha}\left( \textbf{Y}^i_k \right) = \mathcal{D}_{\alpha}\left( p_{i,k}(z) || p_{i,k|k-1}(z) \right), \ 0 < \alpha < 1.
\end{equation}
\subsection{TOTAL DIVERGENCE OF UPDATED POSTERIOR}
One interesting property of the Bayesian filtering equations is that the updated posterior can be written as a perturbation of the predictive percolation distribution through the following relationship ($z$ omitted for clarity):
\begin{equation}\label{mat_update}
\textbf{p}_k = \textbf{D}_k \textbf{p}_{k|k-1} = \textbf{p}_{k|k-1} + \left( \textbf{D}_k - \textbf{I} \right) \textbf{p}_{k|k-1}.
\end{equation}
Hence, when the sensors do a poor job in discriminating the observations, $\textbf{D}_k \approx \textbf{I}$, we have $\textbf{p}_k \approx \textbf{p}_{k|k-1}$. It is of interest to determine when there is significant difference between the posterior update and the prior update specified by the standard percolation equations. Recall that the updated posterior is, in the mean, equal to the predictive distribution, $\mathbb{E}\left[ \textbf{p}_k |\textbf{Y}^{k-1}_0 \right] = \textbf{p}_{k|k-1}$. The total deviation of the updated posterior from the percolation distribution can be summarized by computing the trace of the following conditional covariance:
\vspace{-5mm}
\begin{small}
\begin{gather}
\mbox{tr} \left( \textbf{R} \left[ \textbf{p}_k | \textbf{Y}^{k-1}_0 \right] \right) = \label{cond_cov} \\
\mbox{tr} \left( \mathbb{E} \left[ \left( \textbf{p}_k - \mathbb{E} \left[ \textbf{p}_k | \textbf{Y}^{k-1}_0 \right] \right) \left( \textbf{p}_k - \mathbb{E} \left[ \textbf{p}_k | \textbf{Y}^{k-1}_0 \right] \right)^T | \textbf{Y}^{k-1}_0 \right] \right). \nonumber
\end{gather}
\end{small}
\vspace{-5mm}
Using (\ref{mat_update}) and properties of the trace operator, we obtain the following measure of total deviation of the updated posterior from the predictive distribution in terms of $f_k$ and $g_{k|k-1}$:
\vspace{-5mm}
\begin{gather}\label{deviation}
\text{tr} \left( \textbf{R} \left[ \textbf{p}_k | \textbf{Y}^{k-1}_0 \right] \right) = \mbox{tr} \left( \mathbb{E} \left[ \left( \textbf{D}_k - \textbf{I}\right)^2 | \textbf{Y}^{k-1}_0 \right] \textbf{P}_{k|k-1} \right)
\end{gather}
\vspace{-5mm}
with $\textbf{P}_{k|k-1} = \textbf{p}_{k|k-1}\textbf{p}_{k|k-1}^T$. The conditional expectation in (\ref{deviation}) is the Pearson $\chi^2$ divergence between distributions $f_{i,k}$ and $g_{i,k|k-1}$, for all $i$. This joint measure of deviation is analytical for particular families of distributions and thus can be used as an alternative measure of divergence for activation of sensors \cite{hero_book_07}.
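Since the deviation measure reduces to a weighted sum of per-node Pearson $\chi^2$ divergences, it is cheap to evaluate; a sketch (Python, with the divergence estimated by Monte Carlo when no closed form is available) follows:
\begin{verbatim}
import numpy as np

def chi2_divergence_mc(f, g, sampler, n_mc=2000):
    # Monte Carlo Pearson chi^2 divergence E_g[(f/g - 1)^2],
    # with observations y drawn from g via `sampler`.
    y = sampler(n_mc)
    r = f(y) / g(y)
    return float(np.mean((r - 1.0) ** 2))

def total_deviation(belief, chi2):
    # tr(R) = sum_i chi2_i * p_{i,k|k-1}^2, since E[(D-I)^2] is
    # diagonal and P_{k|k-1} = p p^T has p_i * p_j entries.
    return float(np.sum(np.asarray(chi2) * np.asarray(belief) ** 2))
\end{verbatim}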
\subsection{PERCOLATION THRESHOLD OF UPDATED POSTERIOR}
There has recently been significant interest in deriving the conditions of a percolation/epidemic threshold in terms of transition parameters and the graph adjacency matrix spectra for two state causal Markov processes \cite{chak08,draief-2006,newman-2005-95}. Such thresholds yield conditions necessary for an epidemic to arise from a small number of ``infections''. Knowledge of these conditions are particularly useful for designing ``robust'' networks, where the probability of epidemics is minimized.
Percolation thresholds are typically obtained by extracting the sufficient conditions of the network and model parameters for the node states to be driven to their stationary point, with high probability. The probability of these events are usually computed using the observation independent percolation distribution \cite{chak08,draief-2006,newman-2005-95}.
We use the results in \cite{chak08,draief-2006} to derive a percolation threshold based upon the updated posterior distribution, assuming a restricted class of two-state Markov processes. These conditions should more accurately model the \textit{current} network threshold, since the posterior distribution tracks a particular ``disease'' trajectory better than the observation-independent percolation distribution.
Formally, $Z^i_k \in \{0,1\}$, $f^{(z)}_{i,k} = f(\textbf{Y}^i_k | Z^i_k = z)$ is the conditional likelihood for node $i$, $p_{i,k} = p(Z^i_k = 1 | \textbf{Y}^i_k,\textbf{Y}^{k-1}_0)$, and $p_{i,k|k-1} = p(Z^i_k = 1 | \textbf{Y}^{k-1}_0)$. Here, we will assume that $\textbf{Z}_k = \textbf{0}$ is the unique absorbing state of the system.
The Bayes update for $p_{i,k}$ can be written as ($i$ subscript omitted for clarity):
\begin{eqnarray}\label{bayesupdating}
p_k & =& \frac{ f^{(1)}_k }{ f^{(1)}_k p_{k|{k-1}} + f^{(0)}_k (1-p_{k|{k-1}} )} p_{k|{k-1}} \nonumber \\
& =& \frac{ f^{(1)}_k / f^{(0)}_k }{1 + \frac{ f^{(1)}_k - f^{(0)}_k }{ f^{(0)}_k} p_{k|{k-1}} } p_{k|{k-1}} \nonumber \\
& =& \frac{ f^{(1)}_k / f^{(0)}_k }{1 + \frac{ \Delta f_k }{ f^{(0)}_k } p_{k|{k-1}} } p_{k|{k-1}}\label{bayes_geo}.
\end{eqnarray}
There are three different sampling/observation dependent possibilities for each individual at time $k$: case (1), $i$ is not sampled and therefore, $p_k = p_{k|k-1}$, case (2), $\Delta f_k > 0$, and case (3), $\Delta f_k < 0$.
We first derive a tight upper bound of the form $p_k \le c_k \ p_{k|k-1}$ for cases (2) and (3). For the remainder of the analysis we will assume that $| \frac{ \Delta f_k }{ f^{(0)}_k } p_{k|{k-1}} | < 1$ for cases (2) and (3) (see Appendix).
Using the upper-bounds derived in the Appendix, and after gathering all $n$ nodes, we have the following element-wise upper-bound on the updated belief state:
\begin{equation}\label{upper_pos}
\textbf{p}_k \le \textbf{C}_k \textbf{p}_{k|{k-1}} = \left( \textbf{B}_k + \mathcal{O}_k \right) \textbf{p}_{k|{k-1}}.
\end{equation}
with $ \textbf{B}_k = \text{diag}\left(b_{i,k}\right)$ and $ \mathcal{O}_k = \text{diag}\left( \mathbb{I}_{ \{ \Delta f_{i,k} < 0 \} } \mathcal{O}\left( \frac{|\Delta f_{i,k}| }{ f^{(0)}_{i,k} } p_{i,k|{k-1}} \right) \right)$ where $\mathbb{I}_{ \{ \Delta f_{i,k} < 0 \} }$ is the indicator function for the event $\Delta f_{i,k} < 0$.
Thus far, we have established, under the assumptions of $| \frac{ \Delta f_k }{ f^{(0)}_k } p_{k|{k-1}} | < 1$, an upper-bound for the updated posterior in terms of observation likelihoods and the belief state (\ref{upper_pos}).
Next, consider the restricted class of two-state Markov processes on $\mathcal{G}$, for which we can produce a bound of the form
\begin{equation}\label{dynamic_bound}
\textbf{p}_{k|k-1} \le \textbf{S} \textbf{p}_{k-1}
\end{equation}
where \textbf{S} contains information about the transition parameters and the topology of the network.
It turns out that the $SIS$ model of mathematical epidemiology falls within this restricted class of percolation problems \cite{chak08}.
\noindent
The $SIS$ model on a graph $\mathcal{G}$ assumes that each of the $n$ individuals is in state $0$ or $1$, where $0$ corresponds to \textit{susceptible} and $1$ corresponds to \textit{infected}. At any time $k$, an individual can receive the infection from its neighbors, $\eta(i)$, based upon their states at $k-1$.
\noindent
Under this $SIS$ model, it is shown in \cite{chak08} that
\begin{equation}\label{system_matrix}
\textbf{S} = (1-\gamma)\textbf{I} + \beta \textbf{A}
\end{equation}
where the Markov transition parameter $\gamma$ is the probability of $i$ transitioning from $1$ to $0$, $\beta$ is the probability of transmission between neighbors $i$ and $j$, and \textbf{A} is the graph adjacency matrix (see Figure \ref{SIS_chain}).
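As a concrete illustration, the system matrix and the associated threshold check can be computed directly from the adjacency matrix. The following Python sketch is illustrative only (the function names are ours), and uses the fact that $\lambda_1(\textbf{S})<1$ is equivalent to $\beta\lambda_1(\textbf{A})<\gamma$ for symmetric $\textbf{A}$.
\begin{verbatim}
import numpy as np

def sis_system_matrix(A, beta, gamma):
    # S = (1 - gamma) I + beta A, cf. the equation above
    return (1.0 - gamma) * np.eye(A.shape[0]) + beta * A

def below_threshold(A, beta, gamma):
    # the unforced epidemic dies out iff the spectral radius of S
    # is < 1, i.e. tau = beta/gamma < 1/lambda_1(A)
    S = sis_system_matrix(A, beta, gamma)
    return np.max(np.abs(np.linalg.eigvals(S))) < 1.0
\end{verbatim}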
Returning to the derivation, using the bound (\ref{dynamic_bound}), we have, by induction, the following recursion:
\begin{eqnarray}\label{recursion}
\textbf{p}_k &\le& \textbf{C}_k \textbf{p}_{k|{k-1}} \le \textbf{C}_k \textbf{S} \textbf{p}_{k-1} \le \left( \textbf{C}_k \textbf{S} \cdots \textbf{C}_1 \textbf{S} \right) \textbf{p}_0 \nonumber \\
&=& \left( \textbf{B}_k \textbf{S} \cdots \textbf{B}_1 \textbf{S} \right) \textbf{p}_0 + \mathcal{O}_{ \textbf{C}_k \textbf{S} }
\end{eqnarray}
where we have lumped the higher order modes and higher order cross-terms into $\mathcal{O}_{ \textbf{C}_k \textbf{S} } $.
The \textit{dominant mode of decay} of the updated posterior may be found by investigating the following eigen-decomposition:
\begin{equation}\label{BS}
\textbf{B}_k \textbf{S} = \left( \sum_{j=1}^n b_{j,k} \textbf{e}_j \textbf{e}_j^T \right) \left( \sum_{j=1}^n \lambda_j \textbf{u}_j \textbf{u}_j^T \right)
\end{equation}
with $\textbf{e}_j = [0,\dots,0,1,0,\dots,0]^T$ ($1$ at $j^{th}$ element). Without loss of generality, we can assume the eigenvalues of $\textbf{S}$ are listed in decreasing order, $|\lambda_1| \ge \dots \ge |\lambda_n|$. Now rewriting (\ref{BS}), we have
\begin{eqnarray}\label{spectral}
\textbf{B}_k \textbf{S} &=& \left( b_{j_k} \textbf{e}_{j_k} \textbf{e}_{j_k}^T + \mathcal{O}_B \right) \left( \lambda_1 \textbf{u}_1 \textbf{u}_1^T + \mathcal{O}_S \right) \nonumber \\
&=& \left( \lambda_1 b_{j_k} \textbf{e}_{j_k} \textbf{e}_{j_k}^T \textbf{u}_1 \textbf{u}_1^T + \mathcal{O}_{BS} \right)
\end{eqnarray}
where $b_{j_k} = \text{max}_{j\in\{1,\dots,n\}} b_{j,k}$ and the $\mathcal{O}_B, \mathcal{O}_S, \mathcal{O}_{BS}$ variables correspond to the higher-order terms. Inserting (\ref{spectral}) into (\ref{recursion}), and matching the largest eigenvalues of $\textbf{B}_k$ with $\lambda_1$, we obtain
\begin{eqnarray}\label{bound}
\textbf{p}_k &\le& \left( \textbf{B}_k \textbf{S} \dots \textbf{B}_1 \textbf{S} \right) \textbf{p}_0 + \mathcal{O}_{ \textbf{C}_k \textbf{S} } \nonumber \\
&=& \hspace{-3mm} \lambda_1^k \prod_{l=1}^k b_{j_l} \left( \prod_{l=1}^k\left( \textbf{e}_{j_l} \textbf{e}_{j_l}^T \textbf{u}_1 \textbf{u}_1^T \right) \right) \textbf{p}_0 + \hspace{-1mm} \mathcal{O}(\varphi^k).
\end{eqnarray}
Thus, at large $k$, the dominant mode of the posterior goes as $\lambda_1^k \prod_{l=1}^k b_{j_l}$ (the modes in $\mathcal{O}(\varphi^k)$ decay faster than the dominant mode presented above).
We can see that if the spectral radius of $\textbf{S}$ is less than one, $|\lambda_1| < 1$, then for large $k$, $\textbf{p}_k \to \textbf{0}$, which is the unique absorbing state of the system.
This epidemic threshold condition on $\lambda_1$ has been previously established for unforced $SIS$-percolation processes \cite{chak08}. However, in the tracking framework, the rate at which the posterior decays to the \textit{susceptible} state is perturbed by an additional measurement dependent factor, $\prod_{l=1}^k b_{j_l}$.
This measurement-dependent dominant mode of the posterior should model the true dynamic response of the node states more accurately than that in \cite{chak08}, since the posterior tracks the truth better than the unforced predictive distribution. Additionally, this dominant mode of the updated posterior distribution allows one to simulate the response of the percolation threshold to intervention and control actions which are designed to increase the threshold, such that the probability of epidemics is minimized.
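The recursion (\ref{recursion}) itself is straightforward to iterate numerically, e.g., to visualize the decay of the bound toward the absorbing state. A sketch follows, with our own function name and with the diagonal matrices $\textbf{C}_k$ supplied externally.
\begin{verbatim}
import numpy as np

def posterior_upper_bound(p0, S, C_list):
    # iterate the element-wise bound p_k <= C_k S p_{k-1};
    # clipping at 1 keeps the envelope a valid probability bound
    p = np.asarray(p0, dtype=float).copy()
    for C in C_list:
        p = np.minimum(C @ (S @ p), 1.0)
    return p
\end{verbatim}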
\section{NUMERICAL EXAMPLE}
\begin{figure}[h!]
\centering
\includegraphics[width=1.80in]{SIS_GMpp.pdf}
\caption{$SIS$ Markov Chain for Node $i$ Interacting with the Infected States of its Neighbors}
\label{SIS_chain}
\end{figure}
Here, we present results of simulations of our adaptive sampling for the active tracking of a causal Markov ground truth process across a random 200 node, scale-free network (Figure \ref{sf}). Since the goal in tracking is to accurately classify the states of each node, we are interested in exploring the detection performance as the likelihood of an epidemic increases through the percolation threshold for this graph.
One would expect different phase transitions (thresholds) in detection performance for various sampling strategies, ranging from the lowest threshold for the unforced percolation distribution to the highest for continuous monitoring of all $n$ nodes. We will present a few of these detection surfaces that depict these phase transitions for the unforced percolation distribution, random $m=40$ node sampling, and our proposed information theoretic adaptive sampling with $m=40$.
Here, we will restrict our simulations to the two-state $SIS$ model of mathematical epidemiology described above.
\begin{figure}[h!]
\vspace{-3.0mm}
\centering
\includegraphics[width=2.5in]{sf_200.jpg}
\vspace{-2mm}
\caption{200 Node Scale-Free Graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$}
\label{sf}
\end{figure}
The sensor models (\ref{Y}) are two-dimensional multivariate Gaussians with common covariance and a shifted mean vector. The transition dynamics of the $i^{th}$ individual (\ref{Z}), for the $SIS$ model, are given by:
\vspace{-6mm}
\begin{small}
\begin{equation}\label{SIS}
Z^i_k|\textbf{Z}^{ \{i,\eta(i)\} }_{k-1} \hspace{-1mm} \sim (1-\gamma) Z^i_{k-1} + (1 - Z^i_{k-1}) \hspace{-1mm} \left[ 1- \hspace{-2mm} \displaystyle\prod_{j \in \eta(i)} ( 1-\beta Z^j_{k-1} )\right].
\end{equation}
\end{small}
\vspace{-4mm}
where $Z^i_{k-1} \in \{0,1\}$ is the indicator function of $i$ being infected at time $k-1$. The transmission term between $i$ and $\eta(i)$ is known as the Reed-Frost model \cite{chak08,draief-2006,newman-2002-66}. Since the tail of the degree distribution of our synthetic scale-free graph contains nodes with degree greater than 10, updating (\ref{marginal}) exactly is infeasible and we must resort to approximate algorithms. Here, we will assume the mean field approximation used by \cite{chak08} for this $SIS$ model, resulting in the following marginal belief state update for the $i^{th}$ node being infected ($Z_k^i = 1$):
\vspace{-6mm}
\begin{small}
\begin{equation}\label{mfapprox}
p_{i,k|k-1} = (1-\gamma)p_{i,k-1} + (1 - p_{i,k-1}) \left[ 1-\displaystyle\prod_{j \in \eta(i)} ( 1-\beta p_{j,k-1} )\right].
\end{equation}
\end{small}
\vspace{-4mm}
Equation (\ref{mfapprox}) allows us to efficiently update the marginal belief state directly for all $n$ nodes which are then used for estimating the best $m$ measurements using (\ref{IG_marg}).
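A vectorized form of (\ref{mfapprox}) is immediate, since for the adjacency matrix $\textbf{A}$ the factor $1-\beta A_{ij}p_{j,k-1}$ equals one whenever $j\notin\eta(i)$. The sketch below is illustrative only.
\begin{verbatim}
import numpy as np

def mean_field_predict(p_prev, A, beta, gamma):
    # product over j in eta(i) of (1 - beta p_j), taken row-wise;
    # non-neighbours contribute a factor of exactly 1
    no_infection = np.prod(1.0 - beta * A * p_prev[None, :], axis=1)
    return (1.0 - gamma) * p_prev + (1.0 - p_prev) * (1.0 - no_infection)
\end{verbatim}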
\begin{figure}[h!]
\centering
\subfigure[AUR Surface for Unforced Prediction Distribution (no evidence acquired throughout the monitoring)]
{
\label{AUR_perc}
\includegraphics[width=8.5cm]{AUR_surface_perc.pdf}
}
\hspace{1cm}
\subfigure[AUR Surface for Updated Posterior Distribution with $m=40$ Random Measurements at Each Time $k$]
{
\label{AUR_rnd}
\includegraphics[width=8.5cm]{AUR_surface_updated_rnd.pdf}
}
\hspace{1cm}
\subfigure[AUR Surface for Updated Posterior Distribution with $m=40$ Information Theoretic Adaptive Measurements at Each Time $k$]
{
\label{AUR_adaptive}
\includegraphics[width=8.5cm]{AUR_surface_updated.pdf}
}
\caption{Area under the ROC curve surface as a function of percolation parameter $\tau = \beta/\gamma$ and time }
\label{AUR}
\end{figure}
As we are interested in detection performance as a function of time and epidemic intensity, the Area Under the ROC Curve (AUR) is a natural statistic to quantify the detection power (detection of the infected state). The AUR is evaluated at each time $k$, each $SIS$ percolation intensity parameter
\begin{equation}\label{tau}
\tau = \beta/\gamma
\end{equation}
and over 500 random initial states of the network. For the $SIS$ model, $\tau$ is the single parameter (aside from the topology of the graph) that characterizes the intensity of the percolation/epidemic. It is useful to understand how the detection performance varies as a function of epidemic intensity, as it indicates how well the updated posteriors are playing ``catch-up'' in tracking the true dynamics on the network.
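Computing the AUR at a given $(k,\tau)$ amounts to scoring the true node states against the posterior marginals; a minimal sketch, assuming \texttt{scikit-learn} is available and that both classes occur among the pooled labels, is:
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def aur_at_time_k(true_states, beliefs):
    # labels: true states Z^i_k; scores: posterior marginals p_{i,k},
    # pooled over nodes (and, outside this call, over Monte Carlo runs)
    return roc_auc_score(np.ravel(true_states), np.ravel(beliefs))
\end{verbatim}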
For this $SIS$ model, the percolation threshold is defined as $\tau_c = 1/\lambda_1(\textbf{A})$ where $\lambda_1(\textbf{A}) = \text{max}_{i \in \{1,\dots,n\} } | \lambda_i |$ is the spectral radius of the graph adjacency matrix, $\textbf{A}$ \cite{chak08}. Values of $\tau$ greater than $\tau_c$ imply that any infection tends to become an epidemic, whereas values less than $\tau_c$ imply that small epidemics tend to die out.
For the network under investigation (Figure \ref{sf}), $\tau_c = 0.1819$. We see from Figure \ref{AUR_perc} that a phase transition in detection power (AUR) for the unforced percolation distribution does indeed coincide with the epidemic threshold $\tau_c$. While the epidemic threshold for the random and adaptive sampling policies is still $\tau_c = 0.1819$, the measurements acquired allow the posterior to better track the truth, but only up to their respective phase transitions in detection power (see Figures \ref{AUR_rnd} and \ref{AUR_adaptive}).
Figure \ref{AUR_adaptive} confirms that the adaptive sampling tracks the truth better than randomly sampling nodes, while pushing the phase transition in detection performance to higher percolation intensities, $\tau$. The major benefit of the adaptive sampling is apparent when the conditions of the network are changing moderately, i.e., at medium epidemic intensities. Beyond a certain level of percolation intensity, more resources will need to be allocated to sampling to maintain a high level of detection performance.
A heuristic sampling strategy based on the topology of $\mathcal{G}$ was also explored (results not shown) by sampling the ``hubs'' (highly-connected nodes). However, detection performance was only slightly better than random sampling and poorer than our adaptive sampling method.
\begin{figure}[h!]
\centering
\includegraphics[width=3.60in]{node_sampling.pdf}
\vspace{-10mm}
\caption{Relative Frequency of Nodes Sampled ($z$-axis) of a Given Degree ($x$-axis) Over Time ($y$-axis) for $m=40$ Adaptive Sampling Strategy: a.) $\tau = 0.125$, b.) $\tau = 0.2143$, and c.) $\tau = 0.5$}
\label{sampling}
\end{figure}
It is often useful, for developing sampling heuristics and offline control/intervention policies, to inspect what \textit{type} of node, topologically speaking, the adaptive sampling strategy targets under various network conditions (different values of $\tau$). In Figure \ref{sampling}, the relative frequency of nodes sampled with a particular degree is plotted against time (under the $m=40$ adaptive sampling strategy) for three different values of $\tau$ (over 500 random initial conditions of the network).
For the largest of the three values explored ($\tau = 0.5 > \tau_c$) we see that the sampling is approximately uniform across the nodes of each degree on the graph (Figure \ref{sampling}(c)). Therefore, under extremely intense epidemic conditions, the adaptive sampling strategy targets all nodes of each degree equally, and it is therefore sufficient to perform random sampling. For the two lower values of $\tau$, Figure \ref{sampling}(a) and Figure \ref{sampling}(b) (near $\tau_c$), we see that the adaptive policy targets highly connected nodes more frequently than those of lesser degree and thus it is more advantageous to exploit such a strategy, as compared to random sampling (see AUR surface in Figure \ref{AUR_adaptive}).
\section{DISCUSSION}
In this paper, we have derived the conditions for a network-specific percolation threshold using expressions for the updated posterior distribution resulting from actively tracking the process. These conditions recover the unforced percolation threshold derived in \cite{chak08}, but with an additional factor involving sensor likelihood terms due to measurements obtained throughout the monitoring. A term of the form $\lambda_1^k \prod_{l=1}^k b_{j_l}$ (derived in (\ref{bound})) was shown to be the dominant mode of the updated posterior, which also characterizes the dynamic response to active interventions such as immunizing nodes (holding node states constant). The percolation conditions based on the updated posterior should more accurately model the phase transition corresponding to a particular disease trajectory and, therefore, enable a better assessment of immunization strategies and of any subsequent observations resulting from such actions. The framework presented above, along with the new posterior percolation threshold, should provide additional insight into the active monitoring of large complex networks under resource constraints.
\section{APPENDIX}\label{A}
In case (2), when $\Delta f_k > 0$, we can re-write (\ref{bayes_geo}) in terms of an \textit{alternating geometric series}:
\begin{eqnarray}\label{bayesupdate_withsum}
p_k &=& \frac{ f^{(1)}_k }{ f^{(0)}_k } \left[ \displaystyle\sum_{l=0}^{\infty} (-1)^l \left(\frac{ |\Delta f_k| }{ f^{(0)}_k } p_{k|{k-1}} \right)^l \right] p_{k|{k-1}} \nonumber \\
&\le& \frac{ f^{(1)}_k }{ f^{(0)}_k } \left[ 1 + \frac{ |\Delta f_k| }{ f^{(0)}_k } p_{k|{k-1}} \right] p_{k|{k-1}}
\end{eqnarray}
where we have used the fact that $1/(1+|a|) \le 1+|a|$. Recalling that $p \ge p^2$ for $0 \le p \le 1$, we have
\begin{equation}\label{bayesupdate_case2}
p_k \le \frac{ f^{(1)}_k }{ f^{(0)}_k } \left[ 1 + \frac{ |\Delta f_k| }{ f^{(0)}_k } \right] p_{k|{k-1}}.
\end{equation}
In case (3), when $\Delta f_k < 0$, (\ref{bayes_geo}) can be represented as a \textit{geometric series}:
\vspace{-5mm}
\begin{small}
\begin{gather}\label{bayesupdate_case3series}
p_k = \frac{ f^{(1)}_k }{ f^{(0)}_k } \left[ \sum_{l=0}^{\infty} \left( \frac{ |\Delta f_k| }{ f^{(0)}_k } p_{k|{k-1}} \right)^l \right] p_{k|{k-1}} \nonumber \\
= \frac{ f^{(1)}_k }{ f^{(0)}_k } \left[ 1 + \frac{ |\Delta f_k| }{ f^{(0)}_k } p_{k|{k-1}} + \sum_{l=2}^{\infty} \left(\frac{ |\Delta f_k| }{ f^{(0)}_k } p_{k|{k-1}} \right)^l \right] p_{k|{k-1}}.
\end{gather}
\end{small}
\vspace{-5mm}
Once again, using the $p \ge p^2$ bound, we obtain:
\begin{equation}\label{bayesupdate_case3}
p_k \le \frac{ f^{(1)}_k }{ f^{(0)}_k } \left[ 1 + \frac{ |\Delta f_k| }{ f^{(0)}_k } \right] p_{k|{k-1}} + \mathcal{O}\left( \frac{|\Delta f_k| }{ f^{(0)}_k } p_{k|{k-1}} \right).
\end{equation}
A general inequality, which holds with equality in case (1), is of the form $p_k \le c_k \ p_{k|k-1}$ with
\begin{center}
$b_k=\begin{cases}
1, & i \notin \textbf{a}_k \\
\frac{ f^{(1)}_k }{ f^{(0)}_k } \left[ 1 + \frac{ |\Delta f_k| }{ f^{(0)}_k } \right], & \Delta f_k > 0 \ \text{or} \ \Delta f_k < 0
\end{cases}$
\end{center}
with $c_k = b_k$ for cases (1) and (2) and $c_k = b_k + \mathcal{O}\left( \frac{|\Delta f_k| }{ f^{(0)}_k } p_{k|{k-1}} \right)$ for case (3).
\section{Introduction}
\label{intro}
Frustrated quantum spin models on the two-dimensional (2D) honeycomb
lattice have become the objects of intense study. Quantum
fluctuations on spin lattices are generally larger for lower
dimensionality $D$ and smaller values of the coordination number $z$
of the lattice, as well as for smaller values of the spin quantum
number $s$ of the lattice spins. Spin-1/2 models on the honeycomb
lattice (with $D=2$ and $z=3$) are thus expected to have large quantum
fluctuations, which, in turn, open up the theoretical possibility of
realizing exotic ground-state (GS) phases with novel magnetic
properties and/or novel ordering.
Additional impetus for studying 2D honeycomb models came from the
reported presence of a quantum spin-liquid (QSL) phase in both the
exactly soluble (albeit somewhat artificial) Kitaev model of spin-1/2
particles on a honeycomb lattice \cite{kitaev}, and the half-filled
Fermi-Hubbard (FH) model on a honeycomb lattice \cite{meng}. Thus,
Meng {\it et al.} \cite{meng} reported in a quantum Monte Carlo (QMC) calculation,
free of the usual fermion sign problems, the presence in the honeycomb
FH model of a QSL phase, at moderate values of the on-site Coulomb
repulsion strength ($U$), situated between the nonmagnetic metallic
insulator (or semi-metal) phase at low $U$ and the antiferromagnetic
(AFM) Mott insulator phase for large $U$. Since the $U \rightarrow
\infty$ limit corresponds to the pure Heisenberg antiferromagnet
(HAFM), i.e., with nearest-neighbour (NN) interactions (of strength
$J_{1} > 0$) only, the Mott insulator phase of the Hubbard model
corresponds to the N\'{e}el-ordered phase of the HAFM spin-lattice
model. Higher-order terms in the $t/U$ expansion of the FH model
(where {\it t} is the strength parameter of the NN hopping term) lead
to frustrating exchange couplings in the corresponding spin-lattice
model in which the HAFM with NN exchange couplings is the leading term
in the large-$U$ expansion. The simplest such frustrated model is the
$J_{1}$--$J_{2}$ model studied here, where the next-nearest-neighbour
(NNN) spin pairs have an additional exchange coupling of strength
$J_{2}>0$.
A later study of the FH model, using a Schwinger boson mean field
theory (SB-MFT) approach \cite{Wang:2010_honey}, provided some
corroborating evidence for a $\mathbb{Z}_{2}$ QSL state; and a
Schwinger fermion representation of the same model
\cite{Lu:2011_honey} gave some evidence for both a $\mathbb{Z}_{2}$
QSL phase and a chiral antiferromagnetic phase. However, later
numerically exact QMC calculations by Sorella {\it et al.}
\cite{Sorella:2012}, with much larger clusters than those used by Meng
{\it et al.} \cite{meng}, have cast considerable doubt on their
original finding of an intermediate QSL phase. We note in this
context that the presence of magnetically ordered phases is difficult
to detect by standard QMC techniques when the ordering is small, since
the usual quantity measured is the {\it square} of the order
parameter. As a consequence, in addition to the usual problem of
finding an appropriate finite-size extrapolation formula, very large
clusters are required with high precision. It is this effect that has
apparently caused the controversy between Refs.\ \cite{meng} and
\cite{Sorella:2012} regarding the existence or not of an intermediate
QSL phase in the FH model on the honeycomb lattice. In a very recent
paper \cite{Assaad:2013_honey_Hubbard} this controversy has
effectively been resolved by using a novel QMC technique that measures
the local magnetic order parameter $M$ directly, rather than its
square, $M^{2}$. Use of this technique leads
\cite{Assaad:2013_honey_Hubbard} to the rather firm conclusion that in
the FH model on the honeycomb lattice there is a single continuous
quantum phase transition between the nonmagnetic semi-metal and AFM
Mott insulator phases, with no intermediate QSL phase.
It is also pertinent to ask whether the $J_{1}$--$J_{2}$ model
actually does represent well the low-energy physics of the FH model on
the honeycomb lattice. While this is undoubtedly true for small
enough values of the Hubbard parameter $t/U$, it is interesting to
enquire more deeply and quantitatively about this question. In
particular, two recent studies \cite{Yang:2011,Yang:2012} have thrown
considerable light on the relationship between the physics of FH and
$J_{1}$--$J_{2}$ models on the honeycomb lattice. Thus, in the first
place, it has been shown \cite{Yang:2011} that the ratio $x \equiv
J_{2}/J_{1}$ actually stays quite small over a large range of values
of $t/U$. More specifically, it is always smaller than the value
$x_{c_{1}}$, which is the point at which the N\'{e}el order, present
at $x=0$, first vanishes as $x$ is increased, as we discuss below.
Secondly, in a very interesting paper \cite{Yang:2012} that studied in
detail the full low-energy spin model arising from the FH model on the
honeycomb lattice, it was shown that six-spin interactions on
hexagonal plaquettes are the most important leading correction to the
NN $J_{1}$ bonds, rather than the NNN $J_{2}$ bonds.
Despite all of the above caveats of the relevance of the
$J_{1}$--$J_{2}$ model on the honeycomb lattice to describe the
low-energy physics of the corresponding FH honeycomb model, it remains
of very great interest in its own right. This has possibly even been
heightened by the considerable uncertainty that has existed until very recently, as discussed above, as to
whether or not a QSL phase exists for the FH model. For this and
other reasons, this spin-lattice model and its generalizations [specifically to
include also next-next-nearest-neighbour (NNNN) bonds with strength
$J_{3}$] have been much studied
\cite{Mulder:2010_honey,DJJF:2011_honeycomb,Albuquerque:2011_honey,Oitmaa:2011_honey,Reuther:2011_honey,Mezzacapo:2012_honey,Bishop:2012_honeyJ1-J2,Li:2012_honey_full,Zhu:2012_honeyJ1-J2,Ganesh:2013_J1J2honey,Zhang:2013_J1J2honey,Yu:2013:honey}
recently.
\section{The model}
\label{model_section}
The Hamiltonian of the model studied here is given by
\begin{equation}
H = J_{1}\sum_{\langle i,j \rangle} \mathbf{s}_{i}\cdot\mathbf{s}_{j} + J_{2}\sum_{\langle\langle i,k \rangle\rangle}
\mathbf{s}_{i}\cdot\mathbf{s}_{k}\,,
\label{eq1}
\end{equation}
where index $i$ runs over all honeycomb lattice sites, and indices $j$ and $k$ run over all
NN and NNN sites to $i$, respectively, counting each bond once only. Each lattice site $i$ carries a particle with spin
$s=\frac{1}{2}$ and a spin operator ${\bf s}_{i}=(s_{i}^{x},s_{i}^{y},s_{i}^{z})$.
The lattice and exchange bonds are illustrated in figure~\ref{model}.
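For a finite patch of the lattice the two bond sets entering equation~(\ref{eq1}) can be enumerated programmatically; the following Python sketch (using \texttt{networkx}, with open boundaries as a finite-size stand-in for the infinite lattice, and with arbitrary patch dimensions) is purely illustrative.
\begin{verbatim}
import networkx as nx

def j1_j2_bonds(rows=4, cols=4):
    # NN (J1) bonds are the edges of the honeycomb graph; NNN (J2)
    # bonds are the distance-2 pairs (the honeycomb lattice has girth
    # 6, so every neighbour-of-a-neighbour other than i is a NNN)
    G = nx.hexagonal_lattice_graph(rows, cols)
    nn = {frozenset(e) for e in G.edges()}
    nnn = {frozenset((i, k)) for i in G for j in G[i] for k in G[j]
           if k != i and frozenset((i, k)) not in nn}
    return nn, nnn
\end{verbatim}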
\begin{figure}[!tb]
\begin{center}
\mbox{
\subfigure[]{\scalebox{0.35}{\includegraphics{fig1a.eps}}}
\quad
\subfigure[]{\scalebox{0.35}{\includegraphics{fig1b.eps}}}
}
\caption{(Colour online) The $J_{1}$--$J_{2}$ model on the honeycomb lattice (with $J_{1}=1$),
showing (a) the N\'{e}el and (b) N\'{e}el-II states.
The arrows represent spins located on lattice sites \textbullet.}
\label{model}
\end{center}
\end{figure}
We are interested in the case where both NN and NNN bonds are AFM
in nature, and henceforth, we put $J_{1}=1$ to set the energy scale and define
the frustration parameter $x \equiv J_{2}/J_{1}$.
The classical ($s \rightarrow \infty$) ground state of the model is
N\'{e}el-ordered for $0 \leq x < \frac{1}{6}$, whereas for all values
$x > \frac{1}{6}$ the spins are spirally ordered. In this latter
regime, the classical model has a one-parameter family of degenerate
incommensurate ground states where the spiral wave vector can orient
in any direction. At leading order, i.e., $O(1/s)$, spin-wave
fluctuations lift this accidental degeneracy in favour of particular
wave vectors \cite{Mulder:2010_honey}. For the extreme quantum case,
$s=1/2$, considered here, we expect quantum fluctuations to be strong
enough to destroy the spiral order over a wide range of values of $x$.
In a recent paper \cite{Bishop:2012_honeyJ1-J2} that used the coupled
cluster method (CCM), we have verified that expectation for all values
in the range $0 \leq x \leq 1$ considered here.
We showed too \cite{Bishop:2012_honeyJ1-J2} that
quantum fluctuations preserve the N\'{e}el order to higher values of
$x$ than in the classical model. Thus, we found that the GS phase
of the $s=1/2$ model is N\'{e}el-ordered for $x<x_{c_{1}} \approx
0.207(3)$. At $x=x_{c_{1}}$ there appears to be a continuous
deconfined phase transition to a GS paramagnetic phase exhibiting
plaquette valence-bond crystalline (PVBC) order. Furthermore, we
found the PVBC state to be the stable GS phase in the regime
$x_{c_{1}}<x<x_{c_{2}}$, where $x_{c_{2}} \approx 0.385(10)$.
Our aim now is to investigate further the transition at $x=x_{c_{2}}$
and the nature of the GS phase(s) for $x>x_{c_{2}}$.
\section{Coupled cluster method}
\label{CCM}
The CCM \cite{Bi:1991,Bi:1998,Fa:2004}, that we will employ here, has been very successfully applied
to many models in quantum magnetism, including models on the honeycomb
lattice
\cite{DJJF:2011_honeycomb,Bishop:2012_honeyJ1-J2,Li:2012_honey_full}
of interest here. It provides a well-structured means of studying
various candidate GS phases and their regimes of stability, for each
of which the description is systematically improvable in terms of
well-defined truncation hierarchies for the quantum multi-spin
correlations. We now briefly describe the method and refer the reader
to the literature (see,
e.g.,~\cite{Bi:1991,Bi:1998,Fa:2004,Ze:1998,Kr:2000,Bi:2000,Darradi:2005,Bi:2008_PRB_J1xxz_J2xxz,Darradi:2008_J1J2mod,Bishop:2010_UJack})
for further details.
The starting point for any CCM calculation is the selection of a
suitable normalized model (or reference) state $|\Phi\rangle$. For
spin systems it is often convenient to take a classical (uncorrelated)
GS wave function for $|\Phi\rangle$. For the present case we choose
the N\'{e}el state shown in figure~\ref{model}(a) for small values of
the frustration parameter $x$. For larger values of $x$ we could
choose one of the classical spiral GS phases to provide a CCM model
state, but as we have argued above these are likely to be very fragile
against quantum fluctuations. Instead, for larger values of $x$, we
choose here the so-called N\'{e}el-II phase shown in
figure~\ref{model}(b) (which has also been denoted as the
anti-N\'{e}el phase earlier \cite{Bishop:2012_honeyJ1-J2}), that
occurs in the classical ($s\rightarrow\infty$) model only at the
isolated and highly degenerate critical point $x=\frac{1}{2}$.
Whereas the N\'{e}el state has all 3 NN spins to a given spin
antiparallel to it, the N\'{e}el-II state also comprises AFM sawtooth
chains along one of the three equivalent honeycomb directions, but
with NN spins on adjacent chains now parallel to one another. The
N\'{e}el-II state is also sometimes known in the literature as the
collinear striped AFM phase for reasons that should be clear from
figure~\ref{model}(b), although we prefer to avoid this name here
since it is open to confusion with other AFM states on the honeycomb
lattice that have also been called striped states (see,
e.g.,~\cite{Li:2012_honey_full}). The N\'{e}el-II state is thus also
easily seen to break the lattice rotational symmetry.
It is convenient to perform a mathematical rotation of the local axes
of the spins such that all spins in the reference state align along
the negative $z$-axis. The Schr\"{o}dinger ground-state ket and bra
CCM equations are $H|\Psi\rangle = E|\Psi\rangle$ and
$\langle\tilde{\Psi}|H=E\langle\tilde{\Psi}|$ respectively. The CCM
employs the exponential parametrizations, $|\Psi\rangle={\rm
e}^{S}|\Phi\rangle$ and
$\langle\tilde{\Psi}|=\langle\Phi|\tilde{S}{\rm e}^{-S}$. The
correlation operator $S$ is expressed as $S = \sum_{I\neq0}{\cal
S}_{I}C^{+}_{I}$ and its counterpart is $\tilde{S} = 1 +
\sum_{I\neq0}\tilde{\cal S}_{I}C^{-}_{I}$ where, by definition,
$C^{-}_{I}|\Phi\rangle = 0 = \langle\Phi|C^{+}_{I}, \forall I \neq 0$.
Thus we have the normalization condition
$\langle\tilde{\Psi}|\Psi\rangle = \langle\Phi|\Phi\rangle \equiv 1$.
The multispin creation operators $C^{+}_{I} \equiv
(C^{-}_{I})^{\dagger}$, with $C^{+}_{0} \equiv 1$, are written as
\(C^{+}_{I}\equiv s^{+}_{j_{1}} s^{+}_{j_{2}} \cdots s^{+}_{j_{n}}\),
in terms of the single-site spin-raising operators $s^{+}_{k}\equiv
s^{x}_{k}+is^{y}_{k}$. The GS energy is $E=
\langle\Phi|\mbox{e}^{-S}H\mbox{e}^{S}|\Phi\rangle$; and the local
average onsite magnetization $M$ in the rotated spin coordinates is $M
\equiv -\frac{1}{N}
\langle\tilde{\Psi}|\sum_{j=1}^{N}s^{z}_{j}|\Psi\rangle$. The ket-
and bra-state correlation coefficients $({\cal S}_{I}, \tilde{{\cal
S}_{I}})$ are calculated by requiring the expectation value
$\bar{H}=\langle\tilde{\Psi}|H|\Psi\rangle$ to be a minimum with
respect to all parameters $({\cal S}_{I}, \tilde{{\cal S}_{I}})$, and
hence $\langle \Phi|C^{-}_{I}\mbox{e}^{-S}H\mbox{e}^{S}|\Phi\rangle =
0$ and $\langle\Phi|\tilde{S}(\mbox{e}^{-S}H\mbox{e}^{S} -
E)C^{+}_{I}|\Phi\rangle = 0\,, \;\forall I \neq 0$.
The CCM formalism is exact if all spin configurations are included in
the $S$ and $\tilde{S}$ operators. In practice, however, truncations
are needed. We employ here the well-studied localized (lattice-animal-based subsystem) LSUB$m$
scheme~\cite{Ze:1998,Kr:2000,Bi:2000,Darradi:2005,Bi:2008_PRB_J1xxz_J2xxz,Darradi:2008_J1J2mod,Bishop:2010_UJack},
in which all possible multi-spin-flip correlations over different
locales on the lattice defined by $m$ or fewer contiguous lattice
sites are retained. Such clusters are defined to be contiguous in
this sense if every site in the cluster is adjacent (as a nearest
neighbour) to at least one other site in the cluster. The interested reader is referred to the literature (see, e.g., \cite{Ze:1998}) for figures illustrating the LSUB$m$ scheme in detail. The numbers
$N_{f}$ of such fundamental configurations that are distinct under the
(space and point-group) symmetries of the lattice and the model state
increase rapidly with the LSUB$m$
truncation index $m$. Thus the highest LSUB$m$ level that we
can reach here, even with massive parallelization and the use of
supercomputing resources \cite{ccm}, is LSUB$12$, for which $N_{f} = 293309$ for
the N\'{e}el-II state.
Since, in any truncation, CCM parametrizations automatically satisfy
the Goldstone linked cluster theorem, we may work from the outset in
the thermodynamic limit, $N \rightarrow \infty$. Nevertheless, the
raw LSUB$m$ data still need to be extrapolated to the exact $m
\rightarrow \infty$ limit. Thus, for the GS energy per spin, $E/N$,
we use (see, e.g., \cite{Bi:2000,Darradi:2005,Bi:2008_PRB_J1xxz_J2xxz,Darradi:2008_J1J2mod})
\begin{equation}
E(m)/N = a_{0}+a_{1}m^{-2}+a_{2}m^{-4}\,; \label{E_extrapo}
\end{equation}
while for the magnetic order parameter, $M$, defined above,
we use either the scheme
\begin{equation}
M(m) = b_{0}+b_{1}m^{-1}+b_{2}m^{-2}\,, \label{M_extrapo_standard}
\end{equation}
for systems showing no or only slight frustration
(see, e.g., \cite{Kr:2000,Darradi:2005}), or the scheme
\begin{equation}
M(m) = c_{0}+c_{1}m^{-1/2}+c_{2}m^{-3/2}\,, \label{M_extrapo_frustrated}
\end{equation}
for more strongly frustrated systems or ones showing a GS
order-disorder transition (see, e.g.,
\cite{Bi:2008_PRB_J1xxz_J2xxz,Darradi:2008_J1J2mod}).
In principle one may always test for the correct leading exponent in
the LSUB$m$ extrapolation scheme for any physical quantity $Z$ by
first fitting to the formula $Z(m)=d_{0}+d_{1}m^{-\nu}$. For the GS
energy, $E/N$, we generally find $\nu \approx 2$ for a wide
variety of spin systems, both non-frustrated and frustrated. For the
magnetic order parameter, $M$, on the other hand we generally find $\nu
\approx 1$ for unfrustrated systems (or for ones with very small
frustration), and $\nu \approx 0.5$ for more strongly frustrated
systems. We discuss this more fully in section \ref{results} in the context of the
present model. These general results for the leading exponents then provide the basis for equations (\ref{E_extrapo})-(\ref{M_extrapo_frustrated}).
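The fitting itself is elementary; a sketch in Python, assuming hypothetical arrays of raw LSUB$m$ data, is given below (the exponent test uses a nonlinear fit, while equation (\ref{E_extrapo}) is linear in its coefficients).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

m = np.array([6.0, 8.0, 10.0, 12.0])   # LSUBm truncation indices used

def leading_exponent(Z):
    # fit Z(m) = d0 + d1 m^{-nu} to test the leading exponent nu
    f = lambda mm, d0, d1, nu: d0 + d1 * mm ** (-nu)
    (d0, d1, nu), _ = curve_fit(f, m, Z, p0=(Z[-1], 1.0, 1.0))
    return nu

def extrapolate_energy(E):
    # LSUBinfty limit a0 of E(m)/N = a0 + a1 m^{-2} + a2 m^{-4}
    X = np.vstack([np.ones_like(m), m ** -2.0, m ** -4.0]).T
    a, *_ = np.linalg.lstsq(X, E, rcond=None)
    return a[0]
\end{verbatim}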
Finally, we note that since the hexagon is an important structural
element of the honeycomb lattice we never use LSUB$m$ data with $m <
6$ to perform the extrapolations. Furthermore, in any CCM calculation using the LSUB$m$ scheme, we always need to
check whether the lowest-order potentially usable approximation, namely
LSUB6 here, is {\it actually} usable in the sense of fitting the
extrapolation scheme to be used. Although it generally does do so,
there are also (relatively rare) occasions when it does not,
presumably due either to the result being too far removed from the
asymptotic $m \rightarrow \infty$ limit or to the fact that for the
particular CCM model state used these lowest-order approximants omit
one or more of the most important multispin correlations.
\section{Results and discussion}
\label{results}
In figures~\ref{E} and \ref{M} we show our results for the GS energy per spin, $E/N$, and magnetic order
parameter, $M$, using both the N\'{e}el and
N\'{e}el-II states as CCM model states.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6cm,angle=270]{fig2.eps}
\caption{(Colour online) CCM LSUB$m$ results for the GS energy per
spin, $E/N$, as a function of the frustration parameter, $x \equiv
J_{2}/J_{1}$, of the spin-$1/2$ $J_{1}$--$J_{2}$ honeycomb model
(with $J_{1}>0$), using the N\'{e}el (left curves) and N\'{e}el-II
(right curves) states as model states, with $m=\{6,8,10,12\}$. The extrapolated curves
LSUB$\infty$(1) and LSUB$\infty$(2) use this data set and the restricted set
$m=\{8,10,12\}$ respectively, with
equation~(\ref{E_extrapo}).}
\label{E}
\end{center}
\end{figure}
Figure~\ref{E} shows clearly that the CCM LSUB$m$ results for the GS
energy extrapolate extremely rapidly with increasing order $m$ of
approximation to the exact LSUB$\infty$ limit. It also shows clearly
how the LSUB$m$ results based on both the N\'{e}el and N\'{e}el-II model states naturally terminate at some critical values of the frustration
parameter $x$, which themselves depend on the order parameter $m$ of
the particular LSUB$m$ approximation, beyond which no real CCM
solution can be found. Such termination points of CCM solutions are
well studied (and see, e.g., Refs.\ \cite{Bi:1998,Fa:2004}) and well
understood. They are simply reflections of the quantum phase
transitions in the system and, as such, may themselves be used to
estimate the positions of the corresponding quantum critical points
\cite{Bi:1998}. We do not, however, examine the extrapolation
properties of the termination points further here, since we have more
accurate criteria available to us to determine the quantum critical
points, as we discuss more fully below. Nevertheless, figure \ref{E}
shows clearly that the CCM LSUB$m$ results based on both the N\'{e}el
and N\'{e}el-II model states for finite values of $m$ extend beyond the
corresponding LSUB$\infty$ transition points into unphysical regions
where such states in the real (LSUB$\infty$) case have ceased to
exist. Such unphysical regimes diminish in size to zero as $m
\rightarrow \infty$. Figure \ref{E} shows that there are no energy
crossings between the N\'{e}el and N\'{e}el-II phases at any LSUB$m$
level of approximation, and that there is a clear range of values of
the frustration parameter, $x_{c_{1}} < x < x_{c_{2}}$, in which
neither the N\'{e}el nor the N\'{e}el-II states provide a physical GS
phase. The simple unextrapolated LSUB12 estimates for the two
termination points, namely $x_{c_{1}} \lesssim 0.23$ and $x_{c_{2}}
\gtrsim 0.35$ already provide remarkably good estimates for the
corresponding quantum critical points, as we shall see below.
We note from figure \ref{E} that the LSUB$m$ estimates for the GS energy
approach the asymptotic LSUB$\infty$ limit very rapidly, and hence the
extrapolations are rather insensitive to both the fitting scheme and data set
used. Nevertheless, a fit of the form $E(m)/N = e_{0} +
e_{1}m^{-\nu}$ for the N\'{e}el-II LSUB$m$ results gives the usual
expected result $\nu \approx 2$ for the data set $m=\{8,10,12\}$,
whereas the inclusion of the LSUB6 result leads to a spurious value
$\nu \approx 1$. By contrast, both data sets $m=\{6,8,10,12\}$ and
$m=\{8,10,12\}$ yield a value $\nu \approx 2$ for the corresponding
LSUB$m$ N\'{e}el results. The anomalous nature of the LSUB6
N\'{e}el-II approximation is discussed further below with regard to
the magnetic order parameter $M$, for which its behaviour is more
critical and more pronounced.
We now turn our attention to the corresponding CCM LSUB$m$ results for
the magnetic order parameter, as shown in figure~\ref{M}, using both
the N\'{e}el and N\'{e}el-II states as the CCM model states. For the
present model we find that an extrapolation formula for the magnetic
order parameter of the form $M(m)=d_{0}+d_{1}m^{-\nu}$ fits the data
well on the N\'{e}el side with a leading exponent $\nu \approx 1$ for
values of the frustration parameter $x$ equal to or very close to
zero, whereas the value $\nu \approx 0.5$ accurately fits the data
over most of the range $x \gtrsim 0.1$. Accordingly, in
figure~\ref{M} on the N\'{e}el side we show extrapolations using both
equations~(\ref{M_extrapo_standard}) and (\ref{M_extrapo_frustrated}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=6cm,angle=270]{fig3.eps}
\caption{(Colour online) CCM LSUB$m$ results for the GS order parameter, $M$, as a function of the frustration parameter, $x \equiv J_{2}/J_{1}$, of the
spin-$1/2$ $J_{1}$--$J_{2}$ honeycomb model ($J_{1}=1$), using the
N\'{e}el (left curves) and N\'{e}el-II (right curves) states as
model states, with $m=\{6,8,10,12\}$. The extrapolated curves
LSUB$\infty$(1) and LSUB$\infty$(3) use this data set with
equations~(\ref{M_extrapo_frustrated}) and (\ref{M_extrapo_standard})
respectively, while the LSUB$\infty$(2) curve uses
equation~(\ref{M_extrapo_frustrated}) with the restricted set
$m=\{8,10,12\}$.}
\label{M}
\end{center}
\end{figure}
Equation (\ref{M_extrapo_standard}),
which is appropriate when $J_{2}=0$, yields the value $M \approx
0.271(2)$ for the unfrustrated HAFM on the honeycomb lattice (i.e.,
with NN interactions only), in excellent agreement with the best
available QMC estimate \cite{Castro:2006_HC}, $M=0.2677(6)$. Our own
error estimates are based on sensitivity checks using different
LSUB$m$ data sets. Similarly we see from figure~\ref{M} that all
extrapolations give essentially the same estimate $x_{c_{1}} \approx
0.207(3)$ for the point where N\'{e}el order vanishes ($M \rightarrow
0$). We showed previously \cite{Bishop:2012_honeyJ1-J2} that the phase
transition at $x=x_{c_{1}}$ is a continuous deconfined one between
states with N\'{e}el and PVBC order.
Figure \ref{M} also shows corresponding results for $M$ for a possible
phase with N\'{e}el-II ordering. In this case we find (even by simple
inspection by eye) that the LSUB6 results do not fit with a
leading-order extrapolation scheme of the form
$M(m)=d_{0}+d_{1}m^{-\nu}$ with {\it any} value of $\nu$. By
contrast, the LSUB$m$ results with $m>6$ are accurately fitted by this
form with a leading-order exponent $\nu \approx 0.5$ over the whole
range of values of the frustration parameter $x$ shown. Precisely why
the LSUB6 result should be anomalous in this case is unclear, but as
discussed in section \ref{CCM} we must now discard it for
extrapolation purposes. For these reasons we show in figure~\ref{M}
only extrapolated results using equation (\ref{M_extrapo_frustrated})
for the N\'{e}el-II model state, based on $m=\{8,10,12\}$. The
results clearly show that N\'{e}el-II ordering is present, albeit with
a rather small value of the order parameter, $M \lesssim 0.1$, for
$x>x_{c_{3}}$ where $x_{c_{3}} \approx 0.65(5)$, but where the error
estimate is now more uncertain.
In our previous work \cite{Bishop:2012_honeyJ1-J2} we showed that the
N\'{e}el-II state becomes susceptible to PVBC ordering for
$x<x_{c_{2}} \approx 0.385(10)$, but we now observe that the
N\'{e}el-II state is itself only stable as a magnetically ordered
state for $x>x_{c_{3}}$. We are thus led to enquire about the
possible GS phase(s) of the system in the range
$x_{c_{2}}<x<x_{c_{3}}$. In view of the persistence of our CCM
LSUB$m$ solutions based on the N\'{e}el-II model state, with finite
values of $m$, well into the region $x<x_{c_{3}}$ before they
terminate (as is clearly seen from figure~\ref{M}), we expect that the
actual GS phase in this intermediate regime might share similarities
with the N\'{e}el-II state. For example, just as the N\'{e}el-II state
breaks the lattice rotational symmetry, so does another valence-bond
solid state, namely the staggered-dimer valence-bond crystalline
(SDVBC) (or lattice nematic) state. This is formed from the
N\'{e}el-II state by replacing all of the parallel NN spin pairs by spin-zero dimers (and see
figure~\ref{X}).
\begin{figure}[t]
\begin{center}
\mbox{
\subfigure{\includegraphics[width=6cm,height=6cm,angle=270]{fig4a.eps}}
\raisebox{-3.5cm}{
\subfigure{\includegraphics[width=2.2cm,height=2.2cm]{fig4b.eps}}
}
}
\caption{(Colour online) Left: CCM LSUB$m$ results for the inverse
staggered dimer susceptibility, $1/\chi_{d}$, as a function of the frustration parameter, $x \equiv J_{2}/J_{1}$, of the
spin-1/2 $J_{1}$--$J_{2}$ honeycomb model ($J_{1}=1$),
using the N\'{e}el-II state as model state, with $m=\{6,8,10,12\}$.
The extrapolated curves LSUB$\infty$(1) and LSUB$\infty$(2) are
derived from fitting the perturbed energies (see text) as
$e(\delta)=e_{0}(\delta)+e_{1}(\delta)m^{-\nu}$, and use the data sets
$m=\{6,8,10,12\}$ and $m=\{8,10,12\}$ respectively. Right: The field $F =
\delta\, \hat{O}_{d}$ for the staggered dimer susceptibility,
$\chi_{d}$. Thick (red) and thin (black) lines correspond
respectively to strengthened and unaltered NN exchange couplings,
where $\hat{O}_{d} = \sum_{\langle i,j \rangle} a_{ij}
\mathbf{s}_{i}\cdot\mathbf{s}_{j}$, and the sum runs over all NN
bonds, with $a_{ij}=+1$ and 0 for thick (red) lines and thin
(black) lines respectively.}
\label{X}
\end{center}
\end{figure}
In order to investigate the possibility of an SDVBC phase we first
consider the
response of the system to a field operator $F$ (and see, e.g., \cite{Darradi:2008_J1J2mod}). Thus, a field term $F=\delta\;
\hat{O}_{d}$ is added to the Hamiltonian of equation~(\ref{eq1}), where
$\hat{O}_{d}$ is an operator corresponding to the possible SDVBC
order, illustrated in figure~\ref{X} and defined in its caption. The
energy per site, $E(\delta)/N \equiv e(\delta)$, is then calculated in
the CCM for the perturbed Hamiltonian $H + F$, using the N\'{e}el-II
model state. We define the corresponding susceptibility as $\chi_{d}
\equiv - \left. (\partial^2{e(\delta)})/(\partial {\delta}^2)
\right|_{\delta=0}$. Clearly the GS phase becomes unstable against
SDVBC order when $\chi_d^{-1}$ becomes zero. We now use the LSUB$m$ extrapolation scheme $e(\delta) =
e_{0}(\delta)+e_{1}(\delta)m^{-\nu}$, with the exponent $\nu$ also a
fitting parameter, rather than our standard energy extrapolation
scheme of equation~(\ref{E_extrapo}), in order to calculate the
extrapolated values of $\chi^{-1}_{d}$ shown in figure~\ref{X}. For the
same data set $m=\{8,10,12\}$ used to calculate $M$ for the
N\'{e}el-II state above, the fitted value of $\nu$ is close to 2 over
most of the range of the $J_{2}$ values shown, except near the
termination point of this phase, where it falls sharply. By contrast,
for the set $m=\{6,8,10,12\}$ also shown in figure~\ref{X}, $\nu$ is closer to 1 over most of the range. This again
reinforces the anomalous nature of the LSUB6 results.
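Numerically, once the extrapolated perturbed energies $e(\delta)$ are available, $\chi_{d}$ follows from a central finite difference; a minimal sketch (with \texttt{e\_of\_delta} standing in for the full CCM evaluation, and an arbitrary step size) is:
\begin{verbatim}
def dimer_susceptibility(e_of_delta, h=1.0e-3):
    # chi_d = -(d^2 e / d delta^2) at delta = 0, estimated by a
    # central second difference from three perturbed energies per site
    return -(e_of_delta(h) - 2.0 * e_of_delta(0.0)
             + e_of_delta(-h)) / h ** 2
\end{verbatim}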
What we see from figure~\ref{X} is that the extrapolated value of
$\chi^{-1}_{d}$ is close to zero over a range of values of $x$ that
extends from $x_{c_{2}}$ at the lower end up to a value of about 0.6, which
is completely compatible with the value $x_{c_{3}}$ obtained from the
order parameter $M$ of the N\'{e}el-II state. Thus, by combining our
results, we conclude that in the region $x_{c_{2}}<x<x_{c_{3}}$ the GS
phase has SDVBC order, while for $x>x_{c_{3}}$ the GS phase has
N\'{e}el-II order, although this latter ordering is weak and quite
fragile against the still strongly competing SDVBC order. The shape
of the CCM curves for $\chi^{-1}_{d}$ in figure~\ref{X} is indicative
of a continuous (and hence deconfined) quantum critical point at
$x_{c_{3}}$, whereas the corresponding curves for $\chi^{-1}_{p}$, the
inverse plaquette susceptibility, found in our earlier work
\cite{Bishop:2012_honeyJ1-J2} were much more indicative of a direct
first-order transition at $x_{c_{2}}$. We see no signals at all of
the spiral ordering that is present classically for $x>\frac{1}{6}$
for any value of $x$ in the range $0<x<1$ examined.
\section{Summary}
\label{summary}
In conclusion, over the range $0<x<1$ we find that the spin-1/2
$J_{1}$--$J_{2}$ HAFM on the honeycomb lattice has four phases with,
respectively, N\'{e}el, PVBC, SDVBC, and N\'{e}el-II ordering. Our CCM estimate for the phase diagram is shown in figure~\ref{phase}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{fig5.eps}
\caption{(Colour online) Phase diagram of the spin-$1/2$
$J_{1}$--$J_{2}$ model on the honeycomb lattice (with $J_{1}>0$ and
$x \equiv J_{2}/J_{1}>0$), as obtained by a CCM analysis. The
quantum critical points are at $x_{c_{1}} \approx 0.207(3)$, $x_{c_{2}} \approx 0.385(10)$, and $x_{c_{3}} \approx 0.65(5)$, as
shown in the diagram.}
\label{phase}
\end{center}
\end{figure}
We note that all of our most accurate estimates for the three quantum
critical points are based on evaluations of the positions at which the
relevant magnetic order parameters and/or the inverse susceptibilities
to the relevant forms of valence-bond solid order vanish. Since there
are no energy crossings between the N\'{e}el and N\'{e}el-II states
directly used as CCM model states in our CCM calculations, the GS
energy data only give direct corroborating evidence for the
transitions at $x_{c_{1}}$ and $x_{c_{2}}$ from the corresponding
termination points of the CCM LSUB$m$ solutions based on the N\'{e}el
and N\'{e}el-II model states respectively, as discussed in section
\ref{results} and illustrated in figure \ref{E}.
Our first calculated critical point, $x_{c_{1}} \approx 0.207(3)$, at
which N\'{e}el order melts, agrees well with other recent results,
including $x_{c_{1}} \approx 0.195(25)$ from a large-scale exact
diagonalization (ED) study \cite{Albuquerque:2011_honey}, $x_{c_{1}}
\approx 0.26$ \cite{Zhu:2012_honeyJ1-J2} and $x_{c_{1}} \approx 0.22$
\cite{Ganesh:2013_J1J2honey} from two separate density-matrix
renormalization group (DMRG) studies, and $x_{c_{1}} \approx 0.2075$
\cite{Zhang:2013_J1J2honey} and 0.21 \cite{Yu:2013:honey} from two
recent SB-MFT studies. Both DMRG studies
\cite{Zhu:2012_honeyJ1-J2,Ganesh:2013_J1J2honey} and the ED study
\cite{Albuquerque:2011_honey} concur with us that the transition at
$x_{c_{1}}$ is probably a continuous deconfined one to a PVBC state,
whereas both SB-MFT studies \cite{Zhang:2013_J1J2honey,Yu:2013:honey}
indicate a transition to a QSL state.
Our second calculated critical point, $x_{c_{2}} \approx 0.385(10)$,
at which the PVBC order melts, is similarly in good agreement with the
result $x_{c_{2}} \approx 0.375(25)$ from the ED study
\cite{Albuquerque:2011_honey}, and the results $x_{c_{2}} \approx
0.36$ \cite{Zhu:2012_honeyJ1-J2} and $x_{c_{2}} \approx 0.35$
\cite{Ganesh:2013_J1J2honey} from the two DMRG studies. We find that
the transition at $x_{c_{2}}$ is probably a direct first-order one to
a state with SDVBC order. Both DMRG studies
\cite{Zhu:2012_honeyJ1-J2,Ganesh:2013_J1J2honey} concur that the
transition at $x_{c_{2}}$ is to a state with SDVBC order, although
Ganesh {\it et al}. \cite{Ganesh:2013_J1J2honey} find evidence for
the surprising scenario that the transition at $x_{c_{2}}$ is also of
the continuous deconfined type, as at $x_{c_{1}}$. The two SB-MFT
studies \cite{Zhang:2013_J1J2honey,Yu:2013:honey} find QSL states out
to values $x \approx 0.3732$ \cite{Zhang:2013_J1J2honey} and $x
\approx 0.43$ \cite{Yu:2013:honey}, respectively, beyond the point
$x_{c_{1}}$ at which N\'{e}el order melts. They disagree, however,
between themselves as to what is the nature of the GS phase for larger
values of $x$, beyond the QSL phase. Thus, Zhang and Lamas
\cite{Zhang:2013_J1J2honey} find the GS phase to be spirally ordered
(just as in the classical, $s \rightarrow \infty$, version of the
model) for $0.398 \lesssim x\; (\lesssim 0.5)$, and to have SDVBC order in
the very narrow region $0.3732 \lesssim x \lesssim 0.398$; whereas Yu
{\it et al}. \cite{Yu:2013:honey} find that for $x \gtrsim 0.43$ the GS
phase has N\'{e}el-II order. The ED study
\cite{Albuquerque:2011_honey}, by contrast, finds a first-order
transition at $x_{c_{2}}$ to a state that cannot be distinguished
between having either SDVBC or N\'{e}el-II order.
Finally, we find evidence for a third critical point at $x_{c_{3}}
\approx 0.65(5)$ at which a continuous (and hence again deconfined)
transition occurs to a state with weak N\'{e}el-II magnetic order. We
note that such a transition is also compatible with the DMRG result of
Ganesh {\it et al}. \cite{Ganesh:2013_J1J2honey}, which could not rule
out a melting of the SDVBC order for values $x \gtrsim 0.7$. It is
interesting to speculate whether the weak N\'{e}el-II magnetic order
observed by us for $x > x_{c_{3}}$ might be interpreted as, or arise
from, a sort of ``dressed'' SDVBC state in which spin-triplets now
contribute on the spin-singlet dimer bonds. It is too far beyond the
scope of the present analysis, however, to address such delicate
questions authoritatively.
As a last remark, it is interesting to note that in a very recent
study using a projector QMC technique \cite{Damle:2013_honey} a very
similar direct continuous quantum phase transition to what we observe
here for the $J_{1}$--$J_{2}$ model at $x_{c_{1}}$, between states
with N\'{e}el and PVBC order, has also been observed in a related
spin-1/2 $J_{1}$-$Q$ model on the honeycomb lattice, of precisely the
type suggested by Yang {\it et al}. \cite{Yang:2012} to be more
relevant to the low-energy physics of the FH model on the honeycomb
lattice, as discussed previously in section~\ref{intro}. This
$J_{1}$-$Q$ model also contains NN AFM exchange bonds of strength
$J_{1}$, but with our competing NNN exchange bonds of strength $J_{2}$
replaced by a six-spin interaction term of strength $Q$ on hexagonal
plaquettes, which by itself favours the formation of a state with PVBC
order. It would clearly also be of interest to apply a comparable CCM
study to the $J_{1}$-$Q$ model to that used here for the
$J_{1}$--$J_{2}$ model, in order to investigate its GS phase diagram
similarly.
\section*{ACKNOWLEDGMENT}
We thank the University of Minnesota Supercomputing Institute for the
grant of supercomputing facilities for this research.
\section*{References}
\section{Introduction}
Let $f(x)$ be a function defined on $(0,+\infty)$ to be approximated. We suppose that there exists a fixed positive integer $\nu$ and a constant $c\neq 0$ such that
\begin{align}
\lim_{x\rightarrow +\infty}x^{\nu}f(x)=c.\label{Condition-1}
\end{align}
In this case, we say that the function $f(x)$ is of order $x^{-\nu}$ when $x$ tends to infinity, and denote
\begin{align}
\mathrm{R}(f(x)):=\nu,
\end{align}
where $\nu$ is the exponent of $x^{\nu}$. For convenience, $\mathrm{R}(0)$ is stipulated to be infinity. Hence, $\mathrm{R}(f(x))$ characterizes the rate of convergence for $f(x)$ as $x$ tends to infinity. From~\eqref{Condition-1},
there exists a large positive number $X_0$ such that $f(x)/c>0$
when $x>X_0$.
\bigskip
In analysis, approximation theory, applied mathematics, etc., we often need to investigate the rational function approximation problem. Let $\frac{P_l(x)}{Q_m(x)}$ be an approximation to $f(x)$ as $x$ tends to infinity, where $P_l(x)$ and $Q_m(x)$ are polynomials in $x$. Quite similarly to the rational approximation problem for an irrational number, in order to find a better approximation to $f(x)$, we have to increase the degrees of both $P_l(x)$ and $Q_m(x)$. The main interest in this paper is to find the fastest possible continued fraction approximation for $f(x)$ as $x$ tends to infinity, or to guess its approximation structure.
\bigskip
The paper is organized as follows. In Sec.~2, we mainly introduce two definitions to classify continued fractions. In Sec.~3, we will prepare two preliminary lemmas for later use. In Sec.~4, we first develop further the previous
\emph{multiple-correction method}. Secondly, we introduce a transformation, named the~\emph{Mortici-transformation}, to recast a class of continued fraction approximation problems. In addition, we also give its \emph{Mathematica} program for the reader's convenience. Thirdly, similarly to Taylor's formula, we introduce two definitions of the \emph{formal Type-I and Type-II continued fraction approximation of order $k$} for a function, and the formal continued fraction expansion, respectively. This section constitutes the main part of this paper. To illustrate the method formulated in Sec.~4, in Sec.~5 we use the volume of the unit ball as an example to present some new inequalities. In Sec.~6, we test the well-known generalized Lord Brouncker's continued fraction formula, and show that it is the fastest possible. We also give some applications of the continued fraction formula involving the volume of the unit ball. In Sec.~7, we use a continued fraction formula of Ramanujan to illustrate how to obtain the fastest possible form of the continued fraction expression. In Sec.~8, we explain how to guess the fastest possible continued fraction expansions, and give three conjectures associated with special ratios of gamma functions. In the last section, we analyze the perspectives for research in this direction.
\section{Notation and definition}
Throughout the paper, we use the notation $\lfloor x\rfloor$ to denote the largest integer not exceeding $x$. The notation
$P_k(x)$ (or $Q_k(x)$) means a polynomial of degree $k$
in $x$. We will use $\Phi(k;x)$ to denote a polynomial of degree $k$ in $x$ with leading coefficient equal to one, which may be different at each occurrence, while the notation $\Psi(k;x)$
means a polynomial of degree $k$ in $x$ with all coefficients non-negative, which may also be different at each occurrence.
Let $(a_n)_{n\ge 1}$ and $(b_n)_{n\ge 0}$ be two sequences of real numbers with $a_n\neq 0$ for all $n\in\mathbb{N}$. The generalized continued fraction
\begin{align}
\tau=b_0+\frac{a_1}{b_1+\frac{a_2}{b_2+\ddots}}=b_0+
\begin{array}{ccccc}
a_1 && a_2 & \\
\cline{1-1}\cline{3-3}\cline{5-5}
b_1 & + & b_2 & + \cdots
\end{array}
=b_0+\K_{n=1}^{\infty}
\left(\frac{a_n}{b_n}\right)
\end{align}
is defined as the limit of the $n$th approximant
\begin{align}
\frac{A_n}{B_n}=b_0+\K_{k=1}^{n}\left(\frac{a_k}{b_k}\right)
\end{align}
as $n$ tends to infinity. The canonical numerators $A_n$ and denominators $B_n$ of the approximants satisfy the recurrence relations~(see [8, p. 105])
\begin{align}
A_{n+2}=b_{n+2}A_{n+1}+a_{n+2}A_n,\quad B_{n+2}=b_{n+2}B_{n+1}+a_{n+2}B_n
\end{align}
with the initial values $A_0=b_0, B_0=1, A_1=b_0b_1+a_1$ and
$B_1=b_1$.
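These recurrences give an immediate way to evaluate the approximants numerically; a short Python sketch using exact rational arithmetic (the function name and list conventions are ours) is:
\begin{verbatim}
from fractions import Fraction

def convergent(b, a, n):
    # n-th approximant A_n/B_n of b0 + K_{k>=1}(a_k/b_k), via the
    # three-term recurrences above; b = [b0, b1, ...], a = [a1, a2, ...]
    A_prev, A = Fraction(1), Fraction(b[0])   # A_{-1}, A_0
    B_prev, B = Fraction(0), Fraction(1)      # B_{-1}, B_0
    for k in range(1, n + 1):
        A_prev, A = A, b[k] * A + a[k - 1] * A_prev
        B_prev, B = B, b[k] * B + a[k - 1] * B_prev
    return A / B
\end{verbatim}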
\bigskip
To describe our method clearly, we will introduce two definitions as follows.
\bigskip
\noindent {\bf Definition 1.} Let $c_0\neq 0$, and $x$ be a free variable. Let $(a_n)_{n=0}^{\infty}$, $(b_n)_{n=0}^{\infty}$ and $(c_n)_{n=0}^{\infty}$ be three real sequences. The formal continued fraction
\begin{align}
\frac{c_0}{\Phi(\nu;x)+\K_{n=0}^{\infty}\left(\frac{a_n}{x+b_n}\right)}
\end{align}
is said to be a \emph{Type-I} continued fraction. While,
\begin{align}
\frac{c_0}{\Phi(\nu;x)+\K_{n=0}^{\infty}\left(\frac{a_n}{x^2+b_n x+c_n}\right)}
\end{align}
is said to be a \emph{Type-II} continued fraction.
\bigskip
\begin{rem} The \emph{Type-I} and \emph{Type-II} are two kinds of fundamental structures we often meet. Certainly, we may define other-type continued fraction. Because of their complexity, in this paper we will not discuss the involved problems.
\end{rem}
\bigskip
\noindent {\bf Definition 2.} If the sequence $(b_n)_{n=0}^{\infty}$ is a constant sequence $(b)_{n=0}^{\infty}$ in the \emph{Type-I}~( or \emph{Type-II})
continued fraction, we call the number $\omega=b$~(or $\omega=\frac b2$) the $\mathrm{MC}$-point for the corresponding continued fraction.
We use $\hat{x}=x+\omega$ to denote the $\mathrm{MC}$-shift of $x$.
\bigskip
If there exists the $\mathrm{MC}$-point, we have the following \emph{simplified form}
\begin{align}
\frac{c_0}{\Phi_1(\nu;\hat{x})+\K_{n=0}^{\infty}\left(\frac{a_n}
{\hat{x}}\right)}\quad
\mbox{or}\quad
\frac{c_0}{\Phi_1(\nu;\hat{x})+\K_{n=0}^{\infty}\left(\frac{a_n}
{\hat{x}^2+d_n}\right)},\label{canonical-form}
\end{align}
where $d_n=c_n-\frac{b^2}{4}.$
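Indeed, completing the square gives $x^2+bx+c_n=\left(x+\frac b2\right)^2+c_n-\frac{b^2}{4}=\hat{x}^2+d_n$~(and $x+b=\hat{x}$ in the \emph{Type-I} case), which is how the $\mathrm{MC}$-shift absorbs the constant partial denominators.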
\section{Two preliminary lemmas }
Mortici~\cite{Mor1} established a very useful tool for measuring the rate of convergence, which claims that a sequence $(x_n)_{n\ge 1}$ converging to zero is the fastest possible when the difference $(x_n-x_{n+1})_{n\ge 1}$ is the fastest possible. Since then, Mortici's lemma has been effectively applied in many papers such as~\cite{CXY,Cao1,CY,CW,Mor2,Mor3,Mor4,MCL}.
The following lemma is a generalization of Mortici's lemma. For details, readers may refer to~\cite{Cao2}.
\begin{lem} If $\lim_{x\rightarrow+\infty}f(x)=0$, and there exists the limit
\begin{align}
\lim_{x\rightarrow+\infty}x^\lambda\left(f(x)-f(x+1)\right)=l\in
\mathbb{R},
\end{align}
with $\lambda>1$, then
\begin{align}
\lim_{x\rightarrow+\infty}x^{\lambda-1}f(x)=\frac{l}{\lambda-1}.
\end{align}
\end{lem}
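For a quick illustration of Lemma 1~(a toy example of our own), take $f(x)=\frac{1}{x^2}$. Then
\begin{align*}
f(x)-f(x+1)=\frac{2x+1}{x^2(x+1)^2}\sim \frac{2}{x^3},\quad x\rightarrow+\infty,
\end{align*}
so $\lambda=3$ and $l=2$, and the lemma recovers $\lim_{x\rightarrow+\infty}x^{2}f(x)=\frac{2}{3-1}=1$, as expected.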
\bigskip
In this paper, we will use the following simple inequality,
which is a consequence of Hermite-Hadamard inequality.
\begin{lem} Let $f$ be twice differentiable
with $f''$ continuous. If $f''(x)>0$, then
\begin{align}
\int_{a}^{a+1}f(x) dx > f(a+1/2).\label{LEM3}
\end{align}
\end{lem}
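For instance~(an illustration of how Lemma 2 will be used in Sec.~5), applying~\eqref{LEM3} to the convex function $f(t)=t^{-j}$ with $j\ge 2$ and $a=x+m-\frac 12$ gives
\begin{align*}
\frac{1}{(x+m)^j}<\int_{x+m-\frac 12}^{x+m+\frac 12}\frac{dt}{t^j},
\end{align*}
which, after summing over $m\ge 0$, bounds the tail $\sum_{m=0}^{\infty}\frac{1}{(x+m)^j}$ from above.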
\section{The multiple-correction, the Mortici-transformation and the formal continued fraction expansion}
\subsection{The multiple-correction method}
In this subsection, we will develop further the \emph{multiple-correction method} formulated previously in~\cite{Cao1,Cao2}. For some applications of this method, readers may refer to~\cite{CXY,Cao2,CY,CW}. In fact, the
\emph{multiple-correction method} is a recursive algorithm, and one of its advantages is that by repeating the correction-process we can always accelerate the convergence. More precisely, every non-zero coefficient plays an important role in accelerating the convergence. The
\emph{multiple-correction method} consists of the following steps.
\bigskip
\noindent {\bf(Step 1) The initial-correction.} The initial-correction is
vital. Determine the initial-correction $\Phi_0(\nu;x)$ such that
\begin{align}
\mathrm{R}\left(f(x)-\frac{c}{\Phi_0(\nu;x)}\right)=
\max_{\Phi(\nu;x)}\mathrm{R}
\left(f(x)-\frac{c}{\Phi(\nu;x)}\right).
\end{align}
\bigskip
\noindent {\bf(Step 2) The first-correction.} If there exists a real number $\kappa_0$ such that
\begin{align}
\mathrm{R}\left(f(x)-\frac{c}{\Phi_0(\nu;x)+\frac{\kappa_0}{x}}\right)
>\mathrm{R}\left(f(x)-\frac{c}{\Phi_0(\nu;x)}\right),
\end{align}
then we take the first-correction $\mathrm{MC}_1(x)=\frac{\kappa_0}{x+\lambda_0}$ with
\begin{align}
\lambda_0=\max_{\lambda}\mathrm{R}\left(f(x)-\frac{c}{\Phi_0(\nu;x)
+\frac{\kappa_0}{x+\lambda}}\right).
\end{align}
In this case, the first-correction has the form \emph{Type-I}. Otherwise, we take the first-correction $\mathrm{MC}_1(x)$ in the form \emph{Type-II}, i.e. $\mathrm{MC}_1(x)=\frac{\kappa_0}{x^2+\lambda_{0,1}x+\lambda_{0,2}}$ such that
\begin{align}
(\kappa_0,\lambda_{0,1},\lambda_{0,2})=
\max_{\kappa,\lambda_1,\lambda_2}\mathrm{R}
\left(f(x)-\frac{c}{\Phi_0(\nu;x)
+\frac{\kappa}{x^2+\lambda_1 x+\lambda_2}}\right).
\end{align}
If $\kappa_0=0$, we stop the correction-process, which means that the rate of convergence cannot be further improved by making use of the \emph{Type-I} or \emph{Type-II} continued fraction structure alone.
\bigskip
\noindent {\bf(Step 3) The second-correction to the $k$th-correction.} If $\mathrm{MC}_1(x)$ has the form \emph{Type-I}, we take the second-correction
\begin{align}
\mathrm{MC}_2(x)=\frac{\kappa_0}
{x+\lambda_0+\frac{\kappa_1}{x+\lambda_1}},
\end{align}
which satisfies
\begin{align}
(\kappa_1,\lambda_1)=\max_{\kappa,\lambda}\mathrm{R}
\left(f(x)-\frac{c}{\Phi_0(\nu;x)
+\frac{\kappa_0}
{x+\lambda_0+\frac{\kappa}{x+\lambda}}}\right).
\end{align}
Similarly to the first-correction, if $\kappa_1=0$, we stop the correction-process.
If $\mathrm{MC}_1(x)$ has the form \emph{Type-II}, we take the second-correction
\begin{align}
\mathrm{MC}_2(x)=\frac{\kappa_0}
{x^2+\lambda_{0,1}x+\lambda_{0,2}+\frac{\kappa_1}
{x^2+\lambda_{1,1}x+\lambda_{1,2}}},
\end{align}
such that
\begin{align}
(\kappa_1,\lambda_{1,1},\lambda_{1,2})=
\max_{\kappa,\lambda_{1},\lambda_{2}}\mathrm{R}
\left(f(x)-\frac{c}{\Phi_0(\nu;x)
+\frac{\kappa_0}
{x^2+\lambda_{0,1}x+\lambda_{0,2}+\frac{\kappa}
{x^2+\lambda_{1}x+\lambda_{2}}}}\right).
\end{align}
If $\kappa_1=0$, we also need to stop the correction-process.
If we can continue the above correction-process to determine the $k$th-correction function $\mathrm{MC}_k(x)$ up to any desired $k^*$, then one may use a recurrence relation to determine $\mathrm{MC}_k(x)$. More precisely, in the case of \emph{Type-I} we choose
\begin{align}
\mathrm{MC}_k(x)=\K_{j=0}^{k-1}
\left(\frac{\kappa_j}{x+\lambda_j}\right)
\end{align}
such that
\begin{align}
(\kappa_{k-1},\lambda_{k-1})=\max_{\kappa,\lambda}\mathrm{R}
\left(f(x)-\left(\begin{array}{ccccccc}
c& &\kappa_0 & & \kappa_{k-2} & & \kappa \\
\cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}
\Phi_0(\nu;x) &+&x+\lambda_0 & +\cdots+ & x+\lambda_{k-2} & + &x+\lambda
\end{array}\right)\right).
\end{align}
While, in the case of \emph{Type-II} we take
\begin{align}
\mathrm{MC}_k(x)=\K_{j=0}^{k-1}\left(\frac{\kappa_j}
{x^2+\lambda_{j,1}x+\lambda_{j,2}}\right),
\end{align}
which satisfies
\begin{align}
(\kappa_{k-1},\lambda_{k-1,1},\lambda_{k-1,2})
=\max_{\kappa,\lambda_{1},\lambda_{2}}\mathrm{R}
\left(f(x)-G(\kappa,\lambda_1,\lambda_2;x)\right),
\end{align}
where
\begin{align*}
G(\kappa,\lambda_1,\lambda_2;x):=\begin{array}{ccccccc}
c& &\kappa_0 & & \kappa_{k-2} & & \kappa \\
\cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}
\Phi_0(\nu;x) &+& x^2+\lambda_{0,1}x+\lambda_{0,2} & +\cdots+ & x^2+\lambda_{k-2,1}x+\lambda_{k-2,2} & + &x^2+\lambda_1 x+\lambda_2
\end{array}.
\end{align*}
\bigskip
Note that in the case of both the \emph{Type-I} and the \emph{Type-II} continued fraction approximation, if $\kappa_{k-1}=0$, we must stop the correction-process. In other words, to improve the rate of convergence, we would need to choose some more complex continued fraction structure instead.
\bigskip
\begin{rem}
Sometimes, we need to consider equivalent forms. For example,
Stirling's formula reads~(see, e.g., [1, p. 253])
\begin{align}
\Gamma(x+1)\sim \sqrt{2\pi x} \left(\frac xe\right)^{x},\quad x\rightarrow +\infty,\label{Stirling's formula}
\end{align}
which is equivalent to
\begin{align}
\lim_{x\rightarrow \infty}x^3 f(x)=1,\label{Ramanujan-Type}
\end{align}
where
\begin{align}
f(x)=8 \pi^3\left(\frac xe\right)^{6x}\Gamma^{-6}(x+1).
\end{align}
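Indeed, substituting~\eqref{Stirling's formula} into $f$ gives
\begin{align*}
f(x)\sim 8 \pi^3\left(\frac xe\right)^{6x}\left(2\pi x\right)^{-3}\left(\frac xe\right)^{-6x}=\frac{1}{x^3},
\end{align*}
which is exactly~\eqref{Ramanujan-Type}.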
From the above asymptotic formula, we may study Ramanujan-type continued fraction approximation for the gamma function. For more details, see Cao~\cite{Cao2} or next section. Moreover, we note that~\eqref{Stirling's formula} has many equivalent forms. Hence, it is not difficult to see that the equivalent transformation of a practical problem influences directly the initial-correction and final continued fraction approximation.
\end{rem}
\begin{rem}
If $\nu$ is a negative integer, our method is still effective: we may simply consider the reciprocal of $f(x)$.
\end{rem}
\begin{rem}
For comparison, we use the mathematical notation ``$\mathrm{R}$" and ``$\max$" in the above definition, which makes the method clearer; strictly speaking, the parameters on the left-hand sides are the corresponding maximizers.
\end{rem}
\subsection{The Mortici-transformation}
In this subsection we will explain how to find all the related coefficients in $\Phi_0(\nu;x)$ and $\mathrm{MC}_k(x)$. If we can easily expand $f(x)$ into a power series in terms of $1/x$, then it is not difficult to determine $\Phi_0(\nu;x)$ and $\mathrm{MC}_k(x)$. Similarly, if we may expand the difference $f(x)-f(x+1)$ into a power series in terms of $1/x$, by the generalized Mortici lemma we can also find $\Phi_0(\nu;x)$ and $\mathrm{MC}_k(x)$, e.g. for the Euler-Mascheroni
constant, the constants of Landau, the constants of Lebesgue, etc.~(See~\cite{Cao1}). However, in many cases the previous two approaches are not very efficient, e.g. for the gamma function~(see Remark 2) and the ratio of gamma functions~(for example, see Sec.~7 below). Instead, we may employ the following method.
\bigskip
First, we introduce the $k$th-correction relative error sequence $(E_k(x))_{k\ge 0}$ as follows
\begin{align}
&f(x)=\frac{c}{\Phi_0(\nu;x)}
\exp\left(E_0(x)\right),\label{E0-def}\\
&f(x)=\frac{c}{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}
\exp\left(E_k(x)\right),\quad k\ge 1,\label{Ek-def}
\end{align}
where $\Phi_0(\nu;x)$ is a polynomial of degree $\nu$ in $x$ whose leading coefficient equals one, to be specified below.
It is easy to verify that
\begin{align*}
&f(x)-\frac{c}{\Phi_0(\nu;x)}=\frac{c}{\Phi_0(\nu;x)}\left(
\exp\left(E_0(x)\right)-1
\right),\\
&f(x)-\frac{c}{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}=
\frac{c}{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}
\left(
\exp\left(E_k(x)\right)-1
\right),\quad k\ge 1.
\end{align*}
Since
\begin{align*}
\lim_{t\rightarrow 0}\frac{\exp(t)-1}{t}=1
\end{align*}
and $\lim_{x\rightarrow\infty}E_k(x)=0$, we obtain
\begin{align}
&\mathrm{R}\left(f(x)-\frac{c}{\Phi_0(\nu;x)}\right)
=\nu+\mathrm{R}\left(E_0(x)\right),\\
&\mathrm{R}\left(f(x)-\frac{c}{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}\right)
=\nu+\mathrm{R}\left(E_k(x)\right),\quad k\ge 1.
\end{align}
In this way, we reduce the problem to finding
$\mathrm{R}\left(E_k(x)\right)$.
Taking the logarithm of~\eqref{E0-def} and~\eqref{Ek-def}, respectively, we deduce that
\begin{align*}
&\ln \frac{f(x)}{c}=-\ln\left(\Phi_0(\nu;x)\right)+E_0(x),\\
&\ln \frac{f(x)}{c} =-\ln\left(\Phi_0(\nu;x)+\mathrm{MC}_k(x)\right)+E_k(x),\quad k\ge 1.
\end{align*}
Next, let us consider the difference
\begin{align}
E_0(x)-E_0(x+1)=&\ln\frac{f(x)}{f(x+1)}
+\ln\frac{\Phi_0(\nu;x)}{\Phi_0(\nu;x+1)},\\
E_k(x)-E_k(x+1)=&\ln\frac{f(x)}{f(x+1)}
+\ln\frac{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}{\Phi_0(\nu;x+1)
+\mathrm{MC}_k(x+1)},\quad k\ge 1.
\end{align}
By Lemma 1~(the generalized Mortici lemma), we have
\begin{align}
\mathrm{R}\left(E_k(x)\right)=\mathrm{R}\left(E_k(x)-E_k(x+1)\right)-1.
\end{align}
Finally, if we set $\mathrm{MC}_0(x)\equiv 0$, then we attain the following useful tool.
\bigskip
\begin{lem} Let $f(x)$ satisfy~\eqref{Condition-1}. Under the above notation, we have
\begin{align}
&\mathrm{R}\left(f(x)-\frac{c}{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}\right)
\label{Mortici-transformation}\\
=&\nu-1+\mathrm{R}\left(\ln\frac{f(x)}{f(x+1)}
+\ln\frac{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}{\Phi_0(\nu;x+1)
+\mathrm{MC}_k(x+1)}\right),\quad k\ge 0.\nonumber
\end{align}
\end{lem}
The idea of Lemma 3 originated from Mortici~\cite{Mor1}, and we will call it a \emph{Mortici-transformation}. We would like to stress that the \emph{Mortici-transformation} implies the following assertion
\begin{align}
&\max_{\kappa,\lambda \ (\mbox{\emph{\footnotesize or}}~ \kappa,\lambda_1,\lambda_2)}\mathrm{R}\left(f(x)-\frac{c}
{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}\right)
\label{Mortici-transformation-1}\\
=&\max_{\kappa,\lambda \ (\mbox{\emph{\footnotesize or}}~ \kappa,\lambda_1,\lambda_2)}\mathrm{R}\left(\ln\frac{f(x)}{f(x+1)}
+\ln\frac{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}{\Phi_0(\nu;x+1)
+\mathrm{MC}_k(x+1)}\right),\quad k\ge 0.\nonumber
\end{align}
In the sequel, we will use this relation many times. For the sake of simplicity, we will always assume that the difference
\begin{align}
\ln\frac{f(1/z)}{c}- \ln\frac{f(1/z +1)}{c} =\ln\frac{f(1/z)}{f(1/z+1)}\label{Condition-2}
\end{align}
is an analytic function in a neighborhood of point $z=0$.
\bigskip
For the reader's convenience, we would like to give the complete \emph{Mathematica} program for finding
all the coefficients in $\Phi_0(\nu;x)$ and $\mathrm{MC}_k(x)$ by making use of the~\emph{Mortici-transformation}.
\bigskip
{\bf (i)}. First, let the function $MT[x]$ be defined by
\begin{align*}
MT[x]:=\ln\frac{f(x)}{f(x+1)}
+\ln\frac{\Phi_0(\nu;x)+\mathrm{MC}_k(x)}{\Phi_0(\nu;x+1)
+\mathrm{MC}_k(x+1)}.
\end{align*}
{\bf (ii)}. Then we run the following \emph{Mathematica} command to
expand $MT[x]$ into a power series in terms of $1/x$:
\begin{align}
\text{Normal}[\text{Series}
[MT[x]
\text{/.}~ x\rightarrow
1/u, \{u,0,l_k\}]]\text{/.}~ u\rightarrow 1/x~(\text{// Simplify})
\label{MT-Mathematica-Program}
\end{align}
We remark that the variable $l_k$ needs to be suitably chosen according to the function at hand.
\bigskip
{\bf (iii)}. Taking the first several coefficients in the above power series, we set them equal to zero and solve for the related coefficients successively.
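\bigskip
As a minimal worked instance of steps {\bf(i)}--{\bf(iii)}~(a sketch of our own), consider the initial-correction for the function $f(x)=\frac{\Gamma^2(x+\frac 12)}{\Gamma^2(x+1)}$ treated in Sec.~6, with $\Phi_0(\nu;x)=x+a$ and $\mathrm{MC}_k(x)=0$; by the recurrence $\Gamma(x+1)=x\Gamma(x)$, we have $\frac{f(x)}{f(x+1)}=\frac{(x+1)^2}{(x+\frac 12)^2}$, so
\begin{verbatim}
MT[x_] := Log[(x + 1)^2/(x + 1/2)^2] + Log[(x + a)/(x + 1 + a)];
Normal[Series[MT[x] /. x -> 1/u, {u, 0, 3}]] /. u -> 1/x // Simplify
(* output: (a - 1/4)/x^2 plus higher-order terms in 1/x *)
Solve[SeriesCoefficient[MT[1/u], {u, 0, 2}] == 0, a]  (* {{a -> 1/4}} *)
\end{verbatim}
recovers the initial-correction $\Phi_0(x)=x+\frac 14$ found in Sec.~6.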
\begin{rem}
Actually, once we have found $\mathrm{MC}_k(x)$, \eqref{MT-Mathematica-Program} can be used again to determine the rate of convergence. In addition, we can apply it to check the general term formula for $\mathrm{MC}_k(x)$.
\end{rem}
\subsection{The formal continued fraction expansion}
Similarly to Taylor's formula, if the $k$th-correction $\mathrm{MC}_k(x)$ for $f(x)$ has the \emph{Type-I (or the Type-II)} structure, then we may construct the \emph{formal Type-I (or Type-II) continued fraction approximation of order $k$} for $f(x)$ as follows:
\begin{align}
CF_k(f(x)):= \frac{1}{\Phi_0(\nu;x)+\mathrm{MC}_k(x)},\quad k\ge 0.
\end{align}
For example, the Euler-Mascheroni
constant has a formal \emph{Type-I} continued fraction approximation of order $k$, while both Landau's constants and Lebesgue's constants have formal \emph{Type-II} continued fraction approximations of order $k$. For details, readers may refer to~\cite{Cao1}.
\bigskip
{\bf Example 1.} Let $f(x)=\frac{\Gamma^4(x+\frac 14)}{\Gamma^4(x+1)}$. Then $CF_k(f(x))$ is of \emph{Type-I}, its $\mathrm{MC}$-point $\omega$ equals $\frac 18$~(i.e. $\lambda_m\equiv \frac 18$), and
\begin{align}
CF_k(f(x))=\frac{1}{(x+\frac 18)^3+\frac {7}{128} (x+\frac 18)+\mathrm{MC}_k(x)},
\end{align}
where $
(\kappa_0,\kappa_1,\kappa_2,\ldots)=\left(-\frac{189}{32768}, \frac{1483}{2688}, \frac{8923253}{15945216}, \frac{
10136617131375}{6775390309888}, \frac{2313439127848201}{1428763287038592},\ldots\right).
$
{\bf Example 2.} Let $G_{\eta}(x)$ be defined by \eqref{G-eta-Def}
below. In the case of $\eta=\frac 12$, $CF_k\left(G_{\eta}^2(x)\right)$ is of \emph{Type-II}; for details see Corollary 2 in Sec.~7. If $\eta\neq \frac 12$, then $CF_k\left(G_{\eta}^2(x)\right)$ is of \emph{Type-I}, and it has no $\mathrm{MC}$-point. We have
\begin{align}
CF_2\left(G_{\eta}^2(x)\right)=\frac{1}{x^2+2\eta(1-\eta)x+2 \eta^2(\eta-1)^2 +\mathrm{MC}_2(x)},
\end{align}
where $\kappa_0=-\frac 13\eta^2 (1-\eta)^2(2 \eta-1)^2$, $\lambda_0=\frac{(2 \eta - 1)^2}{8} + \frac 14 - \frac{3}{8 (2 \eta-1)^2}$, $\kappa_1=\frac{1}{64} \left( (2 \eta - 3)^2(2 \eta + 1)^2 + 10 + \frac{45}{(2 \eta-1)^4}\right)$, and $\lambda_1=\frac{(2 \eta - 1)^2}{24} + \frac 14 + \frac{3}{8 (2 \eta-1)^2} -\frac{( \eta-2)^2 (\eta+1)^2 (2 \eta- 1)^2}{6(2 \eta-1)^4\kappa_1}$.
\bigskip
If we rewrite $CF_k(f(x))$ as a rational function of the form $\frac{P_r(x)}{Q_s(x)}$, then $s=k+\nu$ in the case of \emph{Type-I}, and $s=2k+\nu$ in the case of \emph{Type-II}. If we let $\mathrm{R}\left(f(x)-CF_k(f(x))\right)=K$, then
\begin{align}
f(x)=CF_k(f(x))+O\left(x^{-K}\right),\quad x\rightarrow\infty.
\end{align}
\bigskip
Let $\theta_0=0$ or $1$. Extensive computations reveal that if $CF_k(f(x))$
is of \emph{Type-I}, then $K=2k+2\nu+1+\theta_0$, while
$K=4k+2\nu+1+\theta_0$ in the case of \emph{Type-II}.
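For instance~(a check of our own), for the function $f(x)=\frac{\Gamma^2(x+\frac 12)}{\Gamma^2(x+1)}$ treated in Sec.~6 one has $\nu=1$, the \emph{Type-I} structure and $\theta_0=0$, so $K=2k+3$; indeed, the power series expansions in Sec.~6.1 below give $\mathrm{R}\left(f(x)-CF_0(f(x))\right)=3$ and $\mathrm{R}\left(f(x)-CF_1(f(x))\right)=5$.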
\bigskip
For a suitable ``not very large" positive integer $k$, by making use of the \emph{Mortici-transformation} and~\eqref{MT-Mathematica-Program}, we may get the rate of convergence of $f(x)-CF_k(f(x))$ as $x$ tends to infinity. Moreover, by making use of the telescoping method, the Hermite-Hadamard inequality, etc., sometimes we can prove sharp double inequalities for $f(x)-CF_k(f(x))$ for $x$ as small as possible. We will give an example in Sec.~5.
\bigskip
Now letting $k$ tend to $\infty$, we get the \emph{formal Type-I (or Type-II) continued fraction expansion} for $f(x)$, written briefly as \begin{align}
f(x)\sim CF(f(x)):=CF_{\infty}(f(x)),\quad x\rightarrow \infty.
\end{align}
In some cases, we can further test and guess the general term of
$CF(f(x))$. Here we need to apply some tools from number theory,
difference equations, etc. We will show some examples in Secs.~7 and 8.
\bigskip
For the formal continued fraction expansion, we are often concerned with the following two main problems.
\bigskip
\noindent {\bf Problem 1.} Determine the domains of convergence of the formal continued fraction expansion $CF(f(x))$. We may refer to two very nice books, by L. Lorentzen and H. Waadeland~\cite{LW} and by A. Cuyt, V.B. Petersen, B. Verdonk, H. Waadeland and W.B. Jones~\cite{CPV}, or to other classical books cited therein.
\bigskip
\noindent {\bf Problem 2.} Prove the identity on as large a domain as possible. That is, based on Problem 1, determine intervals $\mathrm{I}$ such that $f(x)= CF(f(x))$ for all $x\in \mathrm{I}$. For example, with the help of continued fraction theory, hypergeometric series, etc., we hope at least to find an interval $(x_0,\infty)\subset \mathrm{I}$ for some $x_0>0$. Certainly, one may extend this to a complex domain. However, in this paper we will not investigate this topic.
\bigskip
On the one hand, to determine all the related coefficients, we often use an appropriate symbolic computation software package, which requires a huge amount of computation. On the other hand, the exact expressions at each occurrence also take a lot of space. Hence, in this paper we omit some related details owing to space limitations.
\begin{rem}
From the above discussion, we observe that for a specific function, apart from requiring a huge amount of computation, it may happen that these two kinds of structures alone cannot provide a ``good continued fraction approximation''. In addition, in the theory of classical continued fractions, even if there is a continued fraction expansion for a given function, we often do not know whether it is the fastest possible or the best possible. Generally speaking, for a given continued fraction, finding the rate of convergence of the $k$th approximant is not always easy.
\end{rem}
\section{The volume of the unit ball}
It is well-known that the volume of the unit ball in $\mathbb{R}^n$ is
\begin{align}
\Omega_n=\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac n2+1)}.\label{Omega-n-Def}
\end{align}
Many authors have investigated inequalities for $\Omega_n$; see, e.g.,~\cite{Al1,Al2,Al3,AVV,AQ,BH,Gao,KR,Me,Mor3,Mor4,QV,Sch} and the references therein.
Chen and Li~\cite{CL} proved~(with $a=\frac e2$, $b=\frac 13$):
\begin{align}
\frac{1}{\sqrt{\pi(n+a)}}\left(\frac{2\pi e}{n}\right)^{\frac n2}\le \Omega_n<\frac{1}{\sqrt{\pi(n+b)}}\left(\frac{2\pi e}{n}\right)^{\frac n2}.
\end{align}
Recently, Mortici~[25, Theorem 3] showed that for every integer $n\ge 3$ in the left-hand side and $n\ge 1$ in the right-hand side, the following Gosper-type inequalities hold:
\begin{align}
\frac{1}{\sqrt{\pi(n+\theta(n))}}\left(\frac{2\pi e}{n}\right)^{\frac n2}\le \Omega_n<\frac{1}{\sqrt{\pi(n+\vartheta(n))}}\left(\frac{2\pi e}{n}\right)^{\frac n2},
\end{align}
where
\begin{align*}
\theta(n)=\frac 13+\frac{1}{18 n}-\frac{31}{810 n^2},\quad \vartheta(n)=\theta(n)-\frac{139}{9720 n^3}.
\end{align*}
Now we let
\begin{align}
V(x)=\frac{\pi^x}{\Gamma(x+1)},\quad x>0.\label{V-Def}
\end{align}
Observe that if $\frac{1}{\Gamma(x+1)}\sim H(x)$ as $x$ tends to infinity, then $V(x)$ has an asymptotic formula of the form
$\pi^x H(x)$. In this sense, by Remark 2 and Remark 3, it suffices to consider the asymptotic formula for the gamma function. In fact, we note that both $f(x)$ and $1/f(x)$ have the same $k$th-correction $\mathrm{MC}_k(x)$.
From~\eqref{Ramanujan-Type}, we introduce the relative error sequence $(E_k(x))_{k\ge 0}$ to be defined by
\begin{align}
f(x):=&8 \pi^3\left(\frac xe\right)^{6x}\Gamma^{-6}(x+1)=\frac{\exp(E_0(x))}{\Phi_0(x)},
\label{E0-Gamma-Def}\\
f(x):=&\frac{\exp(E_k(x))}{\Phi_0(x)+\mathrm{MC}_k(x)},\quad k\ge 1,\label{Ek-Gamma-Def}
\end{align}
where $\Phi_0(x)= x^3+\frac 12 x^2+\frac 18 x+\frac {1}{240}$, and
\begin{align}
\mathrm{MC}_k(x)=\K_{j=0}^{k-1}\left(\frac{\kappa_j}{x+\lambda_j}
\right),
\end{align}
here $\kappa_0=-\frac{11}{1920}, \lambda_0=\frac{79}{154},
\kappa_1=\frac{459733}{711480}, \lambda_1=-\frac{1455925}{70798882},\ldots$. We stress that $\Phi_0(x)$ was first claimed by Ramanujan~\cite{Ram}, and some more coefficients may be found in~\cite{Cao2}. By employing Lemma 1, \eqref{Ek-difference-Exp}~(see below) and~\eqref{MT-Mathematica-Program}, it is not difficult to verify that
\begin{align}
&\lim_{x\rightarrow\infty}x^{4}E_0(x)=\frac{11}{1920}:=C_0,\\
&\lim_{x\rightarrow\infty}x^{6}E_1(x)=-\frac{459733}{124185600}:=C_1.
\end{align}
The following theorem tells us how to improve the above results and obtain some sharper estimates for $E_0(x)$ and $E_1(x)$.
\begin{thm} Let $E_0(x)$ and $E_1(x)$ be defined as~\eqref{E0-Gamma-Def} and~\eqref{Ek-Gamma-Def}, respectively.
(i) For every real number $x\ge 6$ in the left-hand side and $x\ge 12$ in the right-hand side, we have
\begin{align}
\frac{11}{1920}\frac{1}{(x+3)^4}<E_0(x)<\frac{11}{1920 (x-5)^4}.
\end{align}
(ii) For every real number $x\ge 9$ in the left-hand side and $x\ge 10$ in the right-hand side, we have
\begin{align}
-\frac{459733}{124185600}\frac{1}{(x-2)^6}<E_1(x)
<-\frac{459733}{124185600}\frac{1}{(x+2)^6}.
\end{align}
\end{thm}
\proof We use the idea of Theorem 2 in~\cite{XY} or Theorem 1 in~\cite{CXY}. Let $G_k(x)=E_k(x)-E_k(x+1)$ for $k\ge 0$. We will employ the telescoping method. It follows from $\lim_{x\rightarrow\infty}E_k(x)=0$ that
\begin{align}
E_k(x)=\sum_{m=0}^{\infty}G_k(x+m), \quad(k=0,1).\label{Telecsoping Identity}
\end{align}
If $g(\infty)=g'(\infty)=0$, it is not difficult to prove that
\begin{align}
g(x)=-\int_{x}^{\infty}g'(s) ds=
\int_{x}^{\infty}\left(\int_{s}^{\infty}g''(t) dt\right) ds.
\label{Integral-Change-Form}
\end{align}
Note the convention $\mathrm{MC}_0(x)\equiv 0$. By~\eqref{E0-Gamma-Def} and~\eqref{Ek-Gamma-Def}, we have
\begin{align}
&E_k(x)=-6\ln \Gamma(x+1)+3\ln 2\pi +6 x(\ln x-1)+ \ln(\Phi_0(x)+\mathrm{MC}_k(x)),\\
&G_k(x)=E_k(x)-E_k(x+1)=6\left(1-x \ln(1+\frac 1x)\right)+\ln\frac{\Phi_0(x)+\mathrm{MC}_k(x)}
{\Phi_0(x+1)+\mathrm{MC}_k(x+1)}.\label{Ek-difference-Exp}
\end{align}
By using \emph{Mathematica} software, we can check that if $x>0$, then
\begin{align}
&G_0''(x)- \frac{11}{16 x^7}+ \frac{29}{8 x^8}=\frac{\Psi_1(15;x)}{96 x^8 (1 + x)^2 \Psi_2(12;x)}>0,\\
&G_0''(x)- \frac{11}{16 x^7}+ \frac{29}{8 x^8}-\frac{9031}{800 x^9}
=-\frac{\Psi_3(13;x)}{800 x^9 (1 + x)^2\Psi_2(12;x)}<0.
\end{align}
By~\eqref{Integral-Change-Form}, we get that when $x>0$,
\begin{align}
\frac{11}{480 x^5}- \frac{29}{336 x^6}<G_0(x)<\frac{11}{480 x^5}- \frac{29}{336 x^6}+\frac{9031}{44800 x^7}.\label{G0-bounds}
\end{align}
Similarly, if $x\ge \frac {1}{16}$, we have
\begin{align}
&G_1''(x)+ \frac{459733}{369600 x^9}- \frac{39872247}{
4743200 x^{10}}\\
=&-\frac{\Psi_4(20;x)(x-\frac{1}{16})+\frac{4490\dots 0225}{1441\cdots 5872}}{28459200 x^{10}\Psi_5(6;x)\left(\Psi_6(3;x)(x - \frac{1}{23})+\frac{7670381}{279841}\right)^2\Psi_7(8;x) }<0,\nonumber\\
&G_1''(x)+ \frac{459733}{369600 x^9}- \frac{39872247}{
4743200 x^{10}}+\frac{1092949825573}{32724285440 x^{11}}\\
=&\frac{\Psi_8(20;x)(x-\frac{1}{16})+\frac{2388\cdots 7275}{
1125\cdots 2624}}{490864281600 x^{11}\Psi_5(6;x)\left(\Psi_6(3;x)(x - \frac{1}{23})+\frac{7670381}{279841}\right)^2\Psi_7(8;x) }>0,\nonumber
\end{align}
and
\begin{align}
-\frac{459733}{20697600 x^7}+ \frac{13290749}{
113836800 x^8}-\frac{1092949825573}{2945185689600 x^9}<G_1(x)<-\frac{459733}{20697600 x^7}+ \frac{13290749}{
113836800 x^8}.\label{G1-bounds}
\end{align}
Now, combining \eqref{Telecsoping Identity}, \eqref{G0-bounds} and \eqref{G1-bounds}, we attain that
\begin{align}
&0<E_0(x)-\frac{11}{480}\sum_{m=0}^{\infty}\frac{1}{(x+m)^5}
+\frac{29}{336 } \sum_{m=0}^{\infty}\frac{1}{(x+m)^6}<\frac{9031}{44800 }\sum_{m=0}^{\infty}\frac{1}{(x+m)^7},\quad (x>0),\\
&-\frac{1092949825573}
{2945185689600 }\sum_{m=0}^{\infty}\frac{1}{(x+m)^9}<\\
&E_1(x)+\frac{459733}{20697600 }\sum_{m=0}^{\infty}\frac{1}{(x+m)^7}- \frac{13290749}{
113836800 }\sum_{m=0}^{\infty}\frac{1}{(x+m)^8}
<0,\quad (x>\frac{1}{16}).\nonumber
\end{align}
Let $j\ge 2$ and $x>\frac 12$. By Lemma 2, we obtain
\begin{align}
&\frac{1}{(j-1)x^{j-1}}=\int_x^{\infty}\frac{dt}{t^j}
<\sum_{m=0}^{\infty}\frac{1}{(x+m)^j}\\
&<\sum_{m=0}^{\infty}
\int_{x+m-\frac 12}^{x+m+\frac 12}\frac{dt}{t^j}
=\int_{x-\frac 12}^{\infty}\frac{dt}{t^j}=\frac{1}{(j-1)(x-\frac 12)^{j-1}}.\nonumber
\end{align}
By applying (5.22) and (5.24), under the condition $x\ge 6$ we have
\begin{align}
E_0(x)>&\frac{11}{480}\frac{1}{4x^4}-\frac{29}{336}\frac{1}{5 (x-\frac 12)^5}\\
=&\frac{11}{1920}\frac{1}{(x+3)^4}+
\frac{\Psi_1(7;x)(x-6)+2164192911}{13440 x^4 (3 + x)^4 (-1 + 2 x)^5}\nonumber\\
>&\frac{11}{1920}\frac{1}{(x+3)^4}.\nonumber
\end{align}
Similarly to (5.25), if $x\ge 12$, then
\begin{align}
E_0(x)<&\frac{11}{480}\frac{1}{4(x-\frac 12)^4}-\frac{29}{336}\frac{1}{5 x^5}+
\frac{9031}{44800}\frac{1}{6 (x-\frac 12)^6}\\
=&\frac{11}{1920 (x-5)^4}-\frac{\Psi_2(9;x)(x-12)+12561000435989768}{67200 (-1 + x)^4 x^5 (-1 + 2 x)^6}\nonumber\\
<&\frac{11}{1920 (x-5)^4}.\nonumber
\end{align}
This completes the proof of assertion (i). Finally, it is not difficult to check that if $x\ge 9$, then
\begin{align}
E_1(x)>&-\frac{459733}{20697600}\frac{1}{6 (x - 1/2)^6}+\frac{ 13290749}{
113836800}\frac{1}{7 x^7} -\frac{1092949825573}{2945185689600}\frac{1}{8 (x - 1/2)^8}\\
=&-\frac{459733}{124185600}\frac{1}{(x - 2)^6}+\frac{\Psi_3(13;x)(x-9)+67733478135399363858702201}{736296422400 (-2 + x)^6 x^7 (-1 + 2 x)^8}\nonumber\\
>&-\frac{459733}{124185600}\frac{1}{(x - 2)^6},\nonumber
\end{align}
and if $x\ge 10$, we have
\begin{align}
E_1(x)<&-\frac{459733}{20697600}\frac{1}{6 x^6}+\frac{ 13290749}{
113836800}\frac{1}{7 (x - 1/2)^7}\\
=&-\frac{459733}{20697600}\frac{1}{6 (x+2)^6}-\frac{\Psi_4(11;x)(x-10)+470994290293217661904}{2390572800 x^6 (2 + x)^6 (-1 + 2 x)^7}\nonumber\\
<&-\frac{459733}{124185600}\frac{1}{(x + 2)^6}.\nonumber
\end{align}
This completes the proof of Theorem 1.\qed
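As a numerical sanity check of part (i)~(ours, not part of the proof), write $E_0(x)=\ln\left(f(x)\Phi_0(x)\right)$ with $f$ as in~\eqref{E0-Gamma-Def} and compare the three quantities at $x=12$ in \emph{Mathematica}:
\begin{verbatim}
E0[x_] := Log[8 Pi^3 (x/E)^(6 x) Gamma[x + 1]^(-6)
              (x^3 + x^2/2 + x/8 + 1/240)];
N[{11/1920/(12 + 3)^4, E0[12], 11/1920/(12 - 5)^4}, 10]
(* the middle value indeed lies between the two bounds *)
\end{verbatim}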
\begin{thm} Assume $n\ge 24$. Then we have
the following Ramanujan-type inequalities
\begin{align}
\frac{1}{\sqrt \pi}\left(\frac{2\pi e}{n}\right)^{\frac n2}\frac{1}{\sqrt[6]{n^3+ n^2+ \frac n2+\frac{1}{30}}}<\Omega_n<\frac{1}{\sqrt \pi}\left(\frac{2\pi e}{n}\right)^{\frac n2}\frac{\exp\left(\frac{11}{720 (n-10)^4} \right)}{\sqrt[6]{n^3+ n^2+ \frac n2+\frac{1}{30}}}.
\end{align}
If $n\ge 20$, then
\begin{align}
&\frac{1}{\sqrt \pi}\left(\frac{2\pi e}{n}\right)^{\frac n2}\frac{1-\frac{459733}{11642400 (-4 + n)^6}}{\sqrt[6]{n^3+ n^2+ \frac n2+\frac{1}{30}-\frac{847}{9240 n + 9480}}}<\Omega_n\\
&<\frac{1}{\sqrt \pi}\left(\frac{2\pi e}{n}\right)^{\frac n2}\frac{1}{\sqrt[6]{n^3+ n^2+ \frac n2+\frac{1}{30}-\frac{847}{9240 n + 9480}}}.\nonumber
\end{align}
\end{thm}
\proof It follows from~\eqref{Omega-n-Def}, \eqref{E0-Gamma-Def} and \eqref{Ek-Gamma-Def} that
\begin{align}
\Omega_n=\frac{1}{\sqrt \pi}\left(\frac{2\pi e}{n}\right)^{\frac n2}\frac{\exp\left(\frac 16 E_0(\frac n2)\right)}{\sqrt[6]{8\Phi_0(\frac n2)}},\quad \Omega_n=\frac{1}{\sqrt \pi}\left(\frac{2\pi e}{n}\right)^{\frac n2}\frac{\exp\left(\frac 16 E_1(\frac n2)\right)}{\sqrt[6]{8\Phi_0(\frac n2)+8\mathrm{MC}_1(\frac n2)}}.
\end{align}
Now (5.29) follows from (5.10) and (5.31).
We begin to prove (5.30). It is well-known that $\exp(t)\ge 1+t$. When $n\ge 20$, by the right-hand inequality in (5.11), we have the following trivial estimate
\begin{align}
\exp\left(\frac 16 E_1(\frac n2)\right)<1.
\end{align}
In addition, by the lower bound in (5.11), we get
\begin{align}
\exp\left(\frac 16 E_1(\frac n2)\right)>1+\frac 16 E_1(\frac n2)
>1-\frac{459733}{11642400 (-4 + n)^6},
\quad (n\ge 20).
\end{align}
Combining (5.31), (5.32) and (5.33) completes the proof of (5.30).\qed
Following the same approach as in Theorem 2, it is not difficult to prove the following Ramanujan-type inequalities for the gamma function.
\begin{cor} Let $x\ge 12$. Then
\begin{align*}
\sqrt{\pi}\left(\frac x e\right)^x
\left(8x^3+4x^2+x+\frac{1}{30}\right)^{\frac 16}\exp\left(-\frac{11}{11520 (x-5)^4} \right)<\Gamma(x+1)<\sqrt{\pi}\left(\frac x e\right)^x
\left(8x^3+4x^2+x+\frac{1}{30}\right)^{\frac 16}.
\end{align*}
\end{cor}
\begin{rem}
It should be noted that the method described in Theorems 1 and 2 can also be used to look for $CF_k(F(x))$ and to prove inequalities involving the ratio of gamma functions.
\end{rem}
\begin{rem}
We will give some other results involving $\Omega_n$ in Subsection 6.2.
\end{rem}
\section{Lord Brouncker's continued fraction formula}
\subsection{Lord Brouncker's continued fraction formula}
The following formula is taken from Corollary 1 of Berndt~[8, p. 145], which was first proved by Bauer~\cite{Bau} in 1872.
\begin{lem} If $\Re x>0$, then
\begin{align}
\frac{\Gamma^2(\frac 14(x+1))}{\Gamma^2(\frac 14(x+3))}
=\frac{4}{x+}\K_{m=1}^{\infty}\left(\frac{(2m-1)^2}{2x}\right).\label{General-Lord-Brouncker}
\end{align}
\end{lem}
By taking $x=4n+1$ in the above formula, we obtain the so-called Lord Brouncker's continued fraction formula
\begin{align}
q(n):=\frac{\Gamma^2(n+\frac 12)}{\Gamma^2(n+1)}=
\frac{4}{4n+1+\frac{1^2}{2(4n+1)+\frac{3^2}{2(4n+1)
+\frac{5^2}{2(4n+1)+\ddots}}}}.\label{Brouncker's formula}
\end{align}
For a very interesting history of formula~\eqref{General-Lord-Brouncker}, see Berndt~[8, p. 145]. In addition, Lord Brouncker's continued fraction formula also plays an important role in Landau's constants, see~\cite{CXY,Cao1}.
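As a quick numerical check~(our own), at $n=1$ formula~\eqref{Brouncker's formula} gives $q(1)=\Gamma^2(\frac 32)/\Gamma^2(2)=\frac{\pi}{4}$, and a deep truncation of the continued fraction reproduces this value:
\begin{verbatim}
q[n_, m_] := 4/(4 n + 1 + Fold[#2/(2 (4 n + 1) + #1) &, 0,
    Reverse@Table[(2 j - 1)^2, {j, m}]]);
N[q[1, 500] - Pi/4]  (* negligibly small *)
\end{verbatim}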
The main aim in this subsection is to illustrate~(without proof) that the formula~\eqref{General-Lord-Brouncker} is the fastest possible by making use of the method formulated in Sec.~4. Replacing $x$ by $4x+1$ in~\eqref{General-Lord-Brouncker} and then making some simple calculations, we obtain its equivalent form as follows.
\begin{lem} If $\Re x>-\frac 14$, then
\begin{align}
\frac{\Gamma^2(x+\frac 12)}{\Gamma^2(x+1)}=\frac{1}{x+\frac 14+\frac{\frac{1}{32}}{x+\frac 14+\K_{m=1}^{\infty}\left(\frac{\frac{(2m+1)^2}{64}}{x+\frac 14}\right)}}.\label{General-Lord-Brouncker-1}
\end{align}
\end{lem}
\proof Now, we are in a position to treat the above formula directly. Let
\begin{align}
f(x)=\frac{\Gamma^2(x+\frac 12)}{\Gamma^2(x+1)}.
\end{align}
By the recurrence relation $\Gamma(x+1)=x \Gamma(x)$, we have
\begin{align}
\frac{f(x)}{f(x+1)}= \frac{(x+1)^2}{(x+\frac 12)^2}.\label{f-ratio-1}
\end{align}
By Stirling's formula, it is not difficult to prove
\begin{align}
\lim_{x\rightarrow\infty}x^{b-a}\frac{\Gamma(x+a)}{\Gamma(x+b)}
=1.\label{Stirling's approximation}
\end{align}
Also see [1, p. 257, Eq. 6.1.47] or [8, p. 71, Lemma 2].
It follows readily from~\eqref{Stirling's approximation} that
\begin{align}
\lim_{x\rightarrow +\infty}x f(x)=1,\label{Brouncker-Initial-Con}
\end{align}
i.e., we take $\nu=1$ in~\eqref{Condition-1}.
\bigskip
\noindent {\bf(Step 1) The initial-correction.} According to~\eqref{Brouncker-Initial-Con}, we take $\Phi_0(x)=x+a$ for some constant $a$, to be specified below. From \eqref{f-ratio-1} and \eqref{MT-Mathematica-Program}, it is not difficult to prove that
\begin{align}
&\ln \frac{f(x)}{f(x+1)}+\ln\frac{\Phi_0(x)}{\Phi_0(x+1)}
=2\ln \frac{x+1}{x+\frac 12}+\ln\frac{x+a}{x+1+a}\\
=&\frac{-1/4 + a}{x^2}+O\left(\frac{1}{x^3}\right).\nonumber
\end{align}
Solving the equation $-1/4 + a=0$, we get $a=1/4$. By the~\emph{Mortici-transformation}, we obtain
\begin{align}
\Phi_0(x)=x+\frac 14,\quad CF_0(f(x))=\frac{1}{x+\frac 14}.
\end{align}
Since we need to use the~\emph{Mortici-transformation} in each correction-process, we will not mention it again, for the sake of simplicity.
\bigskip
\noindent {\bf(Step 2) The first-correction.} Let us expand the following function into a power series in terms of $1/x$:
\begin{align}
&\ln \frac{f(x)}{f(x+1)}+\ln\frac{\Phi_0(x)+\frac{\kappa_0}{x}}
{\Phi_0(x+1)+\frac{\kappa_0}{x+1}}
=2\ln \frac{x+1}{x+\frac 12}+\ln\frac{x+\frac 14+\frac{\kappa_0}{x}}
{x+\frac 54+\frac{\kappa_0}{x+1}}\\
=&\frac{-1/16 + 2\kappa_0}{x^3}+O\left(\frac{1}{x^4}\right).\nonumber
\end{align}
We solve the equation $-1/16 + 2\kappa_0=0$ and obtain $\kappa_0=1/32\neq 0$. Hence we take the first-correction $\mathrm{MC}_1(x)$ to be of \emph{Type-I}, i.e.
\begin{align}
\mathrm{MC}_1(x)=\frac{\kappa_0}{x+\lambda_0}.
\end{align}
Since
\begin{align}
\ln \frac{f(x)}{f(x+1)}+\ln\frac{\Phi_0(x)+\frac{\kappa_0}{x+\lambda_0}}
{\Phi_0(x+1)+\frac{\kappa_0}{x+1+\lambda_0}}
=\frac{\frac{3}{128} - \frac{3 \lambda_0}{32}}{x^4}+O\left(\frac{1}{x^5}\right),\nonumber
\end{align}
we enforce $\frac{3}{128} - \frac{3 \lambda_0}{32}=0$, and deduce
$\lambda_0=\frac 14$. Thus,
\begin{align}
\mathrm{MC}_1(x)=\frac{\frac{1}{32}}{x+\frac 14},
\quad CF_1(f(x))=\frac{1}{x+\frac 14+\frac{\frac{1}{32}}{x+\frac 14}}.
\end{align}
\bigskip
\noindent {\bf(Step 3) The second-correction to the sixth-correction.} Now we take $\mathrm{MC}_2(x)$ to be \emph{Type-I}, and let
\begin{align}
\mathrm{MC}_2(x)=\frac {\kappa_0}{ x+\lambda_0+
\frac {\kappa_1}{x+\lambda_1}}.
\end{align}
By using~\eqref{MT-Mathematica-Program}, we have
\begin{align}
&\ln \frac{f(x)}{f(x+1)}+\ln\frac{\Phi_0(x)+\mathrm{MC}_2(x)}
{\Phi_0(x+1)+\mathrm{MC}_2(x+1)}\\
=&\frac{\frac{9}{512} - \frac{\kappa_1}{8}}{x^5}+\frac{5 (-27 + 176 \kappa_1 + 64 \kappa_1 \lambda_1)}{2048 x^6}+O\left(\frac{1}{x^7}\right).\nonumber
\end{align}
Solving the equations
\begin{align}
\begin{cases}
\frac{9}{512} - \frac{\kappa_1}{8}=0,\\
-27 + 176 \kappa_1 + 64 \kappa_1 \lambda_1=0,
\end{cases}
\end{align}
we attain
\begin{align}
\kappa_1=\frac{9}{64},\quad \lambda_1=\frac 14.
\end{align}
We take the $k$th-correction $\mathrm{MC}_k(x)$ to be of \emph{Type-I}, then repeat the above approach as in the second-correction, and solve successively for the coefficients $\kappa_j$ and $\lambda_j$ ($2\le j\le 7$) as follows:
\begin{align}
& \kappa_2=\frac{25}{64},\quad \lambda_2=\frac 14;\quad \kappa_3=\frac{49}{64},\quad \lambda_3=\frac 14;\quad
\kappa_4=\frac{81}{64},\quad \lambda_4=\frac 14;\\ &\kappa_5=\frac{121}{64},\quad \lambda_5=\frac 14;\quad \kappa_6=\frac{169}{64},\quad \lambda_6=\frac 14;\quad\kappa_7=\frac{225}{64},\quad \lambda_7=\frac 14.
\end{align}
From these results, it is not difficult to guess that
\begin{align}
\kappa_m=\frac{(2m+1)^2}{64},\quad \lambda_m=\frac 14.
\end{align}
Further, we apply~\eqref{MT-Mathematica-Program} to check that the above conjecture holds true for some larger $m$. In this way, we finally test that the fastest possible formula should be~\eqref{General-Lord-Brouncker-1}.\qed
\subsection{The continued fraction formulas involving the volume of the unit ball}
Let $\Omega_n$ be defined by~\eqref{Omega-n-Def}. The main purpose of this subsection is to present the following two theorems.
\begin{thm} Let $n\ge 1$ be a positive integer. Then
\begin{align}
\frac{\Omega_{n}^2}{\Omega_{n-1}\Omega_{n+1}}=
\frac{2(n+1)}{2n+1+\K_{m=0}^{\infty}\left(
\frac{(2m+1)^2}{2(2n+1)}\right)}.
\end{align}
\end{thm}
\proof It follows from~\eqref{Omega-n-Def} and the recurrence relation $\Gamma(x+1)=x \Gamma(x)$ that
\begin{align}
\frac{\Omega_{n}^2}{\Omega_{n-1}\Omega_{n+1}}=
\frac{\Gamma(\frac n2+\frac 12)\Gamma(\frac n2+\frac 32)}{\Gamma^2(\frac n2+1)}=\frac{n+1}{2}\frac{\Gamma^2(\frac n2+\frac 12)}{\Gamma^2(\frac n2+1)}.
\end{align}
Replacing $x$ by $\frac n2$ in~\eqref{General-Lord-Brouncker-1} and then simplifying, we easily get the desired assertion.\qed
\begin{thm} Let $n\in \mathbb{N}$. Then
\begin{align}
\frac{\Omega_{n-1}}{\Omega_n}=\frac{1}{2\sqrt{\pi}}
\sqrt{2n+1+\K_{m=0}^{\infty}
\left(\frac{(2m+1)^2}{2(2n+1)}\right)
}.
\end{align}
\end{thm}
\proof From~\eqref{Omega-n-Def}, we have
\begin{align}
\frac{\Omega_{n-1}}{\Omega_n}=\frac{1}{\sqrt{\pi}}\frac{\Gamma(\frac n2+1)}{\Gamma(\frac n2+\frac 12)}.
\end{align}
Replacing $x$ by $\frac n2$ in~\eqref{General-Lord-Brouncker-1}, taking reciprocals of both sides, and then substituting the result into the above formula completes the proof of Theorem 4.\qed
\begin{rem}
Condition~\eqref{Condition-1} is not an essential restriction. Actually, we can extend our method to any negative integer $\nu$.
For example, by taking reciprocals of both sides in~\eqref{General-Lord-Brouncker-1}, we have
\begin{align}
\frac{\Gamma^2(x+1)}{\Gamma^2(x+\frac 12)}=x+\frac 14+\frac{\frac{1}{32}}{x+\frac 14+}\K_{m=1}^{\infty}\left(\frac{\frac{(2m+1)^2}{64}}{x+\frac 14}\right),\quad \Re x>0.
\end{align}
In this case, we take $\nu=-1$. It should be remarked that we can discover the above formula directly by using an approach similar to that of Lemma 5.
\end{rem}
\begin{rem}
To the best of our knowledge, formulas~\eqref{General-Lord-Brouncker} and~\eqref{General-Lord-Brouncker-1} were
possibly neglected by many mathematicians for more than twenty years, until Gavrea and Ivan mentioned them in their 2013 paper~\cite{GI}.
\end{rem}
\section{A continued fraction formula of Ramanujan}
The following lemma is Entry 39 in Berndt~[8, p. 159], which is one of three principal formulas involving gamma functions given by Ramanujan. It is very difficult for us to imagine how Ramanujan discovered those beautiful continued fraction formulas. Maybe our method provides a theoretical basis.
\begin{lem} Let $l$ and $n$ denote arbitrary complex numbers. Suppose that $x$ is complex with $\Re x >0$ or that either $n$ or $l$ is an odd integer. Then
\begin{align}
P:=&\frac{\Gamma\left(\frac 14(x+l+n+1)\right)\Gamma\left(\frac 14(x-l+n+1)\right)\Gamma\left(\frac 14(x+l-n+1)\right)\Gamma\left(\frac 14(x-l-n+1)\right)}{\Gamma\left(\frac 14(x+l+n+3)\right)\Gamma\left(\frac 14(x-l+n+3)\right)\Gamma\left(\frac 14(x+l-n+3)\right)\Gamma\left(\frac 14(x-l-n+3)\right)}\label{Entry-39}\\
=&
\begin{array}{ccccccccccc}
8 & & 1^2-n^2 & & 1^2-l^2 & & 3^2-n^2 & & 3^2-l^2 &\\
\cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}
(x^2-l^2+n^2-1)/2 & + & 1 & + & x^2-1 & + & 1 & +&x^2-1 & +\cdots
\end{array}.\nonumber
\end{align}
\end{lem}
By replacing $x$ by $4x$, and taking $(l,n)=(0,0)$, $(l,n)=(1/4,1/2)$,
$(l,n)=(1/3,1/2)$, $(l,n)=(1/8,1/2)$, respectively, the authors have checked that Lemma 6 is not the optimal continued fraction expansion. Now, guided by these tests, we may refine it into a uniform expression as follows.
\begin{thm} Under the same conditions as in Lemma 6, we have
\begin{align}
P=&\begin{array}{ccccccc}
8& & (1^2-n^2)(1^2-l^2) & & (3^2-n^2)(3^2-l^2) & \\
\cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}
\frac 12(x^2-l^2-n^2+1)&-&x^2-l^2-n^2+(3^2+1^2-1)&-&x^2-l^2-n^2+(5^2+3^2-1)
&-\cdots
\end{array}\\
=&\begin{array}{ccc}
8&\\
\cline{1-1}\cline{3-3}
\frac 12(x^2-l^2-n^2+1)&+
\end{array}
\K_{m=1}^{\infty}
\left(\frac{-\left((2m-1)^2-n^2\right)\left((2m-1)^2-l^2\right)}
{x^2-l^2-n^2+8 m^2+1}\right).\nonumber
\end{align}
\end{thm}
\proof
We follow the method of Entry 25 in Berndt~[8, p. 141]. First, we rewrite Lemma 6 in the form
\begin{align*}
\frac{8}{P}+\frac{1}{2}(x^2+l^2-n^2-1)=x^2-1+
\begin{array}{ccccccccc}
1^2-n^2 & & 1^2-l^2 & & 3^2-n^2 & & 3^2-l^2 &\\
\cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1 & + & x^2-1 & + & 1 & +&x^2-1 & +\cdots
\end{array},
\end{align*}
or
\begin{align*}
\frac{1}{8/P+\frac{1}{2}(x^2+l^2-n^2-1)}=
\begin{array}{ccccccccccc}
1&&1^2-n^2 & & 1^2-l^2 & & 3^2-n^2 & & 3^2-l^2 &\\
\cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}
x^2-1&+&1 & + & x^2-1 & + & 1 & +&x^2-1 & +\cdots
\end{array}.
\end{align*}
Secondly, by Entry 14 of Berndt~[8, p. 121]~(for an infinite form, see [8, p. 157]), we have
\begin{align}
&\frac{1}{8/P+\frac{1}{2}(x^2+l^2-n^2-1)}\\
=&
\begin{array}{ccccccc}
1& & (1^2-n^2)(1^2-l^2)& & (3^2-n^2)(3^2-l^2) &\\
\cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}
x^2-n^2&-&x^2-l^2-n^2+(3^2+1^2-1)&-&x^2-l^2-n^2+(5^2+3^2-1)&
-\cdots
\end{array}\nonumber\\
&
\begin{array}{ccc}
\quad&\left( (2m-1)^2-n^2\right)\left( (2m-1)^2-l^2\right)& \\
\cline{2-2}
-&x^2-l^2-n^2+\left( (2m+1)^2+(2m-1)^2-1\right)&-\cdots
\end{array}.\nonumber
\end{align}
Note that $(2m+1)^2+(2m-1)^2-1=8 m^2+1$. Now take the reciprocal of both sides above and then solve for $P$,
which again involves taking reciprocals. This completes the proof of Theorem 5.\qed
The following theorem is the fastest possible form for Entry 26 in Berndt~[8, p. 145].
\begin{thm}
Suppose that either $n$ is an odd integer and $x$ is any complex number or that $n$ is an arbitrary complex number and $\Re x>0$. Then
\begin{align}
&\frac{\Gamma^2\left(\frac 14(x+n+1)\right)\Gamma^2\left(\frac 14(x-n+1)\right)}{\Gamma^2\left(\frac 14(x+n+3)\right)\Gamma^2\left(\frac 14(x-n+3)\right)}\\
=&\begin{array}{ccc}
8&\\
\cline{1-1}\cline{3-3}
\frac 12(x^2-n^2+1)&+
\end{array}
\K_{m=1}^{\infty}\left(
\frac{-(2m-1)^2\left((2m-1)^2-n^2\right)}
{x^2-n^2+8 m^2+1}\right).\nonumber
\end{align}
\end{thm}
\proof Setting $l=0$ in Theorem 5, the desired equality follows at once.\qed
Similarly, we give another form of the Corollary in Berndt~[8, p. 146].
\begin{cor} If $\Re x>0$, then
\begin{align}
\frac{\Gamma^4\left(\frac 14(x+1)\right)}{\Gamma^4\left(\frac 14(x+3)\right)}
=\begin{array}{ccc}
8&\\
\cline{1-1}\cline{3-3}
\frac 12(x^2+1)&+
\end{array}
\K_{m=1}^{\infty}
\left(\frac{-(2m-1)^4}
{x^2+8 m^2+1}\right).
\end{align}
\end{cor}
\proof Setting $n=0$ in Theorem 6, the corollary follows readily.\qed
\section{Some new conjectural continued fraction formulas}
In this section, we will give three examples to illustrate how to guess their fastest possible continued fraction expansions. For the recent results involving these functions, see Mortici, Cristea and Lu~\cite{MCL}, Cao and Wang~\cite{CW}, and Chen~\cite{Chen}.
\subsection{For $\frac{\Gamma^3(x+\frac 13)}{\Gamma^3(x+1)}$}
In this subsection, we will use the function $\frac{\Gamma^3(x+\frac 13)}{\Gamma^3(x+1)}$ as an example to explain how to guess its fastest possible continued fraction expansion, which consists of the following steps.
\bigskip
{\bf (1).} Define
\begin{align*}
f(x):=\frac{\Gamma^3(x+\frac 13)}{\Gamma^3(x+1)}.
\end{align*}
Find the structure of $CF_k\left(f(x)\right)$ or $\mathrm{MC_k(x)}$ by the \emph{Mortici-transformation} and~\eqref{MT-Mathematica-Program}. We may determine that $CF_k\left(f(x)\right)$ is of \emph{Type-II}, and its $\mathrm{MC}$-point $\omega$ equals $1/6$. Here we omit the details of finding the coefficients in $CF_k\left(f(x)\right)$, since the procedure is very similar to that of Sec.~5 or Subsection 8.3 below.
\bigskip
{\bf (2).} We write $CF_k\left(f(x)\right)$ in the \emph{simplified form}~\eqref{canonical-form}:
\begin{align}
CF_k\left(f(x)\right)=\frac{1}{(x+\omega)^2+\lambda_{-1}+
\K_{j=0}^{k-1}\left(\frac{\kappa_j}{(x+\omega)^2+\lambda_{j}}
\right)},
\end{align}
where $\lambda_{-1}=\frac{5}{2^2 3^3}$.
\bigskip
{\bf (3).} We write the two sequences $(\kappa_m)_{m\ge 0}$ and $(\lambda_m)_{m\ge 0}$ in \emph{canonical form}, then extract their common factors, respectively. For example, one may use the \emph{Mathematica} command ``FactorInteger" to do that. In this way, we write these two
sequences in the form
\begin{align*}
&(\kappa_0,\kappa_1,\kappa_2,\kappa_3,\ldots)=-\frac{2}{3^6}
\left(\frac{1^3 1^3}{1^3},\frac{2^3 5^3}{3^2},\frac{4^3 7^3}{5^2},\frac{5^3 11^3}{7^2},
\frac{7^3 13^3}{9^2},\ldots\right),\\
&\lambda_{-1}=\frac{5}{2^2 3^3}, (\lambda_0,\lambda_1,\lambda_2,\lambda_3,\ldots)=\frac{1}{2^23^3}(
\frac{5^27}{1\cdot 3},\frac{3307}{3\cdot 5},\frac{17167}{5\cdot 7},\frac{5\cdot 31\cdot 353}{7\cdot 9},\ldots ).
\end{align*}
\bigskip
{\bf (4).} Now we will look for the general terms of the sequences $(\kappa_m)_{m\ge 0}$ and $(\lambda_m)_{m\ge 0}$. We try to decompose them into simpler ``partial sequences".
\bigskip
(4-1). We observe easily that the sequence $(a_m)_{m\ge 0}=(1,3,5,\ldots)$ has the general term $a_m=2m+1$. While, for the sequence $(b_m)_{m\ge 0}=(1\cdot 3,3\cdot 5,5\cdot 7,\ldots)$, its general term is $b_m=(2m+1)(2m+3)$.
\bigskip
(4-2). Let us consider the sequence $(\alpha_m)_{m\ge 0}=(1,2,4,5,7,8,10,11,\ldots)$, which is the sequence generated by deleting the sequence $(3k)_{k\ge 1}$ from the positive integer sequence $(k)_{k\ge 1}$. We can check that the sequence $(\alpha_m)_{m\ge 0}$ satisfies the following difference equation
\begin{align*}
\alpha_m-\alpha_{m-1}=\begin{cases}
1,&\mbox{if $m$ is odd,}\\
2,& \mbox{if $m$ is even,}
\end{cases}
\end{align*}
with the initial value $\alpha_0=1$. Hence, we deduce that the general term equals
\begin{align*}
\alpha_m=m+1+\lfloor\frac{m}{2}\rfloor.
\end{align*}
\bigskip
(4-3). Similarly, the sequence $(\beta_m)_{m\ge 0}=(1,5,7,11,13,17,19,\ldots)$ satisfies the following difference equation
\begin{align*}
\beta_m-\beta_{m-1}=\begin{cases}
4,&\mbox{if $m$ is odd,}\\
2,& \mbox{if $m$ is even,}
\end{cases}
\end{align*}
with the initial condition $\beta_0=1$, and we get
\begin{align*}
\beta_m=3 m+1+\frac{1-(-1)^m}{2}.
\end{align*}
\bigskip
(4-4). The sequence $(\xi_m)_{m\ge 0}=(5^27,3307,17167,5\cdot 31\cdot 353,5\cdot 13\cdot 2063,5\cdot 7\cdot 19\cdot 419,516847,7\cdot 13\cdot 9697, \ldots)$ is the most difficult. Consider a new sequence
$(u_m)_{m\ge 0}$ defined by
\begin{align}
u_m:=\left\lfloor \frac{\xi_m}{(2m+1)(2m+3)}\right\rfloor.
\end{align}
By using the \emph{Mathematica} command ``Quotient", we can verify
\begin{align}
(u_0,u_1,u_2,\dots)=&(58, 220, 490, 868, 1354, 1948, 2650, 3460,\ldots)\\
=&2(29, 110, 245, 434, 677, 974, 1325, 1730,\ldots)\nonumber\\
:=&2 (v_m)_{m\ge 0}.\nonumber
\end{align}
\bigskip
We may check that the sequence
$(v_m)_{m\ge 0}$ satisfies the following difference equation
\begin{align}
v_{m}-2 v_{m-1}+v_{m-2}=54
\end{align}
with the initial conditions $v_0=29$ and $v_1=110$. Solving this difference equation of \emph{order} 2, we deduce that the general term equals
\begin{align}
v_m=27(m+1)^2+2.
\end{align}
Now we rewrite the general term $\xi_m$ in the form
\begin{align}
\xi_m=2(2m+1)(2m+3)v_m+w_m.
\end{align}
We may check that $(w_0,w_1,w_2,\ldots)=(1, 7, 17, 31, 49, 71, 97, 127,\dots)$. Quite similarly to the previous sequence $(v_m)_{m\ge 0}$, $(w_m)_{m\ge 0}$ also satisfies a difference equation as follows
\begin{align}
w_{m}-2 w_{m-1}+w_{m-2}=4
\end{align}
with the initial conditions $w_0=1$ and $w_1=7$. In this way, we get
\begin{align}
w_m=2 (m+1)^2-1.
\end{align}
Substituting (8.5) and (8.8) into (8.6), we discover
\begin{align}
\xi_m=2(2m+1)(2m+3)\left(27(m+1)^2+2\right)+2 (m+1)^2-1.
\end{align}
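For instance~(a quick check of our own), $m=1$ gives $\xi_1=2\cdot 3\cdot 5\cdot(27\cdot 4+2)+2\cdot 4-1=3300+7=3307$, which matches the data above.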
Combining the above results and after some simplification, we conjecture that the general terms should be
\begin{align}
\kappa_m=&-\frac{2}{729}\frac{\left(m+1+\lfloor\frac{m}{2}
\rfloor\right)^3\left(3 m+1+\frac{1-(-1)^m}{2}
\right)^3}{(2m+1)^2},\quad (m\ge 0)\label{kappa-def}\\
\lambda_m=&\frac{1}{108}\left(2(27 (m+1)^2+2)+\frac{2 (m+1)^2-1}{(2m+1)(2m+3)}\right),\quad (m\ge -1).\label{lambda-def}
\end{align}
Note that we used the fact that the last formula also holds true for $m=-1$.
\bigskip
{\bf (5).} Define two sequences $(\kappa_m)_{m\ge 0}$ and $(\lambda_m)_{m\ge -1}$ by~\eqref{kappa-def} and~\eqref{lambda-def}, respectively. By making use of~\eqref{MT-Mathematica-Program}, we check that the above conjectures are still true for some ``larger" $m$.
\bigskip
{\bf (6).} Further simplification for the general term $\kappa_m$. Actually, we have
\begin{align}
\left(m+1+\lfloor\frac{m}{2}
\rfloor\right)\left(3 m+1+\frac{1-(-1)^m}{2}
\right)=&\frac{9}{8}\left((2 m + 1)^2 - (\frac 13)^2\right)\\
=&\frac{(3m+1)(3m+2)}{2},\nonumber
\end{align}
which may be proved easily by considering $m$ odd and $m$ even separately. Hence
\begin{align}
\kappa_m=-\frac{1}{2916}\frac{(3m+1)^3(3m+2)^3}{(2m+1)^2},\quad (m\ge 0).\label{kappa-def-1}
\end{align}
\bigskip
Finally, we propose the following reasonable conjecture.
\bigskip
\noindent{\bf Open Problem 1.} Let the two sequences $(\kappa_m)_{m\ge 0}$ and $(\lambda_m)_{m\ge -1}$ be defined by~\eqref{kappa-def-1} and~\eqref{lambda-def}, respectively. Then for real $x> -1/6$ we have
\begin{align}
\frac{\Gamma^3(x+\frac 13)}{\Gamma^3(x+1)}=\frac{1}{(x+\frac 16)^2+\lambda_{-1}+\frac{\kappa_0}{(x+\frac 16)^2+\lambda_0+\frac{\kappa_1}{(x+\frac 16)^2+\lambda_1+\frac{\kappa_2}{(x+\frac 16)^2+\lambda_2+\frac{\kappa_3}{(x+\frac 16)^2+\lambda_3+\ddots}}}}}.\label{Open Problem 1}
\end{align}
\begin{rem}
Open Problem 1 means that if there exists a fastest possible continued fraction expansion for the function $\frac{\Gamma^3(x+\frac 13)}{\Gamma^3(x+1)}$, then it must be the continued fraction expression on the right side of~\eqref{Open Problem 1}.
\end{rem}
\bigskip
Replacing $x$ by $x-1/6$ and then simplifying, we get the following equivalent form of Open Problem 1.
\bigskip
\noindent{\bf Open Problem $1^{\prime}$.} For real $x> 0$,
\begin{align}
\frac{\Gamma^3(x+\frac 16)}{\Gamma^3(x+\frac 56)}
=\begin{array}{ccc}
1&\\
\cline{1-1}\cline{3-3}
x^2+\frac{5}{108}&+
\end{array}\K_{n=1}^{\infty}\left(\frac{-\frac{(3n-2)^3(3n-1)^3}
{2916(2n-1)^2}}
{x^2+\frac{1}{108}\left(2(27 n^2+2)+\frac{2 n^2-1}{(2n-1)(2n+1)}\right)}\right).
\end{align}
\subsection{For $\frac{\Gamma^3(x+\frac 23)}{\Gamma^3(x+1)}$}
The main purpose of this subsection is to conjecture the fastest possible continued fraction expansion for the function $f(x)$, which is defined by
$$f(x):=\frac{\Gamma^3(x+\frac 23)}{\Gamma^3(x+1)}.$$
We follow the same method as described in the last subsection. By testing, we observe that $CF_k\left(f(x)\right)$ is of \emph{Type-I}, and its $\mathrm{MC}$-point $\omega$ is $1/3$. Some computation data
are listed as follows:
\begin{align}
CF_k\left(f(x)\right)=\frac{1}{x+\frac 13+\K_{j=0}^{k-1}\left(\frac{\kappa_j}{x+\frac 13}\right)},
\end{align}
where
\begin{align}
\kappa_0=\frac{1}{27},\quad (\kappa_1,\kappa_2,\kappa_3,\ldots)=\frac{1}{54}\left(\frac{2^3}{1},
\frac{4^3}{3},\frac{5^3}{3},\frac{7^3}{5},
\frac{8^3}{5},\frac{10^3}{7},\frac{11^3}{7},
\frac{13^3}{9},\frac{14^3}{9},\frac{16^3}{11},\frac{17^3}{11},
\ldots\right).\label{kappa-1}
\end{align}
\bigskip
Similarly to the sequence $(\alpha_m)_{m\ge 0}$ in the last subsection, it is not difficult to verify that the general term of
the sequence $(\kappa_m)_{m\ge 1}$ should be
\begin{align}
\kappa_m=\frac{1}{54}\frac{\left(m+1+\lfloor\frac{m}{2}
\rfloor\right)^3}{2\lfloor\frac{m}{2}\rfloor+1}.\label{kappa-2}
\end{align}
\end{align}
\noindent{\bf Open Problem 2.} For all real $x>-\frac 13$, we have
\begin{align}
\frac{\Gamma^3(x+\frac 23)}{\Gamma^3(x+1)}=\frac{1}{x+\frac 13 +\frac{\frac{1}{27}}{x+\frac 13+\frac{\frac{1}{54}\frac{2^3}{1}}{x+\frac 13+\frac{\frac{1}{54}\frac{4^3}{3}}{x+\frac 13+\frac{\frac{1}{54}\frac{5^3}{3}}{x+\frac 13+\ddots}}}}}.
\end{align}
Replacing $x$ by $x-1/3$, we obtain the following equivalent form of Open Problem 2.
\bigskip
\noindent{\bf Open Problem $2^{\prime}$.} Let $\kappa_0=\frac {1}{27}$, and let the sequence $(\kappa_m)_{m\ge 1}$ be defined as in~\eqref{kappa-2}. Let $x>0$. Then
\begin{align}
\frac{\Gamma^3(x+\frac 13)}{\Gamma^3(x+\frac 23)}=\begin{array}{ccc}
1&\\
\cline{1-1}\cline{3-3}
x&+
\end{array}
\K_{m=0}^{\infty}\left(\frac{\kappa_m}{x}\right).\label{Open Problem-2-1}
\end{align}
Since the partial numerators of the continued fraction on the right side of~\eqref{Open Problem-2-1} are all positive, we can prove the following consequence easily.
\begin{cor} Let $x>0$, and assume that Open Problem $2^{\prime}$ is true. Then for every non-negative integer $k$,
\begin{align}
\begin{array}{ccc}
1&\\
\cline{1-1}\cline{3-3}
x&+
\end{array}
\K_{j=0}^{2k+1}\left(\frac{\kappa_j}{x}\right)
<\frac{\Gamma^3(x+\frac 13)}{\Gamma^3(x+\frac 23)}<\begin{array}{ccc}
1&\\
\cline{1-1}\cline{3-3}
x&+
\end{array}
\K_{j=0}^{2k}\left(\frac{\kappa_j}{x}\right).
\end{align}
\end{cor}
\begin{rem}
The authors have checked that Corollary 3 is true for $k\le 10$.
\end{rem}
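\bigskip
A reader may repeat such numerical tests directly~(a sketch of our own; the function \texttt{cf} below is a hypothetical helper that truncates the continued fraction of Open Problem $2^{\prime}$ after the term $\kappa_k$):
\begin{verbatim}
kap[0] = 1/27;
kap[m_] := (1/54) (m + 1 + Floor[m/2])^3/(2 Floor[m/2] + 1);
cf[x_, k_] := 1/(x + Fold[#2/(x + #1) &, 0,
    Reverse@Table[kap[j], {j, 0, k}]]);
N[cf[2, 30] - Gamma[2 + 1/3]^3/Gamma[2 + 2/3]^3]
(* small, supporting the conjecture numerically *)
\end{verbatim}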
\subsection{For $\frac{\Gamma(x+\eta)\Gamma(x+1-\eta)}{\Gamma^2(x+1)}$}
Let $\eta$ be a real number with $0<\eta<1$. In this subsection, we will
discuss the continued fraction approximation for the following ratio of gamma functions
\begin{align}
G_{\eta}(x):=\frac{\Gamma(x+\eta)\Gamma(x+1-\eta)}{\Gamma^2(x+1)}.
\label{G-eta-Def}
\end{align}
It follows from~\eqref{Stirling's approximation} that
\begin{align}
\lim_{x\rightarrow\infty}x G_{\eta}(x)=1.
\end{align}
Now let us begin to look for $CF_k(G_{\eta}(x))$.
\bigskip
\noindent {\bf(Step 1) The initial-correction.} Note that $\nu=1$ in~\eqref{Mortici-transformation}. It follows readily from the recurrence formula $\Gamma(x+1)=x\Gamma(x)$ that
\begin{align*}
\frac{G_{\eta}(x)}{G_{\eta}(x+1)} =\frac{(x+1)^2}{(x+\eta)(x+1-\eta)}.
\end{align*}
Now we apply the \emph{Mortici-transformation} to determine $\Phi_0(x)=x+c_0$. By making use of \emph{Mathematica} software, one has
\begin{align}
\ln\frac{(x+1)^2}{(x+\eta)(x+1-\eta)}+\ln \frac{x+c_0}{x+1+c_0}
=\frac{c_0-\eta+ \eta^2}{x^2}+O\left(\frac{1}{x^3}\right).
\end{align}
Solving $c_0-\eta+ \eta^2=0$, we obtain $c_0=\eta - \eta^2$.
\bigskip
\noindent {\bf(Step 2) The first-correction.} Let
\begin{align}
\mathrm{MC}_1(x)=\frac{\kappa_0}{x+\lambda_0}.
\end{align}
Similarly to the initial-correction, we also have
\begin{align}
&\ln\frac{(x+1)^2}{(x+\eta)(x+1-\eta)}+\ln \frac{x+c_0+\mathrm{MC}_1(x)}{x+1+c_0+\mathrm{MC}_1(x+1)}\\
=&\frac{2 \kappa_0 - \eta^2 + 2 \eta^3 - \eta^4}{x^3}+\frac{-3 \kappa_0 - 3 \kappa_0 \lambda_0 - 3 \kappa_0 \eta + 2 \eta^2 + 3 \kappa_0 \eta^2 -
3 \eta^3 - \eta^4 + 3 \eta^5 - \eta^6}{x^4}\nonumber\\
&+\frac{g}{x^5}
+O\left(\frac{1}{x^6}\right),\nonumber
\end{align}
where
\begin{align*}
g=&4 \kappa_0 - 2 \kappa_0^2 + 6 \kappa_0 \lambda_0 + 4 \kappa_0 \lambda_0^2 + 6 \kappa_0 \eta + 4 \kappa_0 \lambda_0 \eta - 3 \eta^2 - 2 \kappa_0 \eta^2 \\
&- 4 \kappa_0 \lambda_0 \eta^2 + 4 \eta^3 - 8 \kappa_0 \eta^3 + 2 \eta^4 + 4 \kappa_0 \eta^4 - 2 \eta^5 - 4 \eta^6 + 4 \eta^7 - \eta^8.
\end{align*}
Solving
\begin{align}
\begin{cases}
&2 \kappa_0 - \eta^2 + 2 \eta^3 - \eta^4=0\\
&-3 \kappa_0 - 3 \kappa_0 \lambda_0 - 3 \kappa_0 \eta + 2 \eta^2 + 3 \kappa_0 \eta^2 -
3 \eta^3 - \eta^4 + 3 \eta^5 - \eta^6=0\\
&g=0,
\end{cases}
\end{align}
we get
\begin{align}
\kappa_0 = \frac{(-1 + \eta)^2 \eta^2}{2},\qquad \lambda_0 =
\frac{1 - \eta + \eta^2}{3}.
\end{align}
\bigskip
\noindent {\bf(Step 3) The second-correction to the sixth-correction.} Similarly to the first-correction, we use \emph{Mathematica} software to
find that the second-correction to the sixth-correction are all of \emph{Type-I}, and then solve for all the coefficients in these correction functions. It should be remarked that, because of the parameter $\eta$, the related computations become very large and complex, so we need to use the \emph{Mathematica} command ``{\emph{Simplify}}". Here we list the final computing results as follows:
\begin{align*}
&\kappa_1 =\frac{(-2 - \eta + \eta^2)^2}{36}
,\qquad \lambda_1 =\frac{4 - \eta + \eta^2}{15}
;\\
& \kappa_2 =\frac{(-6 - \eta + \eta^2)^2}{100}
,\qquad \lambda_2 =\frac{9 - \eta + \eta^2}{35}
;\\
& \kappa_3 =\frac{(-12 - \eta + \eta^2)^2}{196}
,\qquad \lambda_3 =\frac{16 - \eta + \eta^2}{63}
;\\
& \kappa_4 =\frac{(-20 - \eta + \eta^2)^2}{324}
,\qquad \lambda_4 =\frac{25 - \eta + \eta^2}{99};\\
& \kappa_5 =\frac{(-30 - \eta + \eta^2)^2}{484}
,\qquad \lambda_5 =\frac{36 - \eta + \eta^2}{143}.
\end{align*}
Similarly to Open Problem 1, by careful data analysis and further checking, we may propose the following conjecture.
\bigskip
\noindent {\bf Open Problem 3.} For $x>0$,
\begin{align}
\frac{\Gamma(x+\eta)\Gamma(x+1-\eta)}{\Gamma^2(x+1)}=\frac{1}
{x+\eta-\eta^2+\K_{m=0}^{\infty}\left(\frac{\kappa_m}{x+\lambda_m}
\right)},
\end{align}
where $\kappa_0=\frac{(-\eta+\eta^2)^2}{2}$ and
\begin{align}
\kappa_m=&\frac{\left(-m (m + 1) - \eta+ \eta^2\right)^2}{4 (2 m + 1)^2}=\frac{(m+\eta)^2(m-\eta+1)^2}{4 (2 m + 1)^2},\quad m\ge 1\\
\lambda_m=&\frac{(1 + m)^2 - \eta + \eta^2}{(2 m + 1) (2 m + 3)},\quad m\ge 0.
\end{align}
\begin{rem}
If we take $\eta=\frac 12$, then the above conjecture implies \eqref{General-Lord-Brouncker-1}~(i.e. the generalized Lord Brouncker's continued fraction formula). Here we note that
\begin{align*}
&\frac{(m+\eta)^2(m-\eta+1)^2}{4 (2 m + 1)^2}=\frac{(m+\frac 12)^4}{4 (2 m + 1)^2}=\frac{(2 m + 1)^2}{2^6},\\
&\frac{(1 + m)^2 - \eta + \eta^2}{(2 m + 1) (2 m + 3)}=
\frac{(1 + m)^2 - (\frac 12)^2}{(2 m + 1) (2 m + 3)}=\frac{(m+\frac 12)(m+\frac 32)}{(2 m + 1) (2 m + 3)}=\frac 14.
\end{align*}
\end{rem}
\section{Conclusions}
In this paper, we present a systematic way to construct best possible finite and infinite continued fraction approximations for a class of functions. In particular, the method described in Sec.~4 is suitable for ratios of gamma functions; many examples can be found in the nice survey papers of Qi~\cite{Qi} and Qi and Luo~\cite{QL}. As our method is constructive, all the computations involved may be carried out with a suitable symbolic computation software package, e.g.~\emph{Mathematica}. In some sense, the main advantage of our method is that the formal continued fraction approximation of order $k$ is the fastest possible as $x$ tends to infinity. Concerning applications in approximation theory and numerical computation, our method provides much better approximation formulas than the power series approach~(e.g. Taylor's formula) for a class of ``good functions".
In addition, the \emph{multiple-correction method} provides a useful tool for testing and guessing continued fraction expansions involving a specified function. Our method should therefore help advance approximation theory, the theory of continued fractions, the study of generalized hypergeometric functions, etc. Further, if we can obtain some new continued fraction expansions, these formulas could probably be used to study the irrationality and transcendence of the constants involved.
\section{Introduction}
$2^K$ factorial designs involve $K$ factors each with two levels, often denoted as the ``high level'' and ``low level'' of the factor (Yates 1937, Fisher 1942). With $K$ factors, there are $2^K$ unique treatment combinations to which units can be assigned, and often the same number of units are assigned to each combination. Factorial designs are often discussed in an industrial setting, where units are essentially identical and the assignment of units to treatments is arbitrary. However, in recent years factorial designs have become more prevalent in clinical trials and the social sciences, where pre-treatment covariates are available and reveal that units differ. For example, the New York Department of Education (NYDE) had five ``incentive programs'' to introduce to high schools, and it wanted to estimate the effect of these programs and their combinations on schools' performance. Given 50 pre-treatment covariates for each of the 1,376 schools, how should the department allocate the schools to the 32 different treatment combinations of the design such that the effects of the incentive programs and their combinations are well-estimated?
An initial idea for this example is to randomize the schools to the 32 treatment combinations. Randomized experiments are considered the ``gold standard'' because randomization balances all potential confounders \textit{on average} (Krause and Howard 2003, Morgan and Rubin 2012). However, many have noted that randomized experiments can yield ``bad allocations,'' where some covariates are not well-balanced across treatment groups (Seidenfeld 1981, Lindley 1982, Papineau 1994, and Rosenberger and Sverdlov 2008). Covariate imbalance among different treatment groups complicates the interpretation of estimated treatment effects.
If ``bad allocations'' are a concern for treatment-versus-control experiments, they are even more of a concern for factorial designs, because any randomization may create covariate imbalance across some of the $2^K$ treatment combinations. This point has been given little attention in the literature. Classic experimental design textbooks like Box, Hunter, and Hunter (2005) and Wu and Hamada (2009) suggest using blocking to balance important covariates; however, the use of blocking is not obvious with many covariates, some with many levels. Additionally, Wu and Hamada (2009) note that blocking can increase precision for some factorial effect estimators and ``sacrifice'' the precision of other factorial effect estimators. To address this issue, we propose a rerandomization algorithm for balanced $2^K$ factorial designs based on Morgan and Rubin (2012), which developed a framework for rerandomization in the treatment-versus-control case. Here we establish several theoretical properties of rerandomization in balanced $2^K$ factorial designs that increase the precision of factorial effect estimators.
Both rerandomization and factorial designs have been explored since Fisher in the 1920s. To our knowledge, however, no one has laid out the framework for implementing rerandomization for factorial designs. Rubin (2008) noted that many did not implement rerandomization because it was computationally intensive; however, with recent improvements in computational power, some have revisited rerandomization. For example, Cox (2009), Bruhn and McKenzie (2009), and Worrall (2010) all discuss and recommend rerandomization, and Morgan and Rubin (2012) formalized these recommendations in treatment-versus-control settings.
Also, often there are few pre-experiment covariates to consider in a factorial design, or they are categorical - such as the ``batch'' of units produced, as described in Box, Hunter, and Hunter (2005) - and thus simple blocking is an appropriate strategy. In contrast, Wald, et al. (1991), Apfel, et al. (2002), and Bays, et al. (2004) all describe clinical trials that utilized randomized factorial designs with non-categorical covariates, which could have benefited from a design that ensured covariate balance. To illustrate how rerandomization can be utilized in such situations, we use an education example discussed in Dasgupta, et al. (2015).
Our proposed rerandomization algorithm is not the first procedure that attempts to balance non-categorical covariates for experiments with multiple treatments. The Finite Selection Model (FSM) developed by Morris (1979) assigns units to multiple treatment groups such that covariates are relatively balanced among the groups. Morgan and Rubin (2012) noted that rerandomization and the FSM both attempt to ensure covariate balance, but the FSM does not maintain the correlation structure among covariates, whereas rerandomization can.
Xu and Kalbfleisch (2013) proposed the ``balance match weighted design'' for multiple treatment groups, which performs many randomizations and then selects the randomization that yields the best covariate balance. This is similar to rerandomization, but rerandomization's theoretical guarantees, such as balancing on unobserved covariates on average in addition to improving balance for observed covariates, is appealing. Our rerandomization algorithm can also incorporate various desiderata, such as factorial effects and covariates that vary in importance, which makes the procedure particularly flexible.
In Section \ref{s:review} we review rerandomization for the treatment-versus-control case, and in Section \ref{s:2K} we establish notation for $2^K$ factorial designs using the potential outcomes framework. In Section \ref{ss:obs} we outline the proposed rerandomization procedure, and in Section \ref{s:properties} we establish theoretical properties that formalize the ways rerandomization is preferable to standard randomization. In Section \ref{s:schools} we use our rerandomization procedure on data from the NYDE.
\section{Review of Rerandomization} \label{s:review}
Rubin (2008) recalled a conversation with Bill Cochran, who in turn recalled a conversation with R.A. Fisher, who asserted that a way to protect ourselves against particularly bad randomizations is to rerandomize until a randomization is ``acceptable.'' Morgan and Rubin (2012) suggested implementing rerandomization for a treatment-versus-control experiment as follows:
\begin{enumerate}
\item Collect covariate data.
\item Specify a balance criterion determining when a randomization is acceptable.
\item Randomize units to treatment and control groups.
\item Check the balance criterion. If the criterion is met, go to Step 5. Otherwise, return to Step 3.
\item Conduct the experiment using the final randomization obtained in Step 4.
\item Analyze the results using a randomization test, keeping only simulated randomizations that satisfy the balance criteria specified in Step 2.
\end{enumerate}
Morgan and Rubin (2012) used the squared Mahalanobis distance (Mahalanobis 1936) as a measure for covariate balance. With $n$ units, half assigned to treatment and half assigned to control, and $p$ observed covariates for each unit, the squared Mahalanobis distance for the treatment-versus-control situation is defined as:
\begin{align*}
M \equiv (\bar{{\bm{x}}}_T - \bar{{\bm{x}}}_C)^T \text{cov}[(\bar{{\bm{x}}}_T - \bar{{\bm{x}}}_C)]^{-1}(\bar{{\bm{x}}}_T - \bar{{\bm{x}}}_C),
\end{align*}
where $\bar{{\bm{x}}}_T$ is the $p$-component column vector of covariate means for units assigned to the treatment and $\bar{{\bm{x}}}_C$ is analogously defined for the control. A randomization is declared acceptable if $M \leq a$ for some threshold $a$. The Mahalanobis distance is well-known within the matching and observational study literature, where it is used to find subsets of the treatment and control that are similar (Rubin 1976, Rosenbaum and Rubin 1985, Gu and Rosenbaum 1993, Rubin and Thomas 2000). Constraining $M \leq a$ can be viewed as finding allocations where the treatment and control covariate means are ``similar enough,'' where the ``enough'' is determined by the threshold $a$. Morgan and Rubin (2012) note that - similar to Rosenbaum and Rubin's (1985) argument that matching using the Mahalanobis distance reduces bias due to imbalances in covariates from observational studies - rerandomization using $M$ reduces the sampling \textit{variance} of the standard treatment effect estimator when outcome variables are correlated with covariates.
Morgan and Rubin (2012) showed that $M$ closely follows a chi-squared distribution with degrees of freedom equal to $p$. Thus, $a$ can be selected by first deciding the percentage, $p_a$, of randomizations that will be ``acceptably well-balanced,'' and then setting $a$ to the $p_a$th percentile of the $\chi^2_p$ distribution. For example, if there are five covariates, and we want to select from the 1\% ``most balanced'' randomizations, then $a$ is set equal to the first percentile of the $\chi^2_5$ distribution.
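For instance, the threshold in this example is a one-line computation; the fragment below is a minimal Python sketch (using \texttt{scipy}, with variable names of our own choosing).
\begin{verbatim}
# Threshold a: the p_a-th percentile of the chi-squared distribution with
# p degrees of freedom (here p = 5 covariates and p_a = 0.01).
from scipy.stats import chi2

a = chi2.ppf(0.01, df=5)  # first percentile of chi^2_5
print(a)                  # only ~1% of randomizations satisfy M <= a
\end{verbatim}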
Morgan and Rubin (2012) mention two options for balancing covariates among multiple treatment groups:
\begin{enumerate}
\item Create a criterion for each pairwise comparison among the treatment groups, and then rerandomize if any group does not satisfy the criterion.
\item Use a statistic that measures multivariate balance, such as those used in standard MANOVA analyses.
\end{enumerate}
To implement (1), a criterion for each of the ${2^K \choose 2} = 2^{K-1}(2^K - 1)$ pairwise comparisons must be chosen, which may be computationally burdensome. To implement (2), there must be a notion of ``within-group'' variance, which is not immediate for unreplicated $2^K$ factorial designs where only one unit is assigned to each treatment combination. Furthermore, we may not want to estimate all factorial effects with the same level of precision; for instance, typically we want to estimate main effects more precisely than high-order interactions, and it is not clear how to incorporate this desideratum into (1) or (2). We propose an intuitive adjustment to (1) for balanced $2^K$ factorial designs, which is equivalent to (2) for replicated factorial designs. The proposed adjustment also allows hierarchies of effect importance.
\section{Notation for $2^K$ Designs under the Potential Outcomes Framework} \label{s:2K}
Consider a balanced $2^K$ factorial design with $n = r2^K$ units and $r$ replicates assigned to each of the $2^K$ treatment combinations. In a $2^K$ factorial design there are $2^K -1$ factorial effects: $K$ main effects, $\binom{K}{2}$ two-way interactions, $\binom{K}{3}$ three-way interactions, and so on.
The $2^K$ treatment combinations of a $2^K$ factorial design are often arranged in a specific order and represented as a $2^K \times K$ matrix whose elements are either -1 (representing the ``low level'' of a factor) or +1 (representing the ``high level'' of a factor), and thus each row indicates a unique combination of factor assignments. This matrix is often referred to as the \textit{design matrix} (Wu and Hamada 2009). One such order for the -1s and +1s is the lexicographic order (Espinosa, et al. 2015) in which each column starts with $-1$, making the first row of the matrix a $K$-component vector of -1s. In the first column, the sign is switched to $+1$ for the second half (i.e., $2^{K-1}$) of the components. In the second column, the sign is switched after every one-fourth (i.e., $2^{K-2}$) of the components. Proceeding this way, the last column consists of alternating -1s and +1s. We denote the design matrix by $\mathbf{G}$; see Table \ref{tab:designMatrix} for a $2^3$ factorial design using the lexicographic order. Another well-known order is Yates' order, in which the columns appear in exactly the reverse order of the lexicographic order.
\begin{table}
\centering
\caption{The Design Matrix, $\mathbf{G}$, for a $2^3$ design} \label{tab:designMatrix}
\begin{tabularx}{0.25\textwidth}{|Y|Y|Y|}
\hline
$A$ & $B$ & $C$ \\ \hline
-1 & -1 & -1 \\
-1 & -1 & +1 \\
-1 & +1 & -1 \\
-1 & +1 & +1 \\
+1 & -1 & -1 \\
+1 & -1 & +1 \\
+1 & +1 & -1 \\
+1 & +1 & +1 \\ \hline
\end{tabularx}
\end{table}
\begin{table}
\centering
\caption{$\widetilde{\mathbf{G}}$ for a $2^3$ design (columns 2-4 represent the design matrix $\mathbf{G}$)} \label{tab:matrix}
\begin{tabularx}{0.75\textwidth}{|c||*3{Y}||*4{Y}|}
\hline
Mean Column & \multicolumn{3}{c||}{Main effect columns} & \multicolumn{4}{c|}{Interaction columns} \\
& $A$ & $B$ & $C$ & $AB$ & $AC$ & $BC$ & $ABC$ \\ \hline
+1 & -1 & -1 & -1 & +1 & +1 & +1 & -1 \\
+1 & -1 & -1 & +1 & +1 & -1 & -1 & +1 \\
+1 & -1 & +1 & -1 & -1 & +1 & -1 & +1 \\
+1 & -1 & +1 & +1 & -1 & -1 & +1 & -1 \\
+1 & +1 & -1 & -1 & -1 & -1 & +1 & +1 \\
+1 & +1 & -1 & +1 & -1 & +1 & -1 & -1 \\
+1 & +1 & +1 & -1 & +1 & -1 & -1 & -1 \\
+1 & +1 & +1 & +1 & +1 & +1 & +1 & +1 \\ \hline
\end{tabularx}
\end{table}
To define the interaction effects, we expand $\mathbf{G}$ (the columns labeled ``main effect columns'' in Table \ref{tab:matrix}) by augmenting its columns. The column for a specific interaction is created using component-wise multiplication of the corresponding main-effects columns. For example, the last column in Table \ref{tab:matrix} represents the three-way interaction among factors $A$, $B$ and $C$, and is obtained by multiplying the components in the three columns of $\mathbf{G}$. Having appended the interaction columns to the right of $\mathbf{G}$ (columns 5-8 of Table \ref{tab:matrix}) to define the interaction effects, a column of +1s is appended to the left of $\mathbf{G}$ (first column of Table \ref{tab:matrix}) which defines the mean effect. The result is a $2^K \times 2^K$ matrix, denoted by $\widetilde{\mathbf{G}}$. The rows of $\widetilde{\mathbf{G}}$ are indexed by $j = 1, \dots, 2^K$, one row for each treatment combination, as indicated by $\mathbf{G}$, and the columns are indexed by $f = 0, 1, \dots, 2^K-1$; ``$f$'' for factorial effects. Let $\widetilde{\mathbf{G}}_j.$ and $\widetilde{\mathbf{G}}_{.f}$ denote the $j$th row and $f$th column of $\widetilde{\mathbf{G}}$, respectively.
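As an aside, both matrices are easy to generate programmatically. The following Python sketch (our own illustrative code) reproduces Tables \ref{tab:designMatrix} and \ref{tab:matrix} for $K = 3$.
\begin{verbatim}
# Construct G (lexicographic order) and G-tilde for a 2^K design (a sketch).
import itertools
import numpy as np

def design_matrices(K):
    # Rows of G: all K-tuples of -1/+1 in lexicographic order.
    G = np.array(list(itertools.product([-1, 1], repeat=K)))
    cols = [np.ones(2**K, dtype=int)]        # mean column of +1s
    # One column per nonempty subset of factors: componentwise product.
    for r in range(1, K + 1):
        for S in itertools.combinations(range(K), r):
            cols.append(np.prod(G[:, list(S)], axis=1))
    return G, np.column_stack(cols)          # 2^K x K and 2^K x 2^K

G, G_tilde = design_matrices(3)
\end{verbatim}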
Let $Y_i(j)$, $i=1, \ldots, n$, $j = 1, \ldots, 2^K$ denote the potential outcome for the $i$th unit when exposed to the $j$th treatment combination, and let ${\bm Y}_i = \left(Y_i(1), \ldots, Y_i(2^K) \right)$ denote the row vector of the $2^K$ potential outcomes for unit $i$. The $i$th row of the left part of Table \ref{tab:fac_effects} shows ${\bm Y}_i$ for a $2^3$ design.
\begin{table}
\centering
\caption{Unit-level and Population-Level Factorial Effects for a $2^3$ Design} \label{tab:fac_effects}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{cccc}
\hline
Unit ($i$) & Potential outcomes (${\bm Y}_i$) & Mean of unit $i$ ($\theta_{i0}$) & Factorial effect $\theta_{if}$ \\ \hline
1 & ${\bm Y}_1 = \left( Y_1(1), \cdots, Y_1(8) \right)$ & $ 8^{-1} {\bm Y}_1 \widetilde{\mathbf{G}}_{.0} $ & $ 4^{-1} {\bm Y}_1 \widetilde{\mathbf{G}}_{.f} $ \\
2 & ${\bm Y}_2 = \left( Y_2(1), \cdots, Y_2(8) \right)$ & $ 8^{-1} {\bm Y}_2 \widetilde{\mathbf{G}}_{.0} $ & $ 4^{-1} {\bm Y}_2 \widetilde{\mathbf{G}}_{.f}$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$n$ & ${\bm Y}_n = \left( Y_n(1), \cdots, Y_n(8) \right)$ & $ 8^{-1} {\bm Y}_n \widetilde{\mathbf{G}}_{.0} $ & $ 4^{-1} {\bm Y}_n \widetilde{\mathbf{G}}_{.f} $ \\ \hline
Average & $\bar{\bm Y} = n^{-1} \left( \sum_i Y_i(1), \cdots, \sum_i Y_i(8) \right)$ & $ 8^{-1} \bar{\bm Y} \widetilde{\mathbf{G}}_{.0} $ & $\bar{\theta}_f = 4^{-1} \bar{\bm Y} \widetilde{\mathbf{G}}_{.f}$ \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
Following Dasgupta, et al. (2015), the $f$th linear factorial effect for unit $i$ is:
\begin{align*}
\theta_{if} = \frac{1}{2^{K-1}} {\bm Y}_i \widetilde{\mathbf{G}}_{.f}, \;\ i=1, \ldots, n, \;\ f=1, \ldots, 2^K-1
\end{align*}
and the population-level $f$th factorial-effect is defined as:
\begin{equation}
\bar{\theta}_f = \frac{1}{n} \sum_{i=1}^n \theta_{if}. \label{eq:pop_fac}
\end{equation}
The $f$th factorial effect at the unit level and the population level, represented as functions of the potential outcomes, are shown in the last column of Table \ref{tab:fac_effects}. The second-to-last column of Table \ref{tab:fac_effects} shows the unit-level mean of the potential outcomes
\begin{align*}
\theta_{i0} = \frac{1}{2^K} {\bm Y}_i \widetilde{\mathbf{G}}_{.0}
\end{align*}
and their grand mean $\bar{\theta}_0$. The population-level grand mean $\bar{\theta}_0$ and the linear factorial effects $\bar{\theta}_1, \ldots, \bar{\theta}_{2^K-1}$ are the estimands (objects of interest) in the standard linear finite-population framework described here. They need to be estimated because only one element of ${\bm Y}_i$ can be observed for each $i$. We discuss unbiased estimators of these estimands in Section \ref{ss:obs}.
The vector $(\theta_{i0}, \ldots, \theta_{i(2^K-1)})$ of estimands for unit $i$ is a linear transformation of the vector ${\bm Y}_i$ of potential outcomes. Letting the factorial effects vector for unit $i$ be
\begin{equation}
{\bm \theta}_i = \left( \theta_{i0}, \frac{ \theta_{i1}}{2}, \ldots, \frac{\theta_{i(2^K-1)}}{2} \right), \hspace{0.1 in} i = 1, \dots, n, \label{eq:thetaVector}
\end{equation}
straightforward algebra shows that the potential outcomes for unit $i$ can be written as
\begin{align*}
{\bm Y}_i = {\bm \theta}_{i} \widetilde{\mathbf{G}}^T
\end{align*}
so the $j$th component of ${\bm Y}_i$ is
\begin{equation}
Y_i(j) = {\bm \theta}_i \widetilde{\mathbf{G}}_{j.}^T. \label{eq:model1a}
\end{equation}
\section{The assignment mechanism, unbiased estimation of factorial effects, and the rerandomization algorithm} \label{ss:obs}
Randomized balanced $2^K$ factorial designs assign $n = r2^K$ units to one of $2^K$ treatment combinations with equal probability such that $r$ units are assigned to each combination. Each combination corresponds to a row of the design matrix $\mathbf{G}$. Let ${\bm{W}}$ be an $n \times K$ random matrix where the $i$th row of ${\bm{W}}$, ${\bm{W}}_{i.}$, indicates the treatment assignment for unit $i$, and has probability $\frac{1}{2^K}$ of being the $j$th row of $\mathbf{G}$: $P(\bm{W}_{i.} = \mathbf{G}_{j.}) = \frac{1}{2^K}$ for $i = 1, \dots, n$, $j = 1, \dots, 2^K$. For notational convenience, we expand ${\bm{W}}$ to $\widetilde{\bm{W}}$ such that $P(\widetilde{\bm{W}}_{i.} = \widetilde{\mathbf{G}}_{j.}) = \frac{1}{2^K}$, where the first column of $\widetilde{\bm{W}}$, $\widetilde{\bm{W}}_{.0}$, is not stochastic and consists of +1s, as in $\widetilde{\mathbf{G}}$; every other element of $\widetilde{\bm{W}}$ for $i = 1, \dots, n$ and $f \in F \equiv \{1, \dots, 2^K-1 \}$ is defined as
\begin{align}
\widetilde{W}_{i f} = \begin{cases} +1 &\mbox{if the $i$th unit is assigned to high level of $f$} \\
-1 & \mbox{if the $i$th unit is assigned to low level of $f$} \end{cases} \label{eq:Wf}
\end{align}
Let $\widetilde{\bm{W}}_{.f}$ be the $n \times 1$ column vector denoting the assigned level of some $f \in F$ for all units. A particular random allocation of units in a $2^K$ design corresponds to one realization of $\widetilde{\bm{W}}$, the observed one, $\widetilde{\bm{W}}^{\text{obs}}$. The observed outcome for the $i$th unit will be the potential outcome $Y_i(j)$ when $\widetilde{\bm{W}}^{\text{obs}}_{i.} = \widetilde{\bm{G}}_{j.}$. Let ${\bm y}_{\text{obs}}$ be the $n$-component column vector of observed outcomes for the $n$ units. The standard estimator of the factorial effect $\bar{\theta}_f$ defined in (\ref{eq:pop_fac}) can be written in terms of the observed outcomes and $\widetilde{\bm{W}}$:
\begin{align}
\hat{\theta}_f = \bar{y}_{f^+} - \bar{y}_{f^-} = \frac{{\bm{y}}_{\text{obs}}^T \widetilde{\bm{W}}_{.f}}{n/2} \label{eq:factorialEffectEstimator}
\end{align}
where $\bar{y}_{f^+}$ is the mean outcome for units assigned to the high level of some $f \in F$, and $\bar{y}_{f^-}$ is analogously defined for the low level of $f$.
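In code, with the columns of $\widetilde{\bm{W}}$ represented as $\pm 1$ vectors, each estimator in (\ref{eq:factorialEffectEstimator}) is a one-line computation; the helper below is a hypothetical sketch.
\begin{verbatim}
import numpy as np

def theta_hat(y_obs, w_f):
    # w_f: the +/-1 column of W-tilde for effect f; returns
    # ybar_{f+} - ybar_{f-}, the standard factorial effect estimate.
    n = len(y_obs)
    return 2 * np.dot(y_obs, w_f) / n
\end{verbatim}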
Rerandomization involves randomizing until an allocation is declared ``acceptable,'' using an acceptance criterion $\phi({\mathbf{X},\widetilde{\bm{W}}})$, where $\mathbf{X}$ is the $n \times p$ covariate matrix, and $\phi$ equals one if an allocation is ``acceptable'' and zero otherwise.
Consider an acceptance criterion that is symmetric in $\widetilde{{\bm{W}}}$, i.e., a $\phi$ such that $\phi({\mathbf{X},\widetilde{\bm{W}}}) = \phi({\mathbf{X}, -\widetilde{\bm{W}}})$. Theorem 1 below establishes that the standard factorial effect estimators are unbiased under any rerandomization scheme that uses a symmetric acceptance criterion.
\noindent
\textbf{Theorem 1}: Suppose a completely randomized balanced $2^K$ factorial design is rerandomized when $\phi({\mathbf{X},\widetilde{\bm{W}}}) = 0$ for some symmetric acceptance criterion. Then, for all $f \in F$,
\begin{align*}
\mathbb{E}[\hat{\theta}_f | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] = \bar{\theta}_f,
\end{align*}
where $\hat{\theta}_f$ is the estimator defined in (\ref{eq:factorialEffectEstimator}) and $\bar{\theta}_f$ is the population-level estimand defined in (\ref{eq:pop_fac}). Because $\phi$ is symmetric in $\widetilde{\bm W}$, the proof of the unbiasedness of $\hat{\theta}_f$ under rerandomization is analogous to that in Morgan and Rubin (2012) for the treatment-versus-control situation.
If the potential outcomes are correlated with pre-experiment covariates, then so will be the observed outcomes and the estimator $\hat{\theta}_f$ for any $f \in F$. Intuitively, we can increase the precision of $\hat{\theta}_f$ by ensuring covariates are ``well-balanced'' over the two groups used to calculate $\hat{\theta}_f$: the ``treatment'' (units assigned to the high level of $f$) and the ``control'' (units assigned to the low level), which suggests a balance function that measures the covariate balance between all pairs of these groups.
One such balance function is the squared Mahalanobis distance proposed by Morgan and Rubin (2012). A way to measure the covariate balance between the ``treatment'' and ``control'' for a particular $f$ is to define
\begin{align}
M_f &\equiv (\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-})^T \text{cov}[(\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-})]^{-1} (\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-}) \notag \\
&= \frac{n}{4} (\bar{{\bm{x}}}_{f^+} -\bar{{\bm{x}}}_{f^-})^T \text{cov}[\mathbf{X}]^{-1}(\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-}) \label{eq:mahalanobisDistance}
\end{align}
where $\bar{{\bm{x}}}_{f^+}$ is the $p$-component vector of covariate means for units assigned to the high level of $f$ and $\bar{{\bm{x}}}_{f^-}$ is analogously defined. Note that, analogous to (\ref{eq:factorialEffectEstimator}), $\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-} = \frac{\mathbf{X}^T \widetilde{\bm W}_{.f}}{n/2}$.
The covariate balance between the ``treatment'' and the ``control'' for a particular $f$ is declared ``acceptable'' by the acceptance criterion
\begin{align}
\phi_f({\mathbf{X}, \widetilde{\bm{W}}}) = \begin{cases}
1 &\mbox{if } M_f \leq a \\
0 &\mbox{if } M_f > a
\end{cases} \label{eq:phif}
\end{align}
for a predetermined threshold $a$. An intuitive procedure that parallels Morgan and Rubin (2012) is to randomize until $\phi_f({\mathbf{X}, \widetilde{\bm{W}}}) = 1$ in order to increase the covariate balance between the ``treatment'' and ``control'' for a particular $f$. We can do this for every $f \in F$, and thereby define the overall acceptance criterion as
\begin{align}
\phi({\mathbf{X},\widetilde{\bm{W}}}) = \prod_{f \in F} \phi_f({\mathbf{X}, \widetilde{\bm{W}}}) = \begin{cases}
1 &\mbox{if } \max_{f \in F} M_f \leq a \\
0 &\mbox{if } \max_{f \in F} M_f > a \label{eq:phi}
\end{cases}
\end{align}
We thus propose the following rerandomization procedure for balanced $2^K$ factorial designs:
\begin{enumerate}
\item Create a squared Mahalanobis distance criterion $M_f$ for each $f \in F$.
\item Choose a threshold criterion $a$ as in Morgan and Rubin (2012).
\item Randomize until $\phi({\mathbf{X},\widetilde{\bm{W}}}) = 1$, where $\phi$ is defined as in (\ref{eq:phi}).
\end{enumerate}
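To fix ideas, the following Python fragment sketches one schematic implementation of this procedure; the helper names, the use of the sample covariance of $\mathbf{X}$, and the random number generator are our own choices, and $\text{cov}[\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-}]$ is evaluated through (\ref{eq:mahalanobisDistance}).
\begin{verbatim}
# Schematic rerandomization for a balanced 2^K factorial design (a sketch).
import itertools
import numpy as np

def rerandomize(X, K, a, rng=None):
    rng = rng or np.random.default_rng()
    n, p = X.shape                       # n = r * 2^K units, p covariates
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    G = np.array(list(itertools.product([-1, 1], repeat=K)))
    while True:
        # Balanced random assignment: n / 2^K units per combination.
        rows = rng.permutation(np.repeat(np.arange(2**K), n // 2**K))
        W = G[rows]                      # n x K matrix of factor levels
        M_max = 0.0
        for r in range(1, K + 1):        # every f in F
            for S in itertools.combinations(range(K), r):
                w = np.prod(W[:, list(S)], axis=1)   # column of W-tilde
                diff = X.T @ w / (n / 2)             # xbar_{f+} - xbar_{f-}
                M_max = max(M_max, (n / 4) * diff @ S_inv @ diff)
        if M_max <= a:                   # accept iff max_f M_f <= a
            return W
\end{verbatim}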
\noindent
We have the following corollary:
\noindent
\textbf{Corollary 1}: Theorem 1 holds if $\phi({\mathbf{X},\widetilde{\bm{W}}})$ is defined as in (\ref{eq:phi}).
Section \ref{s:properties} establishes that the above rerandomization algorithm increases the precision of all factorial effect estimators compared to pure randomization.
\section{Precision Properties of Rerandomization} \label{s:properties}
The proposed rerandomization algorithm checks $M_f$ for \textit{all} $f \in F$, i.e., $\phi({\mathbf{X},\widetilde{\bm{W}}}) = 1$ iff $\phi_f({\mathbf{X}, \widetilde{\bm{W}}}) = 1$ for all $f \in F$. Thus, both the marginal and joint distributions of $\{\bar{\mathbf{x}}_{f^+} - \bar{\mathbf{x}}_{f^-} : f \in F\}$ and $\{\hat{\theta}_f : f \in F\}$ need to be examined.
\noindent
\textbf{Theorem 2}: Assume a completely randomized balanced $2^K$ factorial design is rerandomized using the algorithm proposed at the end of Section \ref{ss:obs}. Then,
\begin{align*}
\mathbb{E}[\overline{{\bm{x}}}_{f^+} - \overline{{\bm{x}}}_{f^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] = 0
\end{align*}
The proof of Theorem 2 follows immediately by symmetry of the acceptance criterion, as in Morgan and Rubin (2012).
\noindent
\textbf{Lemma 1}: Assume a completely randomized balanced $2^K$ factorial design is rerandomized using the algorithm proposed at the end of Section \ref{ss:obs}, and the covariate means are multivariate normal. Then, the elements of $\{\phi_f({\mathbf{X}, \widetilde{\bm{W}}}): f \in F\}$ defined in (\ref{eq:phif}) are mutually independent.
The proof of Lemma 1 is in the Appendix.
\noindent
\textbf{Theorem 3}: Assume a completely randomized balanced $2^K$ factorial design is rerandomized using the algorithm proposed at the end of Section \ref{ss:obs}, and the covariate means are multivariate normal. Then:
\noindent
First, for all $f \in F$,
\begin{align*}
\text{cov}[\overline{{\bm{x}}}_{f^+} - \overline{{\bm{x}}}_{f^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] = v_a \text{cov}[\overline{{\bm{x}}}_{f^+} - \overline{{\bm{x}}}_{f^-}]
\end{align*}
where
\begin{align}
v_a = \frac{2}{p} \frac{\gamma ( \frac{p}{2} + 1, \frac{a}{2})}{\gamma(\frac{p}{2}, \frac{a}{2})}, \label{eq:vaDef}
\end{align}
and $\gamma$ is the incomplete gamma function $\gamma(b, c) \equiv \int_0^c y^{b-1} e^{-y} dy$.
\noindent
And second, for $f_1, f_2 \in F, f_1 \neq f_2$,
\begin{align*}
\text{cov}[\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] = \mathbf{0}
\end{align*}
Theorem 3 is proved in the Appendix.
Theorems 2 and 3 establish that rerandomization leads to unbiased estimators and reduces the variance of $(\bar{{\bm x}}_{f^+} - \bar{{\bm x}}_{f^-})$, and that this reduction is the same for all covariates. We define the \textit{percent reduction in variance} for any covariate $p$ and $f \in F$ as:
\begin{align}
100 \left( \frac{\text{var} [\bar{x}_{p, f^+} - \bar{x}_{p, f^-} ] - \text{var}[\overline{x}_{p, f^+} - \bar{x}_{p, f^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1]}{\text{var}[\bar{x}_{p,f^+} - \bar{x}_{p,f^-} ]}\right) = 100(1 - v_a) \label{eq:va}
\end{align}
Therefore, for any covariate $p$ and $f \in F$, the rerandomization algorithm will reduce the variance of $(\bar{x}_{p, f^+} - \bar{x}_{p, f^-})$ in expectation by $100(1-v_a) \%$, compared to pure randomization.
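Numerically, $v_a$ is straightforward to evaluate; the sketch below (our own code) uses the identity $v_a = P(p/2+1,\, a/2)\,/\,P(p/2,\, a/2)$, where $P$ denotes the regularized lower incomplete gamma function, which follows from (\ref{eq:vaDef}) and $\Gamma(p/2+1) = (p/2)\,\Gamma(p/2)$.
\begin{verbatim}
# Variance-reduction factor v_a and the percent reduction 100(1 - v_a).
from scipy.special import gammainc  # regularized lower incomplete gamma
from scipy.stats import chi2

def v_a(p, a):
    return gammainc(p / 2 + 1, a / 2) / gammainc(p / 2, a / 2)

p = 5
a = chi2.ppf(0.01, df=p)          # accept the 1% best-balanced randomizations
print(100 * (1 - v_a(p, a)))      # expected percent reduction in variance
\end{verbatim}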
To state properties of the marginal and joint distributions of the factorial effect estimators $\{\hat{\theta}_f: f \in F\}$, assumptions must be made about the relationship between the potential outcomes and the factorial effects and covariates. Suppose the factorial effects ${\bm \theta}_i$ defined in (\ref{eq:thetaVector}) are constant across units and there is no interaction between factorial effects and covariate effects. Then, the potential outcomes can be written using the following linear model:
\begin{equation}
Y_i(j) = {\bm \theta}_i \widetilde{\mathbf{G}}_{j.}^T + {\bm{x}}_i {\bm \beta} + \epsilon_i, \hspace{0.1 in} i = 1, \dots, n, j = 1, \dots, 2^K \label{eq:model1b}
\end{equation}
where $\widetilde{\mathbf{G}}_{j.}$ is the $j$th row of $\widetilde{\mathbf{G}}$ defined in Section \ref{s:2K}, ${\bm \beta}$ is the $p$-component column vector of fixed covariate coefficients, and $\epsilon_i$ indicates any deviations from the linear model. Then, the standard unbiased estimator (\ref{eq:factorialEffectEstimator}) can be written as:
\begin{align}
\hat{\theta}_f = \bar{\theta}_f + \boldsymbol{\beta}^T (\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-}) + (\bar{{\bm{\epsilon}}}_{f^+} - \bar{{\bm{\epsilon}}}_{f^-}) \label{eq:linearModel}
\end{align}
and the theorem below follows.
\noindent
\textbf{Theorem 4}: Assume (a) a completely randomized balanced $2^K$ factorial design is rerandomized using the algorithm proposed at the end of Section \ref{ss:obs}, (b) the covariate means are multivariate normal, (c) factorial effects are constant across units, and (d) there is no interaction between factorial effects and covariate effects. Then, for all $f \in F$,
\begin{align}
\text{var}(\hat{\theta}_f | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) = \left(1 - (1 - v_a)R^2\right) \text{var}(\hat{\theta}_f) \label{eq:rerandomizationBenefit}
\end{align}
and for $f_1, f_2 \in F$, such that $f_1 \neq f_2$,
\begin{align*}
\text{cov}(\hat{\theta}_{f_1}, \hat{\theta}_{f_2} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) = 0
\end{align*}
where $R^2$ is the squared multiple correlation coefficient between $\mathbf{y_{\text{obs}}}$ and $\mathbf{X}$, and $v_a$ is defined in (\ref{eq:vaDef}). The proof of Theorem 4 is in the Appendix.
Theorem 4 has several implications. First, randomizing until $M_f \leq a$ for all $f \in F$ on average increases the precision of all factorial effect estimators equally by $100(1-v_a)R^2$ percent. Likewise, for some subset $F^* \subset F$, randomizing until $M_f \leq a$ equally increases the precision of $\hat{\theta}_f$ for all $f \in F^*$, again by $100(1-v_a)R^2$ percent, but this does not affect the precision of $\hat{\theta}_f$ for any $f \notin F^*$, by the uncorrelated result of Theorem 4. Furthermore, different thresholds can be chosen for each squared Mahalanobis distance criterion. For example, one can randomize until $M_f \leq a_f$, where $a_f$ differs across $f$. Choosing a smaller $a_f$ for each $f \in F^*$ ensures a higher increase in precision for the corresponding factorial effect estimator $\hat{\theta}_f$.
Thus, we can adapt our rerandomization procedure according to tiers of importance for factorial effects. Furthermore, we can do the same for covariates, analogous to Morgan and Rubin (2015), which shows how to adapt rerandomization according to tiers of importance for covariates in the treatment-versus-control case.
To conduct inference using rerandomization, the significance levels of hypotheses should be calculated using a permutation test (Fisher 1942). However, during the permutation test, the distribution of the test statistic under Fisher's sharp null must be created using randomizations that would be accepted under rerandomization (Morgan and Rubin 2012). Corrections for multiple testing and selection of active versus inactive effects (as in Espinosa, et al. 2015) are topics for future work.
Theorems 3 and 4 require $n$ to be sufficiently large such that the covariate means are multivariate normal. If the covariate means are multivariate normal, then the Mahalanobis distance is $\chi^2_p$ (Mardia et al. 1980). However, if $n$ is not large enough for the normality assumption to hold via the Central Limit Theorem, then (a) the Mahalanobis distance will not be $\chi_p^2$, and (b) the independence in Lemma 1 will not hold. To address (a), the empirical distribution of each $M_f$ can be used instead of the $\chi^2_p$ distribution to select each corresponding threshold $a_f$. As for (b), the elements of $\{\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-} : f \in F \}$ (and, as a consequence, the elements of $\{M_f: f \in F\}$) are always uncorrelated under our proposed rerandomization algorithm. This implies that, under mild regularity conditions, rerandomization will still increase the precision of factorial effect estimators; however, theoretical results found in Theorems 3 and 4 will not hold exactly.
\section{Illustrating Rerandomization in a $2^5$ Education Example} \label{s:schools}
Dasgupta, et al. (2015) discuss an educational experiment planned by the New York Department of Education (NYDE) with five ``incentive programs'' to be introduced to high schools ``which desperately need performance improvement.'' The programs include a quality review, a periodic assessment, inquiry teams, a school-wide performance bonus program, and an online-resource program (Dasgupta, et al. 2015).
The NYDE measures schools' performance with a score in each school's \textit{Progress Report}, and we consider nine covariates that will likely be correlated with this score: Total number of students, five different race variables (proportion of white, black, Asian, Native American, and Latino students), proportion of female students, enrollment rate, and poverty rate. This situation can be considered an extreme case of a ``tiers of covariates'' framework, where a subset of nine covariates are considered ``important'' and the rest are considered ``not important.'' The goal is to assign 43 schools to each of the 32 different treatment combinations such that the factorial effects of the experiment will be well-estimated.
Interest usually focuses on main effects and possibly two-way interactions, and higher-order interactions are often considered negligible (Wu and Hamada 2009). Thus, we implement a rerandomization algorithm that considers main effects ``most important,'' two-way interactions ``less important,'' and higher-order interactions ``not important.'' We created fifteen squared Mahalanobis distances: one for each of the five main effects and ten two-way interactions. The rerandomization algorithm involves randomizing until $\max (M_1, \dots, M_5) \leq a_{\text{main}}$ and $\max (M_6, \dots, M_{15}) \leq a_{\text{interaction}}$, where $a_{\text{main}}$ is the $100(0.01^{1/5})$th percentile of the $\chi^2_9$ distribution, so that $P(M_1, \dots, M_5 \leq a_{\text{main}}) = 1\%$ because, according to Lemma 1, the squared Mahalanobis distances are independent (and approximately $\chi^2_9$). Similarly, $a_{\text{interaction}}$ is the $100(0.1^{1/10})$th percentile of the $\chi^2_9$ distribution, making the criterion corresponding to the interaction effects less stringent than that of the main effects.
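Concretely, the two thresholds can be computed as follows (a Python sketch with \texttt{scipy}; nine covariates give nine degrees of freedom).
\begin{verbatim}
# Thresholds for the NYDE example (a sketch).
from scipy.stats import chi2

a_main = chi2.ppf(0.01 ** (1 / 5), df=9)   # joint acceptance prob. 1%
a_inter = chi2.ppf(0.1 ** (1 / 10), df=9)  # joint acceptance prob. 10%
print(a_main, a_inter)  # a_main < a_inter: stricter for main effects
\end{verbatim}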
We performed pure randomization and rerandomization 1,000 times. For each (re)randomization, the covariate mean difference $(\bar{x}_{p, f^+} - \bar{x}_{p, f^-})$ was calculated for each covariate $p$ and factor/interaction $f$. Figure \ref{fig:lovePlot} displays the empirical percent reduction in variance, which shows how much rerandomization reduced the variance of the covariate mean difference for various covariates and factors/interactions compared to pure randomization. Main effects are marked with circles, two-way interaction effects with squares, and three-way interaction effects with triangles. The percent reduction in variance expected given Theorem 3 is marked by a vertical line for each type of factorial effect.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.75]{"lovePlotEducation"}
\caption{Percent reduction in variance in the covariate mean difference after rerandomization for various covariates and factorial effects. The expected percent reduction in variance given Theorem 3 for each type of factorial effect is marked by a vertical line. Displayed are the nine covariates considered during rerandomization as well as ``number of teachers'' and ``number of students in temporary housing,'' which were not considered.}
\label{fig:lovePlot}
\end{figure}
The nine covariates we considered during rerandomization are displayed at the top of the vertical axis of Figure \ref{fig:lovePlot}. Rerandomization reduced the variance of the covariate mean difference across factors and two-way interactions compared to pure randomization for these covariates, and the reduction varies around what we would expect given Theorem 3. There is more reduction for individual factors than for interactions, as is expected, because the threshold $a_{\text{main}}$ was more stringent than $a_{\text{interaction}}$. The percent reduction in variance across three-way interactions is occasionally negative - implying randomization yielded better covariate balance in this case - but this reduction averages close to zero, as expected. Therefore, rerandomization on average increased the covariate balance across main effects and two-way interactions without sacrificing the covariate balance across higher-order interactions. Although one may be concerned about some fairly negative values for three-way interactions - there are two percent reduction in variances below -19\% - this behavior is similar to what would happen if we instead compared 1,000 randomizations to 1,000 different randomizations. On average, rerandomization was equivalent to randomization in terms of three-way interactions, which is expected, because three-way interactions were not considered during rerandomization.
Figure \ref{fig:lovePlot} also displays the percent reduction in variance for two covariates not considered during rerandomization: ``number of teachers'' and ``number of students in temporary housing.'' Rerandomization yielded more balance for ``number of teachers'' compared to pure randomization, because ``number of teachers'' is highly correlated ($R^2 = 0.95$) with ``number of students,'' which was considered during rerandomization. Likewise, ``number of students in temporary housing'' was only mildly correlated with the covariates considered during rerandomization, and thus it did not benefit greatly from rerandomization. If the NYDE decided that these two covariates were important to balance, but less so than the nine covariates already considered, we could rerandomize efficiently by balancing only the functions of ``number of teachers'' and ``number of students in temporary housing'' that are orthogonal to the nine ``most important'' covariates, because the parts that are correlated will already be balanced.
If outcome variables of the NYDE experiment are correlated with these covariates, then rerandomization will yield more precise estimates of the main factorial effects and two-way interactions. Furthermore, the precision of higher-order factorial effects will not be worse compared to pure randomization.
\section{Conclusion}
Rerandomization is known to increase the precision of the treatment effect estimator for treatment-versus-control experiments (Morgan and Rubin 2012). Here, rerandomization has been explored for balanced $2^K$ factorial designs. Theoretical results under common assumptions show that rerandomization yields unbiased estimators and increases the precision of factorial effect estimators of interest without sacrificing the precision of other estimators. Empirical results illustrate these theoretical results via a real-data application. The rerandomization algorithm can also be adjusted so tiers of importance for covariates and factorial effects can be incorporated. Extensions for more complex designs - such as unbalanced designs and fractional factorial designs - will be future work.
\section{Appendix}
\noindent
\textit{Proof of Lemma 1}
\noindent
Assume a completely randomized balanced $2^K$ factorial design is rerandomized using the algorithm proposed at the end of Section \ref{ss:obs}, and the covariate means are multivariate normal. Under both randomization and rerandomization, the columns of $\widetilde{\bm W}$ defined in (\ref{eq:Wf}) are orthogonal. Because the factorial design is balanced and the criterion function $\phi$ defined in (\ref{eq:phi}) is symmetric in $\widetilde{\bm W}$, $\mathbb{E}[\widetilde{\bm W}_{.f} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] = \mathbf{0}$ for all $f \in F$. Therefore, for any $f_1, f_2 \in F$ with $f_1 \neq f_2$, $\text{cov}(\widetilde{\bm{W}}_{.f_1}, \widetilde{\bm{W}}_{.f_2} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) = \mathbf{0}$.
Therefore, $\text{Cov}(\widetilde{\bm{W}} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1)$ is a block-diagonal matrix. Because $\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-} = \frac{\mathbf{X}^T \widetilde{\bm W}_{.f}}{n/2}$, the covariance matrix of the elements of $\{ \bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-} : f \in F \}$ is block-diagonal under rerandomization. By assumption the covariate means are multivariate normal, and thus this block-diagonal covariance matrix implies the elements of $\{\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-}: f \in F\}$ are mutually independent under rerandomization. Additionally, the elements of $\{M_f: f \in F \}$ are mutually independent, because every $M_f$ is a function of $\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-}$. Similarly, the elements of $\{\phi_f({\mathbf{X}, \widetilde{\bm{W}}}): f \in F\}$ are mutually independent, where $\phi_f({\mathbf{X}, \widetilde{\bm{W}}})$ is defined in (\ref{eq:phif}). $\hspace{0.1 in} \square$ \\
\noindent
\textit{Proof of Theorem 3}
\noindent
Assume a completely randomized balanced $2^K$ factorial design is rerandomized using the algorithm proposed at the end of Section \ref{ss:obs}, and the covariate means are multivariate normal. The elements of $\{\phi_f({\mathbf{X}, \widetilde{\bm{W}}}): f \in F\}$ are mutually independent given Lemma 1. Therefore,
\begin{align*}
\mathbb{E}[\overline{{\bm{x}}}_{f^+} - \overline{{\bm{x}}}_{f^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] &= \mathbb{E}[\overline{{\bm{x}}}_{f^+} - \overline{{\bm{x}}}_{f^-} | \phi_f({\mathbf{X}, \widetilde{\bm{W}}}) = 1] \\
&= \mathbb{E}[\overline{{\bm{x}}}_{f^+} - \overline{{\bm{x}}}_{f^-} | M_f \leq a]
\end{align*}
where $\phi({\mathbf{X},\widetilde{\bm{W}}})$ is defined in (\ref{eq:phi}). Similarly, for $f_1 = f_2$,
\begin{align*}
\text{cov}[\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-}| \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] &= \text{cov}[\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-} | \phi_f({\mathbf{X}, \widetilde{\bm{W}}}) = 1] \\
&= \text{cov}[\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-} | M_f \leq a]
\end{align*}
while for $f_1 \neq f_2$,
\begin{align*}
\text{cov}[\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-}| \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] = \mathbf{0}
\end{align*}
because the elements of $\{ \bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-} : f \in F \}$ are mutually independent. The remainder of the proof is identical to the treatment-versus-control case, where the units assigned to the high level of $f$ are the ``treatment'' and the units assigned to the low level are the ``control.'' Thus, analogous to Morgan and Rubin (2012), for $f_1 = f_2$,
\begin{align*}
\text{cov}[\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-} | M_f \leq a] = v_a \text{cov}[\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-}]
\end{align*}
where $v_a$ is defined as in (\ref{eq:vaDef}). \hspace{0.1 in} $\square$ \\
\noindent
\textit{Proof of Theorem 4}
\noindent
Assume (a) a completely randomized balanced $2^K$ factorial design is rerandomized using the algorithm proposed at the end of Section \ref{ss:obs}, (b) the covariate means are multivariate normal, (c) factorial effects are constant across units, and (d) there is no interaction between factorial effects and covariate effects. Because the factorial effects are constant across units, each factorial effect estimator $\hat{\theta}_f$ can be written as (\ref{eq:linearModel}). By Lemma 1, for $f_1 \neq f_2$, $\text{cov}(\overline{{\bm{x}}}_{f_1^+} - \overline{{\bm{x}}}_{f_1^-}, \overline{{\bm{x}}}_{f_2^+} - \overline{{\bm{x}}}_{f_2^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) = \mathbf{0}$. Furthermore, the difference of the covariate means is orthogonal to the difference of the residual means, and therefore the covariance between them is zero. Therefore,
\begin{align*}
\text{cov}(\hat{\theta}_{f_1}, \hat{\theta}_{f_2} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) = \text{cov}[(\bar{\epsilon}_{f_1^+} - \bar{\epsilon}_{f_1^-}) , (\bar{\epsilon}_{f_2^+} - \bar{\epsilon}_{f_2^-}) | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1] = 0
\end{align*}
The final equality holds because, by the balance of the design, under both randomization and rerandomization,
\begin{align*}
\text{cov}(\bar{\epsilon}_{f_1^+}, \bar{\epsilon}_{f_2^+}) = \text{cov}(\bar{\epsilon}_{f_1^+}, \bar{\epsilon}_{f_2^-}) = \text{cov}(\bar{\epsilon}_{f_1^-}, \bar{\epsilon}_{f_2^+}) = \text{cov}(\bar{\epsilon}_{f_1^-}, \bar{\epsilon}_{f_2^-})
\end{align*}
and thus the covariance between any two factorial effect estimators under rerandomization is zero. Furthermore, for all $f \in F$,
\begin{align*}
\text{var}(\hat{\theta}_f | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) &= \boldsymbol{\beta}^T \text{cov}(\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1)\boldsymbol{\beta} + \text{var}(\bar{{\bm{\epsilon}}}_{f^+} - \bar{{\bm{\epsilon}}}_{f^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) \\
&= v_a \boldsymbol{\beta}^T \text{cov}(\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-})\boldsymbol{\beta} + \text{var}(\bar{{\bm{\epsilon}}}_{f^+} - \bar{{\bm{\epsilon}}}_{f^-} | \phi({\mathbf{X},\widetilde{\bm{W}}}) = 1) \\
&= v_a \boldsymbol{\beta}^T \text{cov}(\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-})\boldsymbol{\beta} + \text{var}(\bar{{\bm{\epsilon}}}_{f^+} - \bar{{\bm{\epsilon}}}_{f^-})
\end{align*}
The second equality is a result of Theorem 3. By assumption $n$ is large enough that $\bar{{\bm{x}}}_{f^+} - \bar{{\bm{x}}}_{f^-}$ and $\bar{{\bm{\epsilon}}}_{f^+} - \bar{{\bm{\epsilon}}}_{f^-}$ are normally distributed, and thus orthogonality implies independence. Thus, rerandomization does not affect the variance of $\bar{{\bm{\epsilon}}}_{f^+} - \bar{{\bm{\epsilon}}}_{f^-}$, and the final equality holds. The remainder of the proof is analogous to Morgan and Rubin (2012), because it is identical to the treatment-versus-control case, as in the proof of Theorem 3. \hspace{0.1 in} $\square$
\newpage
\section{Introduction}
In higher dimensional complex dynamics, understanding the degree growth of a rational map under iteration is a fundamental and important
issue. Given any dominant rational map $f:\mathbb{P}^d\dashrightarrow\mathbb{P}^d$, one can define its $p$-th degree
$\deg_p(f):=\deg(f^{-1} L_p)$, where $L_p$ denotes a generic linear subspace of $\mathbb{P}^d$ of codimension $p$.
The problem is to describe the behavior of the sequence $\{\deg_p(f^n)\}_{n=1}^\infty$, especially as $n\to\infty$. It is not difficult to check that this sequence is sub-multiplicative. Following~\cite{MR1488341} one can therefore introduce a numerical invariant which measures the exponential growth of the degree
\[
\lambda_p(f):=\lim_{n\to\infty} \deg_p(f^n)^{1/n} \ge 1.
\]
These are invariants of birational conjugacy that are usually referred to as the {\em dynamical degrees}.
Understanding the degrees of iterates or computing the dynamical degrees of a rational selfmap are not easy tasks.
Most results have been focused on the case $p = 1$,
see \cite{DF,FJ,AABM,AMV,BK1,BK2,BHM,Ng} and the references therein.
On the other hand, the case $2 \le p \le d-2$ is substantially harder since it is
delicate to compute $\deg_p(f)$ even in concrete examples, and
there are only a few references in the
literature.
For monomial maps, the degree growth is obtained in \cite{FW, L}. Other families of maps
have also been investigated; see \cite{Og,DN}.
\smallskip
In this paper, we study the behavior and the degree growth of rational maps
which come from an algebra structure.
One motivation to study such maps comes from the recent
classification of birational maps of type $(2,2)$
on $\mathbb{P}^d$ in terms of involutions on certain Jordan algebra by L. Pirio and F. Russo~\cite{PiRu1, PiRu2,PiRu3}.
There are also some sporadic studies in the literature about the dynamics of this type of maps,
for example, see \cite{CeDe, Usn} which study the dynamics of ratioanl maps on the matrix algebras. In this paper we study the algebraic structure of these maps more systematically and describe the growth of their degrees.
\smallskip
Our starting point is a finite dimensional complex vector space $V$
equipped with a $\mathbb{C}$-algebra structure.
We shall be mainly interested in the dynamics of two classes of maps.
The first family of maps we deal with are the maps induced from single variable rational maps.
Here we need to assume that the algebra is {\em power associative} and has a {\em multiplicative unit} (see Section~\ref{sec:setting}
for definitions). This implies that the power map $x\mapsto x^n$, $n\ge 0$, is well-defined, where $x^n$ is the product in $V$ of $x$
with itself $n$ times, taken in any order. As a consequence, any single variable polynomial $P(T)\in\mathbb{C}[T]$ induces a self map on $V$.
Furthermore, one can show that a generic element of $V$ is invertible. Therefore, any rational function
$\varphi=\frac{Q}{P}\in\mathbb{C}(T)$ induces a rational selfmap $f_\varphi:V\dashrightarrow V$. Compactifying $V$, we obtain
a rational map $f_\varphi:\mathbb{P}^d\dashrightarrow \mathbb{P}^d$ where $d=\dim_\mathbb{C}(V)$.
\begin{thmA}
Suppose $V$ is a power associative $\mathbb{C}$-algebra with a multiplicative unit.
Then there exists an integer $k\ge 1$ such that
for any rational map
$\varphi(T)=\frac {Q(T)}{P(T)}\in \mathbb{C}(T)$ and for any integer $p \ge1$, we have
$$
\deg_p (f_\varphi^n) \asymp \left( \max\{ \deg(Q), \deg(P)\}^{\min \{ p, k \}}\right)^n.
$$
\end{thmA}
The second type of maps we study are the (generalized) monomial maps. In order for the iterates to be still monomial maps,
we need to require that the algebra structure is {\em abelian} (i.e., commutative, associative and unitary). Now let $k=\dim_\mathbb{C}(V)$. Given a $d\times d$ integer matrix
$A=(a_{ij})\in M_d(\mathbb{Z})$ with $\det(A)\ne 0$, we can define the monomial map
$F_A:V^d \dashrightarrow V^d$ as
\[
\textstyle{ F_A(x_1,\cdots,x_d)=\left(\prod_j x_j^{a_{1,j}},\cdots ,\prod_j x_j^{a_{d,j}} \right), }
\]
where $x_j\in V$ and the product is given by the multiplication of $V$.
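For instance, when $V=\mathbb{C}$ this recovers the classical monomial maps on the algebraic torus: taking $d=2$ and $A=\left(\begin{smallmatrix} 1 & 1\\ 1 & 0\end{smallmatrix}\right)$ gives $F_A(x_1,x_2)=(x_1x_2,\,x_1)$.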
Let the number $m$ be given by
$m=\dim_\mathbb{C}(\red(V))$, where $\red(V)=V/N(V)$
and $N(V)$ is the nilradical of $V$, i.e., the ideal of all nilpotent elements.
\begin{thmB}
\label{thm:B}
Let $\diag(A;m)$ be the block-diagonal matrix with $m$ blocks of the matrix $A$ on the diagonal positions. Then
the degree growth of the (generalized) monomial map $F_A$ is given by
\[
\deg_p(F_A^n) \asymp \max_{p-d(k-m)\le i\le p} \left\| \wedge^{i} \diag(A;m)^n \right\|.
\]
\end{thmB}
Observe that $m$ only depends on the
structure of $V$ and not on the map $F_A$.
Also, the norm $\|\wedge^i \diag(A;m)^n\|$ can be computed solely in terms of $\|\wedge^j A^n\|$, $j\le i$, and $m$. Indeed, since the exterior powers of a direct sum decompose as a sum of tensor products of exterior powers of the summands, one has
\[
\| \wedge^i \diag(A;m)^n \| \asymp \max_{i_1 + \cdots + i_m = i} \| \wedge^{i_1} A^n \| \cdots \| \wedge^{i_m} A^n \|.
\]
\smallskip
There are many other possibilities for associating a rational map to an algebra structure. We point out here some generalizations of our setting and natural questions that may arise.
\begin{question}
Let $V$ be an abelian algebra, and pick any rational map
\[
f=[f_0:\cdots:f_d]:\mathbb{P}^d_\mathbb{C}\dashrightarrow\mathbb{P}^d_\mathbb{C},
\]
where each $f_j$ is a homogeneous polynomial of the same degree. Then $f_j$ also induces a map $V^{d+1}\to V$ which is homogeneous of the same degree.
Thus $f$ induces a map $F:\mathbb{P}(V^{d+1})\dashrightarrow\mathbb{P}(V^{d+1})$. Can we describe the relation between the degree growth
of $f^N$ and the degree growth of $F^N$?
\end{question}
\begin{question}
Under the same notation as the above question, one can show that $F$ is birational when
$f$ is birational. In the particular case where $d=2$, can results
similar to those in \cite{DF} be obtained for the degree growth of the iterates of $F$?
\end{question}
\begin{question}
If $V$ is a power associative algebra with a multiplicative unit, and $P(T)\in\mathbb{C}[T]$ is a polynomial,
then we can define the generalized H\'{e}non map $H_P: V^2\to V^2$ by
\[
H_P(x,y)=(y, P(x)-cy)
\]
where $c\in\mathbb{C}\setminus\{0\}$. What is the degree growth of the iterates of $H_P$?
\end{question}
\section{Algebra structure and quadratic maps}
\subsection{The algebra structure}
\label{sec:setting}
Let $V$ be a finite dimensional complex vector space
equipped with a $\mathbb{C}$-algebra structure, that is, a $\mathbb{C}$-linear map $\mu:V\otimes V\to V$. We shall always denote this law multiplicatively, i.e.\ we will denote $\mu(x\otimes y)$ by $xy$.
Recall the following list of
classical definitions.
\begin{itemize}
\item
$V$ is {\em unitary} if it has a unit $1$ (i.e. $1 \cdot x = x \cdot 1 = x$ for all $x$).
\item
$V$ is {\em associative} if $x (yz) = (xy) z$ for all $x,y,z$;
\item
$V$ is {\em commutative} if $x y = y x$ for all $x,y$;
\item
$V$ is {\em alternative} if $x(xy) = x^2 y$ and $(yx)x = y x^2$ for all $x,y$;
\item
$V$ is {\em power-associative} if the algebra generated by any element is associative;
\item
$V$ is {\em abelian} if the multiplication law is commutative, associative and unitary
(this is the typical setting for commutative algebra, e.g., \cite{AM});
\item
$V$ is a {\em Jordan algebra} if it is commutative and alternative.
\end{itemize}
Set $ d = \dim_\mathbb{C} V$. Observe that the space ${\mathcal{A}}$ of all algebra structures on $V$ is $\Hom_\mathbb{C}(V\otimes V, V)$, which is
an affine space of dimension $d^3$.
The space of abelian (resp. associative, alternative, power-associative, Jordan) algebras
is a Zariski closed subset of ${\mathcal{A}}$.
When $V$ is power associative, we define the set of nilpotent elements as
$N(V) := \{ x\in V\,|\, x^N = 0 \text{ for some integer } N \}$. When $V$ is abelian then $N(V)$
is an ideal.
\subsection{Quadratic maps}
There is a natural identification between $d$-dimensional complex commutative (but not necessarily associative)
algebras and homogeneous quadratic polynomial maps from $\mathbb{C}^d$ to itself, as follows.
Choose a basis of $V$ as a complex vector space $V = \mathbb{C} e_1 \oplus \ldots \oplus \mathbb{C} e_d$, and write
$e_i\cdot e_j = \sum_k a_{ij}^k e_k$ with $a_{ij}^k\in \mathbb{C}$. Since $V$ is commutative, we have $a_{ij}^k=a_{ji}^k$. Then $f_V(x) = x^2$
is a polynomial map as above, and we have
\[
f_V ( z_1, \ldots , z_d) = \left( \sum_{1\le i, j\le d}
a_{ij}^1 z_i z_j, \ldots ,
\sum_{1\le i, j\le d} a_{ij}^d z_i z_j
\right).
\]
Conversely, given a homogeneous quadratic polynomial map $f:\mathbb{C}^d\to\mathbb{C}^d$, one can define the algebra structure as
\[
x\cdot y = \frac 1 2 \left( f(x+y) - f(x) - f(y) \right).
\]
The two operations are inverse to each other. Therefore, if we use $f_V$ to denote again the induced quadratic map
$\mathbb{P}^{d-1}\dashrightarrow\mathbb{P}^{d-1}$, then we can see that the space of all quadratic rational selfmaps
on $\mathbb{P}^{d-1}$ can be identified with the space of $f_V$ over all commutative algebra structures on $\mathbb{C}^d$.
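The two constructions above are easy to carry out in coordinates; the following Python sketch (with hypothetical helper names of our own) passes between the structure constants $a_{ij}^k$ and the quadratic map $f_V$.
\begin{verbatim}
# Structure constants a[i, j, k] with e_i e_j = sum_k a[i, j, k] e_k,
# versus the quadratic map f_V(x) = x^2 on C^d (a sketch).
import numpy as np

def f_V(a, z):
    # a has shape (d, d, d) with a[i, j] = a[j, i]; z is a vector in C^d.
    return np.einsum('ijk,i,j->k', a, z, z)

def structure_constants(f, d):
    # Recover x . y = ( f(x+y) - f(x) - f(y) ) / 2 on basis vectors.
    e = np.eye(d, dtype=complex)
    a = np.zeros((d, d, d), dtype=complex)
    for i in range(d):
        for j in range(d):
            a[i, j] = (f(e[i] + e[j]) - f(e[i]) - f(e[j])) / 2
    return a
\end{verbatim}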
Moreover, the following results show that the dynamics of $f_V$ plays a special role in the structure of $V$.
\begin{prop}
The algebra is unitary iff $f_V$ admits a fixed point $x$ such that $df_V (x) = 2 \id$.
\end{prop}
\begin{proof}
In one direction this is obvious since $ (1 + tx ) ^2 = 1 + 2tx + O(t^2)$.
In the other direction, pick a fixed point $x$ and let $y$ be any other point.
Then $(x+ ty)^2 = x^2 + 2t (x\cdot y) + O(t^2) = x^2 + df_V(x) (t y) + O(t^2)$,
whence $x \cdot y = y$ as required.
\end{proof}
\begin{prop}
Suppose $V$ and $V'$ are dimension $d$ commutative algebras.
Then $V$ is isomorphic to $V'$ iff $f_V : \mathbb{P}^{d-1} \dashrightarrow \mathbb{P}^{d-1}$ is conjugated in
$\PGL(d, \mathbb{C})$ to $f_{V'}$.
\end{prop}
\begin{proof}
Identify both $V$ and $V'$ to $\mathbb{C}^d$.
Being isomorphic then means that there exists
$A \in \GL(d,\mathbb{C})$ such that $ A ( x \cdot_V y ) = Ax \cdot_{V'} Ay$.
This implies the conjugacy in $\mathbb{C}^d$.
Conversely, being conjugated in $\mathbb{P}^{d-1}$ implies the existence of a matrix
in $\GL(d,\mathbb{C})$ and a function ${\lambda}(x)$ such that
$ A ( x \cdot_V x ) = \lambda(x) \, Ax \cdot_{V'} Ax$.
For degree reasons, ${\lambda}$ is a constant. Changing $A$ by $\mu A$
replaces ${\lambda}$ by ${\lambda} \mu$ hence we can assume ${\lambda} = 1$.
\end{proof}
\begin{rem}
Any two-dimensional unitary commutative algebra is abelian, hence isomorphic to either
$\mathbb{C}[x]/(x^2)$ or to $\mathbb{C} \oplus \mathbb{C}$.
Indeed, the algebra structure is determined by $x^2 = a + bx$.
When $a + b^2/4 = 0$, then there exists an element $y:= \mu x + {\lambda}$ such that
$y^2 = 0$ and we are in the former case.
When $a + b^2/4 \neq 0$, then there exists an element $y:= \mu x + {\lambda}$ such that
$y^2 = y$ and we are in the latter case.
\end{rem}
\section{Some lemmas about degree growth}
In this section, we will prove several lemmas for the degrees and degree growth of rational maps.
The maps we study in this paper all preserve some fibration. The dynamical degrees for maps preserving
a fibration were computed by Dinh, Nguy{\^e}n and Truong \cite{DN,DNT} (see \cite{Tru} and \cite{NB} for a purely algebraic approach to these computations). Their results are very general and their computations are quite involved. For the convenience of the reader we have preferred to reprove some of their (elementary) results that we shall need in the latter section.
First, let us define higher degrees. For a rational selfmap $f:X\dashrightarrow X$ on a projective variety,
and given an ample divisor $D$, we define the {\em $p$-th degree} of $f$ with respect to $D$ as
$\deg_{D,p}(f)=f^*[D]^p\cdot[D]^{d-p}$, where $[D]$ is the class of $D$ in $H^{1,1}(X)$. When $X=\mathbb{P}^d$ and $D=H$
is a generic hyperplane, this coincides with the usual definition of degrees. In this case, we can compute the degree
by $\deg_{p}(f)=f^{-1}L_p\, . \, L_{d-p}$, where $L_p$ and $L_{d-p}$ are generic linear subspaces of $\mathbb{P}^d$ of
codimensions $p$ and $d-p$ respectively, and $f^{-1}L_p$ is the proper transform of $L_p$ under $f$.
\subsection{Product map}
Product maps are the simplest maps preserving a fibration.
Let $f:X\dashrightarrow X$
and $g:X'\dashrightarrow X'$ be two rational maps, and $h=f\times g$. Pick any two ample divisors $D, D'$ on $X, X'$ respectively, and write $n= \dim X$, $n'= \dim X'$. By abuse of notation we shall again denote by $D$ and $D'$ their pull-back to $X\times X'$ so that $D+D'$ is again ample on $X\times X'$. The degrees of $(f,g)$ can be then computed by a direct calculation as
\[
\deg_{D+D',p}(h)=\sum_{i+j=p}{p \choose i}\, {n+n'-p \choose n -i} \, \deg_{D,i}(f)\deg_{D',j}(g).
\]
In dynamics, we are interested in the growth of degrees under iteration. For this reason, we introduce the following notation
for {\em asymptotic equivalence}. For two sequences $\{a_j\}$, $\{b_j\}$ of positive numbers, we say the two sequences are
{\em asymptotic equivalent}, denoted by $a_j \asymp b_j$, if there is a constant $C>0$ such that $C^{-1}a_j\le b_j\le C a_j$
for all large enough $j$.
For a rational map $f$, the asymptotic behavior of the degree sequence $\{\deg_{D,p}(f^n)\}_{n=1}^\infty$
is independent of the ample divisor $D$, and is invariant under birational conjugation
(see \cite{DS} using analytic arguments or \cite{Tru,NB} using algebraic ones).
An important class of invariants to measure the asymptotic growth of the degree sequence is the {\em dynamical degrees}.
The $p$-th dynamical degree of a rational map $f$, denoted by $\lambda_p(f)$, is defined as
\[
\lambda_p(f) = \lim_{n\to\infty} \deg_{D,p}(f^n)^{1/n}.
\]
The existence of the limit follows here from the sub-multiplicativity of the sequence $\{C\deg_{D,p}(f^n)\}_{n=1}^\infty$
for some constant $C>0$, see~\cite{DS}.
With the above notation, we can first conclude that for product map $h=f\times g$, we have
\begin{equation}\label{eq:product}
\deg_{D+D',p}(h^n)\asymp\max_{i+j=p}\left\{\deg_{D,i}(f^n)\deg_{D',j}(g^n)\right\}.
\end{equation}
A consequence of this is a formula for the dynamical degrees of $h$:
\[
{\lambda}_p(h)=\max_{i+j=p}\left\{{\lambda}_i(f){\lambda}_j(g)\right\}.
\]
\subsection{Skew product with fibers $\mathbb{P}^1$}
Another type of rational maps preserving a fibration that we will encounter
is a map $F:\mathbb{P}^d\times\mathbb{P}^1\dashrightarrow\mathbb{P}^d\times\mathbb{P}^1$
which preserves the fibration $\pi:\mathbb{P}^d\times\mathbb{P}^1\to\mathbb{P}^d$.
Denote by $H_d$ the pull-back of ${\mathcal{O}}(1)_{\mathbb{P}^d}$ on $\mathbb{P}^d\times\mathbb{P}^1$, and by $H_1$
the pull-back of ${\mathcal{O}}(1)_{\mathbb{P}^1}$ on the same variety.
\begin{prop}
\label{lem:deg_growth}
Suppose that the rational map $F:\mathbb{P}^d\times\mathbb{P}^1\dashrightarrow\mathbb{P}^d\times\mathbb{P}^1$ is of the form
$F(z,t)=(g(z),h(z,t))$, where $z\in\mathbb{P}^d$, $t\in\mathbb{C}\cup\{\infty\}=\mathbb{P}^1$.
Suppose $F^* H_1 = \delta_d H_d + \delta_1 H_1$.
The $p$-th degree of $F$ with respect to the ample divisor $D:= H_d + H_1$ can be computed as
\[
\deg_{D,p}(F)= (d+1-p) \deg_p(g) +p (\delta_1+(d+1-p)\delta_d)\cdot\deg_{p-1}(g).
\]
\end{prop}
Observe that $\delta_1$ is the degree in $t$ of the rational function $h(z,t)$ whereas $\delta_d$
is the degree in $z$ of $h(z,t)$.
\begin{proof}[Proof of Proposition~\ref{lem:deg_growth}]
One has
\begin{align*}
F^* H_d ^p &= \deg_p(g) H_d^p,\\
F^* (H_d ^{p-1} \cdot H_1) &= \deg_{p-1}(g) H_d^{p-1} \cdot ( \delta_d H_d + \delta_1 H_1).
\end{align*}
The first equality is by definition. For the second equality, we first claim the following:
\begin{claim}
for any subvariety $Z$ of $\mathbb{P}^d\times\mathbb{P}^1$, we can find a linear subpace $L\subset \mathbb{P}^d$ of codimension $p-1$ and a point $q\in \mathbb{P}^1$ such that $L\times \{q\}$, $L\times\mathbb{P}^1$ and $\mathbb{P}^d\times\{q\}$ all intersect $Z$ properly.
\end{claim}
Recall that two pure dimensional subvarieties $W$ and $W'$ intersect properly when the codimension of any irreducible component of $W \cap W'$ is equal to $\codim(W) + \codim(W')$.
\smallskip
Here we show the claim for $L\times \{q\}$, and leave the other two cases to the reader. Since $Z$ is irreducible, the projection of $Z$ on $\mathbb{P}^1$ is either a single point $\{q'\}$ or $\mathbb{P}^1$. In the first case, we can pick any point $q\ne q'$ in $\mathbb{P}^1$ and any linear subvariety $L$ of codimension $p-1$. For the latter case, pick a generic $q\in\mathbb{P}^1$ such that $Z' = (\mathbb{P}^d\times\{q\})\cap Z$ is a variety of pure dimension $\dim(Z)-1$. Next, pick a generic linear subspace $L\in\mathbb{P}^d$ which intersects $Z'$ properly. Then $L\times\{q\}$ will intersect $Z$ properly.
Now let $\Gamma\subset(\mathbb{P}^d\times\mathbb{P}^1)\times(\mathbb{P}^d\times\mathbb{P}^1)$ be the graph of $F$. Denote by $\pi_1$ and $\pi_2$ the projections from $\Gamma$ onto the first and second components, and set
\[
Z=\{q\in \mathbb{P}^d\times\mathbb{P}^1\ |\ \dim(\pi_2^{-1}(q))>0 \}~.
\]
The class $\pi_2^*(H_d^{p-1} \cdot H_1)$ is represented by $\pi_2^{-1}(L\times\{p\})$ (see e.g. ~\cite[Lemma~3.1]{Tru}), which is equal to
$\pi_2^{-1}(L\times\mathbb{P}^1)\cap\pi_2^{-1}(\mathbb{P}^d\times\{q\}) $.
On the other hand this intersection represents the class $\pi_2^*H_d^{p-1}\cdot \pi_2^*H_1$. Thus we have
\[
\pi_2^*(H_d^{p-1} \cdot H_1) = \pi_2^*H_d^{p-1}\cdot \pi_2^*H_1.
\]
Notice that $\pi_2^* H_d^{p-1} = \pi_1^*(\deg_{p-1}(g) H_d^{p-1})+E$ for some class $E$ such that $\pi_{1*}(E)=0$, therefore
\[
\begin{split}
F^* (H_d^{p-1} \cdot H_1) & = \pi_{1*} \pi_2^*(H_d^{p-1} \cdot H_1)= \pi_{1*} (\pi_2^*H_d^{p-1}\cdot \pi_2^*H_1)\\
& = \pi_{1*} (\pi_1^*(\deg_{p-1}(g) H_d^{p-1})\cdot \pi_2^*H_1)+ \pi_{1*} (E\cdot \pi_2^*H_1)\\
& = \deg_{p-1}(g) H_d^{p-1}\cdot (\pi_{1*}\pi_2^*H_1) +0\\
& = \deg_{p-1}(g) H_d^{p-1}\cdot F^*H_1
= \deg_{p-1}(g) H_d^{p-1}\cdot (\delta_d H_d + \delta_1 H_1).
\end{split}
\]
Finally, we can compute
\begin{multline*}
F^* (D^p) = F^* \left(
H_d^p +p\, H_d^{p-1} \cdot H_1
\right) = \\
\left( \deg_p(g) +p \deg_{p-1}(g) \delta_d\right) H_d^p + p \deg_{p-1}(g) \delta_1\, H_d^{p-1} \cdot H_1
\end{multline*}
and the result follows by intersecting with the class
$$D^{d+1-p} = H_d^{d+1-p} + (d+1-p) H_d^{d-p}\cdot H_1~.$$
\end{proof}
\section{Degree growth of the squaring map}
In this section we study the degree growth of the squaring map $f_V:\mathbb{P}^{d-1}\dashrightarrow\mathbb{P}^{d-1}$
introduced in the previous section.
The methods we use in this section will be generalized in later sections to prove our main Theorems A and B.
For squaring maps, these methods are more intuitive and serve as an illustration for the more complicated cases.
\subsection{The abelian case}
\label{sec:square_abelian}
We assume $V$ is an abelian algebra.
Since $V$ is finite dimensional as a complex vector space, the $\mathbb{C}$-algebra $V$ is Artinian
(\cite{AM}*{$\S 8$, Exercise 2}).
By the structure theorem for Artin rings (\cite[Theorem 8.7]{AM}), we can decompose $V$ as
a finite direct product of Artinian local rings $V \simeq \prod_{i=1}^k V_i$.
For the Artin local ring $V_i$, the maximal ideal ${\mathfrak{m}}_i$ is nilpotent
(i.e. ${\mathfrak{m}}_i^l=0$ for some $l\ge 1$) and $V_i/{\mathfrak{m}}_i\simeq \mathbb{C}$.
As a $\mathbb{C}$-vector space, we can write $V_i$ as the
direct sum $V_i\simeq \mathbb{C}\oplus {\mathfrak{m}}_i$.
For $V$, we introduce the map
\[
\Phi : V \simeq \mathbb{C}^k \times \prod_{i=1}^k {\mathfrak{m}}_i \longrightarrow V
\]
sending $((a_1, \ldots, a_k), (h_1, \ldots, h_k))$ to
$\sum_{i=1}^k a_i \exp(h_i)$, where $$\exp(h)=\sum_{j=0}^\infty h^j/j!~.$$
Notice that $\exp(h)$ is well defined since $h_i\in{\mathfrak{m}}_i$ is nilpotent,
so the sum is indeed finite.
Moreover, we claim that $\Phi$ is a birational map.
Indeed, using the vector space isomorphism $V \simeq \prod_{i=1}^k \mathbb{C} \times {\mathfrak{m}}_i$,
we can describe the birational inverse of $\Phi$ concretely as
\[
\Phi^{-1}\left(\prod_{i=1}^k (a_i,h_i)\right)= \prod_{i=1}^k a_i\cdot \log(1+a_i^{-1}h_i)~,
\]
where $\log(1+x)=\sum_{j=1}^\infty (-1)^{j-1}x^j/j$.
The usual rules for the exponential and logarithm functions hold. Hence the fact that
$\Phi$ and $\Phi^{-1}$ are inverse to each other is a reflection of the fact that $\log(\exp(h))=h$.
However, we emphasize here that $\Phi$ and $\Phi^{-1}$ are {\em not} ring homomorphisms.
They are inverse to
each other only as rational maps.
Define $$F\left[(a_1, \ldots, a_k), (h_1, \ldots, h_k) \right]=\left[ (a_1^2, \ldots, a_k^2), (2h_1, \ldots, 2h_k)\right]$$ then
$\Phi \circ F = f_V \circ \Phi$.
Thus, $f_V$ is birationally conjugate to a product of power maps and linear maps.
Recall that the reduced algebra associated to $V$ is by definition the quotient of $V$ by its nilradical $N(V)$.
Since for each $V_i$, we have $\red(V_i)\cong\mathbb{C}$, thus $\red(V)\cong\mathbb{C}^k$.
From the product structure of $f_V$, we
obtain the following.
\begin{thm}
Suppose $V$ is abelian, and write $k := \dim_\mathbb{C} \red (V)$. Then for any $p$, we have
$$
\deg_p (f_V^n) \asymp \left(2^{\min \{ p, k \}}\right)^n
$$
\end{thm}
\mbox{}\hfill\qed
\begin{ex}
Suppose that $V$ is power associative and generated by one element. Then
it is automatically abelian, and there exists a polynomial $P(x)\in\mathbb{C}[x]$ such that
$$
V \simeq \mathbb{C}[x] / (P(x))\simeq \mathbb{C}[x] / \prod_{i=1}^k (x - z_i)^{k_i}
\simeq \oplus_{i=1}^k \mathbb{C} [x_i] /(( x_i-z_i)^{k_i}).
$$
In this case, $k$ is the number of different (complex) roots of the defining polynomial $P$.
\end{ex}
\subsection{Power associative algebras}\label{sec:pwassoc}
In this section, we will deal with the squaring map $f_V$ when $V$ is a power associative algebra with $1$.
We start by analyzing the structure of the algebra.
For any non-zero $x\in V$ denote by $\mathbb{C}[x]$ the algebra generated by $x$ and
${\delta}(x) := \dim_\mathbb{C} \mathbb{C}[x]$.
Since $\mathbb{C}$ is always a subspace of $\mathbb{C}[x]$, ${\delta}(x)\ge 1$, and
since $V$ is power associative, $\mathbb{C}[x]$ is abelian.
Moreover, $\mathbb{C}[x]$ is invariant under $f_V$, i.e., $f_V(\mathbb{C}[x])\subseteq\mathbb{C}[x]$.
Observe that
\[
V_k = \{ x\in V\, | \, (1, x, x^2, \ldots, x^k) \text{ are linearly dependent}\}
\]
is a Zariski closed subset of $V$ since it is defined by the vanishing
of finitely many determinants of matrices of size $(k+1)\times(k+1)$. Since $V_k = \{x\,|\, {\delta}(x) \le k \}$
and $V_k\subseteq V_l$ if $k\le l$,
we conclude that the function $x\mapsto {\delta}(x)$ is lower semicontinuous for the Zariski topology.
Introduce ${\delta}={\delta}_V := \max \{{\delta} (x)\,|\, x\in V\}$ and $U':= \{ x \in V, \, {\delta}(x) = {\delta}_V\}$.
The latter is a Zariski dense open subset of $V$.
Let $F := V\setminus U'$ and pick $x, y\in U'$,
we have the following observations:
\begin{enumerate}
\item $y\in\mathbb{C}[x]\ \Longleftrightarrow\ x\in\mathbb{C}[y]\ \Longleftrightarrow\ \mathbb{C}[x]=\mathbb{C}[y]$,
\item if $y \notin \mathbb{C}[x]$, then $\mathbb{C}[x]\cap \mathbb{C}[y]\cap U' = \emptyset$, i.e. $\mathbb{C}[x]\cap \mathbb{C}[y]\subseteq F$.
\end{enumerate}
Moreover, we claim the following:
\begin{claim}
There is a further open dense subset $U\subset U'$ such that for any two $x,y\in U$, we have $\mathbb{C}[x]\cong\mathbb{C}[y]$ as $\mathbb{C}$-algebras.
\end{claim}
\begin{proof}[Proof of the Claim]
For each $x\in U'$, there is a canonical map $\mathbb{C}[T]\to\mathbb{C}[x]$, where $T$ is a variable, by sending
$T\mapsto x$. Thus, $\mathbb{C}[x]\cong\mathbb{C}[T]/(P_x(T))$ for a unique monic polynomial $P_x(T)$ of degree ${\delta}$.
The coefficients of $P_x(T)$ depend algebraically on $x$.
Let $S^{\delta}\mathbb{C}$ denote the ${\delta}$-th symmetric product of $\mathbb{C}$.
Then one can parameterize all monic polynomials of degree ${\delta}$ either by $\mathbb{C}^{\delta}$ (using the coefficients, excluding the leading one),
or by $S^{\delta} \mathbb{C}$ (using the ${\delta}$ roots of the polynomial, counting multiplicities); and the two spaces $S^{\delta}\mathbb{C}$ and $\mathbb{C}^{\delta}$ are isomorphic.
We define the map $U\to S^{\delta}\mathbb{C}$ by sending $x$ to the multiset of complex solutions of $P_x(T)$.
Each multiset consists of ${\delta}$ elements, hence gives rise to a partition of ${\delta}$ by counting the
multiplicities of different elements.
Denote $\Gamma$ as the set of all partitions of ${\delta}$, then the above process defines a map $S^{\delta}\mathbb{C} \to \Gamma$.
Use $\psi$ to denote the composition $U\to S^{\delta}\mathbb{C} \to \Gamma$. Define a partial order $\preceq$ on $\Gamma$ by
$\gamma\preceq\gamma'$ if $\gamma'$ is a refinement of $\gamma$.
Also, define $U_\gamma = \{ x\in U'\ |\ \psi(x)\preceq \gamma\}$.
Notice that each inequality $\gamma\prec\gamma'$ (this notation means $\gamma\preceq\gamma'$ but $\gamma\ne\gamma'$)
can be factored into a sequence of minimal refinements
such that each partition in the sequence is obtained by decomposing one number in the
previous partition as the sum of two positive numbers. For instance, we may have
$\gamma=\{\nu_1,\nu_2\}$ and $\gamma'=\{\nu'_1,\nu''_1,\nu_2\}$ where $\nu'_1+\nu''_1=\nu_1$
as an example of a minimal refinement. Then an element $x\in U_{\gamma'}$ will have the corresponding
polynomial $P_x(T)$ of the form $P_x(T)=(T-\alpha_0)^{\nu'_1}(T-\alpha_1)^{\nu''_1}(T-\alpha_2)^{\nu_2}$,
where the $\alpha_i$'s are not necessarily different. And $x\in U_\gamma\subset U_{\gamma'}$
will then defined by the close condition $\alpha_0=\alpha_1$ in $U_{\gamma'}$.
Generalizing the above instance, one can obtain the conclusion that if $\gamma\prec\gamma'$,
then $U_\gamma\subset U_{\gamma'}$ is a closed subset.
Therefore, with respect to this partial order, $\psi$
is lower semicontinuous.
Lower semicontinuity implies that we can find $\gamma_1,\cdots,\gamma_m\in\Gamma$, no two of them are comparable under ``$\preceq$'',
such that if we define $U_i = \{ x\in U'\ |\ \psi(x)\preceq \gamma_i\}$, then each $U_i$ is closed in $U'$, and $U'=\cup_{i=1}^m U_i$.
However, since $U'$ is an open dense subset of the irreducible space $V$, $U'$ is itself irreducible.
This implies that $m=1$, and there
is a unique partition $\gamma\in\Gamma$, and an open dense subset $U\subset U'$ such that $U=\{x\in U'\ |\ \psi(x)=\gamma\}$.
Finally, since we have $\mathbb{C}[T]/((T-\alpha)^k)\cong\mathbb{C}[T]/(T^k)$ for all $\alpha\in\mathbb{C}$ (via the isomorphism $T\mapsto T+\alpha$),
the isomorphic class of $\mathbb{C}[x]$ for $x\in U$ only depend on the multiplicity of different roots of $P_x(T)$,
which is recorded as $\psi(x)$. Since
$\psi(x)=\gamma$ for all $x\in U$, we conclude that for all $x\in U$, $\mathbb{C}[x]$ are isomorphic to each other.
\end{proof}
Fix a monic polynomial $P_0(T)$ of degree ${\delta}$ whose roots gives rise to the partition $\gamma$.
Then for each $x\in U$, we have the isomorphism
\[
\phi_x:V_0:=\mathbb{C}[T]/(P_0(T))\longrightarrow \mathbb{C}[x]\cong\mathbb{C}[T]/(P_x(T)).
\]
Notice that $\phi_x$ depend algebraically on $x$.
Next, choose a generic affine (i.e, a translation of a linear) subspace $L\subset V$ of dimension $d-{\delta}$, such that
$L\cap U$ is open and dense in $L$. Then for a generic $x\in U$, $\mathbb{C}[x]$ intersects $L$
at a single point. One defines the following birational map
\[
\begin{split}
\Phi: L\times V_0 & \dashrightarrow V \\
( x, v ) & \longmapsto \phi_x(v) \in \mathbb{C}[x]\subset V,
\end{split}
\]
whose inverse is given by
\[
\begin{split}
\Psi: V & \dashrightarrow L\times V_0\\
y & \longmapsto \left(x=\mathbb{C}[y]\cap L\, ,\, \phi_x^{-1}(y)\right)~.
\end{split}
\]
Observe that it also induces a birational map
$$
\Phi : \mathbb{P}^{d-{\delta}} \times \mathbb{P}^{{\delta}} \dashrightarrow \mathbb{P}(V)~.
$$
Since $\Phi$ is birational, one can lift $f_V$ to a product map
$\tilde{f}_V : \mathbb{P}^{d-{\delta}} \times \mathbb{P}^{{\delta}} \dashrightarrow \mathbb{P}^{d-{\delta}} \times \mathbb{P}^{{\delta}}$
which acts as $\tilde{f}_V=\id\times f_{V_0}$.
Therefore, we conclude the following
\begin{thm}
Suppose $V$ is power associative. Let $k=\dim_\mathbb{C}\left(\red\mathbb{C}[x]\right)$ for any $x\in U$
as described above.
Then for any $p$, we have
$$
\deg_p (f_V^n) \asymp \left(2^{\min \{ p, k \}}\right)^n
$$
\end{thm}
\mbox{}\hfill\qed
\begin{ex}
Let $V=M_m(\mathbb{C})$ be the algebra of $m\times m$ complex matrices. For a matrix $A$ with $m$ distinct
eigenvalues $\mu_1,\ldots,\mu_m$, its characteristic polynomial is also the minimal polynomial.
Thus $\mathbb{C}[A]\cong \oplus_{i=1}^m \mathbb{C} [x] /( x-\mu_i)\cong \mathbb{C}^m$.
Notice that ``having $m$ distinct eigenvalues'' is a generic property in $M_m(\mathbb{C})$.
Therefore, for the matrix algebra $M_m(\mathbb{C})$, the number $k$ in the theorem is equal to $m$.
\end{ex}
\section{Generalization to rational maps}
\subsection{Polynomial maps}
Suppose $V$ is power associative and pick any polynomial $P\in \mathbb{C}[T]$.
Then one can look at the map $f_P(v) = P(v)$ on $V$ and compute the
degree growth of $f_P$ on the affine space $V$.
\begin{thm}
Suppose $V$ is power associative, and pick $P\in \mathbb{C}[T]$.
Then there exists $k$ such that for any $p$, we have
$$
\deg_p (f_P^n) \asymp \left(\deg(P)^{\min \{ p, k \}}\right)^n
$$
\end{thm}
\begin{proof}
By the same trick as in the previous section, we first treat the case that
$V$ is abelian and generated by one element. That is we assume
$$
V \cong \mathbb{C}[T] / \prod_{i=1}^l (T - z_i)^{m_i} \cong \prod_{i=1}^l \mathbb{C} [T_i] /( T_i^{m_i} )~.
$$
Since $f_P$ preserves each factor of the product decomposition above,
it is sufficient to treat the case $V = \mathbb{C}[x]/ (x^m)$.
A point in $V$ can then be written as $\sum_{i=0}^{m-1} {\lambda}_i x^i$
and if $P(T) = \sum_j a_j T^j$ we get
\begin{multline*}
P \left( \sum_{i=0}^{m-1} {\lambda}_i x^i \right) =
\sum_j
a_j \left( \sum_{i=0}^{m-1} {\lambda}_i x^i \right)^j
\\
=P(\lambda_0) + x \left[ \lambda_1 P'({\lambda}_0) \right] + x^2 \left[{\lambda}_2 P'({\lambda}_0) + {\frac 1 2} {\lambda}_1^2 P''({\lambda}_0) \right]+ \ldots \\
+ x^j \left[{\lambda}_j P'({\lambda}_0) + Q_j({\lambda}_0,{\lambda}_1, \ldots , {\lambda}_{j-1})\right] + \ldots \\
= P(\lambda_0) + \sum_{j=1}^{m-1} x^j \bigl( {\lambda}_j P'({\lambda}_0) + Q_j({\lambda}_0,{\lambda}_1, \ldots , {\lambda}_{j-1})\bigr)
\end{multline*}
In other words, we may write
\begin{align*}
f_V ( {\lambda}_0, {\lambda}_1 , \ldots , {\lambda}_{m-1}) = \bigl(\ & P(\lambda_0) , \lambda_1 P'({\lambda}_0), {\lambda}_2 P'({\lambda}_0) + \textstyle{\frac 1 2} {\lambda}_1^2 P''({\lambda}_0), \\
& \ldots , \lambda_j P'({\lambda}_0) + Q_j ({\lambda}_0,\ldots,{\lambda}_{j-1}) , \\
& \ldots , {\lambda}_{m-1} P'({\lambda}_0) + Q_{m-1}({\lambda}_0,{\lambda}_1, \ldots , {\lambda}_{m-2})\ \bigr)
\end{align*}
Observe that the constant term (i.e., the first coordinate of the function) is just $P({\lambda}_0)$, and for
$1\le j\le m-1$ the coefficient of $x^j$ (i.e., the $j$-th coordinate of the function)
has the following two properties:
\begin{itemize}
\item[(1)] it is a polynomial function of ${\lambda}_0,{\lambda}_1,\ldots,{\lambda}_j$;
\item[(2)] it is an affine function in ${\lambda}_j$.
\end{itemize}
If for $0\le i\le m-1$ we let $V^{(i)}=\mathbb{C}^{i+1}$ be the first $i+1$
coordinates of $V$, then by observation (1)
the map $f_P$ induces a selfmap on each $V^{(i)}$. Moreover, for $1\le i\le k-1$ let $\pi_i: V^{(i)}\to V^{(i-1)}$
be the projection, then $f_P|_{V^{(i)}}$ preserve the fibration $\pi_i$ and the map on a generic fiber is
a linear isomorphism of $\mathbb{P}^1$ by (2) above. Moreover, for each $i\ge 1$,
the function on the last coordinate
$(f_P)_i:V^{(i)}\to V^{(i)}/V^{(i-1)}\cong \mathbb{C}$,
as a function of ${\lambda}_0,\cdots,{\lambda}_{i-1}$, is of degree either
$\deg(P)-1$ (for $i=1$) or $\deg(P)$ (for $i\ge 2$).
When we pass to the $n$-th iterate, we can see that $f_P^n= f_{P^n}$ and $\deg(P^n)=\deg(P)^n$.
We then use Proposition~\ref{lem:deg_growth} repeatedly for $i=1,\cdots,m-1$.
Notice that by (1) above, the number
$\delta_1$ in Proposition~\ref{lem:deg_growth} always equals to one (for each $i$); by the previous paragraph,
the number $\delta_d$ in Proposition~\ref{lem:deg_growth} is asymptotic to $\deg(P)^n$.
This implies that the degree growth for each $p$ is indeed
\[
\deg_p (f_P^n) \asymp \deg(P)^{pn}.
\]
This is for the special case of $\mathbb{C}[x] \cong \mathbb{C}[T]/ (T^m)$. If $V = \mathbb{C}[x]$, then the same formula is true
by~\eqref{eq:product} since we have
$\mathbb{C}[x] \cong \prod_i \mathbb{C} [T_i] /( T_i^{m_i} )$.
Finally let $k$ be the dimension of $\red\mathbb{C}[x]$ for a generic $x\in V$.
By the same argument as in \S\ref{sec:pwassoc}, we know that for generic $x,y$, we have
$\mathbb{C}[x]\cong \mathbb{C}[y]$, and we can further use this fact to make $f_P$ a product map.
Therefore, we can conclude again that we have
\[
\deg_p (f_P^n) \asymp \left(\deg(P)^{\min \{ p, k \}}\right)^n.
\]
\end{proof}
\subsection{Rational maps}
Next, we claim that for a rational function $\varphi(T)=\frac{Q(T)}{P(T)}\in \mathbb{C}[T]$,
we have the same result for degree growth
for the induced map $f(v):=P(v)^{-1}Q(v)$ on $V$. We will show in a moment that for a generic $v\in V$,
$P(v)$ is invertible, thus $f$ induces a dominant rational map from $V$ to itself.
First, assume $V = \mathbb{C}[x]\cong \mathbb{C}[T]/(T^m)$, $v=\sum_{i=0}^{m-1} {\lambda}_i x^i$,
and $P(x) = \sum_i a_i x^i$.
We get
\[
P(v)
= P(\lambda_0) + \sum_{j=1}^{m-1} x^j \bigl( {\lambda}_j P'({\lambda}_0) + Q_j({\lambda}_0,{\lambda}_1, \ldots , {\lambda}_{j-1})\bigr)
\]
If $P({\lambda}_0)\ne 0$, which is the generic case, then
$P(v)-P({\lambda}_0)\in xV$ is nilpotent. Thus $P(v)$ is invertible, and its inverse is given by
\begin{align*}
P(v)^{-1}
&=& \left\{P(\lambda_0)\left(1+\sum_{j=1}^{m-1} x^j
\cdot \frac{{\lambda}_j P'({\lambda}_0) + Q_j({\lambda}_0,{\lambda}_1, \ldots , {\lambda}_{j-1})}
{P({\lambda}_0)}\right)\right\}^{-1}\\
&=& P(\lambda_0)^{-1}\cdot \sum_{i=0}^{m-1}\left(-\sum_{j=1}^{m-1} x^j
\cdot \frac{{\lambda}_j P'({\lambda}_0) + Q_j({\lambda}_0,{\lambda}_1, \ldots , {\lambda}_{j-1})}{P({\lambda}_0)}\right)^i
\end{align*}
Expanding the last line, we get a polynomial expression in $x$.
In order to find the coefficient for $x^j$, we observe that in the expansion, $x^j$ can be formed by
products of terms coming from $x^{j_1}, \ldots, x^{j_\ell}$ with $j_1+\ldots + j_\ell=j$. If, say $j_1=j$, then all the others satisfy $j_i=0$.
However, since $j$ starts from $1$ in the sum, this
means we are looking at the term of $x^j$ in the linear term $i=1$,
and the contribution of the coefficient from that product is
$${\lambda}_j \, \frac{-P'({\lambda}_0)}{P({\lambda}_0)^2}+\frac{-Q_j({\lambda}_0,\cdots,{\lambda}_{j-1})}{P({\lambda}_0)^2}~.$$
If all $j_i < j$, then the contribution of the coefficient
from that product is a polynomial in ${\lambda}_1,\ldots,{\lambda}_{j-1}$ for a generic ${\lambda}_0$, and
is rational in ${\lambda}_0,{\lambda}_1,\cdots,{\lambda}_{j-1}$. More precisely, the contribution from that
product is of the form
\[
\frac {\widetilde{Q}_j({\lambda}_1,\cdots,{\lambda}_{j-1})}{P({\lambda}_0)^\ell},
\]
and since $1\le \ell\le m$,
the degree of this rational function is equivalent to $\deg(P)$ asymptotically.
Therefore, we conclude that the coefficient for $x^j$ is a linear function in ${\lambda}_j$
and is a rational function in ${\lambda}_0,\cdots,{\lambda}_{j}$ of degree asymptotically equivalent to
$\deg(P)$.
Furthermore, if we expand the product
$f_\varphi(v) = P(v)^{-1}Q(v)$ and look at the coefficient of $x^j$,
then the argument in the previous paragraph can also be applied. We have that the constant term
is $Q({\lambda}_0)/P({\lambda}_0)$. For $1\le j\le m-1$, the coefficient of $x^j$
is a rational function in ${\lambda}_0,\cdots,{\lambda}_j$ of degree asymptotic equivalent to
$\deg(\varphi):=\max\{\deg(P),\deg(Q)\}$,
and for generic ${\lambda}_0$ it
is a linear function of ${\lambda}_j$. Thus using the same notation as in the polynomial case,
$f$ induces a selfmap on each $V^{(i)}$ and for $2\le i\le m$, $f_P|_{V^{(i)}}$ preserve the
fibration $\pi_i$ and the map on a generic fiber is a linear isomorphism of $\mathbb{P}^1$.
Moreover, we have $f_{\varphi}^n=f_{\varphi^n}$, where $\varphi^n$ means we iterate the single
variable rational map $\varphi(T)$ for $n$ times. Also, notice that
$\deg(\varphi^n)=\deg(\varphi)^n$.
Thus we can use Proposition~\ref{lem:deg_growth} on each $\pi_i$ inductively again. To conclude, we obtain
in this case that $\mathbb{C}[x] \cong \mathbb{C}[T]/ (T^m)$, and we have
$\deg_p (f_\varphi^n) \asymp \max\{ \deg(Q), \deg(P)\}^{pn}$.
In general, we can write
$\mathbb{C}[x] \cong \prod_i \mathbb{C} [T_i] /( T_i^{m_i} )$, and for generic $x,y$, we have
$\mathbb{C}[x]\cong \mathbb{C}[y]$, and $f_P$ is birational to a product map.
Therefore, we can conclude again that for $k=\dim_\mathbb{C}\left(\red\mathbb{C}[x]\right)$, where $x$
is generic in $V$, for any $p$, we have
$$
\deg_p (f_\varphi^n) \asymp \left( \max\{ \deg(Q), \deg(P)\}^{\min \{ p, k \}}\right)^n
$$
This completes the proof of Theorem A.\hfill\qed
\section{Maps of several variables}
In this section, we will assume that $V$ is an abelian (i.e., commutative, associative and unitary)
$\mathbb{C}$-algebra, and $\dim_{\mathbb{C}}(V)=k$.
Take any dominant rational map
\[
f=(f_0:\cdots:f_d):\mathbb{C}^d\dashrightarrow\mathbb{C}^d,
\]
where each $f_j$ is a rational function.
Interpreting the multiplication and inverse in $f_j$ as multiplication and inverse in $V$
(notice that a generic element in $V$ is invertible),
$f$ also induces a rational map $F:V^d\dashrightarrow V^d$.
As we saw in Section~\ref{sec:square_abelian}, $V$ is Artinian and can be factored as
a product $V=\prod_{i=1}^m V_i$ of local Artinian $\mathbb{C}$-algebras. Let ${\mathfrak{m}}_i$ be the
maximal ideal of $V_i$, $\pi_i:V_i\to V_i/{\mathfrak{m}}_i\cong\mathbb{C}$ be the quotient map, and
$F_i:V_i^d\dashrightarrow V_i^d$ be the component of $F$ on $V_i$.
Then one sees that the fibration defined by
$\prod\pi_i:V_i^d\to\mathbb{C}^d$ is preserved by $F_i$, and
the induced map on the base $\mathbb{C}^d$ is exactly $f$. That is, the following diagram of maps
is commutative.
\[
\xymatrix{
V_i^d \ar@{-->}[rr]^{F_i}\ar[d]_{\prod\pi_i} && V_i^d \ar[d]^{\prod\pi_i} \\
\mathbb{C}^d \ar@{-->}[rr]_{f} && \mathbb{C}^d
}
\]
Coming back to $V$, we can in fact describe the fibration more succinctly using
the quotient map $\pi: V\to V/N(V)\cong\mathbb{C}^m$. That is, the fibration is given by
$\prod_{i=1}^k\pi : V^d \to (V/N(V))^d\cong\mathbb{C}^{md}$, and $F$ is preserving the fibration in
the sense that the following diagram is commutative.
\[
\xymatrix{
V^d \ar@{-->}[rr]^{F}\ar[d]_{\prod\pi} && V^d \ar[d]^{\prod\pi} \\
\mathbb{C}^{md} \ar@{-->}[rr]_{\prod f} && \mathbb{C}^{md}
}
\]
In particular, if $N(V)\ne (0)$, then $\prod\pi$ is a fibration with positive dimensional fibers.
In the following, we show that
in the special case of monomial maps, $F$ is moreover a product map.
\subsection{Generalized monomial maps}
Given an $d\times d$ integer matrix
$A=(a_{ij})\in M_d(\mathbb{Z})$ with $\det(A)\ne 0$, we define the (generalized) monomial map
$F_A:V^d \dashrightarrow V^d$ as
\[
\textstyle{ F_A(x_1,\cdots,x_d)=\left(\prod_j x_j^{a_{1,j}},\cdots ,\prod_j x_j^{a_{d,j}} \right), }
\]
where $x_j\in V$. We then compactify $V^d\subset \mathbb{P}^{dk}$ and lift the monomial map
as $F_A:\mathbb{P}^{dk}\dashrightarrow \mathbb{P}^{dk}$.
The goal of this section is to prove Theorem B. That is,
we will compute the degree growth of the generalized
monomial map $F_A$. The method we use is similar to Section~\ref{sec:square_abelian} when we
dealt with the squaring map on an abelian algebra.
Now $V$ is finite dimensional as a complex vector space, hence is Artinian as a $\mathbb{C}$-algebra,
hence can be decomposed as
a finite direct product of Artinian local rings $V \simeq \prod_{i=1}^m V_i$.
When $V$ is an Artin local ring with maximal ideal ${\mathfrak{m}}$, then ${\mathfrak{m}}$ is nilpotent,
i.e. ${\mathfrak{m}}^l=0$ for some $l\ge 1$, and $V/{\mathfrak{m}}\simeq \mathbb{C}$.
As a $\mathbb{C}$-vector space, we can write $V$ as the
direct sum $V\simeq \mathbb{C}\oplus {\mathfrak{m}}$, where ${\mathfrak{m}}$ is the maximal ideal of $V$.
Introduce the map
\[
\Phi : \mathbb{C} \oplus {\mathfrak{m}} \longrightarrow V
\]
sending $(a,h)$ to $a\cdot \exp(h)$, where $\exp(h)=\sum_{i=0}^\infty h^i/i!$.
The function $\exp(h)$ is well defined since the sum is indeed finite, as we explained before.
Moreover, $\Phi$ is a birational map with birational inverse
\[
\Phi^{-1}(a,h)= a\cdot \log(1+a^{-1}h).
\]
Here $\log(1+x)=\sum_{i=1}^\infty (-1)^{i-1}x^i/i$. Notice that the $\Phi^{-1}$ above defines a map $\mathbb{C} \oplus {\mathfrak{m}} \longrightarrow V$, but using the identification $V\simeq\mathbb{C} \oplus {\mathfrak{m}} $ (as vector spaces) we can interpret it as a map $V \longrightarrow \mathbb{C} \oplus {\mathfrak{m}} $. Moreover, one can check that it indeed is the rational inverse of $\Phi$ when we interpret it this way.
For $d$ copies of $V$, we have, as $\mathbb{C}$-vector spaces, $V^d \simeq \mathbb{C}^d \oplus {\mathfrak{m}}^{\oplus d}$. We use
${\mathfrak{m}}^{\oplus d}$ to stress that it is the direct sum of $d$ copies of ${\mathfrak{m}}$, not the usual
power of ideal ${\mathfrak{m}}^d$.
Then, the map $F_A$, after conjugating $\Phi$, becomes a product of a monomial map and a linear map, i.e.,
\[
F_A\circ \Phi = \Phi\circ ( f_A , T_A),
\]
where $f_A:\mathbb{C}^d\dashrightarrow\mathbb{C}^d$ is the usual monomial map on $\mathbb{C}^d$ induced by $A$, and $T_A$ is the
linear map given by $A$ on the vector space ${\mathfrak{m}}^{\oplus d}$.
The linear map $T_A$ has degree $1$ in codimension $0\le j\le \dim({\mathfrak{m}}^{\oplus d}) = d(k-1)$, and degree $0$ for $j > d(k-1)$. The degree growth of the usual monomial map $f_A$ is shown
as
\[
\deg_p(f_A) \asymp \left\| \wedge^{p} A^n \right\|,
\]
where $\|\cdot\|$ is any norm (for the proof of this result, see \cite{FW,L}).
Therefore, by \eqref{eq:product}, the degree growth of the map $F_A$ can be described as
\[
\deg_p(F_A^n) \asymp \max_{i+j=p} \{ \deg_i(f_A^n)\deg_j(T_A^n) \}
\asymp \max_{p+d-dk\le i\le p}
\left\| \wedge^{i} A^n \right\|.
\]
Here, we use the convention that negative exterior product is zero.
In the general case, we have $V \simeq \prod_{i=1}^m V_i$ with each $V_i$ being an Artin local ring.
Hence $F_A:V^d\dashrightarrow V^d$ is a product of maps $F_{A,i}:V_i^d\dashrightarrow V_i^d$, and each
$F_{A,i}$ has a further product structure as explained above.
The map $F_A$ is the again a product of usual monomial maps and a linear map.
The monomial map maps $\mathbb{C}^{md}\to\mathbb{C}^{md}$ and is associated to the matrix
\[
\left(
\begin{matrix}
A & 0 & \cdots & 0 \\
0 & A & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A
\end{matrix}
\right).
\]
Let $\diag(A;m)$ denote the above block-diagonal matrix with
$m$ blocks of the matrix $A$ on the diagonal positions. The degree growth behavior of $F_A$ is
therefore as follows.
\[
\deg_p(F_A^n) \asymp \max_{p-d(k-m)\le i\le p} \left\| \wedge^{i} \diag(A;m)^n \right\|.
\]
Notice that the number $m$ is the number of copies of $V_i$'s in the decomposition for $V$.
Another way to write $m$ is as
$m=\dim_\mathbb{C}(\red(V))$, where $\red(V)=V/N(V)$
and $N(V)$ is the nilradical of $V$, i.e., the ideal of all nilpotent elements.
This completes the proof of Theorem B.
\begin{bibdiv}
\begin{biblist}
\bib{AABM}{article}{
author={Abarenkova, N.},
author={Angl{\`e}s d'Auriac, J.-Ch.},
author={Boukraa, S.},
author={Maillard, J.-M.},
title={Growth-complexity spectrum of some discrete dynamical systems},
journal={Phys. D},
volume={130},
date={1999},
number={1-2},
pages={27--42},
issn={0167-2789},
}
\bib{AMV}{article}{
author={Angl{\`e}s d'Auriac, J.-Ch.},
author={Maillard, J.-M.},
author={Viallet, C. M.},
title={On the complexity of some birational transformations},
journal={J. Phys. A},
volume={39},
date={2006},
number={14},
pages={3641--3654},
issn={0305-4470},
}
\bib{AM}{book}{
author={Atiyah, M. F.},
author={Macdonald, I. G.},
title={Introduction to commutative algebra},
publisher={Addison-Wesley Publishing Co., Reading, Mass.-London-Don
Mills, Ont.},
date={1969},
pages={ix+128},
}
\bib{EB}{article}{
author={Bedford, Eric},
title={The dynamical degrees of a mapping},
conference={
title={Proceedings of the Workshop Future Directions in Difference
Equations},
},
book={
series={Colecc. Congr.},
volume={69},
publisher={Univ. Vigo, Serv. Publ., Vigo},
},
date={2011},
pages={3--13},
review={\MR{2905566 (2012m:37080)}},
}
\bib{BK1}{article}{
author={Bedford, Eric},
author={Kim, Kyounghee},
title={On the degree growth of birational mappings in higher dimension},
journal={J. Geom. Anal.},
volume={14},
date={2004},
number={4},
pages={567--596},
issn={1050-6926},
}
\bib{BK2}{article}{
author={Bedford, Eric},
author={Kim, Kyounghee},
title={Degree growth of matrix inversion: birational maps of symmetric,
cyclic matrices},
journal={Discrete Contin. Dyn. Syst.},
volume={21},
date={2008},
number={4},
pages={977--1013},
issn={1078-0947},
}
\bib{BHM}{article}{
author={Boukraa, S.},
author={Hassani, S.},
author={Maillard, J.-M.},
title={Noetherian mappings},
journal={Phys. D},
volume={185},
date={2003},
number={1},
pages={3--44},
issn={0167-2789},
}
\bib{CeDe}{article}{
author={Cerveau, Dominique},
author={D{\'e}serti, Julie},
title={It\'eration d'applications rationnelles dans les espaces de
matrices},
journal={Conform. Geom. Dyn.},
volume={15},
date={2011},
pages={72--112},
issn={1088-4173},
}
\bib{NB}{article}{
author={Dang, Nguyen Bac},
title={Degrees of iterates of rational maps on normal projective varieties},
status={preprint}
}
\bib{DF}{article}{
author={Diller, J.},
author={Favre, C.},
title={Dynamics of bimeromorphic maps of surfaces},
journal={Amer. J. Math.},
volume={123},
date={2001},
number={6},
pages={1135--1169},
issn={0002-9327},
}
\bib{DN}{article}{
author={Dinh, Tien-Cuong},
author={Nguy{\^e}n, Vi{\^e}t-Anh},
title={Comparison of dynamical degrees for semi-conjugate meromorphic
maps},
journal={Comment. Math. Helv.},
volume={86},
date={2011},
number={4},
pages={817--840},
issn={0010-2571},
}
\bib{DNT}{article}{
author={Dinh, Tien-Cuong},
author={Nguy{\^e}n, Vi{\^e}t-Anh},
author={Truong, Tuyen Trung},
title={On the dynamical degrees of meromorphic maps preserving a
fibration},
journal={Commun. Contemp. Math.},
volume={14},
date={2012},
number={6},
pages={1250042, 18},
issn={0219-1997},
}
\bib{DS}{article}{
author={Dinh, Tien-Cuong},
author={Sibony, Nessim},
title={Une borne sup\'erieure pour l'entropie topologique d'une
application rationnelle},
journal={Ann. of Math. (2)},
volume={161},
date={2005},
number={3},
pages={1637--1644},
issn={0003-486X},
}
\bib{FJ}{article}{
author={Favre, Charles},
author={Jonsson, Mattias},
title={Dynamical compactifications of ${\bf C}^2$},
journal={Ann. of Math. (2)},
volume={173},
date={2011},
number={1},
pages={211--248},
issn={0003-486X},
}
\bib{FW}{article}{
author={Favre, Charles},
author={Wulcan, Elizabeth},
title={Degree growth of monomial maps and McMullen's polytope algebra},
journal={Indiana Univ. Math. J.},
volume={61},
date={2012},
number={2},
pages={493--524},
issn={0022-2518},
}
\bib{L}{article}{
author={Lin, Jan-Li},
title={Pulling back cohomology classes and dynamical degrees of monomial
maps},
journal={Bull. Soc. Math. France},
volume={140},
date={2012},
number={4},
pages={533--549 (2013)},
issn={0037-9484},
}
\bib{Ng}{article}{
author={Nguy{\^e}n, Vi{\^e}t-Anh},
title={Algebraic degrees for iterates of meromorphic self-maps of ${\mathbb
P}^k$},
journal={Publ. Mat.},
volume={50},
date={2006},
number={2},
pages={457--473},
issn={0214-1493},
}
\bib{Og}{article}{
author={Oguiso, Keiji},
title={A remark on dynamical degrees of automorphisms of hyperk\"ahler
manifolds},
journal={Manuscripta Math.},
volume={130},
date={2009},
number={1},
pages={101--111},
issn={0025-2611},
}
\bib{PiRu1}{article}{
author={Pirio, Luc},
author={Russo, Francesco},
title={Quadro-quadric Cremona maps and varieties 3-connected by cubics: semi-simple part
and radical},
journal={International Journal of Mathematics},
volume={24},
number={13},
date={2013},
pages={1350105},
}
\bib{PiRu2}{article}{
author={Pirio, Luc},
author={Russo, Francesco},
title={Quadro-quadric Cremona transformations in low dimensions via the JC-correspondence},
journal={Ann. Inst. Fourier},
volume={64},
date={2014},
pages={71-111},
}
\bib{PiRu3}{article}{
author={Pirio, Luc},
author={Russo, Francesco},
title={The XJC-correspondence},
journal={Journal f\"ur die reine und angewandte Mathe-
matik},
volume={716},
date={2016},
pages={229--250},
}
\bib{MR1488341}{article}{
author={Russakovskii, Alexander},
author={Shiffman, Bernard},
title={Value distribution for sequences of rational mappings and complex
dynamics},
journal={Indiana Univ. Math. J.},
volume={46},
date={1997},
number={3},
pages={897--932},
issn={0022-2518},
}
\bib{Tru}{article}{
author={Truong, Tuyen Trung},
title={(Relative) dynamical degrees of rational maps over an algebraic closed field},
eprint={arXiv:1501.01523 [math.AG]}
}
\bib{Usn}{article}{
author={Usnich, Alexander},
title={A discrete dynamical system acting on pairs of matrices},
journal={Dokl. Nats. Akad. Nauk Belarusi},
volume={53},
date={2009},
number={3},
pages={21--24, 124},
issn={0002-354X},
}
\end{biblist}
\end{bibdiv}
\end{document}
|
1,108,101,564,122 | arxiv | \section{Introduction}
\label{sec:intro}
Toda field theories in two dimensions are described
by the characteristic exponential type interactions.
One can not naively apply ordinary quantization
procedure for them since there is no vacuum for finite
field configurations and the couplings between the
fluctuations around any classical background tend to
zero as it approaches to the potential minima. In spite
of such a seemingly complicated feature, they are
completely integrable and the exact solution at the
classical level is known \cite{ls}. It can be
expressed in terms of free fields which are
related to Toda fields by canonical
transformation. These free fields can be regarded as
normal modes of the interacting fields. Thus the
quantum theory of Toda fields can be defined by
imposing canonical commutation relations on the
free fields.
The simplest Toda theory associated with the Lie
algebra $A_1$ is Liouville theory, for which extensive
canonical approaches have been developed so far \cite{gn,bcgt}.
There are two different operator approaches
for quantum Liouville theory. The one advocated by
Gervais and Neveu \cite{gn} is based on free field
realization of stress tensor.
Basic building blocks are chiral vertex operators
or Bloch wave solutions, which satisfy characteristic
exchange algebra \cite{gn84} related with $U_q(sl_2)$ quantum
group symmetry \cite{Babe,cgr}. Liouville exponential operators
can be constructed from them correspondingly to
half-integral spin representation of $U_q(sl_2)$.
We refer to this approach as chiral scheme since left- and
right-moving variables are taken as independent.
The other approach
was proposed by Braaten, Curtright and Thorn \cite{bcgt},
who noted that the B\"acklund transformation
relating Liouville field with free one is a canonical
mapping and it can be used to define quantum theory as
mentioned above. They gave exact expressions of some
basic operators of the theory. In this approach left-
and right-moving variables are not independent due to
symmetrical assignment of zero-mode variables of the
canonical free fields. We refer to this as vector scheme.
Construction of Liouville exponential operators of
arbitrary conformal weights were investigated by Otto and
Weigt \cite{ow} basically along the line of thought of
ref. \cite{bcgt}
but with a different parametrization of the
classical solution \cite{dhj}. Assuming most general expansion
of Liouville exponential operators as power series of
screening charge consistently with conformal invariance,
they showed that the Liouville exponential operators are
determined almost uniquely by imposing locality condition.
They carried out the analysis to third order in cosmological
constant and inferred exact operator solution, which can
be interpreted as a quantum deformation of
the classical expressions. Gervais and Schnittger \cite{gs93,gs94}
arrived at exact solution by extending the formalism of
chiral vertex operators \cite{cgr} to continuous spin representation.
They made detailed comparision between the chiral and
vector schemes and established the equivalence of their
operator solution with the conjecture given in ref.
\cite{ow}.\footnote{We thank J. Schnittger for useful
comments on this point.} Three of the present authors
also carried out the analysis within the vector scheme
by applying the algebraic method developed in ref. \cite{gs93}
and reconfirmed the operator solution to all order
of cosmological constant \cite{fit96}.
It is very natural to expect that similar development
can also be achieved for quantum Toda field theories.
Such is of special interest since they incorporate
$W$-symmetry \cite{wsymm} and serve as gravity sector of
extended conformal field theories. In fact
Toda systems have been investigated extensively with
emphasis on their extended conformal structures
\cite{Babe,bg,bg89,bfo,wgeom}.
In particular Bilal and Gervais \cite{bg,bg89} have shown that
the apporach of chiral vertex operators developed for
Liouville theory can be extended to Toda theories.
Construction of Toda exponential operators of arbitrary
conformal weights as in Liouville theory, however, has
not been achieved for general Toda
theories.\footnote{Fedoseev and Leznov \cite{fl} have
investigated generalized Toda system within vector scheme
and have consturcted exponential operators of particular
types.}
Three of the present authors have recently extended the
approach of ref. \cite{ow} to $A_2$-Toda system and obtained
arbitrary Toda exponential operators to fourth order in
the cosmological constant by analyzing directly the
locality conditons within vector scheme, giving a
conjecture for exact operator solution of $A_2$-Toda
theory \cite{fit98}. The method is rather restricted to $A_2$ case
and becomes intractable as one goes to higher orders
though the basic ideas undelying the arguments presented
there could equally well be applied for general Toda
systems. The purpose of this paper is to investigate
it from a more general setting that is not specialized
to a particular Toda system as possible as we can, and to
generalize the algebaric method
developed for Liouville theory \cite{gs93,fit96} to higher rank
systems containing more than one screening charges.
Since chiral approach is more convenient
than vectorial one, we shall show that a chiral structure
can be defined for Toda systems written in vector scheme
without introducing any extra degrees of freedom, and
mostly work in the chiral schematic description. This not
only explains the equivalence between the two approaches
in a direct manner but also enables us to restate the
locality of Toda exponential operators as a property of
the ${\cal R}$-matrix \cite{gn84,gs93}. The canonical commutation
relations will turn out to be a straightforward consequence
of the locality.
As stressed in ref. \cite{fit98}, it is very convenient to consider
Toda exponential operators associated with the fundamental
weights. They contain screening charges and a free
field vertex operator which are mutually commuting, hence
no ordering problem arise in the operator products. A full
set of those exponential operators is sufficient to
reconstruct not only arbitrary exponential operators
but also the local Toda field operator itself.
The generalization of the algebraic method of ref. \cite{fit96} to
$A_2$-system is not straightforward due to the fact that
the exchange algebra of the screening charges can not be
realized by a simple quantum mechanical system as in
Liouville theory. We need to truncate the full screening
charges to solve the locality constraints so that the
algebra of the truncated operators allows a quantum
mechanical realization. It will be shown that this can be
done and the Toda exponential operators can be determined
up to a constant related to the arbitrariness of the
cosmological constant. General exponential operators such
as the Toda potential can be obtained as operator products
from these operators. That the local Toda field satisfies
the operatorial field equation can be established
in a straighforward way. Strictly speaking, the
locality should be reexamined with
the operator solution with the full screening charges
since one can not a priori expect locality to be built-in.
It turns out to be a rather hard problem to check this
directly with the full operator expression. One way to
achieve this is to establish the property of the
${\cal R}$-matrix from which the locality follows. This
can be done explicitly for Liouville theory and partially
for $A_2$-Toda case. We think it to be a technical aspect
of the theory and will be solved positively.
This paper is organized as follows: In sect. 2 we argue
canonical structure of classical Toda theories. Chiral
schematic description of vectorial theories is introduced.
Canonicity of the mapping between Toda system and the free
theory defined by the classical solution is formulated
as constraints for the $r$-matirx, which are examined
explicitly for $A_1$ and $A_2$. Free field quantization of
$A_2$-Toda field is described in sect. 3. It is shown that
locality leads to nontrivial relations between the
${\cal R}$-matrix and the coefficients appearing in the
Toda exponential operator. That the locality automatically
guarantees the canonical commutation relations is also
argued. In sect. 4 the exponential operators of
$A_2$-system are obtained by extending the algebraic
approach of ref. \cite{fit96}. Sect. 5 deals with the operatorial
field equation. Closed forms of the ${\cal R}$-matrix
and their property discussed in sect. 3 are established
for some special cases in sect. 6. The final section is
devoted to discussions. In Appendices A and B we summarize
Poisson brackets among the screening charges, quadratic
Poisson algebra satisfied by the chiral fields of
$A_2$-system and the basic quantum exchange algebra.
Appendix C provides an elementary proof for a function
introduced in sect. 6.
\section{Canonical structure of classical Toda field theory}
\label{sec:classical toda}
Though we will be mostly concerned with $A_2$-Toda field theory, we
begin with some discussions on the exact solution of classical Toda
theories associated with an arbitrary simple Lie algebra ${\cal G}$
of rank $r$. It is described by the action
\begin{eqnarray}
\label{cl-act}
S=\frac{1}{\gamma^2}\int_{-\infty}^{+\infty}d\tau
\int_0^{2\pi}d\sigma \Biggl(\frac{1}{2}\partial_\alpha\varphi\cdot
\partial^\alpha\varphi-\mu^2\sum_{a=1}^r
{\rm e}^{\alpha^a\cdot\varphi}\Biggr) ~,
\end{eqnarray}
where $\varphi$ is the $r$-component Toda field and $\alpha^a$
($a=1,\cdots,r$) stand for the simple roots of ${\cal G}$. The coupling
constant $\gamma$ is a free parameter, which is determined by the
requirement of conformal invariance in the presence of conformal
matters.
The Toda field equations
\begin{eqnarray}
\label{teq}
\partial_\alpha\partial^\alpha\varphi-\mu^2\sum_{a=1}^r \alpha^a
{\rm e}^{\alpha^a\cdot \varphi}=0
\end{eqnarray}
can be solved exactly by Lie algebraic method \cite{ls,Babe,btb}. We
work in the Cartan-Weyl
basis and denote the generators of the Cartan subalgebra of ${\cal G}$ by
$H_k$ ($k=1,\cdots,r$) and the step operators corresponding to the simple
root $\alpha^a$ by $E_{\pm\alpha^a}$. They satisfy $[H,E_{\pm\alpha^a}]=
\pm\alpha^aE_{\pm\alpha^a}$, $[E_{\alpha^a},E_{-\alpha^b}]=\delta^{ab}\alpha^a
\cdot H$, where $H$ is an $r$-component vector with $H_k$ as the $k$-th
component. Let $\lambda^a$ ($a=1,\cdots,r$) be the fundamental weights
satisfying $2\lambda^a\cdot\alpha^b/(\alpha^b)^2=\delta^{ab}$ and
$|\lambda^a\rangle$ be the corresponding highest weight vector characterized
by $H|\lambda^a\rangle=\lambda^a|\lambda^a\rangle$,
$E_{\alpha^b}|\lambda^a\rangle=0$. We also introduce the adjoint vectors
$\langle\lambda^a|$ satisfying $\langle\lambda^a|\lambda^b\rangle
=\delta^{ab}$, $\langle\lambda^a|H=\alpha^a\langle\lambda^a|$ and
$\langle\lambda^a|E_{-\alpha^b}=0$. Then the exact solution to (\ref{teq})
is given by
\begin{eqnarray}
\label{exacsol}
{\rm e}^{\lambda^a\cdot\varphi(x)}=\frac{{\rm e}^{\lambda^a\cdot\psi(x)}}{%
\langle\lambda^a|M_+(x^+)M_-^{-1}(x^-)|\lambda^a\rangle}~,
\end{eqnarray}
where $\psi(x)=\psi_+(x^+)+\psi_-(x^-)$ with $\psi_\pm(x^\pm)$ being
arbitrary functions of the light-cone variables $x^\pm=\tau\pm\sigma$.
It is identified with the canonical
free field. $M_\pm(x^\pm)$ are defined by the
differential equations
\begin{eqnarray}
\label{meq}
\partial_\pm M_\pm(x^\pm)=\mp\frac{\mu}{2}\sum_{a=1}^r
V_a^\pm(x^\pm)E_{\pm\alpha^a}M_\pm(x^\pm)~,
\end{eqnarray}
where $V^\pm_a(x^\pm)={\rm e}^{\alpha^a\cdot\psi_\pm(x^\pm)}$ are the
classical vetices.
To ensure the periodic boundary condition on $\varphi$, we assume the
same periodicity for $\psi$. The left- and the right-moving modes
$\psi_\pm$, however, are not $2\pi$ periodic. To see this, we note the
normal mode expansions
\begin{eqnarray}
\label{nme}
\psi_\pm(x^\pm)=\frac{\gamma}{2}Q+\frac{\gamma}{4\pi}Px^\pm
+\frac{i\gamma}{\sqrt{4\pi}}\sum_{n\neq 0}\frac{1}{n}a_n^{(\pm)}
{\rm e}^{-inx^\pm} ~.
\end{eqnarray}
Then $\psi_\pm$ satisfy the periodicity $\psi(x^\pm\pm2\pi)
=\psi_\pm(x^\pm)\pm\displaystyle{\frac{\gamma}{2}P}$. To establish the
$2\pi$ periodicity of $\varphi$, we need to show the invariance
of the denominator of the rhs of (\ref{exacsol}). This can be seen from
the periodicity of $M_\pm$
\begin{eqnarray}
\label{mperiod}
M_\pm(x^\pm\pm2\pi)={\rm e}^{\frac{\gamma}{2}P\cdot H}M_\pm(x^\pm)
{\rm e}^{-\frac{\gamma}{2}P\cdot H}~,
\end{eqnarray}
which can be deduced from (\ref{meq}).
We can integrate (\ref{meq}) iteratively by noting the periodicity
(\ref{mperiod}) as
\begin{eqnarray}
\label{mpm}
M_+(x^+)&=&\sum_{n=0}^\infty\Biggl(-\frac{\mu}{2}\Biggr)^n
\sum_{a_1,\cdots,a_n}A_{a_1\cdots a_n}(x^+)
E_{\alpha^{a_1}}\cdots E_{\alpha^{a_n}}~, \\
M_-^{-1}(x^-)&=&\sum_{n=0}^\infty\Biggl(-\frac{\mu}{2}\Biggr)^n
\sum_{a_1,\cdots,a_n}B_{a_1\cdots a_n}(x^-)
E_{-\alpha^{a_n}}\cdots E_{-\alpha^{a_1}}~,
\end{eqnarray}
where the screening charge $A_{a_1\cdots a_n}$ is defined
\begin{eqnarray}
\label{a}
A_{a_1\cdots a_n}(x)\hskip -.2cm&=&\hskip -.2cm
C_{\alpha^{a_1}+\cdots+\alpha^{a_n}}
C_{\alpha^{a_2}+\cdots+\alpha^{a_n}}\cdots C_{\alpha^{a_n}}
\int_0^{2\pi}\hskip -.15cm dy_1
{\cal E}_{\alpha^{a_1}+\cdots+\alpha^{a_n}}(x-y_1)
V^+_{a_1}(y_1) \nonumber\\
\hskip -.2cm&&\hskip -.2cm \times \int_0^{2\pi}\hskip -.15cm dy_2
{\cal E}_{\alpha^{a_2}+\cdots+\alpha^{a_n}}(y_1-y_2)V^+_{a_2}(y_2)
\cdots\int_0^{2\pi\hskip -.15cm }dy_n{\cal E}_{\alpha^{a_n}}(y_{n-1}-y_n)
V^+_{a_n}(y_n)~.
\end{eqnarray}
In (\ref{a}) we have introduced for an arbitrary vector $\beta$ in the
root space
\begin{eqnarray}
\label{ce}
C_\beta=\Biggl(2{\rm sinh}\frac{\gamma}{4}\beta\cdot P\Biggr)^{-1}~, \qquad
{\cal E}_\beta(x)=\exp \frac{\gamma}{4} \beta\cdot P \epsilon(x)
\end{eqnarray}
with $\epsilon(x)$ being stair-step function defined by $\epsilon(x)=
1$ for $0<x<2\pi$ and $\epsilon(x+2\pi)=\epsilon(x)+2$. The screening
charges satisfy the quasi-periodicity
\begin{eqnarray}
\label{quasip}
A_{a_1\cdots a_n}(x^++2\pi)={\rm e}^{\frac{\gamma}{2}(\alpha^{a_1}
+\cdots+\alpha^{a_n})\cdot P}A_{a_1\cdots a_n}(x^+) ~.
\end{eqnarray}
The screening charges in the right-moving sector $B_{a_1\cdots a_n}$
are obtained by replacing $V_a^+$ with $V_a^-$ in the rhs of (\ref{a}). One
can verify from (\ref{a}) and (\ref{ce}) that $A_{a_1\cdots a_n}$ satisfy
the differential equations
\begin{eqnarray}
\label{aseq}
\partial_+A_a(x^+)=V_a^+(x^+)~, \qquad
\partial_+A_{a_1\cdots a_n}(x^+)=V_{a_1}^+(x^+)A_{a_2\cdots a_n}(x^+)~.
\end{eqnarray}
That $M_+$ satisfies (\ref{meq}) can be seen easily from (\ref{mpm}) and
(\ref{aseq}). Similar argument also applies to $B_{a_1\cdots a_n}$ and
$M_-$.
Putting (\ref{mpm}) into the rhs of (\ref{exacsol}), we can express
the classical solution in the following form
\begin{eqnarray}
\label{exacsol2}
{\rm e}^{\lambda^a\cdot\varphi(x)}=\frac{{\rm e}^{\lambda^a\cdot\psi(x)}}{%
1+\displaystyle{\sum_{n=1}^{\infty}\Biggl(\frac{\mu^2}{4}\Biggr)^n
\sum_{\{a\}_n,\{b\}_n}C^a_{\{a\}_n;\{b\}_n}
A_{\{a\}_n}(x^+)B_{\{b\}_n}(x^-)}}~,
\end{eqnarray}
where $\{a\}_n$ stands for the set of $n$ ordered indices $a_1\cdots a_n$ and
$C^a_{\{a\}_n;\{b\}_n}$ are numerical constants given by
\begin{eqnarray}
\label{C}
C^a_{\{a\}_n;\{b\}_n}=\langle\lambda^a|E_{\alpha^{a_1}}\cdots
E_{\alpha^{a_n}}
E_{-\alpha^{b_n}}\cdots E_{-\alpha^{b_1}}|\lambda^a\rangle ~.
\end{eqnarray}
These coefficients are determined only from the Lie algebra ${\cal G}$
and vanish for sufficiently large $n$ due to the finite dimensionality
of the fundamental representations. Furthermore, they are symmetric, i.e.,
$C^a_{\{a\}_n;\{b\}_n}=C^a_{\{b\}_n;\{a\}_n}$, and satisfy
\begin{eqnarray}
\label{C0}
C^a_{\{a\}_n;\{b\}_n}=0 \quad {\rm unless}\quad
\alpha^{a_1}+\cdots+\alpha^{a_n}=\alpha^{b_1}+\cdots+\alpha^{b_n}~.
\end{eqnarray}
We thus obtain the generalization of the well-known classical solution to
Liouville equation which corresponds to $A_1$ case. For $A_2$-Toda theory the
solution is explicitly given by
\begin{eqnarray}
\label{a2csol}
{\rm e}^{\lambda^a\cdot\varphi(x)}=\frac{{\rm e}^{\lambda^a\cdot\psi(x)}}{%
1+\displaystyle{\frac{\mu^2}{4}A_a(x^+)B_a(x^-)
+\biggl(\frac{\mu^2}{4}\biggr)^2A_{a\bar a}(x^+)B_{a\bar a}(x^-)}} ~,
\qquad
(a=1,2)
\end{eqnarray}
where the convention $\bar 1=2$, $\bar 2=1$ is employed for the indices,
specifically for $A_2$ case.
The remarkable property of Toda theories is that (\ref{exacsol2}) defines a
canonical transformation from $\psi$ to $\varphi$. In other words the
fundamental Poisson brackets among the canonical variables $\varphi$ and
$\pi_\varphi\equiv\displaystyle{\frac{1}{\gamma^2}\partial_\tau\varphi}$
are guaranteed by the canonical pairs of the free fields $\psi$ and
$\pi_\psi\equiv\displaystyle{\frac{1}{\gamma^2}\partial_\tau\psi}$, or
equivalently by the Poisson brackets
\begin{eqnarray}
\label{poisbra}
\{\psi_k(x),\psi_l(x')\}=-\frac{\gamma^2}{4}(\epsilon(x^+-x'{}^+)
+\epsilon(x^--x'{}^-))\delta_{kl} ~. \qquad
(k,l=1,\cdots,r)
\end{eqnarray}
One can be convinced of this by noting that the conserved higher-spin
currents generating the extended conformal symmetry can be written in
the same form under the substitution $\partial_\pm\varphi\rightarrow
\partial_\pm\psi$ \cite{Babe,bg}. This can be shown on general grounds
in Toda theories. The simplest case is the stress tensor, for which one
can verify this most directly as
\begin{eqnarray}
\label{st}
T_{\pm\pm}&=&\frac{1}{\gamma^2}
(\partial_\pm\varphi\cdot\partial_\pm\varphi
-2\rho\cdot\partial_\pm^2\varphi) \nonumber\\
&=&\frac{1}{\gamma^2}
(\partial_\pm\psi\cdot\partial_\pm\psi
-2\rho\cdot\partial_\pm^2\psi) ~,
\end{eqnarray}
where $\rho$ is assumed to satisfy $\alpha^a\cdot\rho=1$ for any simple
root and is explicitly given by
\begin{eqnarray}
\label{rho}
\rho=\sum_a\frac{2\lambda^a}{(\alpha^a)^2} ~.
\end{eqnarray}
A similar situation occurs for the higher-spin conserved currents.
In terms of canonical variables these two expressions, the one in
terms of the interacting Toda fields and the other in terms of the
canonical free fields, coincide with each other except for the terms
containing the cosmological constant $\mu^2$. They form a closed algebra
in the sense of the Poisson bracket, which is insensitive to the
parameter $\mu^2$. This almost implies the free field Poisson bracket
\begin{eqnarray}
\label{dpoisbra}
\{\partial_\pm\psi_{k\pm}(x^\pm),\partial_\pm\psi_{l\pm}(x'{}^\pm)\}
=\frac{\gamma^2}{2}\partial_\pm\delta(x^\pm
-x'{}^\pm)\delta_{kl}~. \qquad (k,l=1,\cdots,r)
\end{eqnarray}
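As a consistency check, (\ref{poisbra}) indeed reproduces (\ref{dpoisbra}):
using $\epsilon'(x)=2\sum_n\delta(x-2\pi n)$, one finds
\begin{eqnarray}
\{\partial_+\psi_k(x),\partial_+\psi_l(x')\}=-\frac{\gamma^2}{4}
\partial_+\partial'_+\epsilon(x^+-x'{}^+)\delta_{kl}
=\frac{\gamma^2}{2}\partial_+\delta(x^+-x'{}^+)\delta_{kl}~,
\end{eqnarray}
and similarly in the right-moving sector.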
Later in this section we will show more rigorously the canonicity
between the interacting and the free fields for the $A_1$ and $A_2$ cases.
Eq. (\ref{dpoisbra}) does not contain the zero-modes of $\psi_\pm$.
To fix the zero-mode dependence we require that
the conformal dimension of ${\rm e}^{\lambda^a\cdot\varphi}$ coincides with
that of ${\rm e}^{\lambda^a\cdot\psi}$. By treating the left- and the
right-moving modes symmetrically we arrive at (\ref{poisbra}) with the
normal mode expansions (\ref{nme}). We refer to the left-right symmetric
description as the vector scheme, as mentioned in the introduction.
One may consider the left- and the right-moving modes as independent.
Hence the zero-modes $P_\pm$ of $\partial_\pm\psi_\pm$ are independent
dynamical variables. The zero-modes of $\psi_\pm$ are determined from the
requirement that ${\rm e}^{\lambda^a\cdot\psi_+}$
and ${\rm e}^{\lambda^a\cdot\psi_-}$, respectively, are of conformal
weight $(\Delta,0)$ and $(0,\Delta)$, where $2\Delta$ is the conformal
scaling dimension of ${\rm e}^{\lambda^a\cdot\varphi}$. Then the
zero-modes $Q_\pm$ of $\psi_\pm$ which are canonical conjugate to
$P_\pm$ are also independent.
We call such a treatment the chiral scheme. The normal mode expansions in
the chiral scheme are obtained from (\ref{nme}) by the replacement
$Q,~P\rightarrow2Q_\pm,~P_\pm$. Since the zero-mode variables
are doubled in the chiral scheme, the phase space is somewhat larger than
that of the vector scheme. The restriction of the phase
space of the chiral scheme to that of the vector scheme can be consistently
carried out by imposing the conditions $Q=Q_++Q_-$ and $P=P_+=P_-$. This
can be understood by noting that only the combination $Q_++Q_-$ appears
in (\ref{exacsol2}) and the periodic boundary condition on $\varphi$
implies $P_+-P_-=0$.
Though the chiral scheme involves extra variables, it
has the advantage that the left- and the right-moving variables
appearing in (\ref{exacsol2}), such as
${\rm e}^{\lambda^a\cdot\psi_\pm(x^\pm)}$, $A_{a_1\cdots a_n}(x^+)$ and
$B_{a_1\cdots a_n}(x^-)$, form closed algebras under Poisson brackets.
This greatly facilitates the canonical analysis. In the vector scheme,
however, the two chiral sectors are not completely independent since
they have the zero-modes in common, and the chiral components do not
form a closed algebra. At the quantum level these chiral fields
satisfy a characteristic exchange algebra \cite{gn84}, and a quantum group
structure arises naturally in the chiral scheme \cite{Babe}, whereas these
are not manifest in the vector scheme.
One might think that the useful chiral structure is lost at the
expense of dealing only with the physical variables. But this is not
the case. We can fit the chiral structure into the vector scheme by
a trick of rearranging the zero-modes. Let us redefine
$\psi_\pm(x^\pm)$ given by (\ref{nme}) as
\begin{eqnarray}
\label{cnme}
\psi_\pm(x^\pm)=\gamma Q+\frac{\gamma}{4\pi}Px^\pm
+\frac{i\gamma}{\sqrt{4\pi}}\sum_{n\neq 0}\frac{1}{n}a_n^{(\pm)}
{\rm e}^{-inx^\pm} ~.
\end{eqnarray}
They satisfy the Poisson brackets
\begin{eqnarray}
\label{cpoisbr}
\{\psi_{k\pm}(x^\pm),\psi_{l\pm}(x'{}^\pm)\}=-\frac{\gamma^2}{4}
\epsilon(x^\pm-x'{}^\pm)\delta_{kl}~.
\end{eqnarray}
We also redefine $V_a^\pm$, $A_{\{a\}_n}$ and $B_{\{a\}_n}$ by using
(\ref{cnme}) instead of (\ref{nme}) for $\psi_\pm$.
Since only the products of the left- and the right-moving variables with the
equal exponential dependence on $Q$ appear in the rhs of (\ref{exacsol2}),
we may define a product denoted by $\star$.
It is the weighted multiplication of an
arbitrary left-moving variable $L$ and an arbitrary right-moving one $R$,
\begin{eqnarray}
\label{starp}
L\star R\equiv L{\rm e}^{-\gamma\omega\cdot Q}R~,
\end{eqnarray}
where $\omega$ is chosen so that the Poisson bracket between
$L{\rm e}^{-\gamma\omega\cdot Q}$
and ${\rm e}^{-\gamma\omega\cdot Q}R$ vanishes. This satisfies the
product rule
\begin{eqnarray}
\label{prodrule}
L\star R L'\star R'=LL'\star RR'~.
\end{eqnarray}
In terms of the $\star$-product an arbitrary free field exponential can be
written as
\begin{eqnarray}
\label{fexp}
{\rm e}^{\beta\cdot\psi(x)}={\rm e}^{\beta\cdot\psi_+(x^+)}\star
{\rm e}^{\beta\cdot\psi_-(x^-)}~.
\end{eqnarray}
A remarkable property of the $\star$-product is that for any pair
$L\star R$ and $L'\star R'$, their Poisson bracket satisfies
\begin{eqnarray}
\label{pbialg}
\{L\star R,L'\star R'\}=\{L,L'\}\star RR'+LL'\star\{R,R'\} ~.
\end{eqnarray}
This implies that the left- and the right-moving variables can be
regarded as independent under the $\star$-product, even if they develop
nonvanishing Poisson brackets. By considering the
Poisson bracket between (\ref{fexp}) and $L\star R$, and then computing
the derivative with respect to $\beta$ at $\beta=0$, we obtain
\begin{eqnarray}
\label{psipb}
\{\psi(x),L\star R\}=\{\psi_+(x^+),L\}\star R+L\star\{\psi_-(x^-),R\}~.
\end{eqnarray}
Hence $\psi(x)$ can be regarded as $\psi_+(x^+)\star1+1\star\psi_-(x^-)$
in the Poisson bracket. The independence of the left- and the right-moving
variables in the chiral scheme corresponds to $\{\psi_+(x^+)\star1,1\star
\psi_-(x^-)\}=0$. Similarly, the stress tensors $T_{++}$ and
$T_{--}$, for instance, can be regarded as $T_{++}\star1$ and
$1\star T_{--}$, respectively. As we will see in the next section, the
chiral structure in the vector scheme can be extended to quantum theory.
We now turn to the analysis of the intimate connection between the
quadratic Poisson algebra satisfied by the chiral fields and the canonical
structure of the Toda theory.\footnote{This has been investigated in
refs. \cite{Babe,btb} within the chiral scheme for general Toda theories.}
Using the redefinition (\ref{cnme}) for
$\psi_\pm$, we put (\ref{exacsol2}) into the following form
\begin{eqnarray}
\label{cexacsol2}
{\rm e}^{-\lambda^a\cdot\varphi(x)}
=\sum_{n=0}^{\infty}\Biggl(\frac{\mu^2}{4}\Biggr)^n
\sum_{\{a\}_n,\{b\}_n}C^a_{\{a\}_n;\{b\}_n}
\psi^+_{\{a\}_n}(x^+)\star \psi^-_{\{b\}_n}(x^-)~,
\end{eqnarray}
where the chiral fields $\psi^\pm_{\{a\}_n}$ are defined by
\begin{eqnarray}
\label{psipm}
\psi^+_{\{a\}_n}(x^+)={\rm e}^{-\lambda^a\cdot\psi_+(x^+)}
A_{\{a\}_n}(x^+) ~,\qquad
\psi^-_{\{a\}_n}(x^-)={\rm e}^{-\lambda^a\cdot\psi_-(x^-)}
B_{\{a\}_n}(x^-) ~.
\end{eqnarray}
To simplify the expressions further we use condensed indices $A$,
$\cdots$ for $\{a\}_n$, $\cdots$, and write (\ref{cexacsol2}) as
\begin{eqnarray}
\label{simp}
{\rm e}^{-\lambda^a\cdot\varphi}=\sum_{A,B}C^a_{AB}\psi^+_A\star\psi^-_B~,
\end{eqnarray}
where we have included the cosmological constants in the numerical
coefficients $C^a_{AB}$. The basic fact known for the chiral fields is
that they satisfy quadratic Poisson algebras \cite{Babe,bdf,btb}
\begin{eqnarray}
\label{qpoisalg}
\{\psi^\pm_A(x),\psi^\pm_B(x')\}=-\frac{\gamma^2}{4}
\sum_{C,D}[\theta(x-x')r_{AB}^{CD}
-\theta(x'-x)\bar r_{AB}^{CD}]\psi^\pm_C(x)\psi^\pm_D(x')~,
\end{eqnarray}
where we have restricted ourselves to $0\le x,~x'<2\pi$ and $\theta(x)$
is the unit step function. The $r$-matrix may depend on
the zero-mode momenta $P$ and must satisfy
\begin{eqnarray}
\label{rbar}
\bar r_{AB}^{CD}=r_{BA}^{DC} ~,
\end{eqnarray}
due to the anti-symmetry property of the Poisson bracket.
They are also constrained by the classical Yang-Baxter
equations. It is now straightforward to write down the condition for
the canonicity of the transformation $\psi\rightarrow\varphi$ in terms
of the $r$-matrix. We first consider the classical locality
\begin{eqnarray}
\label{cloc}
0\hskip -.2cm&=&\hskip -.2cm\{{\rm e}^{-\lambda^a\cdot\varphi(0,\sigma)},
{\rm e}^{-\lambda^b\cdot\varphi(0,\sigma')}\} \nonumber\\
\hskip -.2cm&=&\hskip -.2cm -\frac{\gamma^2}{4}
\sum_{A,B}\sum_{A',B'}\sum_{C,C'}\Bigl[\theta(\sigma-\sigma')
\bigl(C^a_{CB}C^b_{C'B'}r_{CC'}^{AA'}-C^a_{AC}C^b_{A'C'}\bar r_{CC'}^{BB'}
\bigr) \nonumber\\
\hskip -.2cm&&\hskip -.2cm+\theta(\sigma'-\sigma)\bigl(
C^a_{AC}C^b_{A'C'}r_{CC'}^{BB'}-C^a_{CB}C^b_{C'B'}
\bar r_{CC'}^{AA'}\bigr)\Bigr]\psi^+_A(\sigma)\psi^+_{A'}(\sigma')\star
\psi^-_B(-\sigma)\psi^-_{B'}(-\sigma') ~.
\end{eqnarray}
Requiring the coefficient of $\theta(\sigma-\sigma')$ to vanish and using
(\ref{rbar}) together with the symmetry of the $C$'s, this leads to
\begin{eqnarray}
\label{cloc2}
\sum_{C,C'}\bigl(C^a_{AC}C^b_{A'C'}r_{CC'}^{BB'}-C^a_{CB}C^b_{C'B'}
r_{C'C}^{A'A}\bigr)=0~.
\end{eqnarray}
We next consider the Poisson bracket
$\{{\rm e}^{-\lambda^a\cdot\varphi(0,\sigma)},
\partial_+{\rm e}^{-\lambda^b\cdot\varphi(0,\sigma')}\}$. Using
(\ref{cexacsol2}) and (\ref{qpoisalg}), we arrive at
\begin{eqnarray}
\label{cfpblhs}
\{{\rm e}^{-\lambda^a\cdot\varphi(0,\sigma)},
\partial_+{\rm e}^{-\lambda^b\cdot\varphi(0,\sigma')}\}
\hskip -.2cm
&=&\hskip -.2cm
\frac{\gamma^2}{4}\delta(\sigma-\sigma')\sum_{A,B}\sum_{A',B'}\sum_{C,C'}
C^a_{CB}C^b_{C'B'}(r_{CC'}^{AA'}+r_{C'C}^{A'A}) \nonumber\\
&&\hskip 2cm\times\psi^+_A(\sigma)\psi^+_{A'}(\sigma)\star
\psi^-_B(-\sigma)\psi^-_{B'}(-\sigma) ~.
\end{eqnarray}
In the derivation, use has been made of (\ref{cloc2}). Note that the Poisson
bracket is proportional to $\delta(\sigma-\sigma')$. Such $\delta$-function
can only arise from the Poisson bracket
\begin{eqnarray}
\label{cfpb}
\{{\rm e}^{-\lambda^a\cdot\psi(0,\sigma)},
\partial_+{\rm e}^{-\lambda^b\cdot\psi(0,\sigma')}\}=
\frac{\gamma^2}{2}\lambda^a\cdot\lambda^b
\delta(\sigma-\sigma')
{\rm e}^{-(\lambda^a+\lambda^b)\cdot\psi(0,\sigma)}~.
\end{eqnarray}
This leads to the second fundamental Poisson bracket
\begin{eqnarray}
\label{2ndfpb}
\{\varphi_k(0,\sigma),\pi^l_\varphi(0,\sigma')\}
=\{\psi_k(0,\sigma),\pi^l_\psi(0,\sigma')\}
=\delta_k^l\delta(\sigma-\sigma')~.
\end{eqnarray}
Note that the classical locality automatically guarantees (\ref{2ndfpb}).
We will see that a similar situation also
occurs in quantum theory. We thus see that the transformation from the
free field $\psi$ to the Toda field $\varphi$ is a canonical mapping
if the $r$-matrix satisfies (\ref{cloc2}).
Though (\ref{cloc2}) suffices for the canonicity of the free field, it is
interesting to write down the conditions leading to (\ref{2ndfpb})
in terms of the $r$-matrix. In doing this we note that the chiral fields
(\ref{psipm}) may happen to satisfy quadratic identities
\begin{eqnarray}
\label{quadid}
\sum_{A,B}f_r^{AB}\psi_A^\pm(x)\psi_{B}^\pm(x)&=&0~,
\end{eqnarray}
where $f_r^{AB}=f_r^{BA}$ may depend on $P$ and $r$ labels the identities.
These can be dealt with via the multiplier method. From (\ref{cfpblhs}),
(\ref{quadid}) and the expansion
\begin{eqnarray}
\label{cfpbrhs}
{\rm e}^{-(\lambda^a+\lambda^b)\cdot\varphi(0,\sigma)}
=\sum_{A,B}\sum_{A',B'}C^a_{AB}C^b_{A'B'}
\psi^+_A(\sigma)\psi^+_{A'}(\sigma)\star
\psi^-_B(-\sigma)\psi^-_{B'}(-\sigma) ~,
\end{eqnarray}
we obtain the desired condition for the $r$-matrix
\begin{eqnarray}
\label{cfpb2}
&&\sum_{C,C'}(C^a_{AC}C^b_{A'C'}+C^a_{A'C}C^b_{AC'})
(r_{CC'}^{BB'}+r_{CC'}^{B'B}+r_{C'C}^{B'B}+r_{C'C}^{BB'})
+\sum_r(\mu_{AA'}^{ab;r}f_r^{BB'}+\nu_{BB'}^{ab;r}f_r^{AA'})
\nonumber\\
&&\hskip 2.cm=\rho^{ab}(C^a_{AB}C^b_{A'B'}+C^a_{A'B}C^b_{AB'}
+C^a_{AB'}C^b_{A'B}+C^a_{A'B'}C^b_{AB}) ~,
\end{eqnarray}
where $\mu_{AA'}^{ab;r}$ and $\nu_{BB'}^{ab;r}$ are the multipliers
for the quadratic identities (\ref{quadid}) and we have defined
$\displaystyle{\rho^{ab}=2\lambda^a\cdot\lambda^b}$.
The fundamental Poisson brackets can also be established by examining
(\ref{cloc2}) and (\ref{cfpb2}).
As an illustration, we consider the Liouville case. Though the complete
algebra is contained in that of $A_2$-Toda theory, we examine (\ref{cloc2})
and (\ref{cfpb2}) for the Liouville case, separately. Choosing the only
simple root to be unity, we define the chiral fields by
\begin{eqnarray}
\label{liov}
\psi_0^+(x^+)={\rm e}^{-\frac{1}{2}\psi_+(x^+)}~, \qquad
\psi^+_1(x^+)={\rm e}^{-\frac{1}{2}\psi_+(x^+)}A(x^+) ~,
\end{eqnarray}
where $A(x)$ is assumed to satisfy $\partial_+ A={\rm e}^{\psi_+}$ as in
(\ref{aseq}), and the right-moving fields are similarly defined.
The nonvanishing components of the $r$-matrix are given by
\begin{eqnarray}
\label{liouvrmat}
r_{00}^{00}=-r_{01}^{01}=-r_{10}^{10}=r_{11}^{11}=\frac{1}{4} ~, \quad
r_{01}^{10}=\frac{1}{2}\Bigl(1+\coth\frac{\gamma}{2}P\Bigr)~,
\quad r_{10}^{01}=\frac{1}{2}\Bigl(1-\coth\frac{\gamma}{2}P\Bigr)~.
\end{eqnarray}
In the Liouville case there is no quadratic identity of the form
(\ref{quadid}) and the conditions (\ref{cloc2}) and (\ref{cfpb2}) are
reduced to
\begin{eqnarray}
\label{lcons}
r_{AA'}^{BB'}=r_{B'B}^{A'A} ~, \qquad
r_{AA'}^{BB'}+r_{AA'}^{B'B}+r_{A'A}^{B'B}+r_{A'A}^{BB'}
=\frac{1}{2}(\delta_{AB}\delta_{A'B'}+\delta_{AB'}\delta_{A'B})~.
\end{eqnarray}
These relations are obviously satisfied by the $r$-matrix (\ref{liouvrmat}).
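For instance, for $(A,A')=(0,1)$ and $(B,B')=(1,0)$ the second relation
of (\ref{lcons}) reads
\begin{eqnarray}
r_{01}^{10}+r_{01}^{01}+r_{10}^{01}+r_{10}^{10}
=\frac{1}{2}\Bigl(1+\coth\frac{\gamma}{2}P\Bigr)-\frac{1}{4}
+\frac{1}{2}\Bigl(1-\coth\frac{\gamma}{2}P\Bigr)-\frac{1}{4}
=\frac{1}{2}~,
\end{eqnarray}
in agreement with $\frac{1}{2}(\delta_{01}\delta_{10}
+\delta_{00}\delta_{11})=\frac{1}{2}$ on the right-hand side.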
We now turn to the case of $A_2$-Toda theory. We use the convention
$(\alpha^a)^2=2$ for the simple roots and define the chiral fields by
\begin{eqnarray}
\label{a2cf}
\psi^+_0(x^+)={\rm e}^{-\lambda^1\cdot\psi_+(x^+)} , ~
\psi^+_1(x^+)={\rm e}^{-\lambda^1\cdot\psi_+(x^+)}A_1(x^+) , ~
\psi^+_2(x^+)={\rm e}^{-\lambda^1\cdot\psi_+(x^+)}A_{12}(x^+) ,
\nonumber \\
\psi^+_{\bar 0}(x^+)={\rm e}^{-\lambda^2\cdot\psi_+(x^+)} , ~
\psi^+_{\bar 1}(x^+)={\rm e}^{-\lambda^2\cdot\psi_+(x^+)}A_2(x^+) , ~
\psi^+_{\bar 2}(x^+)={\rm e}^{-\lambda^2\cdot\psi_+(x^+)}A_{21}(x^+) .
\end{eqnarray}
The $r$-matrix satisfies $r_{A\bar A}^{\bar BB}=r_{\bar AA}^{B\bar B}=0$,
and the charge conservation $r_{AB}^{CD}=r_{A\bar B}^{C\bar D}=0$ unless
$A+B=C+D$. Then the nonvanishing elements can be easily read off from the
quadratic Poisson algebra given in Appendix \ref{sec:appA} as
\begin{eqnarray}
\label{a2rmat}
&& r_{00}^{00}=r_{11}^{11}=r_{22}^{22}=\frac{2}{3}~, \quad
r_{01}^{01}=r_{10}^{10}=r_{02}^{02}=r_{20}^{20}=r_{12}^{12}=r_{21}^{21}
=-\frac{1}{3} ~,\nonumber \\
&&r_{01}^{10}=1-\coth\frac{\gamma}{4}\alpha^a\cdot P ~, \quad
r_{10}^{01}=1+\coth\frac{\gamma}{4}\alpha^a\cdot P ~,\nonumber\\
&&r_{02}^{20}=1-\coth\frac{\gamma}{4}(\alpha^a+\alpha^{\bar a})\cdot P ~,
\quad
r_{20}^{02}=1+\coth\frac{\gamma}{4}(\alpha^a+\alpha^{\bar a})\cdot P ~,
\nonumber \\
&&r_{12}^{21}=1-\coth\frac{\gamma}{4}\alpha^{\bar a}\cdot P ~, \quad
r_{21}^{12}=1+\coth\frac{\gamma}{4}\alpha^{\bar a}\cdot P ~, \nonumber\\
&&r_{0\bar 0}^{0\bar 0}=r_{\bar 00}^{\bar 00}
=r_{0\bar1}^{0\bar1}=r_{\bar01}^{\bar01}
=r_{1\bar0}^{1\bar0}=r_{\bar10}^{\bar10}
=r_{1\bar2}^{1\bar2}=r_{\bar12}^{\bar12}
=r_{2\bar1}^{2\bar1}=r_{\bar21}^{\bar21}
=r_{2\bar2}^{2\bar2}=r_{\bar22}^{\bar22}=\frac{1}{3} ~,\nonumber\\
&&r_{0\bar2}^{0\bar2}=r_{\bar02}^{\bar02}
=r_{2\bar0}^{2\bar0}=r_{\bar20}^{\bar20}
=r_{1\bar1}^{1\bar1}=r_{\bar11}^{\bar11}=-\frac{2}{3} ~,\nonumber\\
&&r_{0\bar2}^{1\bar1}=r_{\bar11}^{\bar20}
=1-\coth\frac{\gamma}{4}\alpha^a\cdot P ~, \quad
r_{1\bar1}^{0\bar2}
=r_{\bar20}^{\bar11}=1+\coth\frac{\gamma}{4}\alpha^a\cdot P ~, \nonumber\\
&&r_{1\bar1}^{2\bar0}=r_{\bar02}^{\bar11}
=1-\coth\frac{\gamma}{4}\alpha^{\bar a}\cdot P ~, \quad
r_{2\bar0}^{1\bar1}
=r_{\bar11}^{\bar02}=1+\coth\frac{\gamma}{4}\alpha^{\bar a}\cdot P ~,
\nonumber\\
&&r_{0\bar2}^{2\bar0}=r_{\bar02}^{\bar20}=-1+\coth\frac{\gamma}{4}
(\alpha^a+\alpha^{\bar a})\cdot P ~,\quad
r_{2\bar0}^{0\bar2}=r_{\bar20}^{\bar02}=-1-\coth\frac{\gamma}{4}
(\alpha^a+\alpha^{\bar a})\cdot P ~.\nonumber\\
\end{eqnarray}
In the present case there is a pair of quadratic identities
\begin{eqnarray}
\label{a2qid}
\psi_1^\pm(x)\psi_{\bar1}^\pm(x)-\psi_0^\pm(x)\psi_{\bar2}^\pm(x)
-\psi_2^\pm(x)\psi_{\bar0}^\pm(x)=0~.
\end{eqnarray}
The conditions to be examined out of (\ref{cloc2}) and (\ref{cfpb2})
are then given by
\begin{eqnarray}
\label{a2cons}
&& r_{AA'}^{BB'}=r_{B'B}^{A'A} ~, \quad
r_{AA'}^{BB'}+r_{AA'}^{B'B}+r_{A'A}^{B'B}+r_{A'A}^{BB'}
=\frac{4}{3}(\delta_{AB}\delta_{A'B'}+\delta_{AB'}\delta_{A'B})~,
\nonumber\\
&&r_{A\bar A}^{B\bar B}=r_{\bar BB}^{\bar AA} ~, \quad
r_{A\bar A}^{B\bar B}+r_{\bar AA}^{\bar BB}
+\mu_{A\bar A}f^{B\bar B}+\nu_{B\bar B}f^{A\bar A}
=\frac{2}{3}\delta_{AB}\delta_{\bar A\bar B}~,
\end{eqnarray}
where the nonvanishing coefficients arising from (\ref{a2qid}) are
$f^{1\bar1}=-f^{0\bar2}=-f^{2\bar0}=1$. The multipliers $\mu_{A\bar A}$
and $\nu_{B\bar B}$ are not uniquely determined since the simultaneous shifts
$\mu_{A\bar A}\rightarrow\mu_{A\bar A}+\kappa f^{A\bar A}$,
$\nu_{B\bar B}\rightarrow\nu_{B\bar B}-\kappa f^{B\bar B}$ for arbitrary
$\kappa$ have no effect on (\ref{a2cons}). One can directly verify that
the $r$-matrix (\ref{a2rmat})
satisfies (\ref{a2cons}) for the multipliers $\mu_{A\bar A}=\nu_{A\bar A}
=-f^{A\bar A}$. This establishes that the transformation from the
free fields to the Toda fields defined by the classical solution
(\ref{a2csol}) is indeed a canonical transformation.
\section{Quantum $A_2$-Toda Theory}
\label{sec:qa2toda}
\setcounter{equation}{0}
After a somewhat long preliminary description of the chiral structure
of classical Toda field theories from the canonical point of view, we
turn to the quantization of the $A_2$-Toda field. The reason for
restricting our analysis to this specific case is that it is
the simplest nontrivial extension of Liouville theory and can be
expected to reveal features common to higher rank theories, to
which the direct canonical approach presented in this paper might be
difficult to apply.
As we argued in the previous section, the interacting Toda fields
can be expressed in terms of the canonical free fields. Then the
quantization of the system can be carried out by imposing the canonical
commutation relations for the free fields by the prescription
$\{~,~\}\rightarrow\frac{1}{i}[~,~]$. Equivalently, we may assume that
the normal modes satisfy the standard commutation relations
\begin{eqnarray}
\label{ccrpsi}
[Q_k,P_l]=i\delta_{kl}, \qquad
[a^{(+)}_{kn},a^{(+)}_{lm}]
=[a^{(-)}_{kn},a^{(-)}_{lm}]=n\delta_{n+m,0}\delta_{kl}~.\quad
(k,l=1,2)
\end{eqnarray}
The Hilbert space of the state vectors of the Toda field sector
is the direct sum $\bigoplus_{p}{\cal H}_{p}$, where
$p$ specifies the eigenvalue of $P$ and ${\cal H}_{p}$ stands
for the Fock space generated by the oscillators $a^{(\pm)}_{kn}$ ($n<0$)
from the ground state with eigenvalue $p$. The Toda exponential operators,
however, do not have a well-defined action on the Hilbert space since they
are composed of operators formally shifting $p$ in the imaginary
direction. We do not consider such subtleties in this article. See the
remarks of sect. 4.1.6 of ref. \cite{gs94}.
The chiral vertex operators are defined by the free field normal ordering
for the oscillatory modes and the symmetric ordering for the zero-mode
operators by the rule $:{\rm e}^{\beta\cdot Q}f(P):={\rm e}^{\frac{1}{2}
\beta\cdot Q}f(P){\rm e}^{\frac{1}{2}\beta\cdot Q}$. We redefine the chiral
screening charges $A_a$ and $A_{a\bar a}$ by
\begin{eqnarray}
\label{cscop}
A_a(x)&=&\int_0^{2\pi}dy:{\cal E}_{\alpha^a}(x-y)V^+_a(y): ~,
\nonumber\\
A_{a\bar a}(x)&=&\int_0^{2\pi}dydz
:{\cal E}_{\alpha^a}(x-y)V^+_a(y):
:{\cal E}_{\alpha^{\bar a}}(x-y)
{\cal E}_{\alpha^{\bar a}}(y-z)V^+_{\bar a}(z):~.
\end{eqnarray}
Two major modifications of the classical expressions (\ref{a}) are made
here. One is the omission of the overall $P$-dependent coefficients,
which simplifies the operator algebra satisfied
by the screening charges. The other is the rescaling $\psi_+\rightarrow
\eta\psi_+$, which keeps the conformal weight of the vertex operator
$V^+_a$ equal to $(1,0)$ \cite{ct,gn84,ow}. We assume that the conformal
symmetry is
generated by the normal-ordered free field stress tensor given by
\begin{eqnarray}
\label{qst}
T_{\pm\pm}&=&\frac{1}{\gamma^2}
(:\partial_\pm\psi\cdot\partial_\pm\psi:
-2\rho\cdot\partial_\pm^2\psi) ~.
\end{eqnarray}
Then the conformal weight $\Delta_\beta$ of the vertex operators
$:{\rm e}^{\eta\beta\cdot\psi}:$ is given by
\begin{eqnarray}
\label{cdim}
\Delta_\beta=\beta\cdot\Bigl(\eta\rho-\frac{\gamma^2\eta^2}{8\pi}
\beta\Bigr)~.
\end{eqnarray}
In particular, setting $\beta=\alpha^a$ and using $\alpha^a\cdot\rho=1$
together with $(\alpha^a)^2=2$, $\eta$ satisfies
\begin{eqnarray}
\label{ita}
\Delta_{\alpha^a}=\eta-\frac{\gamma^2\eta^2}{4\pi}=1 ~.
\end{eqnarray}
We also replace $P$ by $\eta P$ in
${\cal E}_\beta(x)$ to ensure the quasi-periodicity of the screening
charges under the shift $x\rightarrow x+2\pi$.
The screening charges $B_a$ and $B_{a\bar a}$ in the right-moving sector
are similarly defined.
For later convenience, we introduce here some
notation
\begin{eqnarray}
\label{somedef}
\varpi\equiv-\frac{iP}{\gamma\eta}~, \qquad
\varpi^a\equiv\alpha^a\cdot\varpi ~, \qquad
g\equiv\frac{\gamma^2\eta^2}{8\pi}=\frac{\eta-1}{2}~, \qquad
q\equiv{\rm e}^{2\pi ig}~.
\end{eqnarray}
We will see that $q$ is the quantum deformation parameter and, hence,
$g$ plays the role of the Planck constant; note that the second equality
in the definition of $g$ is just (\ref{ita}). In terms of these
variables (\ref{cscop}) can be written as
\begin{eqnarray}
\label{cscop2}
A_a(x)&=&\int_0^{2\pi}dyq^{(\varpi^a+1)\epsilon(x-y)}
V_a^+(y) \nonumber\\
A_{a\bar a}(x)&=&\int_0^{2\pi}dydz
q^{(\varpi^a+\varpi^{\bar a}+1)\epsilon(x-y)
+\varpi^{\bar a}\epsilon(y-z)} V_a^+(y)V^+_{\bar a}(z)~.
\end{eqnarray}
In quantum theory the quasi-periodicity (\ref{quasip}) is modified to
\begin{eqnarray}
\label{qquasip}
A_a(x+2\pi)=q^{2(\varpi^a+1)}A_a(x) ~, \quad
A_{a\bar a}(x+2\pi)=q^{2(\varpi^a+\varpi^{\bar a}+1)}A_{a\bar a}(x) ~.
\end{eqnarray}
The screening charges and the vertex operators $V^{a\pm}_\kappa
\equiv:{\rm e}^{\eta\kappa\lambda^a\cdot\psi_\pm}:$ are the building
blocks of the Toda exponential operators. They are hermitian for the standard
assignment of hermiticity $\psi_\pm^\dagger=\psi_\pm$ as can be shown by
noting the relation
\begin{eqnarray}
\label{VV}
:{\rm e}^{\eta\beta\cdot\psi_\pm(x)}::{\rm e}^{\eta\beta'\cdot\psi_\pm(y)}:
=q^{-\beta\cdot\beta'\epsilon(x-y)}
:{\rm e}^{\eta\beta'\cdot\psi_\pm(y)}:
:{\rm e}^{\eta\beta\cdot\psi_\pm(x)}: ~.
\end{eqnarray}
Furthermore they satisfy a characteristic exchange algebra. In Appendix
\ref{sec:appB} we summarize the exchange properties of the screening
charges needed for the computation of the ${\cal R}$-matrix discussed
below. In the construction of the Toda exponential operator the crucial
property of the chiral vertex operators is the mutual commutativity of
$V^{a+}_\kappa(x)$, $A_a(x)$ and $A_{a\bar a}(x)$.
The $\star$-product (\ref{starp}) introduced in the previous section
can be defined also in quantum theory. Let $L$ and $R$, respectively,
be the left- and the right-chiral operators satisfying
$[L{\rm e}^{-\gamma\eta\omega\cdot Q},{\rm e}^{-\gamma\eta\omega\cdot Q}R]
=0$ for some numerical constant $\omega$, then we define their
$\star$-product by
\begin{eqnarray}
\label{qstarp}
L\star R\equiv L{\rm e}^{-\gamma\eta\omega\cdot Q}R ~.
\end{eqnarray}
The $\star$-product satisfies the multiplication rule
\begin{eqnarray}
\label{qstp}
L_1\star R_1 L_2\star R_2=L_1L_2\star R_1R_2~.
\end{eqnarray}
This is the quantum counterpart of the product rule (\ref{prodrule}).
As in (\ref{pbialg}), it implies that the left- and the right-chiral
operators can be considered as commuting under the $\star$-product.
In particular, one easily sees that
$L\star f(\varpi)R=f(\varpi)L\star R$ for arbitrary $f$ as a special
case of (\ref{qstp}).
In terms of the $\star$-product the screening charges ${\cal Y}_a$
and ${\cal Y}_{a\bar a}$ introduced in ref. \cite{fit98} are simply given by
\begin{eqnarray}
\label{yayaab}
{\cal Y}_a(x)=A_a(x^+)\star B_a(x^-)~, \qquad
{\cal Y}_{a\bar a}(x)=A_{a\bar a}(x^+)\star B_{a\bar a}(x^-)~.
\end{eqnarray}
Then the Toda exponential operator can be expressed as a power series
in the screening charges
\begin{eqnarray}
\label{teo}
{\rm e}^{\eta\kappa\lambda^a\cdot\varphi(x)}
&=&V_\kappa^a(x)\sum_{n,m=0}^{\infty}\Biggl(\frac{\mu^2}{4}\Biggr)^{n+2m}
C^a_{nm}(\kappa;\varpi)({\cal Y}_a(x))^n({\cal Y}_{a\bar a}(x))^m
\nonumber\\
&=&\sum_{n,m=0}^{\infty}\Biggl(\frac{\mu^2}{4}\Biggr)^{n+2m}
C^a_{nm}(\kappa;\varpi+\kappa\lambda^a)\psi^{a+}_{nm}(\kappa;x^+)
\star \psi^{a-}_{nm}(\kappa;x^-)
\end{eqnarray}
where $V^a_\kappa(x)=V^{a+}_\kappa(x^+)\star V^{a-}_\kappa(x^-)$
is the free field vertex operator in the vector scheme.
We have rescaled $\varphi$ by $\eta$, as was done for the canonical
free field, so that the operatorial mapping $\psi\rightarrow\varphi$
preserves the canonical commutation relations.\footnote{In the previous
work \cite{fit98} this rescaling was not considered for the Toda
fields.} We have also introduced the chiral fields by
\begin{eqnarray}
\label{qchif}
\psi^{a+}_{nm}(\kappa;x)=V^{a+}_\kappa(x)(A_a(x))^n
(A_{a\bar a}(x))^m~, \quad
\psi^{a-}_{nm}(\kappa;x)=V^{a-}_\kappa(x)(B_a(x))^n
(B_{a\bar a}(x))^m ~.
\end{eqnarray}
Since the vertex operators and the screening charges are commuting,
there arises no ordering problem in the expansion (\ref{teo}).
The conformal covariance of the Toda exponential is attributed to
that of the free field vertex operator. The coefficients $C^a_{nm}
(\kappa;\varpi)$ may depend on the zero-mode $\varpi$ without
affecting the conformal symmetry \cite{ow}. They are assumed to satisfy
the conditions
\begin{eqnarray}
\label{Cscon}
&&C_{00}^a(\kappa;\varpi)=1 ~, \quad C^a_{nm}(0;\varpi)=\delta_{n0}
\delta_{m0} ~, \nonumber\\
&& (C^a_{nm}(\kappa;\varpi))^\dagger=C^a_{nm}(\kappa;\varpi+\kappa\lambda^a
-(n+m)\alpha^a-m\alpha^{\bar a})~.
\end{eqnarray}
The last relation corresponds to the hermiticity of the Toda exponential
operator for real $\kappa$. In ref. \cite{fit98} the coefficients $C^a_{nm}$
are explicitly given as a conjecture based upon the analysis of
the locality condition
\begin{eqnarray}
\label{qloc}
[{\rm e}^{\eta\kappa\lambda^a\cdot\varphi(0,\sigma)},
{\rm e}^{\eta\nu\lambda^b\cdot\varphi(0,\sigma')}]=0
\end{eqnarray}
up to the fourth
order in $\mu^2$. We will fill the gap and give a complete
proof for the conjectured forms of the coefficients in the next
section.
At this point we introduce the additive $\varpi$-charges. If an operator
${\cal O}$ satisfies $[\varpi^a,{\cal O}]=-\nu^a{\cal O}$, we
assign ${\cal O}$ the $\varpi^a$-charge $\nu^a$. Then the chiral
fields (\ref{qchif}) can be assigned definite charges
since the $\varpi^a$-charges of the vertex operators $V_\kappa^{a\pm}$
and $V_a^\pm$ are, respectively, $\kappa$ and $2$, whereas the
$\varpi^{\bar a}$-charges are $0$ and $-1$.
One can easily be convinced from the operator algebra given in
Appendix \ref{sec:appB} that the chiral fields form a closed
exchange algebra in each chiral sector as
\begin{eqnarray}
\label{excalg}
\psi^{a\pm}_{nm}(\kappa;x)\psi^{b\pm}_{n'm'}(\nu;x')
&=&\cases{\displaystyle{\sum_{r,r',s,s'}
{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}({\scriptstyle {a\atop
\kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
\psi^{b\pm}_{r's'}(\nu;x')
\psi^{a\pm}_{rs}(\kappa;x)} &$(x>x')$\cr
\displaystyle{\sum_{r,r',s,s'}
\overline{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}(
{\scriptstyle {a\atop\kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
\psi^{b\pm}_{r's'}(\nu;x')
\psi^{a\pm}_{rs}(\kappa;x)} &$(x<x')$\cr}
\end{eqnarray}
where the ${\cal R}$-matrix depends on the zero-mode momentum
$\varpi$ and the sum should be taken over nonnegative integers $r$, $r'$,
$s$, $s'$ satisfying $r+r'=n+n'$, $s+s'=m+m'$ if $b=a$ and
$r+s+s'=n+m+m'$, $r'+s+s'=n'+m+m'$ if $b=\bar a$ due to the
charge conservation. We can implement the charge conservation
by assuming
\begin{eqnarray}
\label{cc}
&&{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}({\scriptstyle {a\atop
\kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
=\overline{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}(
{\scriptstyle {a\atop\kappa}}|{\scriptstyle {b\atop \nu}};
\varpi\bigr)=0 \nonumber\\
&& {\rm unless} \quad
\cases{r+r'=n+n',~s+s'=m+m' &for $b=a$\cr
r+s+s'=n+m+m', ~r'+s+s'=n'+m+m' &for $b=\bar a$}
\end{eqnarray}
Applying the exchange algebra twice to
$\psi^{a+}_{kl}(\kappa;x)\psi^{b+}_{nm}(\nu;x')$ ($x<x'$) must simply
reproduce the original product. This implies the consistency
conditions
\begin{eqnarray}
\label{cRcond}
\sum_{k',l',n',m'}
\overline{\cal R}_{kl}^{k'l'}{}_{;nm}^{;n'm'}({\scriptstyle{a\atop
\kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
{\cal R}_{n'm'}^{n''m''}{}_{;k'l'}^{;k''l''}
({\scriptstyle{b\atop
\nu}}|{\scriptstyle {a\atop \kappa}};\varpi\bigr)=\delta_k^{k''}
\delta_l^{l''}\delta_n^{n''}\delta_m^{m''} ~.
\end{eqnarray}
Hence, $\overline{\cal R}$ can be considered as the inverse of ${\cal R}$.
Furthermore, the associativity of the operator
products leads to the Yang-Baxter relations \cite{gn84,bg89}. For example,
consider the two different ways of applying the exchange
algebra for
\begin{eqnarray}
\label{opasoc}
(\psi^{a+}_{kl}(\kappa;x)
\psi^{b+}_{nm}(\nu;x'))\psi^{c+}_{rs}(\rho;x'')
=\psi^{a+}_{kl}(\kappa;x)
(\psi^{b+}_{nm}(\nu;x')\psi^{c+}_{rs}(\rho;x''))~.
\quad (x>x'>x'')
\end{eqnarray}
Such operations are consistent only if the ${\cal R}$-matrix
satisfies the relations
\begin{eqnarray}
\label{YBeq}
&&\sum_{k',l',n',m',r',s'}{\cal R}_{kl}^{k'l'}{}_{;nm}^{;n'm'}
({\scriptstyle{a\atop
\kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
{\cal R}_{k'l'}^{k''l''}{}_{;rs}^{;r's'}({\scriptstyle{a\atop
\kappa}}|{\scriptstyle {c\atop \rho}};\varpi+\beta_{n'm'}^{b;\nu}\bigr)
{\cal R}_{n'm'}^{n''m''}{}_{;r's'}^{;r''s''}({\scriptstyle{b\atop
\nu}}|{\scriptstyle {c\atop \rho}};\varpi\bigr) \nonumber\\
&&\hskip .5cm =\sum_{k',l',n',m',r',s'}{\cal R}_{kl}^{k'l'}
{}_{;r's'}^{;r''s''}({\scriptstyle{a\atop
\kappa}}|{\scriptstyle {c\atop \rho}};\varpi\bigr)
{\cal R}_{k'l'}^{k''l''}{}_{;n'm'}^{;n''m''}
({\scriptstyle{a\atop
\kappa}}|{\scriptstyle {b\atop \nu}};
\varpi+\beta_{r''s''}^{c;\rho}\bigr)
{\cal R}_{nm}^{n'm'}{}_{;rs}^{;r's'}
({\scriptstyle{b\atop
\nu}}|{\scriptstyle {c\atop \rho}};
\varpi+\beta_{kl}^{a;\kappa}\bigr) ~,\nonumber\\
\end{eqnarray}
where $\beta^{a;\kappa}_{nm}$ stands for the shift of $\varpi$ arising
in moving it from the right of $\psi^a_{nm}(\kappa;x)$ to the left and is
given by
\begin{eqnarray}
\label{beta}
\beta^{a;\kappa}_{nm}=\kappa\lambda^a+(n+m)\alpha^a+m\alpha^{\bar a}~.
\end{eqnarray}
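As a bookkeeping check, $\psi^{a}_{nm}(\kappa;x)$ is built from one
$V^{a}_\kappa$, $n+m$ factors of $V_a$ and $m$ factors of $V_{\bar a}$;
since these shift $\varpi$ by $\kappa\lambda^a$, $\alpha^a$ and
$\alpha^{\bar a}$, respectively, the total shift is indeed
$\kappa\lambda^a+(n+m)\alpha^a+m\alpha^{\bar a}$.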
There are similar relations corresponding to different choices of
the operator products.
We now turn to the locality (\ref{qloc}). Using (\ref{teo}) and
(\ref{excalg}), we can rewrite it as a condition for the ${\cal R}$-matrix
as
\begin{eqnarray}
\label{locR}
&&\sum_{n,n',m,m'}C^a_{nm}(\kappa;\varpi
+\kappa\lambda^a)C^b_{n'm'}(\nu;\varpi
+\nu\lambda^b+\beta^{a;\kappa}_{nm}){\cal R}_{nm}^{kl}
{}_{;n'm'}^{;k'l'}
({\scriptstyle {a\atop \kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
\overline{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}(
{\scriptstyle {a\atop\kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
\nonumber\\
&&\hskip 3cm=\delta_{kr}\delta_{ls}\delta_{k'r'}\delta_{l's'}
C^a_{kl}(\kappa;\varpi+\kappa\lambda^a+\beta^{b;\nu}_{k'l'})
C^b_{k'l'}(\nu;\varpi+\nu\lambda^b) ~.
\end{eqnarray}
By noting (\ref{cRcond}), one can alternatively express this
in a simpler form as
\begin{eqnarray}
\label{locR2}
&&C^a_{nm}(\kappa;\varpi
+\kappa\lambda^a)C^b_{n'm'}(\nu;\varpi
+\nu\lambda^b+\beta^{a;\kappa}_{nm})
{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}
({\scriptstyle {a\atop\kappa}}|{\scriptstyle {b\atop \nu}};\varpi\bigr)
\nonumber\\
&&\hskip 2cm=C^a_{rs}(\kappa;\varpi+\kappa\lambda^a
+\beta^{b;\nu}_{r's'})
C^b_{r's'}(\nu;\varpi+\nu\lambda^b)
{\cal R}_{r's'}^{n'm'}{}_{;rs}^{;nm}
({\scriptstyle {b\atop\nu}}|{\scriptstyle {a\atop\kappa}};\varpi\bigr) ~.
\end{eqnarray}
We see that the locality of the quantum Toda fields can be
attributed to the property of the ${\cal R}$-matrix as in the classical
theory and (\ref{locR2}) corresponds to the quantum theoretical extension
of the condition (\ref{cloc2}) for the $A_2$-Toda theory.
The quantum Toda field $\varphi$ can be obtained from the Toda exponential
operator (\ref{teo}) as the derivative with respect to $\kappa$ at the
origin $\kappa=0$. This leads to
\begin{eqnarray}
\label{qtf}
\eta\lambda^a\cdot\varphi(x)=\eta\lambda^a\cdot\psi(x)
+\Upsilon^a(x) ~,
\end{eqnarray}
where $\Upsilon^a(x)$ is a power series in the screening charges and will be
given in sect. \ref{sec:feq} in establishing the field equations. We
do not need its explicit form here. Using (\ref{locR}), one can show
for $\sigma\ne\sigma'$
\begin{eqnarray}
\label{etc2}
[{\rm e}^{\eta\kappa\lambda^a\cdot\varphi(0,\sigma)},
\partial_\tau{\rm e}^{\eta\nu\lambda^a\cdot\varphi(0,\sigma')}]
=[\partial_\tau{\rm e}^{\eta\kappa\lambda^a\cdot\varphi(0,\sigma)},
\partial_\tau{\rm e}^{\eta\nu\lambda^a\cdot\varphi(0,\sigma')}]=0 ~.
\end{eqnarray}
This together with (\ref{qloc}) implies the
following relations
\begin{eqnarray}
\label{etc3}
&&[\eta\lambda^a\cdot\psi(0,\sigma),\Upsilon^b(0,\sigma')]
+[\Upsilon^a(0,\sigma),\eta\lambda^b\cdot\psi(0,\sigma')]
+[\Upsilon^a(0,\sigma),\Upsilon^b(0,\sigma')]=0~, \nonumber\\
&&[\eta\lambda^a\cdot\psi(0,\sigma),\dot\Upsilon^b(0,\sigma')]
+[\Upsilon^a(0,\sigma),\eta\lambda^b\cdot\dot\psi(0,\sigma')]
+[\Upsilon^a(0,\sigma),\dot\Upsilon^b(0,\sigma')]=0~,\\
&&[\eta\lambda^a\cdot\dot\psi(0,\sigma),\dot\Upsilon^b(0,\sigma')]
+[\dot\Upsilon^a(0,\sigma),\eta\lambda^b\cdot\dot\psi(0,\sigma')]
+[\dot\Upsilon^a(0,\sigma),\dot\Upsilon^b(0,\sigma')]=0~.\nonumber
\end{eqnarray}
The left-hand sides of these expressions may exhibit at most finite
discontinuities
at $\sigma=\sigma'$. Hence the contributions from
$\Upsilon^a$ are canceled in the equal-time commutation
relations between $\varphi$ and $\displaystyle{\pi_\varphi
\equiv\frac{1}{\gamma^2}\dot\varphi}$. We thus arrive at the full set
of canonical commutation relations
\begin{eqnarray}
\label{etcc}
&&[\varphi_k(0,\sigma),\varphi_l(0,\sigma')]
=[\pi^k_\varphi(0,\sigma),\pi^l_\varphi(0,\sigma')]=0~,
\nonumber\\
&&[\varphi_k(0,\sigma),\pi^l_\varphi(0,\sigma')]
=[\psi_k(0,\sigma),\pi^l_\psi(0,\sigma')]
=i\delta_k^l\delta(\sigma-\sigma')~.
\end{eqnarray}
As in the classical theory, the locality of the Toda exponential
operators ensures the set of canonical commutation relations of the
Toda fields. This establishes that the operatorial transformation
(\ref{teo}) from $\psi$ to $\varphi$ induces a canonical mapping
between the canonical pairs of the Toda system and those of the free
theory.
In ref. \cite{fit96} the locality and the canonical commutation
relations in Liouville theory were discussed separately, and their
intimate relationship was not fully clarified. This was mainly due to
the use of the vector scheme. As has been thoroughly investigated
for Liouville theory in refs. \cite{gn,gs93}, the virtue of the chiral
description is that it not only enables systematic analysis of the
locality but also makes clear the role of the underlying quantum group
symmetry of the operator algebra of the chiral fields.
Before closing the present section we make a remark on the
generalization to higher rank cases. In general only a specific
set of screening charges appears in the expansion of the Toda
exponential operator associated with an individual fundamental
weight. If such screening charges and the free field vertex
operator associated with the corresponding fundamental weight
are mutually commuting as in the $A_1$ and $A_2$ cases, one
can define chiral fields without the ordering problem. Since
the chiral fields are expected to form a closed exchange
algebra, the above arguments for the $A_2$ case are considered to be
applicable for general Toda theories without essential modifications.
\section{Quantum $A_2$-Toda Exponential Operators}
\label{sec:qteo}
\setcounter{equation}{0}
As announced in the previous section we determine the coefficients
$C^a_{nm}(\kappa;\varpi)$ by requiring the locality (\ref{qloc}) for
the Toda exponential operator (\ref{teo}). In ref. \cite{fit98} the locality
is investigated directly order by order in the cosmological constant
$\mu^2$. This approach suffices to guess the general forms of the
expansion coefficients. However, such a direct method does not seem to
work well, owing to the increasing complication in handling the operator
equations, if one tries to find and solve the constraints on the
coefficients at arbitrarily high orders. To analyze the locality
constraints systematically, it is desirable to extend the approach
of ref. \cite{fit96} developed for Liouville theory to the $A_2$ case.
The point that makes it possible to solve the locality conditions
in the Liouville case is that the screening charge can be decomposed
into a set of operators satisfying a simple exchange algebra which allows
a one-dimensional quantum mechanical realization. Making use of this
property, one can convert the locality condition in operator form
into an algebraic relation containing only commuting variables.
This integrates the functional recurrence relations for the expansion
coefficients of the Liouville exponential operator into a relation with
two arbitrary parameters, which can be solved for the expansion
coefficients by making use of the arbitrariness of the two parameters.
This successful approach can also be applied to the $A_2$-Toda theory
with some modifications. The sharp difference from the $A_1$ case
is that the screening charges (\ref{cscop2}) cannot be decomposed
into a set of operators which would lead to a generalization of the
integrated recurrence relations of the $A_1$ case through
a finite dimensional quantum mechanical realization. Fortunately,
we do not need to take account of the full operator algebra of the
screening charges as given in Appendix \ref{sec:appB}. We will see
that a judicious choice of a set of independent components of the
operators appearing in the screening charges that allows a quantum
mechanical realization is sufficient to determine
the expansion coefficients.
The locality condition (\ref{qloc}) can be decomposed into components
with definite $\varpi$-charge as
\begin{eqnarray}
\label{dcpqloc}
&&\sum_{{n+m+r+s=K}\atop {m+s=L}}[
C^a_{nm}(\kappa;\varpi+\kappa\lambda^a)
C^a_{rs}(\nu;\varpi+\nu\lambda^a+\beta_{nm}^{a;\kappa})
I_{nm}^{a,\kappa}{}_{;rs}^{;a,\nu}(\sigma,\sigma') \nonumber\\
&&\hskip 2cm-C^a_{nm}(\kappa;\varpi+\kappa
\lambda^a+\beta_{rs}^{a;\nu})C^a_{rs}(\nu;\varpi+\nu\lambda^a)
I_{rs}^{a,\nu}{}_{;nm}^{;a,\kappa}(\sigma',\sigma)]=0 ~,\nonumber\\
&&\sum_{{n+m+s=K}\atop {r+m+s=L}}[
C^a_{nm}(\kappa;\varpi+\kappa\lambda^a)
C^{\bar a}_{rs}(\nu;\varpi+\nu\lambda^{\bar a}+\beta_{nm}^{a;\kappa})
I_{nm}^{a,\kappa}{}_{;rs}^{;\bar a,\nu}(\sigma,\sigma')
\nonumber\\
&&\hskip 2cm-C^a_{nm}(\kappa;\varpi+
\kappa\lambda^a+\beta_{rs}^{\bar a;\nu})
C^{\bar a}_{rs}(\nu;\varpi+\nu\lambda^{\bar a})
I_{rs}^{\bar a,\nu}{}_{;nm}^{;a,\kappa}(\sigma',\sigma)]=0 ~,
\end{eqnarray}
where $I_{nm}^{a,\kappa}{}_{;rs}^{;b,\nu}(\sigma,\sigma')$ is defined by
\begin{eqnarray}
\label{Inmrs}
I_{nm}^{a,\kappa}{}_{;rs}^{;b,\nu}(\sigma,\sigma')
&=&V_\kappa^a(0,\sigma)({\cal Y}_a(0,\sigma))^n
({\cal Y}_{a\bar a}(0,\sigma))^m
V_\nu^b(0,\sigma')({\cal Y}_b(0,\sigma'))^r
({\cal Y}_{b\bar b}(0,\sigma'))^s\nonumber\\
&=&\psi_{nm}^{a+}(\kappa;\sigma)\psi_{rs}^{b+}(\nu;\sigma')\star
\psi_{nm}^{a-}(\kappa;-\sigma)\psi_{rs}^{b-}(\nu;-\sigma')~,
\end{eqnarray}
and the sum should be taken over nonnegative integers $n,m,r,s$ for
given $K,L$.
For definiteness, we consider the case $0<\sigma'<\sigma<2\pi$. To
reduce (\ref{dcpqloc}) further, let us choose an arbitrary point
$\sigma''$ between $\sigma'$ and $\sigma$, and introduce operators by
\begin{eqnarray}
\label{yzs}
&& Y_1^\pm=\int_\sigma^{2\pi}dzV_a^\pm(\pm z)~, \quad
Y_2^\pm=\int_{\sigma''}^\sigma dzV_a^\pm(\pm z)~, \nonumber\\
&& Z_1^\pm=\int_{\sigma'}^{\sigma''}dzV_{\bar a}^\pm(\pm z) ~, \quad
Z_2^\pm=\int_0^{\sigma'}dzV_{\bar a}^\pm(\pm z) ~.
\end{eqnarray}
These operators satisfy the following operator algebra
\begin{eqnarray}
\label{exchrel}
&&Y_1^\pm Y_2^\pm=q^{\mp 2}Y_2^\pm Y_1^\pm ~, \quad
Y_1^\pm Z_1^\pm=q^{\mp 1}Z_1^\pm Y_1^\pm ~, \quad
Y_1^\pm Z_2^\pm=q^{\pm 1}Z_2^\pm Y_1^\pm ~, \nonumber\\
&&\hskip 4.2cm Y_2^\pm Z_1^\pm=q^{\pm 1}Z_1^\pm Y_2^\pm ~, \quad
Y_2^\pm Z_2^\pm=q^{\mp 1}Z_2^\pm Y_2^\pm ~, \\
&&\hskip 8.4cm Z_1^\pm Z_2^\pm=q^{\mp 2}Z_2^\pm Z_1^\pm~.\nonumber
\end{eqnarray}
We also need the exchange algebra with the free field chiral vertex
operators
\begin{equation}
\label{exchvyz}
\begin{array}{ll}
Y_{1}^\pm V_\kappa^{a\pm}(\pm\sigma)=q^{\mp \kappa}
V_\kappa^{a\pm}(\pm\sigma)Y_{1}^\pm ~, \quad
&Y_{2}^\pm V_\kappa^{a\pm}(\pm\sigma)=q^{\pm \kappa}
V_\kappa^{a\pm}(\pm\sigma)Y_{2}^\pm ~, \\
Y_{1,2}^\pm V_\kappa^{a\pm}(\pm\sigma')=q^{\mp\kappa}
V_\kappa^{a\pm}(\pm\sigma')Y_{1,2}^\pm ~, \quad
& Z_{1,2}^\pm V_\kappa^{a\pm}(\pm \sigma)=
V_\kappa^{a\pm}(\pm \sigma)Z_{1,2}^\pm ~,\\
Y_{1,2}^\pm V_\nu^{\bar a\pm}(\pm \sigma)=
V_\nu^{\bar a\pm}(\pm \sigma)Y_{1,2}^\pm ~,\quad
& Z_{1,2}^\pm V_\nu^{\bar a\pm}(\pm\sigma)=q^{\pm\nu}
V_\nu^{\bar a\pm}(\pm\sigma)Z_{1,2}^\pm ~, \\
Z_1^\pm V_\nu^{\bar a\pm}(\pm\sigma')=q^{\mp\nu}
V_\nu^{\bar a\pm}(\pm\sigma')Z_1^\pm ~, \quad
& Z_2^\pm V_\nu^{\bar a\pm}(\pm\sigma')=q^{\pm\nu}
V_\nu^{\bar a\pm}(\pm\sigma')Z_2^\pm ~.
\end{array}
\end{equation}
These relations are used to move the vertex operators from the right
to the left of $Y_k^\pm$ and $Z_k^\pm$ or vice versa
in operator products.
We single out the contributions from the operators (\ref{yzs}) to the
screening charges and neglect all others as
\begin{equation}
\label{scrchYZ}
\begin{array}{ll}
A_a(\sigma)\sim f^+(\varpi^a+1)~,\quad
&A_{a\bar a}(\sigma)\sim q^{\varpi^{\bar a}}f^+(\varpi^a
+\varpi^{\bar a}+1)g^+(0) ~,\\
A_a(\sigma')\sim q^{-\varpi^a-1}f^+(0)~,\quad
& A_{a\bar a}(\sigma')\sim q^{-\varpi^a-1}f^+(0)g^+(0) ~, \\
A_{\bar a}(\sigma)\sim q^{\varpi^{\bar a}+1}g^+(0)~, \quad
& A_{\bar aa}(\sigma)\sim q^{\varpi^{\bar a}+1}g^+(0)f^+(0)~,\\
A_{\bar a}(\sigma')\sim g^+(\varpi^{\bar a}+1)~, \quad
& A_{\bar aa}(\sigma')\sim q^{-\varpi^a}g^+(\varpi^a+\varpi^{\bar a}+1)
f^+(0)~, \\
B_a(-\sigma)\sim f^-(\varpi^a+1) ~, \quad
& B_{a\bar a}(-\sigma)\sim q^{-\varpi^{\bar a}}
f^-(\varpi^a+\varpi^{\bar a}+1)g^-(0) ~, \\
B_a(-\sigma')\sim q^{\varpi^a+1}f^-(0) ~, \quad
&B_{a\bar a}(-\sigma')\sim q^{\varpi^a+1}
f^-(0)g^-(0) ~, \\
B_{\bar a}(-\sigma)\sim q^{-\varpi^{\bar a}-1}g^-(0) ~, \quad
& B_{\bar aa}(-\sigma)\sim q^{-\varpi^{\bar a}-1}g^-(0)f^-(0) ~, \\
B_{\bar a}(-\sigma')\sim g^-(\varpi^{\bar a}+1) ~, \quad
& B_{\bar aa}(-\sigma')\sim q^{\varpi^a}g^-(\varpi^a+\varpi^{\bar a}+1)
f^-(0) ~,
\end{array}
\end{equation}
where $f^\pm$ and $g^\pm$ are defined by
\begin{eqnarray}
\label{fg}
f^\pm(\xi)=q^{\mp\xi}Y^\pm_1+q^{\pm\xi}Y^\pm_2 ~, \qquad
g^\pm(\xi)=q^{\mp\xi}Z^\pm_1+q^{\pm\xi}Z^\pm_2 ~.
\end{eqnarray}
Note the factorized forms for $A_{a\bar a}$, $A_{\bar aa}$ and their
right-moving counterparts. The vector forms of the screening charges
${\cal Y}_a$ and ${\cal Y}_{a\bar a}$ are more
convenient for the locality analysis. We can easily derive their
truncated expressions from (\ref{scrchYZ}) as
\begin{eqnarray}
\label{tranys}
\begin{array}{ll}
{\cal Y}_a(0,\sigma)\sim f(\varpi^a+1)~, &
{\cal Y}_{a\bar a}(0,\sigma)\sim f(\varpi^a+\varpi^{\bar a}+1)g(0)~,\\
{\cal Y}_{\bar a}(0,\sigma)\sim g(0)~, &
{\cal Y}_{\bar aa}(0,\sigma)\sim g(0)f(0)~, \\
{\cal Y}_{a}(0,\sigma')\sim f(0) ~, &
{\cal Y}_{a\bar a}(0,\sigma')\sim f(0)g(0)~, \\
{\cal Y}_{\bar a}(0,\sigma')\sim g(\varpi^{\bar a}+1)~, &
{\cal Y}_{\bar aa}(0,\sigma')\sim g(\varpi^a+\varpi^{\bar a}+1)f(0)~,
\end{array}
\end{eqnarray}
where $f$ and $g$ are defined by
\begin{eqnarray}
\label{vfg}
f(\xi)=f^+(\xi)\star f^-(\xi)~, \qquad
g(\xi)=g^+(\xi)\star g^-(\xi)~.
\end{eqnarray}
The operator algebra (\ref{exchrel}) possesses a realization in terms
of the zero-mode operators $Q$ and $P$. Let us introduce operators by
\begin{eqnarray}
\label{taua}
\Lambda_a={\rm e}^{\gamma\eta\alpha^a\cdot Q}~,\quad
y^\pm=\frac{1}{3}(2\varpi^a+\varpi^{\bar a})+y_0^\pm~, \quad
z^\pm=\frac{1}{3}(\varpi^a+2\varpi^{\bar a})+z_0^\pm~,
\end{eqnarray}
where $y_0^\pm$ and $z_0^\pm$ are arbitrary constants. Then the operator
algebra (\ref{exchrel}) is realized by the operators\footnote{We use
the same notation for the quantum mechanical realization.} defined by
\begin{eqnarray}
\label{qmr}
Y_1^\pm= \Lambda_a~, \quad
Y_2^\pm=-q^{\mp2y^\pm}\Lambda_a~, \quad
Z_1^\pm= q^{\mp\varpi^{\bar a}}\Lambda_{\bar a}~, \quad
Z_2^\pm=-q^{\mp\varpi^{\bar a}}q^{\mp2z^\pm}\Lambda_{\bar a}~.
\end{eqnarray}
This can be seen by noting that the operators given by (\ref{taua})
satisfy
\begin{eqnarray}
\label{opalgyz}
\begin{array}{lll}
\Lambda_ay^\pm=(y^\pm+1)\Lambda_a ~,
&&\Lambda_{\bar a}y^\pm=y^\pm\Lambda_{\bar a} ~,\\
\Lambda_az^\pm=z^\pm\Lambda_a ~,
&&\Lambda_{\bar a}z^\pm=(z^\pm+1)\Lambda_{\bar a}~.
\end{array}
\end{eqnarray}
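For instance, the first relation of (\ref{exchrel}) follows immediately
from (\ref{opalgyz}):
\begin{eqnarray}
Y_1^+Y_2^+=\Lambda_a\bigl(-q^{-2y^+}\Lambda_a\bigr)
=-q^{-2(y^++1)}\Lambda_a^2=q^{-2}\,Y_2^+Y_1^+~.
\end{eqnarray}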
After the substitution of (\ref{qmr}) into (\ref{fg}) and (\ref{vfg}), we
arrive at interesting expressions for $f$ and $g$ as
\begin{eqnarray}
\label{qmrfg}
f(\xi)=[\xi-y^+][\xi-y^-]\tilde\Lambda_a~, \qquad
g(\xi)=[\xi-z^+][\xi-z^-]\tilde\Lambda_{\bar a}~,
\end{eqnarray}
where we have introduced $q$-numbers by
\begin{eqnarray}
\label{qn}
[x]=\frac{q^x-q^{-x}}{q-q^{-1}}~,
\end{eqnarray}
and $\tilde\Lambda$ is defined by
\begin{eqnarray}
\label{tlam}
\tilde\Lambda_a=-q^{-y_0^++y_0^-}(q-q^{-1})^2\Lambda_a ~, \qquad
\tilde\Lambda_{\bar a}
=-q^{-z_0^++z_0^-}(q-q^{-1})^2\Lambda_{\bar a} ~.
\end{eqnarray}
Due to the independence of the chiral vertex operators $V_a^\pm$ at different
points, the locality constraints (\ref{dcpqloc}) should hold true even if we
replace the screening charges by their truncated forms given by
(\ref{tranys}). We apply this idea of truncation to find the expansion
coefficients.
As we shall see shortly, any coefficient $C_{nm}^a$ can be expressed in
terms of the $C^a_{n0}$ by solving the second condition of (\ref{dcpqloc}),
whereas the $C^a_{n0}$ themselves are obtained from the first condition of
(\ref{dcpqloc}) for $L=0$, which is essentially equivalent to the locality
constraints of Liouville theory and has been investigated in
ref. \cite{fit96}.
To illustrate our
method we solve here the Liouville case in the present notation.
From (\ref{tranys}), the first constraint of (\ref{dcpqloc}) for $L=0$
can be cast into the following form
\begin{eqnarray}
\label{flocL=0}
&&\sum_{n+r=K}\{C^a_{n0}(\kappa;\varpi-\nu\lambda^a)
C^a_{r0}(\nu;\varpi+n\alpha^a)(f(\varpi^a-\nu+1))^n (f(0))^r \nonumber\\
&&\hskip 2cm -C^a_{n0}(\kappa;\varpi+r\alpha^a)
C^a_{r0}(\nu;\varpi-\kappa\lambda^a)(f(\kappa))^r
(f(\varpi^a+1))^n\}=0 ~,
\end{eqnarray}
where we have removed the vertex operators $V_\kappa^a$ and $V_\nu^a$
after moving them to the left of the expressions by using (\ref{exchvyz}).
At this point we go over to the quantum mechanical realization (\ref{qmr})
and use (\ref{qmrfg}). We then remove all the $\tilde\Lambda$ after
moving them to the right of the expression. This leads to an equivalent
form
\begin{eqnarray}
\label{flocL2=0}
&&\sum_{n+r=K}\{C^a_{n0}(\kappa;\varpi-\nu\lambda^a)
C^a_{r0}(\nu;\varpi+n\alpha^a)\nonumber\\
&&\hskip 2cm \times[\varpi^a-\nu+1-y^+]_n
[\varpi^a-\nu+1-y^-]_n[y^++n]_r[y^-+n]_r \nonumber\\
&&\hskip 1cm -C^a_{n0}(\kappa;\varpi+r\alpha^a)
C^a_{r0}(\nu;\varpi-\kappa\lambda^a)\nonumber\\
&&\hskip 2cm \times[\varpi^a+r+1-y^+]_n
[\varpi^a+r+1-y^-]_n[y^+-\kappa]_r[y^--\kappa]_r\}=0 ~,
\end{eqnarray}
where $[x]_n\equiv[x][x+1]\cdots[x+n-1]$ with $[x]_0=1$.
Since $y^\pm$ are arbitrary, we may freely choose them
to find $C_{n0}^a$. For $y^+=1-K$ and $y^-=\kappa$ (\ref{flocL2=0})
reduces to
\begin{eqnarray}
\label{step1}
&&C_{K0}^a(\kappa;\varpi-\nu\lambda^a)[\varpi^a-\nu+K]_K
[\varpi^a-\kappa-\nu+1]_K \nonumber\\
&&\hskip 2.0cm =C_{K0}^a(\kappa;\varpi)[\varpi^a+K]_K
[\varpi^a-\kappa+1]_K~.
\end{eqnarray}
Since $C^a_{n0}$ does not depend on $\varpi^{\bar a}$, as we shall
show below, this completely fixes the $\varpi$ dependence as
\begin{eqnarray}
\label{cn01}
C^a_{n0}(\kappa;\varpi)=\frac{f_n(\kappa)}{[\varpi^a+n]_n
[\varpi^a-\kappa+1]_n}~,
\end{eqnarray}
where $f_n(\kappa)$ is a function of $\kappa$. To determine $f_n$
we next assume $y^+=\varpi^a-\nu+1$ and $y^-=\kappa$. Then
(\ref{flocL2=0}) and (\ref{cn01}) give $f_K(\kappa)[\nu]_K
=f_K(\nu)[\kappa]_K$. Hence we may put
\begin{eqnarray}
\label{fn}
f_n(\kappa)=c_n[\kappa]_n~,
\end{eqnarray}
where $c_n$ is independent of $\kappa$. It satisfies the recurrence
relation $[n]c_n=c_1c_{n-1}$. This can be obtained from (\ref{flocL2=0})
combined with (\ref{cn01}) and (\ref{fn}) for $y^+=1-K$ and $y^-=\kappa-1$.
Since $c_0=1$ by definition, we find
\begin{eqnarray}
\label{cn}
c_n=\frac{c_1^n}{[n]!}~, \qquad (n=0,1,\cdots)
\end{eqnarray}
where $[n]!\equiv[1][2]\cdots[n]$ is the $q$-factorial and $c_1$ is not
fixed by the locality constraints. It is determined by requiring
the field equation to hold, as
\begin{eqnarray}
\label{c1}
c_1=\frac{\eta}{8\pi g\sin2\pi g}~.
\end{eqnarray}
We will show this in the next section. Combining all these results,
we obtain $C^a_{n0}$ as
\begin{eqnarray}
\label{can0}
C^a_{n0}(\kappa;\varpi)=\Biggl(\frac{\eta}{8\pi g\sin2\pi g}\Biggr)^n
\frac{[\kappa]_n}{[n]![\varpi^a+n]_n
[\varpi^a-\kappa+1]_n}~.
\end{eqnarray}
We now turn to the second condition of (\ref{dcpqloc}). It can be
dealt with by a method similar to the above. We substitute (\ref{tranys})
into the condition and then move the free vertex operators $V_\kappa^a$
and $V_\nu^{\bar a}$ to the left of the expression. After this manipulation
we can safely remove them, and the quantum mechanical realization
(\ref{qmrfg}) can be applied. We gather all the $\tilde\Lambda$ to the
right of the expression by noting (\ref{opalgyz}) and then remove them from
the equation. The truncated locality condition can thus be cast into the
following form
\begin{eqnarray}
\label{2ndlocc}
&&\sum_{{n+m+s=K}\atop {r+m+s=L}}\{
C^a_{nm}(\kappa;\varpi-\nu\lambda^{\bar a})
C^{\bar a}_{rs}(\nu;\varpi+(n+m)\alpha^a+m\alpha^{\bar a})
\nonumber\\
&&\hskip 2.5cm \times
[\varpi^a+1-y^+]_n[\varpi^a+1-y^-]_n \nonumber\\
&&\hskip 2.5cm \times[\varpi^a+\varpi^{\bar a}-\nu+1-y^+]_m
[\varpi^a+\varpi^{\bar a}-\nu+1-y^-]_m[z^+-\nu]_m[z^--\nu]_m
\nonumber\\
&&\hskip 2.5cm \times
[\varpi^{\bar a}-n+1-z^+]_r[\varpi^{\bar a}-n+1-z^-]_r \nonumber\\
&&\hskip 2.5cm \times [y^++n+m]_s[y^-+n+m]_s
\nonumber\\
&&\hskip 2.5cm \times
[\varpi^a+\varpi^{\bar a}+n+m+1-z^+]_s[\varpi^a+\varpi^{\bar a}+n+m+1-z^-]_s
\nonumber\\
&&\hskip 1.4cm-C^a_{nm}(\kappa;\varpi+s\alpha^a+(r+s)\alpha^{\bar a})
C^{\bar a}_{rs}(\nu;\varpi-\kappa\lambda^a)\nonumber\\
&&\hskip 2.5cm \times [\varpi^a-r+1-y^+]_n[\varpi^a-r+1-y^-]_n \nonumber\\
&&\hskip 2.5cm \times[\varpi^a+\varpi^{\bar a}+r+s+1-y^+]_m
[\varpi^a+\varpi^{\bar a}+r+s+1-y^-]_m \nonumber\\
&&\hskip 2.5cm \times[z^++r+s]_m[z^-+r+s]_m \nonumber\\
&&\hskip 2.5cm \times[\varpi^{\bar a}+1-z^+]_r[\varpi^{\bar a}+1-z^-]_r
[y^+-\kappa]_s[y^--\kappa]_s
\nonumber\\
&&\hskip 2.5cm \times[\varpi^a+\varpi^{\bar a}-\kappa+1-z^+]_s
[\varpi^a+\varpi^{\bar a}-\kappa+1-z^-]_s\} =0 ~.
\end{eqnarray}
We first confirm that $C^a_{n0}$ is independent of $\varpi^{\bar a}$, as
announced before. For $L=0$, (\ref{2ndlocc}) reduces to
\begin{eqnarray}
\label{wabindep}
C^a_{K0}(\kappa;\varpi-\nu\lambda^{\bar a})=C^a_{K0}(\kappa;\varpi)~,
\end{eqnarray}
which implies that the $C^a_{n0}$ depends only on $\varpi^a$.
We now consider the case $K\ge L$ and put $n=K-L$, $m=L$. For the choice of
the arbitrary parameters
\begin{eqnarray}
\label{ypmzpm}
y^+=\kappa~, \quad y^-=1-n-m~, \quad z^+=\nu~, \quad
z^-=\varpi^{\bar a}+1~,
\end{eqnarray}
we obtain from (\ref{2ndlocc})
\begin{eqnarray}
\label{cnm-cl0}
C^a_{nm}(\kappa;\varpi)&=&(-1)^m\frac{[n+m]_m[\varpi^a+2n+m]_m
[\varpi^a-\kappa+n+1]_m[\varpi^{\bar a}-\nu-n-m+1]_m}{%
[\nu]_m[\varpi^a+\varpi^{\bar a}+n+m]_m
[\varpi^a+\varpi^{\bar a}-\kappa+1]_m[\varpi^{\bar a}+1]_m}\nonumber\\
&&\times C^a_{n+m\:0}(\kappa;\varpi-\nu\lambda^{\bar a})
C^{\bar a}_{m\:0}(\nu;\varpi+(n+m)\alpha^a) ~.
\end{eqnarray}
We see that the coefficients $C^a_{nm}$ with $m\ne0$, which are
characteristic of the $A_2$-Toda theory, are related to those of the $A_1$
case, i.e., the Liouville theory. This combined with (\ref{can0})
leads to the full expression for the expansion coefficients
\begin{eqnarray}
\label{canm}
C^a_{nm}(\kappa;\varpi)&=&(-1)^m
\Biggl(\frac{\eta}{8\pi g\sin2\pi g}\Biggr)^{n+2m}
\frac{[\kappa]_{n+m}}{[n]![m]!}
\frac{1}{[\varpi^a+n+m]_n[\varpi^a-\kappa+1]_n}\nonumber\\
&&\times \frac{1}{
[\varpi^a+\varpi^{\bar a}+n+m]_m
[\varpi^a+\varpi^{\bar a}-\kappa+1]_m
[\varpi^{\bar a}-n]_m[\varpi^{\bar a}+1]_m}~.
\end{eqnarray}
Except for the apparent difference due to the slight change of
convention, this coincides with the conjecture given in ref.
\cite{fit98}. It is easy to verify that (\ref{canm}) satisfies
the conditions (\ref{Cscon}).
We can interpret (\ref{canm}) as a quantum deformation of the
classical Toda exponential function. This can be seen by
taking the classical limit, which is defined by $g\rightarrow0$
with $\psi_\pm$ kept fixed. Since $[c]\rightarrow c$ and
$2\pi g\:[\beta\cdot\varpi+c]\rightarrow -i\:{\rm sinh}
\displaystyle{\frac{\gamma}{4}}\beta\cdot P$ for any
constant $c$, we can easily compute the classical limit of
(\ref{canm}) as
\begin{eqnarray}
\label{climcanm}
C^a_{nm}(\kappa;\varpi)&\rightarrow&{-\kappa\choose n+m}
{n+m\choose n}(C_{\alpha^a})^{2n}(C_{\alpha^a+\alpha^{\bar a}}
C_{\alpha^{\bar a}})^{2m} ~,
\end{eqnarray}
where $\displaystyle{\nu\choose n}$ is the ordinary binomial
coefficient and $C_\beta$ is given by (\ref{ce}).
This coincides with the coefficients appearing in the expansion
of the classical Toda exponential
${\rm e}^{\kappa\lambda^a\cdot\varphi}$ obtained from
(\ref{a2csol}), where the coefficients $C$'s are contained
in the screening charges.
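For instance, for $n=1$, $m=0$ one finds from (\ref{can0})
\begin{eqnarray}
C^a_{10}(\kappa;\varpi)=\frac{\eta}{8\pi g\sin2\pi g}\,
\frac{[\kappa]}{[\varpi^a+1][\varpi^a-\kappa+1]}
\rightarrow\frac{-\kappa}{4\,{\rm sinh}^2\displaystyle{\frac{\gamma}{4}}
\alpha^a\cdot P}=-\kappa\,(C_{\alpha^a})^2~,
\end{eqnarray}
in agreement with (\ref{climcanm}).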
If $\kappa=-j$ with some nonnegative integer $j$, $[-j]_{n+m}=0$
for $n+m>j$. Hence, ${\rm e}^{-\eta j\lambda^a\cdot\varphi}$
reduces to a finite polynomial in the screening charges as
is expected from the classical solution (\ref{a2csol}) \cite{fl,bg,bg89}.
It contains $\frac{1}{2}(j+1)(j+2)$ terms corresponding to
the dimension of the completely symmetric tensor product
of $j$ defining representations.
So far our main concern has been the construction of the Toda exponential
operators associated with the fundamental weights. Arbitrary
exponential operators can be constructed from them as composite
operators.
operators. Let $\beta$ be an arbitrary vector in the $A_2$ root space,
then it can be expressed as a linear combination of the fundamental
weights as
\begin{eqnarray}
\label{bll}
\beta=\beta^a+\beta^{\bar a} \qquad
{\rm with}\quad \beta^a\equiv\lambda^a\alpha^a\cdot\beta
~, \quad \beta^{\bar a}\equiv\lambda^{\bar a}\alpha^{\bar a}\cdot\beta ~.
\end{eqnarray}
Hence we may define
\begin{eqnarray}
\label{expbphi}
{\rm e}^{\eta\beta\cdot\varphi(x)}
&=&\lim_{x'\to x}|(1-{\rm e}^{-i(x^+-x'{}^+)})
(1-{\rm e}^{-i(x^--x'{}^-)})|^{\Delta_\beta-\Delta_{\beta^a}
-\Delta_{\beta^{\bar a}}}
{\rm e}^{\eta\beta^a\cdot\varphi(x)}
{\rm e}^{\eta\beta^{\bar a}\cdot\varphi(x')} \nonumber\\
&=&\sum_{n,m}\sum_{r,s}C^a_{nm}(\alpha^a\cdot\beta;\varpi
+\beta^a)C^{\bar a}_{rs}(\alpha^{\bar a}\cdot\beta;
\varpi+\beta+(n+m)\alpha^a+m\alpha^{\bar a})\nonumber\\
&&\hskip 1.5cm\times({\cal Y}_a(x))^n({\cal Y}_{a\bar a}(x))^m
:{\rm e}^{\eta\beta\cdot\psi(x)}:({\cal Y}_{\bar a}(x))^r
({\cal Y}_{\bar aa}(x))^s ~,
\end{eqnarray}
where $\Delta$'s are the conformal weights of the exponential operators
and are given by (\ref{cdim}). In particular, we need
such operators associated with the simple roots to establish
the operatorial field equations. From the general formula
(\ref{expbphi}) we obtain for $\beta=\alpha^a=2\lambda^a-\lambda^{\bar a}$
\begin{eqnarray}
\label{ealp}
{\rm e}^{\eta\alpha^a\cdot\varphi(x)}&=&\sum_{n,m}
\sum_{r+s\le1}
C^a_{nm}(2;\varpi+2\lambda^a)C^{\bar a}_{rs}(-1;\varpi+(n+m+1)
\alpha^a+m\alpha^{\bar a})\nonumber\\
&&\hskip 1.5cm\times({\cal Y}_a(x))^n({\cal Y}_{a\bar a}(x))^m
V_a(x)({\cal Y}_{\bar a}(x))^r
({\cal Y}_{\bar aa}(x))^s ~,
\end{eqnarray}
where $V_a(x)=V_a^+(x^+)\star V_a^-(x^-)$. This will be used in the next
section.
\section{Field Equations}
\label{sec:feq}
\setcounter{equation}{0}
From the Toda exponential operator (\ref{teo}) one can define the local
Toda field operators as the derivative with respect to $\kappa$ at
$\kappa=0$. It is explicitly given by
\begin{eqnarray}
\label{ltf}
\eta\lambda^a\cdot\varphi=\eta\lambda^a\cdot\psi-\frac{1}{\sin^22\pi g}
\sum_{n,m}\Biggl(\frac{\mu^2}{4}\Biggr)^{n+2m}D^a_{nm}(\varpi)
{\cal Y}_a^n{\cal Y}_{a\bar a}^m~,
\end{eqnarray}
where $D^a_{nm}$ is defined by
\begin{eqnarray}
\label{danm}
D^a_{nm}(\varpi)&=&-\sin^22\pi g\left.\frac{d}{d\kappa}
C^a_{nm}(\kappa;\varpi)
\right|_{\kappa=0}
\nonumber\\
&=&(-1)^{m-1}\frac{c_1^{n+2m-1}}{4}\frac{[n+m-1]!}{[n]![m]!}
\frac{1}{[\varpi^a+n+m]_n[\varpi^a+1]_n}\nonumber\\
&&\times \frac{1}{[\varpi^a+\varpi^{\bar a}+n+m]_m[\varpi^a+\varpi^{\bar a}
+1]_m[\varpi^{\bar a}-n]_m[\varpi^{\bar a}+1]_m} ~.
\end{eqnarray}
We show that the Toda fields thus defined satisfy the operator field equations
\begin{eqnarray}
\label{fieldeq}
\partial_\mu\partial^\mu\varphi
+\mu^2\sum_{a=1,2}\alpha^a{\rm e}^{\eta\alpha^a\cdot\varphi}=0~,
\end{eqnarray}
where the exponential operators associated with the simple roots are
defined by (\ref{ealp}). By decomposing (\ref{fieldeq}) into sectors
with definite $\varpi$-charges, we can cast (\ref{fieldeq}) into an
equivalent form
\begin{eqnarray}
\label{fequiv}
&&\frac{1}{4\sin^22\pi g}\partial_+\partial_-({\cal Y}_a^{n+1}
{\cal Y}_{a\bar a}^m) \nonumber\\
&&\hskip 1cm=-[n+1][n+m+1]
[\varpi^a+n+1][\varpi^a+n+m+1] \nonumber\\
&&\hskip 1.8cm\times\frac{[\varpi^a+\varpi^{\bar a}+n+m+1]
[\varpi^{\bar a}-n-1]}{[\varpi^a+\varpi^{\bar a}+n+2m+1]
[\varpi^{\bar a}-n+m-1]}
{\cal Y}_a^n{\cal Y}_{a\bar a}^mV_a\nonumber\\
&&\hskip 1cm-[m][n+m+1][\varpi^a+\varpi^{\bar a}+n+m+1]
[\varpi^a+\varpi^{\bar a}+m]\nonumber\\
&&\hskip 1.8cm
\times\frac{[\varpi^a+n+m+1][\varpi^{\bar a}+m]}{[\varpi^a+2n+m+2]
[\varpi^{\bar a}-n+m-1]}{\cal Y}_a^{n+1}{\cal Y}_{a\bar a}^{m-1}
V_a{\cal Y}_{\bar a}\nonumber\\
&&\hskip 1cm+[n+1][m][\varpi^{\bar a}-n-1][\varpi^{\bar a}+m]\nonumber\\
&&\hskip 1.8cm\times
\frac{[\varpi^a+n+1][\varpi^a+\varpi^{\bar a}+m]}{[\varpi^a+2n+m+2]
[\varpi^a+\varpi^{\bar a}+n+2m+1]} {\cal Y}_a^n{\cal Y}_{a\bar a}^{m-1}
V_a{\cal Y}_{\bar aa}~,
\end{eqnarray}
where use has been made of the explicit forms (\ref{canm}) and (\ref{danm}).
To show (\ref{fequiv}) we first note the relations
\begin{eqnarray}
\label{dav}
\partial_+A_a=2i\sin2\pi g[\varpi^a+1]V_a^+~, \qquad
\partial_+A_{a\bar a}=2i\sin2\pi g[\varpi^a+\varpi^{\bar a}+1]V_a^+
A_{\bar a}~.
\end{eqnarray}
Since $V_a^+$ commutes with $A_a$, we find
\begin{eqnarray}
\label{davn}
\partial_+A_a^{n+1}=2i\sin2\pi g[n+1][\varpi^a+n+1]A_a^nV_a^+~.
\end{eqnarray}
In deriving this, use has been made of the relation
\begin{eqnarray}
\label{sumqn}
\sum_{k=0}^n[x+2k+1]=[n+1][x+n+1]~.
\end{eqnarray}
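This identity can be checked directly. Assuming the symmetric
$q$-number convention $[x]=\frac{q^x-q^{-x}}{q-q^{-1}}$ with
$q={\rm e}^{2\pi ig}$, so that $[x]=\sin2\pi gx/\sin2\pi g$, the lhs
is a pair of geometric series:
\begin{eqnarray}
\sum_{k=0}^n[x+2k+1]&=&\frac{q^{x+2n+2}-q^x-q^{-x}+q^{-x-2n-2}}
{(q-q^{-1})^2}\nonumber\\
&=&\frac{(q^{n+1}-q^{-n-1})(q^{x+n+1}-q^{-x-n-1})}{(q-q^{-1})^2}
=[n+1][x+n+1]~.\nonumber
\end{eqnarray}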
Similarly, from the commutativity of $V_a^+A_{\bar a}$ and $A_{a\bar a}$,
we obtain
\begin{eqnarray}
\label{daabam}
\partial_+A_{a\bar a}^m=2i\sin2\pi g[m][\varpi^a+\varpi^{\bar a}+m]
A_{a\bar a}^{m-1}V_a^+A_{\bar a}~.
\end{eqnarray}
This combined with (\ref{davn}) yields
\begin{eqnarray}
\label{daanaabam}
\frac{1}{2i\:\sin2\pi g}\partial_+(A_a^{n+1}A_{a\bar a}^m)
&=&\frac{[n+1][\varpi^a+n+1][\varpi^{\bar a}-n-1]}{[\varpi^{\bar a}-n+m-1]}
A_a^nA_{a\bar a}^mV_a^+\nonumber\\
&&\hskip .5cm+\frac{[m][\varpi^a+\varpi^{\bar a}+m]
[\varpi^{\bar a}+m]}{[\varpi^{\bar a}-n+m-1]}
A_a^{n+1}A_{a\bar a}^{m-1}V_a^+A_{\bar a}~,
\end{eqnarray}
where use has been made of the relation
\begin{eqnarray}
\label{vaaabam}
V_a^+A_{a\bar a}^m=\frac{[\varpi^{\bar a}-1]}{[\varpi^{\bar a}+m+1]}
A_{a\bar a}^m+\frac{[m]}{[\varpi^{\bar a}+m+1]}A_aA_{a\bar a}^{m-1}
V_a^+A_{\bar a} ~.
\end{eqnarray}
This can be shown by mathematical induction. It is now straightforward to
verify (\ref{fequiv}) from (\ref{daanaabam}) and the analogous relation
for the right-chiral sector obtained by the replacements $V_a^+\rightarrow
V_a^-$, $A\rightarrow B$ and $\partial_+\rightarrow\partial_-$.
We now turn our attention to the lhs of (\ref{fequiv}) and evaluate it as
\begin{eqnarray}
\label{dynym}
\frac{1}{4\sin^22\pi g}\partial_+\partial_-({\cal Y}_a^{n+1}
{\cal Y}_{a\bar a}^m)&=&\alpha(\varpi)^2{\cal Y}_a^n{\cal Y}_{a\bar a}^m
V_a+\beta(\varpi)^2{\cal Y}^{n+1}{\cal Y}_{a\bar a}^{m-1}
V_a{\cal Y}_{\bar a} \nonumber\\
&&+\alpha(\varpi)\beta(\varpi)
{\cal Y}_a^n{\cal Y}_{a\bar a}^{m-1}
(A_{a\bar a}V_a^+\star V_a^-B_aB_{\bar a}
+V_aA_aA_{\bar a}\star B_{a\bar a}V_a^-)~, \nonumber\\
\end{eqnarray}
where $\alpha$ and $\beta$ are given by
\begin{eqnarray}
\label{alpbet}
\alpha(\varpi)\equiv\frac{[n+1][\varpi^a+n+1]
[\varpi^{\bar a}-n-1]}{[\varpi^{\bar a}-n+m-1]} ~,\quad
\beta(\varpi)\equiv\frac{[m][\varpi^a+\varpi^{\bar a}+m]
[\varpi^{\bar a}+m]}{[\varpi^{\bar a}-n+m-1]}~.
\end{eqnarray}
To simplify the crossed terms on the rhs of (\ref{dynym}) further,
we note the relation
\begin{eqnarray}
\label{va}
V_a^+A_{\bar aa}=\gamma(\varpi)A_{a\bar a}V^+_a+\delta(\varpi)
A_aA_{\bar a}V^+_a~,
\end{eqnarray}
where $\gamma$ and $\delta$ are defined by
\begin{eqnarray}
\label{adbd}
\gamma(\varpi)\equiv-\frac{[\varpi^a+3]}{[\varpi^{\bar a}]}~, \qquad
\delta(\varpi)\equiv\frac{[\varpi^a
+\varpi^{\bar a}+3]}{[\varpi^{\bar a}]}~.
\end{eqnarray}
Eq. (\ref{va}) and the analogous relation for the right-chiral
operators give
\begin{eqnarray}
\label{vy}
V_a{\cal Y}_{\bar aa}&=&\gamma(\varpi)^2{\cal Y}_{a\bar a}
V_a+\delta(\varpi)^2{\cal Y}_aV_a{\cal Y}_{\bar a} \nonumber\\
&&+\gamma(\varpi)\delta(\varpi)
(A_{a\bar a}V_a^+\star V_a^-B_aB_{\bar a}
+V_aA_aA_{\bar a}\star B_{a\bar a}V_a^-) ~.
\end{eqnarray}
From (\ref{dynym}) and (\ref{vy}) we arrive at the following
expression
\begin{eqnarray}
\label{dynym2}
\frac{1}{4\sin^22\pi g}\partial_+\partial_-({\cal Y}_a^{n+1}
{\cal Y}_{a\bar a}^m)&=&-\alpha(\varpi)\Biggl(
\alpha(\varpi)-\beta(\varpi)
\frac{\delta(\tilde\varpi)}{\gamma(\tilde\varpi)}\Biggr)
{\cal Y}_a^n{\cal Y}_{a\bar a}^mV_a \nonumber\\
&&-\beta(\varpi)\Biggl(\beta(\varpi)-\alpha(\varpi)
\frac{\gamma(\tilde\varpi)}{\delta(\tilde\varpi)}\Biggr)
{\cal Y}_a^{n+1}{\cal Y}_{a\bar a}^{m-1}
V_a{\cal Y}_{\bar a}\nonumber\\
&&-\frac{\alpha(\varpi)\beta(\varpi)}{\gamma(\tilde\varpi)
\delta(\tilde\varpi)}
{\cal Y}_a^n{\cal Y}_{a\bar a}^{m-1}V_a{\cal Y}_{\bar aa}
\end{eqnarray}
with $\tilde\varpi=\varpi+(n+m-1)\alpha^a+(m-1)\alpha^{\bar a}$.
Using the explicit forms (\ref{alpbet}) and (\ref{adbd}), one
can show that (\ref{dynym2}) is nothing but eq. (\ref{fequiv}).
This completes the proof of the operator field equations
(\ref{fieldeq}).
\section{Quantum Exchange Algebra and Locality}
\label{sec:qealgandloc}
\setcounter{equation}{0}
We have shown that the requirement of locality for (\ref{teo})
determines the coefficients $C^a_{nm}$. In fact, the truncated
screening charges (\ref{scrchYZ}) suffice to solve the locality
conditions (\ref{dcpqloc}). Since the operator algebra (\ref{oep})
of the full screening charges gives stronger restrictions on the
coefficients $C^a_{nm}$ than that
of the truncated ones, it might occur that (\ref{dcpqloc}) would
not be satisfied for any choice of the coefficients. So, we should
confirm the locality with the coefficients (\ref{canm}). One way
to achieve this goal is to show that the ${\cal R}$-matrices
satisfy (\ref{locR2}). As we have argued in sect. \ref{sec:qa2toda},
these conditions are equivalent to locality. It is,
however, a rather involved task to obtain the explicit forms of
the ${\cal R}$-matrices from the exchange algebra (\ref{oep}).
Fortunately, there are cases where they can be found or
be systematically constructed by the method developed in
sect. \ref{sec:qteo}.
We first consider the exchange algebra (\ref{excalg}) for
$b=a$ in the left-chiral sector. We further assume $x>x'$
so that the decompositions of the screening charges introduced
in sect. \ref{sec:qteo} can be used. We substitute the
screening charges by the truncated ones given by
(\ref{scrchYZ}) and then apply the algebraic manipulations
leading to (\ref{flocL2=0}). We thus obtain
\begin{eqnarray}
\label{rmataa}
&&\sum_{r+r'=n+n'\atop s+s'=m+m'}(-1)^{r'+s}
q^{-(r'+s')(\varpi^a+r'+s')+s(\varpi^{\bar a}-r'+s')
+\frac{2}{3}\kappa\nu+(r+s)\nu-s(r+s')}
\nonumber\\
&&\hskip 1.5cm\times
{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}
({\scriptstyle {a\atop
\kappa}}|{\scriptstyle {a\atop \nu}};\varpi\bigr)
[y]_{r'+s'}[\varpi^a+\nu+1+r'-y]_r
[\varpi^a+\varpi^{\bar a}+\nu+1+s'-y]_s\nonumber\\
&&\hskip 2.5cm=(-1)^{n'+m}q^{-(n'+m')(\varpi^a+\kappa
+2n+m+n'+m')+m(\varpi^{\bar a}-n)-m(n'+m')}\nonumber\\
&&\hskip 3.5cm\times[\varpi^a+1-y]_n[\varpi^a+\varpi^{\bar a}+1-y]_m
[\kappa+n+m+y]_{n'+m'}~,
\end{eqnarray}
where $y\equiv y^+$ is an arbitrary parameter. Since this gives only
$n+m+n'+m'+1$ relations for $(n+n'+1)(m+m'+1)$ elements of the
${\cal R}$-matrix for fixed $n,m,n',m'$, we cannot determine all the
elements for $(n+n')(m+m')\ne0$. This is due to the
oversimplification caused by the use of the truncated screening charges, and
merely implies that the truncation of the screening charges does not
work well here.\footnote{In arriving
at (\ref{rmataa}) we divide out the common factor $[z^+]_{n+n'}$
originating from the truncated screening charge $(g^+(0))^{n+n'}$.}
One can, however, find the matrix elements for $m=m'=0$
or $n=n'=0$ from (\ref{rmataa}). The former corresponds essentially
to the Liouville case and has already been investigated in ref.
\cite{gs93}. On the other hand the latter case is specific to
$A_2$-Toda theory. Interestingly, these two cases can be described by
$A_{nm}^{rs}$ defined by the equation
\begin{eqnarray}
\label{anmrs}
\sum_{r+s=n+m}A_{nm}^{rs}(\alpha,\beta,\gamma)
[y]_{s}[\alpha+s-y]_r=[\beta-y]_n[\gamma+n+y]_m~,
\end{eqnarray}
where $\alpha$, $\beta$ and $\gamma$ are arbitrary parameters.
In terms of $A_{nm}^{rs}$ the ${\cal R}$-matrices for the
above mentioned cases are given by
\begin{eqnarray}
\label{rn0r0}
{\cal R}_{n0}^{r0}{}_{;n'0}^{;r'0}({\scriptstyle {a\atop
\kappa}}|{\scriptstyle {a\atop \nu}};\varpi\bigr)
&=&(-1)^{n-r}q^{-\frac{2}{3}\kappa\nu+(n-r)(\varpi^a+\kappa
+n+r)-r'\kappa-r\nu-2rr'}
\nonumber\\
&&\hskip 2cm\times
A_{nn'}^{rr'}(\varpi^a+\nu+1,\varpi^a+1,\kappa)~,\nonumber\\
{\cal R}_{0m}^{0s}{}_{;0m'}^{;0s'}({\scriptstyle {a\atop
\kappa}}|{\scriptstyle {a\atop \nu}};\varpi\bigr)
&=&(-1)^{m-s}q^{-\frac{2}{3}\kappa\nu+(m-s)(\varpi^a+
\varpi^{\bar a}+\kappa+m+s)-s'\kappa-s\nu-2ss'}
\nonumber\\
&&\hskip 2cm\times A_{mm'}^{ss'}(\varpi^a+\varpi^{\bar a}+\nu+1,
\varpi^a+\varpi^{\bar a}+1,\kappa)~.
\end{eqnarray}
The solution to (\ref{anmrs}) has been obtained in ref. \cite{gs93}
and is given by
\begin{eqnarray}
\label{anmrssol}
A_{nm}^{rs}(\alpha,\beta,\gamma)&\equiv&\frac{1}{[s]!}
\frac{[\alpha]_{2s}}{[\alpha+s-1]_s}\frac{[\beta]_n
[\gamma+n]_m}{[\alpha]_{n+m}}\nonumber\\
&&\times\sum_{k=0}^s(-1)^k{s\choose k}_q
\frac{[\alpha+s-1]_k}{[\alpha+n+m]_k}
\frac{[\beta+n]_k}{[\beta]_k}
\frac{[\gamma+n-k]_k}{[\gamma+n+m-k]_k} \nonumber\\
&=&\frac{1}{[s]!}
\frac{[\alpha]_{2s}}{[\alpha+s-1]_s}\frac{[\beta]_n
[\gamma+n]_m}{[\alpha]_{n+m}}\nonumber\\
&&\times{}_4F_3\left[\left.{\alpha+s-1,\atop \alpha+n+m,}
{\beta+n,\atop \beta,}{1-\gamma-n,\atop 1-\gamma-n-m}
{-s\atop {}}\:\right|\:q\:;\:1\:\right] ~,
\end{eqnarray}
where $\displaystyle{{n\choose m}_q\equiv\frac{[n]!}{[m]![n-m]!}}$ is the
$q$-binomial coefficient and ${}_4F_3$ is a $q$-deformed
hypergeometric function defined by
\begin{eqnarray}
\label{qdhgf}
{}_4F_3\left[\left.{a,\atop e,}{b,\atop f,}{c,\atop g}
{d\atop {}}\right|\:q\:;z\right]
\equiv\sum_{n=0}^\infty\frac{[a]_n[b]_n[c]_n[d]_n}{[e]_n[f]_n[g]_n[n]!}z^n~.
\end{eqnarray}
In Appendix C we give an elementary account of (\ref{anmrssol}).
By making the substitution $y\rightarrow \beta-y$ in (\ref{anmrs}) and
then defining new parameters by
\begin{eqnarray}
\label{adbdcd}
\alpha'=\beta+\gamma~, \quad
\beta'=\beta~, \quad
\gamma'=\alpha-\beta~,
\end{eqnarray}
we find that (\ref{anmrs}) can be inverted by the relation
\begin{eqnarray}
\label{aad}
\sum_{n+n'=r+r'}A_{nn'}^{rr'}(\alpha,\beta,\gamma)
A_{s's}^{n'n}(\alpha',\beta',\gamma')
=\delta_{rs}\delta_{r's'}~.
\end{eqnarray}
We now show that, for a suitable choice of $c_{nm}$,
$A_{nm}^{rs}$ satisfies
\begin{eqnarray}
\label{caca}
c_{nn'}(\alpha,\beta,\gamma)A_{nn'}^{rr'}(\alpha,\beta,\gamma)
=c_{r'r}(\alpha',\beta',\gamma')
A_{r'r}^{n'n}(\alpha',\beta',\gamma')~.
\end{eqnarray}
To find $c_{nm}$, we need a property of the balanced
$q$-hypergeometric function \cite{gs93}
\begin{eqnarray}
\label{bqdhgf}
&&{}_4F_3\left[\left.{a,\atop e,}{b,\atop f,}
{c,\atop 1+a+b+c-e-f-n}
{-n\atop {}}\right|\:q\:;1\right]\nonumber\\
&&\hskip .5cm=\frac{[f-c]_n[e+f-a-b]_n}{[f]_n[e+f-a-b-c]_n}
{}_4F_3\left[\left.{e-a,\atop e,}{e-b,\atop e+f-a-b,}
{c,\atop 1+c-f-n}
{-n\atop {}}\right|\:q\:;1\right]~,
\nonumber\\
\end{eqnarray}
where $n$ is a nonnegative integer. Using this together
with the fact that ${}_4F_3$ defined by (\ref{qdhgf})
is symmetric in $a,b,c,d$ and in $e,f,g$, we obtain
\begin{eqnarray}
\label{doubletr}
&&{}_4F_3\left[\left.{\alpha+s-1,\atop \alpha+n+m,}
{\beta+n,\atop \beta,}{1-\gamma-n,\atop 1-\gamma-n-m}
{-s\atop {}}\:\right|\:q\:;\:1\:\right] \nonumber\\
&&\hskip 1.5cm =\frac{[m]![\alpha-\beta+m]_n
[\alpha+\gamma+2n+m-1]_m
[\beta+\gamma+n+m]_n}{[r]![\alpha'-\beta'+r]_s
[\alpha'+\gamma'+2s+r-1]_r[\beta'+\gamma'+r+s]_s}
\nonumber\\&&\hskip 2.5cm\times
{}_4F_3\left[\left.{\alpha'+n-1,\atop \alpha'+r+s,}
{\beta'+s,\atop \beta',}{1-\gamma'-s,\atop -\gamma'-r-s}
{-n\atop {}}\:\right|\:q\:;\:1\:\right] ~.
\end{eqnarray}
We thus find $c_{nm}$ from (\ref{anmrssol}), (\ref{caca}) and
(\ref{doubletr}) as
\begin{eqnarray}
\label{cnm}
c_{nm}(\alpha,\beta,\gamma)
=\frac{[\gamma]_n[\alpha-\beta]_m}{[n]![m]![\beta]_n
[\alpha+\gamma+2n+m-1]_m[\beta+\gamma+n-1]_n
[\beta+\gamma+2n]_m} ~,
\end{eqnarray}
where we have chosen the indeterminate multiplicative
factor arbitrarily.
In the above argument the explicit form of $A_{nm}^{rs}$
is used to show (\ref{caca}). It is possible to find
$c_{nm}$ directly from (\ref{anmrs}) without referring
to the details of $A_{nm}^{rs}$. This approach is
promising since it can be applied to cases where
no explicit form of the ${\cal R}$-matrix is
available. In the present case $c_{nm}$ can be defined as
a solution to the following equation
\begin{eqnarray}
\label{cseq}
&&\sum_{n,n'} c_{nn'}(\alpha',\beta',\gamma')
[y]_{n}[\alpha+n-y]_{n'}
[\beta'-y']_{n}[\gamma'+n+y']_{n'} \nonumber\\
&&\hskip 1cm =\sum_{n,n'} c_{nn'}(\alpha,\beta,\gamma)
[y']_n[\alpha'+n-y']_{n'}
[\beta-y]_n[\gamma+n+y]_{n'} ~,
\end{eqnarray}
where the sum should be taken over $n$ and $n'$ with $n+n'$
being fixed.
A remarkable fact is that this becomes an identity
with respect to $y$ and $y'$ for a suitable choice of the
coefficient functions and $\alpha'$, $\beta'$, $\gamma'$
satisfying (\ref{adbdcd}). That (\ref{cseq}) leads to
(\ref{caca}) can be shown by noting (\ref{anmrs}).
As one may easily realize, eq. (\ref{cseq}) is
nothing but (\ref{flocL2=0}) after a suitable identification
of the parameters. Consequently, $c_{nm}$ can be expressed
in terms of $C^a_{n0}$ given by (\ref{can0}). In fact it can be
shown that the resultant expression obtained in this way
coincides with (\ref{cnm}) up to a
multiplicative factor. This immediately leads to the conclusion that
the ${\cal R}$-matrices (\ref{rn0r0}) satisfy the locality
constraints (\ref{locR2}).
We next consider the exchange algebra (\ref{excalg}) for
$b={\bar a}$. This is the second case where the ${\cal R}$-matrix
is obtained systematically by the technique of truncated screening
charges. It is straightforward to show that the ${\cal R}$-matrix
must satisfy the relation
\begin{eqnarray}
\label{Raab}
&&\sum_{r+s+s'=n+m+m'\atop r'+s+s'=n'+m+m'}
(-1)^rq^{-\frac{1}{3}\kappa\nu
-s'(\varpi^a-r')+s(\varpi^{\bar a}+\nu+2r'+s'-r)
-(r'+s')(r+s+s')}
{\cal R}_{nm}^{rs}{}_{;n'm'}^{;r's'}
({\scriptstyle {a\atop
\kappa}}|{\scriptstyle {\bar a\atop \nu}};\varpi\bigr)\nonumber\\
&&\hskip 1.5cm\times
[\varpi^a+1-r'-y]_r[\varpi^a+\varpi^{\bar a}+\nu+1+r'+s'-y]_s[y]_{s'}
\nonumber\\
&&\hskip 1.5cm\times
[\varpi^{\bar a}+1-z]_{r'}
[\nu+r'+s'+z]_s
[\varpi^a+\varpi^{\bar a}+1-z]_{s'} \nonumber\\
&&=(-1)^nq^{-m'(\varpi^a+\kappa+2n+m-n')+m(\varpi^{\bar a}-n)+nn'-n'm'-mm'}
\nonumber\\
&&\hskip 1.5cm\times [\varpi^a+1-y]_n[\varpi^a+\varpi^{\bar a}+1-y]_m
[\kappa+n+m+y]_{m'}\nonumber\\
&&\hskip 1.5cm\times
[z]_m[\varpi^{\bar a}+1-n-z]_{n'}
[\varpi^a+\varpi^{\bar a}+\kappa+1+n+m-z]_{m'} ~,
\end{eqnarray}
where $y$ and $z$ are arbitrary variables. This not only allows us to
determine the ${\cal R}$-matrix recursively but also guarantees the
full locality conditions (\ref{locR2}) for $b=\bar a$ when combined
with (\ref{2ndlocc}) by the same reasoning explained above. The Toda
exponential operators ${\rm e}^{\eta\kappa\lambda^a\cdot\varphi}$
with the expansion coefficients (\ref{canm}), which were determined
by using only the truncated screening charges, are indeed local with respect to
${\rm e}^{\eta\nu\lambda^{\bar a}\cdot\varphi}$.
\section{Discussion}
\label{sec:discussion}
\setcounter{equation}{0}
We have investigated $A_2$-Toda field theory in terms of the canonical
free field. By introducing the chiral schematic approach, we have analyzed
the locality of the Toda exponentials. Locality turned out to impose
nontrivial constraints on the elements of the $r$- and
${\cal R}$-matrices. The main goal of sect. 2 was to establish that the
classical solution (\ref{exacsol}) induces a canonical mapping from the
interacting Toda system into a free theory. This has been achieved for
Liouville and $A_2$-Toda theories. In ref. \cite{kn} the canonicity of the
mapping was explicitly shown for Liouville theory within the vector scheme.
The efficiency of the chiral schematic description can be understood
by noting that it can reproduce the same results in a systematic and
simple way. Though the general treatment given in sect. 2 is not
restricted to $A_2$-system, we have not been able to establish
(\ref{cloc2}) for other Toda theories. A similar issue was studied by
Babelon \cite{Babe} using the transfer matrix method, and the chiral
components of the canonical free fields were identified for general
Toda theories. He worked in the chiral scheme, and his treatment of
the boundary conditions is somewhat different from ours. The periodic
boundary condition was taken into account in ref. \cite{btb}, and the
quadratic Poisson algebra (\ref{qpoisalg}) was argued for general Toda systems.
Including the periodicity condition from the beginning, as we have done,
brings about complications such as the extra zero-mode momentum dependences
in the screening charges. This makes the explicit computations of the
$r$-matrix rather cumbersome for higher rank theories.
The presentation of the quantum theory in sect. 3 is restricted to
$A_2$-system from the beginning. It could also be extended to general
Toda theories. The main difficulty in generalizing our results is that
tractable forms of the screening charges are not available for general
cases. The property that the screening charges and the free field vertex
appearing in the Toda exponential operator associated with a
fundamental weight are mutually commuting is crucial in our analysis
and is considered to be universal for Toda theories. This may be
established for the Toda exponential associated with the fundamental
weight corresponding to the defining representation of $A_N$ algebra
since the set of the screening charges appearing there are a
straightforward generalization of those of $A_1$- and $A_2$-systems.
They are given by $A_{12\cdots k}$ ($k=1,\cdots,N$) for the left-moving
sector with obvious notation. We also encounter new types of screening
charges other than these. For instance in $A_3$-system the screening
charges associated with $\lambda^2$, the six dimensional irreducible
representation, are at the classical level
\begin{eqnarray}
\label{a3sch}
A_2~, \quad A_{21}~, \quad A_{23}~, \quad A_{213}+A_{231}~,
\quad A_{2132}+A_{2312}~.
\end{eqnarray}
The mutual commutativity of their quantum counterparts is not obvious.
Besides these points, the locality analysis in sect. 3 is not restricted
to $A_2$-system since it relies essentially only on the fact that the
Toda exponentials can be expanded as a bilinear form of chiral fields
satisfying the quantum exchange algebra as given by (\ref{excalg}).
Arbitrary exponential operators for $A_2$-system have been obtained in
sect. 4 by generalizing the algebraic method of ref. \cite{fit96}. The locality
requirement turned out to be so strong that only the truncated screening
charges (\ref{tranys}) allowing two-dimensional quantum mechanical
realization suffice to determine all the expansion coefficients of
the exponential operators. We may stress that the algebraic method based
on the quantum mechanical realization of the truncated screening charges
is powerful in solving the locality conditions. This can be done without
referring to the explicit forms of the quantum exchange algebra
(\ref{excalg}). As has been argued in sect. 6, this method, however, fails
in finding some elements of the ${\cal R}$-matrix. This simply reflects
the fact that the truncation of the screening charges oversimplifies the
full operator algebra.
In conclusion, it is possible to find an exact operator solution for
$A_2$-Toda field theory as a quantum deformation of the classical solution.
The canonical mapping from the interacting theory to the free field is also realized
at the quantum level. Our algebraic method serves as an efficient tool
in solving the locality conditions. We believe it to be applicable
to similar analyses of general Toda theories.
\newpage
\section{{Introduction}}
\label{secProtein}
Proteins are among the most important macromolecules in a living cell. The function of a protein depends on the three dimensional native structure that it folds into in a particular environment. Knowledge about this native structure can have an enormous impact on the field of drug discovery. Computational methods for protein structure prediction (PSP) are of great interest since the \textit{in vitro} laboratory methods are very slow, expensive, and error-prone. In the absence of any known templates for the proteins, computational methods like homology modeling and threading are not applicable. \textit{Ab initio} methods start from scratch and perform a search on the conformational space of structures. High-resolution models require full atomic detail and are computationally expensive. Moreover, the relative contributions of the different forces in the energy function are unknown, and the space of the conformations is very large and complex. Simplified models, though lacking many details, provide a realistic backbone structure for proteins.
Even in the simplified models, the search space is not suitable for complete search methods. Local search methods can produce good quality conformations very quickly. However, they suffer from re-visitation and stagnation. The nature of the stagnation also depends on the fitness function. In the Hydrophobic-Polar (HP) energy model, PSP essentially becomes a search for a conformation having a compact hydrophobic core at the center. Local search algorithms can quickly find a compact core. However, once such a core is found, the search stagnates and spends enormous effort in the quest for an alternative core.
In this paper, we attempt to restructure segments of a conformation with a very compact core. We select one large segment or a number of small segments and apply exhaustive local search. The total number of amino-acid positions affected by the segments selected in an iteration is dynamically adjusted with the stagnation period. We also use a tabu list to prevent recently changed amino-acid positions from being modified again. Moreover, we apply a mix of heuristics so that one heuristic can help escape local minima of another. These heuristics are derived from domain specific knowledge. Experimental results show that our approach significantly outperforms the state-of-the-art methods on a set of standard benchmark proteins on Face Centered Cubic (FCC) lattice.
\section{{Problem Definition}}
\label{secBack}
A protein is a polymer of amino-acids, which are also called monomers. There are only 20 different amino acids. In the simplified model, each amino acid is represented by the position of its $\alpha$-$C$ atom. The position is a valid point in the three dimensional lattice. Moreover, a simplified function is used in calculating the energy of a conformation. Note that every two consecutive monomers in the sequence are in \textit{contact} or neighbors on the lattice (called the \textit{chain constraint}) and that two monomers cannot occupy the same lattice point (called the \textit{self avoiding constraint}).
\subsection{FCC Lattice}
The Face Centered Cubic (FCC) lattice is preferred to other lattices since it has the highest packing density and provides the highest degree of freedom for placing an amino acid. Thus, the FCC lattice provides a realistic discrete mapping for proteins. An FCC lattice has 12 basis vectors: $\vec{v}_1=(1,1,0)$, $\vec{v}_2=(-1,-1,0)$, $\vec{v}_3=(-1,1,0)$, $\vec{v}_4=(1,-1,0)$, $\vec{v}_5=(0,1,1)$, $\vec{v}_6=(0,1,-1)$, $\vec{v}_7=(0,-1,-1)$, $\vec{v}_8=(0,-1,1)$, $\vec{v}_9=(1,0,1)$, $\vec{v}_{10}=(-1,0,1)$, $\vec{v}_{11}=(-1,0,-1)$, $\vec{v}_{12}=(1,0,-1)$. Two lattice points $p,q$ $\in$ $\mathbb{L}$ are said to be in contact or $neighbors$ of each other if $q = p+\vec{v}_i$ for some vector $\vec{v}_i$ in the basis of $\mathbb{L}$.
\subsection{HP Energy Model}
The basic Hydrophobic-Polar (HP) model introduced in \cite{hpdill} divides the amino-acids into two groups: hydrophobic H and hydrophilic or polar P. The amino acid sequence of a given protein then becomes a string $s$ over the alphabet $\{H,P\}$. The free energy calculation for the HP model, shown in (\ref{eqHP}), counts only the energy interactions between two non-consecutive amino acid monomers.
\begin{equation}
E=\sum_{i,j:i+1<j} c_{ij}.e_{ij}
\label{eqHP}
\end{equation}
Here, $c_{ij}$ = 1 when two monomers $i$ and $j$ are neighbors (or in contact) on the lattice and 0, otherwise. The other term, $e_{ij}$ is calculated depending on the type of amino acids: $e_{ij} = -1$, if $s_i = s_j = H$ and 0, otherwise. Note that, minimizing the summation in (\ref{eqHP}) is equivalent to maximizing the number of non-consecutive H-H contacts.
Using the HP energy model together with the FCC lattice, the simplified PSP problem is defined as: given a sequence $s$ of length $n$, find a self avoiding walk $p_1\cdots p_n$ on the lattice such that the energy defined by (\ref{eqHP}) is minimized.
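For concreteness, the following sketch (our illustration, not part of
the formal definition; names are ours) computes the HP energy
(\ref{eqHP}) of a candidate conformation given as a list of FCC
lattice points:
\begin{footnotesize}
\begin{verbatim}
# Illustrative Python sketch of the HP energy on FCC
FCC_BASIS = {(1,1,0), (-1,-1,0), (-1,1,0), (1,-1,0),
             (0,1,1), (0,1,-1), (0,-1,-1), (0,-1,1),
             (1,0,1), (-1,0,1), (-1,0,-1), (1,0,-1)}

def contact(p, q):
    # p, q are neighbors iff q - p is a basis vector
    return (q[0]-p[0], q[1]-p[1], q[2]-p[2]) in FCC_BASIS

def hp_energy(points, seq):
    # points: self-avoiding walk; seq: string over {H,P}
    energy = 0
    for i in range(len(points)):
        for j in range(i + 2, len(points)):
            if (seq[i] == 'H' and seq[j] == 'H'
                    and contact(points[i], points[j])):
                energy -= 1  # one H-H contact
    return energy
\end{verbatim}
\end{footnotesize}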
\section {{Related Work}}
\label{secRel}
Various techniques and their hybridizations have been applied to solve PSP. Genetic algorithms with the Metropolis condition as an acceptance criterion were found to be more efficient than Monte Carlo simulation \cite{Unger2}. Genetic algorithms were subsequently improved by other researchers \cite{geneticT,Rashid2012GAPlus}. The Constraint-based Hydrophobic Core construction (CHCC) algorithm \cite{yue1995forces} successfully produced optimal structures for the famous Tortilla benchmarks by using constraint programming techniques. The Constraint-Based Protein Structure Prediction (CPSP) tools \cite{mann2008cpsp} were developed based on this CHCC algorithm. Later on, another constraint solver, COLA \cite{DalPalu2007COLA}, was developed using several biologically inspired heuristics. It solved the problem with the finite domains of the existing SICStus libraries. A two-stage optimization method was proposed in \cite{twostage} to improve the solutions generated by the CPSP tool by using simulated annealing in the second stage. Further, a large neighborhood search method \cite{UllahS10}, when run for a long time, produced better results than simulated annealing. In this work, constraint programming was used for neighborhood generation.
The Tortilla benchmarks were solved for the first time in \cite{cebrian2008protein} by using the FCC lattice and tabu meta-heuristics. In a subsequent work, more improved results were achieved by applying large neighborhood search and constraint programming \cite{dotu2008protein,dotu2011protein}. A memory based approach \cite{mem2012} on top of the local search framework \cite{dotu2011protein} further improved the results for these benchmarks and other larger proteins taken from CASP.
\section{{Our Approach}}
\label{secMem}
Local search methods produce good results quickly. In the HP energy model, they form a compact core of hydrophobic residues at the center of the conformation, and the search cannot progress unless the core is broken and an alternate core is formed \cite{mem2012}. Even when guided by a good heuristic, the search oscillates within the same region of the search space and fails to improve. This obvious nature of local search algorithms results in stagnation. Large neighborhood techniques are adopted to handle this situation in protein structure prediction \cite{UllahS10,dotu2011protein} and in other domains as well \cite{BentH07}. Most of these algorithms depend on constraint programming for neighborhood generation. In this paper, we propose a hybrid local search that can improve the solutions by restructuring a single or multiple segments of the selected points, thus breaking the compact core to create an alternative one. Our algorithm belongs to the local search family and does not use constraint programming for neighborhood generation. The pseudo-code of our method is given below:
\begin{footnotesize}
\begin{verbatim}
Procedure LWS(Protein seq)
1 initializeTabu()
2 while ++it <= maxIt do
3 selectSegmentType()
4 selectSegmentVariables()
5 generateMoves()
6 selectHeuristic()
7 simulateMoves()
8 selectBestMove()
9 executeSelectedMove()
10 updateTabuList()
11 if not Improving for
12 maxStable steps then
13 maxStable *= factor
14 segmentSize++
15 end if
16 end while
17 return globalBestStructure
End Procedure.
\end{verbatim}
\end{footnotesize}
At each iteration, our algorithm selects a number of variables depending on the segment size and segment type (Lines 3-4). Then, all feasible moves are generated (Line 5) using the selected variables. These variables are essentially Cartesian co-ordinates of amino-acid residues. Once the feasible neighborhood is generated, the algorithm simulates the moves and calculates the changes in the heuristic selected in Line 6. The simulate function (Line 7) temporarily updates the selected heuristic in an incremental fashion. Once the best move is selected (Line 8), the conformation is updated by executing the move (Line 9). The execution of the move permanently updates the fitness functions and the heuristics. If there is a new global best, all the parameters are reset to the initial condition. Stagnation occurs if there is no improvement in the global best for $maxStable$ steps. At stagnation, the segment size is increased and the stagnation parameter $maxStable$ is multiplied by a $factor$. We also maintain a tabu list that prevents selecting recently modified variables.
\subsection{Algorithm Details}
In the rest of this section, we describe different parts of the algorithm in details.
\paragraph{Segment Types.}
We select one large segment or a number of small segments (see Figure~\ref{figWin}). At each iteration, a number of variables are selected to fill these segments, which are then used in generating moves. The purpose of the segment search is to locally re-optimize the structure within the segment using exhaustive search. Large segment type allows re-structuring of a large subsequence of the protein while the multiple segment type re-optimizes multiple subsequences simultaneously. We select a segment type randomly at each iteration. The total number of amino-acid positions in the segments selected in an iteration is $segmentSize$.
\begin{figure}[htb]
\begin{center}
\begin{footnotesize}
\begin{tabular}{c}
\includegraphics[width=.30\textwidth]{./con.eps}
\\
(a) Single Segment \\
\includegraphics[width=.30\textwidth]{./non.eps}
\\
(b) Multiple Segments\\
\end{tabular}
\end{footnotesize}
\end{center}
\caption{\footnotesize Two types of segments, for $segmentSize = 6$.
\label{figWin}}
\end{figure}
\paragraph{Variable Selection.} We maintain a tabu list to prevent repeating recent moves. The tabu tenure is selected randomly using a uniform distribution from the range $[4,sequenceLength/8]$. In the case of the single large segment, we select the variables from the range $[-segmentSize/2$, $+segmentSize/2]$ around a randomly selected point, provided none of them are in the tabu list. Though the tabu record is kept for single points, this mechanism requires all the points in a segment to be out of the tabu list, and hence a different part of the structure is guaranteed to be selected for restructuring at each iteration. In the case of multiple segments, we randomly select the variables that are not in the tabu list. This creates multiple segments, each containing points from different parts of the structure.
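A minimal sketch of this selection step is given below (our
illustration; names are hypothetical and degenerate cases, e.g.\ no
available segment, are not handled):
\begin{footnotesize}
\begin{verbatim}
import random

def select_variables(n, size, tabu, it, single):
    # tabu[i]: iteration until which position i is tabu
    if single:
        # contiguous block of non-tabu positions
        starts = [s for s in range(n - size)
                  if all(tabu[i] <= it
                         for i in range(s, s + size))]
        s = random.choice(starts)
        return list(range(s, s + size))
    # multiple segments: any non-tabu positions
    free = [i for i in range(n) if tabu[i] <= it]
    return sorted(random.sample(free, size))
\end{verbatim}
\end{footnotesize}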
\paragraph{Segment Search.}
Once the variables are selected, the algorithm then generates all the possible moves for those variables, keeping the rest of the chain unaffected. Pseudo-code for the procedure $generateMoves()$ is given below:
\begin{footnotesize}
\begin{verbatim}
Procedure generateMoves(Conf c, Segment w)
1 generator = initialize()
2 do
3 for (all points p in w)
4 newPoint = getPosition(generator)
5 if(occupied(newPoint))
6 skip(generator, p)
7 exit for
8 end if
9 add newPoint to currentMove
10 end for
11 if(!skip)
12 add currentMove to moveList
13 end if
14 while(next(generator))
End Procedure
\end{verbatim}
\end{footnotesize}
The algorithm starts with an initial generator string that assigns the same direction vector to all the positions. Each direction vector is one of the basis vectors between two consecutive points. For each of the points in the segment, a new point is calculated using the generator string (line 4). If that position is already occupied, then the rest of the generator string is ignored by calling the method $skip(generator,p)$. If all the new points are valid and guarantee feasibility, the move is added to the move list. The whole process is enumerated until the $next(generator)$ function produces the last generator string. The procedure $skip(generator,p)$ allows necessary pruning in the segment search.
\subsection{Heuristics}
Local search algorithms guided by a single heuristic function often get trapped in plateaus or local minima. One heuristic can possibly take the search out of the local minima of another heuristic. Instead of guiding the search by a single fitness function, we maintain three different heuristics and select one of them at each iteration. We explored a number of heuristics in our experiments. The best three are finally used:
\begin{enumerate}
\item {\textbf{Maximize pairwise H-H contacts:}}
Select a move that maximizes the number of contacts between two non-consecutive hydrophobic amino-acids.
\begin{equation*}
h_1=\sum_{i+1<j, s_i=H,s_j=H}^{n} c_{ij}
\end{equation*}
Here, $c_{ij}$ = 1 only if two monomers $i$ and $j$ are neighbors (or in contact) on the lattice and 0 otherwise. This heuristic corresponds to the HP energy function.
\item {\textbf{Minimize all pair H-H distance:}}
Select the move that minimizes the sum of the squared distances between all pairs of non-consecutive hydrophobic amino-acids. The heuristic is defined below:
\begin{equation*}
h_2=\sum_{i+1<j, s_i=H,s_j=H}^{n} d(i,j)^2
\end{equation*}
Here, $d(i,j)$ denotes the Euclidean distance between the positions of $i$ and $j$ monomers. This fitness function helps pull all the hydrophobic residues towards each other and form a compact core quickly.
\item {\textbf {Minimize squared distance to hydrophobic centroid:}}
Select the move that minimizes the sum of the squared distances of the H-amino acids to the hydrophobic centroid ($H_c$). We calculate the co-ordinates of the hydrophobic centroid from the average of the Cartesian co-ordinates of the hydrophobic amino-acids.
\begin{equation*}
x_c=\frac{1}{n_H}\sum_{i_H=0}^{n_H} x_{i_H}, y_c=\frac{1}{n_H}\sum_{i_H=0}^{n_H} y_{i_H}, z_c=\frac{1}{n_H}\sum_{i_H=0}^{n_H} z_{i_H}
\label{eqHCC}
\end{equation*}
Now the sum of the squared distances to this hydrophobic centroid ($H_c$) is defined below:
\begin{equation*}
h_3=\sum_{i_H=0}^{n_H} (x_c-x_{i_H})^2+(y_c-y_{i_H})^2+(z_c-z_{i_H})^2
\end{equation*}
Here, $n_H$ is the number of hydrophobic amino-acids in the sequence and $i_H$ is the index of an amino acid in the sequence.
\end{enumerate}
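For illustration, the three heuristics admit the following direct
(non-incremental) implementation on top of the energy sketch above;
in the actual solver they are maintained incrementally:
\begin{footnotesize}
\begin{verbatim}
def sq_dist(p, q):
    return sum((a - b)**2 for a, b in zip(p, q))

def heuristics(points, seq):
    Hs = [i for i, s in enumerate(seq) if s == 'H']
    # h1: non-consecutive H-H contacts (maximize)
    h1 = -hp_energy(points, seq)
    # h2: squared distances over non-consecutive
    #     H-H pairs (minimize)
    h2 = sum(sq_dist(points[i], points[j])
             for i in Hs for j in Hs if i + 1 < j)
    # h3: squared distances to the hydrophobic
    #     centroid (minimize)
    c = [sum(points[i][k] for i in Hs) / len(Hs)
         for k in range(3)]
    h3 = sum(sq_dist(points[i], c) for i in Hs)
    return h1, h2, h3
\end{verbatim}
\end{footnotesize}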
We also explored several other heuristics: minimizing the distance from the centroid, where the centroid is defined as $(\frac{1}{n}\sum_{i=1}^{n}x_i, \frac{1}{n}\sum_{i=1}^{n}y_i,\frac{1}{n}\sum_{i=1}^{n}z_i)$; minimizing the distance from the origin, defined as $\sum_{i=1}^{n}(x_i^2+y_i^2+z_i^2)$; and maximizing the sum of neighboring contacts for hydrophobic residues, defined as $\sum_{i_H}^{n_H}neighbors(i_H)$. However, these heuristics were not effective and were not chosen for the algorithm.
\begin{table*}[t]
\caption{Energy levels for the R, f180 and CASP instances\label{tableMain}}
\renewcommand{\arraystretch}{1.4}
\begin{center}
\begin{tabular}{c|c|c||cc||ccc||ccc||c}
\hline
\cline{1-12}
{}&{}&{}&\multicolumn{2}{c||}{\bf LWS}&\multicolumn{3}{c||}{\bf LS-Mem}&\multicolumn{3}{c||}{\bf LS-Tabu}&{}\\
\cline{4-11}
{Seq.}&{Len}&{$E_{l}$}&{best}&{avg}&{best}&{avg}&{R.I.\%}&{best}&{avg}&{R.I.\%}&{\bf LNS}\\
\hline
\cline{1-12}
{R1}&{200}&{-384}&{\textit{-359}}&{\bf -346}&{-353}&{-326}&{34.48}&{-332}&{-318}&{42.42}&{-330}\\
{R2}&{200}&{-383}&{\textit{-360}}&{\bf -346}&{-351}&{-330}&{30.18}&{-337}&{-324}&{37.28}&{-333}\\
{R3}&{200}&{-385}&{\textit{-356}}&{\bf -349}&{-352}&{-330}&{34.54}&{-339}&{-323}&{41.93}&{-334}\\
{f180$_1$}&{90}&{-378$^*$}&{\textit{-362}}&{\bf -346}&{-360}&{-334}&{27.27}&{-338}&{-327}&{37.25}&{-293}\\
{f180$_2$}&{90}&{-381$^*$}&{\textit{-365}}&{\bf -354}&{-362}&{-340}&{34.14}&{-345}&{-334}&{42.55}&{-312}\\
{f180$_3$}&{90}&{-378}&{\textit{-367}}&{\bf -356}&{-357}&{-343}&{37.14}&{-352}&{-339}&{43.58}&{-313}\\
\hline
{3no6}&{229}&{-455}&{\textit{-416}}&{\bf -397}&{-400}&{-375}&{27.5}&{-390}&{-373}&{29.26}&{-}\\
{3mr7}&{189}&{-355}&{\textit{-320}}&{\bf -305}&{-311}&{-292}&{20.63}&{-301}&{-287}&{26.47}&{-}\\
{3mse}&{179}&{-323}&{\textit{-285}}&{\bf -270}&{-278}&{-254}&{23.18}&{-266}&{-249}&{28.37}&{-}\\
{3mqz}&{215}&{-474}&{\textit{-422}}&{\bf -408}&{-415}&{-386}&{25}&{-401}&{-383}&{27.47}&{-}\\
{3on7}&{279}&{?}&{\textit{-509}}&{\bf -493}&{-499}&{-463}&{-}&{-491}&{-461}&{-}&{-}\\
{3no3}&{258}&{-494}&{\textit{-414}}&{\bf -394}&{-397}&{-361}&{24.81}&{-388}&{-359}&{25.92}&{-}\\
\hline
\cline{1-12}
\end{tabular}
\end{center}
\end{table*}
\subsection{Implementation}
We have implemented our algorithm in C++. Cartesian co-ordinates of the amino acids are used in representing protein structures, and only feasible structures are allowed. The representation and the search process ensure the satisfaction of the constraints. The performance of the local segment search mainly depends on the move generation and the heuristics calculation at each iteration. Moves are generated by the iterative procedure $generateMoves$. The heuristics are maintained using the invariants provided by Kangaroo \cite{NewtonPSM11}, which is a constraint-based local search (CBLS) system. Invariants are used in defining mathematical operators over the variables. Calculations due to simulation and execution are performed incrementally by Kangaroo.
\section{{Experimental Results}}
\label{secRes}
We ran experiments on a cluster machine. The cluster has a number of machines, each equipped with two 6-core CPUs (AMD Opteron @2.8GHz, 3MB L2/6M L3 Cache) and 64GB memory, running Rocks OS. We compared the performance of our algorithm with the tabu search \cite{dotu2011protein} and the memory based search \cite{mem2012}\footnote{Source code was provided by the authors}. Throughout this section, the tabu search and the memory-based search are denoted by LS-Tabu (or LS-T) and LS-Mem (or LS-M). For each of the protein sequences, we ran each algorithm 50 times with a 5-hour time cutoff. Our algorithm, denoted by LWS, was initialized by the best solutions found by LS-Mem in 20 minutes.
\paragraph{Benchmark Set - I.}
The first benchmark set is taken from Sebastian Will's PhD thesis \cite{will2005phd}. These are the $R$ and $f180$ sequences of length 200 and 180 respectively. The best and average energy levels achieved are reported in the upper part of Table~\ref{tableMain}. Parameters for LS-Tabu and LS-Mem were set as suggested by the authors. We ran our algorithm with $segmentSize$ and $maxStable$ initially set to 1 and 1000 respectively, and the multiplying $factor$ was set to 1.2. We could not run the large neighborhood search algorithm of \cite{dotu2011protein} on these benchmarks since the COMET program exited with `too much memory needed' on our system. However, the best energy levels from their paper are shown in the `LNS' column. Optimal lower bounds of the minimum energy values, generated by the CPSP tools \cite{mann2008cpsp}, are also reported under the column `$E_{l}$'. Note that these values are obtained by using exhaustive search methods and are only used to see how far our results are from them. The missing values indicate where no such bound was found, and the values marked with * mean that the algorithm did not converge.
\paragraph{Benchmark Set - II.}
The second set of benchmarks, derived from the famous CASP competition\footnote{http://predictioncenter.org/casp9/targetlist.cgi}, was originally used in \cite{mem2012}. Six proteins with length around $230\pm50$, randomly chosen from the target list, are converted to HP sequences depending on the nature of the amino acids. PDB ids and results for these six proteins are also reported in Table~\ref{tableMain} (lower part). The LNS column contains no data for these six proteins since they were not used in \cite{dotu2011protein}.
\paragraph{Tortilla Benchmarks.}
Tortilla benchmarks or ``Harvard'' benchmarks have been extensively used in the literature. All these proteins have length 48. We do not report the best or average energy levels for these sequences, since all three algorithms reach near-optimal results and the difference in energy level is very small. Instead, for each algorithm, we report in Table~\ref{tableTortilla} the success rates to reach the optimal structures. The time cutoff was 10 minutes for these small proteins.
\begin{table*}[!htb]
\begin{center}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{c|c|c|c|c|c||c|c|c|c|c|c}
\hline
\cline{1-12}
{}&{}&\multicolumn{4}{|c||}{\bf Success Rate (\%)}&{}&{}&\multicolumn{4}{c}{\bf Success Rate (\%)}\\
\cline{3-6}
\cline{9-12}
{Seq}&{$E_{l}$}&{LWS}&{LNS}&{LS-M}&{LS-T}&{Seq}&{$E_{l}$}&{LWS}&{LNS}&{LS-M}&{LS-T}\\
\hline
\cline{1-12}
{H1}&{-69}&{\bf 32}&{6}&{4}&{2}&{H6}&{-70}&{\bf 16}&{0}&{0}&{0}\\
{H2}&{-69}&{\bf 18}&{4}&{2}&{2}&{H7}&{-70}&{\bf 12}&{0}&{0}&{0}\\
{H3}&{-72}&{\bf 24}&{0}&{0}&{0}&{H8}&{-69}&{\bf 20}&{4}&{2}&{2}\\
{H4}&{-71}&{\bf 26}&{10}&{0}&{0}&{H9}&{-71}&{\bf 16}&{4}&{0}&{0}\\
{H5}&{-70}&{\bf 22}&{10}&{2}&{0}&{H10}&{-68}&{\bf 24}&{8}&{2}&{0}\\
\hline
\cline{1-12}
\end{tabular}
\caption{Success rates for Tortilla benchmarks; LS-Tabu and LS-Mem are denoted by LS-T and LS-M.
\label{tableTortilla}}
\end{center}
\end{table*}
\subsection{Analysis}
From the average energy levels shown in bold-faced fonts in Table~\ref{tableMain} and the success rates shown in Table~\ref{tableTortilla}, it is clearly evident that our algorithm performs significantly better than the state-of-the-art algorithms. We also report new lowest energy levels for all 12 proteins, shown in italic font in Table~\ref{tableMain}. The success rates to reach the optimal energy levels for the Tortilla benchmarks are also higher for our algorithm, as shown in Table~\ref{tableTortilla}.
\paragraph{Relative Improvement.}
We report the relative achievement of our approach, measured in terms of the difference from the optimal bound of the energy level, in the `R.I.\%' column of Table~\ref{tableMain}. This value is significant because it gets harder to find better conformations as the energy level of a protein sequence approaches the optimal. Similar measurements are also used in \cite{mem2012}. Relative improvement (R.I.) is defined as:
\begin{equation*}
R. I. = \frac{E_o - E_r} {E_{l} - E_r} \times 100\%
\label{eqRE}
\end{equation*}
where $E_o$ is the average energy level achieved by our approach, $E_r$ is the average energy level achieved by the other approach, and $E_{l}$ is the optimal lower bound of the energy level. The missing values indicate the absence of any lower bound for the corresponding protein sequence. For all the proteins, our method achieves significant improvement, which we confirmed by performing a \textit{t}-test at the 95\% confidence level.
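As a concrete example, for R1 against LS-Mem we have $E_o=-346$, $E_r=-326$ and $E_{l}=-384$, so that R.I. $=\frac{-346-(-326)}{-384-(-326)}\times100\% \approx 34.48\%$, the value reported in Table~\ref{tableMain}.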
\paragraph{Search Progress.}
We show the search progress of the three algorithms for the protein sequence R1 in Figure~\ref{figProgress}. Average energy levels achieved by each of the algorithms over 50 runs are shown. LS-Tabu and LS-Mem achieve almost the same levels of energy initially, but as the search progresses, they fail to overcome stagnation and do not improve after a certain level. However, LWS starts from a low energy level and keeps improving the solutions. It adjusts the $segmentSize$ dynamically, which results in more perturbation and produces better results.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{./prog.eps}
\end{center}
\caption{Search progress for protein sequence R1
\label{figProgress}}
\end{figure}
\section{{Conclusion}}
\label{secCon}
In this paper, we have presented a hybrid local search that exhaustively explores segments of a conformation and is guided by a mix of heuristic functions. Our algorithm improved on standard benchmark proteins and significantly outperformed state-of-the-art algorithms. We applied single large segments and multiple small segments, and dynamically adjusted the segment size with the stagnation period. We used several heuristics so that one heuristic can help escape the local minima of another. In the future, we wish to apply these techniques in other domains such as satisfiability and the traveling salesman problem.
\bibliographystyle{unsrt}
\section{Introduction}
\label{s:intro}
Deep convolutional approaches
have recently achieved
proficiency on
realistic semantic segmentation
datasets such as Vistas \cite{neuhold17iccv}
or Ade20k \cite{zhou17cvpr}.
This success has increased interest
in exciting real-world applications
such as autonomous driving \cite{zhang19pr}
or medical diagnostics \cite{Xia2020SynthesizeTC}.
However, visual proficiency of the
current state-of-the-art models is
still insufficient to accommodate
the demanding requirements of
these applications
\cite{kendall17nips,nalisnick19iclr}.
Early semantic segmentation approaches
involved small datasets and few classes.
Improved methodology and computing power
led to larger, more diverse datasets
with more complex taxonomies
\cite{everingham10ijcv,cordts16cvpr,neuhold17iccv}.
This development has provided valuable feedback
that led to the current state of research
where most of these datasets
are about to be solved
in the strongly supervised setup.
Despite the hard
selection and annotation work,
most existing datasets are still
an insufficient proxy for real-life operation,
even in a very restricted scenario
such as road driving.
For instance, none of the 20000 images
from the Vistas dataset \cite{neuhold17iccv}
include persons in non-standard poses,
crashed vehicles or rubble.
Additionally, real-life images
may also be degraded due to
hardware faults, inadequate acquisition,
or lens distortion \cite{zendel18eccv}.
This suggests that foreseeing every possible
situation may be an elusive goal and indicates
that algorithms should be able to
recognize image regions
foreign to the
training distribution \cite{kendall17nips}.
These considerations emphasize
the need to further improve
the next generation of datasets.
New datasets should contain
atypical images which are likely to fool
the current generation of models
\cite{zendel18eccv,hendrycks2019anomalyseg}.
Additionally, they should also endorse
open-set evaluation \cite{scheirer14}
where the models are required
to perform inference on arbitrary images.
An open-set model
is not supposed to predict
an exact visual class in outliers.
That would often be impossible
since the exact visual class
may not be present in the training taxonomy.
Instead, it should suffice that
outliers are recognized as such.
The desired test subsets should contain
various degrees of domain shift
with respect to the training distribution.
This should include diverse contexts
(e.g.\ adverse weather, exotic locations)
\cite{neuhold17iccv},
exceptional situations
(e.g.\ accidents, poor visibility)
\cite{zendel18eccv},
and outright outliers
(foreign domain objects and entire images)
\cite{zendel18eccv,blum19iccvw}.
Currently, there are only
two such benchmarks
in the dense prediction domain:
WildDash \cite{zendel18eccv} and
Fishyscapes \cite{blum19iccvw}.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{images/simple_model.jpg}
\caption{
A dense open-set recognition model has
to predict:
i) a dense outlier map, and
ii) a semantic map with C inlier classes.
The merged open-set semantic map (right)
contains outlier pixels (white)
on two objects
which are foreign to the training taxonomy:
the ego-vehicle and the forklift.
}
\label{fig:approach}
\end{figure}
This paper addresses dense open-set recognition
and outlier detection as illustrated in Fig.~\ref{fig:approach}.
Unlike previous \cite{kendall17nips}
and concurrent \cite{Xia2020SynthesizeTC} work,
we propose to include test-agnostic
noisy negatives in the training dataset.
We believe that this setup is adequate
due to the extremely large capacity of deep models,
which allows them to classify outliers into any class
without hurting empirical accuracy
in inliers \cite{zhang17iclr}.
We believe that our approach will represent
a strong baseline for some time.
It is very hard to bound
the output of a deep model in foreign samples
since they may have almost identical latent
representations to some inliers.
This holds true
even for the current state-of-the-art
generative models \cite{nalisnick19iclr}.
Our contribution is as follows.
We propose a novel approach
for dense outlier detection
based on discriminative training
with noisy negative images
from a very large and diverse
test-agnostic dataset.
We show that successful operation
in dense prediction context
requires random pasting of negative patches
to inlier training images.
Our approach
can share features with
a closed-set semantic segmentation model.
This greatly improves outlier
detection while only slightly
impairing semantic segmentation.
Evaluation on two rigorous benchmarks
and several other datasets indicates
that our approach outperforms
the state of the art
\cite{kendall17nips,hendrycks2019anomalyseg,blum19iccvw,Xia2020SynthesizeTC},
especially on large outliers.
In datasets with small
outliers (FS Lost and Found),
we achieve the best
results by complementing our
approach with the max-softmax baseline.
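As a rough sketch of the pasting step (our illustration; tensor
shapes and names are assumptions, not the exact training pipeline),
a negative patch is pasted at a random location while the dense
outlier labels are updated accordingly:
\begin{verbatim}
import torch

def paste_negative(img, ood_labels, patch,
                   outlier_id=1):
    # img: (3,H,W) inlier image
    # ood_labels: (H,W) inlier/outlier ground truth
    # patch: (3,h,w) crop from the negative dataset
    _, H, W = img.shape
    _, h, w = patch.shape
    y = torch.randint(0, H - h + 1, (1,)).item()
    x = torch.randint(0, W - w + 1, (1,)).item()
    img[:, y:y+h, x:x+w] = patch
    ood_labels[y:y+h, x:x+w] = outlier_id
    return img, ood_labels
\end{verbatim}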
Earlier accounts of this research appeared in
\cite{bevandic18arxiv,bevandic19gcpr}.
We extend our previous work
with an improved training procedure,
broader experimental evaluation, and better results.
Our consolidated experiments evaluate performance
on established dense open-set benchmarks
(WildDash 1 \cite{zendel18eccv},
Fishyscapes Static and Fishyscapes
Lost and Found \cite{blum19iccvw}),
the StreetHazard dataset \cite{hendrycks2019anomalyseg},
and the proposed WD-Pascal dataset
\cite{bevandic18arxiv,bevandic19gcpr}.
Our experiments show that
the proposed approach
is broadly applicable
without any dataset-specific tweaking.
All our experiments
use the same negative dataset
and involve the same hyper-parameters.
The resulting models produce
dense open-set prediction
with a single forward pass,
which makes them suitable
for real-time inference.
\section{Related work}
Open-set recognition combines
classification and outlier detection.
Some novelty detection approaches
can be viewed as open-set recognition
though this connection is seldom discussed.
We are especially concerned with
dense open-set recognition
and focus on approaches
that train on negative data.
\subsection{Open-set recognition}
Open-set recognition involves
C known classes during training
and (C+1) classes during inference.
The (C+1)st label signifies
that the sample does not belong
to the training distribution.
Outliers are usually recognized by
thresholding some kind of score.
Open-set classification can be formulated
on top of a classic
closed-set discriminative model
by estimating the outlier score
from the prediction itself.
Most recent work considers
the probability of the winning class,
also known as max-softmax (MSM)
\cite{hendrycks17iclr}.
Unfortunately, deep models usually
have highly confident outputs
regardless of the input
\cite{guo17icml}.
Different strategies
can make max-softmax more informative,
e.g.\ recalibration \cite{guo17icml},
preprocessing \cite{liang18iclr},
MC-Dropout \cite{gal16icml}
or ensembling
\cite{Bergmann_2020_CVPR}.
However, recalibration cannot improve
average precision (AP).
Preprocessing and MC-dropout offer
only slight improvements over the baseline.
MC-Dropout and ensembling require
multiple forward passes,
which may not be acceptable
for large images
and real-time inference.
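For reference, the MSM outlier score is obtained densely in a single
forward pass; a minimal PyTorch-style sketch (ours) follows:
\begin{verbatim}
import torch.nn.functional as F

def msm_outlier_score(logits):
    # logits: (N,C,H,W) dense classification scores
    probs = F.softmax(logits, dim=1)
    msm, _ = probs.max(dim=1)  # winning-class prob.
    return 1.0 - msm           # high value = outlier
\end{verbatim}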
Prediction uncertainty can also be assessed
with a jointly trained head
of the compound model.
The two heads operate on shared features
for efficiency and cross-task synergy
\cite{devries18arxiv, kendall17nips, zhang2020hybrid}.
Unfortunately, this can only recognize
aleatoric uncertainty \cite{kendall17nips}
which may arise due to inconsistent labels.
Instead, outlier detection is related
to epistemic uncertainty
which arises due to insufficient learning
\cite{kendall17nips,hullermeier21ml}.
Epistemic uncertainty has been assessed
under assumption that MC dropout
approximates Bayesian model sampling
\cite{smith18uai}. However, that
assumption may not be satisfied in practice.
Additionally, existing approaches
\cite{kendall17nips,smith18uai}
confound model uncertainty
with distributional uncertainty
\cite{malinin18nips}.
We are especially interested
in approaches which exploit
negative samples during training.
Most of these approaches complement
the standard discriminative loss
with a term which encourages
high entropy in negative samples,
such as KL-divergence towards a suitable prior
\cite{lee18iclr, hendrycks19iclr,malinin18nips}.
A negative dataset can also be exploited
to train a separate prediction head
which directly predicts the outlier probability
\cite{bevandic18arxiv}.
However, these approaches are sensitive
to the choice of the negative dataset.
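A common instantiation of the first idea (a sketch under our
assumptions, not a specific published implementation) augments the
inlier cross-entropy with a term that pushes predictions in negative
pixels towards the uniform distribution:
\begin{verbatim}
import torch.nn.functional as F

def open_set_loss(logits, targets, neg_mask,
                  beta=0.3):
    # logits: (N,C,H,W); targets: (N,H,W) inlier
    # labels (valid dummies in negative pixels);
    # neg_mask: (N,H,W) bool map of negative pixels
    ce = F.cross_entropy(logits, targets,
                         reduction='none')
    inlier = ce[~neg_mask].mean()
    logp = F.log_softmax(logits, dim=1)
    # KL to uniform = mean neg. log-prob + const.
    negative = -logp.mean(dim=1)[neg_mask].mean()
    return inlier + beta * negative
\end{verbatim}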
An alternative approach
trains on synthetic negatives
which are generated at the border
of the training distribution.
However, experiments suggest
that diverse negative datasets
lead to better outlier detection
than synthetic negative samples
\cite{lee18iclr,hendrycks19iclr}.
\subsection{Novelty detection}
Novelty detection is an umbrella term
which covers anomaly, rare-event,
outlier and OOD detection, and
one-class classification.
Most of this work addresses generative models
which attempt to model
the training distribution.
Anomalous examples should yield
low probabilities in this setup,
though this is
difficult to achieve in practice
\cite{nalisnick19iclr,GrathwohlWJD0S20}.
Generative adversarial networks
can be used to score
the difference between the input
and the corresponding reconstruction \cite{zenati18}
if the generator is formulated
as an auto-encoder where the
latent representation mapping
is trained simultaneously alongside
the GAN \cite{ZHANG2020PR}.
However, the obtained reconstructions
are usually imperfect
regardless of the type of input
\cite{Bergmann_2020_CVPR}.
Several works emphasize the contribution of
knowledge transfer \cite{Bergmann_2020_CVPR},
although fine-tuning gradually
diminishes pre-training benefits
due to
forgetting. This effect can
be somewhat attenuated
with a modified loss \cite{perera19}.
\subsection{Dense open-set recognition}
Dense open-set recognition is still an
under-researched field
despite important applications
in intelligent transportation \cite{zhang19pr}
and medical image analysis \cite{Xia2020SynthesizeTC}.
Some of the described novelty detection methods
are capable of dense inference
\cite{Bergmann_2020_CVPR};
however, they address simple datasets
and do not report pixel-level metrics.
Hence, it is unclear whether they could be
efficiently incorporated into
competitive semantic segmentation frameworks.
Many image-wide open-set approaches
can be adapted for dense prediction
straightforwardly
\cite{kendall17nips,bevandic19gcpr,blum19iccvw}
though they are unable
to achieve competitive performance
due to many false positive outlier detections.
This likely occurs
because dense prediction incurs
more aleatoric uncertainty
than image-wide prediction
due to being ill-posed
at semantic borders \cite{bevandic18arxiv}.
A concurrent approach \cite{blum19iccvw}
fits an ensemble of normalized flows
to latent features of the segmentation model.
They infer negative log likelihoods
in different layers and threshold with respect
to the most likely activation across all layers.
This approach achieves a fair accuracy
on the Fishyscapes benchmark,
however our submission outperforms it.
Preceding discussions suggest
that dense open-set recognition
is a challenging problem,
and that best results may not be attainable
by only looking at inliers.
Our work is related to two recent
image-wide outlier detection approaches
which leverage negative data.
Perera et al.~\cite{perera19}
learn features for one-class classification
by simultaneously optimizing
cross-entropy on ImageNet images
and feature compactness on the target images.
However, inlier compactness and template-matching
are not suitable for complex training ontologies.
Hendrycks et al.~\cite{hendrycks19iclr}
train a discriminative classifier
to output low confidence in negative images.
However, our experiments suggest tendency
towards false positives
due to aleatoric uncertainty at semantic borders.
Our work is also related to the
dense open-set recognition approach
which treats outlier detection
by extending the inlier ontology \cite{MSeg_2020_CVPR}.
The proposed composite dataset (MSeg)
collects almost 200\,000
densely annotated training images
by merging public datasets
such as Ade20k, IDD, COCO etc.
Currently, this is the only approach
that outperforms our submission
to the WildDash 1 benchmark.
However, the difference in performance
is only 1.4pp although we train on
smaller resolution (768 vs 1024) and
use less negative supervision during training
(bounding boxes from ImageNet-1k instead of
dense labels on COCO, Ade20k and SUN RGB-D).
Their approach does not appear
on the Fishyscapes leaderboard.
\section{Method}
\label{ss:model}
The main components of our approach
are the dense feature extractor
and the open-set recognition module
illustrated in Fig.\,\ref{fig_sim}.
The dense feature extractor
is a fully convolutional module
which transforms the input image
H$\times$W$\times$3 into
a shared abstract representation
H/4$\times$W/4$\times$D
where D is typically 256.
The dense open-set recognition module
incorporates recognition and outlier detection.
We base these two tasks on shared features
to promote fast inference
and cross-task synergy
\cite{devries18arxiv,perera19}.
Our method relies on the following two hypotheses:
i) training with diverse noisy negatives
can improve outlier detection
across various datasets, and
ii) shared features greatly improve
outlier detection without significant
deterioration of semantic segmentation.
\subsection{Dense feature extraction}
\label{ssec:featex}
Our feature extraction module consists of
a powerful downsampling path
responsible for semantics,
and a lean upsampling path
which restores the spatial detail.
The downsampling path starts with a
pre-trained recognition backbone.
In case of DenseNet-169 it consists of
four densely connected blocks (DB1-DB4) and
three transition layers (T1-T3). Lightweight
spatial pyramid pooling (SPP)
provides wide context information
\cite{kreso20tits, zhao17cvpr}.
The upsampling path consists of
three upsampling modules (U1-U3)
which blend low resolution features
from the previous upsampling stage
with high-resolution features
from the downsampling path.
The resulting encoder-decoder structure is asymmetric.
It has dozens of convolutional layers
in the downsampling path
and only three convolutional layers
along the upsampling path
\cite{kreso17cvrsuad}.
We speed-up and regularize the learning
with auxiliary cross-entropy losses.
These losses target soft
ground truth distribution
across the corresponding window
at full resolution \cite{kreso20tits}.
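The sketch below (a minimal illustration,
not the released code) shows one way
to build such soft targets
by average-pooling one-hot labels;
tensor shapes, names and the omitted
handling of ignore pixels are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def soft_aux_targets(labels, num_classes, window):
    # labels: (B, H, W) integer class map at full
    # resolution. Returns (B, C, H/window, W/window)
    # per-window class distributions that sum to one.
    one_hot = F.one_hot(labels, num_classes)
    one_hot = one_hot.permute(0, 3, 1, 2).float()
    # Average pooling turns each window of one-hot
    # labels into its empirical class distribution.
    return F.avg_pool2d(one_hot, kernel_size=window)

def aux_loss(logits, soft_targets):
    # Cross-entropy against the soft targets.
    log_p = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_p).sum(dim=1).mean()
\end{verbatim}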
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{images/full_model.jpg}
\caption{
The proposed dense open-set recognition model
consists of a dense feature extractor
and a dense open-set recognition module.
The dense feature extractor contains densely
connected blocks (DB),
transition blocks (T), spatial
pyramid pooling layer (SPP) and
lightweight upsampling blocks (U)
\cite{kreso20tits}.
We use auxiliary cross-entropy
losses to speed-up and regularize
training.
The open-set recognition module
produces semantic segmentation into C+1
classes, where the C+1st
class is the outlier class.
}
\label{fig_sim}
\end{figure}
\subsection{Two-head recognition module}
\label{ss:two-head-module}
We consider dense open-set recognition
with shared features.
We assume that the training data $\mathcal{D}$
contains both inlier
and noisy negative pixels.
We denote images with $\mathbf{x}$,
dense semantic predictions with $\mathbf{Y}$
and the corresponding
C-way ground truth labels with $\mathbf{y}$.
Similarly, dense outlier predictions
and the corresponding ground truth labels
are $\mathbf{O}$ and $\mathbf{o}$,
respectively. We use $i$ and $j$
to denote the location of pixels.
Most considerations become applicable
to image classification
by removing summation over all pixels $(i,j)$
and regarding $Y_{ij}$ and $O_{ij}$
as image-wide predictions.
We propose a two-head open set
recognition module which simultaneously emits
dense closed-set posterior over classes
$\mathrm{P}(Y_{ij}|\mathbf{x})$, as well as
the probability $\mathrm{P}(O_{ij}|\mathbf{x})$
that the pixel at coordinates $(i,j)$ is an outlier.
Standard cross-entropy losses
for the two predictions are as follows:
\begin{myalign}
\label{eq:two_head}
\mathcal{L}_\mathrm{cls} &=
-
\sum_{\mathbf{x},\mathbf{y}, \mathbf{o}\in
\mathcal{D}}
\sum_{ij}
[\![o_{ij}=0]\!] \cdot \log \mathrm{P}(Y_{ij}=y_{ij}|\mathbf{x})
\; ,
\nonumber \\
\mathcal{L}_\mathrm{od} &=
-
\sum_{\mathbf{x,o}\in
\mathcal{D}}
\sum_{ij}
\log \mathrm{P}(O_{ij}=o_{ij}|\mathbf{x})
\; .
\end{myalign}
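Equation \ref{eq:two_head} translates
into a masked cross-entropy
and a binary cross-entropy
over shared features.
The following PyTorch-style sketch
is a minimal illustration;
the tensor layout, the ignore index
and mean normalization are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def two_head_loss(seg_logits, od_logits, y, o):
    # seg_logits: (B, C, H, W) closed-set scores.
    # od_logits:  (B, H, W) outlier scores (pre-sigmoid).
    # y: (B, H, W) class labels, o: (B, H, W) in {0, 1}.
    # L_cls only uses inlier pixels (o == 0); outlier
    # pixels are mapped to the ignore index.
    masked_y = torch.where(o == 0, y,
                           torch.full_like(y, -1))
    loss_cls = F.cross_entropy(seg_logits, masked_y,
                               ignore_index=-1)
    # L_od supervises every labeled pixel.
    loss_od = F.binary_cross_entropy_with_logits(
        od_logits, o.float())
    return loss_cls, loss_od
\end{verbatim}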
Figure
\ref{fig:two-head-module} shows
that Equation \ref{eq:two_head} can be
implemented as a multi-task model with shared
features where the first
head predicts semantic segmentation,
while the second detects outliers.
Outlier detection overrides closed-set recognition
when the outlier probability is over a threshold.
Thus, the classification head
is unaffected by negative data,
which may preserve
the baseline recognition accuracy
even when training on test-agnostic negatives
which are bound to be noisy.
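At test time, the override reduces
to a masked argmax.
The sketch below is a minimal illustration
and assumes a sigmoid outlier head
and the 0.5 threshold
used in our experiments.
\begin{verbatim}
import torch

def open_set_predict(seg_logits, od_logits,
                     threshold=0.5, outlier_id=-1):
    # Closed-set argmax, overridden wherever the
    # outlier head fires above the threshold.
    classes = seg_logits.argmax(dim=1)
    p_out = torch.sigmoid(od_logits)
    return torch.where(p_out > threshold,
                       torch.full_like(classes, outlier_id),
                       classes)
\end{verbatim}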
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\linewidth]{images/two_head_model.jpg}
\caption{The architecture of the proposed
two head open-set recognition module.
The outlier detection head is a binary
classifier which we train using the
outlier ground truth.
The segmentation head is a C-way classifier
which requires both the segmentation
and the outlier ground truth. The
outlier ground truth is required for segmentation
training in order to be able to
exclude outlier pixels from
$\mathcal{L}_\mathrm{cls}$.
}
\label{fig:two-head-module}
\end{figure}
\subsection{Exploiting noisy negatives}
\label{ssec:noisy_negatives}
We propose to train our model
by sampling negative data
from an extremely diverse test-agnostic dataset
such as ImageNet-1k.
We observe that such a dataset
will necessarily
overlap with inliers.
For example, ImageNet-1k
contains many classes
from road-driving ontologies
used in Cityscapes \cite{cordts16cvpr}
and Vistas \cite{neuhold17iccv}
(e.g.\ cab, streetcar).
Additionally, most stuff
classes from Cityscapes
(e.g.\ building, vegetation)
are a regular occurrence
in ImageNet-1k backgrounds.
We refer to this issue as label noise.
We promote resistance to label noise
by training on mixed batches
with approximately equal share
of inlier and negative images.
Hence, inlier pixels in negative images
are vastly outnumbered by true
inliers for each particular class.
We perform many inlier epochs
during one negative epoch,
since our negative training dataset
is much larger than our inlier datasets.
Our batch formation procedure
prevents occasional inliers from negative images
from significantly affecting the training
and favours stable development
of batchnorm statistics.
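The following sketch outlines
this batch formation policy;
the strict 50/50 interleaving follows the text,
while sampling the smaller inlier set
with replacement is a simplifying assumption.
\begin{verbatim}
import random

def mixed_batches(inliers, negatives, batch_size):
    # Approximately equal share of inlier and negative
    # images per batch; one pass over the much larger
    # negative set spans many passes over the inliers.
    neg_iter = iter(negatives)
    while True:
        batch = []
        for i in range(batch_size):
            if i % 2 == 0:
                batch.append(random.choice(inliers))
            else:
                try:
                    batch.append(next(neg_iter))
                except StopIteration:
                    return
        yield batch
\end{verbatim}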
Unlike \cite{blum19iccvw},
we refrain from training on
pixels labeled with the ignore class
since we wish to use the same
negative dataset in all
experiments.
Our early experiments involved
training on whole inlier images
and whole negative images.
The resulting models would work very well
on test images with all inliers or all outliers.
However, the performance was poor in images
with mixed content \cite{bevandic18arxiv}.
It appears that the outlier detection head
must be explicitly trained for mixed inputs
to correctly generalize in such cases.
We address this issue by pasting negative images
into inlier images during training.
We first resize the negative image
to a small percentage of the inlier resolution,
and then paste it at random in the inlier image
as illustrated in Figure \ref{fig:training_input}.
Subsequently, our models became capable
of detecting outliers in inlier context
\cite{bevandic19gcpr}.
We obtain the best results
when the size of pasted patches
is randomly chosen from a wide interval.
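The pasting procedure can be sketched
as follows (a minimal illustration;
OpenCV resizing, the area-based
scale factor and the concrete
ignore value are assumptions,
and the patch is assumed
to fit inside the image).
\begin{verbatim}
import random
import numpy as np
import cv2  # assumed available for resizing

def paste_negative(img, seg_gt, neg, share=0.05):
    # share: fraction of the inlier area covered by
    # the pasted patch; RSP draws it from [0.1%, 10%].
    H, W = img.shape[:2]
    h, w = neg.shape[:2]
    s = np.sqrt(share * H * W / (h * w))
    patch = cv2.resize(neg, (max(1, int(w * s)),
                             max(1, int(h * s))))
    ph, pw = patch.shape[:2]
    top = random.randint(0, H - ph)
    left = random.randint(0, W - pw)
    out = img.copy()
    out[top:top + ph, left:left + pw] = patch
    od_gt = np.zeros((H, W), np.uint8)
    od_gt[top:top + ph, left:left + pw] = 1  # outlier
    seg = seg_gt.copy()
    seg[top:top + ph, left:left + pw] = 255  # ignore
    return out, seg, od_gt
\end{verbatim}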
\begin{figure}[hbt]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{images/sets.jpg}
\caption{ \\ }
\label{fig:neg}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{images/paste_into_image.jpg}
\caption{ \\ }
\label{fig:neg_gt}
\end{subfigure}
\\
\bigskip
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{images/create_od_gt.jpg}
\caption{ }
\label{fig:pasted}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{images/negative_gt.jpg}
\caption{ }
\label{fig:pasted_gt}
\end{subfigure}
\caption{We train on images from the target dataset and noisy
negatives from ImageNet-1k (a). We paste a randomly rescaled
noisy negative bounding box into each positive training image
(b). The pasted pixels are labeled as outliers (white) in the
outlier detection ground truth (c). Negative training images are
completely ignored by the semantic segmentation loss (black) and
labeled as outliers only within the bounding box (d).}
\label{fig:training_input}
\end{figure}
\section{Experimental setup}
Our open-set recognition models
aim at achieving robustness
with respect to various forms
of distributional uncertainty
\cite{malinin18nips}.
Consequently all our experiments
evaluate on datasets
which are in some way
different than the training ones.
\subsection{Training datasets}
We train our models on inliers
from Cityscapes train \cite{cordts16cvpr},
Vistas train \cite{neuhold17iccv},
and StreetHazards train \cite{hendrycks2019anomalyseg}.
We train all our models on the same
noisy negative training dataset
which we refer to as ImageNet-1k-bb
\cite{bevandic19gcpr}.
We collect ImageNet-1k-bb by picking
the first bounding box
from the 544\,546 ImageNet-1k images
with bounding box annotations.
We train on standalone negative images
and mixed-content images obtained by pasting
a resized negative image into an inlier crop.
We resize each negative image
to the desired share $s_n$
of the inlier resolution,
where the default is $s_n$=5\%.
Models with the RSP suffix
(randomly scaled patches)
pick a random $s_n\in[.1\%,10\%]$
for each negative training image.
\subsection{Validation dataset}
Several previous approaches propose
to evaluate dense open-set recognition
on splits of existing real datasets
that contain some visual classes
which are absent from the training split.
Thus, the BDD-Anomaly dataset
\cite{hendrycks2019anomalyseg}
collects all BDD images
without trains and motorcycles
into the training split,
and places all other BDD images
into the test split.
Cityscapes-IDD \cite{angus19arxiv}
proposes training on Cityscapes,
and evaluating on cars (inliers)
and rickshaws (outliers)
from the IDD dataset.
However, this approach
is not easily carried out in practice
since it is hard to avoid similarities
between inlier and outlier classes.
For instance, trains and motorcycles
are similar to buses and bicycles,
respectively, which are inliers in BDD-Anomaly.
Similarly, rickshaws (Cityscapes-IDD outliers)
are similar to motorcycles and cars
(Cityscapes-IDD inliers).
We attempt to avoid this pitfall
by making sure that anomalies
come from a different domain.
We craft WD-Pascal \cite{bevandic18arxiv}
by randomly pasting Pascal animals
into WildDash 1 val images.
We select animals which take up
at least 1\% of the WildDash resolution.
Conversely, we craft WD-LSUN
by complementing WildDash 1 val
with random subsets
of LSUN \cite{yu15arxiv} images,
so that the number of inliers (WildDash 1)
and outliers (LSUN) is approximately equal.
We reduce the variance of all our
validation and ablation experiments
by averaging 50 assays across WildDash 1.
\subsection{Evaluation datasets}
We evaluate our models on
several test datasets
for dense open-set recognition.
Our experiments report
the outlier detection performance
(AP, FPR$_{95}$ \cite{hendrycks17iclr})
and semantic segmentation accuracy (mIoU).
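Both detection metrics can be computed
from flattened pixel scores.
The sketch below uses scikit-learn
and assumes that the true-positive rate
actually reaches 95\%.
\begin{verbatim}
import numpy as np
from sklearn.metrics import (average_precision_score,
                             roc_curve)

def ap_and_fpr95(scores, labels):
    # scores: 1-D outlier scores, labels: 1-D in {0, 1}
    # with 1 marking outlier pixels.
    ap = average_precision_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    idx = min(np.searchsorted(tpr, 0.95),
              len(fpr) - 1)
    return ap, fpr[idx]
\end{verbatim}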
The WildDash 1 benchmark \cite{zendel18eccv}
collects difficult road driving scenarios
and negative images from other domains,
but does not include images of mixed content.
The Fishyscapes benchmark \cite{blum19iccvw}
includes Cityscapes images
with pasted Pascal VOC objects.
It also includes a subset of the
Lost and Found dataset \cite{pinggera16iros}
where the outliers correspond
to small obstacles on the road.
The StreetHazards dataset
\cite{hendrycks2019anomalyseg}
contains fully synthetic road-driving images
while out-of-domain objects
correspond to anomalies.
\subsection{Implementation details}
Our models are based on DenseNet-169
with ladder-style upsampling \cite{kreso20tits}
as described in \ref{ssec:featex}
due to the best overall validation
performance \cite{bevandic19gcpr}.
We normalize all images
with ImageNet mean and variance.
We denote the image size
as its shorter dimension.
We resize WD-Pascal and
WD-LSUN images to 512 pixels.
In all other experiments we resize
validation and test images to 768 pixels.
Some experiments train with scale jittering
so that 30\% images are resized to 512 pixels,
while the remaining 70\% images
are randomly resized
between 512 and 1536 pixels.
We denote such models with
the JS suffix (jittered scale).
We form training batches
with random 512$\times$512 crops
which we jitter with horizontal flipping.
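The sketch below summarizes
this input pipeline;
the tensor layout and the coupling
of image and label transforms
are illustrative assumptions.
\begin{verbatim}
import random
import torch.nn.functional as F

def jittered_example(img, labels, crop=512):
    # img: (3, H, W) float tensor, labels: (H, W) long.
    # 30% of images keep the 512 base size, the rest
    # get a random shorter side in [512, 1536].
    size = 512 if random.random() < 0.3 \
        else random.randint(512, 1536)
    h, w = img.shape[-2:]
    s = size / min(h, w)
    nh, nw = round(h * s), round(w * s)
    img = F.interpolate(img[None], (nh, nw),
                        mode='bilinear',
                        align_corners=False)[0]
    labels = F.interpolate(labels[None, None].float(),
                           (nh, nw),
                           mode='nearest')[0, 0].long()
    # Random 512x512 crop shared by image and labels.
    t = random.randint(0, nh - crop)
    l = random.randint(0, nw - crop)
    img = img[:, t:t + crop, l:l + crop]
    labels = labels[t:t + crop, l:l + crop]
    if random.random() < 0.5:  # horizontal flip
        img, labels = img.flip(-1), labels.flip(-1)
    return img, labels
\end{verbatim}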
We do not use multi-scale evaluation
in order to report performance
which could be delivered in real-time.
We use the standard Adam optimizer
and divide the learning rate
of pre-trained parameters by 4.
We validate the loss weights of all
open-set recognition modules on a
small subset of WD-Pascal.
We train our two-head models with the compound loss
$\mathcal{L}_{th} = 0.6 \mathcal{L}_{cls} + 0.6 \cdot 0.2\, \mathcal{L}_{od} + 0.4 \mathcal{L}_{aux}$.
We validate all hyper-parameters
on WD-Pascal and WD-LSUN
\cite{bevandic19gcpr}.
We train our models throughout 75 Vistas epochs,
which corresponds to 5 epochs of ImageNet-1k-bb.
This was increased to 20 epochs
for our benchmark submissions.
We detect outliers
by thresholding inlier probability
at $\mathrm{P}(O_{ij}=0|\mathbf{x})$=0.5.
\section{Results}
We validate mIoU accuracy on WildDash 1 val
and outlier detection AP on
WD-Pascal, WD-LSUN and
Fishyscapes Lost and Found.
We evaluate our models on
the WildDash 1 benchmark,
the Fishyscapes benchmark,
and on the test subset of the
StreetHazards dataset.
\subsection{Validation of dense
outlier detection approaches}
Table \ref{table:OOD_detection} validates
our method against several other dense
open-set recognition approaches
on WD-Pascal and WD-LSUN.
All models have been trained on positive images
from the Vistas dataset.
Section 1 of the table presents
models which are trained without negatives.
We show the performance of max-softmax \cite{hendrycks17iclr},
max-softmax after ODIN \cite{liang18iclr},
epistemic uncertainty after 50 forward
passes with MC-Dropout \cite{smith18uai},
and densely trained confidence \cite{devries18arxiv}
(cf. Figure \ref{fig:conf-module}).
The remaining models use noisy
negatives from ImageNet-1k-bb during training.
Section 2 of the table
evaluates a single-task outlier detection
model.
The model performs better than
the models from Section 1, but much worse than
models from Section 4 which share features
between the segmentation and the outlier
detection tasks.
This confirms our hypothesis that semantic segmentation loss
forces the model to learn features
which generalize well for outlier detection.
Section 3 evaluates the two-head module
approach from Figure \ref{fig:two-head-module}
when it is trained on whole inlier and whole
negative images. This model is able to
detect outlier images but it performs
badly on images with mixed content.
This shows that training with
pasted negatives is a prerequisite
for detecting outlier objects in
front of an inlier background.
\begin{table}[htb]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{lrrr}
Model &
\multicolumn{1}{r}{AP WD-LSUN} &
\multicolumn{1}{r}{AP WD-Pascal} &
\multicolumn{1}{r}{mIoU WD}\\
\toprule
\multicolumn{1}{l}{
C$\times$ multi-class} &$55.6 \pm 0.8$ & $5.0 \pm 0.5$ & 50.6
\\
\multicolumn{1}{l}{
C$\times$ multi-class, ODIN} & $56.0 \pm 0.8$ & $6.0 \pm 0.5$ & \textbf{51.4}
\\
\multicolumn{1}{l}{
C$\times$ multi-class, MC} & $\textbf{64.1} \pm \textbf{1.0} $ & $\textbf{9.8} \pm \textbf{1.2}$ & 48.4
\\
\multicolumn{1}{l}{
confidence head} & $54.4 \pm 0.8$ & $ 3.4 \pm 0.4 $ & 46.4
\\
\midrule
\multicolumn{1}{l}{single outlier detection head} & $ 99.3 \pm 0.0 $ & $ 15.0 \pm 3.8 $ & N/A
\\
\midrule
\multicolumn{1}{l}{two heads, no pasting} & $98.9 \pm 0.0$
& $3.5 \pm 0.6$ & 46.3\\
\midrule
\multicolumn{1}{l}{two heads (=LDN\_BIN)} & $99.3 \pm 0.0$ & $34.9 \pm 6.8$ & \textbf{47.9}
\\
\multicolumn{1}{l}{
C$\times$ multi-class (=LDN\_OE)} & $ \textbf{99.5} \pm \textbf{0.0}$ & $33.8 \pm 5.1$ & 47.8
\\
\multicolumn{1}{l}{
C+1$\times$ multi-class}&
$98.9 \pm 0.1$ & $25.6 \pm 5.5$ & 46.2
\\
\multicolumn{1}{l}{
C$\times$ multi-label} & $98.8 \pm 0.1$ & $\textbf{49.1}\pm \textbf{5.6}$ & 43.4
\\
\bottomrule
\end{tabular}
}
\caption{Validation of dense
outlier detection approaches.
WD denotes WildDash 1 val, MC denotes models
trained and evaluated using Monte-Carlo dropout.}
\label{table:OOD_detection}
\end{table}
Section 4 of the table
compares different open-set recognition
modules which train on pasted noisy negatives from
ImageNet-1k-bb as explained
in \ref{ssec:noisy_negatives}.
The two-head module architecture
is illustrated in Figure \ref{fig:two-head-module}, while
the other three variants are illustrated
in Figure \ref{fig:dense_osr_modules}.
The C-way multi-class approach
trains the model to emit low max-softmax in
outlier samples \cite{liang18iclr,lee18iclr,hendrycks19iclr}
(Figure \ref{fig:oe_module}).
The C+1-way multi-class
model performs prediction over C+1 classes, where
the C+1st class is the outlier class (Figure \ref{fig:c+1-module}).
Finally, the C-way multi-label approach trains C independent
heads with sigmoidal activation
(Figure \ref{fig:sigmoid_module}).
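In implementation terms, the four variants
differ only in the output layer
and the training criterion.
A condensed sketch (illustration only,
assuming D=256 shared features
and C=19 inlier classes):
\begin{verbatim}
import torch.nn as nn

D, C = 256, 19  # shared feature width, inlier classes

# Two-head: C-way softmax plus binary outlier head.
seg_head = nn.Conv2d(D, C, 1)
od_head = nn.Conv2d(D, 1, 1)

# C-way multi-class: one softmax head; negatives are
# pushed towards a uniform distribution and outliers
# are scored with max-softmax.
oe_head = nn.Conv2d(D, C, 1)

# C+1-way multi-class: negatives act as an extra class.
cp1_head = nn.Conv2d(D, C + 1, 1)

# C-way multi-label: C one-vs-all sigmoid classifiers
# trained with per-class binary cross-entropy.
ml_head = nn.Conv2d(D, C, 1)
\end{verbatim}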
Comparison with the top section clearly
confirms our hypothesis
that training with diverse noisy negatives
can substantially improve outlier detection.
We also note a slight reduction
of the segmentation score
in column 4 (mIoU WD).
This reduction is the lowest for
the C-way multi-class model
and the two-head model.
A closer inspection of models trained
with noisy negatives shows that the C+1-way
multi-class model performs the worst.
The multi-label model performs
well on outlier detection
but quite poorly on inlier segmentation.
The two-head model and the C-way multi-class
model perform quite similarly, though further
qualitative analysis shows that they
differ in the type of errors they produce.
The two-head model is more sensitive
to domain shifts between the training and
the validation sets while the C-way multi-class
approach generates false positive
outliers due to low max-softmax score
at semantic borders.
\begin{figure}[htb]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{images/confidence_head_model.jpg}
\caption{ }
\label{fig:conf-module}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{images/oe_model_smaller.jpg}
\caption{ }
\label{fig:oe_module}
\end{subfigure}
\\
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{images/C_1_model_smaller.jpg}
\caption{ }
\label{fig:c+1-module}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\columnwidth]{images/sigmoid_model_smaller.jpg}
\caption{ }
\label{fig:sigmoid_module}
\end{subfigure}
\caption{Four alternative open-set recognition modules.
Two-head approach with trained confidence
\cite{devries18arxiv, kendall17nips} is
similar to our approach in Figure 3,
but it does not train on negative images (a).
C-way multi-class approach \cite{hendrycks19iclr, lee18iclr}
learns uniform prediction in negative samples (b).
C+1-way multi-class approach uses the
negative data as a regular semantic class (c).
C-way multi-label approach learns C one-versus-all
classifiers \cite{franchi20arxiv} (d).}
\label{fig:dense_osr_modules}
\end{figure}
\subsection{Dense open-set recognition on WildDash 1 benchmark}
Table \ref{table:wd_bench_results} presents
open-set recognition results on
the WildDash 1 benchmark. Our models
are listed in the last three rows of the table.
The LDN\_OE model has
a single C-way multi-class head
and uses max-softmax for outlier
detection.
The LDN\_BIN and LDN\_BIN$_{\textrm{JS}}$
models have separate heads
for semantic segmentation and
outlier detection. The JS suffix
indicates training with
scale jittering.
All three models have been trained on
Vistas train, Cityscapes train,
and WildDash 1 val (inliers) and
ImageNet-1k-bb (noisy negatives).
\setlength{\tabcolsep}{4pt}
\begin{table}[htb]
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{l rrrrrrrr}
\multirow{2}{*}{Model}
& \multirow{2}{*}{Meta Avg} &
& \multicolumn{4}{c}{Classic}
&& \multirow{2}{*}{Negative}
\\
\cmidrule{4-7}
& mIoU cla
&
& mIoU cla
& iIoU cla
& mIoU cat
& iIoU cat
&
& mIoU cla\\
\toprule
\multicolumn{1}{l}{DRN\_MPC \cite{Yu2017}} & 28.3 && 29.1 & 13.9 & 49.2 & 29.2 && 15.9 \\
\multicolumn{1}{l}{DeepLabv3+\_CS \cite{chen2018encoder}} & 30.6 & & 34.2 & 24.6 & 49.0 & 38.6 && 15.7\\
\multicolumn{1}{l}{MapillaryAI\_ROB \cite{bulo2017place}} &38.9 && 41.3 & 38.0 & 60.5 & 57.6 && 25.0\\
\multicolumn{1}{l}{AHiSS\_ROB \cite{meletis2018training}} & 39.0 && 41.0 & 32.2 & 53.9 & 39.3 && 43.6\\
\multicolumn{1}{l}{MSeg \cite{MSeg_2020_CVPR}} & 43.0 && 42.2 & 31.0 & 59.5 & 51.9 && 51.8\\
\multicolumn{1}{l}{MSeg\_1080 \cite{MSeg_2020_CVPR}} & \textbf{48.3} && \textbf{49.8} & \textbf{43.1} & 63.3 & 56.0 && \textbf{65.0}\\
\midrule
\multicolumn{1}{l}{LDN\_BIN (ours)} & 41.8 && 43.8 & 37.3 &58.6 & 53.3 && 54.3\\
\multicolumn{1}{l}{LDN\_OE (ours)} & 42.7 && 43.3 & 31.9 & 60.7 & 50.3 && 52.8\\
\midrule
\multicolumn{1}{l}{LDN\_BIN$_{\textrm{JS}}$(ours)} & 46.9 && 48.8 & 42.8 & \textbf{63.6} & \textbf{59.3} && 47.7\\
\bottomrule
\end{tabular}
}
\caption{Open-set segmentation
results on the WildDash 1 benchmark}
\label{table:wd_bench_results}
\end{center}
\end{table}
LDN\_BIN and LDN\_OE differ only
in open-set recognition modules, with the rest of
the training setup being identical.
The two-head model performs
better in most classic evaluation categories
as well as in the negative category,
however it has a lower meta average score.
This is caused by a larger performance
drop in most hazard categories
(more details can be found on
the WildDash 1 web site).
LDN\_BIN$_{\textrm{JS}}$ has the same
architecture as LDN\_BIN but it is trained
using scale jittering
to be able to perform inference
on larger resolutions
(768$\times$1451).
This setup
improves the segmentation accuracy
across all categories
and reduces sensitivity to hazards
while slightly deteriorating performance
in negative images.
We did not retrain
LDN\_OE using scale jittering since
this model produces
false positives on semantic borders
regardless of the inference resolution.
The best overall performance is achieved
by the MSeg\_1080 \cite{MSeg_2020_CVPR}.
However, that model uses much more
negative supervision:
densely labeled Ade20k and COCO (theirs) vs
bounding boxes from ImageNet-1k (ours).
Additionally, they train and evaluate
on a larger resolution (1024 vs 768) and use
a model with almost 4 times more parameters
(65.8M vs 17.4M).
MSeg\_1080 is somewhat less sensitive to some hazards
(most significantly underexposure) which
may be due to a significantly
larger inlier training dataset.
Aside from Vistas and Cityscapes, they also use
BDD (8000 images) and
IDD (7974 images).
On the other hand, MSeg does not use the 70
images from WildDash 1 val.
Our model is competitive
and actually outperforms MSeg when evaluated on
the same resolution (MSeg vs LDN\_BIN).
Figure \ref{fig:bench_mseg_ldn} presents
a qualitative comparison between MSeg and
LDN\_BIN$_{\textrm{JS}}$ as shown
on the WildDash 1 benchmark.
The columns show: i) original image,
ii) MSeg output and
iii) LDN\_BIN$_{\textrm{JS}}$ output.
Images show that MSeg
performs better on small objects
and negative images
which is likely due to larger resolution
and more supervision.
Note however that the MSeg model does not recognize
black rectangles (row 2) as outliers.
Detailed qualitative results for
LDN\_BIN and LDN\_OE
can be found in \cite{bevandic19gcpr}.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/123mseg_vs_ldn.jpg}
\\[0.2em]
\includegraphics[width=\columnwidth]{images/141mseg_vs_ldn.jpg}
\\[0.2em]
\includegraphics[width=\columnwidth]{images/152mseg_vs_ldn.jpg}
\caption{Qualitative comparison between
MSeg (middle column) and LDN\_BIN$_{\textrm{JS}}$ (right column)
on WildDash 1 test images (left column).
MSeg performs better on some negative
images (row 3), and small objects (row 1),
but it appears unable to locate outlier patches
in front of inlier background (row 2).
}
\label{fig:bench_mseg_ldn}
\end{figure}
\subsection{Open-set validation on Lost and Found dataset}
\label{ss:fishyval}
Table \ref{table:val_fishy_outlier} shows
evaluation on the validation subset of Fishyscapes
Lost and Found.
All models were trained on inliers from
Vistas train, Cityscapes train, and WildDash 1 val.
LDN$_{\textrm{JS}}$ denotes the max-softmax baseline
trained with scale jittering and without outliers.
All other models were also trained
on noisy negatives from ImageNet-1k-bb.
LDN\_OE, LDN\_BIN and
LDN\_BIN$_{\textrm{JS}}$
are exactly the models
we submitted to the WildDash 1 benchmark.
LDN\_BIN$_{\textrm{JS, RSP}}$
has the same architecture
as LDN\_BIN$_{\textrm{JS}}$,
however it varies the size of
pasted negatives during
training in order to improve
detection of smaller outliers.
The last row combines
our OOD head with max-softmax using
multiplication.
Later we show that this formulation succeeds
since max-softmax complements our method
when the outliers are very small.
\setlength{\tabcolsep}{4pt}
\begin{table}[htb]
\begin{center}
\begin{tabular}{llrrrr}
Model & criterion &AP & AUROC & FPR95 & mIoU\\
\toprule
LDN$_{\textrm{JS}}$ & MSM & 7.8 & 92.1 & 26.6 & 76.4\\
\midrule
LDN\_OE & MSM & 9.5 & 88.8 & 44.2 & 72.2 \\
LDN\_BIN & OP & 13.2 & 88.0 & 71.9 & 75.1\\
LDN\_BIN$_\textrm{{JS}}$ & OP & 25.4 & 89.8 & 90.0 & \textbf{76.5}\\
[0.5em]
LDN\_BIN$_{\textrm{JS, RSP}}$ & OP & 36.9 & \textbf{96.1} & \textbf{20.0} & 76.3\\
& OP $\times$ MSM & \textbf{45.7} & 95.6 & 24.0 &\\
\bottomrule
\end{tabular}
\caption{Comparison of open-set segmentation approaches
on Fishyscapes Lost and Found (AP, AUROC, FPR95(\%))
and Vistas (mIoU) validation subsets. MSM is short for max-softmax,
while OP stands for outlier probability estimated
by the outlier detection head.
}
\label{table:val_fishy_outlier}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Both anomaly detection
and semantic segmentation benefit from
scale jittering during training.
This is different from WildDash 1
where scale jittering decreased performance
on negative images (cf. Table \ref{table:wd_bench_results}).
Figure \ref{fig:fishy_val_per_image} explores the influence
of outlier size on model performance
by plotting the relation
between the outlier area and
the detection performance.
The figure shows AP and FPR95
with respect to the area of the outlier object
for LDN$_{\textrm{JS}}$
(which uses max-softmax for outlier detection),
and LDN\_BIN$_{\textrm{JS, RSP}}$.
We see that the accuracy of both models
depends on the size of the outlier.
Max-softmax acts as an edge detector and
therefore performs better
on smaller objects. It however performs
poorly on larger objects because it is unable
to detect the interior of an object as an outlier.
\begin{figure}[htb]
\centering
\begin{tabular}{cccc}
\multicolumn{2}{c}{LDN$_{\textrm{JS}}$}&\multicolumn{2}{c}{LDN\_BIN$_{\textrm{JS, RSP}}$}\\
\includegraphics[width=0.25\columnwidth]{images/aps_baseline.jpg}
&\includegraphics[width=0.25\columnwidth]{images/fprs_baseline.jpg}
&\includegraphics[width=0.25\columnwidth]{images/aps_js_rsp.jpg}
&\includegraphics[width=0.25\columnwidth]{images/fprs_js_rsp.jpg}
\end{tabular}\\
\caption{Influence of the outlier size on the model performance
on Fishyscapes Lost and Found val.
The two leftmost graphs show AP and FPR95
of the max-softmax baseline (LDN$_{\textrm{JS}}$)
and the two rightmost graphs show AP and FPR95
for our model trained with noisy negatives
(LDN\_BIN$_{\textrm{JS, RSP}}$).
Higher AP and lower FPR scores indicate
that our model prevails on
large outliers.
Max-softmax on the other hand achieves better
results on small outliers because it detects
object edges well.}
\label{fig:fishy_val_per_image}
\end{figure}
Figure \ref{fig:fishy_val_per_image} implies that we
can improve the accuracy of our
two-head models on small objects by
multiplying the outlier probability
with max-softmax.
\begin{myalign}
\label{eq:comb}
\mathrm{P}(\mathrm{outlier}_{ij}|\mathbf{x}) & =
\mathrm{P}(O_{ij}=1|\mathbf{x})\cdot\left(1-\max_{c}\,\mathrm{P}(Y_{ij}=c|\mathbf{x})\right)
\; .
\end{myalign}
This equation suggests that outliers should both
appear strange to the outlier detection
head \emph{and} produce small max-softmax scores.
This formulation improves upon max-softmax by dampening
outlier probabilities on semantic borders, since
our trained outlier detection head perceives them
as inliers. This formulation improves upon our
trained outlier detection head on small outliers, since
that is where max-softmax achieves fair performance.
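At inference time, the criterion
of Equation \ref{eq:comb} amounts
to a few tensor operations
(sketch with assumed names):
\begin{verbatim}
import torch

def combined_score(seg_logits, od_logits):
    # OP x MSM: a pixel must look anomalous to the
    # trained outlier head AND receive a low
    # max-softmax from the segmentation head.
    op = torch.sigmoid(od_logits)
    msm = seg_logits.softmax(dim=1).max(dim=1).values
    return op * (1.0 - msm)
\end{verbatim}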
Note that the relatively poor performance of our
model on small outliers does not come as
a great surprise. Our predictions are 4
times subsampled with respect to the input
resolution to reduce computational
complexity and memory footprint during
training. This is a common
trade-off \cite{russakovsky15ijcv}
which can be avoided, but
at a great computational cost
\cite{DBLP:zhu18cvpr}.
Figure \ref{fig:fishy_val_results} shows
qualitative performance of our model.
Column 1 presents the original image.
Column 2 contains the ground truth, with
inlier, outlier and ignore pixels denoted in
gray, white and black respectively.
Finally, column 3 shows the output of our
LDN\_BIN$_{\textrm{JS, RSP}}$ model using
a conjunction of the outlier probability and the
max-softmax score (OP$\times$MSM).
Our model performs
well on larger and closer objects (rows 1 and 3),
while struggling with distant and small objects (rows 1 and 2).
Finally, we note that some of
the ignore pixels (e.g. ego-vehicle, noise
on image borders) are also
classified as anomalies.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/04_Maurener_Weg_8_000002_000150_lost_and_found.jpg}
\\[0.2em]
\includegraphics[width=\columnwidth]{images/04_Maurener_Weg_8_000007_000140_lost_and_found.jpg}
\\[0.2em]
\includegraphics[width=\columnwidth]{images/04_Maurener_Weg_8_000007_000170_lost_and_found.jpg}
\caption{Outlier detection with LDN\_BIN$_{\textrm{JS, RSP}}$
and OP$\times$MSM
on Fishyscapes Lost and Found val.
Columns present i) the original image,
ii) the ground truth labels, and
iii) the outlier probability.
Our model works better on close objects
than on distant ones (row 1).
The outlier detection confidence grows
as the camera draws nearer (rows 2, 3).
Very small outliers are not detected (rows 1, 2).
}
\label{fig:fishy_val_results}
\end{figure}
\subsection{Dense open-set recognition on the Fishyscapes benchmark}
Table \ref{table:fishy_bench_results} shows current
results on the Fishyscapes benchmark \cite{blum19iccvw}.
The benchmark provides segmentation accuracy
on Cityscapes val, as well as
outlier detection accuracy on
FS Lost and Found and FS Static.
FS Lost and Found comprises 300 images taken from
the Lost and Found dataset. These images are
relabelled to distinguish
between inlier, outlier and void classes, and
filtered to exclude road hazards which correspond to
inlier classes (e.g. bicycles).
FS Static was created by pasting
PASCAL VOC objects into Cityscapes images.
Note that LDN\_BIN$_{\textrm{JS}}$
is almost exactly the same model that was
presented in Table \ref{table:wd_bench_results}.
Due to the requirement of the benchmark,
the model had to be reimplemented in TensorFlow 1.
We did not retrain the model, but reused the
parameters learnt in PyTorch.
As in our validation experiments (see \ref{ss:fishyval}),
LDN\_BIN$_{\textrm{JS, RSP}}$
improves the detection of smaller outliers.
This is reflected by an improved FPR95 score
with respect to LDN\_BIN$_{\textrm{JS}}$.
We outperform other models by a large
margin on FS Static.
We also achieve the best FPR95 and a close
second-best outlier detection AP on Lost and Found
without significant drop in
segmentation performance that
occurs in the best submission.
\setlength{\tabcolsep}{4pt}
\begin{table*}[htb]
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lllccrrrrrrr}
\multicolumn{2}{l}{\multirow{2}{*}{Model}} & \multirow{2}{*}{Criterion} & \multirow{2}{*}{Train} & \multirow{2}{*}{OoD} & \multirow{2}{*}{City} && \multicolumn{2}{c}{Lost and Found} && \multicolumn{2}{c}{FS Static}\\
\cmidrule{8-9}\cmidrule{11-12}
& & & & & mIoU && AP & FPR95 && AP & FPR95 \\
\toprule
\multicolumn{2}{l}{Dirichlet DeepLab \cite{blum19iccvw}} & prior entropy & \ding{51} & \ding{51} & 70.5 && \textbf{34.3} & 47.4 && 31.3 & 84.6 \\
\multicolumn{2}{l}{Bayesian DeepLab \cite{blum19iccvw}}& mutual information & \ding{51} & \ding{55} & 73.8 && 9.8 & 38.5 && 48.7 & 15.5 \\
OoD training \cite{blum19iccvw} & &maximize entropy & \ding{51} & \ding{51} & 79.0 && 1.74 & 30.6 && 27.5 & 23.6 \\
[0.5em]
\multicolumn{2}{l}{Softmax \cite{blum19iccvw}} & entropy & \ding{55} & \ding{55} & 80.0 && 2.9 & 44.8 && 15.4 & 39.8 \\
& &max-softmax (MSM) & \ding{55} & \ding{55} & && 1.8 & 44.9 && 12.9 & 39.8 \\
[0.5em]
\multicolumn{2}{l}{Learned embedding density \cite{blum19iccvw}} & logistic regression & \ding{55} & \ding{51} & 80.0 && 4.7 & 24.4 && 57.2 & 13.4 \\
& & minimum nll & \ding{55} & \ding{55} & && 4.3 & 47.2 && 62.1 & 17.4 \\
& &single-layer nll & \ding{55} & \ding{55} & && 3.0 & 32.9 && 40.9 & 21.3 \\
[0.5em]
\multicolumn{2}{l}{Image resynthesis} & resynthesis difference & \ding{55} & \ding{55} & \textbf{81.4} && 5.7 & 48.1 && 29.6 & 27.1 \\
\midrule
\multirow{3}{3cm}{Discriminative outlier detection head (ours)}& \multicolumn{1}{l}{LDN\_BIN$_{\textrm{JS}}$} & outlier probability (OP) & \ding{51} & \ding{51} & 77.7 && 15.7 & 76.9 && 82.9 & 5.1 \\
[0.5em]
& \multicolumn{1}{l}{LDN\_BIN$_{\textrm{JS, RSP}}$} & outlier probability (OP) & \ding{51} & \ding{51} & 77.3 && 21.2 & 36.9 && \textbf{86.2} &\textbf{2.4} \\
& \multicolumn{1}{l}{} & OP $\times$ MSM & \ding{51} & \ding{51} && & 30.9 & \textbf{22.2} && 84.0 & 10.3 \\
\bottomrule
\end{tabular}
}
\caption{Open-set segmentation evaluation
on the Fishyscapes benchmark.}
\label{table:fishy_bench_results}
\end{center}
\end{table*}
\setlength{\tabcolsep}{1.4pt}
\subsection{Open-set segmentation on StreetHazards}
Table \ref{table:caos_results} presents
open-set segmentation accuracy
on StreetHazards.
We evaluate the same models as in previous experiments
(LDN$_{\textrm{JS}}$, LDN\_BIN$_{\textrm{JS}}$
and LDN\_BIN$_{\textrm{JS, RSP}}$)
and compare them with the max-softmax baseline.
We ignore outlier pixels when measuring
segmentation accuracy.
Unlike
\cite{hendrycks2019anomalyseg},
we do not use ignore pixels during evaluation
(same as \cite{blum19iccvw}).
Furthermore, we do not report
the mean of per-image scores.
In our view, such practice may yield
an over-optimistic estimate of the overall
anomaly detection metrics, since recognition
errors cannot propagate across images.
We therefore determine global scores
on 10 times downsampled predictions.
We evaluated the performance by measuring
the mean of per-image scores and obtained
similar results to the ones we report.
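The difference between the two protocols
amounts to where the pixel scores
are pooled, as the following
minimal sketch shows:
\begin{verbatim}
import numpy as np
from sklearn.metrics import average_precision_score

def global_ap(scores, labels):
    # Pool pixels from all images before scoring, so
    # that errors can propagate across images.
    s = np.concatenate([x.ravel() for x in scores])
    y = np.concatenate([x.ravel() for x in labels])
    return average_precision_score(y, s)

def per_image_ap(scores, labels):
    # Mean of per-image APs; may be over-optimistic.
    return float(np.mean(
        [average_precision_score(y.ravel(), s.ravel())
         for s, y in zip(scores, labels)]))
\end{verbatim}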
\setlength{\tabcolsep}{4pt}
\begin{table}[htb]
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{llrrrr}
Model & criterion & AP & AUROC & FPR95 & test mIoU
\\
\toprule
PSPNet \cite{hendrycks2019anomalyseg} & CRF+msm & 6.5 & 88.1 & 29.9 & N/A \\
PSPNet \cite{franchi2019tradi} & TRADI & 7.2 & 89.2 & \textbf{25.3} & N/A\\
SPADE \cite{Xia2020SynthesizeTC} & SynthCP & 9.3 & 88.5 & 28.4 & N/A\\
\midrule
LDN$_{\textrm{JS}}$ & MSM & 7.28 & 87.63 & 38.13 & 65.04\\
LDN\_BIN$_{\textrm{JS}}$ & OP & 18.56 & 87.00 & 79.08 & 66.32\\
[0.5em]
LDN\_BIN$_{\textrm{JS, RSP}}$ & OP & \textbf{19.74} & 88.86 & 56.19 & \textbf{66.94} \\
& OP$\times$MSM & 18.82 & \textbf{89.72} & 30.86 & \\
\bottomrule
\end{tabular}
}
\caption{Performance evaluation
on StreetHazards}
\label{table:caos_results}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Figure \ref{fig:sh_np_results}
shows some qualitative results.
The columns represent: i) the original image,
ii) the ground truth and iii) our output.
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\columnwidth]{images/504_np.jpg}
\\[0.2em]
\includegraphics[width=0.95\columnwidth]{images/993_np.jpg}
\\[0.2em]
\includegraphics[width=0.95\columnwidth]{images/1218_np.jpg}
\caption{Open-set segmentation
on StreetHazards.
The columns show the input
image, ground truth segmentation
and the output of LDN\_BIN$_{\textrm{JS, RSP}}$.
Outliers are white while ignore pixels are black.
Our model performs better on
large outliers (rows 1, 2)
than on small ones (row 3).
}
\label{fig:sh_np_results}
\end{figure}
\section{Conclusion}
We have presented a novel approach
for dense outlier detection
and open-set recognition.
The main idea is to discriminate
an application-specific inlier dataset
(e.g.\ Vistas, Cityscapes),
from a diverse general-purpose dataset
(e.g.\ ImageNet-1k).
Pixels from the latter dataset
represent noisy test-agnostic negative samples.
We train on mixed batches
with approximately equal share
of inliers and noisy negatives.
This promotes robustness
to occasional inliers in negative images
and favours stable development
of batchnorm statistics.
We encourage correct recognition
of spatial borders
between outlier and inlier pixels
by pasting negative patches
at random locations in inlier images.
Consequently, the resulting models
succeed to generalize
in test images with mixed content.
We have shown that feature sharing
greatly improves dense outlier detection,
while only slightly deteriorating
semantic segmentation.
The resulting multi-task architecture
is able to perform dense open-set recognition
with a single forward pass.
This is the first and currently the only method
which competes at both dense
open-set recognition benchmarks, Fishyscapes
and WildDash 1.
Currently, our model is at the top
on Fishyscapes Static leaderboard,
and a close runner-up on WildDash 1
while training with less supervision
than the top rank algorithm
\cite{MSeg_2020_CVPR}.
The same model also achieves
the runner-up AP and competitive FPR$_{95}$
on Fishyscapes Lost and Found.
We achieve state-of-the-art AP accuracy on the
StreetHazards dataset despite a
strong domain shift between our negative
dataset (ImageNet-1k-bb) and the test
dataset.
Our method outperformed
the max-softmax baseline in all experiments.
The advantage is greatest when the outliers are large,
such as in Fishyscapes Static and WildDash 1.
A conjunction of our method and max-softmax
becomes advantageous on Fishyscapes Lost and Found.
This suggests that our method and max-softmax
target independent traits of outlier pixels.
Most reported experiments
feature the same model,
hyper parameters,
training procedure,
and the negative dataset:
only the inliers are different.
The reported results
confirm our hypotheses
i) that noisy negatives can improve
dense outlier detection
and open-set recognition,
and ii) that the shared features
greatly improve outlier detection
without significant deterioration
of semantic segmentation. The resulting
open-set models
perform comparably with respect
to their closed-set counterparts.
Suitable directions
for future work include
improving our models on small
outliers,
as well as incorporating joint training
with generative models.
\section*{Acknowledgment}
This work has been supported
by the Croatian Science Foundation under
the grant ADEPT and the European Regional
Development Fund
under the grant
DATACROSS.
\bibliographystyle{elsarticle-num-names}
|
1,108,101,564,125 | arxiv | \section{Introduction}
The capacity of continuous-time Gaussian channels and the corresponding
capacity-achieving water-filling power allocation strategy over frequency
are well-known \cite{Gallager68}, and provide much insight and performance
targets for practical communication system design. These results implicitly
assume sampling above the Nyquist rate at the receiver end. However,
channels that are not bandlimited have an infinite Nyquist rate and
hence cannot be sampled at that rate. Moreover, hardware and power
limitations often preclude sampling at the Nyquist rate associated
with the channel bandwidth, especially for wideband communication
systems. This gives rise to several natural questions at the intersection
of sampling theory and information theory, which we will explore in
this paper: (1) how much information, in the Shannon sense, can be
conveyed through undersampled analog channels; (2) under a sub-Nyquist
sampling-rate constraint, which sampling structures should be chosen
in order to maximize information rate.
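As a concrete point of reference for the classical result, the
following sketch (illustration only) computes the water-filling level
and the resulting capacity for an assumed SNR profile; the
discretization and the channel are illustrative and not taken from
any result in this paper.
\begin{verbatim}
import numpy as np

# Assumed example: snr(f) = |H(f)|^2 / S_eta(f).
f = np.linspace(-2.0, 2.0, 4001)
df = f[1] - f[0]
snr = np.exp(-f ** 2)
P = 1.0  # average power budget

# Bisection on the water level nu so that the
# allocated power [nu - 1/snr]^+ integrates to P.
lo, hi = 0.0, 1e6
for _ in range(100):
    nu = 0.5 * (lo + hi)
    power = np.sum(np.maximum(nu - 1.0 / snr, 0.0)) * df
    lo, hi = (nu, hi) if power < P else (lo, nu)

# C = (1/2) int log^+(nu * snr(f)) df, in nats.
C = 0.5 * np.sum(np.log(np.maximum(nu * snr, 1.0))) * df
print(nu, C)
\end{verbatim}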
\begin{comment}
There are two types of fundamental capacity metrics that are of interest:
(1) the capacity for a given sampling mechanism, which incorporates
the sampling mechanism into the channel model; (2) the capacity for
a given sampling rate when optimizing over a class of sampling mechanisms
at that rate. The second metric requires the sampling mechanism to
be optimized with respect to capacity, and the optimal sampling for
the most general class of sampling mechanisms is the focus of \cite{ChenEldarGoldsmith2012}.
In this paper, we study several sampling mechanisms of increasing
complexity, and investigate the interplay between capacity and sampling.
In particular, for each class of sampling strategies, we determine
the capacity as a function of the sampling rate, and identify the
optimal sub-Nyquist sampling schemes within each class. As we show,
the optimal strategy will always be chosen to suppress aliasing.
\end{comment}
\begin{comment}
Although receiver analysis and design in modern communication systems
is based on digital sequences (obtained by sampling the received signals),
the information content of these signals can be preserved if noise
outside the channel bandwidth is filtered out and the filtered output
is sampled above twice the signal's bandwidth over positive frequencies
(termed the Nyquist rate). The majority of capacity analysis of analog
channels thus assumes super-Nyquist sampling (e.g. \cite{Med2000})
without accounting for hardware limitations, which may preclude sampling
at this rate, especially for wideband communications. For instance,
analog-to-digital conversion (ADC) technology scales up slower than
the rapid increase in the bandwidth of communication systems. The
associated processing rate required for the digital signal processors
(DSP) becomes increasingly prohibitive as well as the storage requirements.
Recently, sub-Nyquist sampling methods for structured analog signals
have received growing attention, e.g. sampling of multiband signals
\cite{MisEld2009,MisEld2010Theory2Practice} and signals with a finite
rate of innovation \cite{VetMarBlu2002,GedTurEld2010}. These works
conduct investigation on sub-Nyquist techniques from a sampling theoretic
perspective, namely, how to sample the signal using minimum sampling
rate that allows perfect reconstruction of the whole class of analog
signals in the \textit{noiseless} setting. In the noisy case, although
perfect reconstruction of the whole class of analog signals through
noise-contaminated samples is no longer possible, there is hope to
convey finite-rate information through reduced-rate samples. Consequently,
a new theoretical framework at the intersection of information theory
and sampling theory is needed to characterize the data rate that can
be conveyed through an analog channel in the presence of noise.
Fortunately, the Nyquist rate is not always needed for a large class
of signals beyond bandlimited ones. Perfect reconstruction of original
signals is possible in some cases by exploiting signal structures,
e.g. sparsity. For instance, recent advancement in compressed sensing
\cite{CandRomTao06,CanTaoDecode05,Don2006} enables recovery/approximation
of sparse signals from highly incomplete measurements in a tractable
way. On the other hand, certain structures of communication channels,
e.g. spectral holes, can also be exploited in designing transmit signals
to accommodate for reduced-rate sampling and processing.
In this paper, we explore how channel capacity is affected by reduced-rate
sampling, namely, how much information, in the Shannon sense, can
be conveyed through a sampled analog channel which is sampled at a
rate below the Nyquist rate. An important aspect is to generalize
notions of channel capacity to structured channels that are undersampled.
We investigate how to optimize sampling to maximize capacity for several
classes of sampling methods, which illuminates a connection between
channel capacity and MMSE under multiple forms of sub-Nyquist sampling.
Bridging sampling theory and information theory, our work attempts
to characterize the tradeoff between fundamental data rate limits
and sampling rate constraints.
\end{comment}
\subsection{Related Work}
The derivation of the capacity of linear time-invariant (LTI) channels
was pioneered by Shannon \cite{Sha48}. Making use of the asymptotic
spectral properties of Toeplitz operators \cite{GreSze1984}, this
capacity result established the optimality of a water-filling power
allocation based on signal-to-noise ratio (SNR) across the frequency
domain \cite{Gallager68}. Similar results for discrete-time Gaussian
channels have also been derived using Fourier analysis \cite{HirtMassey1988}.
On the other hand, the Shannon-Nyquist sampling theorem, which dictates
that channel capacity is preserved when the received signal is sampled
at or above the Nyquist rate, has frequently been used to transform
analog channels into their discrete counterparts (e.g.\cite{Bello1963,ForUng1998}).
For instance, this paradigm of discretization was employed by Medard
to bound the maximum mutual information in time-varying channels \cite{Med2000}.
However, all of these works focus on analog channel capacity sampled
at or above the Nyquist rate, and do not account for the effect upon
capacity of reduced-rate sampling.
The Nyquist rate is the sampling rate required for perfect reconstruction
of bandlimited analog signals or, more generally, the class of signals
lying in shift-invariant subspaces. Various sampling methods at this
sampling rate for bandlimited functions have been proposed. One example
is recurrent non-uniform sampling proposed by Yen \cite{Yen1956},
which samples the signal in such a way that all sample points are
divided into blocks where each block contains $N$ points and has
a recurrent period. Another example is generalized multi-branch sampling
first analyzed by Papoulis \cite{Papoulis1977}, in which the input
is sampled through $M$ linear systems. For perfect recovery, these
methods require sampling at an aggregate rate above the Nyquist rate.
In practice, however, the Nyquist rate may be excessive for perfect
reconstruction of signals that possess certain structures. For example,
consider multiband signals, whose spectral content resides continuously
within several subbands over a wide spectrum, as might occur in a
cognitive radio system. If the spectral support is known \textit{a
priori}, then the sampling rate requirement for perfect recovery is
the sum of the subband bandwidths, termed the \textit{Landau rate}
\cite{Landau1967}. One type of sampling mechanism that can reconstruct
multiband signals sampled at the Landau rate is a filter bank followed
by sampling, studied in \cite{LinVai1998,UnsZer1998}. The basic sampling
paradigm of these works is to apply a bank of prefilters to the input,
each followed by a uniform sampler.
When the channel or signal structure is unknown, sub-Nyquist sampling
approaches have been recently developed to exploit the structure of
various classes of input signals, such as multiband signals \cite{MisEld2010Theory2Practice}.
In particular, sampling with modulation and filter banks is very effective
for signal reconstruction, where the key step is to scramble spectral
contents from different subbands through the modulation operation.
Examples include the modulated
\textit{et al.} \cite{MisEld2010Theory2Practice,MisEldDouSho2011}.
In fact, modulation and filter-bank sampling represents a very general
class of realizable nonuniform sampling techniques applied in practice.
Most of the above sampling theoretic work aims at finding optimal
sampling methods that admit perfect recovery of a class of analog
signals from \textit{noiseless} samples. There has also been work
on minimum reconstruction error from \textit{noisy} samples based
on certain statistical measures (e.g. mean squared error (MSE)). Another
line of work pioneered by Berger \emph{et al.} \cite{BerTuf1967,BergerThesis,ChaDon1971,Eri1973}
investigated joint optimization of the transmitted pulse shape and
receiver prefiltering in pulse amplitude modulation over a sub-sampled
analog channel. In this work the optimal receiver prefilter that minimizes
the MSE between the original signal and the reconstructed signal is
shown to prevent aliasing. However, this work does not consider optimal
sampling techniques based on capacity as a metric. The optimal filters
derived in \cite{BerTuf1967,BergerThesis} are used to determine
an SNR metric which in turn is used to approximate sampled channel
capacity based on the formula for capacity of bandlimited AWGN channels.
However, this approximation does not correspond to the precise channel
capacity we derive herein, nor is the capacity of more general undersampled
analog channels considered.
The tradeoff between capacity and hardware complexity has been studied
in another line of work on sampling precision \cite{Sha1994,KochLap2010}.
These works demonstrate that, due to quantization, oversampling can
be beneficial in increasing achievable data rates. The focus of these
works is on the effect of oversampling upon capacity loss due to quantization
error, rather than the effect of quantization-free subsampling upon
channel capacity.
\subsection{Contribution}
In this paper, we explore sampled Gaussian channels with the following
three classes of sampling mechanisms: (1) a filter followed by sampling:
the analog channel output is prefiltered by an LTI filter followed
by an ideal uniform sampler (see Fig. \ref{fig:SingleFilter}); (2)
filter banks followed by sampling: the analog channel output is passed
through a bank of LTI filters, each followed by an ideal uniform sampler
(see Fig. \ref{fig:SingleAntenna}); (3) modulation and filter banks
followed by sampling: the channel output is passed through $M$ branches,
where each branch is prefiltered by an LTI filter, modulated by different
modulation sequences, passed through another LTI filter and then sampled
uniformly. Our main contributions are summarized as follows.
\begin{itemize}
\item \textbf{Filtering followed by sampling.} We derive the sampled channel
capacity in the presence of both white and colored noise. Due to aliasing,
the sampled channel can be represented as a multiple-input single
output (MISO) Gaussian channel in the spectral domain, while the optimal
input effectively performs maximum ratio combining. The optimal prefilter
is derived and shown to extract out the frequency with the highest
SNR while suppressing signals from all other frequencies and hence
preventing aliasing. This prefilter also minimizes the MSE between
the original signal and the reconstructed signal, illuminating a connection
between capacity and MMSE estimation.
\item \textbf{Filter banks followed by sampling.} A closed-form expression
for sampled channel capacity is derived, along with analysis that
relates it to a multiple-input multiple-output (MIMO) Gaussian channel.
We also derive optimal filter banks that maximize capacity. The $M$
filters select the $M$ frequencies with highest SNRs and zero out
signals from all other frequencies. This alias-suppressing strategy
is also shown to minimize the MSE between the original and reconstructed
signals. This mechanism often achieves larger sampled channel capacity
than a single filter followed by sampling if the channel is non-monotonic,
and it achieves the analog capacity of multiband channels at the Landau
rate if the number of branches is appropriately chosen.
\item \textbf{Modulation and filter banks followed by sampling.} For modulation
sequences that are periodic with period $T_{q}$, we derive the sampled
channel capacity and show its connection to a general MIMO Gaussian
channel in the frequency domain. For sampling following a single branch
of modulation and filtering, we provide an algorithm to identify the
optimal modulation sequence for piece-wise flat channels when $T_{q}$
is an integer multiple of the sampling period. We also show that the
optimal single-branch mechanism is equivalent to an optimal filter
bank with each branch sampled at a period $T_{q}$.
\end{itemize}
One interesting fact we discover for all these techniques is the non-monotonicity
of capacity with sampling rate, which indicates that at certain sampling
rates, channel degrees of freedom are lost. Thus, more sophisticated
sampling techniques are needed to maximize achievable data rates at
sub-Nyquist sampling rates in order to preserve all channel degrees
of freedom.
\begin{table}
\caption{\label{tab:Summary-of-Notation}Summary of Notation and Parameters}
\centering{}%
\begin{tabular}{>{\raggedright}p{0.7in}>{\raggedright}p{2.55in}}
$\mathcal{L}_{1}$ & set of measurable functions $f$ such that $\int\left|f\right|\mathrm{d}\mu<\infty$\tabularnewline
$\mathbb{S}_{+}$ & set of positive semidefinite matrices\tabularnewline
$h(t)$,$H(f)$ & impulse response, and frequency response of the analog channel\tabularnewline
$s_{i}(t)$, $S_{i}(f)$ & impulse response, and frequency response of the $i$th post-modulation
filter\tabularnewline
$p_{i}(t)$, $P_{i}(f)$ & impulse response, and frequency response of the $i$th pre-modulation
filter\tabularnewline
$\mathcal{S}_{\eta}(f)$, $\mathcal{S}_{x}(f)$ & power spectral density of the noise $\eta(t)$ and the stationary
input signal $x(t)$\tabularnewline
$M$ & number of prefilters\tabularnewline
$f_{s}$, $T_{s}$ & aggregate sampling rate, and the corresponding sampling interval ($T_{s}=1/f_{s}$)\tabularnewline
$q_{i}(t)$ & modulating sequence in the $i$th channel\tabularnewline
$T_{q}$ & period of the modulating sequence $q_{i}(t)$ \tabularnewline
$\left\Vert \cdot\right\Vert _{\text{F}}$, $\left\Vert \cdot\right\Vert _{2}$ & Frobenius norm, $\ell_{2}$ norm\tabularnewline
$\left[x\right]^{+}$, $\log^{+}x$ & $\max\left\{ x,0\right\} $, $\max\left\{ \log x,0\right\} $\tabularnewline
\end{tabular}
\end{table}
\subsection{Organization}
The remainder of this paper is organized as follows. In Section \ref{sec:Problem-Formulation},
we describe the problem formulation of sampled analog channels. The
capacity results for three classes of sampling strategies are presented
in Sections \ref{sec:General-Uniform-Sampling}--\ref{sec:Multi-channel-Pre-modulated-Pre-filtered}.
In each section, we analyze and interpret the main theorems based
on Fourier analysis and MIMO channel capacity, and identify sampling
structures that maximize capacity. The connection between the capacity-maximizing
samplers and the MMSE samplers is provided in Section \ref{sec:ConnectionCapacityMMSE}.
Proofs of the main theorems are provided in the appendices, and the
notation is summarized in Table \ref{tab:Summary-of-Notation}.
\section{Preliminaries: Capacity of undersampled channels\label{sec:Problem-Formulation}}
\subsection{Capacity Definition}
We consider the continuous-time additive Gaussian channel (see \cite[Chapter 8]{Gallager68}),
where the channel is modeled as an LTI filter with impulse response
$h(t)$ and frequency response $H(f)=\int_{-\infty}^{\infty}h(t)\exp(-j2\pi ft)\text{d}t$.
The transmit signal $x(t)$ is time-constrained to the interval $(0,T]$.
The analog channel output is given as
\begin{equation}
r(t)=h(t)*x(t)+\eta(t),\label{eq:ChannelModel}
\end{equation}
and is observed over%
\footnote{We impose the assumption that both the transmit signal and the observed
signal are constrained to finite time intervals to allow for a rigorous
definition of channel capacity. In particular, as per Gallager's analysis
\cite[Chapter 8]{Gallager68}, we first calculate the capacity for
finite time intervals and then take the limit of the interval to infinity. %
} $\left(0,T\right]$, where $\eta(t)$ is stationary zero-mean Gaussian
noise. We assume throughout the paper \textit{perfect channel state
information}, i.e., perfect knowledge of $h(t)$, at both the transmitter
and the receiver. The analog channel
capacity is defined as \cite[Section 8.1]{Gallager68}
\[
C=\lim_{T\rightarrow\infty}\frac{1}{T}\sup I\left(\left\{ x(t)\right\} _{t=0}^{T};\left\{ r(t)\right\} _{t=0}^{T}\right),
\]
where the supremum is over all input distributions subject to an average
power constraint $\mathbb{E}(\frac{1}{T}\int_{0}^{T}\left|x(\tau)\right|^{2}\mathrm{d}\tau)\leq P$.
Since any given analog channel can be converted to a countable number
of independent parallel discrete channels by a Karhunen-Lo\`{e}ve decomposition,
the capacity metric quantifies the maximum mutual information between
the input and output of these discrete channels. If we denote $[x]^{+}=\max\{x,0\}$
and $\log^{+}x=\max\left\{ 0,\log x\right\} $, then the analog channel
capacity is given as follows.
\begin{theorem}\label{thmGallagerChannelCapacity}\cite[Theorem 8.5.1]{Gallager68}
Consider an analog channel with power constraint $P$ and noise power
spectral density $\mathcal{S}_{\eta}(f)$. Assume that $\left|H(f)\right|^{2}/\mathcal{S}_{\eta}(f)$
is bounded and integrable, and that either $\int_{-\infty}^{\infty}\mathcal{S}_{\eta}(f)\mathrm{d}f<\infty$
or that $\mathcal{S}_{\eta}(f)$ is white. Then the analog channel
capacity is given by
\begin{align}
C & =\frac{1}{2}{\displaystyle \int}_{-\infty}^{\infty}\log^{+}\left(\nu\frac{\left|H\left(f\right)\right|^{2}}{\mathcal{S}_{\eta}(f)}\right)\mathrm{d}f,\label{eq:GallagerCapacity}
\end{align}
where $\nu$ satisfies
\begin{equation}
{\displaystyle \int}_{-\infty}^{\infty}\left[\nu-\frac{\mathcal{S}_{\eta}(f)}{\left|H\left(f\right)\right|^{2}}\right]^{+}\mathrm{d}f=P.\label{eq:WaterFillingConstraint}
\end{equation}
\end{theorem}
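To make the water-filling solution concrete, the following Python sketch
solves (\ref{eq:WaterFillingConstraint}) for the water level $\nu$ by
bisection and then evaluates (\ref{eq:GallagerCapacity}); the flat channel,
the noise level, and the frequency grid are illustrative assumptions, not
part of the theorem.
\begin{verbatim}
import numpy as np

# Illustrative channel/noise pair: flat |H(f)|^2 on [-0.5, 0.5],
# white noise of PSD 0.1, power constraint P = 5.
f = np.linspace(-0.5, 0.5, 2001)
df = f[1] - f[0]
snr = np.where(np.abs(f) <= 0.5, 1.0, 0.0) / 0.1   # |H(f)|^2 / S_eta(f)
P = 5.0

def power_used(nu):
    # left-hand side of the water-filling constraint
    inv = np.divide(1.0, snr, out=np.full_like(snr, np.inf), where=snr > 0)
    return np.sum(np.maximum(nu - inv, 0.0)) * df

lo, hi = 0.0, 1.0
while power_used(hi) < P:          # bracket the water level
    hi *= 2.0
for _ in range(100):               # bisect on nu
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if power_used(mid) < P else (lo, mid)
nu = 0.5 * (lo + hi)
C = 0.5 * np.sum(np.log(np.maximum(nu * snr, 1.0))) * df
print(f"water level {nu:.3f}, capacity {C:.3f} nats/s")
\end{verbatim}
The same bisection reappears below for the sampled-channel expressions,
with the folded SNR in place of $\left|H(f)\right|^{2}/\mathcal{S}_{\eta}(f)$.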
For a channel whose bandwidth lies in $[-B,B]$, if we remove
the noise outside the channel bandwidth via prefiltering and sample
the output at a rate $f_{s}\geq2B$, then we can perfectly recover all
information conveyed within the channel bandwidth, which allows (\ref{eq:GallagerCapacity})
to be achieved without sampling loss. For this reason, we will use
the terminology \textbf{\textit{Nyquist-rate channel capacity}} for
the analog channel capacity (\ref{eq:GallagerCapacity}), which is
commensurate with sampling at or above the Nyquist rate of the received
signal after optimized prefiltering.
Under sub-Nyquist sampling, the capacity depends on the sampling mechanism
and its sampling rate. Specifically, the channel output $r(t)$ is
now passed through the receiver's analog front end, which may include
a filter, a bank of $M$ filters, or a bank of preprocessors consisting
of filters and modulation modules, yielding a collection of analog
outputs $\left\{ y_{i}(t):\text{ }1\leq i\leq M\right\} $. We assume
that the analog outputs are observed over the time interval $\left(0,T\right]$
and then passed through ideal uniform samplers, yielding a set of
digital sequences $\left\{ y_{i}[n]:\text{ }n\in\mathbb{Z},\text{ }1\leq i\leq M\right\} $,
as illustrated in Fig. \ref{fig:ProblemFormulation}. Here, each branch
is uniformly sampled at a sampling rate of $f_{s}/M$ samples per
second.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.45]{ProblemFormulation.pdf}}
\par\end{centering}
\caption{\label{fig:ProblemFormulation}Sampled Gaussian channel. The input
$x(t)$, constrained to $(0,T]$, is passed through $M$ branches
of the receiver analog front end to yield analog outputs $\left\{ y_{i}(t):\text{ }1\leq i\leq M\right\} $;
each $y_{i}(t)$ is observed over $\left(0,T\right]$ and uniformly
sampled at a rate $f_{s}/M$ to yield the sampled sequence $y_{i}[n]$.
The preprocessor can be a filter, or combination of a filter and a
modulator.}
\end{figure}
Define ${\bf y}[n]=\left[y_{1}[n],\cdots,y_{M}[n]\right]^{T}$, and denote
by $I(\left\{ x(t)\right\} _{t=0}^{T};\left\{ {\bf y}[n]\right\} _{t=0}^{T})$
the mutual information between the input $x(t)$ on the interval $0<t\leq T$
and the samples $\left\{ {\bf y}[n]\right\} $ observed on the interval
$0<t\leq T$. We pose the problem of finding the capacity $C(f_{s})$
of sampled channels as quantifying the maximum mutual information
in the limit as $T\rightarrow\infty$. The sampled channel capacity
can then be expressed as
\[
C(f_{s})=\lim_{T\rightarrow\infty}\frac{1}{T}\sup I\left(\left\{ x(t)\right\} _{t=0}^{T};\left\{ {\bf y}[n]\right\} _{t=0}^{T}\right),
\]
where the supremum is over all possible input distributions subject
to an average power constraint $\mathbb{E}(\frac{1}{T}\int_{0}^{T}\left|x(\tau)\right|^{2}\mathrm{d}\tau)\leq P$.
We restrict the transmit signal $x(t)$ to be continuous with
bounded variance (i.e. $\sup_{t}\mathbb{E}\left|x(t)\right|^{2}<\infty$),
and restrict the probability measure of $x(t)$ to be uniformly continuous.
This restriction simplifies some mathematical analysis, while still
encompassing most practical signals of interest%
\footnote{Note that this condition is not necessary for our main theorems. An
alternative proof based on correlation functions is provided in \cite{ChenEldarGoldsmith2012},
which does not require this condition.%
}.
\subsection{Sampling Mechanisms\label{sub:Sampling-Mechanisms}}
In this subsection, we describe three classes of sampling strategies
with increasing complexity. In particular, we start from sampling
following a single filter, and extend our results to incorporate filter
banks and modulation banks.
\subsubsection{Filtering followed by sampling}
Ideal uniform sampling is performed by sampling the analog signal
uniformly at a rate $f_{s}=T_{s}^{-1}$, where $T_{s}$ denotes the
sampling interval. In order to avoid aliasing, suppress out-of-band
noise, and compensate for linear distortion of practical sampling
devices, a prefilter is often added prior to the ideal uniform sampler
\cite{EldMic2009}. Our sampling process thus includes a general analog
prefilter, as illustrated in Fig. \ref{fig:SingleFilter}. Specifically,
before sampling, we prefilter the received signal with an LTI filter
that has impulse response $s(t)$ and frequency response $S\left(f\right)$,
where we assume that $h(t)$ and $s(t)$ are both bounded and continuous.
The filtered output is observed over $(0,T]$ and can be written as
\begin{equation}
y(t)=s(t)*\left(h(t)*x(t)+\eta(t)\right),\quad t\in\left(0,T\right].\label{eq:PrefilteredReceiveSignals-1}
\end{equation}
We then sample $y(t)$ using an ideal uniform sampler, leading to
the sampled sequence
\[
y[n]=y(nT_{s}).
\]
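As a quick illustration of this front end, the following Python sketch
emulates (\ref{eq:PrefilteredReceiveSignals-1}) on a fine time grid and then
keeps every $D$th point to mimic the ideal uniform sampler; the channel
response, prefilter, and noise level below are assumptions made purely for
the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dt, N = 1e-3, 2**14                 # fine grid approximating "analog" time
t = np.arange(N) * dt
x = rng.standard_normal(N)          # wideband input
h = np.exp(-t / 0.02) * dt          # assumed channel impulse response h(t)
B = 40.0
s = 2 * B * np.sinc(2 * B * (t - t[N // 2])) * dt  # assumed LPF prefilter
eta = 0.1 * rng.standard_normal(N)  # crude white-noise proxy

r = np.convolve(x, h, mode="same") + eta    # r(t) = h(t)*x(t) + eta(t)
y = np.convolve(r, s, mode="same")          # y(t) = s(t)*r(t)
D = 20
y_n = y[::D]                                # y[n] = y(n T_s), f_s = 50 Hz
print(f"f_s = {1/(D*dt):.0f} Hz, {y_n.size} samples")
\end{verbatim}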
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.35]{SingleFilter.pdf}}
\par\end{centering}
\caption{\label{fig:SingleFilter}Filtering followed by sampling: the analog
channel output $r(t)$ is linearly filtered prior to ideal uniform
sampling.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.33]{SingleAntenna.pdf}}
\par\end{centering}
\caption{\label{fig:SingleAntenna}A filter bank followed by sampling: the
received analog signal $r(t)$ is passed through $M$ branches. In
the $i$th branch, the signal $r(t)$ is passed through an LTI prefilter
with impulse response $s_{i}(t)$, and then sampled uniformly at a
rate $f_{s}/M$.}
\end{figure}
\subsubsection{Sampling following Filter Banks}
Sampling following a single filter often falls short of exploiting
channel structure. In particular, although Nyquist-rate uniform sampling
preserves information for bandlimited signals, for multiband signals
it does not ensure perfect reconstruction at the Landau rate (i.e.
the total width of the spectral support). That is because uniform sampling
at sub-Nyquist rates may suppress information by collapsing subbands,
resulting in fewer degrees of freedom. This motivates us to investigate
certain nonuniform sampling mechanisms. We begin by considering a
popular class of non-uniform sampling mechanisms, where the received
signal is preprocessed by a bank of filters. Most practical nonuniform
sampling techniques \cite{Papoulis1977,LinVai1998,UnsZer1998} fall
under filter-bank sampling and modulation-bank sampling (as described
in Section \ref{sub:Sampling-via-Modulation}). Note that the filters
may introduce delays, so that this approach subsumes that of a filter
bank with different sampling times at each branch.
In this sampling strategy, we replace the single prefilter in Fig.
\ref{fig:SingleFilter} by a bank of $M$ analog filters each followed
by ideal sampling at rate $f_{s}/M$, as illustrated in Fig. \ref{fig:SingleAntenna}.
We denote by $s_{i}(t)$ and $S_{i}\left(f\right)$ the impulse response
and frequency response of the $i$th LTI filter, respectively. The
filtered analog output in the $i$th branch prior to sampling is then
given as
\begin{equation}
y_{i}(t)=\left(h(t)*s_{i}(t)\right)*x(t)+s_{i}(t)*\eta(t),\quad t\in\left(0,T\right].\label{eq:ContinuousSignalFilterBank}
\end{equation}
These filtered signals are then sampled uniformly to yield
\[
y_{i}[n]\overset{\Delta}{=}y_{i}(nMT_{s})\quad\text{and}\quad{\bf y}[n]\overset{\Delta}{=}\left[y_{1}[n],y_{2}[n],\cdots,y_{M}[n]\right],
\]
where $T_{s}=f_{s}^{-1}$.
\subsubsection{Modulation and Filter Banks Followed by Sampling\label{sub:Sampling-via-Modulation}}
We generalize the filter-bank sampling strategy by adding an additional
filter bank and a modulation bank, which includes as special cases
a broad class of nonuniform sampling methods that are applied in both
theory and practice. Specifically, the sampling system with sampling
rate $f_{s}$ comprises $M$ branches. In the $i$th branch, the received
signal $r(t)$ is prefiltered by an LTI filter with impulse response
$p_{i}(t)$ and frequency response $P_{i}(f)$, modulated by a periodic
waveform $q_{i}(t)$ of period $T_{q}$, filtered by another LTI filter
with impulse response $s_{i}(t)$ and frequency response $S_{i}(f)$,
and then sampled uniformly at a rate $f_{s}/M=\left(MT_{s}\right)^{-1}$,
as illustrated in Fig. \ref{fig:PremodulatedPrefilteredSampler}.
The first prefilter $P_{i}(f)$ will be useful in removing out-of-band
noise, while the periodic waveforms scramble spectral contents from
different aliased sets, thus bringing in more design flexibility that
may potentially lead to better exploitation of channel structure.
By taking advantage of random modulation sequences to achieve incoherence
among different branches, this sampling mechanism has proven useful
for sub-sampling multiband signals \cite{MisEld2010Theory2Practice}.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.35]{ModulatedPrefilteredSampler.pdf}}
\par\end{centering}
\caption{\label{fig:PremodulatedPrefilteredSampler}Modulation and filter banks
followed by sampling: in each branch, the received signal is prefiltered
by an LTI filter with impulse response $p_{i}(t)$, modulated by a
periodic waveform $q_{i}(t)$, filtered by another LTI filter with
impulse response $s_{i}(t)$, and then sampled at a rate $f_{s}/M$.}
\end{figure}
In the $i$th branch, the analog signal after post-modulation filtering
prior to sampling can be written as
\begin{equation}
y_{i}(t)=s_{i}(t)*\left(q_{i}(t)\cdot\left[p_{i}(t)*\left(h(t)*x(t)+\eta(t)\right)\right]\right),\label{eq:ContinuousSignalPremodulatedPrefiltered}
\end{equation}
resulting in the digital sequence of samples
\[
y_{i}[n]=y_{i}(nMT_{s})\quad\text{and}\quad{\bf y}\left[n\right]=\left[y_{1}\left[n\right],\cdots,y_{M}\left[n\right]\right]^{T}.
\]
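A single branch of (\ref{eq:ContinuousSignalPremodulatedPrefiltered}) can be
emulated in the same discrete-time fashion; the filters and the periodic
$\pm1$ waveform below are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dt, N = 1e-3, 2**14
t = np.arange(N) * dt
x = rng.standard_normal(N)
h = np.exp(-t / 0.02) * dt                   # channel h(t) (assumed)
p = np.exp(-t / 0.01) * dt                   # pre-modulation filter p_i(t)
s = np.exp(-t / 0.01) * dt                   # post-modulation filter s_i(t)
Tq = 0.1
q = np.where(np.sin(2 * np.pi * t / Tq) >= 0, 1.0, -1.0)  # period T_q
eta = 0.1 * rng.standard_normal(N)

v = np.convolve(h, x, mode="same") + eta     # h*x + eta
u = q * np.convolve(p, v, mode="same")       # modulate the prefiltered signal
y = np.convolve(s, u, mode="same")           # post-modulation filtering
M, D = 4, 80
y_n = y[::D]                                 # y_i[n] = y_i(n M T_s)
print(f"branch rate f_s/M = {1/(D*dt):.1f} Hz, {y_n.size} samples")
\end{verbatim}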
\section{A Filter Followed by Sampling\label{sec:General-Uniform-Sampling}}
\subsection{Main Results}
The sampled channel capacity under sampling with filtering is stated
in the following theorem.
\begin{comment}
For given positive integers $k$ and $n$, we define $\Delta:=T_{s}/k$
and introduce a matrix ${\bf S}^{n}(\Delta)$ such that $\left({\bf S}^{n}(\Delta)\right)_{i,l}=s(-iT_{s}+l\Delta)$
for all $0\leq i\leq n-1$ and $l\in\mathbb{Z}$. Clearly, $ $$\left({\bf S}^{n}(\Delta)\right)_{i+1,l+k}=\left({\bf S}^{n}(\Delta)\right)_{i,l}$
holds, and hence ${\bf S}^{n}(\Delta)$ is block Toeplitz. We say
a continuous function $s(t)$ is \emph{right-invertible} if there
exists $\overline{\Delta}$ and $\overline{n}$ such that for all
$n>\overline{n}$ and $\Delta<\overline{\Delta}$, the matrix ${\bf S}^{n}\left(\Delta\right)$
is right-invertible. Right-invertibility ensures that the block Toeplitz
operator associated with $s(t)$ is right-invertible and hence the
filter response is non-degenerate, e.g. $s(t)\not\equiv0$.
\end{comment}
\begin{theorem}\label{thmPerfectCSIPrefilteredSamplerRigorous}Consider
the system shown in Fig. \ref{fig:SingleFilter}, where $\eta(t)$ is
Gaussian noise with power spectral density $\mathcal{S}_{\eta}(f)$.
Assume that $h(t)$, $s(t)$, $S\left(f\right)\sqrt{\mathcal{S}_{\eta}\left(f\right)}$
are all continuous, bounded and absolutely Riemann integrable. Additionally,
suppose that $h_{\eta}(t):=\mathcal{F}^{-1}\left(\frac{H\left(f\right)}{\sqrt{\mathcal{S}_{\eta}\left(f\right)}}\right)$
satisfies $h_{\eta}(t)=o\left(t^{-\epsilon}\right)$ for some constant
$\epsilon>1$.\footnote{This condition is used in Appendix \ref{sec:Proof-of-Theorem-PerfectCSIPrefilteredSampler}
as a sufficient condition to guarantee asymptotic properties of Toeplitz
matrices. A similar condition will be used in Theorems \ref{thmPerfectCSIFilterBankSingleAntenna}
and \ref{thmPremodulatedFilterBank}.%
} The capacity $C(f_{s})$ of the sampled channel with
a power constraint $P$ is then given parametrically as
\begin{align}
C\left(f_{s}\right) & ={\displaystyle \int}_{-\frac{f_{s}}{2}}^{\frac{f_{s}}{2}}\frac{1}{2}\log^{+}\left(\nu\gamma^{\text{s}}(f)\right)\mathrm{d}f,\label{eq:CapacityGeneralSamplingColorNoise}
\end{align}
where $\nu$ satisfies
\begin{equation}
{\displaystyle \int}_{-\frac{f_{s}}{2}}^{\frac{f_{s}}{2}}\left[\nu-1/\gamma^{\text{s}}(f)\right]^{+}\mathrm{d}f=P.\label{eq:WaterFillingConstraintColorNoise}
\end{equation}
Here, we denote
\[
\gamma^{\text{s}}(f):=\frac{\underset{l\in\mathbb{Z}}{\sum}\left|H(f-lf_{s})S(f-lf_{s})\right|^{2}}{\underset{l\in\mathbb{Z}}{\sum}\left|S(f-lf_{s})\right|^{2}\mathcal{S}_{\eta}(f-lf_{s})}.
\]
\end{theorem}
As expected, applying the prefilter modifies the channel gain and
colors the noise accordingly. The color of the noise is reflected
in the denominator term of the corresponding SNR in (\ref{eq:CapacityGeneralSamplingColorNoise})
at each $f\in[-f_{s}/2,f_{s}/2]$ within the sampling bandwidth. The
channel and prefilter response leads to an equivalent frequency-selective
channel, and the ideal uniform sampling that follows generates a folded
version of the non-sampled channel capacity. Specifically, this capacity
expression differs from the analog capacity given in Theorem \ref{thmGallagerChannelCapacity}
in that the SNR in the sampled scenario is $\gamma^{\text{s}}(f)$
in contrast to $\left|H(f)\right|^{2}/\mathcal{S}_{\eta}(f)$ for
the non-sampled scenario. Water-filling over $1/\gamma^{\text{s}}(f)$
determines the optimal power allocation.
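To see the theorem in action, the following Python sketch evaluates the
folded SNR $\gamma^{\text{s}}(f)$ on a grid and then water-fills over
$[-f_{s}/2,f_{s}/2]$; the flat channel, the white noise level, and the ideal
low-pass prefilter with cutoff $f_{s}/2$ are assumptions chosen so that the
aliasing sums truncate cleanly.
\begin{verbatim}
import numpy as np

fs, P, L = 0.6, 5.0, 25
f = np.linspace(-fs / 2, fs / 2, 1001)
df = f[1] - f[0]
l = np.arange(-L, L + 1)[:, None]
fa = f[None, :] - l * fs                     # aliased sets {f - l fs}

H2 = (np.abs(fa) <= 0.5).astype(float)       # flat channel, B = 0.5 (assumed)
Sn = 0.1 * np.ones_like(fa)                  # white noise PSD (assumed)
S2 = (np.abs(fa) < fs / 2).astype(float)     # |S(f)|^2: ideal LPF at fs/2

g = (H2 * S2).sum(0) / np.maximum((S2 * Sn).sum(0), 1e-12)  # gamma^s(f)

def power_used(nu):
    inv = np.divide(1.0, g, out=np.full_like(g, np.inf), where=g > 0)
    return np.sum(np.maximum(nu - inv, 0.0)) * df

lo, hi = 0.0, 1.0
while power_used(hi) < P:                    # bracket, then bisect for nu
    hi *= 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if power_used(mid) < P else (lo, mid)
nu = 0.5 * (lo + hi)
C = 0.5 * np.sum(np.log(np.maximum(nu * g, 1.0))) * df
print(f"C(fs = {fs}) = {C:.3f} nats/s")
\end{verbatim}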
\subsection{Approximate Analysis}
Rather than providing here a rigorous proof of Theorem \ref{thmPerfectCSIPrefilteredSamplerRigorous},
we first develop an approximate analysis by relating the aliased channel
to MISO channels, which allows for a communication theoretic interpretation.
The rigorous analysis, which is deferred to Appendix \ref{sec:Proof-of-Theorem-PerfectCSIPrefilteredSampler},
makes use of a discretization argument and asymptotic spectral properties
of Toeplitz matrices.
Consider first the equivalence between the sampled channel and a MISO
channel at a single frequency $f\in[-f_{s}/2,f_{s}/2]$. As part
of the approximation, we suppose the Fourier transform $X(f)$ of
the transmitted signal exists%
\footnote{The Fourier transform of the input signal typically does not exist
since the input may be a stationary process. %
}. The Fourier transform of the sampled signal at any $f\in[-f_{s}/2,f_{s}/2]$
is given by
\begin{equation}
\frac{1}{T_{s}}\sum_{k\in\mathbb{Z}}H\left(f-kf_{s}\right)S(f-kf_{s})X\left(f-kf_{s}\right)\label{eq:AliasedSamplesFT}
\end{equation}
due to aliasing. The summing operation allows us to treat the aliased
channel at each $f$ within the sampling bandwidth as a separate MISO
channel with countably many input branches and a single output branch,
as illustrated in Fig. \ref{fig:GeneralUniformSamplerEquivalent}.
By assumption, the noise has spectral density $\mathcal{S}_{\eta}(f)$,
so that the filtered noise has power spectral density $\mathcal{S}_{\eta}(f)|S(f)|^{2}$.
The power spectral density of the sampled noise sequence at $f\in[-f_{s}/2,f_{s}/2]$
is then given by $\sum_{l\in\mathbb{Z}}\mathcal{S}_{\eta}(f-lf_{s})\left|S(f-lf_{s})\right|^{2}$.
If we term $\left\{ f-lf_{s}\mid\text{ }l\in\mathbb{Z}\right\} $
the \emph{aliased frequency set} for $f$, then the amount of power
allocated to $X(f-lf_{s})$ should ``match'' the corresponding channel
gain within each aliased set in order to achieve capacity. Specifically,
denote by $G(f)$ the transmitted signal for every $f\in[-f_{s}/2,f_{s}/2]$.
This signal is multiplied by a constant gain $c\alpha_{l}\text{ }(l\in\mathbb{Z})$,
and sent through the $l$th input branch, i.e.
\begin{equation}
X\left(f-lf_{s}\right)=c\alpha_{l}G(f),\quad\forall l\in\mathbb{Z},
\end{equation}
where $c$ is a normalizing constant, and
\[
\alpha_{l}=\frac{H^{*}\left(f-lf_{s}\right)S^{*}\left(f-lf_{s}\right)}{\sum_{l}\left|H(f-lf_{s})S(f-lf_{s})\right|^{2}}.
\]
The resulting SNR can be expressed as the sum of the SNRs at each
branch (as shown in \cite{Gold2005}). Since the sampling operation combines
signal components at frequencies from each aliased set $\left\{ f-lf_{s}\mid l\in\mathbb{Z}\right\} $,
it is equivalent to having a set of parallel MISO channels, each indexed
by some $f\in[-f_{s}/2,f_{s}/2]$. The water-filling strategy is optimal
in allocating power among the set of parallel channels, which yields
the parametric equation (\ref{eq:WaterFillingConstraintColorNoise})
and completes our approximate analysis.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.32]{GeneralUniformSamplerEquivalent.pdf}}
\par\end{centering}
\caption{\label{fig:GeneralUniformSamplerEquivalent}Equivalent MISO Gaussian
channel for a given $f\in[-f_{s}/2,f_{s}/2]$ under filtering followed
by sampling. The additive noise has power spectral density $\mathcal{S}_{n}(f)=\sum_{l\in\mathbb{Z}}\mathcal{S}_{\eta}(f-lf_{s})\left|S(f-lf_{s})\right|^{2}$.}
\end{figure}
\subsection{Proof Sketch}
Since the Fourier transform is not well-defined for signals with infinite
energy, there exist technical flaws lurking in the approximate treatment
of the previous subsection. The key step to circumvent these issues
is to exploit the asymptotic properties of Toeplitz matrices/operators.
This approach was used by Gallager \cite[Theorem 8.5.1]{Gallager68}
to prove the analog channel capacity theorem. Under uniform sampling,
however, the sampled channel no longer acts as a Toeplitz operator,
but instead becomes a block-Toeplitz operator. Since conventional
approaches \cite[Chapter 8.4]{Gallager68} do not accommodate
block-Toeplitz matrices, a new analysis framework is needed. We provide
here a roadmap of our analysis framework, and defer the complete proof
to Appendix \ref{sec:Proof-of-Theorem-PerfectCSIPrefilteredSampler}.
\subsubsection{Discrete Approximation}
The channel response and the filter response are both assumed to be
continuous, which motivates us to use a discrete-time approximation
in order to transform the continuous-time operator into its discrete
counterpart. We discretize a time-domain process by pointwise sampling
with period $\Delta$, e.g. $h(t)$ is transformed into $\left\{ h[n]\right\} $
by setting $h[n]=h(n\Delta)$. For any given $T$, this allows us
to use a finite-dimensional matrix to approximate the continuous-time
block-Toeplitz operator. Then, due to the continuity assumption, an
exact capacity expression can be obtained by letting $\Delta$ go
to zero.
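A minimal sketch of this step, with an assumed two-sided exponential
$h(t)$: pointwise sampling at step $\Delta$ produces the coefficients of a
finite Toeplitz section of the convolution operator.
\begin{verbatim}
import numpy as np

Delta, n = 0.01, 400
tt = np.arange(-n, n + 1) * Delta
h = np.exp(-np.abs(tt) / 0.05) * Delta        # h[k] = h(k*Delta) * Delta

# finite n x n section of the convolution operator (Hx)[i] = sum_j h[i-j] x[j]
i = np.arange(n)
Hmat = h[n + (i[:, None] - i[None, :])]       # Toeplitz: entry depends on i - j
print("finite Toeplitz section:", Hmat.shape)
\end{verbatim}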
\subsubsection{Spectral properties of block-Toeplitz matrices}
After discretization, the input-output relation is similar to a MIMO
discrete-time system. Applying MIMO channel capacity results leads
to the capacity for a given $T$ and $\Delta$. The channel capacity
is then obtained by taking $T$ to infinity and $\Delta$ to zero,
which can be related to the channel matrix's spectrum using Toeplitz
theory. Since the filtered noise is non-white and correlated across
time, we need to whiten it first. This, however, destroys the Toeplitz
properties of the original system matrix. In order to apply established
results in Toeplitz theory, we make use of the concept of \emph{asymptotic
equivalence} \cite{Gray06} that builds connections between Toeplitz
matrices and non-Toeplitz matrices. This allows us to relate the capacity
limit with spectral properties of the channel and filter response.
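The flavor of the argument can be checked numerically: for a Hermitian
Toeplitz matrix $T_{n}$ with absolutely summable coefficients and symbol
$t(\omega)$, Szeg\H{o}-type results give
$\frac{1}{n}\sum_{i}F\left(\lambda_{i}(T_{n})\right)\rightarrow\frac{1}{2\pi}\int_{-\pi}^{\pi}F\left(t(\omega)\right)\mathrm{d}\omega$.
The geometric coefficient sequence in the Python sketch below is an arbitrary
assumption, used only to exercise the statement with $F=\log$.
\begin{verbatim}
import numpy as np

n = 512
k = np.arange(-(n - 1), n)
tk = 0.5 ** np.abs(k)                          # summable Toeplitz coefficients
i = np.arange(n)
Tn = tk[(n - 1) + (i[:, None] - i[None, :])]   # (T_n)_{i,j} = t_{i-j}
eig = np.linalg.eigvalsh(Tn)

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
symbol = np.ones_like(w)                        # t(w) = sum_k t_k e^{jkw}
for kk in range(1, n):
    symbol += 2 * (0.5 ** kk) * np.cos(kk * w)

print(f"(1/n) sum log eig = {np.mean(np.log(eig)):.4f}")
print(f"symbol integral   = {np.mean(np.log(symbol)):.4f}")
\end{verbatim}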
\subsection{Optimal Prefilters\label{sub:Optimal-Prefilters}}
\subsubsection{Derivation of optimal prefilters}
Since different prefilters lead to different channel capacities, a
natural question is how to choose $S(f)$ to maximize capacity. The
optimizing prefilter is given in the following theorem.
\begin{theorem}\label{Cor-OptimalPrefilter-GeneralUniformSampling}Consider
the system shown in Fig. \ref{fig:SingleFilter}, and define
\[
\gamma_{l}(f):=\frac{\left|H\left(f-lf_{s}\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-lf_{s}\right)}
\]
for any integer $l$. Suppose that in each aliased set $\left\{ f-lf_{s}\mid l\in\mathbb{Z}\right\} $,
there exists $k$ such that
\[
\gamma_{k}(f)=\sup_{l\in\mathbb{Z}}\gamma_{l}(f).
\]
Then the capacity in (\ref{eq:CapacityGeneralSamplingColorNoise})
is maximized by the filter with frequency response
\begin{equation}
S\left(f-lf_{s}\right)=\begin{cases}
1, & \text{ }\mbox{if }\gamma_{l}(f)=\sup_{k\in\mathbb{Z}}\gamma_{k}(f),\\
0, & \text{ }\mbox{otherwise,}
\end{cases}\label{eq:OptimalPrefilterGeneralUniformSampling}
\end{equation}
for all $l\in\mathbb{Z}$ and $f\in\left[-f_{s}/2,f_{s}/2\right]$.
\end{theorem}
\begin{IEEEproof}It can be observed from (\ref{eq:CapacityGeneralSamplingColorNoise})
that the frequency response $S(f)$ at any $f$ can only affect the
SNR at $f\mbox{ mod }f_{s}$, indicating that we can optimize for
frequencies $f_{1}$ and $f_{2}$ $\left(f_{1}\neq f_{2};f_{1},f_{2}\in\left[-\frac{f_{s}}{2},\frac{f_{s}}{2}\right]\right)$
separately. Specifically, the SNR at each $f$ in the aliased channel
is given by
\begin{align*}
\gamma^{\text{s}}(f) & =\sum_{l\in\mathbb{Z}}\gamma_{l}(f)\lambda_{l}(f),
\end{align*}
where
\[
\lambda_{l}(f)=\frac{\mathcal{S}_{\eta}(f-lf_{s})\left|S(f-lf_{s})\right|^{2}}{\sum_{l\in\mathbb{Z}}\left|S(f-lf_{s})\right|^{2}\mathcal{S}_{\eta}(f-lf_{s})},
\]
and $\sum_{l}\lambda_{l}(f)=1$. Hence, $\gamma^{\text{s}}(f)$
is a convex combination of $\left\{ \gamma_{l}(f),l\in\mathbb{Z}\right\} $,
and is thus upper bounded by $\sup_{l\in\mathbb{Z}}\gamma_{l}(f)$. This
bound can be attained by the filter given in (\ref{eq:OptimalPrefilterGeneralUniformSampling}).
\end{IEEEproof}
The optimal prefilter puts all its mass on the frequency with
the highest SNR within each aliased set $\left\{ f-lf_{s}\mid l\in\mathbb{Z}\right\} $.
Even if the optimal prefilter does not exist, we can find a prefilter
that achieves an information rate arbitrarily close to the maximum
capacity whenever $\sup_{l\in\mathbb{Z}}\gamma_{l}(f)$ is finite. Finiteness
of the supremum is guaranteed under mild conditions, e.g. when $\gamma_{l}(f)$
is bounded.
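The selection rule and the convex-combination bound from the proof can be
verified on a grid; the non-monotone channel in the Python sketch below is
an illustrative assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
fs, L = 0.4, 6
f = np.linspace(-fs / 2, fs / 2, 401)
l = np.arange(-L, L + 1)[:, None]
fa = f[None, :] - l * fs                       # aliased sets {f - l fs}

H2 = (np.cos(3 * np.pi * fa) ** 2) * (np.abs(fa) <= 0.5)  # assumed gain
Sn = 0.1 * np.ones_like(fa)
gamma = H2 / Sn                                # per-alias SNRs gamma_l(f)

# optimal prefilter: indicator of the max-SNR alias within each set
S2_opt = (gamma == gamma.max(axis=0, keepdims=True)).astype(float)
g_opt = (H2 * S2_opt).sum(0) / (S2_opt * Sn).sum(0)

# any other |S|^2 yields a convex combination of the gamma_l(f)
S2_rnd = rng.random(fa.shape)
g_rnd = (H2 * S2_rnd).sum(0) / (S2_rnd * Sn).sum(0)
assert np.all(g_rnd <= g_opt + 1e-9)
print("largest SNR loss of the random filter:", float((g_opt - g_rnd).max()))
\end{verbatim}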
\subsubsection{Interpretations}
Recall that $S(f)$ is applied after the noise is added. One distinguishing
feature in the subsampled channel is the non-invertibility of the
prefiltering operation, i.e. we cannot recover the analog channel
output from sub-Nyquist samples. As shown above, the aliased SNR is
a convex combination of SNRs at all aliased branches, indicating that
$S(f)$ plays the role of \textit{``weighting''} different branches.
As in maximum ratio combining (MRC), those frequencies with larger
SNRs should be given larger weight, while those that suffer from poor
channel gains should be suppressed.
The problem of finding optimal prefilters corresponds to \textit{joint}
optimization over all input and filter responses. Looking at the equivalent
aliased channel for a given frequency $f\in[-f_{s}/2,f_{s}/2]$ as
illustrated in Fig. \ref{fig:GeneralUniformSamplerEquivalent}, we
have full control over both $X(f)$ and $S(f)$. Although MRC at the
transmitter side maximizes the combiner SNR for a MISO channel \cite{Gold2005},
it turns out to be suboptimal for our joint optimization problem.
Rather, the optimal solution is to perform selection combining \cite{Gold2005}
by setting $S(f-lf_{s})$ to one for some $l=l_{0}$, as well as noise
suppression by setting $S(f-lf_{s})$ to zero for all other $l$s.
In fact, setting $S(f)$ to zero precludes the undesired effects of
noise from low SNR frequencies, which is crucial in maximizing data
rate.
Another interesting observation is that optimal prefiltering equivalently
generates an \textit{alias-free} channel. After passing through an
optimal prefilter, all frequencies modulo $f_{s}$ except the one
with the highest SNR are removed, and hence the optimal prefilter
suppresses aliasing and out-of-band noise. This alias-suppressing
phenomenon, while different from many sub-Nyquist works that advocate
mixing instead of alias suppression \cite{MisEld2010Theory2Practice},
arises from the fact that we have control over the input shape.
\subsection{Numerical examples}
\subsubsection{Additive Gaussian Noise Channel without Prefiltering}
The first example we consider is the additive Gaussian noise channel.
The channel gain is flat within the channel bandwidth $B=0.5$, i.e.
$H(f)=1$ if $f\in\left[-B,B\right]$ and $H(f)=0$ otherwise. The
noise is modeled as a measurable and stationary Gaussian process with
the power spectral density plotted in Fig. \ref{fig:LapidothAWGN}(a).
This is the noise model adopted by Lapidoth in \cite{Lapidoth2009}
to approximate white noise, which avoids the infinite variance of
the standard model for unfiltered white noise. We employ ideal point-wise
sampling without filtering.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.9]{LapidothNoise.pdf}
\par\end{centering}
\begin{centering}
(a)
\par\end{centering}
\begin{centering}
\includegraphics[scale=0.9]{LapidothCapacity.pdf}
\par\end{centering}
\centering{}(b)\caption{\label{fig:LapidothAWGN}Capacity of sampled additive Gaussian noise
channel under ideal uniform sampling without filtering. (a) The channel
gain and the PSD of the noise. (b) Sampled channel capacity vs. analog
channel capacity under a power constraint $P=5$. }
\end{figure}
Since the noise bandwidth is larger than the channel bandwidth, ideal
uniform sampling without prefiltering does not allow analog capacity
to be achieved when sampling at a rate equal to twice the channel
bandwidth, i.e. the Nyquist rate. Increasing the sampling rate above
twice the channel bandwidth (but below the noise bandwidth) spreads
the total noise power over a larger sampling bandwidth, reducing the
noise density at each frequency. This allows the sampled capacity
to continue increasing when sampling above the Nyquist rate, as illustrated
in Fig. \ref{fig:LapidothAWGN}(b). It can be seen that the capacity
does not increase monotonically with the sampling rate. We will discuss
this phenomenon in more detail in Section \ref{sub:Capacity-Non-monotonicity-Single-Filter}.
\subsubsection{Optimally Filtered Channel}
In general, the frequency response of the optimal prefilter is discontinuous,
which may be hard to realize in practice. However, for certain classes
of channel models, the prefilter has a smooth frequency response.
One example of this channel class is a \textit{monotone channel},
whose channel response obeys $\left|H(f_{1})\right|^{2}/\mathcal{S}_{\eta}(f_{1})\geq\left|H(f_{2})\right|^{2}/\mathcal{S}_{\eta}(f_{2})$
for any $\left|f_{1}\right|\leq\left|f_{2}\right|$. Theorem \ref{Cor-OptimalPrefilter-GeneralUniformSampling}
implies that the optimizing prefilter for a monotone channel reduces
to a low-pass filter with cutoff frequency $f_{s}/2$.
\begin{comment}
As an example, Fig. \ref{fig:RaisedCosineUniformSampler} illustrates
the capacity-sampling tradeoff curve for the raised-cosine channel,
for different roll-off factors. The frequency response of the channel
is given by
\begin{equation}
H(f)=\begin{cases}
T, & \quad\left|f\right|\leq\frac{1-\beta}{2T},\\
\frac{T}{2}\left[1+\cos\left(\frac{\pi T}{\beta}\left[\left|f\right|-\frac{1-\beta}{2T}\right]\right)\right], & \quad\frac{1-\beta}{2T}\leq\left|f\right|\leq\frac{1}{2},
\end{cases}
\end{equation}
where $\beta$ denotes the roll-off factor and $T$ is a given period.
It can be observed that below the Nyquist rate, capacity increases
with $f_{s}$ since the effective sampling bandwidth increases, while
oversampling beyond the Nyquist rate does not increase capacity. As
expected, sampling at or above the Nyquist rate creates an alias-free
capacity expression that can be simplified as
\begin{equation}
C(f_{s})=\frac{1}{2}\underset{f\in\mathcal{F}(\nu)}{{\displaystyle \int}}\log\left(\nu\frac{\left|H\left(f\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f\right)}\right)\text{d}f,
\end{equation}
which equals the classical Nyquist-rate (i.e. the analog) channel
capacity derived in \cite{Gallager68}.
\end{comment}
For non-monotone channels, the optimal prefilter may not be a low-pass
filter, as illustrated in Fig. \ref{fig:PolynomialChannelSingleFilter}.
Fig. \ref{fig:PolynomialChannelSingleFilter}(b) shows the optimal
filter for the channel given in Fig. \ref{fig:PolynomialChannelSingleFilter}(a)
with $f_{s}=0.4f_{\text{NYQ}}$, which is no longer a low-pass filter.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics{polynomialChannelOriginal.pdf}\includegraphics{polynomialChannelOptimalFilter.pdf}}
\par\end{centering}
\centerline{$\quad$(a)$\hspace{12em}$ (b)}
\begin{centering}
\textsf{\includegraphics{polynomialChannelPrefiltered.pdf}\includegraphics{polynomialChannelCapacity.pdf}}
\par\end{centering}
\centerline{$\quad$(c)$\hspace{12em}$ (d)}
\caption{\label{fig:PolynomialChannelSingleFilter}Capacity of optimally filtered
channel: (a) frequency response of the original channel; (b) optimal
prefilter associated with this channel for $f_{s}=0.4f_{\text{NYQ}}$; (c)
optimally filtered channel response for $f_{s}=0.4f_{\text{NYQ}}$; (d) capacity
vs. sampling rate for the optimal prefilter and for the matched filter. }
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.7]{MultibandChannelPlot.pdf}}
\par\end{centering}
\centerline{$\quad$(a)}
\begin{centering}
\textsf{\includegraphics[scale=0.9]{MultiChannelPrefiltered2.pdf}}
\par\end{centering}
\centerline{$\quad$(b)}
\caption{\label{fig:UniformSamplerMultiband}Sampled channel capacity for a
multiband channel under filter-bank sampling. (a) Channel gain of
the multiband channel. The power constraint is $P=10$, and the noise
power is $\sigma_{\eta}^{2}=1$. (b) Sampled channel capacity for
a single filter followed by sampling and for filter-bank sampling
with banks of two and of four filters. }
\end{figure}
\subsubsection{Capacity Non-monotonicity\label{sub:Capacity-Non-monotonicity-Single-Filter}}
When the channel is not monotone, a somewhat counter-intuitive fact
arises: the channel capacity $C(f_{s})$ is not necessarily a non-decreasing
function of the sampling rate $f_{s}$. This occurs, for example,
in multiband channels as illustrated in Fig. \ref{fig:UniformSamplerMultiband}.
Here, the Fourier transform of the channel response is concentrated
in two sub-intervals within the overall channel bandwidth. Specifically,
the entire channel bandwidth is contained in $\left[-0.5,0.5\right]$
with Nyquist rate $f_{\text{NYQ}}=1$, and the channel frequency
response is given by
\begin{equation}
H(f)=\begin{cases}
1, & \quad\mbox{if }\left|f\right|\in\left[\frac{1}{10},\frac{1}{5}\right]\bigcup\left[\frac{2}{5},\frac{1}{2}\right];\\
0, & \quad\mbox{otherwise.}
\end{cases}
\end{equation}
If this channel is sampled at a rate $f_{s}=\frac{3}{5}f_{\text{NYQ}}$,
then aliasing occurs and leads to an aliased channel with one subband
(and hence one degree of freedom). However, if sampling is performed
at a rate $f_{s}=\frac{2}{5}f_{\text{NYQ}}$, it can easily be verified
that the two subbands remain non-overlapping in the aliased channel,
resulting in two degrees of freedom.
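This degrees-of-freedom count can be checked directly by folding the support
of $H(f)$ modulo $f_{s}$. The Python sketch below (with half-open subbands
to avoid boundary double counting) reports, for each rate, the worst-case
number of overlapping aliases: overlap $2$ at $f_{s}=0.6$ versus
overlap-free folding at $f_{s}=0.4$, matching the one- vs. two-degree-of-freedom
count above.
\begin{verbatim}
import numpy as np

def alias_overlap(fs, npts=6000, L=8):
    f = (np.arange(npts) / npts - 0.5) * fs    # grid on [-fs/2, fs/2)
    l = np.arange(-L, L + 1)[:, None]
    fa = np.abs(f[None, :] - l * fs)
    active = ((fa >= 0.1) & (fa < 0.2)) | ((fa >= 0.4) & (fa < 0.5))
    return active.sum(axis=0)

for fs in (0.6, 0.4):                          # 3/5 and 2/5 of f_NYQ = 1
    occ = alias_overlap(fs)
    print(f"fs = {fs}: max overlapping aliases = {occ.max()}, "
          f"folded support = {np.mean(occ > 0) * fs:.2f}")
\end{verbatim}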
The tradeoff curve between capacity and sampling rate with an optimal
prefilter is plotted in Fig. \ref{fig:UniformSamplerMultiband}(b).
This curve indicates that increasing the sampling rate may not necessarily
increase capacity for certain channel structures. In other words,
a single filter followed by sampling largely constrains our ability
to exploit channel and signal structures. This is not the case for
more general sampling structures, as we show in the next section.
\section{A Bank of Filters Followed by Sampling\label{sec:Multi-channel-Prefiltered-Uniform}}
\subsection{Main Results}
We now treat filter-bank sampling, in which the channel output is
filtered and sampled through $M$ multiple branches as illustrated
in Fig. \ref{fig:SingleAntenna}.
In order to state our capacity results, we introduce two matrices
${\bf F}_{s}$ and ${\bf F}_{h}$ defined in the Fourier domain. Here,
${\bf F}_{s}$ is an infinite matrix with $M$ rows and infinitely many
columns and ${\bf F}_{h}$ is a diagonal infinite matrix such that
for every $i$ ($1\leq i\leq M$) and every integer $l$:
\begin{align*}
\left({\bf F}_{s}(f)\right)_{i,l} & =S_{i}\left(f-\frac{lf_{s}}{M}\right)\sqrt{\mathcal{S}_{\eta}\left(f-\frac{lf_{s}}{M}\right)},\\
\left({\bf F}_{h}(f)\right)_{l,l} & =H\left(f-\frac{lf_{s}}{M}\right)/\sqrt{\mathcal{S}_{\eta}\left(f-\frac{lf_{s}}{M}\right)}.
\end{align*}
\begin{comment}
For a given $\Delta>0$ and an integer $n$, we introduce a block
Toeplitz matrix ${\bf S}_{k}^{n}(\Delta)$ such that $\left({\bf S}_{k}^{n}(\Delta)\right)_{i,l}=s_{k}(iT_{s}+l\Delta)$
for all $0\leq i\leq n-1$, $1\leq k\leq M$ and $l\in\mathbb{Z}$.
We say a set of continuous functions $\left\{ s_{i}(t):1\leq i\leq M\right\} $
is \emph{jointly right-invertible} if there exists $\overline{\Delta}$
and $\overline{n}$ such that for all $n>\overline{n}$ and $\Delta<\overline{\Delta}$,
the matrix ${\bf S}^{n}:=\left[\left({\bf S}_{1}^{n}\left(\Delta\right)\right)^{T},\left({\bf S}_{2}^{n}\left(\Delta\right)\right)^{T},\cdots,\left({\bf S}_{M}^{n}\left(\Delta\right)\right)^{T}\right]^{T}$
is right-invertible. Joint right-invertibility ensures that we cannot
have $s_{1}(t)=s_{2}(t)$ for all $t$.
\end{comment}
\begin{theorem}\label{thmPerfectCSIFilterBankSingleAntenna}Consider
the system shown in Fig. \ref{fig:SingleAntenna}. Assume that $h(t)$
and $s_{i}(t)$ $(1\leq i\leq M)$ are all continuous, bounded and
absolutely Riemann integrable. Additionally, assume that $h_{\eta}(t):=\mathcal{F}^{-1}\left(\frac{H\left(f\right)}{\sqrt{\mathcal{S}_{\eta}\left(f\right)}}\right)$
satisfies $h_{\eta}(t)=o\left(t^{-\epsilon}\right)$ for some constant
$\epsilon>1$, and that ${\bf F}_{s}$ is right-invertible for every
$f$. Define $\tilde{{\bf F}}_{s}\overset{\Delta}{=}\left({\bf F}_{s}{\bf F}_{s}^{*}\right)^{-\frac{1}{2}}{\bf F}_{s}$.
The capacity $C(f_{s})$ of the sampled channel with a power constraint
$P$ is given as
\[
C(f_{s})={\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\frac{1}{2}\sum_{i=1}^{M}\log^{+}\left(\nu\lambda_{i}\left(\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}\right)\right)\mathrm{d}f,
\]
where
\[
{\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\sum_{i=1}^{M}\left[\nu-\frac{1}{\lambda_{i}\left(\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}\right)}\right]^{+}\mathrm{d}f=P.
\]
Here, $\lambda_{i}\left(\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}\right)$
denotes the $i$th largest eigenvalue of $\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}$.
\end{theorem}\begin{remark}We can express this capacity in a
more traditional MIMO capacity form as
\begin{align}
C(f_{s}) & =\max_{\left\{ {\bf Q}(f)\right\} \in\mathcal{Q}}{\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\frac{1}{2}\log\det\left({\bf I}_{M}+\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf Q}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}\right)\mathrm{d}f,\label{eq:ChannelCapacitySingleAntenna}
\end{align}
where $\tilde{{\bf F}}_{s}\overset{\Delta}{=}\left({\bf F}_{s}{\bf F}_{s}^{*}\right)^{-\frac{1}{2}}{\bf F}_{s}$
and
\begin{align*}
\mathcal{Q} & =\left\{ {\bf Q}(f):\left|f\right|\leq\frac{f_{s}}{2M},\;{\bf Q}(f)\in\mathbb{S}_{+},\right.\\
 & \quad\quad\quad\quad\left.\int_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\mathrm{Tr}\left({\bf Q}(f)\right)\mathrm{d}f=P\right\} .
\end{align*}
\end{remark}
The optimal $\left\{ {\bf Q}(f)\right\} $ corresponds to a water-filling
power allocation strategy based on the singular values of the equivalent
channel matrix $\tilde{{\bf F}}_{s}{\bf F}_{h}$, where ${\bf F}_{h}$
is associated with the original channel and $\tilde{{\bf F}}_{s}$
arises from prefiltering and noise whitening. For each $f\in[-f_{s}/2M,f_{s}/2M]$,
the integrand in (\ref{eq:ChannelCapacitySingleAntenna}) can be interpreted
as a MIMO capacity formula. As before, we can still optimize the
transmitted signals $\left\{ X\left(f-\frac{lf_{s}}{M}\right)\mid l\in\mathbb{Z}\right\} $
at a countable number of input branches, but this time we have $M$
receive branches. The channel capacity is achieved when the transmit
signals are designed to decouple this MIMO channel into $M$ parallel
channels (and hence $M$ degrees of freedom), each associated with
one of its singular directions.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.33]{SingleAntennaEquivalent.pdf}}
\par\end{centering}
\caption{\label{fig:FilterBankSingleAntennaMIMO}Equivalent MIMO Gaussian channel
for a frequency $f\in[-f_{s}/2M,f_{s}/2M]$ under sampling with a
bank of $M$ filters. Here, $\mathcal{S}_{n}^{i}(f)=\sum_{l\in\mathbb{Z}}\mathcal{S}_{\eta}(f-lf_{s}/M)\left|S_{i}(f-lf_{s}/M)\right|^{2}$.}
\end{figure}
\subsection{Approximate Analysis}
The sampled analog channel under filter-bank sampling can be studied
through its connection with MIMO Gaussian channels (see Fig. \ref{fig:FilterBankSingleAntennaMIMO}).
Consider first a single frequency $f\in[-f_{s}/2M,f_{s}/2M]$. Since
we employ a bank of filters each followed by an ideal uniform sampler,
the equivalent channel has $M$ receive branches, each corresponding
to one branch of filtered sampling at rate $f_{s}/M$. The noise received
in the $i$th branch is zero-mean Gaussian with power spectral density
\begin{align*}
 & \sum_{l\in\mathbb{Z}}\left|S_{i}\left(f-\frac{lf_{s}}{M}\right)\right|^{2}\mathcal{S}_{\eta}\left(f-\frac{lf_{s}}{M}\right),\quad f\in\left[-\frac{f_{s}}{2M},\frac{f_{s}}{2M}\right];
\end{align*}
moreover, since all branches filter the same noise process $\eta(t)$,
the noise sequences at different branches are mutually correlated.
The received noise vector can be whitened by multiplying ${\bf Y}(f)=\left[Y_{1}(f),\cdots,Y_{M}(f)\right]^{T}$
by the $M\times M$ whitening matrix $\left({\bf F}_{s}(f){\bf F}_{s}^{*}(f)\right)^{-\frac{1}{2}}$.
Since the whitening operation is invertible, it preserves capacity.
After whitening, the channel of Fig. \ref{fig:FilterBankSingleAntennaMIMO}
at frequency $f$ has the following channel matrix
\begin{equation}
\left({\bf F}_{s}(f){\bf F}_{s}^{*}(f)\right)^{-\frac{1}{2}}{\bf F}_{s}(f){\bf F}_{h}(f)=\tilde{{\bf F}}_{s}(f){\bf F}_{h}(f).
\end{equation}
MIMO Gaussian channel capacity results \cite{Tel1999} immediately
imply that the capacity of the channel in Fig. \ref{fig:FilterBankSingleAntennaMIMO}
at any $f\in[-f_{s}/2M,f_{s}/2M]$ can be expressed as
\begin{equation}
\max_{{\bf Q}}\frac{1}{2}\log\det\left[{\bf I}+\tilde{{\bf F}}_{s}(f){\bf F}_{h}(f){\bf Q}(f){\bf F}_{h}^{*}(f)\tilde{{\bf F}}_{s}^{*}(f)\right]
\end{equation}
subject to the constraints that $\mathrm{Tr}\left({\bf Q}(f)\right)\leq P(f)$
and ${\bf Q}(f)\in\mathbb{S}_{+}$, where ${\bf Q}(f)$ denotes the
power allocation matrix. Performing water-filling power allocation
across all parallel channels leads to our capacity expression.
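A per-frequency numerical sketch, truncating the aliased set to $L$ terms
and using assumed channel gains and random filter responses, makes the
whitening step explicit; the eigenvalues printed at the end are exactly
those entering the water-filling of Theorem \ref{thmPerfectCSIFilterBankSingleAntenna}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
M, L = 2, 9                                   # branches; truncated aliases
fs, f = 0.4, 0.05
l = np.arange(-(L // 2), L // 2 + 1)
fa = f - l * fs / M                            # aliased set {f - l fs / M}

H = np.maximum(0.0, 1 - np.abs(fa))            # assumed channel gains H(f)
Sn = 0.1 * np.ones(L)                          # noise PSD on the aliases
S = rng.standard_normal((M, L))                # assumed filter responses

Fs = S * np.sqrt(Sn)                           # (F_s)_{i,l}
Fh = np.diag(H / np.sqrt(Sn))                  # (F_h)_{l,l}
A = Fs @ Fh @ Fh.T @ Fs.T                      # F_s F_h F_h^* F_s^*
Lw = np.linalg.cholesky(np.linalg.inv(Fs @ Fs.T))
lam = np.linalg.eigvalsh(Lw.T @ A @ Lw)        # eig of F~_s F_h F_h^* F~_s^*
print("eigenvalues entering water-filling:", np.sort(lam)[::-1])
\end{verbatim}
Here the Cholesky factor of $\left({\bf F}_{s}{\bf F}_{s}^{*}\right)^{-1}$ is
used in place of the matrix square root, which leaves the eigenvalues
unchanged by similarity.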
\subsection{Optimal Filter Bank\label{sub:Optimal-Filter-Banks}}
\subsubsection{Derivation of optimal filter banks}
In general, $\log\det[{\bf I}_{M}+\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf Q}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}]$
is not perfectly determined by $\tilde{{\bf F}}_{s}(f)$ and ${\bf F}_{h}(f)$
at a single frequency $f$, but also depends on the water level, since
the optimal power allocation strategy relies on the power constraint
$P$ as well as on ${\bf F}_{s}$ and ${\bf F}_{h}$
across all $f$. In other words, $\log\det[{\bf I}_{M}+\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf Q}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}]$
is a function of all singular values of $\tilde{{\bf F}}_{s}{\bf F}_{h}$
and the universal water-level associated with optimal power allocation.
Given two sets of singular values, we cannot determine which set is
preferable without accounting for the water-level, unless one set
is element-wise larger than the other. However, if there exists
a prefilter that maximizes all singular values simultaneously, then
this prefilter will be universally optimal regardless of the water level.
Fortunately, such optimal schemes exist, as we characterize in Theorem
\ref{Cor:OptimalFilterBank}.
Since ${\bf F}_{h}(f)$ is a diagonal matrix, $\lambda_{k}\left({\bf F}_{h}{\bf F}_{h}^{*}\right)$
denotes the $k$th largest diagonal entry of ${\bf F}_{h}{\bf F}_{h}^{*}$.
The optimal filter bank can then be given as follows.
\begin{theorem}\label{Cor:OptimalFilterBank}Consider the system
shown in Fig. \ref{fig:SingleAntenna}. Suppose that for each aliased
set $\left\{ f-\frac{if_{s}}{M}\mid i\in\mathbb{Z}\right\} $ and
each $k$ $(1\leq k\leq M)$, there exists an integer $l$ such that
$\frac{\left|H\left(f-\frac{lf_{s}}{M}\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-\frac{lf_{s}}{M}\right)}$
is equal to the $k^{\text{th}}$ largest element in $\left\{ \frac{\left|H\left(f-\frac{if_{s}}{M}\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-\frac{if_{s}}{M}\right)}\mid i\in\mathbb{Z}\right\} $.
The capacity (\ref{eq:ChannelCapacitySingleAntenna}) under filter-bank
sampling is then maximized by a bank of filters for which the frequency
response of the $k^{\text{th}}$ filter is given by
\begin{equation}
S_{k}\left(f-\frac{lf_{s}}{M}\right)=\begin{cases}
1, & \quad\mbox{if }\frac{\left|H\left(f-\frac{lf_{s}}{M}\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-\frac{lf_{s}}{M}\right)}=\lambda_{k}\left({\bf F}_{h}(f){\bf F}_{h}^{*}(f)\right);\\
0, & \quad\mbox{otherwise},
\end{cases}\label{eq:optimalFilterBank}
\end{equation}
for all $l\in\mathbb{Z}$, $1\leq k\leq M$ and $f\in\left[-\frac{f_{s}}{2M},\frac{f_{s}}{2M}\right]$.
The resulting maximum channel capacity is given by
\begin{align}
C(f_{s}) & =\frac{1}{2}{\displaystyle \int}_{-f_{s}/2M}^{f_{s}/2M}\sum_{k=1}^{M}\log^{+}\left(\nu\cdot\lambda_{k}\left({\bf F}_{h}{\bf F}_{h}^{*}\right)\right)\mathrm{d}f,
\end{align}
where $\nu$ is chosen such that
\begin{equation}
{\displaystyle \int}_{-f_{s}/2M}^{f_{s}/2M}\sum_{k=1}^{M}\left[\nu-\frac{1}{\lambda_{k}\left({\bf F}_{h}{\bf F}_{h}^{*}\right)}\right]^{+}\mathrm{d}f=P.
\end{equation}
\end{theorem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Corollary-Optimal-Filter-Bank}.
\end{IEEEproof}
The choice of prefilters in (\ref{eq:optimalFilterBank}) achieves
the upper bounds on all singular values, and is hence universally
optimal regardless of the water level. Since $\tilde{{\bf F}}_{s}$
has orthonormal rows, it acts as an orthogonal projection onto
an $M$-dimensional subspace. The rows of the diagonal matrix ${\bf F}_{h}$
are orthogonal to each other. Therefore, the subspace closest to the
channel space spanned by ${\bf F}_{h}$ corresponds to the $M$ rows
of ${\bf F}_{h}$ containing the highest channel gains out of the
entire aliased frequency set $\left\{ f-\frac{lf_{s}}{M}\mid l\in\mathbb{Z}\right\} $.
The maximum data rate is then achieved when the filter bank outputs
$M$ frequencies with the highest SNR among the set of frequencies
equivalent modulo $\frac{f_{s}}{M}$ and suppresses noise from all
other branches.
We note that if we consider the enlarged aliased set $\left\{ f-lf_{s}/M\mid l\in\mathbb{Z}\right\} $,
then the optimal filter bank is equivalent to generating an alias-free
channel over the frequency interval $\left[-f_{s}/2M,f_{s}/2M\right]$.
This again arises from the nature of the joint optimization problem:
since we are allowed to control the input shape and sampling jointly,
we can adjust the input shape based on the channel structure in each
branch, which turns out to be alias-suppressing.
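At a single frequency, the selection rule (\ref{eq:optimalFilterBank})
reduces to a sort. The Python sketch below (with assumed gains again) builds
the indicator filters and reads off the resulting eigenvalues, which are
simply the top-$M$ per-alias SNRs.
\begin{verbatim}
import numpy as np

M, L = 2, 9
fs, f = 0.4, 0.05
l = np.arange(-(L // 2), L // 2 + 1)
fa = f - l * fs / M
snr = np.maximum(0.0, 1 - np.abs(fa)) ** 2 / 0.1   # |H|^2 / S_eta per alias

order = np.argsort(snr)[::-1]                  # aliases sorted by SNR
S = np.zeros((M, L))
for k in range(M):
    S[k, order[k]] = 1.0                       # k-th filter picks k-th best

# with these indicator filters, F~_s F_h F_h^* F~_s^* is diagonal and its
# eigenvalues are the M largest per-alias SNRs
print("top-M folded SNRs:", snr[order[:M]])
\end{verbatim}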
\subsection{Discussion and Numerical Examples }
In a \textit{monotone} channel, the optimal filter bank will sequentially
crop out the $M$ best frequency bands, each of bandwidth $f_{s}/M$.
Concatenating all of these frequency bands results in a low-pass filter
with cut-off frequency $f_{s}/2$, which is equivalent to single-branch
sampling with an optimal filter. In other words, for monotone channels,
using filter banks harvests no gain in capacity compared to a single
branch with a filter followed by sampling.
For more general channels, the capacity is not necessarily a monotone
function of $f_{s}$. Consider again the multiband channel where the
channel response is concentrated in two sub-intervals, as illustrated
in Fig. \ref{fig:UniformSamplerMultiband}(a). As discussed above,
sampling following a single filter only allows us to select the best
single frequency with the highest SNR out of the set $\left\{ f-lf_{s}\mid l\in\mathbb{Z}\right\} $,
while sampling following filter banks allows us to select the best
$f$ out of the set $\left\{ f-l\frac{f_{s}}{M}\mid l\in\mathbb{Z}\right\} $.
Consequently, the channel capacity with filter-bank sampling exceeds
that of sampling with a single filter, but neither capacity is monotonically
increasing in $f_{s}$. This is shown in Fig. \ref{fig:UniformSamplerMultiband}(b).
Specifically, we see in this figure that when we apply a bank of two
filters prior to sampling, the capacity curve is still non-monotonic
but outperforms a single filter followed by sampling.
Another consequence of our results is that when the number of branches
is optimally chosen, the Nyquist-rate channel capacity can be achieved
by sampling at any rate above the Landau rate. In order to show this,
we introduce the following notion of a channel permutation. We call
$\tilde{H}(f)$ a \textit{permutation} of a channel response $H(f)$
at rate $f_{s}$ if, for any $f$,
\[
\left\{ \frac{|\tilde{H}(f-lf_{s})|^{2}}{\mathcal{S}_{\eta}(f-lf_{s})}\mid l\in\mathbb{Z}\right\} =\left\{ \frac{\left|H(f-lf_{s})\right|^{2}}{\mathcal{S}_{\eta}(f-lf_{s})}\mid l\in\mathbb{Z}\right\} .
\]
The following proposition characterizes a sufficient condition that
allows the Nyquist-rate channel capacity to be achieved at any sampling
rate above the Landau rate.
\begin{prop}\label{propLandauRateSampling} If there exists a permutation
$\tilde{H}(f)$ of $H(f)$ at rate $\frac{f_{s}}{M}$ such that the
support of $\tilde{H}(f)$ is $[-f_{L}/2,f_{L}/2]$, then optimal
sampling following a bank of $M$ filters achieves Nyquist-rate capacity
when $f_{s}\geq f_{L}$. \end{prop}
Examples of channels satisfying Proposition \ref{propLandauRateSampling}
include any multiband channel with $N$ subbands among which $K$
subbands have non-zero channel gain. For any $f_{s}\geq f_{L}=\frac{K}{N}f_{\text{NYQ}}$,
we are always able to permute the channel at rate $f_{s}/K$ to generate
a band-limited channel of spectral support size $f_{L}$. Hence, sampling
above the Landau rate following $K$ filters achieves the Nyquist-rate
channel capacity. This is illustrated in Fig. \ref{fig:UniformSamplerMultiband}(b)
where sampling with a four-branch filter bank has a higher capacity
than sampling with a single filter, and achieves the Nyquist-rate
capacity whenever $f_{s}\geq\frac{2}{5}f_{\text{NYQ}}$. The optimal
filter-bank sampling for most general channels is identified in \cite{ChenEldarGoldsmith2012},
where both the number of branches and per-branch sampling rate are
allowed to vary.
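For the multiband example of Fig. \ref{fig:UniformSamplerMultiband}(a)
($K=2$ active among $N=5$ subbands, $f_{L}=\frac{2}{5}$), the proposition
can be checked by counting active aliases at rate $f_{s}/M=f_{L}/K$; the
Python sketch below confirms that every $f$ in the per-branch band sees
exactly $K$ active aliases, so the top-$K$ selection loses nothing relative
to Nyquist-rate water-filling.
\begin{verbatim}
import numpy as np

fs, M, L = 0.4, 2, 21                          # f_s = f_L, M = K = 2
f = np.linspace(-fs / (2 * M), fs / (2 * M), 1000, endpoint=False)
l = np.arange(-(L // 2), L // 2 + 1)[:, None]
fa = np.abs(f[None, :] - l * fs / M)           # |f - l fs / M|
active = ((fa >= 0.1) & (fa < 0.2)) | ((fa >= 0.4) & (fa < 0.5))
counts = active.sum(axis=0)
print("active aliases per f: min =", counts.min(), ", max =", counts.max())
\end{verbatim}
Both counts equal $2$: every aliased set contributes exactly two active
frequencies, which the two branches can separate without collision.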
\begin{comment}
\section{Filter Bank with Multiple Antennas}
\subsection{Analysis}
\subsubsection{Channel Discretization and Diagonalization}
Similarly, let $\tilde{T}_{s}=mT_{s}$, and suppose we have $T=n\tilde{T}_{s}$
and $\tilde{T}_{s}=k\Delta$ with integers $n$ and $k$. We can transform
\begin{equation}
{\bf y}_{i}=\left[\begin{array}{c}
y_{i}[n]\\
y_{i}[n-1]\\
\vdots\\
y_{i}[1]
\end{array}\right]
\end{equation}
into
\begin{equation}
\tilde{{\bf y}}_{i}=\left({\bf S}_{n}^{i}\left({\bf S}_{n}^{i}\right)^{*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}_{n,nk}^{i}{\bf x}_{nk}+\tilde{{\bf \eta}}_{n}
\end{equation}
where $\tilde{{\bf \eta}}_{n}$ is independent Gaussian noise with
variance $\sigma_{\eta}^{2}/\Delta$. Let $\overline{{\bf H}}_{n,nk}^{i}:=\left({\bf S}_{n}^{i}\left({\bf S}_{n}^{i}\right)^{*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}_{n,nk}^{i}$,
and define the matrix
\begin{equation}
\overline{{\bf H}}_{n,nk}=\left[\begin{array}{c}
\overline{{\bf H}}_{n,nk}^{1}\\
\overline{{\bf H}}_{n,nk}^{2}\\
\vdots\\
\overline{{\bf H}}_{n,nk}^{m}
\end{array}\right]\quad\quad\mbox{and}\quad\quad\tilde{{\bf y}}_{n}=\overline{{\bf H}}_{n,nk}{\bf x}_{nk}+\tilde{{\bf \eta}}_{n}
\end{equation}
Our metric of interest might then become
\begin{align*}
C(f_{s}) & =\lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty}\frac{f_{s}}{mn}\max_{p(x):\frac{1}{nk}\mathbb{E}\left(\left\Vert x_{n,k}\right\Vert _{2}^{2}\right)\leq P}I\left({\bf x}_{n,k};\tilde{{\bf y}}_{n}\right)
\end{align*}
\subsubsection{Proof of Theorem \ref{thmPerfectCSIFilterBankMultiAntenna}}
The channel capacity with multiple receive antennas can be computed
as follows.
$\quad$
The problem now becomes how to determine the asymptotic distribution
of the eigenvalues of
\begin{equation}
\overline{{\bf H}}_{n,nk}\overline{{\bf H}}_{n,nk}^{*}=\left[\begin{array}{cccc}
\overline{{\bf H}}_{n,nk}^{1}\left(\overline{{\bf H}}_{n,nk}^{1}\right)^{*} & \overline{{\bf H}}_{n,nk}^{1}\left(\overline{{\bf H}}_{n,nk}^{2}\right)^{*} & \cdots & \overline{{\bf H}}_{n,nk}^{1}\left(\overline{{\bf H}}_{n,nk}^{m}\right)^{*}\\
\overline{{\bf H}}_{n,nk}^{2}\left(\overline{{\bf H}}_{n,nk}^{1}\right)^{*} & \overline{{\bf H}}_{n,nk}^{2}\left(\overline{{\bf H}}_{n,nk}^{2}\right)^{*} & \cdots & \overline{{\bf H}}_{n,nk}^{2}\left(\overline{{\bf H}}_{n,nk}^{m}\right)^{*}\\
\vdots & \vdots & \vdots & \vdots\\
\overline{{\bf H}}_{n,nk}^{m}\left(\overline{{\bf H}}_{n,nk}^{1}\right)^{*} & \overline{{\bf H}}_{n,nk}^{m}\left(\overline{{\bf H}}_{n,nk}^{2}\right)^{*} & \cdots & \overline{{\bf H}}_{n,nk}^{m}\left(\overline{{\bf H}}_{n,nk}^{m}\right)^{*}
\end{array}\right]
\end{equation}
Suppose $\tilde{{\bf H}}_{n,nk}^{i}=\left[\begin{array}{cccc}
\tilde{{\bf h}}_{0}^{i} & \tilde{{\bf h}}_{1}^{i} & \cdots & \tilde{{\bf h}}_{n-1}^{i}\\
0 & \tilde{{\bf h}}_{0}^{i} & \cdots & \tilde{{\bf h}}_{n-2}^{i}\\
\vdots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \tilde{{\bf h}}_{0}^{i}
\end{array}\right]$, then we can define a set of $n\times n$ matrices $\left\{ \hat{{\bf H}}_{n}^{u,v},1\leq u,v\leq m\right\} $
that satisfies for any $i\leq j$
\begin{equation}
\left(\hat{{\bf H}}_{n}^{u,v}\right)_{i,j}=\left(\hat{{\bf H}}_{n}^{u,v}\right)_{j,i}^{*}=\sum_{t=0}^{\infty}\tilde{{\bf h}}_{j-i+t}^{u}\left(\tilde{{\bf h}}_{t}^{v}\right)^{*}
\end{equation}
Proceeding with similar spirits as Lemma \ref{lemmaAsymptoticEquivalenceSH},
we can prove that
\begin{equation}
\tilde{{\bf H}}_{n,nk}^{u}\left(\tilde{{\bf H}}_{n,nk}^{v}\right)^{*}\sim\hat{{\bf H}}_{n}^{u,v}
\end{equation}
which immediately yields
\begin{equation}
\overline{{\bf H}}_{n,nk}^{u}\overline{{\bf H}}_{n,nk}^{v}\sim{\bf C}_{n,0.5}^{u}\hat{{\bf H}}_{n}^{u,v}{\bf C}_{n,0.5}^{v}
\end{equation}
where ${\bf C}_{n,0.5}^{i}$ is the circulant matrix asymptotically
equivalent to $\left({\bf S}_{n}^{i}\left({\bf S}_{n}^{i}\right)^{*}\right)^{-\frac{1}{2}}$.
The Fourier series related to $\hat{{\bf H}}_{n}^{u,v}$ can be computed
as
\begin{align*}
f_{\hat{h}}^{u,v}\left(\omega\right) & =\sum_{k=-\infty}^{\infty}\left(\sum_{t=-\infty}^{\infty}\tilde{{\bf h}}_{k+t}^{u}\left(\tilde{{\bf h}}_{t}^{v}\right)^{*}\right)\exp\left(\hat{j}k\omega\right)\\
& \approx\Delta\sum_{k=-\infty}^{\infty}\left({\displaystyle \int}_{-\infty}^{\infty}\tilde{{\bf h}}^{u}\left(t+kT_{s}\right)\left(\tilde{{\bf h}}^{v}\left(t\right)\right)^{*}\text{d}t\right)\exp\left(\hat{j}k\omega\right)\\
& =\Delta{\displaystyle \int}_{-\infty}^{\infty}\left({\displaystyle \int}_{-\infty}^{\infty}\tilde{{\bf h}}^{u}\left(t+\tau\right)\left(\tilde{{\bf h}}^{v}\left(t\right)\right)^{*}\text{d}t\right)\left(\sum_{k=-\infty}^{\infty}\delta\left(\tau-kT_{s}\right)\right)\exp\left(\hat{j}\frac{\omega}{T_{s}}\tau\right)\text{d}\tau
\end{align*}
The following cross spectral density can be computed as
\begin{align*}
& {\displaystyle \int}_{-\infty}^{\infty}\left({\displaystyle \int}_{-\infty}^{\infty}\tilde{{\bf h}}^{u}\left(t+\tau\right)\left(\tilde{{\bf h}}^{v}\left(t\right)\right)^{*}\text{d}t\right)\exp\left(\hat{j}\Omega\tau\right)\text{d}\tau\\
= & \left({\displaystyle \int}_{-\infty}^{\infty}\tilde{{\bf h}}^{u}\left(t+\tau\right)\exp\left(\hat{j}\Omega\left(t+\tau\right)\right)\text{d}\tau\right){\displaystyle \int}_{-\infty}^{\infty}\left(\tilde{{\bf h}}^{v}\left(t\right)\right)^{*}\exp\left(-\hat{j}\Omega t\right)\text{d}t\\
= & \tilde{H}_{c}^{u}(\hat{j}\Omega)\left(\tilde{H}_{c}^{v}(\hat{j}\Omega)\right)^{*}\\
= & H_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\left(H_{c}^{v}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right)^{*}S_{c}^{u}\left(\hat{j}\Omega\right)\left(S_{c}^{v}\left(\hat{j}\Omega\right)\right)^{*}
\end{align*}
Therefore, the Fourier series can be derived by treating it as a
uniform sampling of ${\displaystyle \int}_{-\infty}^{\infty}\tilde{{\bf h}}^{u}\left(t+\tau\right)\left(\tilde{{\bf h}}^{v}\left(t\right)\right)^{*}\text{d}t$:
\begin{equation}
f_{\hat{h}}^{u,v}\left(\omega\right)=\frac{\Delta}{T_{s}}\sum_{i=-\infty}^{\infty}H_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\left(H_{c}^{v}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right)^{*}S_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\left(S_{c}^{v}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right)^{*}
\end{equation}
Therefore, the Fourier series with respect to ${\bf C}_{n,0.5}^{u}\hat{{\bf H}}_{n}^{u,v}{\bf C}_{n,0.5}^{v}$
can now be obtained as
\begin{align*}
f^{u,v}\left(\omega\right) & =\frac{f_{\hat{h}}^{u,v}\left(\omega\right)}{f_{c,0.5}^{u}\left(\omega\right)f_{c,0.5}^{v}\left(\omega\right)}\\
& =\frac{\sum_{i=-\infty}^{\infty}H_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)S_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\left(H_{c}^{v}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)S_{c}^{v}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right)^{*}}{\left(\sum_{i=-\infty}^{\infty}\left|S_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right|^{2}\right)^{\frac{1}{2}}\left(\sum_{i=-\infty}^{\infty}\left|S_{c}^{v}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right|^{2}\right)^{\frac{1}{2}}}\\
& =\sum_{i=-\infty}^{\infty}H_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\left(H_{c}^{v}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right)^{*}\hat{S}_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\left(\hat{S}_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right)^{*}
\end{align*}
where $\hat{S}_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right):=\frac{S_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)}{\left(\sum_{i=-\infty}^{\infty}\left|S_{c}^{u}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right|^{2}\right)^{\frac{1}{2}}}$
denotes the normalized frequency response at $\omega$.
Define the normalized infinite matrix $\hat{{\bf S}}_{m}$ at frequency
$\omega$ as
\begin{equation}
\hat{{\bf H}}_{d}(\omega)=\left[\begin{array}{cccc}
\hat{S}_{c}^{1}\left(\hat{j}\left(\frac{\omega}{T_{s}}\right)\right)H_{c}^{1}\left(\hat{j}\left(\frac{\omega}{T_{s}}\right)\right) & \hat{S}_{c}^{1}\left(\hat{j}\left(\frac{\omega}{T_{s}}+\frac{2\pi}{T_{s}}\right)\right)H_{c}^{1}\left(\hat{j}\left(\frac{\omega}{T_{s}}+\frac{2\pi}{T_{s}}\right)\right) & \hat{S}_{c}^{1}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{2\pi}{T_{s}}\right)\right)H_{c}^{1}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{2\pi}{T_{s}}\right)\right) & \cdots\\
\hat{S}_{c}^{2}\left(\hat{j}\left(\frac{\omega}{T_{s}}\right)\right)H_{c}^{2}\left(\hat{j}\left(\frac{\omega}{T_{s}}\right)\right) & \hat{S}_{c}^{2}\left(\hat{j}\left(\frac{\omega}{T_{s}}+\frac{2\pi}{T_{s}}\right)\right)H_{c}^{2}\left(\hat{j}\left(\frac{\omega}{T_{s}}+\frac{2\pi}{T_{s}}\right)\right) & \hat{S}_{c}^{2}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{2\pi}{T_{s}}\right)\right)H_{c}^{2}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{2\pi}{T_{s}}\right)\right) & \cdots\\
\vdots & \vdots & \vdots & \vdots\\
\hat{S}_{c}^{m}\left(\hat{j}\left(\frac{\omega}{T_{s}}\right)\right)H_{c}^{m}\left(\hat{j}\left(\frac{\omega}{T_{s}}\right)\right) & \hat{S}_{c}^{m}\left(\hat{j}\left(\frac{\omega}{T_{s}}+\frac{2\pi}{T_{s}}\right)\right)H_{c}^{m}\left(\hat{j}\left(\frac{\omega}{T_{s}}+\frac{2\pi}{T_{s}}\right)\right) & \hat{S}_{c}^{m}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{2\pi}{T_{s}}\right)\right)H_{c}^{m}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{2\pi}{T_{s}}\right)\right) & \cdots
\end{array}\right]\label{eq:MultiAntennaHd}
\end{equation}
Then the Fourier symbol associated with the block matrix can be given
as
\begin{equation}
f(\omega)=\left[f^{u,v}\left(\omega\right)\right]=\hat{{\bf H}}_{d}(\omega)\hat{{\bf H}}_{d}^{*}(\omega)\label{eq:MultiAntennaFourierSymbol}
\end{equation}
For a positive semidefinite Hermitian block Toeplitz matrix, the
asymptotic distribution of its eigenvalues (or singular values) can
be calculated through the following result: for a given continuous
function $F(x)$, we have \cite[Theorem 4.3]{Tilli98}
\begin{equation}
\lim_{n\rightarrow\infty}\frac{1}{nm}\sum_{i=1}^{nm}F\left(\lambda_{i}\left(\overline{{\bf H}}_{n,nk}\overline{{\bf H}}_{n,nk}^{*}\right)\right)=\frac{1}{2\pi}{\displaystyle \int}_{-\pi}^{\pi}\frac{1}{m}\sum_{i=1}^{m}F\left(\lambda_{i}\left(f(\omega)\right)\right)\text{d}\omega
\end{equation}
Using water filling argument, we can now obtain
\begin{align}
C & =\frac{f_{s}}{2\pi}\max_{Q(\omega)\in\mathcal{Q}}{\displaystyle \int}\frac{1}{m}\log\left(\det\left({\bf I}_{m}+\hat{{\bf H}}_{d}(\omega)Q(\omega)\hat{{\bf H}}_{d}^{*}(\omega)\right)\right)\text{d}\omega
\end{align}
where
\begin{equation}
\mathcal{Q}=\left\{ Q(\omega):Q(\omega)\text{ is Hermitian positive semidefinite for }\omega\in\left[-\pi,\pi\right],\int_{-\pi}^{\pi}\frac{1}{m}\text{Tr}\left(Q(\omega)\right)\text{d}\omega=PT_{s}\right\}
\end{equation}
\subsection{Implications}
\end{comment}
\section{Modulation and Filter Banks Followed by Sampling\label{sec:Multi-channel-Pre-modulated-Pre-filtered}}
\subsection{Main Results}
We now treat modulation and filter banks followed by sampling. Assume
that $\tilde{T}_{s}:=MT_{s}=\frac{b}{a}T_{q}$ where $a$ and $b$
are coprime integers, and that the Fourier transform of $q_{i}(t)$
is given as $\sum_{l}c_{i}^{l}\delta(f-lf_{q})$. Before stating our
theorem, we introduce the following two Fourier symbol matrices ${\bf F}^{\eta}$
and ${\bf F}^{h}$. The $aM\times\infty$-dimensional matrix ${\bf F}^{\eta}$
contains $M$ submatrices with the $\alpha$th submatrix given by
an $a\times\infty$-dimensional matrix ${\bf F}_{\alpha}^{\eta}{\bf F}_{\alpha}^{p}$.
Here, for any $v\in\mathbb{Z}$, $1\leq l\leq a$, and $1\leq\alpha\leq M$,
we have
\begin{align*}
\left({\bf F}_{\alpha}^{\eta}{\bf F}_{\alpha}^{p}\right)_{l,v} & =\left({\bf F}_{\alpha}^{p}\right)_{v,v}\left[\sum_{u}c_{\alpha}^{u}S_{\alpha}\left(-f+uf_{q}+v\frac{f_{q}}{b}\right)\right.\\
& \quad\quad\quad\quad\left.\cdot\exp\left(-j2\pi lMT_{s}\left(f-uf_{q}-v\frac{f_{q}}{b}\right)\right)\right].
\end{align*}
The matrices ${\bf F}_{\alpha}^{p}$ and ${\bf F}^{h}$ are infinite
diagonal matrices such that for every integer $l$:
\begin{align*}
\left({\bf F}_{\alpha}^{p}\right)_{l,l} & =P_{\alpha}\left(-f+l\frac{f_{q}}{b}\right)\sqrt{\mathcal{S}_{\eta}\left(-f+l\frac{f_{q}}{b}\right)},\\
\left({\bf F}^{h}\right)_{l,l} & =\frac{H\left(-f+l\frac{f_{q}}{b}\right)}{\sqrt{\mathcal{S}_{\eta}\left(-f+l\frac{f_{q}}{b}\right)}}.
\end{align*}
\begin{theorem}\label{thmPremodulatedFilterBank}Consider the system
shown in Fig. \ref{fig:PremodulatedPrefilteredSampler}. Assume that
$h(t)$, $p_{i}(t)$ and $s_{i}(t)$ $(1\leq i\leq M)$ are all continuous,
bounded and absolutely Riemann integrable, ${\bf F}^{\eta}$ is right
invertible, and that the Fourier transform of $q_{i}(t)$ is given
as $\sum_{l}c_{i}^{l}\delta(f-lf_{q})$. Additionally, suppose that
$h_{\eta}(t):=\mathcal{F}^{-1}\left(\frac{H\left(f\right)}{\sqrt{\mathcal{S}_{\eta}(f)}}\right)$
satisfies $h_{\eta}(t)=o\left(t^{-\epsilon}\right)$ for some constant
$\epsilon>1$. We further assume that $aMT_{s}=bT_{q}$ where $a$
and $b$ are coprime integers. The capacity $C(f_{s})$ of the sampled
channel with a power constraint $P$ is given by
\begin{align}
C(f_{s}) & ={\displaystyle \int}_{-\frac{f_{s}}{2aM}}^{\frac{f_{s}}{2aM}}\frac{1}{2}\sum_{i=1}^{aM}\log^{+}\left(\nu\lambda_{i}\left(\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}{\bf F}^{\eta}{\bf F}^{h}\cdot\right.\right.\nonumber \\
& \quad\quad\quad\quad\quad\quad\left.\left.{\bf F}^{h*}{\bf F}^{\eta*}\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}\right)\right)\mathrm{d}f,\label{eq:CapacityModulationBank}
\end{align}
where $\nu$ is chosen such that
\begin{align*}
P & ={\displaystyle \int}_{-\frac{f_{s}}{2aM}}^{\frac{f_{s}}{2aM}}\sum_{i=1}^{aM}\left[\nu-\lambda_{i}^{-1}\left(\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}{\bf F}^{\eta}{\bf F}^{h}\cdot\right.\right.\\
& \quad\quad\quad\quad\quad\quad\left.\left.{\bf F}^{h*}{\bf F}^{\eta*}\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}\right)\right]^{+}\mathrm{d}f.
\end{align*}
\end{theorem}
\begin{remark}The right invertibility of ${\bf F}^{\eta}$ ensures
that the sampling method is non-degenerate; for instance, the modulation
sequence cannot be identically zero. \end{remark}
The optimal $\nu$ corresponds to a water-filling power allocation
strategy based on the singular values of the equivalent channel matrix
$\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}{\bf F}^{\eta}{\bf F}^{h}$,
where $\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}$
is due to noise prewhitening and ${\bf F}^{\eta}{\bf F}^{h}$ is the
equivalent channel matrix after modulation and filtering. This result
can again be interpreted by viewing (\ref{eq:CapacityModulationBank})
as the MIMO Gaussian channel capacity of the equivalent channel. We
note that a closed-form capacity expression may be hard to obtain
for general modulating sequences $q_{i}(t)$. This is because multiplication
in the time domain corresponds to convolution in the frequency
domain, which does not preserve the Toeplitz properties of the original
operator associated with the channel filter. When $q_{i}(t)$ is periodic,
however, it can be mapped to a spike train in the frequency domain,
which preserves block Toeplitz properties, as described in more detail
in Appendix \ref{sec:Proof-of-Theorem-Premodulated-Filter-Bank}.
\subsection{Approximate Analysis}
The Fourier transform of the signal prior to modulation in the $i$th
branch at a given frequency $f$ can be expressed as $P_{i}(f)R(f)$,
where $R(f)=H(f)X(f)+N(f)$. Multiplication of this pre-modulation
signal with the modulation sequence $q_{i}(t)=\sum_{l}c_{i}^{l}\delta\left(f-lf_{q}\right)$
corresponds to convolution in the frequency domain.
Recall that $bT_{q}=aMT_{s}$ with integers $a$ and $b$. We therefore
divide all samples $\left\{ y_{i}[k]\mid k\in\mathbb{Z}\right\} $
in the $i$th branch into $a$ groups, where the $l$th ($0\leq l<a$)
group contains $\left\{ y_{i}[l+ka]\mid k\in\mathbb{Z}\right\} $.
Hence, each group is equivalent to the samples obtained by sampling
at rate $f_{s}/Ma=f_{q}/b$. The sampling system, when restricted
to the output on each group of the sampling set, can be treated as
LTI, thus justifying its equivalent representation in the spectral
domain. Specifically, for the $i$th branch, we denote by
\[
g_{\eta}^{i}(t,\tau):=\int s_{i}(t-\tau_{1})q_{i}(\tau_{1})p(\tau_{1}-\tau)\mathrm{d}\tau_{1}
\]
the output response of the preprocessing system at time $t$ due to
an input impulse at time $\tau$. We then introduce a new LTI impulse
response $\tilde{g}_{l}^{i}(t)$ associated with the $l$th group
such that $\tilde{g}_{l}^{i}(t):=g_{\eta}^{i}(l\tilde{T}_{s},l\tilde{T}_{s}-t)$.
It can easily be shown that when the same sampling set $\left\{ \left(l+ka\right)\tilde{T}_{s}\mid k\in\mathbb{Z}\right\} $
is employed, the preprocessing system associated with $g_{\eta}^{i}(t,\tau)$
results in the same sampled output as the one associated with $\tilde{g}_{l}^{i}(t)$.
This allows us to treat the samples of each distinct group as the
ones obtained by an LTI preprocessing system followed by uniform sampling.
Suppose the channel output $R(f)$ is passed through the LTI preprocessing
system associated with the $l$th group of the $i$th branch, i.e.
the one associated with $\tilde{g}_{l}^{i}(t)$. The Fourier transform
of the output of this LTI system prior to uniform sampling, as marked
in Fig. \ref{fig:Equivalent-MIMO-Gaussian-Channel-Modulation}(b),
can be written as
\begin{align*}
& \tilde{Y}_{i}^{l}(f)\\
\overset{\Delta}{=} & P_{i}(f)R(f)\left(S_{i}(f)\exp\left(j2\pi fl\tilde{T}_{s}\right)*\sum_{u}c_{i}^{u}\delta\left(f-uf_{q}\right)\right)\\
= & P_{i}(f)R(f)\sum_{u}c_{i}^{u}S_{i}\left(f-uf_{q}\right)\exp\left(j2\pi l\tilde{T}_{s}\left(f-uf_{q}\right)\right).
\end{align*}
After uniform sampling at rate $f_{q}/b$, the Fourier transform of
the samples in the $l$th group can be expressed as
\begin{align*}
& Y_{i}^{l}(f)=\sum_{v}\tilde{Y}_{i}^{l}\left(f-\frac{vf_{q}}{b}\right)\\
= & \sum_{v}P_{i}\left(f-\frac{vf_{q}}{b}\right)R\left(f-\frac{vf_{q}}{b}\right)\sum_{u}c_{i}^{u}\cdot\\
& S_{i}\left(f-uf_{q}-\frac{vf_{q}}{b}\right)\exp\left(j2\pi l\tilde{T}_{s}\left(f-uf_{q}-\frac{vf_{q}}{b}\right)\right)\\
= & \sum_{v}A_{l,v}^{i}(f)P_{i}\left(f-v\frac{f_{q}}{b}\right)R\left(f-v\frac{f_{q}}{b}\right),
\end{align*}
where
\begin{align}
A_{l,v}^{i}(f) & \overset{\Delta}{=}\sum_{u}c_{i}^{u}S_{i}\left(f-uf_{q}-\frac{vf_{q}}{b}\right)\cdot\nonumber \\
& \quad\quad\quad\quad\exp\left(j2\pi l\tilde{T}_{s}\left(f-uf_{q}-\frac{vf_{q}}{b}\right)\right).\label{eq:AvlModulation}
\end{align}
\begin{figure}
\centering
\includegraphics[scale=0.35]{ModulatedBankMIMOGaussianChannel.pdf}
\begin{centering}
(a)
\par\end{centering}
\includegraphics[scale=0.35]{ModulatedBankMIMOSubGroup.pdf}
\begin{centering}
(b)
\par\end{centering}
\caption{\label{fig:Equivalent-MIMO-Gaussian-Channel-Modulation}Equivalent
MIMO Gaussian channel for a given $f\in\left[0,f_{q}/b\right)$ under
sampling with modulation banks and filter banks. (a) The overall MIMO
representation, where each branch has $a$ outputs, each corresponding
to a distinct group. (b) The MISO representation of the $l$th group
in the $i$th branch, where $A_{l,v}^{i}(f)$ is defined in (\ref{eq:AvlModulation}).
This is associated with the set of samples $\left\{ y_{i}[l+ka]\mid k\in\mathbb{Z}\right\} $.}
\end{figure}
Since the sampled outputs of the original sampling system are equivalent
to the union of samples obtained by $Ma$ LTI systems each followed
by uniform sampling at rate $f_{q}/b$, we can transform the overall
sampling system into a MIMO Gaussian channel with an infinite number
of input branches and finitely many output branches, as illustrated
in Fig. \ref{fig:Equivalent-MIMO-Gaussian-Channel-Modulation}. The
well-known formula for the capacity of a MIMO channel can now be used
to derive our capacity results.
We note that due to the convolution in the spectral domain, the frequency
response of the sampled output at frequency $f$ is a linear combination
of frequency components $\left\{ X(f)\right\} $ and $\left\{ N(f)\right\} $
from several different aliased frequency sets. We define the \emph{modulated
aliased frequency set} as a generalization of the aliased set. Specifically,
for each $f$, the modulated aliased set is given by%
\footnote{We note that although each modulated aliased set is countable, it
may be a dense set when $f_{q}/\tilde{f}_{s}$ is irrational. Under
the assumption in Theorem \ref{thmPremodulatedFilterBank}, however,
the elements in the set have a minimum spacing of $f_{q}/b$.%
} $\left\{ f-lf_{q}-k\tilde{f}_{s}\mid l,k\in\mathbb{Z}\right\} $.
By our assumption that $f_{q}=\frac{b}{a}\tilde{f}_{s}$ with $a$
and $b$ relatively prime, every element of this set can be written
as $f_{0}-lf_{q}-k\tilde{f}_{s}=f_{0}-\left(lb+ka\right)f_{q}/b$,
and B\'ezout's identity gives $\left\{ lb+ka\mid l,k\in\mathbb{Z}\right\} =\mathbb{Z}$.
Hence
\begin{align*}
\left\{ f_{0}-lf_{q}-k\tilde{f}_{s}\mid l,k\in\mathbb{Z}\right\} & =\left\{ f_{0}-lf_{q}/b\mid l\in\mathbb{Z}\right\} \\
 & =\left\{ f_{0}-l\tilde{f}_{s}/a\mid l\in\mathbb{Z}\right\} .
\end{align*}
In other words, for a given $f_{0}\in\left[-f_{q}/2b,f_{q}/2b\right]$,
the sampled output at $f_{0}$ depends on the input in the entire
modulated aliased set. Since the sampling bandwidth at each branch
is $\tilde{f}_{s}$, all outputs at frequencies $\left\{ f_{0}-lf_{q}/b\mid l\in\mathbb{Z};\text{ }-\tilde{f}_{s}/2\leq f_{0}-lf_{q}/b\leq\tilde{f}_{s}/2\right\} $
rely on the inputs in the same modulated aliased set. This can be
treated as a Gaussian MIMO channel with a countable number of input
branches at the frequency set $\left\{ f_{0}-l\tilde{f}_{s}/a\mid l\in\mathbb{Z}\right\} $
and $aM$ groups of output branches, each associated with one group
of sample sequences in one branch. As an example, we illustrate in
Fig. \ref{fig:Equivalent-MIMO-Gaussian-Channel-Modulation} the equivalent
MIMO Gaussian channel under sampling following a single branch of
modulation and filtering, when $S(f)=0$ for all $f\notin\left[-f_{s}/2,f_{s}/2\right]$.
\begin{comment}
\begin{figure}
\centering\includegraphics[scale=0.42]{SamplingGridModulationGroup.pdf}
\includegraphics[scale=0.45]{ModulatedBankMIMOGaussianChannel.pdf}
\begin{centering}
(b)
\par\end{centering}
\includegraphics[scale=0.3]{ModulatedBankMIMOGaussianChannelParallel.pdf}
\begin{centering}
$\hspace{10em}$(c)
\par\end{centering}
\caption{\label{fig:Equivalent-MIMO-Gaussian-Channel-Modulation-1}(a) Grouping
of the sampling set when $f_{s}=3f_{q}$. The sampling grid is divided
into 3 groups, where each group forms a uniform set with rate $f_{s}/3$;
(b) Equivalent MIMO Gaussian channel for a given $f\in\left[0,\frac{f_{s}}{2}\right)$
under sampling following a single branch of modulation and filtering,
where $f_{q}=\frac{1}{2}f_{s}$ ; (c) An equivalent set of parallel
MIMO channels representing all $f\in[-f_{q}/2b,f_{q}/2b]$, where
the MIMO channel at a given frequency is equivalent to the MIMO channel
of Fig. \ref{fig:Equivalent-MIMO-Gaussian-Channel-Modulation-1}(a).}
\end{figure}
\end{comment}
The effective frequencies of this frequency-selective MIMO Gaussian
channel range from $-f_{q}/2b$ to $f_{q}/2b$, which gives us a set
of parallel channels each representing a single frequency $f$. The
water-filling power allocation strategy is then applied to achieve
capacity.
A rigorous proof of Theorem \ref{thmPremodulatedFilterBank} based
on Toeplitz properties is provided in Appendix \ref{sec:Proof-of-Theorem-Premodulated-Filter-Bank}.
\begin{comment}
Example.
\[
Y\left(f-\frac{f_{s}}{2}\right)=c^{0}X\left(f-\frac{f_{s}}{2}\right)+c^{-1}X\left(f\right)+c^{0}N\left(f-\frac{f_{s}}{2}\right)+c^{-1}N\left(f\right)
\]
and
\[
Y\left(f\right)=c^{1}X\left(f-\frac{f_{s}}{2}\right)+c^{0}X\left(f\right)+c^{1}N\left(f-\frac{f_{s}}{2}\right)+c^{0}N\left(f\right)
\]
In matrix form,
\[
\left[\begin{array}{c}
Y\left(f-\frac{f_{s}}{2}\right)\\
Y\left(f\right)\\
Y\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]=\left[\begin{array}{ccc}
2c^{0} & c^{-1} & c^{-2}\\
2c^{1} & c^{0} & c^{-1}\\
2c^{2} & c^{1} & c^{0}
\end{array}\right]\left[\begin{array}{c}
X\left(f-\frac{f_{s}}{2}\right)\\
X\left(f\right)\\
X\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]+\left[\begin{array}{ccc}
c^{0} & c^{-1} & c^{-2}\\
c^{1} & c^{0} & c^{-1}\\
c^{2} & c^{1} & c^{0}
\end{array}\right]\left[\begin{array}{c}
N\left(f-\frac{f_{s}}{2}\right)\\
N\left(f\right)\\
N\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]
\]
This means that
\begin{align*}
\left[\begin{array}{c}
Y\left(f-\frac{f_{s}}{2}\right)\\
Y\left(f\right)
\end{array}\right] & =\left[\begin{array}{ccc}
2c^{0}+2c^{2} & c^{-1}+c^{1} & c^{-2}+c^{0}\\
2c^{1} & c^{0} & c^{-1}
\end{array}\right]\left[\begin{array}{c}
X\left(f-\frac{f_{s}}{2}\right)\\
X\left(f\right)\\
X\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]+\left[\begin{array}{ccc}
c^{0}+c^{2} & c^{-1}+c^{1} & c^{-2}+c^{0}\\
c^{1} & c^{0} & c^{-1}
\end{array}\right]\left[\begin{array}{c}
N\left(f-\frac{f_{s}}{2}\right)\\
N\left(f\right)\\
N\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]\\
& =\left[\begin{array}{ccc}
2\tilde{c}^{2} & 2c^{1} & \tilde{c}^{-2}\\
2c^{1} & c^{0} & c^{1}
\end{array}\right]\left[\begin{array}{c}
X\left(f-\frac{f_{s}}{2}\right)\\
X\left(f\right)\\
X\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]+\left[\begin{array}{ccc}
\tilde{c}^{2} & 2c^{1} & \tilde{c}^{-2}\\
c^{1} & c^{0} & c^{1}
\end{array}\right]\left[\begin{array}{c}
N\left(f-\frac{f_{s}}{2}\right)\\
N\left(f\right)\\
N\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]
\end{align*}
Let $\tilde{c}^{2}=c^{2}+c^{0}$ and $\tilde{c}^{-2}=c^{-2}+c^{0}$,
and set $c^{1}=c^{-1}$. Take $\tilde{c}^{-2}=-c^{0}=\tilde{c}^{2}$.
Then this becomes
\[
\left[\begin{array}{c}
Y\left(f-\frac{f_{s}}{2}\right)\\
Y\left(f\right)
\end{array}\right]=\left[\begin{array}{ccc}
-2c^{0} & 2c^{1} & -c^{0}\\
2c^{1} & c^{0} & c^{1}
\end{array}\right]\left[\begin{array}{c}
X\left(f-\frac{f_{s}}{2}\right)\\
X\left(f\right)\\
X\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]+\left[\begin{array}{ccc}
-c^{0} & 2c^{1} & -c^{0}\\
c^{1} & c^{0} & c^{1}
\end{array}\right]\left[\begin{array}{c}
N\left(f-\frac{f_{s}}{2}\right)\\
N\left(f\right)\\
N\left(f+\frac{f_{s}}{2}\right)
\end{array}\right]
\]
\[
\left[\begin{array}{ccc}
-2c^{0} & 2c^{1} & -c^{0}\\
2c^{1} & c^{0} & c^{1}
\end{array}\right]\left[\begin{array}{cc}
-2c^{0} & 2c^{1}\\
2c^{1} & c^{0}\\
-c^{0} & c^{1}
\end{array}\right]=\left[\begin{array}{cc}
5\left(c^{0}\right)^{2}+4\left(c^{1}\right)^{2} & -3c^{0}c^{1}\\
-3c^{0}c^{1} & \left(c^{0}\right)^{2}+5\left(c^{1}\right)^{2}
\end{array}\right]
\]
Suppose $c^{1}=\alpha c^{0}$, then this reduces to
\[
\left[\begin{array}{cc}
\frac{1}{\sqrt{2+4\alpha^{2}}}\\
& \frac{1}{\sqrt{1+2\alpha^{2}}}
\end{array}\right]\left[\begin{array}{cc}
5+4\alpha^{2} & -3\alpha\\
-3\alpha & 1+5\alpha^{2}
\end{array}\right]\left[\begin{array}{cc}
\frac{1}{\sqrt{2+4\alpha^{2}}}\\
& \frac{1}{\sqrt{1+2\alpha^{2}}}
\end{array}\right]=\left[\begin{array}{cc}
\frac{5+4\alpha^{2}}{2+4\alpha^{2}} & -\frac{3\alpha}{\sqrt{\left(2+4\alpha^{2}\right)\left(1+2\alpha^{2}\right)}}\\
-\frac{3\alpha}{\sqrt{\left(2+4\alpha^{2}\right)\left(1+2\alpha^{2}\right)}} & \frac{1+5\alpha^{2}}{1+2\alpha^{2}}
\end{array}\right]
\]
\[
\det\left[\begin{array}{cc}
\lambda-\frac{5+4\alpha^{2}}{2+4\alpha^{2}} & \frac{3\alpha}{\sqrt{\left(2+4\alpha^{2}\right)\left(1+2\alpha^{2}\right)}}\\
\frac{3\alpha}{\sqrt{\left(2+4\alpha^{2}\right)\left(1+2\alpha^{2}\right)}} & \lambda-\frac{1+5\alpha^{2}}{1+2\alpha^{2}}
\end{array}\right]=\lambda^{2}-\frac{7}{2}\lambda+\frac{20\alpha^{4}+20\alpha^{2}+5}{\left(2+4\alpha^{2}\right)\left(1+2\alpha^{2}\right)}
\]
\end{comment}
\subsection{An Upper Bound on Sampled Capacity}
Following the same analysis of optimal filter-bank sampling developed
in Section \ref{sub:Optimal-Filter-Banks}, we can derive an upper
bound on the sampled channel capacity.
\begin{corollary}\label{Cor:UpperBoundModulationBank}Consider the
system shown in Fig. \ref{fig:PremodulatedPrefilteredSampler}. Suppose
that for each aliased set $\left\{ f-if_{q}/b\mid i\in\mathbb{Z}\right\} $
and each $k$ $(1\leq k\leq aM)$, there exists an integer $l$ such
that $\frac{\left|H\left(f-lf_{q}/b\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-lf_{q}/b\right)}$
is equal to the $k^{\text{th}}$ largest element in $\left\{ \frac{\left|H\left(f-if_{q}/b\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-if_{q}/b\right)}\mid i\in\mathbb{Z}\right\} $.
The capacity (\ref{eq:CapacityModulationBank}) under sampling following
modulation and filter banks can be upper bounded by
\begin{align}
C^{\text{u}}(f_{s}) & \overset{\Delta}{=}\frac{1}{2}{\displaystyle \int}_{-f_{q}/2b}^{f_{q}/2b}\sum_{k=1}^{aM}\log^{+}\left(\nu\cdot\lambda_{k}\left({\bf F}^{h}{\bf F}^{h*}\right)\right)\mathrm{d}f,
\end{align}
where $\nu$ is chosen such that
\begin{equation}
{\displaystyle \int}_{-f_{q}/2b}^{f_{q}/2b}\sum_{k=1}^{aM}\left[\nu-\frac{1}{\lambda_{k}\left({\bf F}^{h}{\bf F}^{h*}\right)}\right]^{+}\mathrm{d}f=P.
\end{equation}
\end{corollary}
\begin{IEEEproof}By observing that $\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}{\bf F}^{\eta}$
has orthonormal rows, we can derive the result using Proposition \ref{lem_BoundsOnMSingularValues}
in Appendix \ref{sec:Proof-of-Corollary-Optimal-Filter-Bank}. \end{IEEEproof}
The upper bound of Corollary \ref{Cor:UpperBoundModulationBank} coincides
with the upper bound on sampled capacity under $aM$-branch filter-bank
sampling. This implies that, for a given sampling rate $f_{s}$, sampling
with modulation and filter banks does not outperform filter-bank
sampling in maximizing sampled channel capacity. In other words, the
same performance can always be achieved by adding more branches in
filter-bank sampling.
Note however that this upper bound may not be tight, since we restrict
our analysis to periodic modulation sequences. General modulation
is not discussed here.
\subsection{Single-branch Sampling with Modulation and Filtering vs. Filter-bank
Sampling}
Although the class of modulation and filter bank sampling does not
provide capacity gain compared with filter-bank sampling, it may potentially
provide implementation advantages, depending on the modulation period
$T_{q}$. Specifically, modulation-bank sampling may achieve a larger
capacity region than that achievable by filter-bank sampling with
the same number of branches. We consider here two special cases of
single-branch modulation sampling, and investigate whether any hardware
benefit can be harvested.
\subsubsection{$f_{s}/M=f_{q}/a$ for some integer $a$}
In this case, the modulated aliased set is $\left\{ f-kf_{s}/M-lf_{q}\mid k,l\in\mathbb{Z}\right\} =\left\{ f-kf_{s}/M\mid k\in\mathbb{Z}\right\} $,
which is equivalent to the original aliased frequency set. Thus,
the sampled output $Y\left(f\right)$ is still a linear combination
of $\left\{ R\left(f-kf_{s}/M\right)\mid k\in\mathbb{Z}\right\} $.
But since linear combinations of these components can be attained
by simply adjusting the prefilter response $S(f)$, the modulation
bank does not provide any further design degrees of freedom, and hence
does not improve the capacity region achievable by sampling with a
bank of $M$ filters.
\subsubsection{$f_{s}/M=bf_{q}$ for some integer $b$}
In this case, the modulated aliased set is enlarged to $\left\{ f-kf_{s}/M-lf_{q}\mid k,l\in\mathbb{Z}\right\} =\left\{ f-lf_{q}\mid l\in\mathbb{Z}\right\} $,
which may potentially provide implementation gain compared with filter-bank
sampling with the same number of branches. We illustrate this in the
following example.
\begin{example}\label{Example-Modulation}Suppose that the channel
contains $3$ subbands with channel gains as plotted in Fig. \ref{fig:ModulationBankExampleChannelGain},
and that the noise is of unit spectral density within these 3 subbands
and 0 otherwise.
(i) Let us first consider single-branch sampling with filtering with
$f_{s}=2$. As illustrated in Fig. \ref{fig:ModulationBankExampleChannelGain},
Subbands 1 and 3 are mixed together due to aliasing. According to Section
\ref{sub:Optimal-Prefilters}, the optimal prefilter without modulation
would be a band-pass filter with passband $[-1.5,0.5]$, resulting
in a channel containing 2 subbands with respective channel gains $2$
and $1$.
\begin{figure}
\centering\includegraphics{ModulationExampleChannelGain.pdf}\caption{\label{fig:ModulationBankExampleChannelGain}The channel gain of Example
\ref{Example-Modulation}. The noise is of unit power spectral density. }
\end{figure}
(ii) If we add a modulation sequence with period $T_{q}=2T_{s}$,
then the channel structure can be better exploited. Specifically,
suppose that the modulation sequence obeys $c^{0}=1$, $c^{3}=1$,
and $c^{i}=0$ for all other $i$'s, and that the post-modulation
filter is a band-pass filter with passbands $[-1.5,-0.5]\cup[3.5,4.5]$.
We can see that this moves spectral contents of Subband 1 and Subband
3 to frequency bands $[-1.5,-0.5]$ and $[3.5,4.5]$, respectively,
which are alias-free. Therefore, we obtain a two-subband channel with
respective channel gains both equal to 2, thus outperforming a single
branch of sampling with filtering. \end{example}
More generally, let us consider the following scenario. Suppose that
the channel of bandwidth $W=\frac{2L}{K}f_{s}$ is equally divided
into $2L$ subbands each of bandwidth $f_{q}=f_{s}/K$ for some integers
$K$ and $L$. The SNR $\left|H\left(f\right)\right|^{2}/\mathcal{S}_{\eta}(f)$
within each subband is assumed to be flat. For instance, in the presence
of white noise, if $f_{q}\ll B_{c}$ with $B_{c}$ being the coherence
bandwidth \cite{Gold2005}, the channel gain (and hence the SNR) is
roughly equal across the subband. Algorithm 1 given below generates
an alias-free sampled analog channel, which is achieved by moving
the $K$ subbands with the highest SNRs to alias-free locations. By
Corollary \ref{Cor:UpperBoundModulationBank}, this algorithm determines
an optimal sampling mechanism that maximizes capacity under a single
branch of sampling with modulation and filtering. Specifically, take
any $f\in[-f_{q}/2,f_{q}/2]$, and the algorithm works as follows.\vspace{10pt}
\begin{center}
\begin{tabular}{>{\raggedright}p{3.2in}}
\hline
\textbf{Algorithm 1}\tabularnewline
\hline
1. \quad{}\textbf{Initialize.} Find the $K$ largest elements in
$\left\{ \frac{\left|H\left(f-lf_{q}\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-lf_{q}\right)}\mid l\in\mathbb{Z},-L\leq l\leq L-1\right\} $.
Denote by $\left\{ l_{i}\mid1\leq i\leq K\right\} $ the index set
of these $K$ elements such that $l_{1}>l_{2}>\cdots>l_{K}$. Set
$L^{*}:=\min\left\{ k\mid k\in\mathbb{Z},k\geq2L,k\text{ mod }K=0\right\} $
.\tabularnewline
2. \quad{}For $i=1:K$
\hspace{2.5em}Let $\alpha:=i\cdot L^{*}+i-l_{i}$.
\hspace{2.5em}Set $c^{\alpha}=1$, and $S(f+\alpha f_{q})=1$.\tabularnewline
\hline
\end{tabular}
\par\end{center}
Algorithm 1 first selects the $K$ subbands with the highest SNR,
and then moves each of the selected subbands to a new location by
appropriately setting $\left\{ c^{i}\right\} $, which guarantees
that (1) the movement does not corrupt any of the previously chosen
locations; (2) the contents in the newly chosen locations will be
alias-free. The post-modulation filter is applied to suppress the
frequency contents outside the set of newly chosen subband locations.
One drawback of this algorithm is that we need to preserve as many
as $2LK$ subbands in order to make it work.
\begin{comment}
One way to slightly improve the bandwidth efficiency is through Algorithm
2, which is provided and discussed in Appendix \ref{sec:Proof-of-Prop-Modulation-Example}.
\end{comment}
The performance of Algorithm 1 is equivalent to that of an optimal
filter bank followed by sampling at rate $f_{q}$ in each
branch. Hence, single-branch sampling effectively achieves the same
performance as multi-branch filter-bank sampling. This approach may
be preferred since building multiple analog filters is often expensive
(in terms of power consumption, size, or cost). We note, however,
that for a given overall sampling rate, modulation-bank sampling does
not outperform filter-bank sampling with an arbitrary number of branches.
The result is formally stated as follows.
\begin{prop}\label{prop-ModulationUpperBound}Consider the setup
in Theorem \ref{thmPremodulatedFilterBank}. For a given overall sampling
rate $f_{s}$, sampling with $M$ branches of optimal modulation and
filter banks does not achieve higher sampled capacity compared to
sampling with an optimal bank of $aM$ filters.
\end{prop}
Hence, the main advantage of applying a modulation bank is a hardware
benefit, namely, using fewer branches and hence less analog circuitry
to achieve the same capacity.
\section{Connections between Capacity and MMSE\label{sec:ConnectionCapacityMMSE}}
In Sections \ref{sub:Optimal-Prefilters} and \ref{sub:Optimal-Filter-Banks},
we derived respectively the optimal prefilter and the optimal filter
bank that maximize capacity. It turns out that such choices of sampling
methods coincide with the optimal prefilter / filter bank that minimize
the MSE between the Gaussian channel input and the signal reconstructed
from sampling the channel output, as detailed below.
Consider the following sampling problem. Let $x(t)$ be a zero-mean
wide-sense stationary (WSS) stochastic signal whose power spectral
density (PSD) $\mathcal{S}_{X}(f)$ satisfies a power constraint %
\footnote{We restrict our attention to WSS input signals. This restriction,
while falling short of generality, allows us to derive sampling results
in a simple way. %
} $\int_{-\infty}^{\infty}\mathcal{S}_{X}(f)\mathrm{d}f=P$. This input
is passed through a channel consisting of an LTI filter and additive
stationary Gaussian noise. We sample the channel output using a filter
bank at a fixed rate $f_{s}/M$ in each branch, and recover a \emph{linear}
MMSE estimate $\hat{x}(t)$ of $x(t)$ from its samples in the sense
of minimizing $\mathbb{E}(\left|x(t)-\hat{x}(t)\right|^{2})$ for
$t\in\mathbb{R}$. We propose to jointly optimize $x(t)$ and the
sampling method. Specifically, our joint optimization problem can
be posed as follows: for which input process $x(t)$ and for which
filter bank is the estimation error $\mathbb{E}(\left|x(t)-\hat{x}(t)\right|^{2})$
minimized for $t\in\mathbb{R}$?
\begin{comment}
We wish to show that the prefilter (\ref{eq:OptimalPrefilterGeneralUniformSampling})
minimizes the MSE between the original and the reconstructed signals.
To see this, we observe that the prefiltered noise prior to sampling
has power spectral density $\mathcal{S}_{\eta}(f)\left|S(f)\right|^{2}$.
Proceeding using a similar spirit as in \cite{Mat2000}, we can derive
the transfer function $R(f)$ of the optimal reconstruction interpolator
that minimizes the MSE for a given prefilter response $S(f)$ as
\begin{equation}
R(f)=\frac{H^{*}(f)S^{*}(f)\mathcal{S}_{X}(f)}{\left\Vert {\bf V}_{HS\sqrt{\mathcal{S}_{X}}+S\sqrt{\mathcal{S}_{\eta}}}(f,f_{s})\right\Vert _{2}^{2}}.
\end{equation}
The resulting MMSE can be calculated as
\begin{align*}
\xi & ={\displaystyle \int}_{-\frac{f_{s}}{2}}^{\frac{f_{s}}{2}}\left[\sum_{l\in\mathbb{Z}}\mathcal{S}_{X}(f-lf_{s})-\frac{\left\Vert {\bf V}_{HS\sqrt{\mathcal{S}_{X}}}(f,f_{s})\right\Vert _{2}^{2}}{\left\Vert {\bf V}_{HS\sqrt{\mathcal{S}_{X}}+S\sqrt{\mathcal{S}_{\eta}}}(f,f_{s})\right\Vert _{2}^{2}}\right]\mathrm{d}f
\end{align*}
For any given $f\in\left[-\frac{f_{s}}{2},\frac{f_{s}}{2}\right]$,
minimizing the corresponding error term is equivalent to
\begin{equation}
\underset{\left\{ S(f-lf_{s}),l\in\mathbb{Z}\right\} }{\mbox{maximize}}\quad\sum_{l\in\mathbb{Z}}\mu_{l}\frac{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})}{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})+\mathcal{S}_{\eta}(f-lf_{s})}
\end{equation}
where $\mu_{l}=\frac{\left(\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})+\mathcal{S}_{\eta}(f-lf_{s})\right)\left|S(f-lf_{s})\right|^{2}}{\left\Vert {\bf V}_{HS\sqrt{\mathcal{S}_{X}}+S\sqrt{\mathcal{S}_{\eta}}}(f,f_{s})\right\Vert _{2}^{2}}$.
The objective function is thus a convex combination of $\left\{ \frac{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})}{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})+\mathcal{S}_{\eta}(f-lf_{s})},l\in\mathbb{Z}\right\} $,
whose maximum value
\begin{equation}
\max_{l\in\mathbb{Z}}\frac{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})}{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})+\mathcal{S}_{\eta}(f-lf_{s})}
\end{equation}
can be attained by setting
\begin{equation}
S(f-kf_{s})=\begin{cases}
1 & ,\quad\mbox{if }\frac{\left|H(f-kf_{s})\right|^{2}\mathcal{S}_{X}(f-kf_{s})}{\mathcal{S}_{\eta}(f-kf_{s})}=\max_{l}\frac{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})}{\mathcal{S}_{\eta}(f-lf_{s})};\\
0 & ,\quad\mbox{otherwise}.
\end{cases}
\end{equation}
That said, the optimal prefilter puts all its mass in those frequencies
with highest SNR $\frac{\left|H(f)\right|^{2}\mathcal{S}_{X}(f)}{\mathcal{S}_{\eta}(f)}$.
Suppose that there is a sum power constraint: $\sum_{k\in\mathbb{Z}}\mathcal{S}_{X}(f-kf_{s})=P(f)$,
then allocating all input power to the frequency components with the
highest $\frac{\left|H(f)\right|^{2}}{\mathcal{S}_{\eta}(f)}$ yields
the optimal value among all input PSDs, which coincides with our information
theoretic derivation.
We note that channel capacity results include maximizing mutual information
for a given system, while sampling theory considers optimal prefiltering
and reconstruction schemes for a given class of input signals. The
perspectives from both theories coincide following the filter design:
the filter that maximizes mutual information for a given input distribution
also minimizes the MSE for that input distribution. The key metric
connecting both theories is SNR -- maximizing SNR leads to maximum
data rate as well as minimum MSE.
\end{comment}
It turns out that the optimal input and the optimal filter bank coincide
with those maximizing channel capacity, which is captured in the following
proposition.
\begin{prop}\label{lem-optimal-filter-bank-sampling-theoretic} Let
the channel input $x(t)$ be an arbitrary WSS signal. For a given sampling
system, let $\hat{x}(t)$ denote the optimal linear estimate of $x(t)$
from the digital sequence $\left\{ {\bf y}[n]\right\} $. Then the
capacity-optimizing filter bank given in (\ref{eq:optimalFilterBank})
and its corresponding optimal input $x(t)$ minimize the linear MSE
$\mathbb{E}(\left|x(t)-\hat{x}(t)\right|^{2})$ over all possible
LTI filter banks.
\end{prop}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-optimal-filter-bank-sampling-theoretic}.\end{IEEEproof}
Proposition \ref{lem-optimal-filter-bank-sampling-theoretic} implies
that the input signal and the filter bank optimizing channel capacity
also minimize the MSE between the original input signal and its reconstructed
output. We note that if the samples $\left\{ {\bf y}[n]\right\} $
and $x(t)$ are jointly Gaussian random variables, then the MMSE estimate
$\hat{x}(t)$ for a given input process $x(t)$ is linear in $\left\{ {\bf y}[n]\right\} $.
Consequently, for Gaussian inputs passed through Gaussian channels, the
capacity-maximizing filter bank also minimizes the MSE even if we
take into account nonlinear estimation. Thus, under sampling with
filter-banks for Gaussian channels, information theory reconciles
with sampling theory through the SNR metric when determining optimal
systems. Intuitively, high SNR typically leads to large capacity and
small MSE.
Proposition \ref{lem-optimal-filter-bank-sampling-theoretic} includes
the optimal prefilter under single-prefilter sampling as a special
case. We note that a similar MSE minimization problem was investigated
decades ago with applications in PAM \cite{ChaDon1971,Eri1973}: a
given random input $x(t)$ is prefiltered, corrupted by noise, uniformly
sampled, and then postfiltered to yield a linear estimate $\hat{x}(t)$.
The goal in that work was to minimize the MSE between $x(t)$ and
$\hat{x}(t)$ over all prefiltering (or pulse shaping) and postfiltering
mechanisms. While our problem differs from this PAM design problem
by optimizing directly over the random input instead of the pulse
shape, the two problems are similar in spirit and result in the same
alias-suppressing filter. However, earlier work did not account for
filter-bank sampling or make connections between minimizing MSE and
maximizing capacity.
\section{Conclusions and Future Work}
We have characterized sampled channel capacity as a function of sampling
rate for different sampling methods, thereby forming a new connection
between sampling theory and information theory. We show how the capacity
of a sampled analog channel is affected by reduced sampling rate and
identify optimal sampling structures for several classes of sampling
methods, which exploit structure in the sampling design. These results
also indicate that capacity is not always monotonic in sampling rate,
and illuminate an intriguing connection between MIMO channel capacity
and capacity of undersampled analog channels. The capacity optimizing
sampling structures are shown to extract the frequency components
with highest SNRs from each aliased set, and hence suppress aliasing
and out-of-band noise. We also show that for Gaussian inputs over
Gaussian channels, the optimal filter / filter bank also minimizes
the MSE between the channel input and the reconstructed signal. Our
work establishes a framework for using the information-theoretic metric
of capacity to optimize sampling structures, offering a different
angle from traditional design of sampling methods based on other performance
metrics.
\begin{comment}
We primarily consider the channel capacity under the assumption that
perfect channel state information is known, which naturally leads
to the question of how sampled channel capacity degrades with only
partial channel state information. One special scenario is when the
transmitter and the receiver know perfect channel state information
but do not know the support of transmitted signals, which may occur
in a cognitive radio system. Consider a multiband channel where the
entire bandwidth $W$ is divided into $N$ subbands each of bandwidth
$B=W/N$. Among them, $K$ subbands can be used for transmission while
others stay idle. The receiver knows perfectly the channel gain in
each frequency but does not have the subband support information at
hand. In this scenario, the sampled channel capacity stays the same
as in the scenario with perfect channel state information known including
the subband support. One capacity-achieving scheme is to use a relatively
small portion of time as a training phase to estimate the channel
support, and use the remaining time to transmit signals.
\end{comment}
Our work uncovers additional questions at the intersection of sampling
theory and information theory. For instance, an upper bound on sampled
capacity under sampling rate constraints for more general nonuniform
sampling methods would allow us to evaluate which sampling mechanisms
are capacity-achieving for any channel. Moreover, for channels where
there is a gap between achievable rates and the capacity upper bound,
these results might provide insight into new sampling mechanisms that
might close the gap to capacity. Investigation of capacity under more
general nonuniform sampling techniques is an interesting topic that
is studied in our companion paper \cite{ChenEldarGoldsmith2012}.
In addition, the optimal sampling structure for time-varying channels
will require different analysis than used in the time-invariant case.
It is also interesting to investigate what sampling mechanisms are
optimal for channels when the channel state is partially or fully
unknown. A deeper understanding of how to exploit channel structure
may also guide the design of sampling mechanisms for multiuser channels
that require more sophisticated cooperation schemes among users and
are impacted in a more complex way by subsampling.
\appendices
\begin{comment}
\section{Proof of Theorem \ref{thmPerfectCSIIdealSamplerRigorous}\label{sec:Proof-of-Theorem-PerfectCSIIdealSampler}}
The proof for Theorem \ref{thmPerfectCSIIdealSamplerRigorous} is
provided in this subsection. The key idea is to investigate the asymptotic
spectral properties of block Toeplitz matrices. To do so, we first
discretize the system equation.
\subsection{Channel Discretization}
The continuity and Riemann integrability of the channel impulse response
$h(t)$ implies that
\begin{equation}
\lim_{\Delta\rightarrow0}h\left(\tau+\Delta\right)=h\left(\tau\right)
\end{equation}
For notational simplicity, we define $h_{u,v}:=h(uT_{s}+v\Delta)$
and $x_{u,v}:=x(uT_{s}-v\Delta)$. Setting $T=nT_{s}$ and $T_{s}=k\Delta$
with integers $n$ and $k$, we can obtain the following discretized
channel model as a good approximation of the real channel:
\[
\left[\begin{array}{c}
y[n]\\
y[n-1]\\
\vdots\\
y[1]
\end{array}\right]=\Delta\cdot\left[\begin{array}{cccccc}
h_{0,0} & h_{0,1} & \cdots & h_{1,0} & h_{1,1} & \cdots\\
h_{-1,0} & h_{-1,1} & \cdots & h_{0,0} & h_{0,1} & \cdots\\
h_{-2,0} & h_{-2,1} & \cdots & h_{-1,0} & h_{-1,1} & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
h_{-n+1,0} & \cdots & \cdots & \cdots & \cdots & \cdots
\end{array}\right]\left[\begin{array}{c}
x_{n,0}\\
x_{n,1}\\
\vdots\\
x_{n-1,0}\\
x_{n-1,1}\\
\vdots\\
x_{1,k-1}
\end{array}\right]+\left[\begin{array}{c}
\eta[n]\\
\eta[n-1]\\
\vdots\\
\eta[1]
\end{array}\right]
\]
This is a consequence of the fact that $\lim_{\Delta\rightarrow0}\int_{t}^{t+\Delta}h(t-\tau)x(\tau)\mathrm{d}\tau=\lim_{\Delta\rightarrow0}h(t)\Delta\left(\frac{\int_{t}^{t+\Delta}x(\tau)\text{d}\tau}{\Delta}\right)$.
Here, the noise sequence ${\bf \eta}_{n}=\left[\eta[n],\cdots,\eta[1]\right]{}^{T}$
is i.i.d. Gaussian noise each with variance $\sigma_{\eta}^{2}$.
Let ${\bf h}_{i}=\Delta\cdot\left[h_{i,0},h_{i,1},\cdots,h_{i,k-1}\right]$
be an $1\times k$ vector, and set
\begin{align*}
{\bf H}_{n} & =\left[\begin{array}{cccc}
{\bf h}_{0} & {\bf h}_{1} & \cdots & {\bf h}_{n-1}\\
{\bf h}_{-1} & {\bf h}_{0} & \cdots & {\bf h}_{n-2}\\
\vdots & \vdots & \cdots & \vdots\\
{\bf h}_{-n+1} & {\bf h}_{-n+2} & \cdots & {\bf h}_{0}
\end{array}\right]\quad\mbox{and}\quad{\bf x}_{n}=\left[\begin{array}{c}
x_{n,0}\\
x_{n,1}\\
\vdots\\
x_{n-1,0}\\
\vdots
\end{array}\right],
\end{align*}
and ${\bf x}_{n}=\left[x_{n,0},x_{n,1},\cdots,x_{n,k-1},x_{n-1,0},\cdots,x_{1,0},\cdots,x_{1,k-1}\right]^{T}$,
then we can have
\begin{equation}
{\bf y}_{n}={\bf H}_{n}{\bf x}_{n}+{\bf \eta}_{n}.\label{eq:LinearSystemEquationPerfectCSIIdealSampler}
\end{equation}
Here, ${\bf H}_{n}$ is a block Toeplitz matrix.
Hence, the capacity might then become
\begin{align}
C(f_{s}) & =\lim_{\Delta\rightarrow0}\lim_{n\rightarrow\infty}\frac{1}{nT_{s}}\max_{p(x):\frac{1}{nk}\mathbb{E}\left(\left|{\bf x}_{n}\right|^{2}\right)\leq P}I\left({\bf x}_{n};{\bf y}_{n}\right)\nonumber \\
& =\lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty}\frac{f_{s}}{n}\max_{p(x):\frac{1}{nk}\mathbb{E}\left(\left|{\bf x}_{n}\right|^{2}\right)\leq P}I\left({\bf x}_{n};{\bf y}_{n}\right)
\end{align}
\section{Proof of Theorem \ref{thmPerfectCSIIdealSamplerRigorous}}
For a given pair $(T,\Delta)$, or equivalently, for a given pair
$(n,k)$, the channel (\ref{eq:LinearSystemEquationPerfectCSIIdealSampler})
is equivalent to a MIMO Gaussian channel with $nk$ input dimensions
and $n$ output dimensions. Therefore, we can decouple it into $n$
independent parallel non-interfering Gaussian channels each of channel
gain $\sigma_{i}({\bf H}_{n})$$\text{ }\left(1\leq i\leq n\right)$
\cite{Tel1999}, where $\left\{ \sigma_{i}({\bf H}_{n}),1\leq i\leq n\right\} $
are singular values of ${\bf H}_{n}$.
Let $g(x)$ be a non-decreasing continuous function of bounded slope
with $g(0)=0$. We know from \cite[Theorem 4.3]{Tilli98} that
\begin{align}
\lim_{n\rightarrow+\infty}\frac{1}{n}\sum_{j=1}^{n}g\left(\sigma_{i}({\bf H}_{n})\right) & =\frac{1}{2\pi}{\displaystyle \int}_{-\pi}^{\pi}g\left(\sigma\left({\bf F}_{\omega}(\omega)\right)\right)\mathrm{d}\omega\label{eq:blockToeplitzSpectralProperty}\\
& =T_{s}{\displaystyle \int}_{-f_{s}/2}^{f_{s}/2}g\left(\sigma\left({\bf F}(f)\right)\right)\mathrm{d}f\nonumber
\end{align}
where ${\bf F}_{\omega}(\omega)$ and ${\bf F}(f)$ denote the discrete-time
Fourier transform related to the block Toeplitz matrix:
\begin{equation}
\begin{cases}
{\bf F}_{\omega}(\omega) & =[{\bf F}_{\omega,0}(\omega),{\bf F}_{\omega,1}(\omega),\cdots,{\bf F}_{\omega,k-1}(\omega)]\\
{\bf F}_{\omega,l}(\omega) & =\Delta\sum_{l=-\infty}^{+\infty}h_{l,i}\exp\left(-\hat{j}l\omega\right)\quad0\leq i<k
\end{cases}\label{eq:FourierSymbolIdealSampler}
\end{equation}
\begin{equation}
\mbox{and\quad}{\bf F}(f)={\bf F}_{\omega}\left(\frac{2\pi f}{f_{s}}\right)
\end{equation}
Also, $\sigma\left({\bf F}(f)\right)$ denotes the singular value
of ${\bf F}(f)$. Encouragingly, $\sigma\left({\bf F}(f)\right)$
can be exactly calculated as in the following lemma.
\begin{lem}\label{lem-FourierSymbolIdealSampler}The singular value
of the symbol ${\bf F}(f)$ given in (\ref{eq:FourierSymbolIdealSampler})
can be obtained as
\begin{equation}
\sigma\left({\bf F}(f)\right)=\sqrt{\frac{\Delta}{T_{s}}}\left\Vert {\bf V}_{H}(f,f_{s})\right\Vert _{2}
\end{equation}
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-Fourier-Symbol-Ideal-Sampler}.
\end{IEEEproof}
Now, we are in position to derive the capacity. If both transmitter
and receiver have perfect channel state information, water-filling
argument \cite[Theorem 7.5.1]{Gallager68} indicates that the channel
capacity can be given by the parametric equations
\begin{align}
C_{T}(W) & =\frac{1}{2T}\sum_{i:\sigma_{i}^{2}({\bf H}_{n})\geq W^{-1}}\log\left[W\sigma_{i}^{2}({\bf H}_{n})\right]\\
\frac{Pnk}{\sigma_{\eta}^{2}} & =\sum_{i:\sigma_{i}^{2}({\bf H}_{n})\geq W^{-1}}\left[W-\frac{1}{\sigma_{i}^{2}({\bf H}_{n})}\right]
\end{align}
Replacing $W$ with $W/\left(\Delta\sigma_{h}^{2}\right)$ yields
\begin{align}
\tilde{C}_{T}(W) & =\frac{1}{2T}\sum_{i:\frac{1}{\Delta\sigma_{\eta}^{2}}\sigma_{i}^{2}({\bf H}_{n})\geq\frac{1}{W}}\log\left[\frac{W}{\Delta\sigma_{\eta}^{2}}\sigma_{i}^{2}({\bf H}_{n})\right]\\
Pnk\Delta & =\sum_{i:\frac{1}{\Delta}\sigma_{i}^{2}({\bf H}_{n})\geq\frac{1}{W}}\left[W-\frac{\Delta\sigma_{\eta}^{2}}{\sigma_{i}^{2}({\bf H}_{n})}\right]\label{eq:parametricEquationPower}
\end{align}
Applying (\ref{eq:blockToeplitzSpectralProperty}) yields
\begin{align*}
C(f_{s}) & =\lim_{T\rightarrow\infty}\tilde{C}_{T}(W)=\frac{1}{2T_{s}}\lim\frac{1}{n}\sum_{i:\frac{1}{\Delta}\sigma_{i}^{2}({\bf H}_{n})\geq\frac{1}{W}}\log\left[\frac{W}{\Delta}\sigma_{i}^{2}({\bf H}_{n})\right]\\
& =\frac{1}{2}{\displaystyle \int}_{f\in\mathcal{F}(W)}\log\left(W\frac{\left\Vert {\bf V}_{H}(f,f_{s})\right\Vert _{2}^{2}}{\sigma_{\eta}^{2}T_{s}}\right)\mathrm{d}f
\end{align*}
where $\mathcal{F}(W)=\left\{ f:\frac{\left\Vert {\bf V}_{H}(f,f_{s})\right\Vert _{2}^{2}}{\sigma_{\eta}^{2}T_{s}}\geq\frac{1}{W},\mbox{ }f\in\left[-\frac{f_{s}}{2},\frac{f_{s}}{2}\right]\right\} $.
On the other hand, (\ref{eq:parametricEquationPower}) can be computed
as
\begin{equation}
PT_{s}=T_{s}{\displaystyle \int}_{f\in\mathcal{F}(W)}\left[W-\frac{\sigma_{\eta}^{2}T_{s}}{\left\Vert {\bf V}_{H}(f,f_{s})\right\Vert _{2}^{2}}\right]\mathrm{d}f
\end{equation}
The proof is now completed by observing that the above capacity result
is independent of $\Delta$ provided that $\Delta$ is small enough.
\end{comment}
\section{Proof of Theorem \ref{thmPerfectCSIPrefilteredSamplerRigorous}\label{sec:Proof-of-Theorem-PerfectCSIPrefilteredSampler}}
We begin with an outline of the proof. A discretization argument is
first used to approximate arbitrarily well the analog signals by discrete-time
signals, which allows us to make use of the properties of Toeplitz
matrices instead of the more general Toeplitz operators. By noise
whitening, we effectively convert the sampled channel to a MIMO channel
with i.i.d. noise for any finite time interval. Finally, the asymptotic
properties of Toeplitz matrices are exploited in order to relate the
eigenvalue distribution of the equivalent channel matrix with the
Fourier representation of both channel filters and prefilters. The
proofs of several auxiliary lemmas are deferred to Appendix \ref{sec:Proofs-of-Auxiliary-Lemmas}.
Instead of directly proving Theorem \ref{thmPerfectCSIPrefilteredSamplerRigorous},
we prove the theorem for a simpler scenario where the noise $\eta(t)$
is of \emph{unit spectral density}. In this case, our goal is to prove
that the capacity is equivalent to
\begin{align*}
C(f_{s}) & =\frac{1}{2}{\displaystyle \int}_{-\frac{f_{s}}{2}}^{\frac{f_{s}}{2}}\log^{+}\left(\nu\frac{\overset{}{\underset{l\in\mathbb{Z}}{\sum}}\left|H(f-lf_{s})S(f-lf_{s})\right|^{2}}{\overset{}{\underset{l\in\mathbb{Z}}{\sum}}\left|S(f-lf_{s})\right|^{2}}\right)\mathrm{d}f
\end{align*}
where the water level $\nu$ can be calculated through the following
equation
\[
{\displaystyle \int}_{-\frac{f_{s}}{2}}^{\frac{f_{s}}{2}}\left(\nu-\frac{\sum_{l}\left|S(f-lf_{s})\right|^{2}}{\sum_{l}\left|H(f-lf_{s})S(f-lf_{s})\right|^{2}}\right)^{+}\mathrm{d}f=P.
\]
This capacity result under white noise can then be immediately extended
to accommodate colored noise. Suppose the additive noise is of
power spectral density $\mathcal{S}_{\eta}(f)$. We can then split
the channel filter $H\left(f\right)$ into two parts with respective
frequency response $H\left(f\right)/\sqrt{\mathcal{S}_{\eta}(f)}$
and $\sqrt{\mathcal{S}_{\eta}(f)}$. Equivalently, the channel input
is passed through an LTI filter with frequency response $H\left(f\right)/\sqrt{\mathcal{S}_{\eta}(f)}$,
contaminated by white noise, and then passed through a filter with
transfer function $\sqrt{\mathcal{S}_{\eta}(f)}S(f)$ followed by
an ideal sampler with rate $f_{s}$. This equivalent representation
immediately leads to the capacity in the presence of colored noise
by substituting corresponding terms into the capacity with white noise.
\subsection{Channel Discretization and Diagonalization}
Given that $h(t)$ is continuous and Riemann integrable, one approach
to studying the continuous-time problem is via reduction to an equivalent
discrete-time problem \cite[Chapter 16]{KaiSayHas2000}. In this subsection,
we describe the method of obtaining equispaced discretization
approximations to the continuous-time problem, which will allow us
to exploit the properties of block-Toeplitz matrices instead of the
more complicated block-Toeplitz operators.
For notational simplicity, we define
\[
g_{u,v}=\frac{1}{\Delta}\int_{0}^{\Delta}g\left(uT_{s}-v\Delta+\tau\right)\mathrm{d}\tau
\]
for any function $g(t)$. If $g(t)$ is a continuous function, then
$\lim_{\Delta\rightarrow0}g_{u,v}=g\left(uT_{s}-v\Delta\right)$,
where $v$ may be a function of $\Delta$. We also define $\tilde{h}(t):=h(t)*s(t)$.
Set $T=nT_{s}$ and $T_{s}=k\Delta$ for some integers $n$ and $k$,
and define
\begin{align*}
\tilde{{\bf h}}_{i} & :=\Delta\cdot\left[\tilde{h}_{i,0},\tilde{h}_{i,1},\cdots,\tilde{h}_{i,k-1}\right],\\
{\bf s}_{i} & :=\Delta\cdot\left[s_{i,0},s_{i,1},\cdots,s_{i,k-1}\right],\\
\left({\bf x}^{n}\right)_{i} & :=\frac{1}{\Delta}\int_{0}^{\Delta}x\left(i\Delta+\tau\right)\mathrm{d}\tau\text{ }\left(0\leq i<nk\right),\\
\left({\bf \eta}\right)_{i} & :=\frac{1}{\Delta}\int_{0}^{\Delta}{\bf \eta}\left(i\Delta+\tau\right)\mathrm{d}\tau\text{ }\left(i\in\mathbb{Z}\right).
\end{align*}
We also define
\begin{align*}
\tilde{{\bf H}}^{n} & :=\left[\begin{array}{cccc}
\tilde{{\bf h}}_{0} & \tilde{{\bf h}}_{-1} & \cdots & \tilde{{\bf h}}_{-n+1}\\
\tilde{{\bf h}}_{1} & \tilde{{\bf h}}_{0} & \cdots & \tilde{{\bf h}}_{-n+2}\\
\vdots & \vdots & \cdots & \vdots\\
\tilde{{\bf h}}_{n-1} & \tilde{{\bf h}}_{n-2} & \cdots & \tilde{{\bf h}}_{0}
\end{array}\right],
\end{align*}
\[
{\bf S}^{n}:=\left[\begin{array}{cccc}
\cdots & {\bf s}_{0} & {\bf s}_{-1} & \cdots\\
\cdots & {\bf s}_{1} & {\bf s}_{0} & \cdots\\
\cdots & \vdots & \vdots & \cdots\\
\cdots & {\bf s}_{n-1} & {\bf s}_{n-2} & \cdots
\end{array}\right].
\]
With these definitions, the original channel model can be approximated
with the following discretized channel:
\begin{equation}
{\bf y}^{n}=\tilde{{\bf H}}^{n}{\bf x}^{n}+{\bf S}^{n}{\bf \eta}.
\end{equation}
As can be seen, $\tilde{{\bf H}}^{n}$ is a fat \textit{block Toeplitz}
matrix. Moreover, ${\bf S}^{n}{\bf S}^{n*}$ is asymptotically equivalent
to a Toeplitz matrix, as will be shown in Appendix \ref{sub:ProofTheorem2PartC}.
We note that each element $\eta_{i}$ is a zero-mean Gaussian variable
with variance $\mathbb{E}(\left|\eta_{i}\right|^{2})=1/\Delta$. In
addition, $\mathbb{E}\left({\bf \eta}_{i}{\bf \eta}_{l}^{*}\right)=0$
for any $i\neq l$, implying that ${\bf \eta}$ is an i.i.d. Gaussian
vector. The filtered noise ${\bf S}^{n}{\bf \eta}$ is no longer i.i.d.
Gaussian, which motivates us to whiten the noise first.
The prewhitening matrix is given by ${\bf S}_{\text{w}}^{n}:=\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}$,
whose whitening property follows from the fact that
\begin{align*}
& \mathbb{E}{\bf S}_{\text{w}}^{n}{\bf S}^{n}{\bf \eta}\left({\bf S}_{\text{w}}^{n}{\bf S}^{n}{\bf \eta}\right)^{*}={\bf S}_{\text{w}}^{n}{\bf S}^{n}\mathbb{E}\left({\bf \eta}{\bf \eta}^{*}\right){\bf S}^{n*}{\bf S}_{\text{w}}^{n*}\\
= & \frac{1}{\Delta}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}{\bf S}^{n}{\bf S}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}=\frac{1}{\Delta}{\bf I}^{n}.
\end{align*}
This implies that ${\bf S}_{\text{w}}^{n}{\bf S}^{n}$ projects
the i.i.d. Gaussian noise $\eta$ onto an $n$-dimensional subspace,
and that ${\bf S}_{\text{w}}^{n}\left({\bf S}^{n}\eta\right)$ is
now $n$-dimensional i.i.d. Gaussian noise. Left-multiplication with
this whitening matrix yields a new output
\begin{align*}
\tilde{{\bf y}}^{n}: & =\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\left(\tilde{{\bf H}}^{n}{\bf x}^{n}+{\bf S}^{n}{\bf \eta}\right)\\
& =\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}{\bf x}^{n}+\tilde{{\bf \eta}}^{n}.
\end{align*}
Here, $\tilde{\eta}^{n}$ consists of independent zero-mean Gaussian
elements with variance $1/\Delta$. Since the prewhitening operation
${\bf S}_{\text{w}}^{n}$ is invertible, we have
\begin{equation}
I\left({\bf x}^{n};\tilde{{\bf y}}^{n}\right)=I\left({\bf x}^{n};{\bf y}^{n}\right).
\end{equation}
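Numerically, prewhitening is just multiplication by the inverse matrix square root of the noise Gram matrix. The sketch below uses a random full-row-rank stand-in for a truncation of ${\bf S}^{n}$ (with $\Delta=1$ for simplicity) and verifies that the whitened noise covariance is the identity:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 40                       # n outputs, m >> n noise samples (assumed)
S = rng.standard_normal((n, m))    # stand-in for a truncated S^n

# Prewhitening matrix S_w = (S S*)^{-1/2} via eigendecomposition
w, U = np.linalg.eigh(S @ S.T)
S_w = U @ np.diag(w ** -0.5) @ U.T

# Covariance of S_w S eta (eta i.i.d., unit variance) is S_w S S* S_w* = I
cov = S_w @ S @ S.T @ S_w.T
print(np.allclose(cov, np.eye(n)))
\end{verbatim}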
In this paper, we will use $I_{k,\Delta}\left({\bf x}^{n};{\bf y}^{n}\right)$
and $I\left({\bf x}^{n};{\bf y}^{n}\right)$ interchangeably to denote
the mutual information between the $nk$-dimensional vector ${\bf x}^{n}$
and ${\bf y}^{n}$.
Moreover, when $x(t)$ is of bounded variance (i.e. $\sup_{t}\mathbb{E}\left|x(t)\right|^{2}<\infty$)
and the additive noise is Gaussian, it has been shown \cite{wu2012functional}
that the mutual information is weakly continuous in the input distribution.
Therefore, $\lim_{k\rightarrow\infty}I_{k,\Delta}\left({\bf x}^{n};{\bf y}^{n}\right)=I\left(\left\{ x\left(t\right)\right\} _{t=0}^{T};\left\{ {\bf y}\left[n\right]\right\} _{t=0}^{T}\right).$
As $k$ increases, the discretized sequence becomes a finer approximation
to the continuous-time signal. The uniform continuity of the probability
measure of $x(t)$ and the continuity of mutual information immediately
imply that $\lim_{n\rightarrow\infty}\frac{1}{nT_{\text{s}}}I_{k,\Delta}\left({\bf x}^{n};{\bf y}^{n}\right)$
converges uniformly in $k$. We also observe that for every given
$n$, $\lim_{k\rightarrow\infty}I_{k,\Delta}\left({\bf x}^{n};{\bf y}^{n}\right)$
exists due to the continuity condition of the mutual information.
Therefore, applying the Moore-Osgood theorem in real analysis allows
us to exchange the order of limits.
Based on the above arguments, the capacity of the sampled analog channel
can be expressed as the following limit
\begin{align*}
C(f_{s}) & =\lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty}\frac{1}{nT_{s}}\sup_{p(x):\frac{1}{nk}\mathbb{E}\left(\left\Vert {\bf x}^{n}\right\Vert _{2}^{2}\right)\leq P}I_{k,\Delta}\left({\bf x}^{n};{\bf y}^{n}\right)\\
 & =\lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty}\frac{f_{s}}{n}\sup_{p(x):\frac{1}{nk}\mathbb{E}\left(\left\Vert {\bf x}^{n}\right\Vert _{2}^{2}\right)\leq P}I_{k,\Delta}\left({\bf x}^{n};\tilde{{\bf y}}^{n}\right).
\end{align*}
Note that it suffices to investigate the case where $T$ is an integer
multiple of $T_{s}$ since $\lim_{T\rightarrow\infty}\frac{1}{T}\sup I\left(x(0,T];\left\{ y[n]\right\} \right)=\lim_{n\rightarrow\infty}\frac{1}{nT_{s}}\sup I\left(x(0,nT_{s}];\left\{ y[n]\right\} \right)$.
\begin{comment}
even
when there exists no integer $n$ such that $T=nT_{s}$, the capacity
can be bounded through the following fact. Since the proof of this
lemma is straightforward, we defer it to Appendix \ref{sec:Proof-of-Fact-integer}.
\begin{lem}\label{fact-integer}Suppose the following limit
\begin{equation}
\lim_{n\rightarrow\infty}\frac{1}{nT_{s}}\sup I\left(x(0,nT_{s}];\left\{ y[n]\right\} \right)
\end{equation}
exists. Then we have
\begin{equation}
\lim_{T\rightarrow\infty}\frac{1}{T}\sup I\left(x(0,T];\left\{ y[n]\right\} \right)=\lim_{n\rightarrow\infty}\frac{1}{nT_{s}}\sup I\left(x(0,nT_{s}];\left\{ y[n]\right\} \right).
\end{equation}
\end{lem}
Hence, it suffices to investigate the case when $T$ is integer multiples
of $T_{s}$.
\end{comment}
\subsection{Preliminaries on Toeplitz Matrices}
Before proceeding to the proof of the theorem, we briefly introduce
several basic definitions and properties related to Toeplitz matrices.
Interested readers are referred to \cite{Gray06,GreSze1984} for more
details.
A Toeplitz matrix is an $n\times n$ matrix ${\bf T}^{n}$ where $\left({\bf T}^{n}\right)_{k,l}=t_{k-l}$,
which implies that a Toeplitz matrix ${\bf T}^{n}$ is uniquely defined
by the sequence $\left\{ t_{k}\right\} $. A special class of Toeplitz
matrices is that of circulant matrices, in which every row of the matrix ${\bf C}^{n}$
is a right cyclic shift of the row above it. The Fourier series (or
symbol) with respect to the sequence of Toeplitz matrices $\left\{ {\bf T}^{n}:=\left[t_{k-l};k,l=0,1,\cdots,n-1\right]:n\in\mathbb{Z}\right\} $
is given by
\begin{equation}
F(\omega)=\sum_{k=-\infty}^{+\infty}t_{k}\exp\left(jk\omega\right),\quad\omega\in\left[-\pi,\pi\right].\label{eq:FourierSeries-1}
\end{equation}
Since the sequence $\left\{ t_{k}\right\} $ uniquely determines $F(\omega)$
and vice versa, we denote by ${\bf T}^{n}(F)$ the Toeplitz matrix
generated by $F$ (and hence $\left\{ t_{k}\right\} $). We also define
a related circulant matrix ${\bf C}^{n}(F)$ with top row $(c_{0}^{(n)},c_{1}^{(n)},\cdots,c_{n-1}^{(n)})$,
where
\begin{equation}
c_{k}^{(n)}=\frac{1}{n}\sum_{i=0}^{n-1}F\left(\frac{2\pi i}{n}\right)\exp\left(\frac{2\pi jik}{n}\right).\label{eq:circulantConstruction}
\end{equation}
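As a quick check of this construction, the eigenvalues of ${\bf C}^{n}(F)$ are exactly the symbol samples $F(2\pi i/n)$. The sketch below builds ${\bf C}^{n}(F)$ from (\ref{eq:circulantConstruction}) for an assumed positive symbol and verifies this numerically:
\begin{verbatim}
import numpy as np

n = 16
F = lambda w: 2.0 + np.cos(w)      # assumed positive symbol F(omega)
w_grid = 2 * np.pi * np.arange(n) / n

# Top row c_k^{(n)} = (1/n) sum_i F(2 pi i / n) exp(2 pi j i k / n)
c = np.array([np.mean(F(w_grid) * np.exp(2j * np.pi * np.arange(n) * k / n))
              for k in range(n)])

# Each row is a right cyclic shift of the row above it
C = np.array([np.roll(c, r) for r in range(n)])

# Circulant eigenvalues coincide with the symbol samples (up to ordering)
eigs = np.sort(np.linalg.eigvals(C).real)
print(np.allclose(eigs, np.sort(F(w_grid))))
\end{verbatim}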
One key concept in our proof is asymptotic equivalence, which is formally
defined as follows \cite{Gray06}.
\begin{definition}[{\bf Asymptotic Equivalence}] Two sequences of
$n\times n$ matrices $\left\{ {\bf A}^{n}\right\} $ and $\left\{ {\bf B}^{n}\right\} $
are said to be asymptotically equivalent if
(1) ${\bf A}^{n}$ and ${\bf B}^{n}$ are uniformly bounded, i.e.
there exists a constant $c$ independent of $n$ such that
\begin{equation}
\left\Vert {\bf A}^{n}\right\Vert _{2},\left\Vert {\bf B}^{n}\right\Vert _{2}\leq c<\infty,\quad n=1,2,\cdots
\end{equation}
(2) $\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\left\Vert {\bf A}^{n}-{\bf B}^{n}\right\Vert _{\text{F}}=0$.
\end{definition}
We will abbreviate asymptotic equivalence of $\left\{ {\bf A}^{n}\right\} $
and $\left\{ {\bf B}^{n}\right\} $ by ${\bf A}^{n}\sim{\bf B}^{n}$.
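Both conditions are easy to probe numerically. The sketch below compares the Toeplitz matrix generated by an absolutely summable sequence with a wrapped, circulant-like counterpart (not exactly ${\bf C}^{n}(F)$, but a convenient illustration) and shows the normalized Frobenius distance $\frac{1}{\sqrt{n}}\left\Vert \cdot\right\Vert _{\text{F}}$ decaying with $n$; the sequence is an assumed example:
\begin{verbatim}
import numpy as np

t = lambda d: 0.5 ** abs(d)        # assumed absolutely summable sequence t_k

for n in (16, 64, 256):
    idx = np.subtract.outer(np.arange(n), np.arange(n))
    T = 0.5 ** np.abs(idx)         # (T^n)_{k,l} = t_{k-l}
    # Wrapped counterpart: index offsets taken modulo n
    first_row = np.array([t(min(k, n - k)) for k in range(n)])
    C = np.array([np.roll(first_row, r) for r in range(n)])
    print(n, np.linalg.norm(T - C, 'fro') / np.sqrt(n))
\end{verbatim}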
Two important results regarding asymptotic equivalence are given in
the following lemmas \cite{Gray06}.
\begin{lem}\label{lemmaAsymptoticEquivalenceEigenvalues}Suppose
${\bf A}^{n}\sim{\bf B}^{n}$ with eigenvalues $\left\{ \alpha_{n,k}\right\} $
and $\left\{ \beta_{n,k}\right\} $, respectively. Let $g(x)$ be
an arbitrary continuous function. Then if the limits $\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=0}^{n-1}g\left(\alpha_{n,k}\right)$
and $\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=0}^{n-1}g\left(\beta_{n,k}\right)$
exist, we have
\begin{equation}
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=0}^{n-1}g\left(\alpha_{n,k}\right)=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=0}^{n-1}g\left(\beta_{n,k}\right).
\end{equation}
\end{lem}
\begin{lem} \label{lemmaMultiplicationInverse}(a) Consider a sequence
of Toeplitz matrices ${\bf T}^{n}$ with $\left({\bf T}^{n}\right)_{ij}=t_{i-j}$,
where $\left\{ t_{i}\right\} $ is absolutely summable. Suppose
the Fourier series $F(\omega)$ related to ${\bf T}^{n}$ is positive
and ${\bf T}^{n}$ is Hermitian. Then we have
\begin{equation}
{\bf T}^{n}(F)\sim{\bf C}^{n}(F).
\end{equation}
If we further assume that there exists a constant $\epsilon>0$ such
that $F\left(\omega\right)\geq\epsilon>0$ for all $\omega\in\left[-\pi,\pi\right]$,
then we have
\begin{equation}
{\bf T}^{n}(F)^{-1}\sim{\bf C}^{n}(F)^{-1}={\bf C}^{n}(1/F)\sim{\bf T}^{n}\left(1/F\right).
\end{equation}
(b) Suppose ${\bf A}^{n}\sim{\bf B}^{n}$ and ${\bf C}^{n}\sim{\bf D}^{n}$,
then ${\bf A}^{n}{\bf C}^{n}\sim{\bf B}^{n}{\bf D}^{n}$.\end{lem}
Toeplitz or block Toeplitz matrices have well-known asymptotic spectral
properties \cite{GreSze1984,Tilli98}. The notion of asymptotic equivalence
allows us to approximate non-Toeplitz matrices by Toeplitz matrices,
which will be used in the next subsection to analyze the spectral
properties of the channel matrix.
\subsection{Capacity via Convergence of the Discrete Model\label{sub:ProofTheorem2PartC}}
After channel discretization, we can calculate the capacity for each
finite duration $T$ using the well-known MIMO Gaussian channel capacity,
which, however, depends on the spectrum of the truncated channel and
may vary dramatically for different $T$. By our definition of capacity,
we let $T$ go to infinity and examine whether the finite-duration capacity
converges, and if so, whether there is a closed-form expression for
the limit. Fortunately, the asymptotic properties of block-Toeplitz
matrices guarantee the existence of the limit and allow for a closed-form
expression in terms of the frequency responses of $h(t)$ and $s(t)$.
To see this, we first construct a new channel whose capacity is easier
to obtain, and then show that the new channel has asymptotically the same
capacity as the original channel. As detailed below, each
key matrix associated with the new channel is a Toeplitz matrix, whose
spectrum can be well approximated in the asymptotic regime \cite{Gray06}.
Consider the spectral properties of the Hermitian matrices $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$
and ${\bf S}^{n}{\bf S}^{n*}$. We can see that
\begin{equation}
\left(\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\right)_{ij}=\left(\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\right)_{ji}^{*}=\sum_{t=-j+1}^{n-j}\tilde{{\bf h}}_{j-i+t}\tilde{{\bf h}}_{t}^{*}.
\end{equation}
Obviously, $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$ is not a Toeplitz
matrix. Instead of investigating the eigenvalue distribution of $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$
directly, we look at a new \textit{Hermitian Toeplitz} matrix $\hat{{\bf H}}^{n}$
associated with $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$ such that
for any $i\leq j$:
\begin{equation}
\left(\hat{{\bf H}}^{n}\right)_{ij}=\left(\hat{{\bf H}}^{n}\right)_{ji}^{*}=\sum_{t=-\infty}^{\infty}\tilde{{\bf h}}_{j-i+t}\tilde{{\bf h}}_{t}^{*}.
\end{equation}
\begin{lem}\label{lemmaAsymptoticEquivalenceSH}The above definition
of $\hat{{\bf H}}^{n}$ implies that
\begin{equation}
\hat{{\bf H}}^{n}\sim\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}.
\end{equation}
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-Asymptotic-SH}.\end{IEEEproof}
On the other hand, for any $1\leq i\leq j\leq n$, we have
\begin{equation}
\left({\bf S}^{n}{\bf S}^{n*}\right)_{ij}=\left({\bf S}^{n}{\bf S}^{n*}\right)_{ji}^{*}=\sum_{t=-\infty}^{\infty}{\bf s}_{j-i+t}{\bf s}_{t}^{*}.
\end{equation}
Hence, the Hermitian matrix $\hat{{\bf S}}^{n}:={\bf S}^{n}{\bf S}^{n*}$
is still Toeplitz. However, the matrix of interest in the capacity
will be $\left({\bf S}^{n}{\bf S}^{n*}\right)^{-1/2}$ instead. We
therefore construct an asymptotically equivalent circulant matrix
${\bf C}^{n}$ as defined in (\ref{eq:circulantConstruction}), which
preserves the circulant (and hence Toeplitz) structure when we take $\left({\bf C}^{n}\right)^{-1/2}$
\cite{Gray06}. Formally speaking, $\left({\bf S}^{n}{\bf S}^{n*}\right)^{-1}$
can be related to $\left({\bf C}^{n}\right)^{-1}$ as follows.
\begin{comment}
For a linear filter with frequency response $S_{c}\left(\hat{j}\Omega\right)$,
we observe that the filtered noise is statistically equivalent to
the white noise filtered by another linear filter with frequency res
pone $\left|S_{c}\left(\hat{j}\Omega\right)\right|$. Hence, the channel
and prefilter pair $\left(H_{c}\left(\hat{j}\Omega\right),S_{c}\left(\hat{j}\Omega\right)\right)$
is equivalent to $\left(H_{c}\left(\hat{j}\Omega\right)\frac{S_{c}\left(\hat{j}\Omega\right)}{\left|S_{c}\left(\hat{j}\Omega\right)\right|},\left|S_{c}\left(\hat{j}\Omega\right)\right|\right)$.
In the following analysis, we will only focus on the type of prefilter
with \textit{nonnegative real} frequency response, and we can see
very soon that only the magnitude of $H_{c}\left(\hat{j}\Omega\right)$
matters in the capacity result.
\end{comment}
\begin{comment}
\begin{lem}If $s(t)$ is banded, and if there exists some constant
$\epsilon_{s}>0$ such that for all $\omega\in\left[-\pi,\pi\right]$,
\begin{equation}
\sum_{i=-\infty}^{\infty}\left|S_{c}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right|^{2}\geq\epsilon_{s}>0
\end{equation}
holds, then there exists a constant $M$ and an integer $n_{0}$
such that $\left\Vert \left({\bf S}_{n,nk}{\bf S}_{n,nk}^{*}\right)^{-1}\right\Vert _{2}<M<\infty$
for all $n>n_{0}$.
\end{lem}
\begin{IEEEproof}Introduce the set of Toeplitz submatrices $\left\{ {\bf T}_{v,n},1\leq v\leq k\right\} $
such that
\begin{equation}
\left({\bf T}_{v,n}\right)_{i,j}=\left({\bf S}_{n,nk}\right)_{i,(j-1)k+v}.
\end{equation}
Hence, the Fourier series associated with ${\bf T}_{v,n}$ is given
as
\begin{equation}
f_{v}(\omega)=\frac{\Delta}{T_{s}}\sum_{u=-\infty}^{+\infty}S_{c}\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{u}{T_{s}}\right)\right)\exp\left(\hat{j}\left(\frac{\omega}{T_{s}}-\frac{2\pi u}{T_{s}}\right)v\Delta\right).
\end{equation}
The matrix ${\bf T}_{n}=\left[{\bf T}_{1,n},{\bf T}_{2,n},\cdots,{\bf T}_{k,n}\right]$
can be obtained through column permutation of ${\bf {\bf S}_{n,nk}}$,
which implies that ${\bf T}_{n}$ and ${\bf {\bf S}_{n,nk}}$ have
the same set of singular values.
Let ${\bf T}_{n}\left(f\right)$ the $n$-dimensional Toeplitz matrix
generated by $f(\omega)$, then we can easily verify that We can see
that ${\bf T}_{v,n}{\bf T}_{v,n}^{*}={\bf T}_{n}\left(f_{v}\right){\bf T}_{n}\left(f_{v}^{*}\right)$.
Denote by $\mathcal{P}_{n}$ the projections on $L^{p}$ defined by
\begin{equation}
\mathcal{P}_{n}:\left\{ x_{0},x_{1},x_{2},\cdots\right\} \mapsto\left\{ x_{0},x_{1},\cdots,x_{n-1},0,0,\cdots\right\} .
\end{equation}
Applying Widom Theorem \cite{Bottcher05} yields
\begin{align*}
{\bf T}_{n}\left(f_{v}f_{v}^{*}\right) & ={\bf T}_{n}\left(f_{v}\right){\bf T}_{n}\left(f_{v}^{*}\right)+\mathcal{P}_{n}{\bf H}(f_{v}){\bf H}(f_{v})^{*}\mathcal{P}_{n}\\
& ={\bf T}_{n}\left(f_{v}\right){\bf T}_{n}\left(f_{v}^{*}\right)+\mathcal{P}_{n}{\bf H}(f_{v}){\bf H}(f_{v})^{*}\mathcal{P}_{n}
\end{align*}
This further yields
\begin{align}
{\bf T}_{n}{\bf T}_{n}^{*} & =\sum_{v=1}^{k}{\bf T}_{v,n}{\bf T}_{v,n}^{*}=\sum_{v=1}^{k}{\bf T}_{n}\left(f_{v}\right){\bf T}_{n}\left(f_{v}^{*}\right)\\
& =\sum_{v=1}^{k}{\bf T}_{n}\left(f_{v}f_{v}^{*}\right)-\mathcal{P}_{n}{\bf H}(f_{v}){\bf H}(f_{v})^{*}\mathcal{P}_{n}\\
& ={\bf T}_{n}\left(\sum_{v=1}^{k}f_{v}f_{v}^{*}\right)-\mathcal{P}_{n}\left(\sum_{v=1}^{k}{\bf H}(f_{v}){\bf H}(f_{v})^{*}\right)\mathcal{P}_{n}
\end{align}
\end{IEEEproof}
\end{comment}
\begin{lem}\label{lem-S-Inverse-Asymptotic}If there exists some
constant $\epsilon_{s}>0$ such that for all $f\in\left[-\frac{f_{s}}{2},\frac{f_{s}}{2}\right]$,
\begin{equation}
\sum_{l\in\mathbb{Z}}\left|S\left(f-lf_{s}\right)\right|^{2}\geq\epsilon_{s}>0
\end{equation}
holds, then $\left({\bf C}^{n}\right)^{-1}\sim\left({\bf S}^{n}{\bf S}^{n*}\right)^{-1}$.
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-S-Inverse-Asymptotic}.\end{IEEEproof}
One of the most useful properties of a circulant matrix ${\bf C}^{n}$
is that its eigenvectors $\left\{ {\bf u}_{c}^{(m)}\right\} $ are
\begin{equation}
{\bf u}_{c}^{(m)}=\frac{1}{\sqrt{n}}\left(1,e^{-2\pi jm/n},\cdots,e^{-2\pi j(n-1)m/n}\right).
\end{equation}
Suppose the eigenvalue decomposition of ${\bf C}^{n}$ is given as
\begin{equation}
{\bf C}^{n}={\bf U}_{c}{\bf \Lambda}_{c}{\bf U}_{c}^{*},
\end{equation}
where ${\bf U}_{c}$ is the unitary DFT matrix whose columns are the eigenvectors $\left\{ {\bf u}_{c}^{(m)}\right\} $, and ${\bf \Lambda}_{c}$
is a diagonal matrix with positive diagonal entries.
The concept of asymptotic equivalence allows us to explicitly relate
our matrices of interest to both circulant matrices and Toeplitz matrices,
whose asymptotic spectral densities have been well studied.
\begin{lem}\label{lem-asymptoticSpectralPropertyGeneralSampling}For
any continuous function $g(x)$, we have
\begin{align*}
& \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}g\left(\lambda_{i}\right)\\
= & T_{s}{\displaystyle \int}_{-\frac{f_{s}}{2}}^{\frac{f_{s}}{2}}g\left(\frac{\sum_{l\in\mathbb{Z}}\left|H(f-lf_{s})S(f-lf_{s})\right|^{2}}{\sum_{l\in\mathbb{Z}}\left|S(f-lf_{s})\right|^{2}}\right)\mathrm{d}f,
\end{align*}
where $\lambda_{i}$ denotes the $i$th eigenvalue of $\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}$.
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lem-Asymptotic-Spectral-General-Sampling}.\end{IEEEproof}
We can now prove the capacity result. The standard capacity results
for parallel channels \cite[Theorem 7.5.1]{Gallager68} imply that
the capacity of the discretized sampled analog channel is given by
the parametric equations
\begin{align}
C_{T} & =\frac{1}{2T}\sum_{i}\log^{\text{+}}\left(\nu\lambda_{i}\right),\\
\frac{Pnk}{1/\Delta} & =\sum_{i}\left[\nu-1/\lambda_{i}\right]^{\text{+}},\label{eq:PowerConstraint}
\end{align}
where $\nu$ is the water level of the optimal power allocation over
this discrete model, as can be calculated through (\ref{eq:PowerConstraint}).
Since this capacity depends only on the eigenvalues of $\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}$,
Lemma \ref{lem-asymptoticSpectralPropertyGeneralSampling} guarantees
convergence as $T\rightarrow\infty$, and the capacity
$C=\lim_{T\rightarrow\infty}C_{T}$ can be expressed using $H(f)$
and $S(f)$. Specifically,
\begin{align*}
& \lim_{T\rightarrow\infty}C_{T}\left(\nu\right)=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{i}\frac{1}{2}\log^{+}\left[\nu\lambda_{i}\right]\\
= & \frac{1}{2}{\displaystyle \int}_{-f_{s}/2}^{f_{s}/2}\log^{+}\left(\nu\frac{\sum_{l\in\mathbb{Z}}\left|H(f-lf_{s})S(f-lf_{s})\right|^{2}}{\sum_{l\in\mathbb{Z}}\left|S(f-lf_{s})\right|^{2}}\right)\mathrm{d}f.
\end{align*}
Similarly, (\ref{eq:PowerConstraint}) can be transformed into
\begin{align*}
& PT_{s}=\frac{Pk}{1/\Delta}=\frac{1}{n}\sum_{i}\left[\nu-1/\lambda_{i}\right]^{+}\\
= & T_{s}{\displaystyle \int}_{-f_{s}/2}^{f_{s}/2}\left[\nu-\frac{\sum_{l\in\mathbb{Z}}\left|S(f-lf_{s})\right|^{2}}{\sum_{l\in\mathbb{Z}}\left|H(f-lf_{s})S(f-lf_{s})\right|^{2}}\right]^{+}\mathrm{d}f,
\end{align*}
which completes the proof.
\section{Proof of Theorem \ref{thmPerfectCSIFilterBankSingleAntenna}\label{sec:Proof-of-Theorem-PerfectCSIFilterBank}}
We follow similar steps as in the proof of Theorem \ref{thmPerfectCSIPrefilteredSamplerRigorous}:
we first approximate the sampled channel using a discretized model,
whiten the noise, and then find the capacity of the equivalent channel
matrix. Due to the use of filter banks, the equivalent channel matrix
is no longer asymptotically equivalent to a Toeplitz matrix, but instead
to a block-Toeplitz matrix. This motivates us to exploit the asymptotic
properties of block-Toeplitz matrices.
\subsection{Channel Discretization and Diagonalization}
Let $\tilde{T}_{s}=MT_{s}$, and suppose we have $T=n\tilde{T}_{s}$
and $\tilde{T}_{s}=k\Delta$ with integers $n$ and $k$. Similarly,
we can define
\[
\tilde{h}_{i}(t):=h(t)*s_{i}(t),\quad\text{and}
\]
\[
\tilde{{\bf h}}_{i}^{l}=\left[\tilde{h}_{i}\left(l\tilde{T}_{s}\right),\tilde{h}_{i}\left(l\tilde{T}_{s}-\Delta\right),\cdots,\tilde{h}_{i}\left(l\tilde{T}_{s}-(k-1)\Delta\right)\right].
\]
We introduce the following two matrices:
\[
\tilde{{\bf H}}_{i}^{n}=\left[\begin{array}{cccc}
\tilde{{\bf h}}_{i}^{0} & \tilde{{\bf h}}_{i}^{-1} & \cdots & \tilde{{\bf h}}_{i}^{-n+1}\\
\tilde{{\bf h}}_{i}^{1} & \tilde{{\bf h}}_{i}^{0} & \cdots & \tilde{{\bf h}}_{i}^{-n+2}\\
\vdots & \vdots & \vdots & \vdots\\
\tilde{{\bf h}}_{i}^{n-1} & \tilde{{\bf h}}_{i}^{n-2} & \cdots & \tilde{{\bf h}}_{i}^{0}
\end{array}\right]
\]
\[
\text{and}\quad{\bf S}_{i}^{n}=\left[\begin{array}{cccc}
\cdots & {\bf s}_{i}^{0} & {\bf s}_{i}^{-1} & \cdots\\
\cdots & {\bf s}_{i}^{1} & {\bf s}_{i}^{0} & \cdots\\
\vdots & \vdots & \vdots & \vdots\\
\cdots & {\bf s}_{i}^{n-1} & {\bf s}_{i}^{n-2} & \cdots
\end{array}\right].
\]
We also set $\left({\bf x}^{n}\right)_{i}=\frac{1}{\Delta}\int_{0}^{\Delta}x\left(i\Delta+\tau\right)\mathrm{d}\tau\left(0\leq i<nk\right)$,
and $\left({\bf \eta}\right)_{i}=\frac{1}{\Delta}\int_{0}^{\Delta}{\bf \eta}\left(i\Delta+\tau\right)\mathrm{d}\tau$
$\left(i\in\mathbb{Z}\right)$. Defining ${\bf y}^{n}=\left[y_{1}[0],\cdots,y_{1}[n-1],y_{2}[0],\cdots,y_{2}[n-1],\cdots,y_{M}[n-1]\right]^{T}$
leads to the discretized channel model
\[
{\bf y}^{n}=\left[\begin{array}{c}
\tilde{{\bf H}}_{1}^{n}\\
\tilde{{\bf H}}_{2}^{n}\\
\vdots\\
\tilde{{\bf H}}_{M}^{n}
\end{array}\right]{\bf x}^{n}+\left[\begin{array}{c}
{\bf S}_{1}^{n}\\
{\bf S}_{2}^{n}\\
\vdots\\
{\bf S}_{M}^{n}
\end{array}\right]{\bf \eta}.
\]
Whitening the noise gives us
\[
\tilde{{\bf y}}^{n}=\left(\left[\begin{array}{c}
{\bf S}_{1}^{n}\\
{\bf S}_{2}^{n}\\
\vdots\\
{\bf S}_{M}^{n}
\end{array}\right]\left[\begin{array}{ccc}
{\bf S}_{1}^{n*} & \cdots & {\bf S}_{M}^{n*}\end{array}\right]\right)^{-\frac{1}{2}}\left[\begin{array}{c}
\tilde{{\bf H}}_{1}^{n}\\
\tilde{{\bf H}}_{2}^{n}\\
\vdots\\
\tilde{{\bf H}}_{M}^{n}
\end{array}\right]{\bf x}_{n}+\tilde{\eta},
\]
where $\tilde{{\bf \eta}}$ is an i.i.d. Gaussian vector whose entries have variance
$1/\Delta$. We can express the capacity of the sampled analog channel
under filter-bank sampling as the following limit
\begin{align*}
C(f_{s}) & =\lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty}\frac{f_{s}}{Mn}\sup I\left({\bf x}^{n};\tilde{{\bf y}}^{n}\right).
\end{align*}
Here, the supremum is taken over all distributions of ${\bf x}^{n}$
subject to the power constraint $\frac{1}{nk}\mathbb{E}\left(\left\Vert {\bf x}^{n}\right\Vert _{2}^{2}\right)\leq P$.
\subsection{Capacity via Convergence of the Discrete Model}
We can see that for any $1\leq u,v\leq M$,
\begin{equation}
{\bf S}_{u}^{n}{\bf S}_{v}^{n*}=\tilde{{\bf S}}_{u,v}^{n},
\end{equation}
where the Toeplitz matrix $\tilde{{\bf S}}_{u,v}^{n}$ is defined
such that for any $1\leq i\leq j\leq n$
\begin{equation}
\left(\tilde{{\bf S}}_{u,v}^{n}\right)_{i,j}=\sum_{t=-\infty}^{\infty}{\bf s}_{u}^{j-i+t}\left({\bf s}_{v}^{t}\right)^{*}.
\end{equation}
Let ${\bf S}^{n}=\left[{\bf S}_{1}^{n*},{\bf S}_{2}^{n*},\cdots,{\bf S}_{M}^{n*}\right]^{*}$.
Then the Hermitian block Toeplitz matrix
\[
\tilde{{\bf S}}^{n}:=\left[\begin{array}{cccc}
\tilde{{\bf S}}_{1,1}^{n} & \tilde{{\bf S}}_{1,2}^{n} & \cdots & \tilde{{\bf S}}_{1,M}^{n}\\
\tilde{{\bf S}}_{2,1}^{n} & \tilde{{\bf S}}_{2,2}^{n} & \cdots & \tilde{{\bf S}}_{2,M}^{n}\\
\vdots & \vdots & \vdots & \vdots\\
\tilde{{\bf S}}_{M,1}^{n} & \tilde{{\bf S}}_{M,2}^{n} & \cdots & \tilde{{\bf S}}_{M,M}^{n}
\end{array}\right]
\]
satisfies $\tilde{{\bf S}}^{n}={\bf S}^{n}{\bf S}^{n*}$. Additionally,
we define $\hat{{\bf H}}_{u,v}^{n}$ $\left(1\leq u,v\leq M\right)$,
where
\begin{equation}
\left(\hat{{\bf H}}_{u,v}^{n}\right)_{i,j}=\sum_{t=-\infty}^{\infty}\tilde{{\bf h}}_{u}^{j-i+t}\left(\tilde{{\bf h}}_{v}^{t}\right)^{*},
\end{equation}
and we let $\tilde{{\bf H}}^{n}=\left[\tilde{{\bf H}}_{1}^{n*},\tilde{{\bf H}}_{2}^{n*},\cdots,\tilde{{\bf H}}_{M}^{n*}\right]^{*}$.
The block Toeplitz matrix
\[
\hat{{\bf H}}^{n}:=\left[\begin{array}{cccc}
\hat{{\bf H}}_{1,1}^{n} & \hat{{\bf H}}_{1,2}^{n} & \cdots & \hat{{\bf H}}_{1,M}^{n}\\
\hat{{\bf H}}_{2,1}^{n} & \hat{{\bf H}}_{2,2}^{n} & \cdots & \hat{{\bf H}}_{2,M}^{n}\\
\vdots & \vdots & \vdots & \vdots\\
\hat{{\bf H}}_{M,1}^{n} & \hat{{\bf H}}_{M,2}^{n} & \cdots & \hat{{\bf H}}_{M,M}^{n}
\end{array}\right]
\]
satisfies
\begin{align*}
 & \lim_{n\rightarrow\infty}\frac{1}{\sqrt{nM}}\left\Vert \hat{{\bf H}}^{n}-\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\right\Vert _{\text{F}}\\
\leq & \lim_{n\rightarrow\infty}\frac{1}{\sqrt{M}}\sum_{1\leq u,v\leq M}\frac{1}{\sqrt{n}}\left\Vert \hat{{\bf H}}_{u,v}^{n}-\tilde{{\bf H}}_{u}^{n}\tilde{{\bf H}}_{v}^{n*}\right\Vert _{\text{F}}=0,
\end{align*}
which implies that
\begin{equation}
\hat{{\bf H}}^{n}\sim\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}.
\end{equation}
The $M\times M$ Fourier symbol matrix ${\bf F}_{\tilde{s}}(f)$ associated
with $\tilde{{\bf S}}^{n}$ has elements $\left[{\bf F}_{\tilde{s}}\left(f\right)\right]_{u,v}$
given by
\begin{align*}
& \left[{\bf F}_{\tilde{s}}\left(f\right)\right]_{u,v}\\
= & \frac{\Delta^{2}}{\tilde{T}_{s}^{2}}\sum_{i=0}^{k-1}\left(\sum_{l_{1}}S_{u}\left(-f+l_{1}\tilde{f}_{s}\right)\exp\left(-j2\pi\left(f-l_{1}\tilde{f}_{s}\right)i\Delta\right)\right)\\
& \quad\left(\sum_{l_{2}}S_{v}\left(-f+l_{2}\tilde{f}_{s}\right)\exp\left(-j2\pi\left(f-l_{2}\tilde{f}_{s}\right)i\Delta\right)\right)^{*}\\
= & \frac{\Delta^{2}}{\tilde{T}_{s}^{2}}\sum_{i=0}^{k-1}\left(\sum_{l_{1},l_{2}}S_{u}\left(-f+l_{1}\tilde{f}_{s}\right)S_{v}^{*}\left(-f+l_{2}\tilde{f}_{s}\right)\right.\\
& \quad\quad\quad\left.\exp\left(-j2\pi\left(l_{2}-l_{1}\right)\tilde{f}_{s}i\Delta\right)\right)\\
= & \frac{\Delta}{\tilde{T}_{s}}\sum_{l\in\mathbb{Z}}S_{u}\left(-f+l\tilde{f}_{s}\right)S_{v}^{*}\left(-f+l\tilde{f}_{s}\right).
\end{align*}
Denote by $\left\{ {\bf T}^{n}\left({\bf F}_{\tilde{s}}^{-1}\right)\right\} $
the sequence of block Toeplitz matrices generated by ${\bf F}_{\tilde{s}}^{-1}(f)$,
and denote by ${\bf T}_{l_{1},l_{2}}^{n}\left({\bf F}_{\tilde{s}}^{-1}\right)$
the $(l_{1},l_{2})$ Toeplitz block of ${\bf T}^{n}\left({\bf F}_{\tilde{s}}^{-1}\right)$.
It can be verified that
\begin{align*}
\sum_{l_{2}=1}^{M}{\bf T}_{l_{1},l_{2}}^{n}\left({\bf F}_{\tilde{s}}^{-1}\right)\cdot\tilde{{\bf S}}_{l_{2},l_{3}}^{n} & \sim{\bf T}^{n}\left(\sum_{l_{2}=1}^{M}\left[{\bf F}_{\tilde{s}}^{-1}\right]_{l_{1},l_{2}}\left[{\bf F}_{\tilde{s}}\right]_{l_{2},l_{3}}\right)\\
& ={\bf T}^{n}\left(\delta[l_{1}-l_{3}]\right),
\end{align*}
which immediately yields
\begin{equation}
{\bf T}^{n}\left({\bf F}_{\tilde{s}}^{-1}\right)\tilde{{\bf S}}^{n}\sim{\bf I}\text{ }\Longrightarrow\text{ }{\bf T}^{n}\left({\bf F}_{\tilde{s}}^{-1}\right)\sim\left(\tilde{{\bf S}}^{n}\right)^{-1}\sim\left({\bf S}^{n}{\bf S}^{n*}\right)^{-1}.
\end{equation}
Therefore, for any continuous function $g(x)$, \cite[Theorem 5.4]{Tilli98}
implies that
\begin{align*}
& \lim_{n\rightarrow\infty}\frac{1}{nM}\sum_{i=1}^{nM}g\left(\lambda_{i}\left\{ \left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\right\} \right)\\
= & {\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\sum_{i=1}^{M}g\left(\lambda_{i}\left({\bf F}_{\tilde{s}}^{-\frac{1}{2}}{\bf F}_{\tilde{h}}{\bf F}_{\tilde{s}}^{-\frac{1}{2}}\right)\right)\mathrm{d}f.
\end{align*}
Denote ${\bf F}_{s}^{\ddagger}=\left({\bf F}_{s}{\bf F}_{s}^{*}\right)^{-\frac{1}{2}}{\bf F}_{s}$.
Then the capacity of parallel channels \cite{Gallager68}, which is
achieved via water-filling power allocation, yields
\begin{align*}
& C(f_{s})\\
= & \lim_{n\rightarrow\infty}\frac{\sum_{i=1}^{nM}\log^{+}\left(\nu\lambda_{i}\left\{ \left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\right\} \right)}{2nMT_{s}}\\
= & {\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\frac{1}{2}\sum_{i=1}^{M}\log^{+}\left(\nu\lambda_{i}\left({\bf F}_{\tilde{s}}^{-\frac{1}{2}}{\bf F}_{\tilde{h}}{\bf F}_{\tilde{s}}^{-\frac{1}{2}}\right)\right)\mathrm{d}f\\
= & \frac{1}{2}{\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\underset{i=1}{\overset{M}{\sum}}\log^{+}\left(\nu\lambda_{i}\left({\bf F}_{s}^{\ddagger}{\bf F}_{h}{\bf F}_{h}^{*}{\bf F}_{s}^{\ddagger*}\right)\right)\mathrm{d}f,
\end{align*}
where
\begin{align*}
P & ={\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\sum_{i=1}^{M}\left[\nu-\frac{1}{\lambda_{i}\left({\bf F}_{\tilde{s}}^{-\frac{1}{2}}{\bf F}_{\tilde{h}}{\bf F}_{\tilde{s}}^{-\frac{1}{2}}\right)}\right]^{+}\mathrm{d}f\\
 & ={\displaystyle \int}_{-\frac{f_{s}}{2M}}^{\frac{f_{s}}{2M}}\sum_{i=1}^{M}\left[\nu-\frac{1}{\lambda_{i}\left({\bf F}_{s}^{\ddagger}{\bf F}_{h}{\bf F}_{h}^{*}{\bf F}_{s}^{\ddagger*}\right)}\right]^{+}\mathrm{d}f.
\end{align*}
This completes the proof.
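To make the matrix-valued integrand above concrete, the following sketch assembles, at a single frequency, a truncated diagonal ${\bf F}_{h}$ and a filter-bank matrix ${\bf F}_{s}$ (both hypothetical stand-ins), orthonormalizes the rows of ${\bf F}_{s}$, and extracts the $M$ eigenvalues that enter the $\log^{+}$ integrand:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, L = 3, 11                       # M branches, truncation to L aliases

F_h = np.diag(rng.standard_normal(L) + 1.0)  # stand-in diag H(f - l fs / M)
F_s = rng.standard_normal((M, L))            # stand-in filter-bank symbol

# F_s_ddag = (F_s F_s*)^{-1/2} F_s has orthonormal rows
w, U = np.linalg.eigh(F_s @ F_s.T)
F_s_ddag = U @ np.diag(w ** -0.5) @ U.T @ F_s

A = F_s_ddag @ F_h @ F_h.T @ F_s_ddag.T      # M x M integrand matrix
print(np.linalg.eigvalsh(A))                 # feeds log^+(nu * lambda_i)
\end{verbatim}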
\section{Proof of Theorem \ref{Cor:OptimalFilterBank}\label{sec:Proof-of-Corollary-Optimal-Filter-Bank} }
Theorem \ref{Cor:OptimalFilterBank} immediately follows from the
following proposition.
\begin{prop}\label{lem_BoundsOnMSingularValues}The $k$th largest
eigenvalue $\lambda_{k}$ of the positive semidefinite matrix $\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}$
is bounded by
\begin{equation}
0\leq\lambda_{k}\leq\lambda_{k}\left({\bf F}_{h}{\bf F}_{h}^{*}\right),\quad1\leq k\leq M.
\end{equation}
These upper bounds can be attained simultaneously by the filter (\ref{eq:optimalFilterBank}).
\end{prop}
\begin{IEEEproof}Recall that at a given $f$, ${\bf F}_{h}$ is an
infinite diagonal matrix satisfying $\left({\bf F}_{h}\right)_{l,l}=H\left(f-\frac{lf_{s}}{M}\right)$
for all $l\in\mathbb{Z}$, and that $\tilde{{\bf F}}_{s}=\left({\bf F}_{s}{\bf F}_{s}^{*}\right)^{-\frac{1}{2}}{\bf F}_{s}$.
Hence, $\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}$
is an $M\times M$ matrix. We observe that
\begin{equation}
\tilde{{\bf F}}_{s}\left(\tilde{{\bf F}}_{s}\right)^{*}=\left({\bf F}_{s}{\bf F}_{s}^{*}\right)^{-\frac{1}{2}}{\bf F}_{s}{\bf F}_{s}^{*}\left({\bf F}_{s}{\bf F}_{s}^{*}\right)^{-\frac{1}{2}}={\bf I},
\end{equation}
which indicates that the rows of $\tilde{{\bf F}}_{s}$ are orthonormal.
Hence, the operator norm of $\tilde{{\bf F}}_{s}$ is no larger than
$1$, which leads to
\[
\lambda_{1}\left(\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}\right)=\left\Vert \tilde{{\bf F}}_{s}{\bf F}_{h}\right\Vert _{2}^{2}\leq\left\Vert {\bf F}_{h}\right\Vert _{2}^{2}=\lambda_{1}\left({\bf F}_{h}{\bf F}_{h}^{*}\right).
\]
Denote by $\left\{ {\bf e}_{k},\mbox{ }k\geq1\right\} $ the standard
basis where ${\bf e}_{k}$ is a vector with a $1$ in the $k$th coordinate
and $0$ otherwise. We introduce the index set $\left\{ i_{1},i_{2},\cdots,i_{M}\right\} $
such that ${\bf {\bf e}}_{i_{k}}$ $\left(1\leq k\leq M\right)$ is
the eigenvector associated with the $k$th largest eigenvalue of
the diagonal matrix ${\bf F}_{h}{\bf F}_{h}^{*}$.
Suppose that ${\bf v}_{k}$ is the eigenvector associated with the
$k$th largest eigenvalue $\lambda_{k}$ of $\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}$,
and denote by $\left(\tilde{{\bf F}}_{s}\right)_{k}$ the $k$th \emph{column}
of $\tilde{{\bf F}}_{s}$. Since $\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}$
is Hermitian positive semidefinite, its eigendecomposition yields
an orthonormal basis of eigenvectors. Observe that $\left\{ {\bf v}_{1},\cdots,{\bf v}_{k}\right\} $
spans a $k$-dimensional space and that $\left\{ \left(\tilde{{\bf F}}_{s}\right)_{j},1\leq j\leq k-1\right\} $
spans a subspace of dimension no more than $k-1$. For any $k\geq2$,
there exist scalars $a_{1},\cdots,a_{k}$, not all zero, such that
\begin{equation}
\sum_{i=1}^{k}a_{i}{\bf v}_{i}\perp\left\{ \left(\tilde{{\bf F}}_{s}\right)_{i_{j}},1\leq j\leq k-1\right\} \text{ and }\sum_{i=1}^{k}a_{i}{\bf v}_{i}\neq0.\label{eq:VEigenvectorsOrthogonal}
\end{equation}
This allows us to define the following unit vector
\begin{equation}
{\bf \tilde{v}}_{k}\overset{\Delta}{=}\sum_{i=1}^{k}\frac{a_{i}}{\sqrt{\sum_{j=1}^{k}\left|a_{j}\right|^{2}}}{\bf v}_{i},
\end{equation}
which is orthogonal to $\left\{ \left(\tilde{{\bf F}}_{s}\right)_{j},1\leq j\leq k-1\right\} $.
We observe that
\begin{align}
\left\Vert \tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}{\bf \tilde{v}}_{k}\right\Vert _{2}^{2} & =\left\Vert \sum_{i=1}^{k}\frac{a_{i}}{\sqrt{\sum_{j=1}^{k}\left|a_{j}\right|^{2}}}\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}{\bf v}_{i}\right\Vert _{2}^{2}\nonumber \\
& =\left\Vert \sum_{i=1}^{k}\frac{a_{i}\lambda_{i}}{\sqrt{\sum_{j=1}^{k}\left|a_{j}\right|^{2}}}{\bf v}_{i}\right\Vert _{2}^{2}\nonumber \\
& =\sum_{i=1}^{k}\frac{\lambda_{i}^{2}\left|a_{i}\right|^{2}}{\sum_{j=1}^{k}\left|a_{j}\right|^{2}}\geq\lambda_{k}^{2}.
\end{align}
Define ${\bf u}_{k}:=\tilde{{\bf F}}_{s}^{*}\tilde{{\bf v}}_{k}$.
From (\ref{eq:VEigenvectorsOrthogonal}) we can see that $\left({\bf u}_{k}\right)_{i}=\left\langle \left(\tilde{{\bf F}}_{s}\right)_{i},\tilde{{\bf v}}_{k}\right\rangle =0$
holds for all $i\in\left\{ i_{1},i_{2},\cdots,i_{k-1}\right\} $.
In other words, ${\bf u}_{k}\perp\left\{ {\bf e}_{i_{1}},\cdots,{\bf e}_{i_{k-1}}\right\} $.
This further implies that
\begin{align}
\lambda_{k}^{2} & \leq\left\Vert \tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}{\bf \tilde{v}}_{k}\right\Vert _{2}^{2}\leq\left\Vert \tilde{{\bf F}}_{s}\right\Vert _{2}^{2}\left\Vert {\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}{\bf \tilde{v}}_{k}\right\Vert _{2}^{2}\nonumber \\
& \leq\left\Vert {\bf F}_{h}{\bf F}_{h}^{*}{\bf u}_{k}\right\Vert _{2}^{2}\\
 & \leq\sup_{{\bf x}\perp\text{span}\left\{ {\bf e}_{i_{1}},\cdots,{\bf e}_{i_{k-1}}\right\} ,\left\Vert {\bf x}\right\Vert _{2}=1}\left\Vert {\bf F}_{h}{\bf F}_{h}^{*}{\bf x}\right\Vert _{2}^{2}\\
& =\lambda_{k}^{2}\left({\bf F}_{h}{\bf F}_{h}^{*}\right)
\end{align}
by observing that ${\bf F}_{h}{\bf F}_{h}^{*}$ is a diagonal matrix.
Setting
\begin{align*}
& S_{k}\left(f-\frac{lf_{s}}{M}\right)\\
= & \begin{cases}
1, & \quad\mbox{if }\left|H\left(f-\frac{lf_{s}}{M}\right)\right|^{2}=\lambda_{k}\left({\bf F}_{h}(f){\bf F}_{h}^{*}(f)\right),\\
0, & \quad\mbox{otherwise},
\end{cases}
\end{align*}
yields $\tilde{{\bf F}}_{s}={\bf F}_{s}$ and hence $\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}$
is a diagonal matrix such that
\begin{equation}
\left(\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}\right)_{k,k}=\lambda_{k}\left({\bf F}_{h}{\bf F}_{h}^{*}\right).
\end{equation}
Evidently, this choice of $S_{k}(f)$ allows the upper bounds
\begin{equation}
\lambda_{k}\left(\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}\right)=\lambda_{k}\left({\bf F}_{h}{\bf F}_{h}^{*}\right),\quad\forall1\leq k\leq M
\end{equation}
to be attained simultaneously.\end{IEEEproof}
By extracting the $M$ frequencies with the highest SNR from each
aliased set $\left\{ f-lf_{s}/M\mid l\in\mathbb{Z}\right\} $, we
achieve $\lambda_{k}=\lambda_{k}\left({\bf F}_{h}{\bf F}_{h}^{*}\right)$,
and hence the maximum capacity.
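The eigenvalue bound in Proposition \ref{lem_BoundsOnMSingularValues} can also be probed numerically: for random matrices $\tilde{{\bf F}}_{s}$ with orthonormal rows, the ordered eigenvalues of $\tilde{{\bf F}}_{s}{\bf F}_{h}{\bf F}_{h}^{*}\tilde{{\bf F}}_{s}^{*}$ never exceed the corresponding largest diagonal entries of ${\bf F}_{h}{\bf F}_{h}^{*}$. A sketch with an assumed truncation of the infinite diagonal matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M, L = 4, 20
d = np.sort(rng.random(L))[::-1]    # stand-in eigenvalues of F_h F_h*
F_h2 = np.diag(d)

for _ in range(1000):
    # Random M x L matrix with orthonormal rows (QR of a Gaussian matrix)
    Q, _ = np.linalg.qr(rng.standard_normal((L, M)))
    Fs = Q.T
    lam = np.sort(np.linalg.eigvalsh(Fs @ F_h2 @ Fs.T))[::-1]
    assert np.all(lam <= d[:M] + 1e-10)
print("lambda_k <= lambda_k(F_h F_h*) held on all random trials")
\end{verbatim}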
\section{Proof of Theorem \ref{thmPremodulatedFilterBank}\label{sec:Proof-of-Theorem-Premodulated-Filter-Bank}}
Following similar steps as in the proof of Theorem \ref{thmPerfectCSIFilterBankSingleAntenna},
we approximately convert the sampled channel into its discrete counterpart,
and calculate the capacity of the discretized channel model after
noise whitening. We note that the impulse response of the sampled
channel is no longer LTI due to the use of modulation banks. But the
periodicity assumption of the modulation sequences allows us to treat
the channel matrix as blockwise LTI, which provides a way to exploit
the properties of block-Toeplitz matrices.
Again, we give a proof for the scenario where the noise is white Gaussian
with unit spectral density. The capacity expression in the presence
of colored noise can immediately be derived by replacing $P_{i}(f)$
with $P_{i}(f)\sqrt{\mathcal{S}_{\eta}(f)}$ and $H(f)$ with $H(f)/\sqrt{\mathcal{S}_{\eta}(f)}$.
In the $i$th branch, the noise component at time $t$ is given by
\begin{align*}
& s_{i}\left(t\right)*\left(q_{i}(t)\cdot\left(p_{i}(t)*\eta(t)\right)\right)\\
= & \int_{\tau_{1}}\mathrm{d}\tau_{1}s_{i}\left(t-\tau_{1}\right)\int_{\tau_{2}}q_{i}\left(\tau_{1}\right)p_{i}\left(\tau_{1}-\tau_{2}\right)\eta\left(\tau_{2}\right)\mathrm{d}\tau_{2}\\
= & \int_{\tau_{2}}\left({\displaystyle \int}_{\tau_{1}}s_{i}\left(t-\tau_{1}\right)q_{i}\left(\tau_{1}\right)p_{i}\left(\tau_{1}-\tau_{2}\right)\mathrm{d}\tau_{1}\right)\eta\left(\tau_{2}\right)\mathrm{d}\tau_{2}\\
= & \int_{\tau_{2}}g_{i}^{\eta}(t,\tau_{2})\eta(\tau_{2})\mathrm{d}\tau_{2},
\end{align*}
where $g_{i}^{\eta}(t,\tau_{2})\overset{\Delta}{=}\underset{\tau_{1}}{{\displaystyle \int}}s_{i}\left(t-\tau_{1}\right)q_{i}\left(\tau_{1}\right)p_{i}\left(\tau_{1}-\tau_{2}\right)\mathrm{d}\tau_{1}.$
Let $\tilde{T}_{s}=MT_{s}$. Our assumption $bT_{q}=a\tilde{T}_{s}$
immediately leads to
\begin{align*}
& g_{i}^{\eta}\left(t+a\tilde{T}_{s},\tau+bT_{q}\right)\\
= & \int_{\tau_{1}}s_{i}\left(t+a\tilde{T}_{s}-\tau_{1}\right)q_{i}\left(\tau_{1}\right)p_{i}\left(\tau_{1}-\tau-a\tilde{T}_{s}\right)\mathrm{d}\tau_{1}\\
= & \int_{\tau_{1}}s_{i}\left(t-\tau_{1}\right)q_{i}\left(\tau_{1}+bT_{q}\right)p_{i}\left(\tau_{1}-\tau\right)\mathrm{d}\tau_{1}\\
= & \int_{\tau_{1}}s_{i}\left(t-\tau_{1}\right)q_{i}\left(\tau_{1}\right)p_{i}\left(\tau_{1}-\tau\right)\mathrm{d}\tau_{1}=g_{i}^{\eta}\left(t,\tau\right),
\end{align*}
implying that $g_{i}^{\eta}\left(t,\tau\right)$ is a block-Toeplitz
function.
Similarly, the signal component
\[
s_{i}\left(t\right)*\left(q_{i}(t)\cdot\left(p_{i}(t)*h(t)*x(t)\right)\right)=\underset{\tau_{2}}{{\displaystyle \int}}g_{i}^{h}(t,\tau_{2})x(\tau_{2})\mathrm{d}\tau_{2},
\]
where
\[
g_{i}^{h}(t,\tau_{2})\overset{\Delta}{=}\underset{\tau_{1}}{{\displaystyle \int}}s_{i}\left(t-\tau_{1}\right)q_{i}\left(\tau_{1}\right)\underset{\tau_{3}}{{\displaystyle \int}}p_{i}\left(\tau_{1}-\tau_{2}-\tau_{3}\right)h(\tau_{3})\mathrm{d}\tau_{3}\mathrm{d}\tau_{1},
\]
which also satisfies the block-Toeplitz property $g_{i}^{h}\left(t+a\tilde{T}_{s},\tau+bT_{q}\right)=g_{i}^{h}\left(t,\tau\right)$.
Suppose that $T=n\tilde{T}_{s}$ and $\tilde{T}_{s}=k\Delta$ hold
for some integers $n$ and $k$. We can introduce two matrices ${\bf G}_{i}^{\eta}$
and ${\bf G}_{i}^{h}$ such that $\forall m\in\mathbb{Z},0\leq l<n$
\[
\begin{cases}
\left({\bf G}_{i}^{\eta}\right)_{l,m} & =g_{i}^{\eta}\left(l\tilde{T}_{s},m\Delta\right),\\
\left({\bf G}_{i}^{h}\right)_{l,m} & =g_{i}^{h}\left(l\tilde{T}_{s},m\Delta\right).
\end{cases}
\]
Setting ${\bf y}_{i}^{n}=\left[y_{i}[0],y_{i}[1],\cdots,y_{i}[n-1]\right]^{T}$
leads to a discretized approximation similar to the one in the proof of Theorem
\ref{thmPerfectCSIPrefilteredSamplerRigorous}:
\begin{equation}
{\bf y}_{i}^{n}={\bf G}_{i}^{h}{\bf x}^{n}+{\bf G}_{i}^{\eta}{\bf \eta}.
\end{equation}
Here, ${\bf \eta}$ is an i.i.d. zero-mean Gaussian vector whose entries
have variance $1/\Delta$.
Hence, ${\bf G}_{i}^{h}$ and ${\bf G}_{i}^{\eta}$ are block Toeplitz
matrices satisfying $\left({\bf G}_{i}^{h}\right)_{l+a,m+ak}=\left({\bf G}_{i}^{h}\right)_{l,m}$
and $\left({\bf G}_{i}^{\eta}\right)_{l+a,m+ak}=\left({\bf G}_{i}^{\eta}\right)_{l,m}$.
Using the same definition of ${\bf x}^{n}$ and ${\bf \eta}$ as in
Appendix \ref{sec:Proof-of-Theorem-PerfectCSIFilterBank}, we can
express the system equation as
\begin{equation}
{\bf y}^{n}=\left[\begin{array}{c}
{\bf G}_{1}^{h}\\
{\bf G}_{2}^{h}\\
\vdots\\
{\bf G}_{M}^{h}
\end{array}\right]{\bf x}^{n}+\left[\begin{array}{c}
{\bf G}_{1}^{\eta}\\
{\bf G}_{2}^{\eta}\\
\vdots\\
{\bf G}_{M}^{\eta}
\end{array}\right]{\bf \eta}.
\end{equation}
Whitening the noise component yields
\begin{equation}
\tilde{{\bf y}}_{n}=\left(\left[\begin{array}{c}
{\bf G}_{1}^{\eta}\\
{\bf G}_{2}^{\eta}\\
\vdots\\
{\bf G}_{M}^{\eta}
\end{array}\right]\left[\begin{array}{c}
{\bf G}_{1}^{\eta}\\
{\bf G}_{2}^{\eta}\\
\vdots\\
{\bf G}_{M}^{\eta}
\end{array}\right]^{*}\right)^{-\frac{1}{2}}\left[\begin{array}{c}
{\bf G}_{1}^{h}\\
{\bf G}_{2}^{h}\\
\vdots\\
{\bf G}_{M}^{h}
\end{array}\right]{\bf x}_{n}+\tilde{\eta},
\end{equation}
where $\tilde{{\bf \eta}}$ is an i.i.d. Gaussian vector whose entries
have variance $1/\Delta$.
In order to calculate the capacity limit, we need to investigate the
Fourier symbols associated with these block Toeplitz matrices.
\begin{lem}\label{lem: FourierSymbolModulationBank}At a given
frequency $f$, the Fourier symbol with respect to ${\bf G}_{\alpha}^{\eta}{\bf G}_{\beta}^{\eta*}$
is given by $ak{\bf F}_{\alpha}^{\eta}{\bf F}_{\alpha}^{p}{\bf F}_{\beta}^{p*}{\bf F}_{\beta}^{\eta*}$,
and the Fourier symbol with respect to ${\bf G}_{\alpha}^{h}{\bf G}_{\beta}^{h*}$
is given by $ak{\bf F}_{\alpha}^{\eta}{\bf F}_{\alpha}^{p}{\bf F}^{h}{\bf F}^{h*}{\bf F}_{\beta}^{p*}{\bf F}_{\beta}^{\eta*}$.
Here for any $\left(l,v\right)$ such that $v\in\mathbb{Z}$ and $1\leq l\leq a$,
we have
\begin{align*}
\left({\bf F}_{\alpha}^{\eta}\right)_{l,v} & =\sum_{u}c_{\alpha}^{u}S_{\alpha}\left(-f+uf_{q}+v\frac{f_{q}}{b}\right)\cdot\\
& \quad\quad\quad\quad\exp\left(-j2\pi l\frac{T_{s}}{M}\left(f-uf_{q}-v\frac{f_{q}}{b}\right)\right).
\end{align*}
Also, ${\bf F}_{\alpha}^{p}$ and ${\bf F}^{h}$ are infinite diagonal
matrices such that for all $l\in\mathbb{Z}$
\[
\begin{cases}
\left({\bf F}_{\alpha}^{p}\right)_{l,l} & =P_{\alpha}\left(-f+l\frac{f_{q}}{b}\right),\\
\left({\bf F}^{h}\right)_{l,l} & =H\left(-f+l\frac{f_{q}}{b}\right).
\end{cases}
\]
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-Fourier-Symbol-Modulation}.\end{IEEEproof}
Define ${\bf G}^{\eta}$ such that its $\left(\alpha,\beta\right)$
subblock is ${\bf G}_{\alpha}^{\eta}{\bf G}_{\beta}^{\eta*}$, and
${\bf G}^{h}$ such that its $\left(\alpha,\beta\right)$ subblock
is ${\bf G}_{\alpha}^{h}{\bf G}_{\beta}^{h*}$. Proceeding similarly
as in the proof of Theorem \ref{thmPerfectCSIFilterBankSingleAntenna},
we obtain
\[
\mathcal{F}\left({\bf G}^{\eta}\right)=ak{\bf F}^{\eta}{\bf F}^{\eta*}\quad\text{and}\quad\mathcal{F}\left({\bf G}^{h}\right)=ak{\bf F}^{\eta}{\bf F}^{h}{\bf F}^{h*}{\bf F}^{\eta*},
\]
where ${\bf F}^{\eta}$ is an $M\times1$ block matrix whose $\left(\alpha,1\right)$
submatrix is given by ${\bf F}_{\alpha}^{\eta}{\bf F}_{\alpha}^{p}$.
Denote ${\bf F}^{\eta\ddagger}\overset{\Delta}{=}\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}{\bf F}^{\eta}$.
For any continuous function $g(x)$, \cite[Theorem 5.4]{Tilli98}
implies that
\begin{align*}
& \lim_{n\rightarrow\infty}\frac{1}{naM}\sum_{i=1}^{naM}g\left(\lambda_{i}\left\{ \left({\bf G}^{\eta}\right)^{-\frac{1}{2}}{\bf G}^{h}\left({\bf G}^{\eta}\right)^{-\frac{1}{2}}\right\} \right)\\
= & {\displaystyle \int}_{-\frac{\tilde{f}_{s}}{2a}}^{\frac{\tilde{f}_{s}}{2a}}\sum_{i=1}^{aM}g\left(\lambda_{i}\left({\bf F}^{\eta\ddagger}{\bf F}^{h}{\bf F}^{h*}{\bf F}^{\eta\ddagger*}\right)\right)\mathrm{d}f.
\end{align*}
Then the capacity of parallel channels, achieved via water-filling power
allocation, yields
\begin{align*}
C(f_{s})= & \lim_{n\rightarrow\infty}\sum_{i=1}^{naM}\frac{\log^{+}\left(\nu\lambda_{i}\left\{ \left({\bf G}^{\eta}\right)^{-\frac{1}{2}}{\bf G}^{h}\left({\bf G}^{\eta}\right)^{-\frac{1}{2}}\right\} \right)}{naM}\\
= & {\displaystyle \int}_{-\frac{\tilde{f}_{s}}{2a}}^{\frac{\tilde{f}_{s}}{2a}}\frac{1}{2}\sum_{i=1}^{aM}\log^{+}\left(\nu\lambda_{i}\left({\bf F}^{\eta\ddagger}{\bf F}^{h}{\bf F}^{h*}{\bf F}^{\eta\ddagger*}\right)\right)\mathrm{d}f,
\end{align*}
where the water level $\nu$ can be computed through the following
parametric equation
\begin{align*}
P & =\lim_{n\rightarrow\infty}\frac{1}{naM}\sum_{i=1}^{naM}\left[\nu-\frac{1}{\lambda_{i}\left\{ \left({\bf G}^{\eta}\right)^{-\frac{1}{2}}{\bf G}^{h}\left({\bf G}^{\eta}\right)^{-\frac{1}{2}}\right\} }\right]^{+}\\
& ={\displaystyle \int}_{-\frac{\tilde{f}_{s}}{2a}}^{\frac{\tilde{f}_{s}}{2a}}\sum_{i=1}^{aM}\left[\nu-\frac{1}{\lambda_{i}\left\{ \left({\bf G}^{\eta}\right)^{-\frac{1}{2}}{\bf G}^{h}\left({\bf G}^{\eta}\right)^{-\frac{1}{2}}\right\} }\right]^{+}\mathrm{d}f.
\end{align*}
\begin{comment}
aa
\section{Algorithm 2 and Its Performance\label{sec:Proof-of-Prop-Modulation-Example}}
Algorithm 2 given below generates $K$ near alias-free subbands with
high SNR under a single branch of sampling with modulation and filtering.
This algorithm slightly improves the bandwidth efficiency of Algorithm
1 while asymptotically achieving the same capacity. However, the algorithm
is optimal only in the asymptotic regime, detailed below.
\vspace{10pt}
\begin{center}
\begin{tabular}{>{\raggedright}p{3.2in}}
\hline
\textbf{Algorithm 2}\tabularnewline
\hline
1. \quad{}\textbf{Initialize.} Find the $K$ largest elements in
$\left\{ \frac{\left|H\left(f-lf_{q}\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-lf_{q}\right)}\mid l\in\mathbb{Z},-L\leq l\leq L-1\right\} $.
Denote by $\left\{ l_{i}\mid1\leq i\leq K\right\} $ the index set
of these $K$ elements such that $l_{1}>l_{2}>\cdots>l_{K}$. Set
$i=1$, $I_{\max}=-L$, $\mathcal{J}=\emptyset$, and $c^{i}=0$ for
all $i\in\mathbb{Z}$. Let $A$ be a large given number.\tabularnewline
2. \quad{}For $i=1:K$
$\hspace{1em}\hspace{1em}$\hspace{2.5em}For $m=I_{\max}:I_{\max}+K-1$
$\hspace{1em}\hspace{1em}$\hspace{4em}if $\left(m\text{ mod }K\right)\notin\mathcal{J}$,
do
$\hspace{1em}\hspace{1em}$\hspace{5em}$\mathcal{J}=\mathcal{J}\cup\left\{ m\text{ mod }K\right\} $,
$I_{i}=m$, \hspace{5em}$I_{\max}=m+L+1-l_{i}$ , $c^{m-l_{i}}=A^{K-i}$
and break;\tabularnewline
3. \quad{}For $i=-L:L-1$
$\hspace{1em}\hspace{1em}$\hspace{3em}if $i\in\left\{ I_{1},\cdots,I_{K}\right\} $,
then $S\left(f+if_{p}\right)=1$;
$\hspace{1em}\hspace{1em}$\hspace{3em}else $S\left(f+if_{q}\right)=0$. \tabularnewline
\hline
\end{tabular}
\par\end{center}
\vspace{10pt}
We first pick $K$ subbands with the highest SNR in Step 1. At each
iteration of Step 2, the frequency components in one of the desired
subbands is moved to a certain subband that does not alias with previously
chosen subbands. However, Algorithm 2 does not attempt to move the
chosen spectral contents to a new subband that is $2L$ subbands away
as Algorithm 1 does. Instead, it moves the selected subband to a much
closer location as long as the movement does not corrupt any frequency
location occupied prior to this iteration. Since moving a subband
will simultaneously result in movement of spectral contents in other
subbands, the key step in Algorithm 2 is to ensure that undesired
interference components are far weaker than the desired components,
which can be guaranteed by appropriate choices of $c^{i}$ (namely,
choosing $A$ to be sufficiently large). Finally, a band-pass filter
is chosen in Step 3 to avoid aliasing. The performance of this algorithm
is characterized in the following proposition.
\begin{prop}\label{prop-ModulationExample}Consider a piecewise flat
channel with $2L$ subbands as described above. For a given $f_{q}$,
the modulation sequence found by Algorithm 2 maximizes capacity when
$A\rightarrow\infty$. \end{prop}
\begin{IEEEproof}It can be easily observed that Algorithm 2 keeps
$K$ subbands in total while zeroing out all others through filtering.
Define $R(f)=H(f)X(f)+N(f)$. In step 2 of the algorithm, the frequency
response of the $i$th subband being chosen is a linear combination
of $\left\{ R\left(f-lf_{q}\right)\mid l\in\left\{ I_{i},I_{i+1},\cdots,I_{K}\right\} \right\} $.
More specifically,
\[
Y\left(f-I_{i}f_{q}\right)=A^{K+1-i}R\left(f-I_{k}f_{q}\right)+\underset{\text{residual}}{\underbrace{\sum_{k=i+1}^{K}B_{k}R\left(f-I_{k}f_{q}\right)}},
\]
where $\left|B_{k}\right|$ is either $\left|A^{K+1-k}\right|$ or
$0$. Treating the residual term as noise, the SNR at the $i$th branch
is
\[
\lim_{A\rightarrow\infty}\text{SNR}_{i}=\frac{\left|H\left(f-I_{i}f_{p}\right)\right|^{2}}{\mathcal{S}_{\eta}\left(f-I_{i}f_{p}\right)}.
\]
Thus, for large $A$, this sampling method extracts out $K$ subbands
of highest SNR, and suppresses all other subbands.
Now we need to prove that this is optimal. For any given $f$, modulation
and filtering act as two right-invertible operators on both the signal
and the noise. After noise whitening, the equivalent channel matrix
is given by $\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}{\bf F}^{\eta}{\bf F}^{h},$
where $\left({\bf F}^{\eta}{\bf F}^{\eta*}\right)^{-\frac{1}{2}}{\bf F}^{\eta}$
is of $K$ rows orthonormal to each other and ${\bf F}^{h}$ is a
diagonal matrix. This implies that modulation along with filtering
also plays the role of projecting the frequency components onto a
$K$ dimensional subspace, albeit with respect to the modulated aliased
set which is larger than the original aliased set. Applying the same
proof as for Theorem \ref{Cor:OptimalFilterBank}, the optimal method
is to extract out $K$ subbands with the highest SNR, which coincides
with the asymptotic performance of the sampling method derived by
Algorithm 2.\end{IEEEproof}
\end{comment}
\section{Proof of Proposition \ref{lem-optimal-filter-bank-sampling-theoretic}\label{sec:Proof-of-Lemma-optimal-filter-bank-sampling-theoretic}}
Denote by $y^{k}(t)$ the analog signal after passing through the
$k^{\text{th}}$ prefilter prior to ideal sampling. When both the
input signal $x(t)$ and the noise $\eta(t)$ are Gaussian, the MMSE
estimator of $x(t)$ from samples $\left\{ y^{k}[n]\mid1\leq k\leq M\right\} $
is linear. Recall that $\tilde{T}_{s}=MT_{s}$ and $\tilde{f}_{s}=f_{s}/M$.
A linear estimator of $x(t)$ from ${\bf y}[n]$ can be given as
\begin{equation}
\hat{x}(t)=\sum_{k\in\mathbb{Z}}{\bf {\bf g}}^{T}(t-k\tilde{T}_{s})\cdot{\bf y}(k\tilde{T}_{s}),\label{eq:xLinearEstimator}
\end{equation}
where we use the vector form ${\bf g}(t)=[g^{1}(t),\cdots,g^{M}(t)]^{T}$
and ${\bf y}(t)=[y^{1}(t),\cdots,y^{M}(t)]^{T}$ for notational simplicity.
Here, $g^{l}(t)$ denotes the interpolation function operating upon
the samples in the $l^{\text{th}}$ branch. We propose to find the
optimal estimator ${\bf g}(t)$ that minimizes the mean square estimation
error $\mathbb{E}\left(\left|x(t)-\hat{x}(t)\right|^{2}\right)$ for
each $t$.
From the orthogonality principle, the MMSE estimate $\hat{x}(t)$
obeys
\begin{equation}
\mathbb{E}\left(x(t){\bf y}^{*}(l\tilde{T}_{s})\right)=\mathbb{E}\left(\hat{x}(t){\bf y}^{*}(l\tilde{T}_{s})\right),\quad\forall l\in\mathbb{Z}.\label{eq:OrthogonalityPrinciple}
\end{equation}
Since $x(t)$ and $\eta(t)$ are both stationary Gaussian processes,
we can define ${\bf R}_{XY}(\tau):=\mathbb{E}\left(x(t){\bf y}^{*}(t-\tau)\right)$
to be the cross correlation function between $x(t)$ and ${\bf y}(t)$,
and ${\bf R}_{Y}(\tau):=\mathbb{E}\left({\bf y}(t){\bf y}^{*}(t-\tau)\right)$
the autocorrelation function of ${\bf y}(t)$. Plugging (\ref{eq:xLinearEstimator})
into (\ref{eq:OrthogonalityPrinciple}) leads to the following relation
\begin{align*}
{\bf R}_{XY}\left(t-l\tilde{T}_{s}\right) & =\sum_{k\in\mathbb{Z}}{\bf g}^{T}\left(t-k\tilde{T}_{s}\right){\bf R}_{Y}\left(k\tilde{T}_{s}-l\tilde{T}_{s}\right).
\end{align*}
Replacing $t$ by $t+l\tilde{T}_{s}$, we can equivalently express
it as
\begin{align}
{\bf R}_{XY}(t) & =\sum_{k\in\mathbb{Z}}{\bf g}^{T}\left(t+l\tilde{T}_{s}-k\tilde{T}_{s}\right){\bf R}_{Y}\left(k\tilde{T}_{s}-l\tilde{T}_{s}\right)\nonumber \\
& =\sum_{l\in\mathbb{Z}}{\bf g}^{T}\left(t-l\tilde{T}_{s}\right){\bf R}_{Y}\left(l\tilde{T}_{s}\right),\label{eq:CrossCorrelationRelation}
\end{align}
which expresses ${\bf R}_{XY}(t)$ as the convolution of ${\bf g}(t)$ with the sampled autocorrelation ${\bf R}_{Y}(t)\cdot\sum_{l\in\mathbb{Z}}\delta\left(t-l\tilde{T}_{s}\right)$.
Let $\mathcal{F}(\cdot)$ denote the Fourier transform operator. Define
the cross spectral density ${\bf S}_{XY}(f):=\mathcal{F}\left({\bf R}_{XY}(t)\right)$
and ${\bf S}_{Y}(f)=\mathcal{F}\left({\bf R}_{Y}\left(t\right)\right)$.
By taking the Fourier transform of both sides of (\ref{eq:CrossCorrelationRelation}),
we have
\[
{\bf S}_{XY}(f)={\bf G}(f)\mathcal{F}\left({\bf R}_{Y}(\tau)\sum_{l\in\mathbb{Z}}\delta(\tau-l\tilde{T}_{s})\right),
\]
which immediately yields that $\forall f\in\left[-\tilde{f}_{s}/2,\tilde{f}_{s}/2\right]$
\begin{align*}
{\bf G}(f) & ={\bf S}_{XY}(f)\left[\mathcal{F}\left({\bf R}_{Y}(\tau)\sum_{l\in\mathbb{Z}}\delta(\tau-l\tilde{T}_{s})\right)\right]^{-1}\\
& ={\bf S}_{XY}(f)\left(\sum_{l\in\mathbb{Z}}{\bf S}_{Y}\left(f-l\tilde{f}_{s}\right)\right)^{-1}.
\end{align*}
Since the noise $\eta(t)$ is independent of $x(t)$, the cross correlation
function ${\bf R}_{XY}(t)$ is
\begin{align*}
{\bf R}_{XY}(\tau) & =\mathbb{E}\left(x(t+\tau)\cdot\right.\\
 & \quad\quad\left.\left[\left(s_{1}*h*x\right)^{*}(t),\cdots,\left(s_{M}*h*x\right)^{*}(t)\right]\right),
\end{align*}
which allows the cross spectral density to be derived as
\begin{align}
{\bf S}_{XY}(f) & =H^{*}(f)\mathcal{S}_{X}(f)\left[S_{1}^{*}(f),\cdots,S_{M}^{*}(f)\right].
\end{align}
Additionally, the spectral density of ${\bf y}(t)$ can be given
as the following $M\times M$ matrix
\begin{equation}
{\bf S}_{Y}(f)=\left(\left|H(f)\right|^{2}\mathcal{S}_{X}(f)+\mathcal{S}_{\eta}(f)\right){\bf S}(f){\bf S}^{*}(f),
\end{equation}
with $\mathcal{S}_{\eta}(f)$ denoting the spectral density of the
noise $\eta(t)$, and ${\bf S}(f)=\left[S_{1}(f),\cdots,S_{M}(f)\right]^{T}$.
Define
\begin{align*}
{\bf K}(f): & =\sum_{l\in\mathbb{Z}}\left(\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})+\mathcal{S}_{\eta}(f-lf_{s})\right)\\
& \quad\quad\quad\quad{\bf S}(f-lf_{s}){\bf S}^{*}(f-lf_{s}).
\end{align*}
The Wiener-Hopf linear reconstruction filter can now be written as
\[
{\bf G}(f)=H^{*}(f)\mathcal{S}_{X}(f){\bf S}^{*}(f){\bf K}^{-1}(f).
\]
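At a fixed $f\in[-\tilde{f}_{s}/2,\tilde{f}_{s}/2]$, truncating the aliased sums reduces the reconstruction filter to finite matrix algebra. The sketch below uses hypothetical spectra, retains $L$ aliases, and takes the baseband alias at the middle grid index (an assumption of the layout) to compute ${\bf G}(f)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
M, L = 2, 9                        # M branches, L retained aliases (assumed)

H  = rng.standard_normal(L)        # stand-in H(f - l fs) on the aliased set
Sx = rng.random(L) + 0.1           # stand-in input psd S_X(f - l fs)
Sn = rng.random(L) + 0.1           # stand-in noise psd S_eta(f - l fs)
S  = rng.standard_normal((M, L))   # stand-in prefilters S_k(f - l fs)

# K(f) = sum_l (|H_l|^2 Sx_l + Sn_l) s_l s_l^*, an M x M matrix
K = (S * (np.abs(H) ** 2 * Sx + Sn)) @ S.T

# G(f) = H*(f) S_X(f) S*(f) K^{-1}(f), evaluated at the baseband alias
l0 = L // 2
G = np.conj(H[l0]) * Sx[l0] * S[:, l0] @ np.linalg.inv(K)
print(G)
\end{verbatim}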
Define $R_{X}(\tau)=\mathbb{E}\left(x(t)x^{*}(t-\tau)\right)$. Since
$\int_{-\infty}^{\infty}\mathcal{S}_{X}(f)\mathrm{d}f=R_{X}(0)$,
the resulting MSE is
\begin{align*}
\xi(t) & =\mathbb{E}\left(\left|x(t)\right|^{2}\right)-\mathbb{E}\left(\left|\hat{x}(t)\right|^{2}\right)\\
& =\mathbb{E}\left(\left|x(t)\right|^{2}\right)-\mathbb{E}\left(x(t)\hat{x}^{*}(t)\right)\\
 & =R_{X}(0)-\mathbb{E}\left(x(t)\left(\sum_{l\in\mathbb{Z}}{\bf g}^{T}(t-l\tilde{T}_{s}){\bf y}(l\tilde{T}_{s})\right)^{*}\right)\\
 & =R_{X}\left(0\right)-\sum_{l\in\mathbb{Z}}{\bf R}_{XY}(t-l\tilde{T}_{s}){\bf g}(t-l\tilde{T}_{s}).
\end{align*}
Since $\mathcal{F}\left({\bf g}(-t)\right)=\left({\bf G}^{*}(f)\right)^{T}$
and ${\bf S}_{XY}=H^{*}(f)\mathcal{S}_{X}(f){\bf S}^{*}(f)$, Parseval's
identity implies that
\begin{align*}
\xi(t) & =\int_{-\infty}^{\infty}\left[\mathcal{S}_{X}(f)-{\bf G}^{*}(f){\bf S}_{XY}^{T}\right]\mathrm{d}f\\
& =\int_{-\infty}^{\infty}\left[\mathcal{S}_{X}(f)-\left|H(f)\mathcal{S}_{X}(f)\right|^{2}{\bf S}^{*}(f){\bf K}^{-1}(f){\bf S}(f)\right]\mathrm{d}f\\
& ={\displaystyle \int}_{-\tilde{f}_{s}/2}^{\tilde{f}_{s}/2}\left[\sum_{l=-\infty}^{\infty}\mathcal{S}_{X}(f-l\tilde{f}_{s})-\tilde{T}_{s}{\bf V}_{\zeta}^{T}(f,\tilde{f}_{s})\cdot{\bf 1}\right]\mathrm{d}f.
\end{align*}
Suppose that we impose the power constraint $\sum_{l\in\mathbb{Z}}\mathcal{S}_{X}(f-l\tilde{f}_{s})=P(f)$,
and define $\zeta(f):=\left|H(f)\mathcal{S}_{X}(f)\right|^{2}{\bf S}^{*}(f){\bf K}^{-1}(f){\bf S}(f)$.
For a given input process $x(t)$, the problem of finding the optimal
prefilter ${\bf S}(f)$ that minimizes MSE then becomes
\[
\underset{\left\{ S(f-lf_{s}),l\in\mathbb{Z}\right\} }{\mbox{maximize}}\quad{\bf V}_{\zeta}^{T}(f,\tilde{f}_{s})\cdot{\bf 1},
\]
where the objective function can be alternatively rewritten in matrix
form
\begin{equation}
\mbox{trace}\left\{ {\bf F}_{X}^{\frac{1}{2}}{\bf F}_{h}^{*}{\bf F}_{s}^{*}\left({\bf F}_{s}\left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right){\bf F}_{s}^{*}\right)^{-1}{\bf F}_{s}{\bf F}_{h}{\bf F}_{X}^{\frac{1}{2}}\right\} .
\end{equation}
Here ${\bf F}_{X}$ and ${\bf F}_{\eta}$ are diagonal matrices such
that $\left({\bf F}_{X}\right)_{l,l}=\mathcal{S}_{X}(f-lf_{s})$ and
$\left({\bf F}_{\eta}\right)_{l,l}=\mathcal{S}_{\eta}(f-lf_{s})$.
We observe that
\begin{align}
& \mbox{trace}\left\{ {\bf F}_{X}^{\frac{1}{2}}{\bf F}_{h}^{*}{\bf F}_{s}^{*}\left({\bf F}_{s}\left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right){\bf F}_{s}^{*}\right)^{-1}{\bf F}_{s}{\bf F}_{h}{\bf F}_{X}^{\frac{1}{2}}\right\} \nonumber \\
= & \mbox{trace}\left\{ \left({\bf F}_{s}\left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right){\bf F}_{s}^{*}\right)^{-1}{\bf F}_{s}{\bf F}_{h}{\bf F}_{X}{\bf F}_{h}^{*}{\bf F}_{s}^{*}\right\} \nonumber \\
\overset{(\text{a})}{=} & \mbox{trace}\left\{ \left({\bf Y}{\bf Y}^{*}\right)^{-1}{\bf Y}\left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right)^{-\frac{1}{2}}{\bf F}_{h}{\bf F}_{X}{\bf F}_{h}^{*}\right.\nonumber \\
& \quad\quad\quad\quad\quad\left.\left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right)^{-\frac{1}{2}}{\bf Y}^{*}\right\} \nonumber \\
\overset{(\text{b})}{=} & \mbox{trace}\left\{ \left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right)^{-1}{\bf F}_{h}{\bf F}_{X}{\bf F}_{h}^{*}{\bf Y}^{*}\left({\bf Y}{\bf Y}^{*}\right)^{-1}{\bf Y}\right\} \nonumber \\
\overset{(\text{c})}{=} & \mbox{trace}\left\{ \left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right)^{-1}{\bf F}_{h}{\bf F}_{X}{\bf F}_{h}^{*}\tilde{{\bf Y}}^{*}\tilde{{\bf Y}}\right\} \nonumber \\
\overset{(\text{d})}{\leq} & \sup_{{\bf Z}\cdot{\bf Z}^{*}={\bf I}_{M}}\mbox{trace}\left\{ {\bf Z}\left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right)^{-1}{\bf F}_{h}{\bf F}_{X}{\bf F}_{h}^{*}{\bf Z}^{*}\right\} \nonumber \\
= & \sum_{i=1}^{M}\lambda_{i}({\bf D}),
\end{align}
where (a) follows by introducing ${\bf Y}:={\bf F}_{s}\left({\bf F}_{h}{\bf F}_{h}^{*}+{\bf F}_{\eta}\right)^{\frac{1}{2}}$,
(b) follows from the fact that ${\bf F}_{h}$, ${\bf F}_{X}$, ${\bf F}_{\eta}$
are all diagonal matrices, (c) follows by introducing $\tilde{{\bf Y}}=\left({\bf Y}{\bf Y}^{*}\right)^{-\frac{1}{2}}{\bf Y}$,
and (d) follows by observing that $\tilde{{\bf Y}}\tilde{{\bf Y}}^{*}=\left({\bf Y}{\bf Y}^{*}\right)^{-\frac{1}{2}}{\bf Y}{\bf Y}^{*}\left({\bf Y}{\bf Y}^{*}\right)^{-\frac{1}{2}}={\bf I}$.
Here, ${\bf D}$ is an infinite diagonal matrix such that ${\bf D}_{l,l}=\frac{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})}{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})+\mathcal{S}_{\eta}(f-lf_{s})}$.
In other words, the upper bound is the sum of the $M$ largest ${\bf D}_{i,i}$
which are associated with $M$ frequency points of highest SNR $\frac{\left|H(f-lf_{s})\right|^{2}\mathcal{S}_{X}(f-lf_{s})}{\mathcal{S}_{\eta}(f-lf_{s})}$.
Therefore, when restricted to the set of all permutations of $\left\{ \mathcal{S}_{X}(f),\mathcal{S}_{X}(f\pm f_{s}),\cdots\right\} $,
the minimum MSE is achieved when assigning the $M$ largest $\mathcal{S}_{X}(f-lf_{s})$
to $M$ branches with the largest SNR. In this case, the corresponding
optimal filter can be chosen such that
\begin{equation}
S_{k}(f-lf_{s})=\begin{cases}
1, & \quad\mbox{if }l=\hat{k}\\
0, & \quad\mbox{otherwise.}
\end{cases}
\end{equation}
where $\hat{k}$ is the index of the $k^{\text{th}}$ largest element
in $\left\{ \left|H(f-lf_{s})\right|^{2}/\mathcal{S}_{\eta}(f-lf_{s}):l\in\mathbb{Z}\right\} $.
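As a sanity check of the bound (d) and of this selection rule, the following Python sketch (all numerical values are placeholder assumptions, not derived from the channel model) verifies that the trace objective of an arbitrary prefilter never exceeds the sum of the $M$ largest diagonal entries of ${\bf D}$, and that the $0/1$ selection prefilter attains this bound:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 3                               # N alias indices, M branches (toy sizes)
Fh = np.diag(rng.uniform(0.1, 1.0, N))    # stands in for F_h (diagonal)
FX = np.diag(rng.uniform(0.1, 1.0, N))    # stands in for F_X
Fe = np.diag(rng.uniform(0.1, 1.0, N))    # stands in for F_eta
D = Fh @ Fh @ FX @ np.linalg.inv(Fh @ Fh + Fe)  # D_ll = |H|^2 S_X/(|H|^2 S_X + S_eta)

def objective(Fs):
    A = Fs @ (Fh @ Fh + Fe) @ Fs.T
    return np.trace(np.linalg.inv(A) @ Fs @ Fh @ FX @ Fh @ Fs.T)

Fs_rand = rng.normal(size=(M, N))                 # arbitrary prefilter
best = np.argsort(np.diag(D))[::-1][:M]           # M best-SNR alias indices
Fs_opt = np.zeros((M, N))
Fs_opt[np.arange(M), best] = 1.0                  # the 0/1 selection prefilter
bound = np.sort(np.diag(D))[-M:].sum()
print(objective(Fs_rand) <= bound + 1e-9,         # the bound holds
      np.isclose(objective(Fs_opt), bound))       # and is attained
\end{verbatim}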
\begin{comment}
\section{Proof of Lemma \ref{lem-FourierSymbolIdealSampler}\label{sec:Proof-of-Lemma-Fourier-Symbol-Ideal-Sampler}}
Since ${\bf F}(f)$ is a $1\times k$ vector, the unique singular
value is given as
\begin{equation}
\sigma\left({\bf F}(f)\right)=\sqrt{\sum_{i=0}^{k-1}\left|{\bf F}_{i}(f)\right|^{2}}
\end{equation}
Recall that the frequency response of the channel model is given as
\begin{equation}
H(f)={\displaystyle \int}_{-\infty}^{\infty}h(\tau)\exp\left(-\hat{j}2\pi f\tau\right)\text{d}\tau.
\end{equation}
Observing that ${\bf F}_{i}(f)$ is just the Fourier transform of
the sampled version of $h(t)$, we can alternatively obtain \cite{Opp1999}
\begin{align*}
{\bf F}_{i}(f) & =\Delta\sum_{l=-\infty}^{+\infty}h(lT_{s}+i\Delta)\exp\left(-\frac{\hat{j}2\pi lf}{f_{s}}\right)\\
& =\Delta{\displaystyle \int}_{-\infty}^{+\infty}\left(h(t+i\Delta)\sum_{l=-\infty}^{+\infty}\delta\left(t-lT_{s}\right)\right)\exp\left(-\hat{j}2\pi ft\right)\text{d}t\\
& =\frac{\Delta}{T_{s}}\sum_{l=-\infty}^{+\infty}H\left(f-lf_{s}\right)\exp\left(\hat{j}2\pi\left(f-lf_{s}\right)i\Delta\right)
\end{align*}
Therefore, we can derive
\begin{align*}
\sum_{i=0}^{k-1}\left|{\bf F}_{i}(f)\right|^{2} & =\frac{\Delta^{2}}{T_{s}^{2}}\sum_{i=0}^{k-1}\left|\sum_{l=-\infty}^{+\infty}H\left(f-lf_{s}\right)\exp\left(\hat{j}2\pi\left(f-lf_{s}\right)i\Delta\right)\right|^{2}\\
& =\frac{\Delta^{2}}{T_{s}^{2}}\sum_{i=0}^{k-1}\sum_{l_{1},l_{2}}H\left(f-l_{1}f_{s}\right)H^{*}\left(f-l_{2}f_{s}\right)\exp\left(\hat{j}2\pi\left(l_{2}-l_{1}\right)f_{s}i\Delta\right)\\
& =\frac{\Delta^{2}}{T_{s}^{2}}\sum_{l_{1},l_{2}}H\left(f-l_{1}f_{s}\right)H^{*}\left(f-l_{2}f_{s}\right)\sum_{i=0}^{k-1}\exp\left(\hat{j}\frac{2\pi\left(l_{2}-l_{1}\right)i}{k}\right)\\
& =\frac{\Delta^{2}}{T_{s}^{2}}\sum_{l_{1},l_{2}}H\left(f-l_{1}f_{s}\right)H^{*}\left(f-l_{2}f_{s}\right)k\delta[l_{1}-l_{2}]\\
& =\frac{\Delta}{T_{s}}\left\Vert {\bf V}_{H}(f,f_{s})\right\Vert _{2}^{2}
\end{align*}
Combining the above result yields
\begin{align*}
\sigma\left({\bf F}(f)\right) & =\sqrt{\sum_{i=0}^{k-1}\left|{\bf F}_{i}(f)\right|^{2}}\\
& =\sqrt{\frac{\Delta}{T_{s}}}\left\Vert {\bf V}_{H}(f,f_{s})\right\Vert _{2}
\end{align*}
\end{comment}
\section{Proofs of Auxiliary Lemmas \label{sec:Proofs-of-Auxiliary-Lemmas}}
\begin{comment}
\subsection{Proof of Lemma \ref{fact-integer} \label{sec:Proof-of-Fact-integer}}
Suppose that $T=nT_{s}+T_{0}$ where $0<T_{0}<T_{s}$. Then
\begin{align*}
\frac{1}{T}\sup I\left(x(0,T];\left\{ y[n]\right\} \right) & =\frac{nT_{s}}{T}\frac{1}{nT_{s}}\sup I\left(x(0,T];\left\{ y[n]\right\} \right)\\
& \geq\frac{nT_{s}}{T}\frac{1}{nT_{s}}\sup I\left(x(0,nT_{s}];\left\{ y[n]\right\} \right).
\end{align*}
Similarly, we have
\[
\frac{1}{T}\sup I\left(x(0,T];\left\{ y[n]\right\} \right)\leq\frac{\left(n+1\right)T_{s}}{T}\frac{1}{\left(n+1\right)T_{s}}\sup I\left(x(0,(n+1)T_{s}];\left\{ y[n]\right\} \right).
\]
Combining the above inequalities we obtain
\begin{align*}
\lim_{n\rightarrow\infty}\frac{nT_{s}}{T}\frac{1}{nT_{s}}\sup I\left(x(0,nT_{s}];\left\{ y[n]\right\} \right) & \leq\lim_{n\rightarrow\infty}\frac{1}{nT_{s}+T_{0}}\sup I\left(x(0,nT_{s}+T_{0}];\left\{ y[n]\right\} \right)\\
& \leq\lim_{n\rightarrow\infty}\frac{\left(n+1\right)T_{s}}{T}\frac{1}{\left(n+1\right)T_{s}}\sup I\left(x(0,(n+1)T_{s}];\left\{ y[n]\right\} \right).
\end{align*}
When the transmission time $T$ is an integer multiple of $T_{s}$,
the lower and upper bounds in the above inequality both equal the
channel capacity, which immediately gives the limit when $T$ is
not an integer multiple of $T_{s}$:
\begin{equation}
\lim_{n\rightarrow\infty}\frac{1}{nT_{s}+T_{0}}\sup I\left(x(0,nT_{s}+T_{0}];\left\{ y_{s}[n]\right\} \right)=\lim_{n\rightarrow\infty}\frac{1}{nT_{s}}\sup I\left(x(0,nT_{s}];\left\{ y_{s}[n]\right\} \right).
\end{equation}
Since $T_{0}$ can be arbitrarily chosen from the interval $(0,T_{s})$,
we conclude that
\begin{equation}
\lim_{T\rightarrow\infty}\frac{1}{T}\sup I\left(x(0,T];\left\{ y_{s}[n]\right\} \right)=\lim_{n\rightarrow\infty}\frac{1}{nT_{s}}\sup I\left(x(0,nT_{s}];\left\{ y_{s}[n]\right\} \right).
\end{equation}
\end{comment}
\subsection{Proof of Lemma \ref{lemmaAsymptoticEquivalenceSH}\label{sec:Proof-of-Lemma-Asymptotic-SH}}
For any $i\leq j$, we have
\begin{align}
& \left|\left(\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}-\hat{{\bf H}}^{n}\right)_{ij}\right|\nonumber \\
\leq & \left|\sum_{t=-\infty}^{-j}\tilde{{\bf h}}_{j-i+t}\tilde{{\bf h}}_{t}^{*}\right|+\left|\sum_{t=n-j+1}^{\infty}\tilde{{\bf h}}_{j-i+t}\tilde{{\bf h}}_{t}^{*}\right|.\label{eq:ResidualTerms}
\end{align}
Since $h(t)$ is absolutely integrable and Riemann integrable, for
sufficiently small $\Delta$, there exists a constant $c$ such that
$\sum_{i=-\infty}^{\infty}\left\Vert \tilde{{\bf h}}_{i}\right\Vert _{1}\leq c$.
In the following analysis, we define ${\bf R}^{1}$ and ${\bf R}^{2}$
to capture the two residual terms respectively, i.e.
\[
{\bf R}_{ij}^{1}=\sum_{t=-\infty}^{-j}\tilde{{\bf h}}_{j-i+t}\tilde{{\bf h}}_{t}^{*},\quad\text{and}\quad{\bf R}_{ij}^{2}=\sum_{t=n-j+1}^{\infty}\tilde{{\bf h}}_{j-i+t}\tilde{{\bf h}}_{t}^{*}.
\]
In order to prove that $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\sim\hat{{\bf H}}^{n}$,
we need to prove (1) $\lim_{n\rightarrow\infty}\frac{1}{n}\left\Vert \tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}-\hat{{\bf H}}^{n}\right\Vert _{\text{F}}^{2}=0$,
or equivalently, $\lim_{n\rightarrow\infty}\frac{1}{n}\left\Vert {\bf R}^{2}\right\Vert _{\text{F}}^{2}=0$
and $\lim_{n\rightarrow\infty}\frac{1}{n}\left\Vert {\bf R}^{1}\right\Vert _{\text{F}}^{2}=0$;
(2) the $\ell_{2}$ norms of both $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$
and $\hat{{\bf H}}^{n}$ are uniformly bounded, i.e. $\exists M_{\text{u}}$
such that $\left\Vert \tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\right\Vert _{2}\leq M_{\text{u}}<\infty$
and $\left\Vert \hat{{\bf H}}^{n}\right\Vert _{2}\leq M_{\text{u}}<\infty$
for all $n$.
(1) We first prove that $\lim_{n\rightarrow\infty}\frac{1}{n}\left\Vert \tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}-\hat{{\bf H}}^{n}\right\Vert _{\text{F}}^{2}=0$.
By our assumptions, we have $h(t)=o\left(t^{-\epsilon}\right)$ for
some $\epsilon>1$. Since $s(t)$ is absolutely integrable, $\tilde{h}(t)=o\left(t^{-\epsilon}\right)$
also holds. Without loss of generality, we suppose that $j\geq i$.
(a) if $i\geq n^{\frac{1}{2\epsilon}}$, by the assumption $\tilde{h}(t)=o\left(\frac{1}{t^{\epsilon}}\right)$
for some $\epsilon>1$, one has
\begin{align}
\left|{\bf R}_{ij}^{1}\right|\leq & \sum_{t=-\infty}^{-j}\left\Vert \tilde{{\bf h}}_{j-i+t}\right\Vert _{1}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{\infty}\nonumber \\
\leq & \left(\max_{\tau\geq n^{\frac{1}{2\epsilon}}}\left\Vert \tilde{{\bf h}}_{-\tau}\right\Vert _{1}\right)\sum_{t=-\infty}^{-j}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{\infty}\nonumber \\
\leq & \left(\max_{\tau\geq n^{\frac{1}{2\epsilon}}}\left\Vert \tilde{{\bf h}}_{-\tau}\right\Vert _{1}\right)\sum_{t=-\infty}^{-j}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{1}\leq\mbox{ }c\max_{\tau\geq n^{\frac{1}{2\epsilon}}}\left\Vert \tilde{{\bf h}}_{-\tau}\right\Vert _{1}\nonumber \\
= & \mbox{ }kc\cdot o\left(\frac{1}{\sqrt{n}}\right)=o\left(\frac{1}{\sqrt{n}}\right).\label{eq:Rij_1_a}
\end{align}
(b) if $j\geq n^{\frac{1}{2\epsilon}}$,
\begin{align}
\left|{\bf R}_{ij}^{1}\right|\leq & \sum_{t=-\infty}^{-j}\left\Vert \tilde{{\bf h}}_{j-i+t}\right\Vert _{1}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{\infty}\nonumber \\
\leq & \left(\sum_{t=-\infty}^{-j}\left\Vert \tilde{{\bf h}}_{j-i+t}\right\Vert _{1}\right)\max_{\tau\leq-j}\left\Vert \tilde{{\bf h}}_{\tau}\right\Vert _{\infty}\nonumber \\
\leq & \mbox{ }c\max_{\tau\geq n^{\frac{1}{2\epsilon}}}\left\Vert \tilde{{\bf h}}_{-\tau}\right\Vert _{\infty}\nonumber \\
= & \mbox{ }c\cdot o\left(\frac{1}{\sqrt{n}}\right)=o\left(\frac{1}{\sqrt{n}}\right).\label{eq:Rij_1_b}
\end{align}
(c) if $j<n^{\frac{1}{2\epsilon}}$ and $i<n^{\frac{1}{2\epsilon}}$,
we have
\begin{align}
\left|{\bf R}_{ij}^{1}\right|^{2} & \leq\left(\sum_{t=-\infty}^{\infty}\left\Vert \tilde{{\bf h}}_{j-i+t}\right\Vert _{1}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{\infty}\right)^{2}\nonumber \\
& \leq\left(\sum_{t=-\infty}^{\infty}\left\Vert \tilde{{\bf h}}_{j-i+t}\right\Vert _{1}\right)^{2}\left(\max_{t}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{\infty}\right)^{2}\nonumber \\
& \leq\left(\sum_{t=-\infty}^{\infty}\left\Vert \tilde{{\bf h}}_{j-i+t}\right\Vert _{1}\right)^{2}\left(\sum_{t=-\infty}^{\infty}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{1}\right)^{2}\nonumber \\
& \leq c^{4}.\label{eq:Rij_1_c}
\end{align}
By combining inequality (\ref{eq:Rij_1_a}), (\ref{eq:Rij_1_b})
and (\ref{eq:Rij_1_c}), we can obtain
\begin{align*}
& \lim_{n\rightarrow\infty}\frac{1}{n}\left\Vert {\bf R}^{1}\right\Vert _{\text{F}}^{2}\\
= & \lim_{n\rightarrow\infty}\frac{1}{n}\left(\sum_{i,j<n^{\frac{1}{2\epsilon}}}\left|{\bf R}_{ij}^{1}\right|^{2}+\sum_{i\geq n^{\frac{1}{2\epsilon}}\text{ or }j\geq n^{\frac{1}{2\epsilon}}}\left|{\bf R}_{ij}^{1}\right|^{2}\right)\\
\leq & \lim_{n\rightarrow\infty}\frac{1}{n}\left[n^{\frac{1}{\epsilon}}\max_{i,j<n^{\frac{1}{2\epsilon}}}\left|{\bf R}_{ij}^{1}\right|^{2}+2n^{1+\frac{1}{2\epsilon}}\max_{i\text{ or }j\geq n^{\frac{1}{2\epsilon}}}\left|{\bf R}_{ij}^{1}\right|^{2}\right]\\
= & \lim_{n\rightarrow\infty}\frac{1}{n}\left[n^{\frac{1}{\epsilon}}c^{4}+2n^{1+\frac{1}{2\epsilon}}o\left(\frac{1}{n}\right)\right]=0.
\end{align*}
Similarly, we can show that
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\left\Vert {\bf R}^{2}\right\Vert _{\text{F}}^{2}=0,
\]
which immediately implies that
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\left\Vert \hat{{\bf H}}^{n}-\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\right\Vert _{\text{F}}^{2}=0.
\]
(2) We now proceed to show that $\left\Vert \tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\right\Vert _{2}$
and $\left\Vert \hat{{\bf H}}^{n}\right\Vert _{2}$ are uniformly
bounded. Since $\hat{{\bf H}}^{n}$ is a Toeplitz matrix, applying
\cite[Lemma 6]{Gray06} and \cite[Section 4.1]{Gray06} yields
\begin{align*}
\left\Vert \hat{{\bf H}}^{n}\right\Vert _{2} & \leq2\sum_{i=0}^{\infty}\sum_{t=-\infty}^{\infty}\left|\tilde{{\bf h}}_{i+t}\tilde{{\bf h}}_{t}^{*}\right|\\
& \leq2\sum_{t=-\infty}^{+\infty}\left\Vert \tilde{{\bf h}}_{t}\right\Vert _{\infty}\sum_{i=0}^{\infty}\left\Vert \tilde{{\bf h}}_{i+t}\right\Vert _{1}\leq2c^{2}.
\end{align*}
Additionally, since $\tilde{{\bf H}}^{n}$ is a block Toeplitz matrix,
\cite[Corollary 4.2]{Tilli98} allows us to bound the norm as
\begin{align*}
\left\Vert \tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\right\Vert _{2} & =\left\Vert \tilde{{\bf H}}^{n}\right\Vert _{2}^{2}\leq\left\Vert {\bf F}_{\tilde{h}}(\omega)\right\Vert _{\infty}^{2}=\sup_{\omega}\sum_{i=0}^{k-1}\left|{\bf F}_{\tilde{h},i}(\omega)\right|^{2}\\
& \leq\sum_{j=0}^{\infty}\left(\sum_{i=0}^{k-1}\left|\left(\tilde{{\bf h}}_{j}\right)_{i}\right|\right)^{2}\leq\left(\sum_{j=0}^{\infty}\left\Vert \tilde{{\bf h}}_{j}\right\Vert _{1}\right)^{2}\leq c^{2}.
\end{align*}
Hence, by definition of asymptotic equivalence, we have $\hat{{\bf H}}^{n}\sim\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$.
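The vanishing of the per-dimension Frobenius gap can also be illustrated numerically. The sketch below (a scalar, $k=1$, real kernel with an illustrative decay rate, not the paper's channel) compares the finite-section product with the Toeplitz matrix built from the full autocorrelation:
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def weighted_gap(n, taps=60):
    d = np.arange(-taps, taps + 1)
    h = np.exp(-0.3 * np.abs(d))          # absolutely summable toy kernel
    idx = np.subtract.outer(np.arange(n), np.arange(n))  # i - j
    Hn = np.where(np.abs(idx) <= taps,
                  np.take(h, taps - idx, mode="clip"), 0.0)
    r = np.correlate(h, h, mode="full")   # r[m] = sum_t h[m+t] h[t]; lag 0 at 2*taps
    col = np.zeros(n)
    m = min(n, 2 * taps + 1)
    col[:m] = r[2 * taps: 2 * taps + m]   # r vanishes beyond the kernel support
    Hhat = toeplitz(col)                  # Toeplitz matrix of the full autocorrelation
    return np.linalg.norm(Hn @ Hn.T - Hhat, "fro") ** 2 / n

for n in (50, 100, 200, 400):
    print(n, weighted_gap(n))             # decreases toward 0 as n grows
\end{verbatim}
Only the $O(1)$ boundary rows of the finite section differ from the full autocorrelation, which is why the squared Frobenius norm stays bounded while the $1/n$ factor drives the gap to zero.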
\subsection{Proof of Lemma \ref{lem-S-Inverse-Asymptotic}\label{sec:Proof-of-Lemma-S-Inverse-Asymptotic}}
We know that ${\bf S}^{n}{\bf S}^{n*}=\hat{{\bf S}}^{n}$, hence,
${\bf C}^{n}\sim\hat{{\bf S}}^{n}={\bf S}^{n}{\bf S}^{n*}$. Recall
that $\left(\hat{{\bf S}}^{n}\right)_{1i}=\sum_{t=-\infty}^{\infty}{\bf s}_{i-1+t}{\bf s}_{t}^{*}$.
For a given $k$, the Fourier series related to $\left\{ {\bf C}^{n}\right\} $
can be given as
\begin{equation}
F_{c}^{k}(\omega)=\sum_{i=-\infty}^{\infty}\left(\sum_{t=-\infty}^{\infty}{\bf s}_{i+t}{\bf s}_{t}^{*}\right)\exp(ji\omega).
\end{equation}
By Lemma \ref{lemmaMultiplicationInverse}, in order to show $\left({\bf C}^{n}\right)^{-1}\sim\left({\bf S}^{n}{\bf S}^{n*}\right)^{-1}$,
we will need to show that $F_{c}^{k}(\omega)$ is uniformly bounded
away from 0.
When $k$ is sufficiently large, the Riemann integrability of $s(t)$
implies that
\begin{align*}
F_{c}^{k}(\omega) & \overset{\cdot}{=}\Delta\sum_{i=-\infty}^{\infty}\left(\int_{-\infty}^{+\infty}s(t+iT_{s})s(t)^{*}\text{d}t\right)\exp\left(ji\omega\right)\\
& =\Delta\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty}s(t+\tau)s(t)^{*}\text{d}t\right)\\
& \quad\quad\quad\quad\cdot\left(\sum_{i=-\infty}^{\infty}\delta\left(\tau-iT_{s}\right)\right)\exp\left(j\frac{\omega}{T_{s}}\tau\right)\text{d}\tau.
\end{align*}
We observe that
\begin{align*}
& \int_{-\infty}^{+\infty}\left(\int_{-\infty}^{\infty}s(t+\tau)s(t)^{*}\text{d}t\right)\exp\left(j\frac{\omega}{T_{s}}\tau\right)\text{d}\tau\\
= & \left(\int_{-\infty}^{\infty}s(t+\tau)\exp\left(j\frac{\omega}{T_{s}}\left(t+\tau\right)\right)\text{d}\tau\right)\\
& \quad\quad\left(\int_{-\infty}^{+\infty}s(t)\exp\left(j\frac{\omega}{T_{s}}t\right)\text{d}t\right)^{*}\\
= & \left|S\left(-j\frac{\omega}{T_{s}}\right)\right|^{2}.
\end{align*}
Since $F_{c}^{k}(\omega)$ corresponds to the Fourier transform of
the signals obtained by uniformly sampling $\int_{-\infty}^{\infty}s(t+\tau)s(t)^{*}\text{d}t$,
we can immediately see that
\begin{equation}
\lim_{\Delta\rightarrow0}F_{c}^{k}(\omega)=\frac{\Delta}{T_{s}}\sum_{i=-\infty}^{\infty}\left|S\left(-j\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right|^{2}.
\end{equation}
If for all $\omega\in\left[-\pi,\pi\right]$, we have
\begin{equation}
\sum_{i=-\infty}^{\infty}\left|S\left(-j\left(\frac{\omega}{T_{s}}-\frac{i2\pi}{T_{s}}\right)\right)\right|^{2}\geq\epsilon_{s}>0
\end{equation}
for some constant $\epsilon_{s}$, then $\sigma_{\min}\left({\bf C}^{n}\right)=\inf_{\omega}F_{c}^{k}(\omega)\geq\frac{\Delta\epsilon_{s}}{T_{s}}$,
which leads to $\left\Vert \left({\bf C}^{n}\right)^{-1}\right\Vert _{2}\leq\frac{T_{s}}{\Delta\epsilon_{s}}$.
Let $\Xi^{n}={\bf C}^{n}-{\bf S}^{n}{\bf S}^{n*}$. Since ${\bf S}^{n}{\bf S}^{n*}\sim{\bf C}^{n}$,
we have $\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\left\Vert {\bf \Xi}^{n}\right\Vert _{F}=0$,
which implies that
\begin{align*}
\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\left\Vert {\bf \Xi}^{n}\left({\bf C}^{n}\right)^{-1}\right\Vert _{\text{F}} & \leq\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\left\Vert {\bf \Xi}^{n}\right\Vert _{\text{F}}\left\Vert \left({\bf C}^{n}\right)^{-1}\right\Vert _{2}\\
& \leq\lim_{n\rightarrow\infty}\frac{T_{s}}{\Delta\epsilon_{s}}\frac{1}{\sqrt{n}}\left\Vert {\bf \Xi}^{n}\right\Vert _{\text{F}}=0.
\end{align*}
The Neumann series expansion of $\left({\bf S}^{n}{\bf S}^{n*}\right)^{-1}$
yields
\begin{align*}
& \left({\bf S}^{n}{\bf S}^{n*}\right)^{-1}=\left({\bf C}^{n}-{\bf \Xi}^{n}\right)^{-1}\\
= & \left({\bf C}^{n}\right)^{-1}\left({\bf I}+{\bf \Xi}^{n}\left({\bf C}^{n}\right)^{-1}+\left({\bf \Xi}^{n}\left({\bf C}^{n}\right)^{-1}\right)^{2}+\cdots\right).
\end{align*}
Hence, we can bound
\begin{align*}
& \lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\left\Vert \left({\bf S}^{n}{\bf S}^{n*}\right)^{-1}-\left({\bf C}^{n}\right)^{-1}\right\Vert _{\text{F}}\\
\leq & \lim_{n\rightarrow\infty}\left\Vert \left({\bf C}^{n}\right)^{-1}\right\Vert _{2}\left(\sum_{i=1}^{\infty}\left(\frac{1}{\sqrt{n}}\left\Vert {\bf \Xi}^{n}\left({\bf C}^{n}\right)^{-1}\right\Vert _{\text{F}}\right)^{i}\right)=0.
\end{align*}
\subsection{Proof of Lemma \ref{lem-asymptoticSpectralPropertyGeneralSampling}
\label{sec:Proof-of-Lem-Asymptotic-Spectral-General-Sampling}}
Since $\left({\bf C}^{n}\right)^{-\frac{1}{2}}$ and $\left({\bf \hat{S}}^{n}\right)^{-\frac{1}{2}}$
are both Hermitian and positive semidefinite, we have $\left({\bf C}^{n}\right)^{-\frac{1}{2}}\sim\left(\hat{{\bf S}}^{n}\right)^{-\frac{1}{2}}$.
The asymptotic equivalence allows us to relate $\frac{1}{n}\sum_{i=1}^{n}g(\lambda_{i})$
to the function associated with the spectrum of the circulant matrix
${\bf C}^{n}$ instead of $\hat{{\bf S}}^{n}$. One nice property
is that $\left({\bf C}^{n}\right)^{-\frac{1}{2}}={\bf U}_{c}{\bf \Lambda}_{c}^{-\frac{1}{2}}{\bf U}_{c}^{*}$
is still a circulant matrix. Combining the above results with Lemma
\ref{lemmaAsymptoticEquivalenceEigenvalues} yields
\[
\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\sim\left({\bf C}^{n}\right)^{-\frac{1}{2}}\hat{{\bf H}}^{n}\left({\bf C}^{n}\right)^{-\frac{1}{2}}.
\]
Note that $\left({\bf C}^{n}\right)^{-\frac{1}{2}}\hat{{\bf H}}^{n}\left({\bf C}^{n}\right)^{-\frac{1}{2}}$
is simply multiplication of 3 Toeplitz matrices. This allows us to
untangle $F_{c}\left(\omega\right)$ and $F_{\hat{h}}(\omega)$, hence
separating $H(f)$ and $S(f)$.
Specifically, denote by $F_{c_{0.5}}\left(\omega\right)$, $F_{c}\left(\omega\right)$,
$F_{\hat{h}}\left(\omega\right)$, ${\bf F}_{\tilde{h}}\left(\omega\right)$
the Fourier series related to $\left({\bf C}^{n}\right)^{\frac{1}{2}}$
, ${\bf C}^{n}$ , $\hat{{\bf H}}^{n}$ and $\tilde{{\bf H}}^{n}$,
respectively. We note that $F_{c_{0.5}}\left(\omega\right)$, $F_{c}\left(\omega\right)$
and $F_{\hat{h}}\left(\omega\right)$ are all scalars since their
related matrices are Toeplitz, while ${\bf F}_{\tilde{h}}\left(\omega\right)$
is a $1\times k$ vector since $\tilde{{\bf H}}$ is block Toeplitz.
Then for any continuous function $g(x)$, applying \cite[Theorem 12]{Gray06}
yields
\begin{align*}
& \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}g\left\{ \lambda_{i}\left(\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\right)\right\} \\
= & \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}g\left\{ \lambda_{i}\left(\left({\bf C}^{n}\right)^{-\frac{1}{2}}\hat{{\bf H}}^{n}\left({\bf C}^{n}\right)^{-\frac{1}{2}}\right)\right\} \\
= & \frac{1}{2\pi}{\displaystyle \int}_{-\pi}^{\pi}g\left(F_{c_{0.5}}^{-1}\left(\omega\right)F_{\hat{h}}\left(\omega\right)F_{c_{0.5}}^{-1}\left(\omega\right)\right)\text{d}\omega\\
= & \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}g\left\{ \lambda_{i}\left(\left({\bf C}^{n}\right)^{-1}\hat{{\bf H}}^{n}\right)\right\} \\
= & \frac{1}{2\pi}{\displaystyle \int}_{-\pi}^{\pi}g\left(\frac{F_{\hat{h}}\left(\omega\right)}{F_{c}\left(\omega\right)}\right)\text{d}\omega.
\end{align*}
Now we only need to show that both $F_{\hat{h}}\left(\omega\right)$
and $F_{c}\left(\omega\right)$ have simple closed-form expressions.
We observe that $\hat{{\bf H}}^{n}$ is asymptotically equivalent
to $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$, and the eigenvalues
of $\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}$ are exactly the square
of the corresponding singular values of $\tilde{{\bf H}}^{n}$. Hence,
we know from \cite{Tilli98} that for any continuous function $g(x)$:
\begin{align*}
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}g\left\{ \lambda_{i}\left(\hat{{\bf H}}^{n}\right)\right\} & =\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}g\left\{ \sigma_{i}^{2}\left(\tilde{{\bf H}}^{n}\right)\right\} \\
& =\frac{1}{2\pi}{\displaystyle \int}_{-\pi}^{\pi}g\left(\sigma^{2}\left({\bf F}_{\tilde{h}}(\omega)\right)\right)\text{d}\omega
\end{align*}
where ${\bf F}_{\tilde{h}}(\omega)$ can be expressed as ${\bf F}_{\tilde{h}}(\omega)=\left[F_{\tilde{h},0}(\omega),\cdots,F_{\tilde{h},k-1}(\omega)\right]$.
Here, for any $0\leq i<k$:
\begin{align*}
F_{\tilde{h},i}(\omega) & :=\Delta\sum_{u=-\infty}^{+\infty}\tilde{h}_{u,i}\exp\left(ju\omega\right)\\
& =\Delta\sum_{u\in\mathbb{Z}}\tilde{h}(uT_{s}-i\Delta)\exp\left(ju\omega\right).
\end{align*}
The above analysis implies that $F_{\hat{h}}\left(\omega\right)=\sigma^{2}\left({\bf F}_{\tilde{h}}(\omega)\right)$.
Through algebraic manipulation, we have that
\[
F_{\tilde{h},i}(\omega)=\frac{\Delta}{T_{s}}\sum_{l\in\mathbb{Z}}\tilde{H}\left(-f+lf_{s}\right)\exp\left(-j2\pi\left(f-lf_{s}\right)i\Delta\right),
\]
where $\tilde{H}(f)=H(f)S(f)$, which yields
\begin{align*}
& F_{\hat{h}}\left(f\right)=\sigma^{2}\left({\bf F}_{\tilde{h}}(2\pi f)\right)=\sum_{i=0}^{k-1}\left|F_{\tilde{h},i}(2\pi f)\right|^{2}\\
= & \frac{\Delta^{2}}{T_{s}^{2}}\sum_{i=0}^{k-1}\left|\sum_{l=-\infty}^{+\infty}\tilde{H}\left(-f+lf_{s}\right)\exp\left(-j2\pi\left(f-lf_{s}\right)i\Delta\right)\right|^{2}\\
= & \frac{\Delta^{2}}{T_{s}^{2}}\sum_{i=0}^{k-1}\sum_{l_{1},l_{2}}\tilde{H}\left(-f+l_{1}f_{s}\right)\tilde{H}^{*}\left(-f+l_{2}f_{s}\right)\cdot\\
& \quad\quad\quad\exp\left(-j2\pi\left(l_{2}-l_{1}\right)f_{s}i\Delta\right)\\
= & \frac{\Delta^{2}}{T_{s}^{2}}\sum_{l_{1},l_{2}}\tilde{H}\left(-f+l_{1}f_{s}\right)\tilde{H}^{*}\left(-f+l_{2}f_{s}\right)\\
& \quad\quad\quad\left[\sum_{i=0}^{k-1}\exp\left(-j2\pi\left(l_{2}-l_{1}\right)\frac{i}{k}\right)\right]\\
= & \frac{\Delta}{T_{s}}\sum_{l}\left|H\left(-f+lf_{s}\right)S\left(-f+lf_{s}\right)\right|^{2}.
\end{align*}
Similarly, we have
\begin{equation}
F_{c}\left(f\right)=\frac{\Delta}{T_{s}}\sum_{l\in\mathbb{Z}}\left|S\left(-f+lf_{s}\right)\right|^{2}.
\end{equation}
Combining the above results yields
\begin{align*}
& \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}g\left\{ \lambda_{i}\left(\left({\bf S}^{n}{\bf S}^{n}\right)^{-\frac{1}{2}}\tilde{{\bf H}}^{n}\tilde{{\bf H}}^{n*}\left({\bf S}^{n}{\bf S}^{n*}\right)^{-\frac{1}{2}}\right)\right\} \\
= & T_{s}{\displaystyle \int}_{-f_{s}/2}^{f_{s}/2}g\left(\frac{\sum_{l=-\infty}^{+\infty}\left|H\left(-f+lf_{s}\right)S\left(-f+lf_{s}\right)\right|^{2}}{\sum_{l=-\infty}^{+\infty}\left|S\left(-f+lf_{s}\right)\right|^{2}}\right)\text{d}f.
\end{align*}
This completes the proof.
\subsection{Proof of Lemma \ref{lem: FourierSymbolModulationBank} \label{sec:Proof-of-Lemma-Fourier-Symbol-Modulation}}
Denote by ${\bf K}_{\alpha}^{\eta}$ the Fourier symbol associated
with the block Toeplitz matrix ${\bf G}_{\alpha}^{\eta}$. We know
that the Fourier transform of $g_{i}^{\eta}\left(t,\tau\right)$ with
respect to $\tau$ is given by
\begin{align*}
& {\displaystyle \int}_{\tau}g_{i}^{\eta}\left(t,\tau\right)\exp\left(-j2\pi f\tau\right)\mathrm{d}\tau\\
= & {\displaystyle \int}_{\tau_{2}}{\displaystyle \int}_{\tau_{1}}s_{i}\left(t-\tau_{1}\right)q_{i}\left(\tau_{1}\right)p_{i}\left(\tau_{1}-\tau_{2}\right)\exp\left(-j2\pi f\tau_{2}\right)\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\\
= & {\displaystyle \int}_{\tau_{2}}p_{i}\left(\tau_{1}-\tau_{2}\right)\exp\left(j2\pi f\left(\tau_{1}-\tau_{2}\right)\right)\mathrm{d}\tau_{2}\\
& \quad\quad{\displaystyle \int}_{\tau_{1}}s_{i}\left(t-\tau_{1}\right)q_{i}\left(\tau_{1}\right)\exp\left(-j2\pi f\tau_{1}\right)\mathrm{d}\tau_{1}\\
= & P_{i}\left(-f\right)\cdot\left[S_{i}\left(-f\right)\exp\left(-j2\pi tf\right)\cdot\sum_{u}c_{i}^{u}\delta\left(f-uf_{q}\right)\right]\\
= & P_{i}\left(-f\right)\cdot\left[\sum_{u}c_{i}^{u}S_{i}\left(-f+uf_{q}\right)\exp\left(-j2\pi t\left(f-uf_{q}\right)\right)\right].
\end{align*}
Introduce the notation $S_{i}^{e}(f)\overset{\Delta}{=}S_{i}(f)\exp\left(j2\pi l\tilde{T}_{s}f\right)$.
For any $\left(l,m\right)$ such that $1\leq l\leq a$ and $1\leq m\leq ak$,
the $\left(l,m\right)$ entry of the Fourier symbol ${\bf K}_{\alpha}^{\eta}$
can be related to the sampling sequence of $g_{\alpha}^{\eta}\left(l\tilde{T}_{s},\tau\right)$
at a rate $\frac{\tilde{f}_{s}}{a}$ with a phase shift $m\Delta$,
and hence it can be calculated as follows
\begin{align*}
\left({\bf K}_{\alpha}^{\eta}\right)_{l,m}= & \sum_{v}P_{\alpha}\left(-f+v\frac{f_{q}}{b}\right)\exp\left(j2\pi\left(f-v\frac{f_{q}}{b}\right)m\Delta\right)\\
& \quad\quad\cdot\left[\sum_{u}c_{\alpha}^{u}S_{\alpha}^{e}\left(-f+uf_{q}+v\frac{f_{q}}{b}\right)\right].
\end{align*}
Using the fact that $\sum_{m=0}^{ak-1}\exp\left(j2\pi\left(\left(v_{2}-v_{1}\right)\frac{f_{q}}{b}\right)m\Delta\right)=ak\delta\left[v_{2}-v_{1}\right]$,
we get through algebraic manipulation that
\begin{align*}
\left({\bf K}_{\alpha}^{\eta}{\bf K}_{\beta}^{\eta*}\right)_{l,d}= & ak\sum_{v}P_{\alpha}\left(-f+v\frac{f_{q}}{b}\right)\cdot P_{\beta}^{*}\left(-f+v\frac{f_{q}}{b}\right)\\
& \quad\quad\quad\left[\sum_{u_{1}}c_{\alpha}^{u_{1}}S_{\alpha}^{e}\left(-f+u_{1}f_{q}+v\frac{f_{q}}{b}\right)\right]\cdot\\
& \quad\quad\quad\left[\sum_{u_{2}}c_{\beta}^{u_{2}}S_{\beta}^{e}\left(-f+u_{2}f_{q}+v\frac{f_{q}}{b}\right)\right]^{*}.
\end{align*}
Define another matrix ${\bf F}_{\alpha}^{\eta}$ such that
\begin{align*}
\left({\bf F}_{\alpha}^{\eta}\right)_{l,v} & =\sum_{u}c_{\alpha}^{u}S_{\alpha}\left(-f+uf_{q}+v\frac{f_{q}}{b}\right)\cdot\\
& \quad\quad\quad\exp\left(-j2\pi l\tilde{T}_{s}\left(f-uf_{q}-v\frac{f_{q}}{b}\right)\right).
\end{align*}
It can be easily seen that
\[
{\bf K}_{\alpha}^{\eta}{\bf K}_{\beta}^{\eta*}=ak{\bf F}_{\alpha}^{\eta}{\bf F}_{\alpha}^{p}{\bf F}_{\beta}^{p*}{\bf F}_{\beta}^{\eta*}.
\]
Replacing $P_{\alpha}$ by $P_{\alpha}H$ immediately gives us the
Fourier symbol for ${\bf G}_{\alpha}^{h}{\bf G}_{\beta}^{h*}$.
\bibliographystyle{IEEEtran}
Observations of thermal dust continuum emission (e.g., Kenyon \&
Hartmann 1987; Beckwith \& Sargent 1993; Kitamura et al. 2002) and, more
directly, images of light scattered from dust grains
(e.g., Roddier et al. 1996; Itoh et al. 2003; Duchene et al. 2004)
have revealed that young stellar objects have circumstellar disks.
In addition, a gaseous component has been
detected from the circumstellar disks around T Tauri stars (e.g., Carr
1989; Dutrey et al. 1997, 2007; Thi et al. 2004; Qi et al. 2006; Najita
et al. 2007; Bergin et al. 2007). Recent high spectral
resolution and high sensitivity observations have made it possible to
detect line emission of molecular hydrogen gas, which is the major
component of the gas in the protoplanetary disks (Thi et al. 1999,
2001a, b; Richter et al. 2002; Sheret et al. 2003; Sako et al. 2005;
Weintraub et al. 2000; Bary et al. 2002, 2003; Itoh et al. 2003;
Herczeg et al. 2002, 2004, 2006; Bergin et al. 2004).
Furthermore, detection of molecular hydrogen lines in the near- and
mid-infrared
wavelength bands has been reported towards T
Tauri stars and Herbig Ae/Be stars very recently
(Weintraub et al. 2005; Bary et al. 2007, in preparation; Richter et
al. 2007; Bitner et al. 2007, in preparation).
In a previous paper (Nomura \& Millar 2005, hereafter Paper I), we
constructed a model for molecular hydrogen emission from a
protoplanetary disk that is irradiated by strong ultraviolet (UV)
radiation from a central star, and whose dust component has the same
properties as dense molecular cloud dust.
Now, it is known observationally that many young stellar objects
emit strong X-ray radiation (Koyama et al. 1994; Feigelson \&
Montmerle 1999; Tsujimoto et al. 2002; Imanishi et al. 2003; Getman et
al. 2005; Preibisch et al. 2005) which ionizes hydrogen gas and
could be one
of the important heating sources of gas in protoplanetary disks,
in addition to the grain photoelectric
heating induced by far UV radiation from the central star (cf. Kamp \&
Dullemond 2004; Jonkheid et al. 2004; Dullemond et al. 2007;
Paper I). Actually,
some model calculations show that X-ray irradiation can heat the gas
to very high temperatures at the surface layer of the disks
(Glassgold \& Najita 2001; Gorti
\& Hollenbach 2004; Glassgold et al. 2004; Kamp et al. 2005).
Furthermore, it has been suggested that fast secondary electrons
produced by X-ray ionization, similar to the electrons induced by
cosmic-ray ionization, can pump molecular hydrogen into excited
electronic states and may be important, for example, in extra-galactic
objects, Herbig-Haro objects, and supernova remnants
(e.g., Shemansky et al. 1985; Gredel et al. 1989; Gredel \& Dalgarno
1995; Tin\'e et al. 1997). This X-ray pumping could be observable
toward disks which are irradiated by strong X-ray
radiation from their central stars (e.g., Bergin et al. 2004).
As disks evolve, it is believed that the dust particles in the disks coagulate
and settle toward the disk midplane as the first step of planet
formation. The dust dynamics in this stage have been studied
theoretically in many works (e.g., Weidenschilling 1980, 1997; Nakagawa
et al. 1981, 1986; Mizuno et al. 1988; Mizuno 1989; Cuzzi et al. 1993;
Schmitt et al. 1997; Nomura \& Nakagawa 2006). This dust evolution is
expected to affect observational properties of the disks, and some
model calculations have been done in order to study the effect (e.g.,
Miyake \& Nakagawa 1993, 1995; D'Alessio et al. 2001, 2006; Dullemond \&
Dominik 2004; Jonkheid et al. 2004, 2006, 2007; Rettig et al. 2006;
Aikawa \& Nomura 2006). In
addition, some numerical calculations of the dust evolution have been
done by solving the coagulation equation for settling dust particles,
and its effect on the spectral energy distribution of thermal dust
emission from the disks investigated (Suttner \&
Yorke 2001; Tanaka et al. 2005; Dullemond \& Dominik 2005).
In this paper, we further examine the effects of the dust evolution on
the physical structure of the gas in the disks and on molecular hydrogen
emission by using both a simple dust model and a
numerical calculation of the coagulation equation.
Historically, line emission from molecular hydrogen has been
observed towards various kinds of astronomical objects, such as shock
fronts associated with star forming regions, reflection nebulae,
planetary nebulae, supernova remnants, external galaxies, and so on
(e.g., Beckwith et al. 1978; Brown et al. 1983; Hasegawa et al. 1987;
Burton et al. 1992).
The observed line ratios probe the physical properties of these
objects as they reflect the
excitation mechanisms of molecular hydrogen, e.g., thermal collisions,
ultraviolet and X-ray pumping, and formation pumping
(e.g., Black \& van Dishoeck 1987;
Sternberg \& Dalgarno 1989; Tanaka et al. 1989; Tin\'e et al. 1997;
Takahashi \& Uehara 2001).
In this work we propose a possible
observational diagnostic of the dust evolution in protoplanetary disks
using line spectra and the line ratios of molecular hydrogen.
In the following sections, we model the density and temperature
profiles of the gas and dust in protoplanetary disks, taking into
account the X-ray and UV irradiation from a central star, as well as
dust growth and settling towards the disk midplane.
Then, using the physical structure, we calculate the level populations and
line emission of molecular hydrogen.
In \S 2,
we introduce the models we use in this work. For the dust evolution,
we use both a simple model and a more realistic model in which we
solve the coagulation equation for settling dust particles (\S\ref{S2.1}).
The X-ray and UV radiation fields are computed 1+1 dimensionally
(\S\ref{S2.2}), the density and temperature profiles of the gas and dust
in the disks are obtained by assuming vertical hydrostatic equilibrium
and local thermal and radiative equilibrium (\S\ref{S2.3}). The level
populations of molecular hydrogen are calculated under an assumption of
statistical equilibrium from which we get the molecular
hydrogen emission by solving the radiative transfer equation (\S\ref{S2.4}).
In \S 3, we present the resulting dust size and spatial distributions
(\S\ref{S3.1}), the
physical structure of the disks (\S\ref{S3.2}), the level populations of
molecular hydrogen (\S\ref{S3.3}), and the line spectra and line ratios
of molecular hydrogen (\S\ref{S3.4}), in which the effects of
the X-ray irradiation and the dust evolution are discussed.
Finally, the results are summarized in \S 4.
\section{Models}
\subsection{Spatial and Size Distributions of Dust Particles}\label{S2.1}
The physical structure and the molecular hydrogen emission of the disks
are affected by the dust model in various ways; for example, the UV radiation
field through dust extinction (\S\ref{S2.2}), the dust temperature through
optical properties of dust grains, the gas temperature through grain
photoelectric heating
and dust-gas collisions (\S\ref{S2.3}), and the molecular hydrogen formation
rate on dust grains (\S\ref{S2.4}).
In this paper we use the following two types of model for the dust size
and spatial distributions. In model A we adopt a very simple assumption
in order to understand the basic properties of the effects of dust evolution.
In model B we consider a more realistic case by numerically solving the
coagulation equation for settling dust particles.
In both models the shape of the dust particles is simply assumed to be a
compact sphere (see e.g., Kozasa et al. 1992; Ossenkopf
1993; Ormel et al. 2007 for fractal dust aggregate models). Making use
of the resulting
spatial and size distributions of dust grains, the dust absorption
($\kappa_{\nu}$) and scattering ($\sigma_{\nu}$) coefficients at each
position of the disk are computed by means of
the Mie theory (Bohren \& Huffman 1983), with the dust particles
assumed to consist of silicate, carbonaceous grains, and water ice (see
Paper I for details).
\subsubsection{Model A}
In model A we assume that
the dust grains have a spatially uniform distribution and the mass
fractional abundance of the dust with respect to the gas is
fixed at each position in the disk (i.e. the dust grains are well mixed with
the gas). The size distribution is set to be
$dn/da\propto a^{-3.5}$ ($a$ is the radius of dust grain) with the
maximum grain radii of $a_{\rm max}=10\mu$m, 1mm, and 10cm. The minimum
radius is set to be $0.01\micron$ for all models.
The amount of small dust grains decreases with increasing maximum
grain radius because we keep the mass fractional abundance of the dust
grains relative to the gas fixed (see
also \S 2.1.3) (e.g., Miyake \&
Nakagawa 1993; D'Alessio et al. 2001; Aikawa \& Nomura 2006).
\subsubsection{Model B}\label{S2.1.2}
In this model the spatial and size distributions of dust grains are
obtained by solving coagulation equations for various sizes of settling
dust particles,
\begin{displaymath}
\dfrac{\partial\varphi(i)}{\partial t}+\dfrac{\partial}{\partial z}[V_z(i)\varphi(i)]=-m_i\varphi(i)\sum_{j=1}^n\beta(i,j)\varphi(j)
\end{displaymath}
\begin{equation}
+\dfrac{1}{2}m_i\sum_{j=1}^{i-1}\beta(i-j,j)\varphi(i-j)\varphi(j), \label{eq.2-1}
\end{equation}
where $\varphi(i)$ is the mass density of dust particles in a mass bin
$i$ whose
summation is equal to the total mass density of the dust particles at a
given position and time as
\begin{equation}
\rho_{\rm dust}(x,z,t)=\sum_{i=1}^n\varphi(x,z,t,i). \label{eq.2-2}
\end{equation}
Here we briefly summarize the dust evolution model; more details can be
found in Nomura \& Nakagawa (2006). Now, $V_z(i)$ in equation
(\ref{eq.2-1}) is the vertical velocity
and $m_i$ is the typical mass of a particle in a mass bin $i$.
The symbol $\beta(i,j)$ is related to the sticking rate of two colliding
dust particles, given by
\begin{equation}
\beta(i,j)=\pi(a_i+a_j)^2\delta Vp_{\rm s}/m_im_j,
\end{equation}
where $a_i$ is the radius of a dust particle in a mass bin $i$, and we
simply assume a sticking probability of $p_{\rm s}=1$ in this paper.
The fragmentation of dust particles is simply neglected in this
work.
For the relative velocity between two colliding particles, $\delta V$,
we adopt
\begin{equation}
\delta V=(\delta V_{\rm B}^2+\delta V_z^2+\delta V_x^2+\delta V_{\rm T}^2)^{1/2}.
\end{equation}
The symbol $\delta V_{\rm B}=(8kT_d/\pi)^{1/2}(1/m_i+1/m_j)^{1/2}$
($k$ is Boltzmann's constant and $T_d$ is the dust temperature) is the
relative velocity caused by the thermal Brownian motion. The symbols
$\delta
V_z=V_z(i)-V_z(j)$ and $\delta V_x=V_x(i)-V_x(j)$ ($V_x(i)$ is the local
radial velocity component of dust particles with mass $m_i$, arising
from angular momentum loss via gas-dust friction) are the velocity
differences in the vertical and radial directions, respectively.
Finally, $\delta V_{\rm T}$ is the turbulence-induced
relative velocity (see Nomura \& Nakagawa 2006 for more details).
The disk is simply assumed to be completely quiescent or
turbulent, with $\delta V_{\rm T}=0$ in a quiescent disk model.
The mass flux in equation (\ref{eq.2-1}) is given by
\begin{equation}
V_z(i)\varphi(i)=-\dfrac{\Omega_{\rm K}^2z}{A\rho}\varphi(i)
\end{equation}
in a quiescent disk, where $\Omega_{\rm K}$ is the Keplerian
frequency and $\rho$ is the gas density.
For the drag coefficient between the gas and dust particles, $A$, we
adopt $A={c_{\rm s}}/(\rho_{s}a)$ for $a\la l_g$ and $A=3{c_{\rm s}}l_{g}/(2\rho_{s}a^{2})$ for
$a\ga l_g$, following Epstein's and Stokes' laws, respectively, where
${c_{\rm s}}$, $\rho_s$, and $l_g$ are the sound speed of the gas, the solid
density of a dust particle, and the mean free path of gas particles,
respectively. The mean velocity of the dust particles
in the vertical direction, $V_z(i)=\Omega_{\rm K}^2z/A\rho$, is obtained by
balancing the gas-dust friction force, $A\rho V_z(i)$, and the
gravitational force in the vertical direction, $\Omega_{\rm K}^2z$. In a
turbulent disk, the mass flux is written as
\begin{equation}
V_z(i)\varphi(i)=-\dfrac{\Omega_{\rm K}^2z}{A\rho}\varphi(i)-D_0\rho\dfrac{\partial[\varphi(i)/\rho]}{\partial z},
\end{equation}
following the gradient diffusion hypothesis.
For the turbulent diffusivity, we adopt $D_0=\alpha'{c_{\rm s}}
H/(1+\Omega_{\rm K}/A\rho)$, where $H={c_{\rm s}}_0/\Omega_{\rm K}$ (${c_{\rm s}}_0$ is
the sound speed at the disk midplane) is the scale height of the
disk and we set $\alpha'=10^{-4}$ in this paper.
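As an illustration of the discrete update implied by equation (\ref{eq.2-1}), the following Python sketch advances a zero-dimensional (no settling flux) version of the coagulation equation on a linear mass grid, where bin indices add exactly; the kernel normalization and all numbers are toy assumptions rather than the disk model parameters:
\begin{verbatim}
import numpy as np

n = 40
m = np.arange(1.0, n + 1.0)                   # m_i = i (units of the smallest mass)
phi = np.zeros(n); phi[0] = 1.0               # all mass initially in the smallest bin
a = m ** (1.0 / 3.0)                          # radius ~ m^(1/3) for compact spheres
beta = 1e-2 * (a[:, None] + a[None, :]) ** 2 / np.outer(m, m)  # toy sticking kernel

def step(phi, dt):
    loss = m * phi * (beta @ phi)             # -m_i phi_i sum_j beta_ij phi_j
    gain = np.zeros_like(phi)
    for i in range(1, n):                     # 0-indexed bins: masses j+1 and i-j sum to i+1
        j = np.arange(i)
        gain[i] = 0.5 * m[i] * np.sum(beta[i - 1 - j, j] * phi[i - 1 - j] * phi[j])
    return phi + dt * (gain - loss)

for _ in range(500):
    phi = step(phi, dt=0.05)
print("total dust mass:", phi.sum())          # conserved up to overflow past bin n
\end{verbatim}
A logarithmic mass grid, as used in practice, additionally requires remapping the merged mass between neighboring bins.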
The global radial motion of the dust particles toward the central star
is not taken into account in this work.
At the disk surface this simplified treatment is applicable because
large dust particles settle toward the disk midplane more rapidly than
they move toward the central star (Figs.~\ref{f3} and \ref{f4} in
\S\ref{S3.1} actually show that large particles cannot stay in the
surface layer).
We note that the radial motion is negligible for the small dust
particles which couple with the gas efficiently via friction force
(e.g., Adachi et al. 1976; Weidenschilling 1977; Takeuchi \& Lin 2005).
Therefore, neglect of radial global motion will not affect our results
of molecular hydrogen lines which are mainly emitted in the disk surface
(see Paper I). In a completely quiescent disk the radial motion is
negligible throughout the disk (e.g., Nakagawa et al. 1986).
Now, in order to avoid producing very large particles, which do not couple
with the gas and should fall rapidly towards the central star, we remove
particles larger than some critical radius;
that is, we simply assume that
once the dust particles grow large enough that they cannot be trapped in
a turbulent eddy, they gain very rapid radial motion. The critical radius
is estimated as $a_{\rm crit}={c_{\rm s}}\rho_{\rm gas}/\rho_s\Omega_{\rm K}$
for $a\la l_g$ and $a_{\rm crit}=(3{c_{\rm s}}\rho_{\rm
gas}l_g/2\rho_s\Omega_{\rm K})^{1/2}$ for $a\ga l_g$ by comparing the
friction time between the gas and dust particles, $\tau_f=1/A\rho$, and
the turnover time of the largest turbulent eddy, $\tau_{\rm
eddy}=1/\Omega_{\rm K}$. We note that the dust particles grow to be
larger than $a_{\rm crit}$ only
close to the midplane of a turbulent disk (see \S\ref{S3.1}).
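For orientation, the Epstein-regime critical radius can be evaluated directly; the midplane sound speed and gas density below are illustrative assumptions for $x=1$ AU, not outputs of the disk model:
\begin{verbatim}
import numpy as np

G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13      # cgs
x = 1.0 * AU
Omega_K = np.sqrt(G * 0.5 * Msun / x**3)        # Keplerian frequency, M_* = 0.5 Msun
c_s = 1.0e5                                     # sound speed, cm s^-1 (assumed)
rho_gas = 1.0e-9                                # midplane gas density, g cm^-3 (assumed)
rho_s = 2.0                                     # solid density of a grain, g cm^-3
a_crit = c_s * rho_gas / (rho_s * Omega_K)      # tau_f = tau_eddy for a < l_g
print(a_crit, "cm")                             # larger grains decouple from the eddies
\end{verbatim}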
As the initial condition of the calculations, we set the dust particles
to be well-mixed with the gas and the dust-to-gas mass ratio to be
spatially uniform.
For the initial size distribution of the dust grains, we adopt
dust model A with $a_{\rm max}=10\micron$, which is similar to the
dust model of
dense molecular clouds (e.g., Weingartner \& Draine 2001). We also
assume that the disk is surrounded by a dense molecular cloud with gas
number density $n_{\rm out}=10^4$cm$^{-3}$, and consider
a continuous input of dust particles from the molecular clouds to the
disk
as a boundary condition at the disk surface ($z=z_{\rm coag}$, see
below and \S\ref{S3.1}) for the calculations of dust evolution
(Eq. [\ref{eq.2-1}]).
Dust particles fall onto the disk because the pressure
gradient force is negligibly small for them (e.g., Landau et al. 1967)
and cannot sustain them against the gravitational force of the central
star (we note that the gas is assumed to be stationary due to the
pressure gradient force).
This input of dust particles has a great influence on the structure of
the disk surface and the observational properties of the disk (see
\S\ref{S3}).
The total mass of dust particles infalling from the cloud to the disk
in $10^6$ yr (the time for which the calculations are performed) is
$\sim 5\times 10^{-5}M_{\odot}$, which corresponds to a dust mass in a
spherical cloud with a radius of 2,500 AU and a gas density of $10^4$cm$^{-3}$,
and $1/3$ of the initial mass of dust grains in the disk.
In the calculations for the dust coagulation and settling
we use 23 logarithmically spaced radial grid points
for $x=0.2-100$ AU, and 50 vertical grid points for the region
where the coagulation becomes significant ($z<z_{\rm coag}$, see
\S\ref{S3.1}).
The dust coagulation equation is solved using fixed density and
temperature profiles of dust and gas because self-consistent
calculations coupled to the time evolution of the dust and gas profiles
are very time-consuming. Instead,
we get the dust distribution and the temperature and density profiles
by iterating the calculations only once under an assumption
that the temperature and density profiles turn into an equilibrium state
very quickly; that is, we (a) calculate the dust
gas profiles using the initial dust distribution, (b) solve the
coagulation equation using the temperature and density profile in (a),
(c) compute the dust
and gas profiles again using the dust distribution in (b), (d) obtain
the dust distribution by solving the coagulation equation with the
temperature and density profiles in (c), and (e) finally
get the dust and gas profiles using the dust distribution in (d).
In the calculations in \S\ref{S3} we compare the gas temperature and
density profiles in (c) and (e), and checked that the errors are
within 30\% for the gas temperatures and 80\% at most for the gas
densities in both quiescent and turbulent disks. These errors are
relatively large at the upper layer of the disks where
the molecular hydrogen is photodissociated via UV radiation from the
central star, so the errors in molecular hydrogen emission from the disks
(see \S\ref{S3.4}) are smaller and within 15\% for all lines.
\subsubsection{Parameter for Total Surface Area of Dust Particles, $f_{\rm dust}$}\label{S2.1.3}
We introduce a parameter, $f_{\rm dust}$, that represents the total
surface area of dust particles per unit volume of the gas at each
position in the disk $(x,z)$,
\begin{equation}
f_{\rm dust}(x,z)=A_{\rm tot}(x,z)/A_{{\rm tot}, 0}, \label{eq7}
\end{equation}
where
\begin{equation}
A_{\rm tot}(x,z)=\int 4\pi a^2\dfrac{dn(x,z)}{da}da.
\end{equation}
The size distribution obeys $dn/da\propto\varphi(i)/a_i^{4}$ in dust model
B, and $A_{{\rm tot},
0}$ is calculated using the dust model A with $a_{\rm max}=10\mu$m,
which is similar to the dust model of dense molecular clouds and is used
as the initial condition for dust model B. This parameter controls the
physical disk structure and molecular hydrogen emission from the disks
because the grain opacity, the grain photoelectric heating rate, and the
formation rate of molecular hydrogen on grain surface are roughly
proportional to it. Thus, the dust and gas temperatures, the
UV radiation field, and the abundance of molecular hydrogen are related
to $f_{\rm dust}$ (see the following subsections). In dust model A,
the parameter $f_{\rm dust}$ has values of 1, 0.1, and 0.01 for the maximum
dust size of $a_{\rm max}=10\mu$m, 1mm, and 10cm, respectively.
In model B, $f_{\rm dust}$ decreases with dust
size growth and settling (except at the disk midplane), which leads to
a decrease of the grain photoelectric heating rate, and thus the gas
temperature and so on (see \S\ref{S3.1} for more details).
\subsection{X-ray and Ultraviolet Radiation Fields}\label{S2.2}
Observations have shown that many T Tauri stars emit strong X-ray radiation
(e.g., Koyama et al. 1994; Feigelson \& Montmerle 1999; Tsujimoto et
al. 2002; Imanishi et al. 2003; Getman et al. 2005; Preibisch et
al. 2005) as well as strong ultraviolet (UV) radiation (e.g., Herbig \&
Goodrich 1986; Herbst et al. 1994; Valenti et al. 2000).
For the X-ray radiation from
the central star we use a model which reproduces observational data
toward a classical T Tauri star, TW Hydrae
(cf. Kastner et al. 2002; Stelzer \& Schmitt 2004).
Retrieving the archived \textit{XMM-Newton} data, we fit the
spectrum with a two-temperature thin-thermal plasma model (mekal model;
Mewe et al. 1985; Kaastra et al. 1992; Liedahl et al. 1995) which is
often used in order to reproduce observed X-ray spectrum of T Tauri
stars. The derived best-fit parameters are $kT_1=0.8$keV and
$kT_2=0.2$keV for the plasma temperatures, and $N_{\rm H}=2.7\times
10^{20}$ cm$^{-2}$ for the foreground interstellar hydrogen column
density.
The total X-ray luminosity of the spectrum
corresponds to $L_X\sim 10^{30}$ erg s$^{-1}$.
In Figure~\ref{f1} the resulting model spectrum is plotted.
The adopted stellar UV radiation field model is also based on
observations towards TW Hydrae and analyses by Herbst et al. (1994),
Costa et al. (2000), Bergin et al. (2003), Herczeg et al. (2002), and
Ardila et al. (2002). The model consists of photospheric black
body radiation, hydrogenic thermal bremsstrahlung radiation, and strong
Ly $\alpha$ line emission (see Appendix C of Paper I). The total FUV
(6eV $<h\nu <$ 13eV) luminosity corresponds to $L_{\rm FUV}\sim 10^{31}$
erg s$^{-1}$. The interstellar UV radiation field is
taken into account, but its contribution is negligible under the
strong UV irradiation from the central star (see Paper I for details
of the UV radiation field in the disk).
We note that although we use the X-ray and UV radiation of TW Hya
(as it is one of the most well-observed T Tauri stars), our disk model
is intended to be more widely applicable.
\begin{figure}[t]
\includegraphics[scale=1.0]{f1.eps}
\caption{The model spectrum of the X-ray radiation from the central star,
which reproduces observations toward a classical T Tauri star, TW
Hya ($d=56$pc). \label{f1}}
\end{figure}
The X-ray and UV fields in the disk are calculated in 1+1 dimensions
in the radial and vertical directions (see also Paper I) as
\begin{equation}
F_{\nu, R}(R,\Theta)=fF_{\nu, {\rm star}}\exp(-\tau_{\nu, R}),\ \ \ \tau_{\nu, R}=\int_{R_*}^R \chi_{\nu}\rho dR, \label{eq.2-6}
\end{equation}
and
\begin{displaymath}
F_{\nu, z}(x,z)=F_{\nu, {\rm ISRF}}\exp[-\tau_{\nu, z}(z_{\infty})]
\end{displaymath}
\begin{displaymath}
+2\pi\int_z^{z_{\infty}} \sigma_{\nu}(x,z')\rho(x,z')F_{\nu, R}(x,z')e^{-\tau_{\nu, z}(z')}dz',
\end{displaymath}
\begin{equation}
\tau_{\nu, z}(z')=\int_z^{z'} \chi_{\nu}\rho dz'', \label{eq.2-7}
\end{equation}
where the direct radiation from the central star is calculated
fully, while a plane-parallel approximation in the vertical direction is
adopted for the calculation of the scattering process, which
could result in overestimating the radiation fields in the disks.
In the above equations
$F_{\nu, {\rm star}}$ is the specific radiation field at the
stellar surface and $f=(R_*/R)^2$ accounts for the geometrical
dilution of the radiation field.
$F_{\nu, {\rm ISRF}}$ is the
FUV interstellar radiation field, and we set $F_{{\rm Xray,\ ISRF}}=0$
in the calculation of the X-ray radiation field. $\tau_{\nu, R}$ and
$\tau_{\nu, z}$ are the specific optical depths from the stellar
surface $(R_*,\Theta)$ to a point $(R,\Theta)$ and from a point $(x,z)$
to $(x,z')$, respectively. $\rho$ is the gas density, and $\chi_{\nu}$
is the monochromatic extinction coefficient defined by the absorption
($\kappa_{\nu}$) and scattering ($\sigma_{\nu}$) coefficients as
$\chi_{\nu}\equiv\kappa_{\nu}+\sigma_{\nu}$.
In order to treat X-ray extinction, we adopt the attenuation
cross section at an energy $E$
of $\sigma_{\rm att}(E)=\sigma_{\rm ph}(E)+\sigma_{\rm Com}(E)$, where
$\sigma_{\rm ph}$ is the total photoionization cross section due to all
elements per hydrogen nucleus
and $\sigma_{\rm
Com}$ is the incoherent Compton scattering cross section of hydrogen.
For the cross section $\sigma_{\rm ph}$ we adopt a broken power-law
model given by Maloney et al. (1996; see also Wilms et al. 2000), and
$\sigma_{\rm Com}$ is calculated based on McMaster et
al. (1969, http://cars9.uchicago.edu/\textasciitilde newville/mcbook/).
In calculating equations (\ref{eq.2-6}) and (\ref{eq.2-7}), we adopt
$\chi_{\rm Xray}=\sigma_{\rm att}(E)/m_{\rm p}$ and $\sigma_{\rm Xray}=\sigma_{\rm
Com}(E)/m_{\rm p}$, where $m_{\rm p}$ is the proton mass.
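The direct (unscattered) term of equation (\ref{eq.2-6}) amounts to accumulating the optical depth along a radial ray; a minimal sketch, with an illustrative density run and cross section (both assumptions), is:
\begin{verbatim}
import numpy as np

m_p = 1.673e-24                                  # proton mass, g
R = np.logspace(np.log10(2 * 6.96e10),
                np.log10(100 * 1.496e13), 500)   # 2 R_sun to 100 AU, cm
rho = 1e-15 * (R / R[0]) ** (-2.0)               # toy gas density along the ray

def attenuated_flux(F_star, sigma_att, R_star):
    chi = sigma_att / m_p                        # chi_Xray = sigma_att(E)/m_p
    dtau = chi * 0.5 * (rho[1:] + rho[:-1]) * np.diff(R)
    tau = np.concatenate(([0.0], np.cumsum(dtau)))
    f = (R_star / R) ** 2                        # geometrical dilution
    return f * F_star * np.exp(-tau)

F = attenuated_flux(F_star=1e7, sigma_att=2e-22, R_star=R[0])
print(F[0], F[-1])
\end{verbatim}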
\subsection{Physical Structure of the Disks}\label{S2.3}
We model an axisymmetric disk surrounding a central star with the
physical parameters of typical T Tauri stars; a mass of
$M_*=0.5M_{\odot}$, a radius of $R_*=2R_{\odot}$, and a temperature
of $T_*=4000$K (e.g., Kenyon \& Hartmann 1995).
The gas temperature and density distributions of the disk are obtained
self-consistently by iteratively solving the equations for hydrostatic
equilibrium in the vertical direction and local thermal balance between
heating and cooling of gas (see Paper I for details).
The vertical hydrostatic equilibrium is represented by an equation,
\begin{equation}
\dfrac{dP}{dz}=-\rho\dfrac{GM_*z}{(x^2+z^2)^{3/2}}. \label{eq.1}
\end{equation}
Here, $G$ is the gravitational constant, and $P$ is the gas pressure given by
$P=\rho kT/m_{\mu}$, where $\rho$, $T$,
$k$, and $m_{\mu}$ are the density and temperature of the gas,
Boltzmann's constant, and the mean molecular mass, respectively.
The condition, $\int_{-z_{\infty}}^{z_{\infty}}\rho(x,z)dz=\Sigma(x)$,
is imposed, where we set $\rho(x,z_{\infty})=5.0\times
10^{-19}$ g cm$^{-3}$ ($n_{\rm tot}\approx 3\times 10^5$cm$^{-3}$) as the
boundary condition. The surface density at a disk radius $x$,
$\Sigma(x)$, is defined by assuming a constantly accreting viscous disk
model and equating the gravitational energy release of accreting mass
to the thermal heating via viscous dissipation at the disk midplane,
\begin{equation}
\dfrac{9}{4}\Sigma\alpha{c_{\rm s}}_0^2\Omega_{\rm K}=\dfrac{3GM_*\dot{M}}{4\pi x^3}
\biggl[1-\biggl(\dfrac{R_*}{x}\biggr)^{1/2}\biggr],
\label{eq.2}
\end{equation}
where ${c_{\rm s}}_0$ and $\Omega_{\rm K}=(GM_*/x^3)^{1/2}$ represent the sound
speed at
the midplane and the Keplerian frequency, respectively. A viscous
parameter of $\alpha=0.01$ and a constant mass accretion rate of
$\dot{M}=10^{-8}$ M$_{\odot}$ yr$^{-1}$ are adopted here.
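Equation (\ref{eq.1}) can be integrated straightforwardly once a temperature profile is prescribed; the sketch below (toy values throughout, and with the column condition enforced by a simple rescaling rather than by the full iteration used in the model) illustrates the procedure:
\begin{verbatim}
import numpy as np

G, kB = 6.674e-8, 1.381e-16
m_mu = 2.3 * 1.673e-24                      # mean molecular mass, ~2.3 m_p (assumed)
Mstar, x = 0.5 * 1.989e33, 1.496e13         # 0.5 Msun, x = 1 AU (cgs)

def rho_of_z(T, Sigma, z, rho_top=5.0e-19):
    P = np.empty_like(z)
    P[-1] = rho_top * kB * T[-1] / m_mu     # boundary value at z_infinity
    for i in range(len(z) - 2, -1, -1):     # integrate dP/dz from the top down
        zm = 0.5 * (z[i] + z[i + 1])
        g = G * Mstar * zm / (x**2 + zm**2) ** 1.5
        rho_up = P[i + 1] * m_mu / (kB * T[i + 1])
        P[i] = P[i + 1] + rho_up * g * (z[i + 1] - z[i])
    rho = P * m_mu / (kB * T)
    return rho * Sigma / (2.0 * np.trapz(rho, z))  # enforce the column density

z = np.linspace(0.0, 0.3 * x, 400)
rho = rho_of_z(T=np.full_like(z, 100.0), Sigma=100.0, z=z)   # 100 K, 100 g cm^-2
print("midplane density:", rho[0], "g cm^-3")
\end{verbatim}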
The gas temperature, $T$, is obtained by assuming detailed
energy balance at each position in the disk,
\begin{equation}
\Gamma_{\rm FUV}+\Gamma_{\rm Xray}+\Lambda_{\rm gr}+\Lambda_{\rm line}=0, \label{eq.3}
\end{equation}
where we include grain photoelectric heating induced by
FUV photons, $\Gamma_{\rm FUV}$, X-ray heating caused by hydrogen
ionization, $\Gamma_{\rm Xray}$,
gas-grain collisions, $\Lambda_{\rm gr}$, and radiative
cooling by line transitions, $\Lambda_{\rm line}$ for the gas heating
and cooling processes.
The X-ray heating rate, $\Gamma_{\rm Xray}$, is calculated as
\begin{equation}
\Gamma_{\rm Xray}=n_{\rm tot}f_hH_{\rm X}, \label{eq.4}
\end{equation}
where $n_{\rm tot}$ is the total number density of hydrogen nuclei,
and $f_h$ is the heating efficiency, namely, the fraction of absorbed
energy that goes into heating the gas. We adopt $f_h=0.1$ for
atomic hydrogen and $f_h=0.4$ for molecular hydrogen (Maloney et
al. 1996; Gorti \& Hollenbach 2004). $H_{\rm X}$ is the
local X-ray energy deposition rate per particle, given by
\begin{equation}
H_{\rm X}=\int_{E_{\rm min}}^{E_{\rm max}} \sigma_{\rm ph}(E)F_{\rm X}(E)dE,
\label{eq.5}
\end{equation}
where $\sigma_{\rm ph}(E)$ is the total photoionization cross section
due to all elements per hydrogen nucleus at energy $E$.
The symbol $F_{\rm X}(E)$ is the X-ray energy flux at each
position in the disk, and $E_{\rm min}=0.1$keV and
$E_{\rm max}=10$keV are adopted for the minimum and maximum energy
(see Fig.~\ref{f1} in \S\ref{S2.2}).
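Numerically, equation (\ref{eq.5}) is a single quadrature; the cross-section slope and the local energy flux in the sketch below are placeholder assumptions, not the fitted TW Hya spectrum:
\begin{verbatim}
import numpy as np

E = np.logspace(np.log10(0.1), 1.0, 400)     # 0.1 - 10 keV grid
sigma_ph = 2.3e-22 * E ** (-2.5)             # cm^2 per H nucleus (assumed power law)
F_X = 1.0e-4 * np.exp(-E)                    # local energy flux, erg cm^-2 s^-1 keV^-1
H_X = np.trapz(sigma_ph * F_X, E)            # erg s^-1 per H nucleus
Gamma_X = 0.4 * 1.0e8 * H_X                  # Gamma_Xray = n_tot f_h H_X, with f_h = 0.4
print(H_X, Gamma_X)
\end{verbatim}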
We note that the viscous heating is not taken into account in the
energy balance (Eq. [\ref{eq.3}]) because it is not dominant (at the
disk surface) if $\alpha=0.01$ (Glassgold et al. 2004).
For radiative cooling by line transitions, we consider the Ly
$\alpha$ transition of atomic hydrogen and the metastable transition of
OI ($\lambda$ 6300\AA) in addition to the fine-structure transitions of
OI (63$\mu$m) and CII (158$\mu$m), and the rotational transitions of CO.
In order to calculate the Ly $\alpha$ line cooling, we make use of the
table of level populations of atomic hydrogen for various electron
densities and temperatures, given by Storey \& Hummer (1995). The
collisional de-excitation rate coefficient is taken from Hollenbach \&
McKee (1989) for calculation of OI $\lambda$ 6300 line cooling.
Paper I gives details of calculations of the OI and CII fine-structure,
and CO rotational transition line cooling.
The spatial and size distributions of dust grains affect the gas
temperature through
the grain photoelectric heating, $\Gamma_{\rm FUV}$, and the
energy exchange between gas and dust particles through collisions,
$\Lambda_{\rm gr}$. Both rates are
roughly proportional to the parameter which represents
the total surface area of the dust particles,
$f_{\rm dust}$, given in \S\ref{S2.1.3}. In this paper we simply set
$\Gamma_{\rm FUV}=f_{\rm dust}\Gamma_{{\rm FUV},0}$ and $\Lambda_{\rm
gr}=f_{\rm dust}\Lambda_{{\rm gr},0}$,
where the heating/cooling rates with subscript '0' are calculated by
using the models given in Paper I in which we used the dense cloud dust
model (see also Aikawa \& Nomura 2006).
The dust temperature
profile is important for determining the disk structure because the gas
temperature is well coupled to the dust temperature in the dense
region near the midplane of the disks. We obtain the dust temperature
by assuming local radiative equilibrium between absorption and
reemission of radiation by dust grains at each position in the disk.
The intensity is calculated by
solving the axisymmetric two-dimensional radiative transfer equation
by means of the short characteristic method in spherical coordinates
(Dullemond \& Turolla 2000; Nomura 2002). As heating sources, we
consider the radiative flux produced by
the viscous dissipation ($\alpha$-viscous model) at the disk midplane,
and the irradiation from the central star (see Paper I for details).
The dust evolution in the disks affects the dust temperature through the
change in grain opacity (\S\ref{S2.1}).
\subsection{Level Populations and Line Emission of Molecular Hydrogen}\label{S2.4}
In order to obtain the molecular hydrogen emission from the disk, we
first calculate the
abundance and the level populations of the $X^1\Sigma_g^+$ electronic
state of molecular hydrogen in a statistical equilibrium state,
based on Wagenblast \& Hartquist (1988), as
\begin{eqnarray}
\lefteqn{n_l({\rm H}_2)\left[\sum_{m\ne l} \biggl(A_{lm}+\beta_{lm}+\gamma_{lm}+\sum_s n_sC_{lm}^s\biggr)+R_{{\rm diss},l}\right]} \nonumber \\
& & \quad +k_{{\rm O}+{\rm H}_2}n({\rm O})n_l({\rm H}_2) \nonumber \\
& & =\sum_{m\ne l}n_m({\rm H}_2)\biggl(A_{ml}+\beta_{ml}+\gamma_{ml}+\sum_s n_sC_{ml}^s\biggr)+n({\rm H})R_{{\rm form},l}, \label{eq.2.3.1}
\end{eqnarray}
where $A_{lm}$ is the Einstein $A$-coefficient for spontaneous emission
from level $l$ to level $m$ and $C_{lm}^s$ is the collisional transition
rate with collision partner $s$. $\beta_{lm}$ represents the
effective rate for transition $l\rightarrow m$ via ultraviolet pumping
followed by radiative cascade, and $R_{{\rm diss},l}$ is the
photodissociation rate of hydrogen molecules in level $l$. $R_{{\rm
form},l}$ is the effective formation rate of H$_2$ in level $l$ on grain
surfaces. The endothermic reaction O + H$_2 \rightarrow$ OH + H, which
destroys molecular hydrogen in high temperature regions, is also taken
into account (see Paper I for details).
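Since equation (\ref{eq.2.3.1}) is linear in the level populations
$n_l({\rm H}_2)$, it can be solved at each position as a standard
linear system. A minimal sketch for a toy three-level system is given
below; all rate values are arbitrary placeholders rather than actual
H$_2$ data.
\begin{verbatim}
import numpy as np

# Toy statistical-equilibrium solve: for each level l,
#   n_l * (sum of rates out of l + destruction_l)
#     = sum_m n_m * rate(m -> l) + formation source into l.
rate = np.array([[0.0,  1e-9, 1e-11],   # rate[l, m]: l -> m (s^-1),
                 [3e-7, 0.0,  1e-9 ],   # combining A, beta, gamma and
                 [1e-8, 3e-7, 0.0  ]])  # collisional terms (placeholders)
dest = np.array([1e-12, 1e-12, 1e-12])  # photodissociation, O + H2 (s^-1)
form = np.array([1e-6,  1e-7,  1e-8])   # formation source (cm^-3 s^-1)

M = rate.T.copy()                       # M[l, m] = rate(m -> l), m != l
np.fill_diagonal(M, -(rate.sum(axis=1) + dest))
n = np.linalg.solve(M, -form)           # M n = -form <=> equilibrium
print("level populations (cm^-3):", n)
\end{verbatim}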
In addition, we consider the effective transition rate, $\gamma_{lm}$,
via X-ray pumping of molecular hydrogen followed by radiative cascade.
X-ray irradiation from the central star ionizes
the gas to produce photoelectrons and subsequently secondary electrons in
the disk. Through collisions they excite molecular hydrogen to singlet
and triplet electronic states, followed by radiative
cascade down into the ground electronic state (e.g., Gredel \& Dalgarno
1995; Tin\'{e} et al. 1997; Bergin et al. 2004). In this paper we simply
use the entry efficiency $\alpha_{J_i}(v,J)$ from the levels $(v=0,J_i)$
to $(v,J)$ for fractional ionization of $10^{-4}$, tabulated in
Tin\'{e} et al. (1997), in order to estimate the
rate $\gamma_{lm}$ as
\begin{equation}
\gamma_{lm}=\zeta_{\rm X}\alpha_{J_l}(v_m,J_m). \label{eq.18}
\end{equation}
This simplified treatment will not cause significant error at the
middle layer of the outer disk where molecular hydrogen lines are mainly
emitted (the line fluxes are strong where the UV radiation field is not
too strong,
the gas temperature is moderately high, and the surface area is large;
see Paper I), but a full calculation of X-ray pumping and the
subsequent radiative cascade should be done in future.
The symbol $\zeta_{\rm X}$ in equation (\ref{eq.18})
is the total hydrogen ionization rate given by
\begin{equation}
\zeta_{\rm X}\simeq N_{\rm sec}\int_{E_{\rm min}}^{E_{\rm max}} \sigma_{\rm ph}(E)F_{\rm X}(E)dE, \label{eq.2.4.3}
\end{equation}
where $\sigma_{\rm ph}$ is
the total photoionization cross section
due to all elements per hydrogen nucleus,
and $N_{\rm sec}$ is the number of secondary
ionizations of hydrogen per unit energy produced by primary
photoelectrons and we put $N_{\rm sec}=26/$keV in this paper (e.g.,
Verner \& Yakovlev 1995; Maloney et al. 1996; Gorti \& Hollenbach 2004).
We note that the effect of interstellar cosmic-ray ionization
is not taken into account in the calculation of the pumping process,
but it will not affect the results as the X-ray
ionization rate is much higher at the disk surface (see \S\ref{S3.2.1}).
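Equation (\ref{eq.2.4.3}) involves the same quadrature as the local
energy deposition rate, scaled by $N_{\rm sec}$; a minimal sketch, with
the same placeholder cross section as above and a toy spectrum
expressed in photon-energy (keV) units so that $N_{\rm sec}=26$/keV
applies, is:
\begin{verbatim}
import numpy as np

E = np.logspace(np.log10(0.1), np.log10(10.0), 500)  # keV
sigma_ph = 2.6e-22 * E ** (-8.0 / 3.0)  # placeholder cross section (cm^2)
# Toy X-ray energy flux in keV (not erg) units so that multiplying by
# N_sec = 26 ionizations per keV of deposited energy yields s^-1:
F_X_keV = 6.3e8 * np.exp(-E / 1.0)      # keV cm^-2 s^-1 keV^-1
g = sigma_ph * F_X_keV
N_sec = 26.0
zeta_X = N_sec * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(E))
print(f"zeta_X = {zeta_X:.2e} s^-1 per H nucleus")
\end{verbatim}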
Possible reactions induced by X-rays are ignored, and a simple
chemical network given in Wagenblast \& Hartquist (1988) (plus the
reactions O + H$_2 \rightarrow$ OH + H and OH + $h\nu \rightarrow$ O +
H) is adopted as in Paper I. This
neglect will not affect the resulting molecular hydrogen abundance since
the photodissociation by UV radiation from the central star or the
above-mentioned reaction with atomic oxygen is more efficient for
destroying molecular hydrogen than the X-ray-induced photoionization and
other related reactions in our model
(see e.g., Maloney et al. 1996 for XDR chemistry, and
also e.g., Aikawa \& Herbst 1999, 2001; Markwick et al. 2002 for more
detailed disk chemistry including the X-ray photoprocess).
The spatial and size distributions of dust grains affect the formation
rate of molecular hydrogen. The rate is roughly
proportional to the total surface area of the dust particles, $f_{\rm
dust}$, given in \S\ref{S2.1.3}, so we simply set $R_{{\rm form},l}=f_{\rm
dust}R_{{\rm form},l,0}$. Here we use the model in Paper I
in order to calculate $R_{{\rm form},l,0}$.
The total formation rate is given by $\sum_l R_{{\rm
form},l}=7.5\times 10^{-18}f_{\rm dust}T^{0.5}\epsilon_{{\rm
H}_2}(T_d)\,n_{\rm tot}\,n({\rm H})$ cm$^{-3}$ s$^{-1}$, where $T$ is the gas
temperature and $\epsilon_{{\rm H}_2}(T_d)$
is the recombination efficiency of atomic hydrogen on dust grains
as a function of
the dust temperature, $T_d$ (Cazaux \& Tielens 2002,
2004; see also Pirronello et al. 1999; Zecho et al. 2002).
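Plugging illustrative numbers into this expression gives a quick
order-of-magnitude check; all input values in the sketch below are
placeholders.
\begin{verbatim}
import numpy as np

# Evaluating the quoted total H2 formation rate,
#   R_form = 7.5e-18 * f_dust * T^0.5 * eps(T_d) * n_tot * n(H),
# in cm^-3 s^-1, for illustrative (placeholder) inputs.
f_dust = 0.1      # reduced total grain surface area (evolved dust)
T      = 100.0    # gas temperature (K)
eps    = 0.2      # recombination efficiency on grains (assumed)
n_tot  = 1.0e8    # hydrogen nuclei density (cm^-3)
n_H    = 1.0e6    # atomic hydrogen density (cm^-3)
R_form = 7.5e-18 * f_dust * np.sqrt(T) * eps * n_tot * n_H
print(f"total H2 formation rate: {R_form:.2e} cm^-3 s^-1")
\end{verbatim}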
Making use of the physical properties obtained in the previous
subsections and the level populations, we calculate emission
(from levels $u$ to $l$) of molecular hydrogen from the disks
by integrating the radiative transfer equation (see Paper I for details),
\begin{equation}
F_{ul}=\dfrac{1}{4\pi d^2}\int_{x_{\rm in}}^{x_{\rm out}}2\pi xdx\int_{-z_{\infty}}^{z_{\infty}}\tilde{\eta}_{ul}(x,z)dz,
\end{equation}
where $\tilde{\eta}_{ul}(x,z)$ is the emissivity of the transition
line at $(x,z)$ times the effect of absorption in the upper disk layer,
given by
\begin{equation}
\tilde{\eta}_{ul}(x,z)=n_u(x,z)A_{ul}\dfrac{h\nu_{ul}}{4\pi}\exp(-\tau_{ul}(x,z)).
\end{equation}
$\tau_{ul}(x,z)$ is the optical depth from $z$ to the disk surface
$z_{\infty}$ at the frequency $\nu_{ul}$,
\begin{equation}
\tau_{ul}(x,z)=\int_z^{z_{\infty}}\chi_{ul}(x,z')dz',
\end{equation}
where $\chi_{ul}$ is the total extinction coefficient,
\begin{equation}
\chi_{ul}=\rho\chi_{\nu_{ul}}+(n_lB_{lu}-n_uB_{ul})\Phi_{ul}\dfrac{h\nu_{ul}}{4\pi}.
\end{equation}
In these equations, $A_{ul}$ and $B_{ul}$ are the Einstein coefficients,
$n_u$ and $n_l$ are the number densities of the upper and lower levels,
respectively, and $\Phi_{ul}$ is the line profile function. The energy
difference between the levels $u$ and $l$ corresponds to $h\nu_{ul}$.
The symbol $\chi_{\nu_{ul}}$ is the extinction coefficient of
dust grains (see \S\ref{S2.1} and \S\ref{S2.2}) at the frequency
$\nu_{ul}$, and $\rho$ is the gas density.
Here, the disk is
assumed to be face-on to an observer, and we adopt a distance of
$d=56$ pc for calculating the intensity in order to compare it with
the observations towards TW Hya.
Extinction by foreground interstellar dust grains is not taken
into account in the calculations.
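Numerically, the flux follows from two nested quadratures: the optical
depth from each height to the disk surface, and then the attenuated
emissivity over $z$ and $x$. The sketch below reproduces this structure
on a coarse grid with toy Gaussian emissivity and extinction fields
(it is not a disk model), and approximates the lower half of the disk
by a factor of 2.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    """Trapezoidal rule along the last axis (self-contained helper)."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

AU, PC = 1.496e13, 3.086e18
x = np.linspace(1.0, 100.0, 200) * AU        # radius grid (cm)
z = np.linspace(0.0, 30.0, 300) * AU         # height grid (cm)
X, Z = np.meshgrid(x, z, indexing="ij")
# Toy emissivity (erg cm^-3 s^-1 sr^-1) peaking in a warm surface
# layer, and a toy extinction coefficient (cm^-1) -- placeholders.
eta = 1.0e-18 * np.exp(-(((Z - 0.2 * X) / (5.0 * AU)) ** 2))
chi = 1.0e-15 * np.exp(-Z / (10.0 * AU))

dz = z[1] - z[0]
tau = np.cumsum(chi[:, ::-1], axis=1)[:, ::-1] * dz  # tau(z -> surface)
column = trapz(eta * np.exp(-tau), z)                # upper half, per x
d = 56.0 * PC
F_ul = trapz(2.0 * np.pi * x * 2.0 * column, x) / (4.0 * np.pi * d**2)
print(f"F_ul ~ {F_ul:.2e} erg s^-1 cm^-2")  # factor 2: lower half
\end{verbatim}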
\section{Results}\label{S3}
\subsection{Spatial and Size Distributions of Dust Particles}\label{S3.1}
\begin{figure}[h]
\includegraphics[scale=1.0]{f2a.eps}
\includegraphics[scale=1.0]{f2b.eps}
\caption{The vertical profiles of (a) the parameter for the total
surface area of dust grains, $f_{\rm dust}$, and (b) the total dust
density, $\rho_{\rm dust}$, normalized by the initial value in
quiescent ({\it solid lines}) and turbulent ({\it dashed lines}) disks
at the disk radii of $x=1, 10$ and 100AU at $10^6$yr after the
calculations start. At the disk surface
$\rho_{\rm dust}/\rho_{\rm dust, 0}$ and $f_{\rm dust}$ are small
due to the dust settling towards the disk midplane. Near the disk
midplane $f_{\rm dust}$ is even smaller as a result of the
dust coagulation, while $\rho_{\rm dust}/\rho_{\rm dust, 0}$ increases
due to the dust settling. \label{f2}}
\end{figure}
For dust model B we obtain the spatial and size distributions of dust
particles by solving the coagulation equations for various sizes of
settling dust particles in a quiescent or turbulent disk (\S\ref{S2.1}).
In Figure \ref{f2} we plot the resulting profiles
of (a) the parameter representing the total surface area of dust grains,
$f_{\rm dust}$ (defined in Eq. [\ref{eq7}] in \S\ref{S2.1.3}), and (b) the
total dust density $\rho_{\rm dust}$ normalized by the initial value
$\rho_{\rm dust, 0}$ (defined in Eq. [\ref{eq.2-2}] in \S\ref{S2.1.2})
in the vertical direction at the disk radii of $x=1, 10$, and 100AU.
The initial dust density is simply proportional to the gas density (the
dust particles are well-mixed with the gas) and corresponds to
roughly 1\% of the gas mass density,
$\rho_{\rm dust, 0}\approx 0.01\rho$, in this model.
The solid and dashed lines are the profiles in quiescent and turbulent
disks, respectively. The calculations are performed for $10^6$
yrs, comparable to the typical age of classical T Tauri stars.
We note that at $t\sim 10^6$yr the dust coagulation process and settling
motion (input from upstream and output to downstream) are almost
in an equilibrium state at each position in the disk, and the spatial and
size distributions of dust particles do not change with time
except in the region very close to the disk midplane.
Therefore, the spatial and size distributions of dust particles
in the surface layer presented in this subsection are applicable to
older star-disk systems as well.
Figure \ref{f2} shows that the mass and total surface area of dust
grains per unit volume of the gas
are much smaller than the initial values.
At the disk surface ($z>z_{\rm coag}$; see below)
where the density of particles is low enough so that
the dust particles settle before they grow, $\rho_{\rm dust}/\rho_{\rm
dust, 0}$ and $f_{\rm dust}$ are small
due to the settling of dust particles toward the disk midplane.
In the upper surface of the disk ($z>z_{\rm fric}$; see below), where the
density is low enough that the gas
friction force does not affect the motion of dust particles, the particles
settle in the vertical direction with the free-fall
velocity, $V_z=V_{\rm ff}=[2GM_*/(x^2+z^2)^{1/2}]^{1/2}$. In this
region the normalized dust density $\rho_{\rm dust}/\rho_{{\rm dust}, 0}$
(and the parameter $f_{\rm dust}$) drop with
decreasing $z$, inversely proportional to the gas (or the initial dust)
density. Here, the dust particles are assumed to be continuously falling on
from the surrounding molecular cloud to the disk due to the
gravitational force of the central star with a constant
(time-independent) mass flux ($=n_{\rm out}V_{\rm ff}$)
(see \S\ref{S2.1.2}). At smaller $z$ ($z<z_{\rm fric}$)
where the gas density becomes higher and
the gas friction force controls the dust motion, the vertical velocity
of dust particles becomes $V_z=\Omega_{\rm K}^2z/A\rho$
(\S\ref{S2.1.2}), and the normalized dust density $\rho_{\rm
dust}/\rho_{{\rm dust}, 0}$ (and the parameter $f_{\rm dust}$) do not
change very much in this region.
The velocity changes from the free-fall velocity ($V_{\rm ff}$) to the
terminal velocity ($V_z=\Omega_{\rm K}^2z/A\rho$) around $z=z_{\rm
fric}=0.5$, 7.5, and 75 AU at the disk radii of $x=1$, 10, and 100AU,
respectively, in this model.
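The two settling regimes can be compared directly at a given position;
in the sketch below the stellar mass, gas density, and drag coefficient
$A$ are illustrative placeholders.
\begin{verbatim}
import numpy as np

# Settling-velocity regimes: free fall V_ff = sqrt(2 G M_* / r) high
# above the disk, terminal velocity V_t = Omega_K^2 z / (A rho) where
# gas drag controls the motion.  All inputs are toy values.
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13
M_star = 0.5 * M_sun
x, z = 10.0 * AU, 5.0 * AU
r = np.hypot(x, z)
V_ff = np.sqrt(2.0 * G * M_star / r)
Omega_K2 = G * M_star / x**3
rho, A = 1.0e-15, 1.0e6       # gas density (g cm^-3), drag coeff. (toy)
V_t = Omega_K2 * z / (A * rho)
print(f"V_ff = {V_ff/1e5:.2f} km/s, V_t = {V_t/1e5:.4f} km/s")
print("friction-limited" if V_t < V_ff else "free-fall")
\end{verbatim}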
At even smaller $z$ ($z<z_{\rm coag}$) where the density is much higher
and the collision rate becomes high enough for the dust
particles to grow, the parameter $f_{\rm dust}$ drops with
decreasing $z$ (and increasing density) because small particles
disappear as a result of coagulation, while the normalized dust density
$\rho_{\rm dust}/\rho_{{\rm dust}, 0}$ does not change very much and
increases close to the disk midplane due to the settling of the
particles. Most of the dust mass settles at the disk midplane and
$\rho_{\rm dust}/\rho_{{\rm dust}, 0}\gg 1$ at $z\approx 0$ (not shown
in this figure). Throughout the calculations the total dust mass in the
disk is equal to the initial dust mass plus the mass infalled from the
cloud (minus the mass of particles with $a>a_{\rm crit}$ removed near
the midplane of the turbulent disk; see below).
The difference between the quiescent and turbulent disks
shows up most clearly in the parameter $f_{\rm dust}$ at small $z$
($z<z_{\rm coag}$; where the dust coagulation is efficient) because
the collision rate is higher in the turbulent disk owing to the
turbulence-induced relative velocity between the particles, $\delta
V_{\rm T}$ (see \S\ref{S2.1.2}). The coagulation becomes efficient
around $z=z_{\rm coag}\sim 0.15$ (0.2), 3.5 (4.0), and 65 (65) AU for
the quiescent (turbulent) disk at $x=1$, 10, and 100AU, respectively,
in this model.
In Figures \ref{f3} and \ref{f4} we plot the resulting size
distributions of mass density of dust particles, $\varphi(i)$,
normalized by the initial dust density $\rho_{{\rm dust},0}$, in
quiescent and turbulent disks, respectively.
Each figure shows the size distributions
at (a) $x=1$AU, $t=1\times 10^2$ yr, (b) $x=1$AU, $t=1\times 10^6$ yr;
(c) $x=10$AU, $t=3\times 10^3$ yr, (d) $x=10$AU, $t=1\times 10^6$ yr;
(e) $x=100$AU, $t=3\times 10^4$ yr, and (f) $x=100$AU, $t=1\times 10^6$
yr. The time used in panels {\it a}, {\it c}, and {\it e} is around
the time when the size of the largest dust particles at the disk height
of $z\sim H$ becomes maximum in the quiescent disk model.
The dot-dashed, dashed, and solid lines in each figure represent
the size distributions at $z\sim z_{\rm coag}$, $z\sim 2H$, and $z\sim
H$, respectively. In Figure \ref{f4}b, d, and f, we also plot the size
distributions at $z\sim 0.25H$ in thin solid lines.
The disk scale heights are $H=0.044$ (0.047), 0.51
(0.60), and 11 (11) AU for the quiescent (turbulent) disk at $x=1$,
10, and 100AU, respectively. The thin dotted lines show the distribution
of the initial
\onecolumn
\begin{figure}
\includegraphics[scale=1.0]{f3a.eps}
\includegraphics[scale=1.0]{f3b.eps}
\includegraphics[scale=1.0]{f3c.eps}
\includegraphics[scale=1.0]{f3d.eps}
\includegraphics[scale=1.0]{f3e.eps}
\includegraphics[scale=1.0]{f3f.eps}
\caption{The size distributions of mass density of dust particles,
$\varphi(i)$, normalized by the initial dust density
$\rho_{\rm dust, 0}$ at each disk radius, $x$, and time, $t$, in a
quiescent disk. The dot-dashed, dashed, and solid lines represent
the distributions at $z\sim z_{\rm coag}$, $z\sim 2H$, and
$z\sim H$, respectively. The thin dotted lines show the initial
distribution. At the disk surface the size distributions are
similar to those in molecular clouds, but the number density of the dust
particles is much smaller than the initial value due to the dust
settling. Near the disk midplane small particles disappear due to the
dust coagulation and larger particles settle towards the midplane as
time increases. \label{f3}}
\end{figure}
\begin{figure}
\includegraphics[scale=1.0]{f4a.eps}
\includegraphics[scale=1.0]{f4b.eps}
\includegraphics[scale=1.0]{f4c.eps}
\includegraphics[scale=1.0]{f4d.eps}
\includegraphics[scale=1.0]{f4e.eps}
\includegraphics[scale=1.0]{f4f.eps}
\caption{The same as Figure \ref{f3} but in a turbulent disk. The size
distributions of dust particles at $z\sim 0.25H$ are also plotted in
thin solid lines in Figure b, d, and f. Near the
disk midplane a certain number of large dust particles remain due to
turbulent mixing. \label{f4}}
\end{figure}
\twocolumn
\noindent
condition (the dust model A with $a_{\rm max}=10\mu$m).
The figures show that at the surface layer
above $z\sim z_{\rm coag}$ the size distributions at $t=10^6$yr are
similar to those in dense molecular clouds (the dust model A with
$a_{\rm max}=10\mu$m in this work) as the dust particles cannot
grow due to the small collision rate, but the number density of the
particles is much smaller than the initial value due to the dust
settling as mentioned above.
We note that bumps of mass density of small dust particles ($a\la
1\micron$) at $z\sim z_{\rm coag}$ in early phases are remnants of the
initial distribution.
At smaller $z$ ($z<z_{\rm coag}$), small dust particles disappear
as they stick together to make larger particles. In the quiescent disk
the larger particles settle toward the disk midplane and disappear
from the disk surface, $z\geq H$, as time goes on (Fig. \ref{f3}).
Meanwhile, in the turbulent disk a certain amount of large particles
remain even at $z\geq H$ at $t=10^6$yr (though most of them settle toward
the midplane) because of the turbulent mixing which works so as to
unify the size distributions in the vertical direction (\S\ref{S2.1.2};
Fig. \ref{f4}).
The cutoffs around the dust radii of $a\sim$ 7 and 0.8
cm at the disk height of $z\sim 0.25H$ in
Figure \ref{f4}{\it d} and {\it f} correspond to the critical radii,
$a_{\rm crit}$, beyond which the particles cannot be trapped in a
turbulent eddy and move toward the central star rapidly. The
particles with $a>a_{\rm crit}$ are simply removed from the calculations
(see \S\ref{S2.1.2}).
\subsection{Physical Properties of the Disks}\label{S3.2}
\begin{figure}[t]
\includegraphics[scale=1.0]{f5.eps}
\caption{The vertical temperature profiles of dust ({\it thin
dotted lines}) and gas at the disk radii of $x=1, 10$ and 100AU for the
irradiation models of X-rays $+$ UV ({\it solid lines}), X-rays only
({\it dashed lines}), and UV only ({\it dot-dashed lines}). The dust
model A with $a_{\rm max}=10\micron$ is used. The X-ray
heating is dominant at the inner region and the very surface layer of
the disk, while the FUV heating dominates in the middle layer and the
outer region of the disk. \label{f5}}
\end{figure}
In this subsection we obtain the gas density and temperature
distributions of the
disk self-consistently by iteratively solving the equations for vertical
hydrostatic equilibrium and local thermal balance between heating and
cooling of gas (\S\ref{S2.3}).
The effects of the X-ray irradiation from the central star and the dust
evolution on the physical properties of the disks are discussed in the
following.
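A minimal sketch of the hydrostatic part of this iteration, for a
single radius and an assumed (toy) vertical temperature profile, is:
\begin{verbatim}
import numpy as np

# Vertical hydrostatic equilibrium at one radius: integrate
#   d ln(rho)/dz = -(mu m_H / k_B T(z)) * Omega_K^2 * z
# for a toy T(z), then normalize to an assumed surface density.
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13
m_H, k_B = 1.673e-24, 1.381e-16
M_star, mu, x = 0.5 * M_sun, 2.3, 10.0 * AU
Omega2 = G * M_star / x**3
z = np.linspace(0.0, 5.0 * AU, 2000)
T = 30.0 + 170.0 * (z / (5.0 * AU))**2     # toy warm-surface profile (K)
g = -(mu * m_H / (k_B * T)) * Omega2 * z   # d ln(rho)/dz
lnrho = np.concatenate(([0.0],
        np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(z))))
rho = np.exp(lnrho)
half_col = np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(z))
rho *= (10.0 / 2.0) / half_col             # normalize: Sigma = 10 g cm^-2
print(f"midplane density: {rho[0]:.2e} g cm^-3")
\end{verbatim}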
\subsubsection{Effect of X-rays}\label{S3.2.1}
First, in Figure \ref{f5} we plot the gas temperature
profiles in the vertical direction at the disk radii of $x=1, 10$, and
100 AU, where the disk is irradiated by both of X-ray and UV radiation
from the central star ({\it solid lines}). We also plot the profiles for a
disk which is irradiated by X-ray radiation only ({\it dashed
lines}) or UV radiation only ({\it dot-dashed lines}) for comparison.
The thin dotted lines are the dust temperature profiles which are not
affected by the UV or X-ray irradiation model. The dust model A with the
maximum dust radius of $a_{\rm
max}=10\mu$m (see \S\ref{S2.1}) is used throughout this sub-subsection.
We note that the calculations are performed in the region where
$\rho\geq \rho(x,z_{\infty})=5.0\times 10^{-19}$ g cm$^{-3}$, and the
position of $z_{\infty}$ depends on the models (see \S\ref{S2.3}).
The figure shows that the gas temperature is much higher than the dust
temperature in the surface layer of the disk due to the X-ray and FUV
heating. The X-ray heating dominates the FUV heating in the inner region
and in the surface layer of the disk where direct irradiation from the
central star is strong. Meanwhile,
the FUV heating dominates the X-ray heating in the middle layer and in
the outer disk. This is because the FUV radiation is scattered
efficiently by dust grains, while the Compton scattering of X-ray
radiation is inefficient in the energy range of $E\la 1$keV (e.g., Igea
\& Glassgold 1999) in which T Tauri stars mainly emit X-rays (see
Fig.~\ref{f1}).
The gas temperature is almost the same as the dust
temperature near the disk midplane where the density is high enough
\begin{figure}[h]
\includegraphics[scale=0.85]{f6a.eps}
\includegraphics[scale=0.85]{f6b.eps}
\includegraphics[scale=0.85]{f6c.eps}
\caption{The vertical profiles of the cooling and heating rates at the
disk radii of (a)
$x=1$ AU, (b) 10 AU, and (c) 100 AU for the irradiation model of X-rays
$+$ UV and the dust model A with $a_{\rm max}=10\micron$. The X-ray or
FUV heating dominates the heating process, while the radiative cooling
(Ly $\alpha$, OI 6300\AA, and OI 63$\mu$m for $x=1$, 10, and 100 AU)
and the dust-gas collision dominate the cooling process at the surface
layer and near the midplane, respectively. \label{f6}}
\end{figure}
\noindent
that the gas and dust particles are well coupled through collisions.
In Figure \ref{f6} we plot the vertical profiles of the heating and
cooling rates at disk radii of (a) 1AU, (b) 10AU, and (c) 100AU, for a
disk irradiated by both X-ray and UV radiation from the central
star. The figures clearly show that the X-ray heating dominates in
the inner region and in the surface layer of the disk, while the FUV
heating dominates in the middle layer and the outer region of the disk.
With regard to the cooling processes, radiative cooling dominates in
the surface layer, while dust-gas collisions dominate near the midplane
where the density is high. The main coolant at the surface layer changes
from Ly $\alpha$ to OI 6300\AA\ to OI 63$\mu$m at the disk radii of 1AU,
10AU, and 100AU, respectively, with decreasing gas temperature. These properties are
qualitatively the same even if we use the different dust models in
\S\ref{S2.1}.
Furthermore, we plot in Figure \ref{f7} the vertical profiles of the X-ray
ionization rates, $\zeta_{\rm X}$, defined in equation (\ref{eq.2.4.3}),
at disk radii of 1AU, 10AU, and 100AU, where the disk is irradiated
by both of X-ray and UV radiation from the central star. The radial
($\zeta_{{\rm X},R}$; {\it dashed lines}) and vertical ($\zeta_{{\rm
X},z}$; {\it dotted lines}) components, which are calculated by
substituting $F_{{\rm X},R}$ and $F_{{\rm X},z}$ of equations
(\ref{eq.2-6}) and (\ref{eq.2-7}) into equation (\ref{eq.2.4.3}) and
satisfy $\zeta_{\rm X}=\zeta_{{\rm X},R}+\zeta_{{\rm X},z}$, are also
plotted for comparison. In addition,
the ionization rate caused by interstellar cosmic-rays, $\zeta_{\rm
CR}$, is plotted in dot-dashed lines; it is estimated as
\begin{equation}
\zeta_{\rm CR}=\zeta_{{\rm CR},0}\exp[-\Sigma(z)/\chi_{\rm CR}],
\end{equation}
where we adopt $\zeta_{{\rm CR},0}=1\times 10^{-17}$s$^{-1}$ and the
attenuation coefficient of $\chi_{\rm CR}=96$ g cm$^{-2}$ (Umebayashi \&
Nakano 1981). The surface density is calculated as
$\Sigma(z)=\int_{z}^{z_{\infty}}\rho(z')dz'$. The figure shows that at
the disk surface
the ionization rates due to X-rays from the central star
are much higher than those due to interstellar cosmic-rays, while
near the disk midplane the former is much lower than the latter. This is
because X-ray attenuation is larger than that of cosmic-rays and because
the Compton scattering of X-ray radiation is inefficient (see
Fig.~\ref{f1} and Igea \& Glassgold 1999).
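The crossover between the two ionization sources can be illustrated by
attenuating both rates with column density; in the sketch below the
unattenuated X-ray rate and its effective attenuation depth are assumed
toy values (the actual calculation integrates
$\sigma_{\rm ph}(E)F_{\rm X}(E)$ at each depth):
\begin{verbatim}
import numpy as np

# Crossover between X-ray and cosmic-ray ionization with depth.
# zeta_CR follows the expression in the text; the X-ray rate is modeled
# here as a single exponential with an assumed attenuation depth chi_X.
Sigma = np.logspace(-3, 3, 7)          # vertical column (g cm^-2)
zeta_CR0, chi_CR = 1.0e-17, 96.0       # s^-1, g cm^-2
zeta_X0, chi_X = 1.0e-13, 0.1          # unattenuated rate, depth (toy)
zeta_CR = zeta_CR0 * np.exp(-Sigma / chi_CR)
zeta_X = zeta_X0 * np.exp(-Sigma / chi_X)
for s, zc, zx in zip(Sigma, zeta_CR, zeta_X):
    who = "X-rays" if zx > zc else "cosmic rays"
    print(f"Sigma = {s:8.3f} g/cm^2: zeta_X = {zx:.1e}, "
          f"zeta_CR = {zc:.1e} -> {who}")
\end{verbatim}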
\clearpage
\begin{figure}[h]
\includegraphics[scale=1.0]{f7.eps}
\caption{The vertical profiles of the X-ray ionization rate at the
disk radii of $x=1$, 10, and 100 AU for the irradiation model of X-rays
$+$ UV and the dust model A with $a_{\rm max}=10\micron$. The solid,
dashed, and dotted lines show the total rate, radial and vertical
components, respectively.
The ionization rate due to interstellar cosmic-rays is plotted as a
dot-dashed line for comparison. The ionization rates by the X-rays from
the central star are much lower than those by the interstellar
cosmic-rays near the disk midplane due to the inefficient Compton
scattering of the X-ray radiation. \label{f7}}
\end{figure}
\subsubsection{Effect of Dust Evolution}\label{S3.2.2}
\begin{figure}
\includegraphics[scale=1.0]{f8a.eps}
\includegraphics[scale=1.0]{f8b.eps}
\caption{The vertical temperature profiles of dust ({\it thin dotted
lines}) and gas at the disk radii of $x=1, 10$ and 100AU for (a)
dust model
A with $a_{\rm max}=10\micron$ ({\it solid lines}), 1mm ({\it dashed
lines}), and 10cm ({\it dot-dashed lines}), and (b) model B in
quiescent ({\it solid lines}) and turbulent ({\it dashed lines}) disks.
The profiles for the dense cloud dust model (dust
model A with $a_{\rm max}=10\micron$) are plotted here as
thin solid lines. The irradiation model of X-rays $+$ UV is used here.
As the dust particles grow or settle toward the disk midplane
the gas temperature in the middle layer and in the outer disk decreases
due to the decrease of the grain photoelectric
heating rate. Meanwhile, the dust and gas temperatures near the midplane
increase due to smaller grain opacity and greater penetration
of the irradiation from the central star. \label{f8}}
\end{figure}
In Figure \ref{f8} we plot the vertical gas temperature profiles for
various dust models at disk radii of $x=1, 10$, and 100
AU for the case of a disk heated by both X-rays and UV radiation.
The profiles for dust model A with different maximum dust radii
of $a_{\rm max}=10\mu$m ({\it solid lines}), 1mm ({\it dashed lines}),
and 10cm ({\it dot-dashed lines}) are plotted together in
Figure~\ref{f8}a. The profiles for dust model B at $10^6$yr after
the calculation starts are plotted in Figure~\ref{f8}b for the quiescent
({\it solid lines}) and turbulent ({\it dashed lines}) disks.
The profiles calculated by using the dense cloud dust model (which is
the initial condition of the calculation for the dust evolution and the
dust model A with $a_{\rm max}=10\mu$m) are also plotted together in
thin solid lines for comparison. The thin dotted
lines in the figures are the dust temperature profiles.
Figure~\ref{f8}a shows that as the dust
particles grow and the total surface area of dust grains ($f_{\rm
dust}$) decreases (see \S\ref{S2.1.3}), the gas temperature at the disk
surface drops because the grain photoelectric heating rate decreases
(e.g., Aikawa \& Nomura 2006). The gas temperatures in the
inner disk ($x\sim 1$ AU) and in the surface layer at $x\sim 10$ AU
do not change because the X-ray heating dominates in these regions.
The dust temperature at the disk surface decreases slightly with
dust growth.
Figure \ref{f8}b shows that clear differences appear in the gas and dust
temperatures between the dust model of dense clouds (dust model A
with $a_{\rm max}=10\mu$m) and the models with the dust evolution in
both quiescent and turbulent disks. For the models with dust
evolution the gas temperature in the middle
layer and the outer region of the disks drops owing to the decrease of
$f_{\rm dust}$ (see \S\ref{S2.1.3} and \S\ref{S3.1}), while the dust and
gas temperatures near the midplane increase
because the grain opacity decreases and the irradiation from
the central star can penetrate deeper into the disks. At $x\sim 1$AU
heating via irradiation dominates even near the midplane for the
models with the dust evolution, whereas the viscous heating is dominant for
the dense cloud dust model. The differences between the quiescent and
turbulent disks are small because the profiles of $f_{\rm dust}$ are
similar, especially in the surface layer. The dust and gas temperatures
very close to the midplane are slightly higher in the turbulent disk
due to the higher collision rate between the dust particles, which results
in lower $f_{\rm dust}$ and grain opacity (see \S\ref{S3.1}).
Dust growth and settling are also expected to have an impact on the
X-ray heating rates and the gas temperature profile through the
change in the photoionization cross section, $\sigma_{\rm ph}$, part of
which is contributed by heavy elements in dust
grains (e.g., Glassgold et al. 1997; Wilms et al. 2000). Here we check
the effects by simply adopting an extreme case, specifically that the
contribution of dust grains to the cross section is negligible
for dust model A with $a_{\rm max}=10$cm and for dust model B.
We modify the cross section in Maloney et al. (1996)
by simply assuming that the contribution by heavy elements in gas phase
is about 60\%, on average, of the total cross section
(Wilms et al. 2000).
This modification of the cross section makes the gas temperature
higher or lower by a factor of 2 at most
at the disk surface, where the X-ray heating is dominant.
At large $z$ the gas temperature becomes
slightly lower due to the decrease of the photoionization rate, while at
smaller $z$, where the influence of attenuation is more important, the
temperature becomes a bit higher owing to the decrease of the attenuation
coefficient, which results in the relatively
stronger X-ray radiation field (e.g., Glassgold et al. 1997).
The gas density at the disk surface also becomes higher or lower by a
factor of 3 at most,
according to the change in the gas temperature.
The variation of the photoionization cross section due to the dust
evolution will also slightly affect the level populations and the line
emission of molecular hydrogen through the changes in the thermal
collision and the X-ray pumping rates. When the FUV heating or the UV
pumping process dominates, however, the changes will be small.
In the following sections we neglect these effects for simplicity.
\begin{figure}[t]
\includegraphics[scale=1.0]{f9.eps}
\caption{The vertical gas density profiles at the disk radii of
$x=1, 10$ and 100AU for dust model B in quiescent ({\it solid
lines}) and turbulent ({\it dashed lines}) disks. The profiles for the
dense cloud dust model are also plotted together in thin solid
($x=1$AU), dotted ($x=10$AU), and dot-dashed ($x=100$AU) lines.
The disks are puffed out more for the models with the dust evolution
due to higher gas temperatures at the disk midplane and
higher disk scale height. \label{f9}}
\end{figure}
As the dust particles evolve in the disk, the gas density profile also
changes since it is related to the gas temperature
profile. In Figure \ref{f9} we plot the gas density profiles in the
vertical direction at the disk radii of $x=1, 10$, and 100AU, which are
calculated by
using dust model B in quiescent ({\it solid lines}) and turbulent
({\it dashed lines}) disks. The profiles for the dense cloud dust model
are also plotted together as thin solid ($x=1$AU), dotted
($x=10$AU), and dot-dashed ($x=100$AU) lines for comparison. The gas
densities in the models
\onecolumn
\begin{figure}
\includegraphics[scale=0.44]{f10a.eps}
\includegraphics[scale=0.44]{f10d.eps}
\includegraphics[scale=0.44]{f10b.eps}
\includegraphics[scale=0.44]{f10e.eps}
\includegraphics[scale=0.44]{f10c.eps}
\caption{The contour plots of the gas temperature ({\it solid lines})
and density ({\it dotted lines}) distributions in the $z/x$ vs. $z$
plane for dust model A with (a) $a_{\rm max}=10\micron$, (b) 1mm,
and (c) 10cm, and model B in (d) quiescent and (e) turbulent
disks. The irradiation model of X-rays $+$ UV is used here. \label{f10}}
\end{figure}
\twocolumn
\noindent
with the dust evolution are lower at the disk
midplane and higher at the disk surface than those for the dense cloud
dust model because of higher gas temperatures at the midplane and
higher disk scale height ($H=c_{{\rm s},0}/\Omega_{\rm K}$).
In Figure \ref{f10} we present the contour plots of the resulting gas
temperature ({\it solid lines}) and density ({\it dotted lines})
profiles in the $z/x$ vs. $x$ plane. The contour levels are taken as
$T=30,100,300,1000$, and $3000$K,
and $\rho=10^{-16},10^{-14},10^{-12}$, and $10^{-10}$g cm$^{-3}$.
Dust model A with (a) $a_{\rm max}=10\mu$m, (b) 1mm, and
(c) 10cm, and model B in (d) quiescent and (e) turbulent
disks are used in these figures.
Making use of the obtained density and temperature distributions, we
calculate the continuum radiation of thermal dust emission from the
disks by solving the radiative transfer equation, simply assuming that
the disks are face-on to an observer. The resulting infrared (IR) spectra
basically reproduce the median spectral energy
distribution (SED) observed toward classical T Tauri stars (CTTSs) in
the Taurus-Auriga molecular cloud (D'Alessio et al. 1999, 2006)
for the dust model A with $a_{\rm
max}=10\micron$. For the models with larger maximum dust radii, the
resulting IR dust emission is weaker than the median SED
by a factor of about 4 at most (e.g., D'Alessio et al. 2001).
The thermal dust emission from the disks for dust model B in both
quiescent and turbulent disks also reproduces the median observed
SED if we adjust the inner disk radii. These models do not reproduce the
flux deficits relative to the median SED in the near-IR to the mid-IR
wavelength bands which are observed toward several CTTSs, including
TW Hya (e.g., Calvet et al. 2002; Bergin et al. 2004). However, the disk
structure beyond several tens of AU where molecular hydrogen lines are
mainly emitted (see Paper I) will be almost unaffected even if
we were to modify the structure of inner disk within several AU in order
to force the thermal dust emission to account for the flux deficits,
because only a limited region close to the midplane of the outer
disk can be shadowed by the inner disk since the disk has a flared
structure.
Finally, in Figure \ref{f11} we show
another effect of the dust evolution due to the change in grain opacity.
The figure shows the profiles of the integrated FUV radiation fields for
the energy range of 6eV $<h\nu <$ 13eV
in the vertical
direction at the disk radii of $x=1, 10$, and 100 AU.
The profiles for dust models A and B are plotted in Figure
\ref{f11}a and b, respectively, in the same way as Figure \ref{f8}.
The figures show that as the dust particles evolve in the disk and
$f_{\rm dust}$ decreases, the FUV radiation from the central star
penetrates deeper into the disk due to the decrease of grain opacity
(see also Paper I for the FUV radiation fields in disks).
\begin{figure}[t]
\includegraphics[scale=1.0]{f11a.eps}
\includegraphics[scale=1.0]{f11b.eps}
\caption{The vertical profiles of the integrated FUV radiation fields
(6eV $<h\nu <$ 13eV) at the disk radii of $x=1, 10$ and 100AU for
dust models A (a) and B (b). The profiles are plotted in the same way as
in Figure \ref{f8}. The FUV radiation from the central star
penetrates deeper into the disks as the dust particles evolve and the
grain opacity decreases. \label{f11}}
\end{figure}
\subsection{Level Populations of Molecular Hydrogen}\label{S3.3}
Making use of the physical properties of the disks obtained in the
previous subsections, we calculate the level populations of molecular
hydrogen in the disks by solving the equations for statistical
equilibrium (\S\ref{S2.4}). The effects of X-ray irradiation and dust
evolution on the level populations are discussed in the following.
\begin{figure}[h]
\includegraphics[scale=1.0]{f12a.eps}
\includegraphics[scale=1.0]{f12b.eps}
\includegraphics[scale=1.0]{f12c.eps}
\caption{The level populations of molecular hydrogen at a disk radius
of 50 AU ({\it filled diamonds}) for the irradiation models of (a)
X-rays $+$ UV, (b) UV only, and (c) X-rays only. The LTE distributions
are plotted as dashed lines. Dust model A with $a_{\rm max}=10\micron$
is used here. The populations are in LTE in lower energy
levels when the disk is irradiated by strong UV radiation,
while they are controlled by X-ray pumping if the UV
irradiation is weak and X-ray irradiation is strong. \label{f12}}
\end{figure}
\subsubsection{Effect of X-rays}
In Figure \ref{f12} we plot the resulting level populations of
molecular hydrogen for the models in which the disk is irradiated by
(a) both X-ray and UV radiation from the central star, (b) UV
radiation only, and (c) X-ray radiation only. Dust model A with
maximum dust radius of $a_{\rm max}=10\mu$m (see \S\ref{S2.1}) is used in
this sub-subsection.
The filled diamonds show the column densities of molecular hydrogen in
each ro-vibrational level as a function of the level energy.
The column densities are calculated by integrating the number density of
molecular hydrogen in each level along the vertical direction at a
disk radius of 50AU. The level populations in local thermodynamic
equilibrium (LTE) are shown as dashed lines.
The figure shows that if we take into account UV irradiation from
the central star, the gas temperature becomes high enough for the
collisional excitation process to be very
efficient, and the level populations in lower energy levels are in LTE
distribution as a result. Meanwhile, if the disk is irradiated by
X-rays only and the gas is cold enough, the populations are not in LTE
due to the X-ray pumping process. This suggests that we may
be able to observe molecular hydrogen transitions excited by
X-ray pumping toward those protoplanetary disks whose central stars have
strong X-ray and weak UV radiation.
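For reference, the LTE curves follow Boltzmann weighting over the level
energies with the $(2J+1)$ and 3:1 ortho-to-para statistical weights. A
minimal sketch for a few rotational levels, with approximate
placeholder energies, is:
\begin{verbatim}
import numpy as np

# Boltzmann (LTE) column densities for a few rotational levels; the
# rotational constant is an approximate H2 value and serves only as a
# placeholder for the actual molecular data.
k_B = 0.695                                   # cm^-1 per K
B = 59.3                                      # rotational constant (cm^-1)
J = np.arange(4)
E = B * J * (J + 1)                           # level energies (cm^-1)
g = (2 * J + 1) * np.where(J % 2 == 1, 3, 1)  # ortho (odd J) weight 3
T, N_tot = 500.0, 1.0e21                      # temperature (K), column (cm^-2)
w = g * np.exp(-E / (k_B * T))
N = N_tot * w / w.sum()
for j, n in zip(J, N):
    print(f"J = {j}: N = {n:.2e} cm^-2")
\end{verbatim}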
\subsubsection{Effect of Dust Evolution}\label{S3.3.2}
We now discuss the effect of dust evolution on the level
populations of molecular hydrogen in a disk irradiated by both X-rays
and UV radiation. Figure \ref{f13} is the same as
Figure \ref{f12} but for different dust models. Dust model A
with maximum dust radii of $a_{\rm max}=$ (a) 10$\micron$, (b) 1mm,
and (c) 10cm and (d) model B are used in these figures.
Figures \ref{f13}a-c show that as the dust particles grow, the level
populations of molecular hydrogen change from LTE to non-LTE
distributions. This is because with increasing dust size and decreasing
$f_{\rm dust}$, the gas temperature drops due to the decrease of grain
photoelectric heating rate, and the collisional excitation
process becomes less efficient. In addition, the UV radiation from the
\onecolumn
\begin{figure}
\includegraphics[scale=1.0]{f13a.eps}
\includegraphics[scale=1.0]{f13b.eps}
\includegraphics[scale=1.0]{f13c.eps}
\includegraphics[scale=1.0]{f13d.eps}
\caption{The level populations of molecular hydrogen at a disk radius
of 30 AU for dust model A ({\it filled diamonds}) with
$a_{\rm max}=$ (a) 10$\micron$, (b) 1mm, and (c) 10cm, and (d)
model B in
quiescent ({\it asterisks}) and turbulent ({\it open squares}) disks.
The irradiation model of X-rays $+$ UV is used here.
The level populations change from LTE to non-LTE as
dust particles grow or settle toward the disk midplane, since the
gas temperature drops while the UV radiation field in the disk increases.
\label{f13}}
\end{figure}
\twocolumn
\noindent
central star can penetrate deeper into the disk due to the decrease of
grain opacity, and the UV pumping process becomes more efficient.
X-ray pumping is not the dominant process if we take into account
the UV irradiation from the central star.
In Figure \ref{f13}d the level populations with dust model B in
quiescent ({\it asterisks}) and turbulent ({\it open squares}) disks are
plotted together. The populations in quiescent and turbulent disks are
almost identical because of similar physical properties of the disks
(see \S\ref{S3.2.2}). The figure shows that if we take into account
dust evolution,
the level populations are in non-LTE distributions both in quiescent and
turbulent disks because of low $f_{\rm dust}$, which results in low gas
temperature and high UV radiation fields, while they are in LTE
distributions with the dust model of dense clouds (dust model A with
$a_{\rm max}=10\mu$m).
\subsection{Molecular Hydrogen Emission}\label{S3.4}
Making use of the physical properties of the disks and the level
populations of molecular hydrogen obtained in the previous
subsections, we calculate the line emission from molecular hydrogen
(\S\ref{S2.4}). In the following we show the resulting line
spectra in the near- and mid-infrared (NIR and MIR), and ultraviolet
(UV) wavelength bands, and present line ratios, using various dust and
irradiation models.
\subsubsection{Line Spectra}
Figures \ref{f14}, \ref{f15}, and \ref{f16} show the
resulting line spectra in the NIR, MIR, and UV wavelength bands,
respectively. Dust model A with $a_{\rm max}=$ (a) $10\micron$, (b)
1mm, and (c) 10cm,
and (d) model B in a quiescent disk are used in these figures.
The spectra from a turbulent disk are not plotted in these figures as they
are almost identical to those in a quiescent disk.
In Figure \ref{f14} and Table \ref{T1} (upper rows) we present the
ro-vibrational line fluxes of molecular hydrogen in the NIR
wavelength band. These show that as the dust particles grow, the
transition lines from higher vibrational energy levels become
relatively
stronger in this wavelength band. This is because the level populations
change from LTE to non-LTE distributions due to the decrease of the gas
temperature and the increasing importance of UV pumping,
so that the populations in
higher vibrational energy levels become relatively larger as we have
seen in the previous section (see Fig.~\ref{f13}). The line fluxes
decrease with the dust evolution because the area of the high-temperature
region in the disk shrinks.
In Figure \ref{f15} and Table \ref{T1} (lower rows) we present the pure
rotational transition lines in the MIR wavelength band. In
this case the transition lines from lower energy levels become
relatively stronger. This is because the level populations in the ground
vibrational state are in LTE for all dust models, and
the populations in lower energy levels become relatively larger as the
gas temperature decreases with increasing dust size. The line fluxes
decrease with dust evolution for the same reason as the NIR lines. The MIR
flux for dust model B, however, does not decrease so much because
$f_{\rm dust}$ is not very small in the outer disk (see Fig.~\ref{f2}a).
The line fluxes from lower energy levels are rather stronger than
those in model A since the gas temperatures near the midplane, to
which the MIR lines are sensitive, are higher for model B (see
\S\ref{S3.2.2}).
In Table \ref{T1} we also list the infrared line fluxes for different
irradiation models, calculated by using dust model A with $a_{\rm
max}=10\micron$. For the irradiation model of X-rays only, the intensity
of NIR lines is similar to that for the irradiation model of X-rays $+$
UV and dust model A with $a_{\rm max}=1$mm. The MIR lines are
relatively weaker because the gas temperature at the outer disk is not
so high if the irradiation source is X-rays only (see Fig.~\ref{f5}).
For the irradiation model of UV only, the NIR and MIR line fluxes are a bit
weaker than those for the irradiation model of X-rays $+$ UV and the dust
model A with $a_{\rm max}=10\micron$ since the gas temperature for the
latter model is higher due to the X-ray heating. The intensity of
emission lines from lower energy levels in the MIR is similar between
the former and the latter models because X-ray heating does
not affect the gas temperature in the outer disk.
Comparing our results for the 2.12 $\micron\ v=1\rightarrow 0\ S(1)$
transition to the
observational data towards TW Hya of $1.0\times 10^{-15}$ergs s$^{-1}$
cm$^{-2}$ (Bary et al. 2003), dust model A with $a_{\rm max}=1$mm
($f_{\rm dust}=0.1$)
seems to be most suitable. We may need a larger amount of small dust
grains than predicted in dust model B in order to reproduce the
observed 2.12 $\micron$ line flux. The calculated fluxes of the MIR
lines are consistent with the upper limits of the ground based
observations
\onecolumn
\begin{figure}
\includegraphics[scale=1.0]{f14a.eps}
\includegraphics[scale=1.0]{f14b.eps}
\includegraphics[scale=1.0]{f14c.eps}
\includegraphics[scale=1.0]{f14d.eps}
\caption{The near-infrared ($1\micron <\lambda <4\micron$) spectra of
ro-vibrational transition lines of molecular hydrogen from the disks for
dust model A with $a_{\rm max}=$ (a) $10\micron$, (b) 1mm, and (c)
10cm, and (d) model B in a quiescent disk.
The irradiation model of X-rays $+$ UV is used here.
The distance to the disk is set to be $d=56$pc.
The lines from higher
energy levels become relatively stronger as the dust particles evolve
and the level populations change from LTE to non-LTE. \label{f14}}
\end{figure}
\begin{figure}
\includegraphics[scale=1.0]{f15a.eps}
\includegraphics[scale=1.0]{f15b.eps}
\includegraphics[scale=1.0]{f15c.eps}
\includegraphics[scale=1.0]{f15d.eps}
\caption{Same as Fig.~\ref{f14}, but for the mid-infrared
($5\micron <\lambda <30\micron$) spectra of pure rotational emission lines.
The lines from lower energy levels become relatively stronger as
dust particles evolve and the gas temperature decreases.
\label{f15}}
\end{figure}
\begin{figure}
\includegraphics[scale=1.0]{f16a.eps}
\includegraphics[scale=1.0]{f16b.eps}
\includegraphics[scale=1.0]{f16c.eps}
\includegraphics[scale=1.0]{f16d.eps}
\caption{Same as Fig.~\ref{f14}, but for the ultraviolet (1100\AA
$<\lambda <$ 1800\AA) emission lines.
The line fluxes of transitions originally pumped from
higher energy levels seem to be relatively weaker as dust particles
evolve and the gas temperature decreases. The lines are also affected by
the strength of UV radiation field in the disk. \label{f16}}
\end{figure}
\twocolumn
\noindent
(Richter et al. 2002; Sako et al. 2005).
In Figure \ref{f16} and Table \ref{T2} we present the line fluxes
in the UV wavelength band. In the table we list the
lines pumped by $0-2\ R(0)$, $0-2\ R(1)$, $1-2\ P(5)$, $1-2\ R(6)$,
$3-1\ P(14)$, and $4-3\ P(5)$ transitions in the wavelength band of
the strong Ly $\alpha$ emission from the central star (see e.g., Herczeg
et al. 2002; Paper I). The figure and table show that the fluxes of
most lines decrease as dust particles evolve in the disk and the
area of the high temperature region shrinks. The fluxes of some lines,
however, do not decrease because the lines in the UV wavelength band,
excited by UV photons, are
affected very much by the strength of the UV radiation field in the
disk, which becomes stronger with dust evolution (see Fig.~\ref{f11}).
The line fluxes for dust model B are relatively strong since the
populations of
molecular hydrogen in the energy levels in the ground electronic states,
from which the lines are pumped, are larger. The populations in lower
energy levels are large because the gas temperatures near the midplane
are high, while those in higher energy levels, which are
controlled by the UV pumping process, are large because the UV
irradiation from the central star penetrates deeper into the disk
(see Figs.~\ref{f8} and \ref{f11}). We note that in this work the fluxes
of lines in the UV wavelength band caused by X-ray pumping are not
calculated. Such pumping will affect the strength of weak emission lines
of molecular hydrogen (e.g., Bergin et al. 2004), but the effect on
the flux of the strong lines, listed in Table \ref{T2}, which are pumped
by the strong Ly $\alpha$ line emission, will be negligible.
The calculated UV line fluxes are $10^{-15}$--$10^{-14}$ ergs
s$^{-1}$ cm$^{-2}$, which are consistent with the observations towards
TW Hya (Herczeg et al. 2002) to
zeroth order but show some discrepancies in detail. This could be because the
UV line fluxes depend not only on the density and temperature profiles of
protoplanetary disks, but also on the strength and shape of the Ly
$\alpha$
line irradiated from the central star, as shown by Herczeg et al. (2002,
2004; see also Paper I). A simple single Gaussian profile is used for
the Ly $\alpha$ line profile in this paper, while the actual line
profile seems to be influenced by wind absorption (see Herczeg et
al. 2002, 2004). Thus, a more detailed analysis of the Ly $\alpha$ line
profile, beyond the scope of this work, will be needed
in order to fit the observed UV line fluxes in more detail.
Our results show that molecular hydrogen emission is strong and will be
easier to observe toward those disks whose central stars have strong UV and
X-ray radiation. In addition, if the disk contains a relatively large
amount of small dust grains, the volume of hot gas in the disk will be
larger, and emission lines will be stronger. Therefore, the disks which
have an observable signature of the presence of small dust grains,
such as strong 10$\micron$ silicate emission, could be good targets in
which to observe molecular hydrogen lines.
\subsubsection{$v=1\rightarrow 0\ S(1)/v=2\rightarrow 1\ S(1)$ Line Ratio}
Finally, we discuss the effect of the dust evolution on a particular
line ratio, the $v=1\rightarrow 0\ S(1)/v=2\rightarrow 1\ S(1)$ ratio,
which is
often used as a probe of physical properties of astronomical objects.
In Figure \ref{f17} the resulting line ratios for various irradiation
models and dust models are plotted. The lines with diamonds in the left
hand side of the figure show the ratios calculated using dust
model A with $a_{\rm max}=10\mu$m, 1mm, and 10cm ($f_{\rm dust}=1.0$,
0.1, and 0.01) and irradiation of both X-rays and UV
({\it solid lines}), UV only ({\it dashed
lines}), and X-rays only ({\it dot-dashed lines}).
They show that if the disk is irradiated by the UV radiation from
the central star, this ratio becomes larger as the dust particles grow
and the total surface area of dust grains, $f_{\rm dust}$, decreases.
This is because the level populations
of molecular hydrogen change from LTE to non-LTE distributions due to
the decrease in the grain photoelectric heating rate and the decrease of
grain opacity, as discussed in \S\ref{S3.3.2}. This effect appears to be
more efficient if the disk is irradiated only by UV and the
heating source of the gas at the disk surface is grain
photoelectric heating only. If the disk is irradiated by X-rays
only, the ratio does not change with the maximum dust size.
We note, however, that we have neglected the change in the X-ray
photoionization cross section due to the dust evolution.
If we take it into account, it affects the line ratio
slightly with the error being about 25\% for the model with $a_{\rm
max}=10$cm in the extreme case that the contribution to the cross
sections of heavy elements in the dust is negligible (see
\S\ref{S3.2.2}; Wilms et al. 2000).
Figure \ref{f17} shows that the line ratio in the model with
X-ray irradiation only is slightly larger than that in the model with
UV irradiation,
although the gas temperature is lower. This occurs because the level
populations are not in LTE, but are affected by the X-ray pumping
process (see \S\ref{S3.2.1}).
We also plot in the right hand side of
Figure \ref{f17} the resulting line ratios for dust model B in
quiescent and turbulent disks in open and filled triangles,
respectively. The ratio for the
model with dust grains typical of dense cloud (model A with $a_{\rm
max}=10\mu$m) is plotted as a filled circle. The results suggest that if
the dust grains coagulate and settle towards the disk midplane, the
ratio becomes substantially larger than the case in which dust grains do
not evolve. Thus, our results suggest that dust evolution
in protoplanetary disks could be observable through this particular line
ratio. Itoh et al. (2003) derived an upper
limit to the line ratio of 0.26 from an observation toward a classical T
Tauri star, LkH$\alpha$ 264, and Bary et al. (2007, in preparation) find
an upper limit of $\sim$0.2 toward a Herbig Be star, HD 97048; all the
models we used in this paper almost satisfy these observational upper
limits.
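The temperature sensitivity underlying this behavior can be illustrated
with an optically thin LTE estimate of the ratio; in the sketch below
the upper-level energies and Einstein $A$-values are approximate
literature numbers, and UV/X-ray pumping (the non-LTE part) is
deliberately ignored:
\begin{verbatim}
import numpy as np

# Optically thin LTE estimate of the 1-0 S(1)/2-1 S(1) flux ratio.
def ratio_lte(T):
    E1, E2 = 6956.0, 12550.0   # approx. E_u/k_B (K) for (v,J)=(1,3),(2,3)
    A1, A2 = 3.47e-7, 4.98e-7  # approx. Einstein A coefficients (s^-1)
    lam1, lam2 = 2.1218, 2.2477  # wavelengths (micron)
    # F ~ N_u A hc/lambda; the common g = 21 statistical weight cancels.
    return np.exp((E2 - E1) / T) * (A1 / A2) * (lam2 / lam1)

for T in (1000.0, 2000.0, 3000.0):
    print(f"T = {T:4.0f} K: 1-0 S(1) / 2-1 S(1) = {ratio_lte(T):.1f}")
\end{verbatim}
The ratio grows steeply as the gas cools, consistent with the trend
discussed above.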
\begin{figure}[t]
\includegraphics[scale=0.85]{f17.eps}
\caption{The $v=1\rightarrow 0\ S(1)/v=2\rightarrow 1\ S(1)$ line ratios
for the irradiation models of X-rays $+$ UV ({\it solid lines}),
UV only ({\it dashed lines}), and X-rays only ({\it dot-dashed lines}) and
dust model A with $a_{\rm max}=10\micron$, 1mm, and 10cm
($f_{\rm dust}=1.0$, 0.1, and 0.01) (in the left-hand panel). The
ratios for the
irradiation model by X-rays $+$ UV with dust model B in quiescent
({\it open triangle}) and turbulent ({\it filled triangle}) disks and
the dense clouds (dust model A with
$a_{\rm max}=10\micron$; {\it filled circle}) are plotted in the
right-hand panel. If the disk is irradiated by strong UV radiation, the
line ratio
increases with dust evolution as a consequence of
the decrease in gas temperature and the increase in the UV
radiation field strength.
\label{f17}}
\end{figure}
\section{Summary}
We have made a detailed model of physical structure of protoplanetary
disks and calculated the level populations and line emission of molecular
hydrogen, taking into account X-ray irradiation from the central star
as well as dust growth and settling towards the disk midplane.
We have followed the time evolution of the spatial and size
distributions of dust particles in the disks by numerically solving the
coagulation equation for settling dust particles. The resulting
mass and total surface area of dust grains per unit gas volume are much
smaller, except at the disk midplane, than those for the model in which
the dense cloud dust grains
are well mixed with the gas. At the disk surface the dust density
normalized by the initial value, $\rho_{\rm dust}/\rho_{{\rm dust}, 0}$,
and the parameter for the total surface area of dust grains, $f_{\rm
dust}$, are small due to the dust settling toward the
disk midplane. Near the disk midplane the parameter $f_{\rm
dust}$ becomes much smaller since small dust particles are removed by
dust coagulation.
We have studied the effects of X-ray irradiation on the physical structure
of the disks and found
that the X-ray irradiation is the dominant heating source in the inner
region and in the surface layers of the disk. FUV
heating dominates in the middle layers and in the outer region of the disk.
This is because the FUV radiation is scattered efficiently by dust
grains, while the Compton scattering of X-ray radiation is inefficient
in the energy range of $E\la 1$keV in which T Tauri
stars mainly emit X-ray radiation. We found that the ionization
rate caused by X-rays is much smaller than that due to interstellar
cosmic-rays near the disk midplane because of the relatively large
attenuation and inefficient scattering of X-rays.
The dust evolution in the disks affects the physical disk structure,
especially the gas temperature at the disk surface and the FUV
radiation field within the disk. As the dust particles grow or settle
towards the disk midplane, the gas temperature in the middle layers and
the outer disk decreases because the grain photoelectric
heating which is induced by FUV radiation becomes less efficient.
Meanwhile, the
FUV radiation from the central star penetrates deeper into the disk due
to the decrease of grain opacity.
Furthermore, making use of the obtained physical structure of the disks,
we calculated the
level populations of molecular hydrogen in the ground electronic state.
Our results show that if the central star has strong X-ray and weak UV
radiation, the level populations are controlled by X-ray pumping.
Otherwise, the level populations are mainly
controlled by thermal collisions or UV pumping, depending on the
dust properties in the disk. As the dust particles evolve in the disk,
the level populations change from LTE to non-LTE distributions since
collisional excitation becomes less efficient due to
the decrease of the gas temperature at the disk surface while UV
pumping becomes more efficient owing to the stronger UV
radiation field in the disk.
Finally, using these level populations, we calculated the line emission of
molecular hydrogen from the disk. The ro-vibrational line spectra
in the near-infrared wavelength band show that the emission lines from high
energy levels become relatively stronger as the dust particles evolve
in the disk. Again, this is due to level populations changing from LTE to
non-LTE and the populations in higher vibrational energy
levels becoming relatively larger. For the pure rotational
line spectra in the mid-infrared wavelength band, it is the emission lines
from lower energy levels which become relatively stronger. This is because the
level populations in the ground vibrational state
are in LTE and the populations in lower energy levels
become relatively larger with dust evolution and
decreasing dust surface area, which results in lower gas temperatures.
For transitions in the UV wavelength band,
the dependence on the dust evolution is not so straightforward because
the line fluxes decrease
as the area of the high-temperature region shrinks, while they increase as
UV irradiation from the central star penetrates deeper into the
disks. Basically, the line fluxes which originate from pumping from
higher energy levels seem to be relatively weaker as the dust particles
evolve and the gas temperature decreases.
Our results suggest that
infrared line ratios of molecular hydrogen could be a useful probe
of dust evolution in protoplanetary disks.
If the dust particles evolve, the $v=1\rightarrow
0\ S(1)/v=2\rightarrow 1\ S(1)$ line ratio, for example, becomes clearly
larger than that for the dense cloud dust model
(without the dust evolution). Further observations of the line
ratios of molecular hydrogen could provide some constraints on the dust
evolution model in protoplanetary disks.
\acknowledgments
We are grateful to an anonymous referee for comments which
improved the clarity of our discussion.
We would like to thank J.S. Bary, D.A. Weintraub, M.J. Richter, and
M.A. Bitner for giving us information about unpublished observational
data, Y. Itoh and T. Takeuchi for fruitful comments,
and T. Matsuda for useful help for numerical calculations.
This work is supported by ``The 21st Century COE Program of Origin and
Evolution of Planetary Systems'' and the Grants-in-Aid for
Scientific Research 17039008, 17540217, and 18026006 in MEXT.
Astrophysics at Queen's University Belfast is supported by PPARC.
H.\ N. and M.\,T. acknowledge financial support from the Japan Society
for the Promotion of Science.
\section{Introduction}
In recent years extensive research has addressed challenges and
problems raised in mobile, sparse and intermittently connected
networks (i.e., DTNs). In such networks, forwarding packets greatly
depends on the occurrence of contacts. Since the existence of links
is crucial to deliver data from a source to a destination, the
contacts and their properties emerge as a key issue in designing
efficient communication protocols \cite{Hossmann2010a}. Obviously,
the occurrence of links is determined by the behavior of the nodes
in the network \cite{Chaintreau07}. It has been widely shown in
\cite{Hsu2009a, Thakur2010} that human mobility is directed by
social intentions and reflects spatio-temporal regularity. A node
can follow other nodes to a specific location (spatial level) and
may exhibit a behavior which is regulated by a schedule
(temporal level). The social intentions that govern the behavior of
mobile users have also been observed through statistical analyses in
\cite{Chaintreau07,Karagiannis2007} by showing that the distribution
of inter-contact times follows a truncated power law.
To improve the performance of protocols for intermittently connected
wireless networks, it is paramount to track and understand the
behavior of the nodes. We propose an approach that analyzes the
network statistics, quantifies the social relationship between each
pair of nodes and exploits this measure as a score indicating whether
a link will occur in the immediate future.
In this paper, we adapt a tensor-based link prediction algorithm
successfully designed for data-mining \cite{Acar2009,Dunlavy2011}.
Our proposal records the network structure for $T$ time periods and
predicts link occurrences for the $(T+1)^{th}$ period. This link
prediction technique proceeds in two steps: first, time-dependent
network snapshots are tracked as adjacency matrices which form a
tensor; second, the Katz measure \cite{Katz1953}, inspired by
sociometry, is applied. To the best of our knowledge, this work is
the first to perform the prediction technique in a distributed way.
The assessment of its efficiency can be beneficial for the
improvement or the design of communication protocols in mobile,
sparse and intermittently connected networks.
The paper is organized as follows: Section 2 presents the related
work that highlights the growing interest in social analysis
and justifies the recourse to tensors and to the Katz measure to
perform predictions. In Section 3, we describe the two main steps
that characterize our proposal. Section 4 details simulation
scenarios used to evaluate the tensor-based prediction approach,
analyzes the obtained results and assesses its efficiency. Finally,
we conclude the paper in Section 5.
\section{Related Work}
Social Network Analysis (SNA) \cite{Wasserman1994, Katsaros2010a}
and ad-hoc networking have provided new perspectives for the design
of network protocols \cite{Hui2008, Daly2007, Hossmann2010}. These
protocols aim to exploit the social aspects and relationship
features between the nodes. Studies conducted in the field of SNA
have mainly focused on two kinds of concepts: the most well-known
centrality metrics suggested in
\cite{Wasserman1994,Page1999,Hwang2008,Chung1997} and the community
detection mechanisms proposed in
\cite{Bollobas1998,Newman2006,Palla2005,Wasserman1994}. From this
perspective, several works have tried to develop synthetic models
that aim to reproduce realistic mobility patterns
\cite{Hsu2009a,Lee2009}. Nonetheless, the study done in
\cite{Hossmann2010a} has underlined the fact that synthetic models
cannot faithfully reproduce human behavior because these synthetic
models are only location-driven and they do not track social
intentions explicitly.
In their survey, Katsaros et al. \cite{Katsaros2010a} have
underlined the limits of these protocols when the network topology
is time-varying. The main drawback comes down to their inability to
model topology changes as they are based on graph theory tools. To
overcome this limit, tensor-based approaches have been used in some
works to build statistics on the behavior of nodes in wireless
networks over time as in \cite{Acer2010}. Thakur et al.
\cite{Thakur2010} have also developed a model using a collapsed
tensor that tracks user's location preferences (characterized by
probabilities) with a considered time granularity (week days for
example) in order to follow the emergence of ``behavior-aware'' delay
tolerant networks closely.
As previously mentioned, tracking the social ties between network
entities enables us to understand how the network is structured.
Such tracking has led to the design of techniques for link
prediction. Link prediction in social networks has been addressed in
data mining applications as in \cite{Acar2009,Dunlavy2011}.
Concerning link prediction in community-based communication
networks, \cite{Wang2011} has highlighted salient measures that
allow link occurrence between network users to be predicted. These
metrics determine if a link occurrence is likely by quantifying the
degree of proximity of two nodes (Katz measure \cite{Katz1953}, the
number of common neighbors, Adamic-Adar measure \cite{Adamic2003},
Jaccard's coefficient \cite{Jaccard1901,Salton1986}, \ldots) or by
computing the similarity of their mobility patterns (spatial cosine
similarity, co-location rate, \ldots).
In this paper, we propose a link prediction technique that tracks
the temporal network topology evolution in a tensor and computes a
metric in order to characterize the social-based behavior similarity
of each pair of nodes. Some approaches have addressed the same
problem in data-mining in order to perform link prediction. Acar et
al. \cite{Acar2009} and Dunlavy et al. \cite{Dunlavy2011} have
provided detailed methods based on matrix and tensor factorizations
for link prediction in social networks such as the DBLP data set
\cite{DBLP}. These methods have been successfully applied to predict
a collaboration between two authors by recording the structure of
relationships over a tracking period. Moreover, they have
highlighted the use of the Katz measure \cite{Katz1953}, which can
be seen as a behavior similarity metric, by assigning a link
prediction score for each pair of nodes. The efficiency of the Katz
measure in link prediction has been also demonstrated in
\cite{Acar2009,Dunlavy2011,Wang2011,Liben-Nowell2007}.
\section{Description of the Tensor Based Prediction Method}
It has been highlighted that a human mobility pattern shows a high
degree of temporal and spatial regularity, and each individual is
characterized by a time-dependent mobility pattern and a tendency to
return to preferred locations \cite{Chaintreau07, Hsu2009a,
Thakur2010}. In this paper, we propose an approach that aims to
exploit the similar behavior of nodes in order to predict link
occurrence based on social closeness.
To quantify the social closeness between each pair of nodes in the
network, we use the Katz measure \cite{Katz1953} inspired by
sociometry. This measure aims at quantifying the social distance
between people inside a social network. We also need to use a
structure that records link occurrence between each pair of nodes
over a certain period of time in order to perform the similarity
measure computation. The records represent the network behavior
statistics in time and space. To this end, a third-order tensor is
considered. A tensor $\boldsymbol{\mathcal{Z}}$ consists of a set of
slices and each slice corresponds to an adjacency matrix of the
network tracked over a given period of time $p$. After the tracking
phase, we reduce the tensor into a matrix (or collapsed tensor)
which expresses the weight of each link according to its lifetime
and its recentness. A high weight value in this matrix denotes a
link whose corresponding nodes share a high degree of closeness. We
apply the Katz measure to the collapsed tensor to compute a matrix
of scores $\mathbf{S}$ that not only considers direct links but also
indirect links (multi-hop connections). The matrix of scores
expresses the degree of similarity of each pair of nodes according
to the spatial and the temporal levels. The higher the score is, the
better the similarity pattern gets. Therefore, two nodes that have a
high similarity score are more likely to have a common link in the
future.
\subsection{Notation}
Scalars are denoted by lowercase letters, e.g., $a$. Vectors are
denoted by boldface lowercase letters, e.g., $\bf{a}$. Matrices are
denoted by boldface capital letters, e.g., $\mathbf{A}$. The
$r^{th}$ column of a matrix $\mathbf{A}$ is denoted by $\bf{a_r}$.
Higher-order tensors are denoted by bold Euler script letters, e.g.,
$\boldsymbol{\mathcal{T}}$. The $n^{th}$ frontal slice of a tensor
$\boldsymbol{\mathcal{T}}$ is denoted $\mathbf{T_n}$. The $i^{th}$
entry of a vector $\bf{a}$ is denoted by $\bf{a}(i)$, element
$(i,j)$ of a matrix $\mathbf{A}$ is denoted by $\mathbf{A}(i,j)$,
and element $(i, j, k)$ of a third-order tensor
$\boldsymbol{\mathcal{T}}$ is denoted by $\mathbf{T_{i}}(j, k)$.
\subsection{Matrix of Scores Computation}
The computation of the similarity scores proceeds in two
distinct steps. First, we store the contacts between nodes in a
tensor $\boldsymbol{\mathcal{Z}}$ and reduce it to a matrix
$\mathbf{X}$ called the collapsed tensor. In a second step, we
compute the matrix of similarity scores $\mathbf{S}$ relying on the
matrix $\mathbf{X}$ (cf. Fig. \ref{Zayani0}).
We consider that the data is collected into the tensor
$\boldsymbol{\mathcal{Z}}$. The entry $\mathbf{Z_{p}}(i, j)$ of slice $p$
describes the status of a link between a node $i$ and a node $j$
during the time period $[(p-1) \cdot t,\ p \cdot t[$ ($p > 0$), where
$\mathbf{Z_{p}}(i, j)$ is 1 if the link exists during this period
and 0 otherwise. The tensor is formed by a succession of adjacency
matrices $\mathbf{Z_{1}}$ to $\mathbf{Z_{T}}$ where the subscript
letters designate the observed period. To collapse the data into one
matrix as done in \cite{Acar2009,Dunlavy2011}, we choose to compute
the collapsed weighted tensor (which is the most efficient way to
collapse the data as shown in \cite{Acar2009} and
\cite{Dunlavy2011}). The links structure is considered over time and
the more recent the adjacency matrix is, the more weighted the
structure gets. The collapsed weighted tensor is computed as
following:
\begin{equation}
\mathbf{X}(i,j)=\sum_{p=1}^{T} (1-\theta)^{T-p}\ \mathbf{Z_{p}}(i,j)
\label{eq1}
\end{equation}
where the matrix $\mathbf{X}$ is the collapsed weighted tensor of
$\boldsymbol{\mathcal{Z}}$, and $\theta$ is a parameter used to
adjust the weight of recentness and is between 0 and 1.
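As an illustration, this collapsing step can be sketched in a few
lines of Python/NumPy; the array \texttt{Z}, stacking the $T$ tracked
adjacency matrices, and the function name \texttt{collapse} are ours
and not part of any existing framework:
\begin{verbatim}
import numpy as np

def collapse(Z, theta=0.2):
    # Z: array of shape (T, n, n) stacking the slices Z_1, ..., Z_T
    T = Z.shape[0]
    # weight of slice p (1-based) is (1 - theta)**(T - p):
    # recent slices contribute more to the collapsed tensor X
    w = (1.0 - theta) ** (T - np.arange(1, T + 1))
    return np.tensordot(w, Z, axes=(0, 0))
\end{verbatim}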
\begin{figure}[!tb]
\centering
\includegraphics[width=0.4\textwidth]{Zayani0}
\caption{Example of the matrix $\mathbf{S}$ computation}
\label{Zayani0}
\end{figure}
As the Katz measure quantifies the network proximity between two nodes
and given that there are ``social relationships'' between nodes in
networks with intermittent connections, it is appealing to exploit
this measure and to apply it to the collected data. The
Katz score of a link between a node $i$ and a node $j$ is given by
\cite{Katz1953}:
\begin{equation}
\mathbf{S}(i,j)=\sum_{\ell=1}^{+\infty} \beta^{\ell} P_{\left \langle \ell \right \rangle}(i,j)
\label{eq2}
\end{equation}
where $\beta$ is a strictly positive user-defined parameter,
$\beta^{\ell}$ is the weight of a path of length $\ell$ and
$P_{\left \langle \ell \right \rangle}(i,j)$ represents the number
of paths of length $\ell$ that join the node $i$ to the node $j$.
It is clear that the longer the path is, the lower its weight gets.
The Katz scores can also be computed by means of the collapsed
weighted tensor detailed previously: we quantify the proximity
between nodes relying on the paths that separate a pair of nodes and
on the weights of the links that form these paths. The score matrix
$\mathbf{S}$ can then be rewritten as:
\begin{equation}
\mathbf{S}=\sum_{\ell=1}^{+\infty} \beta^{\ell} \cdot \mathbf{X}^{\ell}=(\mathbf{I}-\beta \cdot \mathbf{X})^{-1}-\mathbf{I}
\label{eq3}
\end{equation}
where $\mathbf{I}$ is the identity matrix and $\mathbf{X}$ is the
collapsed weighted tensor obtained above.
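Concretely, and assuming $\beta$ is chosen so that the spectral
radius of $\beta \cdot \mathbf{X}$ is below one (which guarantees
convergence of the series), the score matrix can be obtained with a
single linear solve, as in the following sketch (function name ours):
\begin{verbatim}
import numpy as np

def katz_scores(X, beta=0.001):
    # S = (I - beta X)^{-1} - I, valid when the spectral
    # radius of beta * X is strictly smaller than one
    I = np.eye(X.shape[0])
    return np.linalg.solve(I - beta * X, I) - I
\end{verbatim}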
As previously mentioned, Fig. \ref{Zayani0} depicts an example
which details the two major steps described above. We consider a
network consisting of 4 nodes with a dynamic topology over 4 time
periods and highlight how the similarity scores are obtained. The
parameters $\theta$ and $\beta$ are respectively set to 0.2 and
0.001, both for this example and for the simulations below. We
investigated suitable values for these two parameters through several
simulations and found that this setting guarantees the convergence of
the Katz measure, as explained in \cite{Franceschet2011}. In this
example, we assume that all nodes have full knowledge of the network
structure.
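For instance, with $T=4$ and $\theta=0.2$, the weights
$(1-\theta)^{T-p}$ assigned to the four slices are $0.8^3=0.512$,
$0.8^2=0.64$, $0.8$ and $1$, so the most recent snapshot contributes
roughly twice as much as the oldest one.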
\section{Performance Evaluation and Simulation Results}
To evaluate how efficient the tensor-based link prediction is in
intermittently connected wireless networks, we consider two real
traces. In the following, we firstly present the traces used for the
link prediction evaluation. Then, we expose the corresponding
results, analyze the effectiveness of the prediction method and
compare its performance to those of well-known link prediction
metrics proposed in the literature.
\subsection{Simulation Traces}
We consider two real traces to evaluate the link prediction
approach. We exploit them to construct the tensor by generating
adjacency matrices for several tracking periods. For each case, we
track the required statistics about network topology within $T$
periods. We also consider the adjacency matrix corresponding to the
period $T$+1 as a benchmark to evaluate the Katz score matrix. We
detail, in the following, the traces used.
\begin{itemize}
\item \textbf{First Trace: Dartmouth Campus trace:}
we choose the trace of 01/05/06 \cite{Dartmouth} and construct the
tensor slices relying on SYSLOG traces between 8 a.m. and 3 p.m. (7
hours). The number of nodes is 1018 and the number of locations
(i.e. access points) is 128.
\item \textbf{Second Trace: MIT Campus trace:}
we focus on the trace of 07/23/02 \cite{Balazinska2003} and consider
also the events between 8 a.m. and 3 p.m. to build up the tensor.
The number of nodes is 646 and the number of locations (i.e. access
points) is 174.
\end{itemize}
For each scenario, we generate adjacency matrices corresponding to
different tracking periods $t$: 5, 10, 30 and 60 minutes. To record
the network statistics over 7 hours, the tensor has a
number of slices $T$ equal to 84, 42, 14 and 7, respectively (for
$t$=5 minutes, 84 periods are necessary to cover 7
hours). We take into account both centralized and distributed cases
for the computation of scores.
\begin{itemize}
\item \textbf{The Centralized Computation:}
the centralized way assumes that there is a central entity which has
full knowledge of the network structure at each period and applies
Katz measure to the global adjacency matrices.
\item \textbf{The Distributed Computation:}
each node has a limited knowledge of the network structure. We
assume that a node is aware of its two-hop neighborhood. Hence,
the computation of the Katz measures is performed on a
local-information basis (see the sketch after this list).
\end{itemize}
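The distributed computation can be sketched as follows: each node
restricts the collapsed tensor to its two-hop neighborhood before
applying Eq.~(\ref{eq3}). The snippet below is a simplified
illustration under these assumptions (names are ours):
\begin{verbatim}
import numpy as np

def local_scores(X, node, beta=0.001):
    # restrict the collapsed tensor X to the two-hop
    # neighborhood of `node`, then apply the Katz measure
    direct = np.flatnonzero(X[node] > 0)
    two_hop = np.flatnonzero(X[direct].sum(axis=0) > 0)
    hood = np.union1d(np.append(direct, node), two_hop)
    Xl = X[np.ix_(hood, hood)]
    I = np.eye(len(hood))
    return hood, np.linalg.solve(I - beta * Xl, I) - I
\end{verbatim}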
\subsection{Performance Analysis}
As described in the previous section, we apply the link prediction
method to the traces, considering different tensor slice periods
in both centralized and distributed cases. In order to assess the
efficiency of this method, we consider several link prediction
scenarios (according to the trace, the tensor slice period and the
way the scores are computed) and we use different evaluation techniques.
We detail in the following the results obtained for the evaluation
and analyze the link prediction efficiency. Then, we compare the
performance of the proposed framework to those of major link
prediction metrics in order to justify the use of the Katz measure.
\subsubsection{Evaluation of the link prediction technique}
To evaluate the efficiency of our proposal, we plot the ROC curves
(Receiver Operating Characteristic curves) \cite{FAWCETT2006}. In
Fig. \ref{ROC_Dartmouth}, we depict the ROC curves obtained after
performing prediction on the Dartmouth Campus trace and for
different tensor slice times. Also, appropriate metrics are used in
order to assess the performance of the proposed link prediction
technique. To this end, we compute the Area Under the ROC Curve
metric (AUC metric) \cite{FAWCETT2006} which could be considered as
a good performance indicator in our case. The AUC metric of each
scenario is determined from the corresponding ROC curve. Moreover,
we consider the top scores ratio metric at $T$+1. To determine this
metric, we compute the number of links correctly identified through
the link prediction technique. We count, for each considered time
period, the number of existing links at period $T$+1, which we call
$L$. Then, we extract the links having the $L$ highest scores and
determine the number of existing links in both sets. The evaluation
metrics are computed for all traces with different tensor slice
periods in both distributed and centralized scenarios. The results
corresponding to the prediction of all links are listed in Table
\ref{table2} (Dartmouth Campus trace) and Table \ref{table3} (MIT
Campus trace).
\begin{figure*}[!tb]
\centering
\subfigure[5 minutes tensor slice period]{
\label{fig1}
\includegraphics[width=0.15\textwidth,angle=270]{5}
}\hspace{1cm}
\subfigure[10 minutes tensor slice period]{
\label{fig2}
\includegraphics[width=0.15\textwidth,angle=270]{10}
}\\
\subfigure[30 minutes tensor slice period]{
\label{fig3}
\includegraphics[width=0.15\textwidth,angle=270]{30}
}\hspace{1cm}
\subfigure[60 minutes tensor slice period]{
\label{fig4}
\includegraphics[width=0.15\textwidth,angle=270]{60}
}
\caption{ROC Curves for different prediction cases applied on Dartmouth Campus trace}
\label{ROC_Dartmouth}
\end{figure*}
\begin{table}[!t]
\renewcommand{\arraystretch}{1}
\caption{Evaluation metrics for the prediction of all links applied
on Dartmouth Campus trace} \label{table2} \centering
\scalebox{0.55}{
\begin{tabular}{|l|c|c|}
\hline
\backslashbox{\bfseries Prediction Cases}{\bfseries Metrics} & \bfseries AUC & \bfseries Top Scores Ratio at $T$+1\\
\hline\hline
Distributed Case and $t$=5 mins & 0.9932 & 93.70\%\\
Centralized Case and $t$=5 mins & 0.9905 & 93.61\%\\
\hline
Distributed Case and $t$=10 mins & 0.9915 & 90.26\% \\
Centralized Case and $t$=10 mins & 0.9883 & 90.19\% \\
\hline
Distributed Case and $t$=30 mins & 0.9813 & 82.31\% \\
Centralized Case and $t$=30 mins & 0.9764 & 82.56\% \\
\hline
Distributed Case and $t$=60 mins & 0.9687 & 76.10\% \\
Centralized Case and $t$=60 mins & 0.9636 & 75.94\% \\
\hline
\end{tabular}}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{1}
\caption{Evaluation metrics for the prediction of all links applied
on MIT Campus trace} \label{table3} \centering \scalebox{0.55}{
\begin{tabular}{|l|c|c|}
\hline
\backslashbox{\bfseries Prediction Cases}{\bfseries Metrics} & \bfseries AUC & \bfseries Top Scores Ratio at $T$+1\\
\hline\hline
Distributed Case and $t$=5 mins & 0.9907 & 91.48\%\\
Centralized Case and $t$=5 mins & 0.9929 & 91.48\%\\
\hline
Distributed Case and $t$=10 mins & 0.9797 & 85.18\% \\
Centralized Case and $t$=10 mins & 0.9809 & 85.14\% \\
\hline
Distributed Case and $t$=30 mins & 0.9589 & 73.31\% \\
Centralized Case and $t$=30 mins & 0.9578 & 73.76\% \\
\hline
Distributed Case and $t$=60 mins & 0.9328 & 64.54\% \\
Centralized Case and $t$=60 mins & 0.9325 & 64.54\% \\
\hline
\end{tabular}}
\end{table}
\begin{table}[t]
\renewcommand{\arraystretch}{1}
\caption{Table of confusion of a binary prediction technique}
\label{table_confusion} \centering \scalebox{0.55}{
\begin{tabular}{|l|P{0.2}|P{0.2}|}
\hline \backslashbox{Prediction outcome} {Actual
value}& Positive & Negative\\
\hline
Positive & True Positive ($TP$) & False Positive ($FP$)\\
\hline
Negative & False Negative ($FN$) & True Negative ($TN$)\\
\hline
\end{tabular}
}
\end{table}
We first note that, in Fig. \ref{ROC_Dartmouth} and for all
scenarios, the prediction of all links is quite efficient compared
to a random guess (the curves bend toward the upper left
corner). We obtain similar ROC curves with the MIT Campus trace (not
presented due to space limitations). Moreover, the high values of the
AUC metric (above 0.9) and of the top scores ratio at $T$+1 show that
the prediction method is efficient in predicting future links (for
the period $T$+1). We also note that prediction is better when the
tensor slice periods are shorter. This observation can be explained
by two reasons. On the one hand, with a short tensor slice time, a
brief and occasional contact between two nodes is less likely to be
tracked. On the other hand, recording seven hours of statistics
requires 84 adjacency matrices of 5-minute periods instead of 7
matrices in the 60-minute case. Thus, tracking a short contact
between two nodes has less influence when the tensor slices are more
numerous.
Regarding the comparison between the two ways of computing the Katz
scores, we observe that the centralized and distributed score
computations achieve similar performance. In fact, the similarity
score is dominated by short paths between a pair of nodes: paths of
more than two hops receive weaker scores and thus weigh less than
shorter ones. Since the distributed case assumes that each node knows
its neighbors at most two hops away, the distributed score
computation performs almost as well as the centralized one.
\subsubsection{Prediction Performance Comparison between the
Tensor-Based Technique and Well-Known Link Prediction Metrics}
In this subsection, we compare our proposal to other similar
approaches (we use the distributed design of our framework
to compute the Katz scores). To provide a comprehensive comparison,
we also evaluate the prediction efficiency of well-known
prediction metrics presented in the literature. On the one hand, we
consider behavioral-based link prediction metrics as the similarity
metric of Thakur et al. \cite{Thakur2010} and two metrics expressing
mobile homophily proposed by Wang et al. in \cite{Wang2011}: the
spatial cosine similarity and the co-location rate. On the other
hand, we take two link prediction metrics based on measuring the
degree of proximity as the Katz measure: they are the Adamic-Adar
measure \cite{Adamic2003} and the Jaccard's coefficient
\cite{Jaccard1901,Salton1986}.
To assess the efficiency of each link prediction metric, we consider
these evaluation measures:
\begin{itemize}
\item \textbf{Top Scores Ratio in the period $T$+1 (TSR):} to determine this metric, we compute
the percentage of occurring links identified through the link
prediction technique. We count the number of existing links (at
period $T$+1 or during the periods coming after the period $T$),
which we call $L$. Then, we extract the pairs of nodes having the $L$
highest scores and determine the percentage of links involved in
both sets.
\item \textbf{Accuracy (ACC):} this measure is defined in \cite{FAWCETT2006} as
the ratio of correct prediction (true positive and true negative
predictions) over all predictions (true positive, true negative,
false positive and false negative predictions). In other words, it
is computed by the ratio $\frac{TP+TN}{TP+FP+TN+FN}$ (see Table
\ref{table_confusion}). We identify for each scenario the maximum
value of the accuracy, which indicates the degree of precision that
each prediction metric can reach.
\item \textbf{Precision or Positive Predictive Value (PPV):}
it represents the proportion of links with positive prediction
(occurring in the future) which are correctly identified
\cite{FAWCETT2006}. Based on Table \ref{table_confusion}, the
precision is equal to $\frac{TP}{TP+FP}$. This value is determined
according to the deduced accuracy value.
\item \textbf{Recall or True Positive Rate (TPR):} it quantifies
the ratio of correctly identified links over the occurring links in
the future \cite{FAWCETT2006}. Referring to Table
\ref{table_confusion}, the recall is defined by the expression
$\frac{TP}{TP+FN}$. This value is also computed according to the
retained accuracy value.
\item \textbf{F-measure or balanced F1 score:} the F-measure \cite{vanRijsbergen1979} is the harmonic mean of
precision and recall. It is expressed by
$2\cdot\frac{\mathrm{precision}\,\cdot\,\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}$. The higher the
F-measure is, the better the tradeoff between precision and recall
gets and the more efficient the prediction metric is (these measures
are illustrated in the sketch after this list).
\end{itemize}
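For completeness, the confusion-matrix based measures above can be
computed from the predicted and actual link sets at period $T$+1 as
in the following sketch (a minimal illustration; names are ours):
\begin{verbatim}
import numpy as np

def confusion_metrics(pred, actual):
    # pred, actual: boolean (n, n) link matrices at period T+1
    TP = np.sum(pred & actual)
    FP = np.sum(pred & ~actual)
    FN = np.sum(~pred & actual)
    TN = np.sum(~pred & ~actual)
    acc = (TP + TN) / (TP + FP + FN + TN)
    ppv = TP / (TP + FP) if TP + FP else 0.0
    tpr = TP / (TP + FN) if TP + FN else 0.0
    f1 = 2 * ppv * tpr / (ppv + tpr) if ppv + tpr else 0.0
    return acc, ppv, tpr, f1
\end{verbatim}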
The evaluation metrics are computed for all traces with different
tracking period lengths (5, 10, 30 and 60 minutes). For each trace,
we track the network topology from 8 a.m. to 4 p.m. As previously, we
divide the history into $T$ periods and focus on
predicting the links occurring in the period $T$+1. Regarding the
Dartmouth Campus trace, the results are reported in Table
\ref{table_Dartmouth}. For the MIT Campus trace, the prediction
results are listed in Table \ref{table_MIT}.
\begin{table}[!tb]
\renewcommand{\arraystretch}{1}
\caption{Evaluation metrics for the prediction applied on the
Dartmouth Campus trace} \label{table_Dartmouth} \centering
\scalebox{0.55}{
\begin{tabular}{|P{0.07}|P{0.17}|P{0.078}|P{0.078}|P{0.078}|P{0.078}|P{0.078}|}
\hline
\bfseries Period length & \bfseries Prediction Score & \bfseries TSR in $T$+1 & \bfseries Accuracy & \bfseries Precision (PPV) & \bfseries Recall (TPR) & \bfseries F-measure\\
\hline\hline \multirow{6}{2cm}{5 mins ($T$=96)}
& Thakur's Metric & 41.39\% & 99.11\% & 36.40\% & 11.57\% & 0.1756 \\ \cline{2-7}
& Spatial Cosine Sim. & 66.01\% & 99.45\% & 67.44\% & 63.75\% & 0.6554 \\ \cline{2-7}
& Co-Location Rate & 68.96\% & 99.50\% & 73.98\% & 60.71\% & 0.6669 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 83.81\% & 99.74\% & 82.58\% & 85.57\% & 0.8405 \\ \cline{2-7}
& Jaccard's Coeff. & 82.54\% & 99.72\% & 81.27\% & 85.08\% & 0.8313 \\ \cline{2-7}
& Katz Measure & 90.88\% & 99.86\% & 90.59\% & 91.87\% & 0.9123 \\ \cline{2-7}
\hline\hline \multirow{6}{2cm}{10 mins ($T$=48)}
& Thakur's Metric & 43.29\% & 99.10\% & 37.31\% & 11.15\% & 0.1717 \\ \cline{2-7}
& Spatial Cosine Sim. & 66.71\% & 99.45\% & 68.52\% & 62.99\% & 0.6564 \\ \cline{2-7}
& Co-Location Rate & 68.78\% & 99.49\% & 71.50\% & 65.63\% & 0.6844 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 81.01\% & 99.68\% & 78.87\% & 84.00\% & 0.8135 \\ \cline{2-7}
& Jaccard's Coeff. & 79.75\% & 99.66\% & 78.04\% & 82.83\% & 0.8036 \\ \cline{2-7}
& Katz Measure & 86.39\% & 99.78\% & 89.75\% & 82.94\% & 0.8621 \\ \cline{2-7}
\hline\hline \multirow{6}{2cm}{30 mins ($T$=16)}
& Thakur's Metric & 45.18\% & 99.06\% & 39.08\% & 10.83\% & 0.1696 \\ \cline{2-7}
& Spatial Cosine Sim. & 67.35\% & 99.42\% & 67.60\% & 67.00\% & 0.6730 \\ \cline{2-7}
& Co-Location Rate & 67.78\% & 99.45\% & 72.47\% & 61.33\% & 0.6644 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 71.82\% & 99.50\% & 71.25\% & 73.86\% & 0.7253 \\ \cline{2-7}
& Jaccard's Coeff. & 71.34\% & 99.50\% & 72.63\% & 69.65\% & 0.7111 \\ \cline{2-7}
& Katz Measure & 79.83\% & 99.64\% & 80.09\% & 79.48\% & 0.7978 \\ \cline{2-7}
\hline\hline \multirow{6}{2cm}{60 mins ($T$=8)}
& Thakur's Metric & 46.39\% & 99.04\% & 41.39\% & 10.61\% & 0.1689 \\ \cline{2-7}
& Spatial Cosine Sim. & 67.55\% & 99.40\% & 68.51\% & 65.70\% & 0.6708 \\ \cline{2-7}
& Co-Location Rate & 68.11\% & 99.42\% & 72.21\% & 60.31\% & 0.6573 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 65.98\% & 99.38\% & 69.73\% & 57.42\% & 0.6298 \\ \cline{2-7}
& Jaccard's Coeff. & 67.00\% & 99.47\% & 68.31\% & 64.53\% & 0.6637 \\ \cline{2-7}
& Katz Measure & 74.09\% & 99.53\% & 75.33\% & 72.84\% & 0.7406 \\ \cline{2-7}
\hline
\end{tabular}}
\end{table}
\begin{table}[!tb]
\renewcommand{\arraystretch}{1}
\caption{Evaluation metrics for the prediction applied on the MIT
Campus trace} \label{table_MIT} \centering \scalebox{0.55}{
\begin{tabular}{|P{0.07}|P{0.17}|P{0.078}|P{0.078}|P{0.078}|P{0.078}|P{0.078}|}
\hline
\bfseries Period length & \bfseries Prediction Score & \bfseries TSR in $T$+1 & \bfseries Accuracy & \bfseries Precision (PPV) & \bfseries Recall (TPR) & \bfseries F-measure\\
\hline\hline \multirow{6}{2cm}{5 mins ($T$=96)}
& Thakur's Metric & 58.22\% & 99.29\% & 65.58\% & 44.96\% & 0.5335 \\ \cline{2-7}
& Spatial Cosine Sim. & 60.87\% & 99.34\% & 72.56\% & 44.17\% & 0.5491 \\ \cline{2-7}
& Co-Location Rate & 69.35\% & 99.49\% & 77.79\% & 60.71\% & 0.6820 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 84.20\% & 99.72\% & 84.22\% & 84.36\% & 0.8429 \\ \cline{2-7}
& Jaccard's Coeff. & 82.18\% & 99.68\% & 83.11\% & 81.12\% & 0.8210 \\ \cline{2-7}
& Katz Measure & 90.14\% & 99.86\% & 95.29\% & 89.02\% & 0.9205 \\ \cline{2-7}
\hline\hline \multirow{6}{2cm}{10 mins ($T$=48)}
& Thakur's Metric & 57.70\% & 99.27\% & 65.25\% & 44.58\% & 0.5569 \\ \cline{2-7}
& Spatial Cosine Sim. & 60.50\% & 99.32\% & 72.56\% & 43.18\% & 0.5414 \\ \cline{2-7}
& Co-Location Rate & 68.74\% & 99.46\% & 76.50\% & 60.08\% & 0.6730 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 80.04\% & 99.63\% & 79.31\% & 80.87\% & 0.8008 \\ \cline{2-7}
& Jaccard's Coeff. & 77.97\% & 99.59\% & 80.53\% & 73.77\% & 0.7700 \\ \cline{2-7}
& Katz Measure & 86.83\% & 99.78\% & 86.62\% & 87.25\% & 0.8693 \\ \cline{2-7}
\hline\hline \multirow{6}{2cm}{30 mins ($T$=16)}
& Thakur's Metric & 56.73\% & 99.20\% & 62.87\% & 46.14\% & 0.5322 \\ \cline{2-7}
& Spatial Cosine Sim. & 59.35\% & 99.26\% & 72.65\% & 40.51\% & 0.5202 \\ \cline{2-7}
& Co-Location Rate & 65.03\% & 99.39\% & 80.75\% & 49.93\% & 0.6171 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 67.07\% & 99.35\% & 67.77\% & 64.55\% & 0.6572 \\ \cline{2-7}
& Jaccard's Coeff. & 66.34\% & 99.39\% & 78.56\% & 51.97\% & 0.6214 \\ \cline{2-7}
& Katz Measure & 72.85\% & 99.47\% & 88.30\% & 53.86\% & 0.7279 \\ \cline{2-7}
\hline\hline \multirow{6}{2cm}{60 mins ($T$=8)}
& Thakur's Metric & 55.70\% & 99.08\% & 63.51\% & 41.85\% & 0.5045 \\ \cline{2-7}
& Spatial Cosine Sim. & 57.57\% & 99.14\% & 72.82\% & 37.22\% & 0.4926 \\ \cline{2-7}
& Co-Location Rate & 61.71\% & 99.24\% & 77.89\% & 45.10\% & 0.5712 \\ \cline{2-7}
\hhline{|~|------}
& Adamic-Adar Meas. & 59.13\% & 99.14\% & 57.52\% & 65.48\% & 0.6124 \\ \cline{2-7}
& Jaccard's Coeff. & 58.95\% & 99.22\% & 71.27\% & 45.73\% & 0.5571 \\ \cline{2-7}
& Katz Measure & 61.00\% & 99.28\% & 74.90\% & 50.09\% & 0.6003 \\ \cline{2-7}
\hline
\end{tabular}}
\end{table}
The results obtained confirm that the Katz measure is a sound choice
for performing prediction through the tensor-based technique: in all
considered scenarios, it achieves better performance than the other
link prediction metrics proposed in the literature.
The prediction made through the Katz measure achieves better
performance than those of the mobility homophily metrics and of
Thakur et al.'s similarity. Indeed, our framework quantifies the
similarity of nodes based on their encounters and geographical
closeness. In other words, the proposed prediction method accounts
for contacts (or closeness) at (around) the same location and at the
same time. Meanwhile, the mobility homophily metrics and Thakur et
al.'s similarity are defined as association metrics. Hence, they
measure the degree of similarity of the behaviors of two mobile nodes
without necessarily checking whether they are in the same location at
the same time. Regarding the comparison with the other network
proximity metrics, the Katz measure better quantifies the behavior
similarity of two nodes as it takes into consideration only the paths
that separate them. Meanwhile, the Adamic-Adar metric and the
Jaccard's coefficient depend, respectively, on the degree of common
neighbors between two nodes and on the size of the intersection of
the neighbor sets of two nodes. These latter metrics express
similarity based on the common neighbors of two nodes but do not
check whether a link is occurring between them. This criterion highly
influences the value of the Katz measure and makes it more precise.
\section{Conclusion}
Human mobility patterns are mostly driven by social intentions and
correlations appear in the behavior of people forming the network.
These similarities highly govern the mobility of people and thus
directly influence the structure of the network. Knowledge about
the behavior of nodes greatly helps in improving the design of
communication protocols. Intuitively, two nodes that follow the same
social intentions over time promote the occurrence of a link in the
immediate future.
In this paper, we presented a link prediction technique inspired by
data mining and exploited it in the context of human-centered
wireless networks. Through the link prediction evaluation, we
obtained relevant results that attest to the efficiency of our
contribution and agree with findings reported in the literature.
Good link prediction offers the possibility to further improve
opportunistic packet forwarding strategies by making better decisions
in order to enhance the delivery rate or to limit latency. Therefore,
it will be relevant to supply routing protocols with prediction
information and to assess the contribution of our approach to
enhancing network performance, especially as we propose an efficient
distributed version of the prediction method. The proposed technique
also motivates future enhancements such as a more precise tracking of
the behavior of nodes and a more efficient similarity computation.
\bibliographystyle{ieeetran}
\section{Introduction}
Recent lattice simulations have led to many new theoretical insights into the dynamics of low-energy gluon modes~\cite{Juge:1997nc,Juge:2002br,Takahashi:2004rw,Luscher:2004ib,Cornwall:2004gi,Greensite:2001nx}. In the quenched approximation, aspects of confinement emerge from studies of the gluonic spectrum produced by static color sources. In the following we will focus on the pure gluon dynamics
(the role of dynamical quarks in the screening of confining gluonic strings has recently been studied in~\cite{Bali:2005fu}).
Lattice studies indicate that for relative separations between two color sources $R \lesssim 1.7$~fm, the ground state energy obeys Casimir scaling~\cite{Bali:2000un,Juge:2004xr}. This means that the spectrum of gluon modes generated by static color sources depends on the dimension of the color representation of the sources rather than on the N-ality of the representation (which is related to the transformation property of a representation with respect to the group center)~\cite{Greensite:2003bk}. For example, for two sources in the fundamental representation, lattice computations show, as expected, that the energy grows linearly with the separation between the sources. However, also for sources in the adjoint representation (with N-ality of zero), the lattice produces a linearly rising potential, even though for vanishing N-ality screening is expected to saturate the potential. Screening comes from the production of gluon pairs
(glueballs) which vanishes in the limit of a large number of colors. Casimir scaling is thus telling us that there is, at least in the energy range relevant for hadronic phenomenology, a simple, universal (source independent) description of the confining string.
The lattice spectrum of gluonic modes generated by sources in the fundamental representation, {\it i.e.}, a static quark-antiquark (${q{\bar q}}$) pair, has been extensively studied in~\cite{Juge:1997nc,Juge:2002br}. The ground state energy, which as a function of the ${q{\bar q}}$ separation is well represented by the Cornell, ``Coulomb+linear'' potential, and the spectrum of excited gluonic modes have been computed. The excited gluonic modes lead to excited adiabatic potentials between the sources in the sense of the Born-Oppenheimer approximation, with the quark sources and the gluonic field corresponding to the slow and fast degrees of freedom, respectively~\cite{Juge:1999ie,Juge:1999aw}. The gluonic wave functional of these modes can be classified analogously to that of a diatomic molecule. The good quantum numbers are: $\Lambda=0\,(\Sigma), 1\,(\Pi), 2\,(\Delta), \cdots$, which give the total gluon spin projection along the ${q{\bar q}}$ axis, $PC=+1(g), -1(u)$, which correspond to the product of gluon parity and charge conjugation, and $Y=\pm 1$, which describes parity under reflection in a plane containing the ${q{\bar q}}$ axis. The ground state corresponds to $\Lambda^{Y}_{PC}=\Sigma^+_g$. The lattice calculations show that the first excited state has the $\Pi_u$ symmetry (for $\Lambda\ne 0$, $Y=\pm 1$ states are degenerate) and thus has $PC= -1$.
The lattice spectrum of gluonic excitations is well reproduced by the bag model~\cite{Hasenfratz:1980jv,Juge:1997nd}. The crucial feature of the model that makes this possible is the boundary condition, which requires the longitudinal component of the chromo-electric and the transverse components of the chromo-magnetic field of the free gluon inside the cavity to vanish at the boundary of the bag. This results in the TE mode with pseudo-vector, $J^{P,C} = 1^{+,-}$, quantum numbers having the lowest energy, which leads to the $\Pi_u$ adiabatic potential being the lightest among the excited gluonic states in the ${q{\bar q}}$ system. In another model, the non-relativistic flux tube model~\cite{Isgur:1984bm}, the $PC=-1$ quantum numbers of the low-lying gluon mode result from associating a negative parity and a positive charge conjugation to the lowest order transverse phonon (unlike that of a vector field). This also results in the $\Pi_u$ quantum numbers for the first excited adiabatic potential. Finally, in a QCD based quasi-particle picture the intrinsic quantum numbers of the quasi-gluons are, $J^{P,C} = 1^{-,-}$, that of a transverse vector field~\cite{Horn:1977rq,Swanson:1998kx}. If the first excited adiabatic potential between ${q{\bar q}}$ sources is associated with a single quasi-gluon excitation and this quasi-gluon interacts via normal two-body forces with the sources, then one expects the quasi-gluon ground state wave function to be in an orbital $S$-wave, which, in turn, leads to the net $PC=+1$ and the $\Pi_g$ symmetry for this state. This is in contradiction with the lattice data, as noted in~\cite{Swanson:1998kx}. The bag model and the flux tube model give the right ordering of the spectrum of low-lying gluonic excitations, even though they are based on very different microscopic representations of the gluonic degrees of freedom.
There are indications from lattice simulations of various gauge models that the adiabatic potentials approach that of the flux tube, or better, a string-like spectrum, for ${q{\bar q}}$ separations larger than $R \sim 3$~fm~\cite{Juge:2003ge}; however, the situation for QCD is far less clear~\cite{Juge:2002br}. In particular, for large separations between the sources the splitting between nearby string excitations is expected to fall off as $\propto \pi/R$. The lattice results indicate, however, that the spacing between the adiabatic potentials is close to constant. At distances $R \lesssim 0.2$~fm the flux tube model becomes inadequate while QCD is expected to become applicable. For example, as $R \to 0$, the Coulomb potential between the quark and the anti-quark in the color octet is repulsive, and, indeed, the results of lattice calculations do seem to show that trend. The bag model attempts to combine the perturbative and the long range, collective dynamics by using a free field theory inside a spherically symmetric bag and deforming the bag to a string-like shape as the separation between the sources increases. A self-consistent treatment of bag and gluon degrees of freedom is, however, lacking.
Another model which aims at relating the string-like excitations at large ${q{\bar q}}$ separations to
the QCD gluon degrees of freedom is the gluon chain model~\cite{Thorn:1979gu,Greensite:2001nx} and versions thereof~\cite{Szczepaniak:1996tk}. The model is based on the assumption that as the separation between the sources increases pairs of constituent gluons are created to screen the charges in such a way that the Fock space is dominated by a state with a number of constituent gluons, which grows with the ${q{\bar q}}$ separation. Recently, support for the gluon chain model came from lattice studies of the Coulomb energy of the ${q{\bar q}}$ pair~\cite{Greensite:2004ke,Greensite:2003xf}. As shown in ~\cite{Zwanziger:2002sh}, at fixed $R$, Coulomb energy bounds the true exact (from Wilson line) energy from above. The Coulomb energy is defined as the expectation value of the Coulomb potential in a state obtained by adding the ${q{\bar q}}$ pair to the exact ground state of the vacuum, {\it i.e.}, without taking into account vacuum polarization by the sources. The addition of sources changes the vacuum wave functional by creating constituent gluons as described by the gluon chain model.
In this paper we discuss the structure of the ${q{\bar q}}$ state in terms of physical, transverse gluon degrees of freedom. In particular, we focus on the importance of constituent gluons in describing the excited adiabatic potentials. For simplicity and to make our arguments clearer, we concentrate on excited adiabatic potentials of a single, $\Lambda^Y_{PC} = \Sigma^+_g$, symmetry. A description of the complete spectrum of excited potentials will be presented in a following paper. Our main finding here is that a description based on a single (or a few) constituent gluon excitation is valid up to $R \sim \mbox{a few fm}$, with the gluon chain setting in, most likely, at asymptotically large ${q{\bar q}}$ separations. Consequently, we show how the gluon chain model can emerge in the basis of the transverse gluon Fock space.
In Section~II we review the Coulomb gauge formulation of QCD and introduce the Fock space of quasi-gluons. In Section~III we review the computation of the ground state and the excited $\Sigma^+_g$ potentials. There we also discuss the role of multi-particle Fock sectors and a schematic model of the gluon chain. A summary and outlook are given in Section~IV.
\section{Coulomb gauge QCD}
In the Coulomb gauge gluons have only physical degrees of freedom. For all color components $a=1,\cdots,N^2_C-1$ the gauge condition, $\bm{\nabla}\cdot{\bf A}^a({\bf x}) = 0$, eliminates the longitudinal degrees of freedom and the scalar potential, $A^{0,a}$, becomes dependent on the transverse components through Gauss's law~\cite{Christ:1980ku}. The canonical momenta, $\bm{\Pi}^a({\bf x})$, satisfy $[\Pi^a_i({\bf x}),A_j^b({\bf y})] = -i \delta_{ab}\delta^{ij}_T(\bm{\nabla})\delta^3({\bf x}-{\bf y})$ where $\delta^{ij}_T(\bm{\nabla}) = \delta^{ij} - \nabla^i\nabla^j/\bm{\nabla}^2$; in the Schr{\"o}dinger representation, the momenta are given by $ \bm{\Pi}^a({\bf x}) = -i\delta/\delta {\bf A}^a({\bf x})$. More discussion of the topological properties of the fundamental domain of the gauge variables can be found in~\cite{vanBaal:1997gu}. The full Yang-Mills (YM) Hamiltonian with gluons coupled to static ${q{\bar q}}$ sources in the fundamental representation is given by,
\begin{equation}
H = H_0 + H_{Qg} + H_{QQ},
\end{equation}
where $H_0$ is the YM Hamiltonian containing the kinetic term and interactions between transverse gluons. The explicit form of the YM Hamiltonian, $H_0$, can be found in ~\cite{Christ:1980ku}. The coupling between ${q{\bar q}}$ sources and the transverse gluons, $H_{Qg}$, is explicitly given by,
\begin{equation}
H_{Qg} = \int d{\bf x} d{\bf y} \rho_Q^a({\bf x}) K[{\bf A}]({\bf x},a;{\bf y},b) \rho^b({\bf y}),
\end{equation}
where $\rho_Q = h^{\dag}({\bf x})T^a h({\bf x}) - \eta^{\dag}({\bf x})T^{*a} \eta({\bf x})$ is the color density of the sources with $h$ and $\eta$ representing the static quark and anti-quark annihilation operators, respectively; $\rho = -f_{abc} {\cal J}^{-1} \bm{\Pi}^b({\bf x}) {\cal J} \cdot {\bf A}^c({\bf x})$ is the gluon charge density operator and $K$ is the non-abelian Coulomb kernel,
\begin{equation}
K[{\bf A}]({\bf x},a;{\bf y},b) = {{g^2}\over {4\pi}} \int d{\bf z} {{ (1 - \lambda)^{-2}({\bf x},a;{\bf z},b) } \over {|{\bf z} - {\bf y}|}} ,
\end{equation}
with the matrix elements of $\lambda$ given by $(1-\lambda)({\bf x},a;{\bf y},b) =\delta_{ab}\delta^3({\bf x}-{\bf y}) - g f_{acb} \bm{\nabla}_y (1/|{\bf x} - {\bf y}|) {\bf A}^c({\bf y})$. The Faddeev-Popov (FP) operator, $(1-\lambda)$, determines the curvature of the gauge manifold specified by the FP determinant, ${\cal J} = \det(1-\lambda)$.
Finally, the interaction between the heavy sources, $H_{QQ}$, is given by
\begin{equation}
H_{QQ} = {1\over 2} \int d{\bf x} d{\bf y} \rho_Q^a({\bf x}) K[{\bf A}]({\bf x},a;{\bf y},b) \rho^b_Q({\bf y}).
\end{equation}
The Coulomb kernel is a complicated function of the transverse gluon field. When $H_{Qg}$ and $H_{QQ}$ are expanded in powers of the coupling constant, $g$, they lead to an infinite series of terms proportional to powers of ${\bf A}$. The FP determinant also introduces additional interactions. All these interactions involving gluons in the Coulomb potential are responsible for binding constituent gluons to the quark sources.
\subsection{ Fock space basis}
The problem at hand is to find the spectrum of $H$ for a system containing a ${q{\bar q}}$ pair,
\begin{equation}
H |R,N \rangle = E_N(R) |R,N\rangle. \label{qq}
\end{equation}
In the Schr{\"o}dinger representation, the eigenstates can be written as,
\begin{equation}
|R,N\rangle = \int D[{\bf A}^a(x)] {\cal J}[A] \Psi^N_{ij}[{\bf A}^a({\bf x})] | {R\over 2}{\hat{\bf z}},i,-{R\over 2}\hat{{\bf z}},j;{\bf A} \rangle,
\label{qqwf}
\end{equation}
with
\begin{equation}
| {R\over 2} {\hat{\bf z}},i,-{R\over 2} \hat{{\bf z}},j;{\bf A} \rangle = h^{\dag}_i({R\over 2} \hat{{\bf z}})
\eta^{\dag}_j(-{R\over 2}\hat{\bf z}) |{\bf A} \rangle
\end{equation}
describing a state containing a quark at position $R{\hat{\bf z}}/2$ and color $i$ and an anti-quark at position $-R\hat{{\bf z}}/2$ and color $j$. We keep quark spin degrees of freedom implicit since, for static quarks, the Hamiltonian is spin-independent. The eigenenergies, $E_N(R)$, correspond to the adiabatic potentials discussed in Section~I with $N$ labeling consecutive excitations and spin-parity, $\Lambda^Y_{PC}$, quantum numbers of the gluons in the static ${q{\bar q}}$ state.
The vacuum without sources, denoted by $|0\rangle$, in the Schr{\"o}dinger representation is given by,
\begin{equation}
|0\rangle = \int D[{\bf A}^a({\bf x})]{\cal J}[A] \Psi_0[{\bf A}^a({\bf x})] |{\bf A} \rangle , \label{0}
\end{equation}
and satisfies $ H_0 |0\rangle = E_{vac}|0\rangle$.
The eigenenergies, $E_N(R)$, in Eq.~(\ref{qq}) contain contributions from disconnected diagrams which sum up to the energy of the vacuum, $E_{vac}$. In the following, we will focus on the difference, $E_N(R) \to E_N(R) - E_{vac}$, and ignore disconnected contributions in the matrix elements of $H$.
Instead of using the Schr{\"o}dinger representation, it is convenient to introduce a Fock space for quasi-particle-like gluons~\cite{Reinhardt:2004mm,Feuchter:2004mk,Szczepaniak:2003ve,Szczepaniak:2001rg}. These are defined in the standard way, as excitations built from a gaussian (harmonic oscillator) ground state. Regardless of the choice of parameters of such a gaussian ground state, the set of all quasi-particle excitations forms a complete basis.
We will optimize this basis by minimizing the expectation value of the Hamiltonian in such a gaussian ground state. We will then use this variational state to represent the physical vacuum and use it in place of $|0\rangle$ and $\Psi_0[{\bf A}]$. The unnormalized variational wave functional is given by, $ \Psi_0[{\bf A}] = \langle {\bf A}| 0\rangle$,
\begin{equation}
\Psi_0[{\bf A}] = \exp\left( -{1\over 2} \int {{d {\bf k}} \over {(2\pi)^3}} {\bf A}^a({\bf k}) \omega(|{\bf k}|) {\bf A}^a(-{\bf k}) \right),
\label{psi0}
\end{equation}
where ${\bf A}^a({\bf k}) = \int d{\bf x} \exp(-i{\bf k}\cdot {\bf x}) {\bf A}^a({\bf x})$ and the gap function, $\omega(|{\bf k}|)$, plays the role of the variational parameter. The computation of the expectation value of $H_0$ in the $\Psi_0$ given above,
was described in Ref.~\cite{Szczepaniak:2001rg}. In the following we will summarize the main points.
The expectation value of $\langle 0|H_0|0\rangle$ can be written in terms of
functional integrals over $D[{\bf A}^a({\bf x})]$ with the measure ${\cal J}[A]$. The functionals to be integrated are products of $H_0 = H_0(\bm{\Pi},{\bf A})$ and the wave functional $|\Psi_0[{\bf A}]|^2$. For example the contribution to $\langle 0 |H_0 | 0 \rangle$ from the $g=0$ component of the transverse chromo-magnetic field density, $\langle B^2 \rangle = \langle 0| \int d{\bf x} [{\bf B}^a({\bf x})]^2|0 \rangle/\langle 0|0\rangle$, is given by,
\begin{eqnarray}
\langle B^2 \rangle & = &
\int D{\bf A} {\cal J}[{\bf A}] [ \bm{\nabla} \times {\bf A}^a({\bf x})]^2 {{ \Psi_0^2[{\bf A}] }\over {\langle 0|0\rangle}}
\nonumber \\
& = & {\cal N} \int {{d{\bf k}}\over {(2\pi)^3}} { {{\bf k}^2} \over {2\Omega(|{\bf k}|)}},
\end{eqnarray}
where ${\cal N} = 2 \times (N^2_c - 1) \times {\cal V}$ counts the total (infinite) number of gluon
degrees of freedom in volume ${\cal V}$ and $\Omega$ is the instantaneous gluon-gluon correlation function,
\begin{equation}
\int D{\bf A} {\cal J}[{\bf A}] {\bf A}^a({\bf p}) {\bf A}^b({\bf q})
{{ \Psi_0^2[{\bf A}]} \over {\langle 0|0\rangle}} = {{\delta_{ab} } \over {2\Omega(|{\bf p}|)} }
(2\pi)^3 \delta({\bf p}+{\bf q}). \label{Omega}
\end{equation}
In the limit ${\cal J} \to 1$, $\Omega$ becomes equal to the gap function $\omega$~\cite{Reinhardt:2004mm,Szczepaniak:2003ve}. Evaluation of functional integrals over non-gaussian distributions, like the one in Eq.~(\ref{Omega}) for ${\cal J} \ne 1$, can be performed to leading order in $N_C$ by summing all planar diagrams. This produces a set of coupled integral (Dyson) equations for functions like $\Omega(p)$. The Dyson equations contain, in general, UV divergences. To illustrate how renormalization takes place, let us consider the expectation value of the inverse of the FP operator,
\begin{equation}
\delta_{ab} d({\bf x} - {\bf y}) \equiv \int D{\bf A} {\cal J}[{\bf A}] g (1-\lambda)^{-1}({\bf x},a;{\bf y},b)
{{ \Psi_0^2[{\bf A}] } \over {\langle 0|0\rangle} }. \label{d}
\end{equation}
From translational invariance of the vacuum, it follows that the integral depends on ${\bf x} - {\bf y}$ and the Dyson equation for $d$ becomes simple in momentum space. Defining,
$d({\bf x} - {\bf y}) \to d({\bf p}) = \int d{\bf x} \exp(-i{\bf p}\cdot {\bf x}) d({\bf x})$, one obtains ($p = |{\bf p}|$, {\it etc.}),
\begin{equation}
{1\over {d(p)}} = {1\over {g(\Lambda)}} - {{N_C}\over 2} \int^\Lambda {{d{\bf q}} \over {(2\pi)^3}} {{(1 - \hat{\bf q}\cdot\hat{\bf p})} \over {\Omega(|{\bf p}-{\bf q}|) {\bf q}^2 } } d(q). \label{eqd}
\end{equation}
As expected from asymptotic freedom, for large momenta, $\Omega(k)/k \to 1
+ O(\log k)$; the integral in Eq.~(\ref{eqd}) becomes divergent as $q \to \infty$, and we need to introduce a UV cutoff $\Lambda$. The cutoff dependence can, however, be removed by renormalizing the coupling constant, $g \to g(\Lambda)$. The final equation for $d(p)$, renormalized at a finite scale $\mu$, is obtained by subtracting from Eq.~(\ref{eqd}) the same equation evaluated at $p = \mu$.
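Written out explicitly (our rendering of the subtraction just described), the resulting cutoff-independent equation reads,
\begin{equation}
{1\over {d(p)}} = {1\over {d(\mu)}} - {{N_C}\over 2} \int {{d{\bf q}} \over {(2\pi)^3}} \left[ {{(1 - \hat{\bf q}\cdot\hat{\bf p})} \over {\Omega(|{\bf p}-{\bf q}|) {\bf q}^2 } } - (p \to \mu) \right] d(q),
\end{equation}
in which the bare coupling $1/g(\Lambda)$ has dropped out and the remaining integral is UV finite.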
One also finds that the expectation value of $(1-\lambda)^{-2}$, which enters the Coulomb kernel, $K[{\bf A}]$, requires a multiplicative renormalization. We define the Coulomb potential as,
\begin{equation}
\int D{\bf A} {\cal J}[{\bf A}] K[{\bf A}]({\bf x},a;{\bf y},b) {{\Psi_0^2[{\bf A}] } \over {\langle 0|0\rangle}}
\equiv - \delta_{ab} V_C({\bf x} - {\bf y}), \label{V}
\end{equation}
and introduce a function $f$ by,
\begin{equation}
V_C(k) = \int d{\bf x} e^{ i{\bf k}\cdot{\bf x}} V_C({\bf x}) \equiv - { {f(k) d^2(k) } \over {k^2} },\label{Vk}
\end{equation}
This function then satisfies a renormalized Dyson equation,
\begin{eqnarray}
f(k) & = & f(\mu) + \nonumber \\
& + & \left[ {{N_C}\over 2} \int {{d{\bf q}} \over {(2\pi)^3}} {{(1 - \hat{\bf q}\cdot\hat{\bf k})d^2(q)f(q)} \over {\Omega(|{\bf k}-{\bf q}|) {\bf q}^2 } } - (k \to \mu) \right]. \nonumber \\
\end{eqnarray}
Finally, the bare gap equation, $\delta [\langle 0|H_0|0\rangle/\langle 0|0\rangle]/\delta \omega(k) = 0$, contains a quadratic divergence proportional to $\Lambda^2$. This divergence is eliminated by a single relevant operator from the regularized Hamiltonian, the gluon mass term, which is proportional to $\Lambda^2 \int d{\bf x}\, {\bf A}^a({\bf x}) \cdot {\bf A}^a({\bf x})$. The renormalized gap equation determines the gap function $\omega(k)$, and it depends on a single dimensional subtraction constant, $\omega(\mu)$.
The functions described above completely specify the variational ground state, and the complete Fock space basis can be constructed by applying to this variational ground state quasi-particle creation operators, $\alpha^{a,\dag}({\bf k},\lambda)$, defined by,
\begin{eqnarray}
{\bf A}^a({\bf x}) & = & \int {{d{\bf k}} \over {(2\pi)^3}} {1\over {\sqrt{2\omega(k)}}} \left[ \alpha^a({\bf k},\lambda) \bm{\epsilon}({\bf k},\lambda) \right. \nonumber \\
& & \left. + \alpha^{a,\dag}(-{\bf k},\lambda) \bm{\epsilon}(-{\bf k},\lambda) \right ] e^{i{\bf k}\cdot{\bf x}}, \nonumber \\
\bm{\Pi}^a({\bf x}) & = & -i \int {{d{\bf k}} \over {(2\pi)^3}} \sqrt{ {\omega(k)} \over 2}
\left[ \alpha^a({\bf k},\lambda) \bm{\epsilon}({\bf k},\lambda) \right. \nonumber \\
& & \left. - \alpha^{a,\dag}(-{\bf k},\lambda) \bm{\epsilon}(-{\bf k},\lambda) \right ] e^{i{\bf k}\cdot{\bf x}}. \nonumber \\
\end{eqnarray}
Here $\bm{\epsilon}$ denotes the helicity vectors with $\lambda =\pm 1$.
This Fock space and the corresponding Hamiltonian matrix elements depend on four parameters (renormalization constants): $\omega(\mu)$, $d(\mu)$, $f(\mu)$ and one constant needed to regulate the FP determinant. The FP determinant enters into the Dyson equation for $\Omega(k)$.
In principle, if the entire Fock space is used in building the Hamiltonian matrix and no approximations are made in diagonalization, the physical spectrum will depend on the single parameter of the theory, {\it i.e.}, the renormalized coupling (or $d(\mu)$, {\it cf.} Eq.~(\ref{eqd})). In practical calculations, the Fock space is truncated and this may introduce other renormalization constants. The goodness of a particular basis, for example the one built on the state given in Eq.~(\ref{psi0}), can be assessed by studying the sensitivity of physical observables to these residual parameters.
For example, if we define the running coupling as $\alpha(k) \equiv f(k)d^2(k)$, so that $V_C(k) = -{{4\pi \alpha(k)} \over {k^2}}$, we find that for large $k$, $\alpha(k) \propto (1/\log^{c}(k))[1 + O(1/\log(k))]$ with $c \sim 1.5$~\cite{Szczepaniak:2001rg}, while in full QCD the leading log has power $c=1$. The discrepancy arises because we used the single Fock state, $|0\rangle$, in the definition of $V_C$ (and $\alpha$). This omits, for example, the contribution from the two-gluon Fock state, as shown in
Fig.~\ref{llogfig}. This two gluon intermediate state clearly impacts the short range behavior of the Coulomb interaction, but, as discussed in ~\cite{Szczepaniak:2001rg}, it is not expected to affect the long range part (partially because the low momentum gluons develop a large constituent mass). Similarly, in ~\cite{Szczepaniak:2003ve}, the role of the FP determinant has been analyzed, and it was shown that it does not make a quantitative difference leading to $\Omega(p) \sim \omega(p)$.
This is in contrast, however, to the results of~\cite{Feuchter:2004mk}. We think this discrepancy originates from the difference in the boundary conditions, which in~\cite{Feuchter:2004mk}
lead to $f(k)=1$. This makes it possible for the gap equation to have a solution for $\omega(k)$ which rises at low momenta. If $f(k) \ne 1$ and, in particular, if $f(k)$ grows as $k\to 0$, which is necessary if $V_C(R)$ is to grow linearly for large $R$, we find that $\omega(k)$ has to be finite as $k\to 0$. A more quantitative comparison is currently being pursued. We also note that lattice simulations~\cite{Langfeld:2004qs} are consistent with the results of~\cite{Szczepaniak:2003ve,Szczepaniak:2001rg}.
In the following, we will thus set ${\cal J} =1 $, which makes $\Omega=\omega$, and use the solutions for $f(k)$, $d(k)$ and $\omega(k)$ found in Ref.~\cite{Szczepaniak:2001rg}.
Finally, we want to stress that the Coulomb potential, defined in Eqs.~(\ref{V}),~(\ref{Vk}), gives the energy expectation value in the state obtained by adding the ${q{\bar q}}$ pair to the vacuum of Eqs.~(\ref{0}),~(\ref{psi0}), {\it i.e.},
\begin{equation}
\langle {q{\bar q}} |H|{q{\bar q}}\rangle = C_F V_C(R) - C_F V_C(0),
\end{equation}
with $C_F V_C(0)$ originating from self-energies,
and
\begin{eqnarray}
|{q{\bar q}} \rangle &= & |R,N=0,\Sigma^+_g\rangle \nonumber \\
& = & {1\over {\sqrt{N_C}}} h^{\dag}\left({R\over 2} \hat{\bf z}\right)\eta^{\dag}\left(-{R\over 2}\hat{\bf z}\right) {{| 0\rangle } \over {\langle 0|0\rangle}}.
\label{qq0}
\end{eqnarray}
The state $|R,N=0,\Sigma^+_g\rangle$ refers to the ground state $(N=0)$ with spin-parity quantum numbers $\Lambda^Y_{PC} = 0(\Sigma)^+_g$. The energy $C_F V_C(R)$ should be distinguished from $E_0(R)$ in Eq.~(\ref{qq}). The latter is evaluated using the {\it true} ground state of the ${q{\bar q}}$ system, while the former is evaluated in a state obtained by simply adding a ${q{\bar q}}$ pair to the vacuum. Since a ${q{\bar q}}$ pair is expected to polarize the gluon distribution, these two states are different. Furthermore, in this work, the $|{q{\bar q}}\rangle$ state in Eq.~(\ref{qq0}) is obtained by adding the ${q{\bar q}}$ pair to the {\it variational} state of the vacuum and not to the {\it true} vacuum state in the absence of sources.
\begin{figure}
\includegraphics[width=3in]{Fig1.eps}
\caption{\label{llogfig} The $O(g^4)$, one loop diagrams contributing to the leading log term in the expansion of the $\beta$-function in YM theory with heavy sources. a) anti-screening dressing of the Coulomb potential by transverse gluons, b) Debye screening of the Coulomb potential by transverse gluons. The Coulomb potential is represented by the dashed line, and sources by thick lines. }
\end{figure}
\subsection{Fitting the Coulomb Potential}
As discussed above, the Coulomb energy, $C_F V_C(R)$, represents the expectation value of the Hamiltonian in a particular ${q{\bar q}}$ state (given in Eq.~(\ref{qq0})), which is not the same as the true eigenstate of the Hamiltonian for the ${q{\bar q}}$ system as defined in Eq.~(\ref{qq}). The latter has energy $E_0(R)$.
According to ~\cite{Zwanziger:2002sh}, $C_F V_C(R) > E_0(R)$, and numerical results in ~\cite{Greensite:2004ke} further indicate that for large $R$, $C_F V_C(R) \sim \sigma_C R $ and $E_0(R) \sim \sigma R$, with the Coulomb string tension, $\sigma_C$, being approximately three times larger than $\sigma$. In~\cite{Szczepaniak:2001rg}, however, we fitted $d(\mu)$, $f(\mu)$ and $\omega(\mu)$ so that $C_F V_C(R) \to E_0(R)$, and a number of phenomenological studies have been successful with those parameters ~\cite{Adler:1984ri,Szczepaniak:1995cw,Ligterink:2003hd,Szczepaniak:2003mr}. It should be noted, however, that the results from ~\cite{Greensite:2004ke} for $C_F V_C(R)$ may not directly apply to our analysis, since the ${q{\bar q}}$ state used here to define $V_C(R)$ may be different from the one used in lattice computations of $V_C(R)$.
Guided by the successes of the phenomenological applications of our approach we
proceed with fitting $C_F V_C(R)$ to $E_0(R)$. It is clear, however,
that since the ${q{\bar q}}$ state of Eq.~(\ref{qq0}) is a variational state, $C_F V_C(R)$ should be greater than $E_0(R)$~\cite{Zwanziger:2002sh}. We will nevertheless proceed with the approximation $C_F V_C(R) = E_0(R)$ and examine the consequences afterwards.
In ~\cite{Szczepaniak:2001rg}, we have found that the numerical solutions to the set of coupled Dyson equations for $d(k)$, $f(k)$ and $\omega(k)$ can be well represented by,
\begin{equation}
d(k) = \left\{ \begin{array}{cc} 3.5 \left( {m_g \over k} \right)^{0.48} & \mbox{ for } k < m_g \\
3.5 \left( {{ \log(2.41) } \over {\log(k^2/m_g^2 + 1.41)} } \right)^{0.4} & \mbox{ for } k > m_g ,
\end{array}\right.
\end{equation}
\begin{equation}
f(k) = \left\{ \begin{array}{cc} 1.41 \left( {m_g \over k} \right)^{0.97} & \mbox{ for } k < m_g \\
1.41 \left( {{ \log(1.82) } \over {\log(k^2/m_g^2 + 0.82) } } \right)^{0.62} & \mbox{ for } k > m_g ,
\end{array} \right.
\end{equation}
\begin{equation}
\omega(k) = \left\{ \begin{array}{cc} m_g & \mbox{ for } k < m_g \\
k & \mbox{ for } k > m_g . \end{array} \right.
\end{equation}
The parameter $m_g = 650 \mbox{ MeV}$ effectively represents the constituent gluon mass. It should be noticed, however, that $\omega(k)$ is the gap function and not the single quasi-particle energy. This energy, denoted by $E_g(k)$, is given by,
\begin{eqnarray}
&& E_g(k) = \omega(k)\left[ 1
- {N_C \over 2} \int {{d{\bf q}} \over {(2\pi)^3}} V_C({\bf k}- {\bf q}) {{1 + \hat{\bf k}\cdot\hat{\bf q}} \over {2\omega(q)}}
\right]. \nonumber \\
& & \label{eg}
\end{eqnarray}
Since $V_C(k) = -f(k)d^2(k)/k^2$, which for small $k$ diverges faster than $1/k^3$, the integral in Eq.~(\ref{eg}) is divergent. This IR divergence is a manifestation of the long range nature of the confining Coulomb potential, which removes single, colored excitations from the spectrum. As will be explicit in the examples studied later, residual interactions between colored constituents in color neutral states cancel such divergences and result in a finite spectrum for color neutral states. In the following analysis, we will also need the Coulomb potential in coordinate space. We find it practical to approximate the numerical Fourier transform of $V_C({\bf k}-{\bf q})$ by,
\begin{figure}
\includegraphics[width=2.6in,angle=270]{sigma.eps}
\caption{\label{coul} Comparison between $V(R) = C_F V_C(R)$ from Eq.~(\ref{vcr}) (solid line) and
$V(R) = E_0(R)$ lattice data from ~\cite{Juge:1997nc} ($r_0 = (450 \mbox{ MeV})^{-1}$).}
\end{figure}
\begin{equation}
V_C(r) = b r - {\alpha \over {r \log^c[ (r \Lambda)^{-1} + a]} }, \label{vcr}
\end{equation}
with $b= 0.20 \mbox{ GeV}^2$, $\alpha=0.83$, $\Lambda = 0.63\mbox{ GeV}$, $a=1.24$ and $c=1.51$.
Comparison between $C_F V_C(R)$ and $E_0(R)$ obtained from lattice computations is shown in Fig.~\ref{coul}.
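For reference, all of the parametrizations above are elementary to code. A minimal Python sketch (function names are ours; momenta in GeV, distances in GeV$^{-1}$) reads:
\begin{verbatim}
import numpy as np

m_g = 0.650  # GeV, effective constituent gluon mass

def d_of_k(k):
    """Piecewise fit for d(k), valid for k > 0."""
    k = np.asarray(k, dtype=float)
    return np.piecewise(k, [k < m_g, k >= m_g],
        [lambda k: 3.5*(m_g/k)**0.48,
         lambda k: 3.5*(np.log(2.41)/np.log(k**2/m_g**2 + 1.41))**0.4])

def f_of_k(k):
    """Piecewise fit for f(k), valid for k > 0."""
    k = np.asarray(k, dtype=float)
    return np.piecewise(k, [k < m_g, k >= m_g],
        [lambda k: 1.41*(m_g/k)**0.97,
         lambda k: 1.41*(np.log(1.82)/np.log(k**2/m_g**2 + 0.82))**0.62])

def omega_of_k(k):
    """Gap function: omega(k) = max(k, m_g)."""
    return np.maximum(np.asarray(k, dtype=float), m_g)

def V_C_momentum(k):
    """Momentum-space Coulomb potential, V_C(k) = -f(k) d^2(k)/k^2."""
    return -f_of_k(k)*d_of_k(k)**2/k**2

def V_C_position(r, b=0.20, alpha=0.83, Lam=0.63, a=1.24, c=1.51):
    """Coordinate-space fit of Eq. (vcr); r in GeV^-1, V in GeV."""
    return b*r - alpha/(r*np.log(1.0/(r*Lam) + a)**c)
\end{verbatim}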
We now proceed to the main subject of this paper, namely to investigate the difference between $E_0(R)$ computed using the single Fock space approximation to the ${q{\bar q}}$ state ({\it i.e} without modification of the gluon distribution) and the solution of Eq.~(\ref{qq}) which accounts for modifications in the gluon distribution in the vacuum in presence of ${q{\bar q}}$ sources. We will also compute the first excited potential with the $\Sigma^+_g$ symmetry.
\section{Adiabatic potentials}
Diagonalizing the full Hamiltonian in the Fock space described above requires, in principle, an infinite number of states. In the zeroth-order approximation, $E_0(R) = C_FV_C(R)$, a single state with no
quasi-gluons was used. At vanishing ${q{\bar q}}$ separation, we expect the wave function of the system to be identical to that of the vacuum, and the approximation becomes exact. One also expects that the average number of quasi-gluon excitations in the full wave functional of Eq.~(\ref{qqwf}) increases with the ${q{\bar q}}$ separation. We thus start by examining the approximation based on adding a single quasi-gluon and truncate the Hamiltonian matrix to a space containing
$|{q{\bar q}}\rangle $ and $|{q{\bar q}} g\rangle$ states,
\begin{eqnarray}
& & \left[\begin{array}{cc} \langle {q{\bar q}} | H | {q{\bar q}} \rangle & \langle {q{\bar q}}|H| {q{\bar q}} g\rangle \\
\langle {q{\bar q}} g|H |{q{\bar q}} \rangle & \langle {q{\bar q}} g|H | {q{\bar q}} g \rangle \end{array} \right]
\left[ \begin{array}{c} |{q{\bar q}} \rangle \\ |{q{\bar q}} g \rangle \end{array} \right] \nonumber \\
& & = E_N(R) \left[\begin{array}{c} |{q{\bar q}} \rangle \\ |{q{\bar q}} g \rangle \end{array} \right] . \label{mix}
\end{eqnarray}
The $|{q{\bar q}}\rangle $ state is given in Eq.~(\ref{qq0}). In the quasi-particle representation the state with a single gluon and $\Lambda^Y_{PC}$ quantum numbers, $|{q{\bar q}} g\rangle = |R,N,\Lambda^Y_{PC}\rangle$, is given by,
\begin{eqnarray}
& & |R,N,\Lambda^Y_{PC} \rangle = \sum_{j_g,\xi,\mu,\lambda} \sqrt{ {2j_g+1}\over {8\pi C_F N_C }}
\int {{d{\bf k}} \over {(2\pi)^3}} \nonumber \\
& & \left[ D^{j_g*}_{\Lambda\mu}(\hat{\bf k})
+ \eta_Y D^{j_g*}_{-\Lambda\mu}(\hat{\bf k}) \right]
\psi^{j_g}_{N}(k) \chi^{\xi}_{\mu\lambda} |R,{\bf k},\lambda\rangle, \nonumber \\
\label{wfg}
\end{eqnarray}
for $\Lambda \ne 0$ and,
\begin{eqnarray}
& & |R,N,0^{PC}_Y \rangle = \sum_{j_g,\xi,\mu,\lambda} \sqrt{ {2j_g+1}\over {4\pi C_F N_C }}
\nonumber \\
& \times & \int {{d{\bf k}} \over {(2\pi)^3}} D^{j_g*}_{0\mu}(\hat{\bf k})
\psi^{j_g}_{N}(k) \chi^{\xi}_{\mu\lambda} |R,{\bf k},\lambda\rangle, \nonumber \\
\label{wfg1}
\end{eqnarray}
for $\Lambda=0$ ($\Sigma$ potentials), where
\begin{equation}
|R,{\bf k},\lambda\rangle = h^{\dag}\left({R\over 2}\hat{\bf z}\right) \alpha^{\dag}({\bf k},\lambda) \eta^{\dag}\left(-{R\over 2} \hat{\bf z}\right) {{|0 \rangle} \over {\langle 0|0\rangle} },
\end{equation}
and $\alpha^{\dag} = \alpha^{a,\dag} T^a $. In Eqs.~(\ref{wfg}),(\ref{wfg1}), $j_g$ is the total angular momentum of the quasi-gluon. For vanishing separation between the quarks, the system has full rotational symmetry, and $j_g$ becomes a good quantum number. In general, the system is invariant only under rotations around the ${q{\bar q}}$ axis. It is only the projection of the total angular momentum, $\Lambda$, that is conserved, and states with different $j_g$ become mixed. The wave function $\chi^\xi_{\mu\lambda}$ represents the two possibilities for the spin-orbit coupling of given parity ($j_g = L_g$ or $j_g = L_g \pm 1$). It is given by
$\delta_{\mu\lambda}/\sqrt{2}$ for $\xi= 1$ and $\lambda\delta_{\mu\lambda}/\sqrt{2}$ for $\xi=-1$, corresponding to TM (natural parity) and TE (unnatural parity) gluons, respectively.
Finally $\eta_Y$ determines the behavior under reflections in the plane containing the ${q{\bar q}}$ axis, {\it i.e.}, the $Y$-parity.
The radial wave functions, $\psi^{j_g}_N(k)$, labeled by the radial quantum number $N$ and $j_g$, are obtained by diagonalizing the full Hamiltonian in the Fock space spanned by the ${q{\bar q}} g$ states alone, {\it i.e} by solving the equation,
\begin{equation}
P H P |R,N, \Lambda^Y_{PC} \rangle = V^{qqg}_{C,N}(R) |R,N,\Lambda^Y_{PC} \rangle. \label{hgbare}
\end{equation}
Here $P$ projects on the $|{q{\bar q}} g\rangle$ state and $V^{qqg}_{C,N}(R)$ are the {\it bare}
energies of the excited adiabatic potentials, {\it i.e.}, without mixing between states with a different number of quasi-gluons. Analogously, $C_F V_C(R)$ is the {\it bare}
ground state energy $E_0(R)$. The matrix elements of $PHP$ are shown in Fig.~\ref{vqqg} and given explicitly in the Appendix.
\begin{figure}
\includegraphics[width=3in]{Fig2-new.eps}
\caption{\label{vqqg} Matrix elements, $\langle R, {\bf k}',\lambda' | H| R,{\bf k}, \lambda \rangle$. Diagrams a) and b) represent gluon and quark self energies, respectively. Diagrams c) and d) represent the Coulomb interaction, $V_C$, between the gluon and one of the quarks and between the two quarks, respectively. In the bottom row, diagrams e) and f) describe matrix elements of the interaction term resulting from the expansion of the Coulomb kernel $K[A]$ in up to one power of the gluon field. }
\end{figure}
The mixing matrix element,
\begin{equation}
\langle {q{\bar q}}| H |{q{\bar q}} g \rangle = V^{qq,qqg}_{C,N}(R),
\end{equation}
depends on the number of bare ${q{\bar q}} g$ states from Eq.~(\ref{hgbare}) kept, $N = 1,\dots, N_{max}$, and on the separation between the sources, $R$. It is shown in Fig.~\ref{vmix} and given in the Appendix.
\begin{figure}
\includegraphics[width=2.5in]{Fig3-new.eps}
\caption{\label{vmix} The matrix element $\langle R |H|R,{\bf k},\lambda\rangle$. It originates from the expansion of the Coulomb kernel $K[A]$ to first order in $A$.}
\end{figure}
The $(N_{max} +1) \times (N_{max} + 1)$ Hamiltonian matrix shown in Eq.~(\ref{mix})
is explicitly given by,
\begin{equation}
H_{NM} = \left\{ \begin{array}{ll} C_F\left[ V_C(R) - V_C(0) \right] & N=M=0 \\
V^{qq,qqg}_{C,M}(R) & N=0,\ M=1,\dots,N_{max} \\
{V^{qq,qqg}}^*_{C,N}(R) & N=1,\dots,N_{max},\ M= 0 \\
V^{qqg}_{C,N}(R)\delta_{NM} & N,M = 1,\dots,N_{max}
\end{array} \right. . \label{mixmat}
\end{equation}
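Schematically, solving the truncated problem amounts to assembling and diagonalizing a small Hermitian matrix. In the sketch below, \texttt{VC}, \texttt{V\_qqg} and \texttt{V\_mix} stand for the numerical solutions for $V_C$, $V^{qqg}_{C,N}$ and $V^{qq,qqg}_{C,N}$ described in the text; the helper is ours and purely illustrative.
\begin{verbatim}
import numpy as np

CF = 4.0/3.0  # fundamental Casimir for N_C = 3

def adiabatic_levels(R, VC, V_qqg, V_mix, N_max):
    """Assemble the (N_max+1) x (N_max+1) matrix of Eq. (mixmat)
    and return its eigenvalues E_N(R)."""
    H = np.zeros((N_max + 1, N_max + 1), dtype=complex)
    H[0, 0] = CF*(VC(R) - VC(0.0))      # bare q-qbar (Coulomb) energy
    for N in range(1, N_max + 1):
        H[0, N] = V_mix(N, R)           # q-qbar <-> q-qbar-g mixing
        H[N, 0] = np.conj(V_mix(N, R))
        H[N, N] = V_qqg(N, R)           # bare excited potentials
    return np.linalg.eigvalsh(H)
\end{verbatim}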
\section{ Numerical Results}
In terms of $\xi$ and $\eta_Y$, the $PC$ and $Y$ quantum numbers of the gluonic field are given by,
\begin{equation}
PC = \xi (-1)^{j_g + 1}, \; Y= \left\{ \begin{array}{c} \xi \eta_Y (-1)^\Lambda \mbox{ for } \Lambda \ne 0
\\ \xi \mbox{ for } \Lambda = 0 \end{array} \right. . \label{PC}
\end{equation}
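These assignments are trivial to tabulate; a small helper (ours, for bookkeeping only) reads:
\begin{verbatim}
def quantum_numbers(xi, eta_Y, j_g, Lam):
    """PC and Y of the gluonic field, Eq. (PC); xi, eta_Y = +/-1."""
    PC = xi*(-1)**(j_g + 1)
    Y = xi if Lam == 0 else xi*eta_Y*(-1)**Lam
    return PC, Y
\end{verbatim}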
In the following, we will concentrate on the states with $\Lambda=0$, $PC=+1$ (corresponding to the label $g$) and $Y=+$, {\it i.e.}, of $\Sigma^+_g$ symmetry, since it is only these states that mix the bare $|{q{\bar q}}\rangle$ state with the states with a non-vanishing number of gluons.
For the $\Sigma^+_g$ potentials, the wave function contains TM gluons, $\xi = 1$, of natural parity and $PC=+1$, which implies $j_g = 1,3,\cdots$. As discussed above, for $R\to0$, $j_g$ becomes a good quantum number, and we have verified numerically that for $R$ in the range considered here the contributions from $j_g = 3$ and higher are at a level of a few percent. Diagonalization of the Hamiltonian in the ${q{\bar q}} g$ subspace alone leads to the $V^{{q{\bar q}} g}_{C,N}(R)$ potential, which is shown in Fig.~\ref{sigma_no_mix_new} (upper solid line) for the lowest excitation with $N=1$. The dashed line is the result of using the one- and two-body interactions depicted in Figs.~\ref{vqqg}a-d ($H_{\ref{vqqg}a}$--$H_{\ref{vqqg}d}$ in Eq.~(\ref{htot})). These are also the interactions that were used in~\cite{Swanson:1998kx}. When the three-body interactions shown in Fig.~\ref{vqqg}e,f are added, the energy moves up. This discrepancy is then also a measure of how far our variational, truncated Fock space expansion is from the true excited state. The three-body potential is expected to be responsible for reversing the ordering between the $\Pi_u$ and $\Pi_g$ surfaces; with only one- and two-body interactions, the $\Pi_g$ potential has lower energy than $\Pi_u$, which is inconsistent with the lattice data~\cite{Swanson:1998kx}. In the Appendix, we also show that the three-body term is suppressed at large separations, and thus the net potential approaches the Casimir scaling $C_F b R$ limit as $R \to \infty$. Finally, we note that when the Fock space is restricted to single quasi-gluon excitations, the diagrams in Fig.~\ref{vqqg} and Fig.~\ref{vmix} represent the complete set of Hamiltonian matrix elements.
\begin{figure}
\includegraphics[width=2.6in,angle=270]{sigma_no_mix_new.eps}
\caption{\label{sigma_no_mix_new} The first excited adiabatic potential, $V^{qqg}_{C,N=1}(R)$, obtained by diagonalizing the Hamiltonian in the ${q{\bar q}} g$ subspace alone: the upper solid line is the full result, while the dashed line uses only the one- and two-body interactions of Figs.~\ref{vqqg}a-d.}
\end{figure}
The general features of higher excitations, $V^{qqg}_{C,N}(R)$ for $N>1$, follow from the structure of the Hamiltonian, which represents a one-body Schr{\" o}dinger equation for the single quasi-gluon wave function in momentum space. The kinetic energy corresponds to the one-body diagram in Fig.~\ref{vqqg}a and the potential to the diagrams in Fig.~\ref{vqqg}c,e,f.
The diagrams in Figs.~\ref{vqqg}b,d give an $R$-dependent shift describing the ${q{\bar q}}$ self-interactions and ${q{\bar q}}$ octet potential. The IR singularity in the gluon kinetic energy, $E_g$, is canceled by the collinear singularity of the two-body potential, the ${q{\bar q}}$ self energy and ${q{\bar q}}$ octet potential.
On average, the gluon kinetic energy contributes an effective quasi-gluon mass of the order of $m_g$. Quasi-gluons are thus heavy, and adding Fock space components with more gluons, $|{q{\bar q}}, 2g\rangle, \cdots, |{q{\bar q}}, n_g g \rangle$, for small $R$ will result in higher adiabatic potentials ($N=2,3,\cdots$) that are split from the first excited state by $\sim n_g m_g$. At large $R$, the two-body Coulomb potential dominates and, together with the Coulomb energies of the pair-wise gluon interactions, results in the Casimir scaling (we will discuss this in more detail in the following section). In the absence of mixing between Fock space components the number of quasi-particle gluons in the $|{q{\bar q}}, n_g g\rangle$ state is conserved, and they directly map onto the tower of excited adiabatic potentials.
\begin{figure}
\includegraphics[width=2.6in,angle=270]{sigma_with_mix_new-2.eps}
\caption{\label{sigma_with_mix_new-2} The $\Sigma^+_g$ potential without mixing (dashed line, the same as the solid line in Fig.~\ref{sigma_no_mix_new}) and with the mixing of Eq.~(\ref{mixmat}) included (solid line).}
\end{figure}
We will now address the effects of mixing between the $|{q{\bar q}}\rangle$ and $|{q{\bar q}} g\rangle$ states. The only non-vanishing diagram is shown in Fig.~\ref{vmix}. Since, as discussed above, the $V^{qqg}_{C,N}(R)$ potentials are split from the first excited state, $N=1$, by at least $m_g$, the mixing matrix in Eq.~(\ref{mixmat}) saturates quickly, and in practice only the $N=1$ state is relevant. In Fig.~\ref{sigma_with_mix_new-2} the dashed line corresponds to the energy of the ground state without mixing (the same as the solid line in Fig.~\ref{sigma_no_mix_new}), and the solid line shows the effect of mixing, which turns out to be very small. Numerically, we find that the full ground state,
\begin{equation}
|{q{\bar q}}, N=0\rangle = Z^0_{qq}(R) |{q{\bar q}}\rangle + Z^0_{qqg}(R) |{q{\bar q}} g\rangle,
\end{equation}
is still dominated by the $|{q{\bar q}} \rangle$ component and the first excited
state,
\begin{equation}
|{q{\bar q}}, N=1\rangle = Z^1_{qq}(R) |{q{\bar q}}\rangle + Z^1_{qqg}(R) |{q{\bar q}} g\rangle,
\end{equation}
by the $|{q{\bar q}} g \rangle$ component. The probabilities of each are shown in Fig.~\ref{prob}.
We see that, for distances between sources as large as $5\mbox{ fm}$, the
admixture of the gluon component is only of the order of $10\%$.
\begin{figure}
\includegraphics[width=2.6in,angle=270]{Zqq.eps}
\caption{\label{prob} Normalized probability of finding the bare $|{q{\bar q}}\rangle$ state in the full ground state $|{q{\bar q}},N=0\rangle$ (which is also equal to the probability of finding the $|{q{\bar q}} g\rangle$ state in the first excited state $|{q{\bar q}}, N=1\rangle$).}
\end{figure}
This small admixture of the $|{q{\bar q}} g\rangle$ component in the full ground state is correlated with the small shift in the $\Sigma^+_g$ surface shown in
Fig.~\ref{sigma_with_mix_new-2} and would justify using the exact ground state $\Sigma^+_g$ energy to constrain the Coulomb potential $V_C$. This, however, contradicts the results of Ref.~\cite{Greensite:2003xf}, where the effect of mixing must be large, since it results in a factor of three in the ratio of the unmixed to mixed string tensions. One possible explanation is that there is an accidental suppression of the mixing interaction matrix element for the two states considered here, $|{q{\bar q}}\rangle$ and $|{q{\bar q}} g\rangle$. Inspecting Eq.~(\ref{hvmix}), we note that due to the gradient coupling of the transverse gluon to the Coulomb line, the coupling vanishes both at small and at large $R$. In contrast, a two-gluon state can be coupled to $|{q{\bar q}}\rangle$ either through the Coulomb-line mediated interaction shown in Fig.~\ref{higher}a or through the quark-density--gluon-density interaction shown in Fig.~\ref{higher}b. As discussed in the Appendix, at large distances the former is suppressed, while it is easy to show that the latter is proportional to $C_F V_C({\bf x} - {\bf R}/2) + C_F V_C({\bf x} + {\bf R}/2)$ (once the gluon spin is neglected) and persists at large distances. In the large-$N_C$ limit $C_F = N_C/2 (1 + O(1/N_C))$. It is therefore possible that the $|{q{\bar q}}, 2g\rangle$ component of the full $|{q{\bar q}}, N=0\rangle$ state is actually more important than the $|{q{\bar q}} g\rangle$ one. We will investigate this further in Section~\ref{chainsec}.
\begin{figure}
\includegraphics[width=2.5in]{Fig9.eps}
\caption{\label{higher} Matrix elements $\langle qq|H|{q{\bar q}},2g\rangle$ leading to the $|{q{\bar q}}, 2g\rangle$ component in the ground state $\Sigma^+_g$ potential, a) interaction mediated via the Coulomb line coupled to quark sources, b) interaction between a single quark and the gluon charge density. }
\end{figure}
\subsection{\label{chainsec} Multi-gluons states and the chain model}
As shown above, the quasi-gluon degrees of freedom defined in terms of a variational quasi-particle vacuum provide an attractive basis for describing gluon excitations, in the sense that for source separations relevant for phenomenology the color singlet states can be effectively classified in terms of the number of quasi-gluons. This basis, however, does overestimate the energies (as expected in a variational approach), and this fact, together with lessons from other models, can give us guidance on how to improve the variational state of the ${q{\bar q}}$ system. As the separation between quarks increases, one expects the average number of gluons in the energy eigenstate to increase. This is because it becomes energetically favorable to add a constituent gluon which effectively screens the ${q{\bar q}}$ charge. Furthermore, the spatial distribution of these gluons is expected to be concentrated near the ${q{\bar q}}$ axis in order for the energy distribution to be that of a flux tube, as measured on the lattice. An improvement in the ansatz wave functional will therefore result in a more complicated Fock space decomposition with a large number of quasi-gluons present, even at relatively small separations between the sources. In this section we will first discuss how multi-gluon states indeed become important, even in the case of the quasi-gluon basis used here. We then compare with expectations from other models and discuss possible directions for improving the quasi-gluon basis.
\begin{figure}
\includegraphics[width=2.5in]{Fig10.eps}
\caption{\label{multi} Typical diagrams contributing to mixing between $n$ and $m$ gluon states.
Vertical dots represent any number of gluons not affected by the interaction. a) mixing mediated by the Coulomb potential, b) same as in a) with rearrangement of gluons, c) long-range Coulomb interaction between gluon charge densities, d) same as in c) but with the charge density of the quark sources. }
\end{figure}
As discussed in the Appendix, at large separations the interactions between multi-gluon Fock states mediated by the Coulomb potential, shown in Fig.~\ref{multi}a,b, require all but two gluons to be at relative separations smaller than $R$. Furthermore, rearrangement of gluons leads to $1/N_C$ suppression. For large $R$, the largest diagonal matrix elements of $H$ are the ones corresponding to the long-range Coulomb interaction between charge densities, as shown in Fig.~\ref{multi}c,d. To leading order in $N_C$, the gluons should be path-ordered along the ${q{\bar q}}$ axis. For simplicity, we will neglect the gluon spin and use a single wave function to represent a state with an arbitrary number of gluons. We write
\begin{widetext}
\begin{equation}
|{q{\bar q}}, n_g g\rangle = N_{n_g} \int_{-R/2}^{R/2} dx_{n_g} \alpha^{\dag}(x_{n_g}) \int_{-R/2}^{x_{n_g}} dx_{n_g-1} \alpha^{\dag}(x_{n_g-1}) \cdots \int_{-R/2}^{x_3} dx_2 \alpha^{\dag}(x_2) \int_{-R/2}^{x_2} dx_1 \alpha^{\dag}(x_1) |0\rangle \label{chain}
\end{equation}
\end{widetext}
where we have also forced all gluons to be on the ${q{\bar q}}$ axis. The factor
$N_{n_g} = (n_g !/C_F N_C R)^{1/2}$ is, to leading order in $N_C$, fixed by the normalization condition, $\langle {q{\bar q}}, n_g g| {q{\bar q}}, n'_g g\rangle = \delta_{n_g,n'_g}$, where we used $[\alpha(x_i),\alpha^{\dag}(x_j)] = \delta_{ij}$. In this basis, the diagonal matrix elements of the Hamiltonian ({\it cf.} Fig.~\ref{multi}c,d) add up to
\begin{equation}
H_{n_g n'_g } = \langle {q{\bar q}}, n_g g| H |{q{\bar q}}, n'_g g \rangle = C_F V_C(R)\delta_{n_g,n'_g} \to C_F b R \,\delta_{n_g,n'_g} .\label{mod-1}
\end{equation}
The off-diagonal matrix elements are dominated by interactions between color charges, {\it e.g.}, similar to the ones in Fig.~\ref{higher}b, but with the upper vertex attached to a gluon line. With the approximations leading to Eq.~(\ref{chain}), a vertex which either annihilates or creates two gluons results in a vanishing matrix element, since in our basis no two gluons are at the same point. Smearing each gluon in coordinate space by a distance of the order of $1/m_g$ will give a finite matrix element which, just like the diagonal matrix elements, grows linearly with $R$,
\begin{eqnarray}
H_{n_g n'_g } & = & \langle {q{\bar q}}, n_g g| H |{q{\bar q}}, n'_g g \rangle \nonumber \\
& \to & \gamma C_F b R \left[ \delta_{n_g,n'_g+2} + \delta_{n_g,n'_g-2} \right] , \label{mod-2}
\end{eqnarray}
where $\gamma$ is a parameter representing the effect of the smearing,
and we expect $|\gamma| < O(1)$. In addition, each gluon has a kinetic energy of the order of $m_g$, so $H_{nn} \to H_{nn} + n m_g$. The model Hamiltonian can be easily diagonalized numerically, and in Fig.~\ref{model} we plot the energy of the ground state and of the first excited state as a function of $R$. It is clear that in the absence of the accidental spin suppression which, as discussed earlier, takes place for the $\langle {q{\bar q}} | H |{q{\bar q}} g\rangle$ mixing matrix element, the effect of the mixing with two and more gluons can produce shifts in the lowest adiabatic potential and decrease the Coulomb string tension by as much as a factor of $\sim 2 $ at $R\sim 3r_0 = 2.6 \mbox{ fm}$. Finally, in Fig.~\ref{ng} we plot the average number of gluons in the ground state of the model Hamiltonian. As expected, the number of gluons grows with $R$; however, still only a small number of quasi-gluons contributes to the ground state at these separations, which again provides justification for the quasi-gluon description.
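The model Hamiltonian is a banded matrix coupling $n_g$ and $n_g \pm 2$ gluon states and can be diagonalized in a few lines. The Python sketch below is illustrative only; in particular, the value chosen for the smearing parameter $\gamma$ is a guess.
\begin{verbatim}
import numpy as np

def chain_model(R, b=0.21, m_g=0.65, gamma=0.5, n_max=40):
    """Chain model of Eqs. (mod-1)-(mod-2): diagonal C_F b R + n m_g,
    off-diagonal gamma C_F b R coupling n and n+2 gluon states.
    Returns the two lowest energies and <n_g> in the ground state."""
    CF = 4.0/3.0
    n = np.arange(n_max + 1)
    H = np.diag(CF*b*R + n*m_g)
    for i in range(n_max - 1):
        H[i, i + 2] = H[i + 2, i] = gamma*CF*b*R
    vals, vecs = np.linalg.eigh(H)
    n_avg = float(np.sum(np.abs(vecs[:, 0])**2 * n))
    return vals[0], vals[1], n_avg
\end{verbatim}
Scanning such a helper over $R$ should reproduce the qualitative behavior plotted in Figs.~\ref{model} and \ref{ng}.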
\begin{figure}
\includegraphics[width=2.6in,angle=270]{model.eps}
\caption{\label{model} Shift in the ground state $\Sigma^+_g$ energy due to coupling with multi-gluon states of the model Hamiltonian of Eqs.~(\ref{mod-1}),~(\ref{mod-2}). The maximum number of states was taken to be $n_{g, max} = 40$. The other parameters are $b=0.21\mbox{ GeV}^2$ and $m_g = 0.65 \mbox{ GeV}$. }
\end{figure}
\begin{figure}
\includegraphics[width=2.6in,angle=270]{ng.eps}
\caption{\label{ng} Average number of quasi-gluons in the full eigenstate of the model Hamiltonian of Eqs.~(\ref{mod-1}),~(\ref{mod-2}). }
\end{figure}
\section{Summary and Outlook}
We computed the ground state energy and the energy of the first excited ${q{\bar q}}$ potential with the $\Lambda^Y_{PC} = \Sigma^+_g$ symmetry. We used the quasi-particle basis of constituent gluons based on a variational ground state to build the Fock space representation. We found that the ${q{\bar q}}$ state can be well approximated by a superposition of the bare ${q{\bar q}}$ state and a few quasi-gluons. The exact computation in which the bare ${q{\bar q}}$ state mixes with a state containing a single quasi-gluon leads to a negligible change in the energy of the bare (Coulomb) ${q{\bar q}}$ system. We found that this is due to an accidentally small mixing matrix element of the Coulomb gauge Hamiltonian. We have discussed the general properties of the mixing matrix between states with an arbitrary number of gluons, and using a simple approximation, we have found good agreement with the lattice data. The lattice data indicates that there is a change in slope between the Coulomb and the true, Wilson potential~\cite{Greensite:2003xf}.
Based on the representation used here, we interpret this in terms of quasi-gluon excitations rather than in terms of flux-tube-like degrees of freedom. We also note that lattice data on the splitting between several excited ${q{\bar q}}$ states does not unambiguously show a string-like behavior for separations as large as $2-3\mbox{ fm}$~\cite{Juge:2002br}. In fact, the splittings are almost constant,
although why the lattice data has such a behavior is not completely understood (including a possible systematic error)~\cite{colin}. This data is thus consistent with the quasi-gluon picture, where each quasi-particle adds kinetic energy of the order of the effective gluon mass.
The full excitation spectrum as well as distribution of energy density is currently being investigated.
\section{Acknowledgment}
I would like to thank J.~Greensite, C.~Morningstar and E.~Swanson,
for several discussions and C.~Halkyard for reading the manuscript.
This work was supported in part by the US
Department of Energy grant under contract
DE-FG0287ER40365.
\section{Appendix}
Here we list matrix elements of the Hamiltonian in the basis spanned by $|{q{\bar q}}\rangle
= |R,N=0,\Lambda^Y_{PC}\rangle$ and $|{q{\bar q}} g\rangle = |R,N\ne0, \Lambda^Y_{PC} \rangle$.
The $|{q{\bar q}} \rangle$ state
exists only in the $\Lambda^Y_{PC}= \Sigma^+_g$ configuration. Thus mixing matrix elements are non-vanishing for $|{q{\bar q}} g\rangle$ with $\Sigma^+_g$ spin-parity quantum numbers only.
For each $j_g$, the wave functions $\psi_{N,j_g}(k)$ are expanded in a complete
orthonormal basis of functions $\phi_{m,j_g}(k)$
\begin{equation}
\psi_{N,j_g}(k) = \sum_{m=1}^{m_{max}} a^{m}_{N,j_g} \phi_{m,j_g}(k)
\end{equation}
with normalization, $\int {{dk k^2} \over {(2\pi)^3}} \phi^*_{m',j'_g}(k) \phi_{m,j_g}(k) = \delta_{m',m}\delta_{j'_g,j_g}$. The expansion coefficients are computed by diagonalizing the $(m_{max}j_{g,max}) \times (m_{max}j_{g,max})$ matrix, $\tilde{H}_{m'j_g';m,j_g}$, obtained by evaluating the diagrams in Fig.~\ref{vqqg},
\begin{equation}
\tilde{H}_3 = H_{3a} + H_{3b} + \cdots + H_{3f}, \label{htot}
\end{equation}
evaluated in the basis of functions $\phi_{m,j_g}$. In numerical computations, for each $j_g$ we used a momentum grid as the basis functions.
The numerical results presented were for a single $j_g$ determined from
Eq.~(\ref{PC}) after verifying that increasing $j_g$ changes the computed spectrum by at most a few percent.
For arbitrary $\Lambda^Y_{PC}$ the Hamiltonian matrix elements are given by,
\begin{equation}
H_{3a} = {{\delta_{j'_g,j_g}}\over 2} \int {{dk k^2} \over {(2\pi)^3}} \phi^*_{m',j_g}(k) E_g(k) \phi_{m,j_g}(k),
\end{equation}
\begin{eqnarray}
H_{3b} & = & - C_F V_C(0)\delta_{m',m}\delta_{j'_g,j_g} \nonumber \\
& = & - 4\pi C_F \int {{dk k^2} \over {(2\pi)^3}} V_C(k) \delta_{m',m}\delta_{j'_g,j_g} ,
\end{eqnarray}
with
\begin{equation}
V_C(k) = - {{d^2(k)f(k)} \over {k^2}},
\end{equation}
\begin{widetext}
\begin{eqnarray}
H_{3c} & = &
{N_C \over 2} \sum_{\lambda,\lambda',\sigma,\sigma',\mu} \int {{d{\bf q}} \over {(2\pi)^3}}
\int {{d{\bf k}} \over {(2\pi)^3}}
\phi^*_{m',j'_g}(q) \phi_{m,j_g}(k)\int d{\bf x} \left[ V_C({\bf x} - { {\bf R}\over 2}) + V_C ({\bf x} + {{\bf R}\over 2}) \right]
e^{ i {\bf x} \cdot ({\bf k}-{\bf q}) } \nonumber \\
& \times & { \sqrt{ {(2j'_g + 1)(2j_g + 1)}} \over {16\pi} }
\left[ D^{j'_g}_{\Lambda,\sigma'}(\hat{\bf q})
D^{j_g,*}_{\Lambda\lambda'}(\hat{\bf k}) \chi^{\xi'}_{\sigma\sigma'} \chi^{\xi}_{\lambda\lambda'}
D^{1*}_{\mu\sigma}(\hat{\bf q}) D^1_{\mu\lambda}(\hat{\bf k})
+ \eta_Y\eta_Y' (\Lambda \to -\Lambda) \right] \left(\sqrt{ { \omega(k)} \over {\omega(q)}}
+ \sqrt{ {\omega(q)} \over {\omega(k)}} \right)\nonumber \\
& = & {N_C \over 2}
\sum_{\lambda,\lambda',\sigma,\sigma',\mu} \int {{d{\bf q}} \over {(2\pi)^3}} \int {{d{\bf k}} \over {(2\pi)^3}}
\phi^*_{m',j'_g}(q) V_C({\bf k} - {\bf q}) \left[ e^{ - i {{\bf R} \over 2} \cdot ({\bf k}-{\bf q}) }
+ e^{ i {{\bf R} \over 2} \cdot ({\bf k}-{\bf q}) } \right] \phi_{m,j_g}(k)
\nonumber \\
& \times &
{\sqrt{(2j'_g + 1)(2j_g + 1)} \over {16\pi} } \left[ D^{j'_g}_{\Lambda,\sigma'}(\hat{\bf q})
D^{j_g,*}_{\Lambda\lambda'}(\hat{\bf k}) \chi^{\xi'}_{\sigma\sigma'} \chi^{\xi}_{\lambda\lambda'}
D^{1*}_{\mu\sigma}(\hat{\bf q}) D^1_{\mu\lambda}(\hat{\bf k})
+ \eta_Y\eta_Y' (\Lambda \to -\Lambda) \right] \left(\sqrt{ { \omega(k)} \over {\omega(q)}}
+ \sqrt{ {\omega(q)} \over {\omega(k)}} \right), \nonumber \\
\end{eqnarray}
\end{widetext}
with $\eta_Y$ and $\xi$ related to $j_g$ and $\Lambda^Y_{PC}$ through Eq.~(\ref{PC}).
\begin{eqnarray}
H_{3d} & = & - {1\over {2N_C}} V_C(R)\delta_{m',m}\delta_{j'_g,j_g} \nonumber \\
& = & - 4\pi {1\over {2N_C}} \int {{dk k^2} \over {(2\pi)^3}} V_C(k) j_0(R k)\delta_{m',m}\delta_{j'_g,j_g}
\nonumber \\
\end{eqnarray}
\begin{widetext}
\begin{eqnarray}
H_{3e} & = & \sum \int {{d{\bf k}} \over {(2\pi)^3}} {{d{\bf p}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}}
{{\phi^*_{m',j'_g}(p)}\over {\sqrt{2\omega(p)}}} {{ \phi_{m,j_g}(k) }\over {\sqrt{2\omega(k)}}}
\nonumber \\
& \times & \int d{\bf x} d{\bf y} d{\bf z} \left[ K({\bf x} - {{\bf R}\over 2}, {\bf z} + {\bf y} - {\bf x}, {\bf y} + {{\bf R}\over 2}) + ({\bf R} \to - {\bf R}) \right]
e^{i{\bf x} \cdot {\bf k}} e^{i {\bf z} \cdot {\bf q}} e^{-i {\bf y} \cdot {\bf p}} \nonumber \\
& \times & { \sqrt{ {(2j'_g + 1)(2j_g + 1)}} \over {8\pi} } \left[
D^{j'_g}_{\Lambda,\sigma'}(\hat{\bf p}) D^{1,*}_{\mu,\sigma}(\hat{\bf p}) \chi^{\xi'}_{\sigma'\sigma}
D^1_{\mu,0}(\hat{\bf q})
D^{j_g,*}_{\Lambda,\lambda'}(\hat{\bf k}) D^1_{\nu,\lambda}(\hat{\bf k}) \chi^{\xi}_{\lambda'\lambda}
D^{1,*}_{\nu,0}(\hat{\bf q}) + \eta_Y\eta'_Y (\Lambda \to -\Lambda) \right] \nonumber \\
& = &
\sum \int {{d{\bf k}} \over {(2\pi)^3}} {{d{\bf p}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}}
{{ \phi^*_{m',j'_g}(p)}\over {\sqrt{2\omega(p)}}} {{\phi_{m,j_g}(k) }\over {\sqrt{2\omega(k)}}} K({\bf k}+{\bf q},{\bf q},{\bf p}+{\bf q})
\left[ e^{i {{\bf R}\over 2} \cdot ( {\bf k} + {\bf p} + 2{\bf q})} + ({\bf R} \to -{\bf R}) \right]
\nonumber \\
& \times & { \sqrt{ {(2j'_g + 1)(2j_g + 1)}} \over {8\pi} } \left[
D^{j'_g}_{\Lambda,\sigma'}(\hat{\bf p}) D^{1,*}_{\mu,\sigma}(\hat{\bf p}) \chi^{\xi'}_{\sigma'\sigma}
D^1_{\mu,0}(\hat{\bf q})
D^{j_g,*}_{\Lambda,\lambda'}(\hat{\bf k}) D^1_{\nu,\lambda}(\hat{\bf k}) \chi^{\xi}_{\lambda'\lambda}
D^{1,*}_{\nu,0}(\hat{\bf q}) + \eta_Y\eta'_Y (\Lambda \to -\Lambda) \right] \nonumber \\
\end{eqnarray}
\end{widetext}
where the sum is over $\mu,\nu,\lambda,\lambda',\sigma,\sigma'$ and the kernel is given by
\begin{equation}
K({\bf x},{\bf z},{\bf y})
= \int {{d{\bf k}} \over {(2\pi)^3}} {{d{\bf p}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}}
K(k,q,p) e^{i{\bf x}\cdot {\bf k}} e^{i{\bf y} \cdot {\bf p}} e^{i{\bf z}\cdot {\bf q}}
\end{equation}
and
\begin{eqnarray}
K(k,q,p) & = & q^2 {{N_C^2 }\over {4}} {{d(k) d(p) d(q)} \over {k^2q^2 p^2}}
\nonumber \\
& \times & \left[ d(k) f(k) + d(p) f(p) + d(q) f(q) \right] \label{kk}
\end{eqnarray}
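In terms of the fitted $d$ and $f$ (for instance, the helper functions sketched earlier), the kernel of Eq.~(\ref{kk}) can be coded directly; the arguments are the momentum magnitudes.
\begin{verbatim}
def K_kernel(k, q, p, NC=3.0):
    """Three-Coulomb-line kernel of Eq. (kk), built from d_of_k, f_of_k."""
    pref = q**2 * NC**2/4.0 * d_of_k(k)*d_of_k(p)*d_of_k(q)/(k**2*q**2*p**2)
    return pref*(d_of_k(k)*f_of_k(k) + d_of_k(p)*f_of_k(p)
                 + d_of_k(q)*f_of_k(q))
\end{verbatim}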
Finally,
\begin{widetext}
\begin{eqnarray}
H_{3f} & = & \sum \int {{d{\bf k}} \over {(2\pi)^3}} {{d{\bf p}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}}
{{\phi^*_{m',j'_g}(p)}\over {\sqrt{2\omega(p)}}} {{ \phi_{m,j_g}(k)}\over {\sqrt{2\omega(k)}}}
\nonumber \\
& \times & \int d{\bf x} d{\bf y} d{\bf z} \left[ K({\bf x} - {{\bf R}\over 2}, {\bf z} + {\bf y} - {\bf x}, {\bf y} - {{\bf R}\over 2}) + ({\bf R} \to - {\bf R}) \right]
e^{i{\bf x} \cdot {\bf k}} e^{i {\bf z} \cdot {\bf q}} e^{-i {\bf y} \cdot {\bf p}} \nonumber \\
& \times & { \sqrt{ {(2j'_g + 1)(2j_g + 1)}} \over {8\pi} } \left[
D^{j'_g}_{\Lambda,\sigma'}(\hat{\bf p}) D^{1,*}_{\mu,\sigma}(\hat{\bf p}) \chi^{\xi'}_{\sigma'\sigma}
D^1_{\mu,0}(\hat{\bf q})
D^{j_g,*}_{\Lambda,\lambda'}(\hat{\bf k}) D^1_{\nu,\lambda}(\hat{\bf k}) \chi^{\xi}_{\lambda'\lambda}
D^{1,*}_{\nu,0}(\hat{\bf q}) + \eta_Y\eta'_Y (\Lambda \to -\Lambda) \right] \nonumber \\
& = &
\sum \int {{d{\bf k}} \over {(2\pi)^3}} {{d{\bf p}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}}
{{ \phi^*_{m',j'_g}(p)}\over {\sqrt{2\omega(p)}}}{{ \phi_{m,j_g}(k)}\over {\sqrt{2\omega(k)}}} K({\bf k}+{\bf q},{\bf q},{\bf p}+{\bf q})
\left[ e^{i {{\bf R}\over 2} \cdot ( {\bf k} - {\bf p} )} + ({\bf R} \to -{\bf R}) \right]
\nonumber \\
& \times & { \sqrt{ {(2j'_g + 1)(2j_g + 1)}} \over {8\pi} } \left[
D^{j'_g}_{\Lambda,\sigma'}(\hat{\bf p}) D^{1,*}_{\mu,\sigma}(\hat{\bf p}) \chi^{\xi'}_{\sigma'\sigma}
D^1_{\mu,0}(\hat{\bf q})
D^{j_g,*}_{\Lambda,\lambda'}(\hat{\bf k}) D^1_{\nu,\lambda}(\hat{\bf k}) \chi^{\xi}_{\lambda'\lambda}
D^{1,*}_{\nu,0}(\hat{\bf q}) + \eta_Y\eta'_Y (\Lambda \to -\Lambda) \right] \nonumber \\
\end{eqnarray}
\end{widetext}
In the large-$N_C$ limit, $g\sqrt{N_C} \sim O(1)$, and since $d(k) \propto g$ and $f \sim O(1)$, all of the terms above are $O(1)$ except $H_{3d}$ (which corresponds to a non-planar diagram, see Fig.~\ref{vqqg}). The products of the three factors, $d(p_i)/p_i^2$, originate from the three dressed Coulomb lines in diagrams $e$ and $f$ in
Fig.~\ref{vqqg}, and the three factors of $f$ come from the three possibilities to insert the $\nabla^2$ operator on these three lines. The derivative coupling between transverse and Coulomb gluons leads to the extra $q^2$ factor in the numerator in Eq.~(\ref{kk}). In coordinate space this implies that $K({\bf x},{\bf z},{\bf y})$ is short-ranged in ${\bf z}$. Furthermore, in each of the three terms in Eq.~(\ref{kk}) there is only one combination, $d^2(p_i)f(p_i)/p_i^2$, which in momentum space leads to the confining potential $V_C$. The remaining two are of the form $d(p_i)/p_i^2$ with $d(p) \propto 1/\sqrt{p}$, which for small momenta also leads to a short-ranged interaction decreasing as $\sim 1/\sqrt{r}$ for large $r$. We thus conclude that of the three interaction lines connecting the four vertices in the ``three-body force'' of Fig.~\ref{vqqg}e only one is long-ranged and the other two are short-ranged. Along these lines one can approximate $K({\bf x},{\bf z},{\bf y})$ as
\begin{equation}
K({\bf x},{\bf z},{\bf y}) \propto \delta({\bf z}) \left[ {{m_g V_C({\bf x})} \over { (m_g |{\bf y}|)^\alpha}}
+ {{m_g V_C({\bf y})} \over {(m_g|{\bf x}|)^\alpha}} \right],
\end{equation}
with $0<\alpha < 1$. Ignoring the gluon spin and all spin-orbit couplings we then obtain,
\begin{eqnarray}
& & H_{3e} \to \int d{\bf x} d{\bf y}
{{\phi^*_{m'}({\bf x})}\over {\sqrt{2m_g}}} {{ \phi_{m}({\bf y})}\over {\sqrt{2m_g}}} \nonumber \\
& \times & \left[ K({\bf x} - {{\bf R}\over 2}, {\bf y} - {\bf x}, {\bf y} + {{\bf R}\over 2}) + ({\bf R} \to - {\bf R}) \right]
\nonumber \\
& \propto & \int d{\bf x} \phi^*_{m'}({\bf x}) \left[ { {V_C({\bf x} - {{\bf R}\over 2}) } \over {(m_g|{\bf x} + {{\bf R}\over 2}|)^\alpha} } + ({\bf R} \to -{\bf R}) \right] \phi_m({\bf x}). \nonumber \\
\end{eqnarray}
At large separation $R$, with the wave functions peaking at $|{\bf x}| \sim 0$, we find that $H_{3e}$ grows less rapidly than the two-body interactions. This is in general true for interactions originating from the expansion of $K[A]$ in powers of $A$ which couple multiple gluons. This is the basis for the approximations discussed in Section~\ref{chainsec}.
The off-diagonal matrix element of the Hamiltonian mixing the $|{q{\bar q}}\rangle$ and $|{q{\bar q}} g\rangle$ states, shown in Fig.~\ref{vmix}, is given by,
\begin{widetext}
\begin{eqnarray}
H_{4} & = & i \sum \int {{d{\bf k}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}}
{{\phi_{m,j_g}(k)}\over {\sqrt{2\omega(k)}}} \int d{\bf x} d{\bf z} \left[ K_1({\bf x} - {{\bf R}\over 2}, {\bf z} - {\bf x} - {{\bf R}\over 2}) - ({\bf R} \to - {\bf R}) \right]
e^{i{\bf x} \cdot {\bf k}} e^{i {\bf z} \cdot {\bf q}} \nonumber \\
& \times & { \sqrt{ {2j_g + 1}} \over {4\pi} }
D^{j_g,*}_{\Lambda=0,\lambda'}(\hat{\bf k}) D^1_{\nu,\lambda}(\hat{\bf k}) \chi^{\xi}_{\lambda'\lambda}
D^{1,*}_{\nu,0}(\hat{\bf q})
=
i \sum \int {{d{\bf k}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}}
{{\phi_{m,j_g}(k)}\over {\sqrt{2\omega(k)}}} K_1({\bf k}+{\bf q},{\bf q}) \nonumber \\
& \times & \left[ e^{i {{\bf R}\over 2} \cdot ( {\bf k} + 2{\bf q} )} - ({\bf R} \to -{\bf R}) \right]
{ \sqrt{ {2j_g + 1}} \over {4\pi} }
D^{j_g,*}_{\Lambda=0,\lambda'}(\hat{\bf k}) D^1_{\nu,\lambda}(\hat{\bf k}) \chi^{\xi}_{\lambda'\lambda}
D^{1,*}_{\nu,0}(\hat{\bf q}) \nonumber \\
\label{hvmix}
\end{eqnarray}
\end{widetext}
\begin{equation}
K_1({\bf x},{\bf y}) = \int {{d{\bf p}} \over {(2\pi)^3}} {{d{\bf q}} \over {(2\pi)^3}} K_1(p,q)
e^{i{\bf x}\cdot {\bf p}} e^{i{\bf y} \cdot {\bf q}}
\end{equation}
with
\begin{equation}
K_1(p,q) = { {N_C\sqrt{C_F}}\over 2} q {{d(p)d(q)} \over {p^2q^2}}\left[d(p)f(p) + d(q)f(q)\right].
\end{equation}
As expected, in the large-$N_C$ limit $K_1 = O(1)$ and, just like the three-body kernel described previously, $K_1({\bf x},{\bf y})$ has mixed behavior at large separations. A term in momentum space proportional to $d^2f$ in one of the two momentum variables leads to $V_C$ in the corresponding position space argument, while for the other momentum variable it leads to a less singular behavior at large distances. Approximately, we find
\begin{equation}
K_1({\bf x}-{{\bf R} \over 2},{\bf x}+{{\bf R} \over 2}) \propto {{m_g^2 V_C(|{\bf x} - {{\bf R}\over 2}|) } \over {(m_g |{\bf x} + {{\bf R}\over 2}|)^\beta}} + ({\bf R} \to -{\bf R})
\end{equation}
with $1<\beta<2$.
In this limit, ignoring spin dependence, one finds
\begin{equation}
H_{4} \to i \int d{\bf x}\, K_1\left({\bf x} - {{\bf R}\over 2}, {\bf x} + {{\bf R}\over 2}\right) {{\phi_{m,j_g}(x)}\over {\sqrt{2 m_g}}}.
\end{equation}
Thus, similar to the case of $H_{3e}$, we find that at large separations the mixing terms grow less rapidly with $R$ as compared to two-body interactions.
\section{Introduction}
\subsection{The model} In this work we seek to extend the analysis carried out by the second author in \cite{Pa}. Specifically, this paper is concerned with the stability properties of traveling/standing wave solutions to the $1+1$ dimensional $\phi^{4n}$-equation on the torus (see for example \cite{Lo}): \begin{align}\label{phif}
\partial_{t}^2\phi-\partial_x^2\phi=-\lambda_nV'_n(\phi), \quad t\in\mathbb{R},\ x\in\mathbb{T}_L:=\mathbb{R}/L\mathbb{Z},
\end{align}
where $\lambda_n\in\mathbb{R}$ is a positive parameter and $V_n(\phi)$ is given by the following class of potentials: \begin{align}\label{potential}
V_{n}(\phi):=\prod_{k=1}^n\left(\phi^2-v^2\big(k-\tfrac{1}{2}\big)^2\right)^2, \quad v>0.
\end{align}
Here, $\phi(t,x)$ denotes a real-valued $L$-periodic function. This family of equations corresponds to a generalization of the celebrated $\phi^4$-equation in Quantum Field Theory, which arises as a model for self-interactions of scalar fields (represented by $\phi$). In particular, in the case $n=1$, equation \eqref{phif} is one of the simplest examples in which to apply Feynman diagram techniques for perturbative analysis in quantum theory.
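For concreteness, the potential \eqref{potential} and its derivative (which enters \eqref{phif}) are straightforward to evaluate numerically; the following Python sketch is ours and purely illustrative.
\begin{verbatim}
import numpy as np

def V_n(phi, n, v=1.0):
    """The phi^{4n} potential of Eq. (potential)."""
    phi = np.asarray(phi, dtype=float)
    out = np.ones_like(phi)
    for k in range(1, n + 1):
        out *= (phi**2 - v**2*(k - 0.5)**2)**2
    return out

def dV_n(phi, n, v=1.0):
    """V_n'(phi) by the product rule."""
    phi = np.asarray(phi, dtype=float)
    terms = [phi**2 - v**2*(k - 0.5)**2 for k in range(1, n + 1)]
    out = np.zeros_like(phi)
    for j in range(n):
        rest = np.ones_like(phi)
        for i, t in enumerate(terms):
            if i != j:
                rest *= t**2
        out += 4.0*phi*terms[j]*rest
    return out
\end{verbatim}
Plotting \texttt{V\_n} for $n=2,3$ reproduces the two panels of Figure~\ref{fig1}.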
\medskip
The $\phi^4$-model has been extensively studied from both a mathematical and a physical point of view. In particular, this equation has been a ``workhorse'' of the Ginzburg-Landau (phenomenological) theory of superconductivity, taking $\phi$ as the order parameter of the theory, that is, the macroscopic wave function of the condensed phase \cite{KeCu}. The $\phi^4$-equation has also been derived as a simple continuum model of lightly doped polyacetylene \cite{Ri}. We refer the interested reader
to \cite{MaPa,PeSc,Va} for some other physical motivations.
\medskip
On the other hand, equation \eqref{phif} belongs to a bigger family of equations called the $P(\phi)_2$-theory, which considers general polynomial self-interactions of scalar fields, with the potential assumed to be of the form $V(\phi)=(P(\phi))^2$, where $P(\cdot)$ is some polynomial and $V$ is required to be even. The first examples of such a theory are the famous $\phi^4$, $\phi^6$ and $\phi^8$ models (notice that $\phi^6$ does not belong to our current framework
\eqref{potential}). In this setting, the self-interaction intensity is quantified by $V(\phi)$, and clearly sets the dynamics of the field \cite{Lo}.
\medskip
One interesting feature of the $\phi^{4n}$-model (and generally of the $P(\phi)_2$-theory) is that, as $n$ goes to infinity, for a proper selection of the parameters $\lambda_n$, equation \eqref{phif} converges to the so-called sine-Gordon equation \[
\partial_t^2\phi-\partial_x^2\phi+\sin\phi=0.
\]
Roughly speaking, in order to recover the sine-Gordon as a limiting equation of \eqref{phif}, the parameter $\lambda_n$ has to be chosen so that, for $n\in\mathbb{N}$ sufficiently large ($v=1$), \[
\lambda_n^{-1}=\pi^2\prod_{k=1}^n\big(k-\tfrac{1}{2}\big)^2+\varepsilon(n),
\]
where $\varepsilon:\mathbb{N}\to\mathbb{R}$ is any function converging to zero sufficiently fast as $n$ goes to infinity. Additionally, notice that, as $n$ increases, one is adding more and more different minima to the potential $V_n$ in \eqref{potential} (see Figure \ref{fig1}), and correspondingly more soliton sectors. As a result, these polynomial theories are in general more difficult to handle than the sine-Gordon theory, although for $n$ large one would expect the soliton properties to approach those of sine-Gordon solitons \cite{Lo}.
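As a quick numerical illustration, taking $\varepsilon(n)=0$ and $v=1$, the normalizing constants $\lambda_n$ can be computed as follows (a sketch of ours):
\begin{verbatim}
from math import pi, prod

def lambda_n(n):
    """lambda_n with epsilon(n) = 0 and v = 1 in the formula above."""
    return 1.0/(pi**2 * prod((k - 0.5)**2 for k in range(1, n + 1)))
\end{verbatim}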
\medskip
From a mathematical point of view, equation \eqref{phif} can also be understood as a particular case of the general family of nonlinear Klein-Gordon equations: \begin{align}\label{klein}
\partial_t^2\phi-\partial_x^2\phi=m\phi+f(\phi),
\end{align}
where $m\in\mathbb{R}$ and $f:\mathbb{R}\to\mathbb{R}$ denotes the nonlinearity. Many important nonlinear models can be recovered as particular cases of this latter equation, such as the whole $\phi^{4n}$-family \eqref{phif}, as well as the $\phi^{4n+2}$-family and the sine-Gordon equations (see \cite{Lo} for the explicit form of the $\phi^{4n+2}$-family). Interestingly, under rather general assumptions it is still possible to obtain some stability results for model \eqref{klein}. We refer the reader to \cite{De,KMM2} for a fairly general theory for small solutions to equation \eqref{klein} and to \cite{LiSo,St} for studies of the long time asymptotics for some generalizations of equation \eqref{phif} with variable coefficients.
\medskip
On the other hand, since \eqref{phif} corresponds to a wave-like equation, it can be rewritten in the standard form as a first order system for $\vec{\phi}=(\phi_1,\phi_2)$ as \begin{align}\label{phif_2}
\begin{cases}
\partial_t\phi_1=\phi_2,
\\ \partial_t\phi_2=\partial_x^2\phi_1-\lambda_nV_n'(\phi_1).
\end{cases}
\end{align}
Moreover, from the Hamiltonian structure of the equation it follows that, at least formally, the energy of system \eqref{phif_2} is conserved along the trajectory, that is,
\begin{align}\label{energy}
\mathcal{E}(\vec{\phi}(t))&:=\dfrac{1}{2}\int_0^L \big(\phi_2^2+\phi_{1,x}^2+2\lambda_nV_n(\phi_1)\big)(t,x)dx=\mathcal{E}(\vec{\phi}_0).
\end{align}
Besides, the conservation of momentum shall also play a fundamental role for our current purposes, which is given by:
\begin{align}\label{momentum}
\mathcal{P}(\vec{\phi}(t)):=\int_0^L \phi_2(t,x)\phi_{1,x}(t,x)dx=\mathcal{P}(\vec{\phi}_0).
\end{align}
We point out that, from these two conservation laws it follows that $H^1(\mathbb{T}_L)\times L^2(\mathbb{T}_L)$ defines the natural energy space associated to system \eqref{phif_2}.
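These conserved quantities provide convenient diagnostics for numerical experiments with \eqref{phif_2}. Below is a minimal sketch (ours, not a scheme used in this paper) of a leapfrog time step on a periodic grid, together with discrete versions of $\mathcal{E}$ and $\mathcal{P}$; it reuses \texttt{V\_n} and \texttt{dV\_n} from the earlier sketch.
\begin{verbatim}
import numpy as np

def leapfrog_step(phi1, phi2, dt, dx, lam, n, v=1.0):
    """One step of system (phif_2) with periodic boundary conditions."""
    lap = (np.roll(phi1, -1) - 2.0*phi1 + np.roll(phi1, 1))/dx**2
    half = phi2 + 0.5*dt*(lap - lam*dV_n(phi1, n, v))
    phi1 = phi1 + dt*half
    lap = (np.roll(phi1, -1) - 2.0*phi1 + np.roll(phi1, 1))/dx**2
    phi2 = half + 0.5*dt*(lap - lam*dV_n(phi1, n, v))
    return phi1, phi2

def energy(phi1, phi2, dx, lam, n, v=1.0):
    """Discrete version of the conserved energy (energy)."""
    dphi = (np.roll(phi1, -1) - np.roll(phi1, 1))/(2.0*dx)
    return 0.5*dx*np.sum(phi2**2 + dphi**2 + 2.0*lam*V_n(phi1, n, v))

def momentum(phi1, phi2, dx):
    """Discrete version of the conserved momentum (momentum)."""
    dphi = (np.roll(phi1, -1) - np.roll(phi1, 1))/(2.0*dx)
    return dx*np.sum(phi2*dphi)
\end{verbatim}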
\medskip
Additionally, equation \eqref{phif} is known to satisfy several symmetries. Among the most important ones are the invariances under space and time translations. It is worth noticing that in the aperiodic setting there is an extra invariance, the so-called Lorentz boost; that is, if $\vec{\phi}(t,x)$ is a solution to the equation, then so is \[
\vec{\varphi}(t,x):=\vec{\phi}\big(\gamma(t-\beta x),\gamma(x-\beta t)\big) \quad \hbox{where}\quad \gamma^{-1}:= \sqrt{1-\beta^2} \quad \hbox{and}\quad \beta\in(-1,1).
\]
However, this transformation does not leave the period fixed, and hence, strictly speaking, it is not an invariance of the equation in our current setting.
\medskip
Now, in order to motivate our work, we recall that, for general nonlinear evolution equations, two of the most important objects in nonlinear dynamics are traveling and standing wave solutions, particularly in the context of dispersive PDEs due to the so-called \emph{soliton conjecture}. The existence and (when applicable) the orbital stability of such solutions have become a fundamental issue in the area. In this regard, we prove the existence of at least one branch of traveling wave solutions to equation \eqref{phif} in the periodic setting, as well as one associated branch of standing wave solutions. Nonetheless, we remark that, to the best of our knowledge, for $n>2$ these solutions have no explicit form, which has been a major difficulty in this work.
\medskip
One of the key points in our analysis is the use of the classical results of Grillakis-Shatah-Strauss (see \cite{GSS}), which set a general framework to study the orbital stability/instability of both traveling and standing wave solutions. These general results are based on the spectral information of the linearized Hamiltonian around these specific solutions. Thereby, it is worthwhile to notice that, in the real-valued case, equation \eqref{phif_2} can be rewritten in the abstract Hamiltonian form as \[
\partial_t\vec\phi=\mathbf{J}\mathcal{E}'(\vec\phi) \quad \hbox{where} \quad \mathbf{J}:=\left(\begin{matrix}
0 & 1 \\ -1 & 0
\end{matrix}\right),
\]
where $\mathcal{E}'$ denotes the Frechet derivative of the conserved energy functional $\mathcal{E}$ in \eqref{energy}.
\medskip
Regarding the orbital stability of explicit solutions to equations \eqref{phif} and \eqref{klein}, there exists a vast literature on the aperiodic case. We refer the reader to \cite{HPW} for a classical and rather general result about the orbital stability of kink solutions for Klein-Gordon equations, and to \cite{KMM,KMMH} for some interesting results regarding the asymptotic stability of kink solutions for general scalar-field equations (see also \cite{AlMuPa3} for a recent work in this direction in the case of sine-Gordon). We also refer to \cite{Cu} for a study of the asymptotic stability properties of this type of solutions in dimension $3$. Nevertheless, for the periodic setting, there are not that many well-known results. We refer the reader to \cite{AnNa2,NaCa,NaPa} for the treatment of periodic solutions for a specific type of Klein-Gordon equations. Specifically, the first two of these works consider the stability problem of periodic solutions with $-\phi+\vert \phi\vert^4\phi $ as right-hand side in \eqref{phif}, while the third one considers $+\vert\phi\vert^2\phi$ and $-\phi+\vert\phi\vert^2\phi$ as right-hand sides. We emphasize that none of the $\phi^{4n}$ equations (for any $n\in\mathbb{N}$) fits any of these settings. On the other hand, as mentioned before, for the case $n=1$, the orbital in/stability of traveling/standing wave solutions to equation \eqref{phif} was already treated in \cite{Pa}. Regarding the stability of periodic wavetrains, we refer the reader to \cite{JoMaMiPl}. We remark that this latter result seems to be the first one (to the best of our knowledge) for wavetrains in the periodic case (see also \cite{JoMaMiPl3}). On the other hand, we refer to \cite{DeMc,JoMaMiPl2} for stability results in a particularly interesting Klein-Gordon setting (but different from the previous ones), the sine-Gordon equation. However, in the last two works, the authors are mostly focused on spectral and exponential stability, rather than on orbital stability. We point out that, in the previous case, the authors also deal with \emph{superluminal waves}, a case which we do not treat in this work. Regarding the stability of periodic traveling waves in Hamiltonian equations that are first-order in time, we refer to \cite{DeUp} for stability results for the nonlinear Schr\"odinger equation and to \cite{An,DeKa,DeNi} for the KdV and mKdV settings. Finally, we refer the reader to \cite{AlMuPa} for a stability study of more complex periodic structures that do not fit into the framework of Grillakis \emph{et al.} \cite{GSS,GSS2}, such as spatially-periodic \emph{Breathers}. These are explicit solutions to the equation which behave as solitons but are also time-periodic. See also \cite{AlMuPa2,MP} for some stability results for aperiodic Breathers in the sine-Gordon equation.
\medskip
Finally, concerning the well-posedness of the equation, we recall that by applying the classical Kato theory for quasilinear equations we obtain the local well-posedness in the energy space $H^1(\mathbb{T}_L)\times L^2(\mathbb{T}_L)$ of equation \eqref{phif} (see \cite{Kato}). We refer the reader to \cite{De,HaNa,Kl,Kl2} for several other local and global well-posedness results in one-dimensional and higher dimensional Klein-Gordon equations.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.445]{phi8.pdf}\qquad \quad
\includegraphics[scale=0.445]{phi12.pdf}
\caption{On the left-hand side we show $V_{n}(\phi)$ for $n=2$, that is, the potential associated to the $\phi^8$-model. On the right-hand side we show $V_{n}$ for $n=3$, that is, the potential associated to the $\phi^{12}$-model.}
\label{fig1}
\end{figure}
\subsection{Main results}
In order to present our main results, let us first define what it means for a solution to be \emph{Orbitally Stable}. We say that a traveling wave solution $\vec{\varphi}_c$ is orbitally stable if for all $\varepsilon>0$ there exists $\delta>0$ small enough such that for every initial data $\vec{\phi}_0\in X$, with $X:=H^1(\mathbb{T}_L)\times L^2(\mathbb{T}_L)$, satisfying $
\Vert \vec{\phi}_0-\vec{\varphi}_c\Vert_{X}\leq\delta$, then
\[
\sup_{t\in\mathbb{R}}\inf_{\rho\in[0,L)}\Vert\vec{\phi}(t)-\vec{\varphi}_c(\cdot-\rho) \Vert_{X}<\varepsilon.
\]
Additionally, we shall say that an odd-standing wave solution $\vec{\varphi}$ is orbitally stable in the \emph{odd energy space} $X_{\mathrm{odd}}:=H^1_{\mathrm{odd}}(\mathbb{T}_L)\times L^2_{\mathrm{odd}}(\mathbb{T}_L)$ if, for all $\varepsilon>0$, there exists $\delta>0$ small enough such that for every initial data $\vec{\phi}_0\in X_{\mathrm{odd}}$ satisfying $
\Vert \vec{\phi}_0-\vec{\varphi}\Vert_{X}\leq\delta$, then
\[
\sup_{t\in\mathbb{R}}\Vert\vec{\phi}(t)-\vec{\varphi} \Vert_{X}<\varepsilon.
\]
Otherwise, we say that $\vec{\varphi}_c$ (respectively $\vec{\varphi}$) is orbitally unstable. In particular, the latter is the case when the solution ceases to exist in finite time.
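In numerical experiments, the modulated distance appearing in these definitions can be approximated on a grid by minimizing over discrete translations; the following sketch (ours) is one way to do so.
\begin{verbatim}
import numpy as np

def X_norm(u1, u2, dx):
    """Discrete H^1 x L^2 norm of (u1, u2) on a periodic grid."""
    du1 = (np.roll(u1, -1) - np.roll(u1, 1))/(2.0*dx)
    return np.sqrt(dx*np.sum(u1**2 + du1**2 + u2**2))

def orbital_distance(phi1, phi2, psi1, psi2, dx):
    """inf over grid shifts rho of the X-distance to the wave orbit."""
    return min(X_norm(phi1 - np.roll(psi1, s), phi2 - np.roll(psi2, s), dx)
               for s in range(phi1.size))
\end{verbatim}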
\medskip
It is worth noticing that, even when it is not explicitly said, we shall always assume that $L$ is the fundamental period of $\vec{\varphi}_c$. In particular, we are only considering perturbations with exactly the same period as our fundamental solution.
\medskip
Now, in order to avoid introducing too much new notation and definitions in this introductory section, we shall only state our main results in a somewhat informal fashion. We remark again that all the theorems below have already been proven in \cite{Pa} for the case $n=1$. Moreover, since there is no explicit solution for $n>2$, in the sequel we shall refer to the specific family of solutions we are considering as ``periodic solutions orbiting around the origin'' (see section \ref{existence} below for further details).
\begin{thm}[Orbital instability of subluminal traveling waves]\label{MT1}
Let $n\in\mathbb{N}$ be arbitrary but fixed. Then, periodic traveling wave solutions ($c\in(-1,1)$) orbiting around the origin in the corresponding phase-portrait are orbitally unstable in the energy space by the periodic flow of the $\phi^{4n}$ equation.
\end{thm}
\begin{rem}
We refer the reader to Figure \ref{fig2} for a quick qualitative check of the behavior of solutions to model \eqref{phif} around the origin in the corresponding phase-portrait.
\end{rem}
As discussed above, in order to obtain this result we use the general theory of Grillakis-Shatah-Strauss. Nevertheless, the results in \cite{GSS} require the existence of a non-trivial curve of solutions of the form $c\mapsto \phi_c$ which, in sharp contrast with the aperiodic setting, presents a delicate issue to overcome, and most of this work is devoted to addressing this problem.
\begin{thm}[Existence of a smooth curve of solutions]
Consider $n\in\mathbb{N}$ and let $L>0$ be arbitrary but fixed. There exists a non-trivial smooth curve of periodic solutions $c\mapsto \phi_c\in H^\infty(\mathbb{T}_L)$ orbiting around the origin in the corresponding phase-portrait.
\end{thm}
\begin{rem}
We point out that the domain on which $c$ is moving in the definition of $c\mapsto \phi_c$ is not always equal to $(-1,1)$ (see Theorem \ref{thm_monotonicity} below for further details).
\end{rem}
The main obstruction in proving the previous theorem is due both to the difficulty of handling the potential $V_n(\phi)$ for general $n\in\mathbb{N}$, and to the fact that, for $n>2$, no explicit solution exists. In order to overcome this problem we use ODE results for Hamiltonian systems and several combinatorial arguments to handle the potential.
\medskip
Notice that from the orbital instability theorem above we also conclude that the associated stationary solutions ($c=0$) are orbitally unstable. However, under some additional hypotheses we have the following result.
\begin{thm}[Orbital stability: stationary case]\label{MT2}
Let $n\in\mathbb{N}$ be arbitrary but fixed. Then, periodic standing wave solutions ($c=0$) orbiting around the origin in the corresponding phase-portrait are orbitally stable under the periodic flow of the $\phi^{4n}$ equation with respect to $(\mathrm{odd},\mathrm{odd})$ perturbations in the energy space.
\end{thm}
Finally, as a by-product of our analysis, we are able to extend the main result in \cite{deNa} for equation \eqref{phi_w} below (given only for the cases $n=1,2$; see Section \ref{ext_nata} below for more details) to all $n\in\mathbb{N}$.
\begin{thm}[Orbital instability of traveling waves in \cite{deNa}]\label{MT3}
Let $n\in\mathbb{N}$ be arbitrary but fixed. Then, the traveling wave solutions ($c\in(-1,1)$) found in \cite{deNa}, orbiting around the origin in the corresponding phase-portrait associated to equation \eqref{phi_w}, are orbitally unstable in the energy space.
\end{thm}
\begin{rem}
We emphasize that the previous theorems are independent of the results in \cite{Pa} and have been proven by different techniques.
\end{rem}
\begin{rem}
As an important observation we point out that Theorem \ref{MT2} is motivated by the fact that the oddness of the initial data is preserved by the periodic flow associated to equation \eqref{phif}. In other words, if $\vec{\phi}_0=(\phi_{0,1},\phi_{0,2})=(\mathrm{odd},\mathrm{odd})$, then so is the solution for all times. Moreover, notice that, under the additional requirement $\phi(x=0)=0$, the solution orbiting around zero in the corresponding phase-portrait corresponds to an odd function. Thus, in the case $c=0$, the associated solution corresponds to an $(\mathrm{odd},\mathrm{odd})$ vector, and hence, under the assumptions of the previous theorem, the solution associated to this kind of initial perturbation shall always remain odd. Here, and for the rest of this paper, when we refer to an \emph{odd} function, we mean that it is odd when regarded as a function on the whole line.
\end{rem}
\begin{rem}
We point out that, since equation \eqref{phif} (equation \eqref{phi_w} for Theorem \ref{MT3}) is also invariant under the maps: \[
u(t,x)\mapsto u(-t,x),\quad u(t,x)\mapsto -u(t,x) \quad \hbox{and} \quad u(t,x)\mapsto -u(-t,x),
\]
we also deduce Theorems \ref{MT1}, \ref{MT2} and \ref{MT3} for both traveling and \emph{anti-traveling}\footnote{The solution with a minus sign in front (which is also a solution).} wave solutions, moving to the left or right respectively.
\end{rem}
\subsection{Organization of this paper}
This paper is organized as follows. In Section \ref{existence} we prove the existence of a smooth curve of traveling wave solutions, and show that, under some conditions on the size of the period, we are able to consider standing wave solutions too. In Section \ref{sec:SpecAna} we provide the main spectral information on the linear operators needed in the stability analysis. Then, in Section \ref{stab_standing} we use the spectral information to conclude the stability of standing waves under odd perturbations. In Section \ref{sec:Instability} we show the instability of traveling waves in the whole energy space. Finally, in Section \ref{ext_nata} we extend the analysis in \cite{deNa} to traveling wave solutions.
\medskip
\section{Existence of smooth curves of periodic solutions}\label{existence}
In this section we seek to establish the existence of smooth curves of periodic traveling wave solutions to equation \eqref{phif} associated to subluminal waves, that is, with speed $c\in(-1,1)$. More precisely, in this section we look for solutions of the form $\phi(t,x)=\phi_c(x-ct)$. Before going further notice that, with no loss of generality, from now on we can assume\footnote{If not, we use the transformation $(t,x)\mapsto(\lambda_n^{1/2}t,\lambda_n^{1/2}x)$, which fixes $\lambda_n=1$. To fix $v^2=1$ it is enough to re-scale $\phi$ by defining the change of variables $\varphi(t,x)=v\phi\big(v^{2n-1}t,v^{2n-1}x\big)$.} $\lambda_n=v^2=1$. Thus, plugging $\phi_c(x-ct)$ into the equation, we obtain that if $\phi(t,x)$ is a traveling wave solution, then $\phi_c$ must satisfy: \begin{align}\label{solit_eq}
(c^2-1)\phi_c''=-V'_{n}(\phi_c).
\end{align}
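For the reader's convenience, let us make the previous computation explicit; here we assume (consistently with \eqref{solit_eq}) that, after the normalization above, equation \eqref{phif} takes the scalar-field form $\phi_{tt}-\phi_{xx}+V_{n}'(\phi)=0$. Writing $s:=x-ct$, we have \[
\partial_t^2\big(\phi_c(x-ct)\big)=c^2\phi_c''(s) \qquad \hbox{and}\qquad \partial_x^2\big(\phi_c(x-ct)\big)=\phi_c''(s),
\]
so that $c^2\phi_c''-\phi_c''+V_{n}'(\phi_c)=0$, which is exactly \eqref{solit_eq}.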
On the other hand, the question regarding the existence of periodic solutions for the latter equation can be rewritten in terms of the following (autonomous) Hamiltonian system:
\begin{align}\label{system}
\begin{cases}\dot{u}=v,
\\ \dot{v}=\tfrac{1}{\omega}V_{n}'(u),
\end{cases}
\end{align}
where $\omega:=1-c^2$. From the explicit form of $V_{n}$ in \eqref{potential} it follows that the previous system has exactly $4n-1$ critical points. In fact, first of all, since $V_n'$ is a $(4n-1)$-th degree polynomial (see \eqref{comp_v_x}), it follows that it can have at most $4n-1$ real roots. Now, from direct computations, recalling the explicit form of $V_n$ in \eqref{potential}, we infer that zero is a simple real root\footnote{From the explicit form of $V_n$ it immediately follows that $V_n'$ has a factor $x$ multiplying the whole expression.} of $V_n'$. Besides, it is not hard to see that each root associated to each individual factor in the definition of $V_n$ is also a simple root\footnote{Since each individual factor in $V_n$ is of the form $(x^2-a^2)^2$, its derivative still contains a factor $(x^2-a^2)$. Therefore, $x=\pm a$ is still a root of $V_n'$.} of $V_n'$. Summarizing, we have found $2n+1$ roots of $V_n'$, which are precisely located at \begin{align}\label{critical_points}
(u_{-k},v_{-k}):=\big(-k+\tfrac{1}{2},0\big), \quad (u_0,v_0):=(0,0), \quad (u_k,v_k):=\big(k-\tfrac{1}{2},0\big),
\end{align}
where $k=1,...,n$. Even more, the remaining $2n-2$ critical points are located in between each consecutive pair\footnote{This follows, for example, from Rolle's Theorem applied to $f(x)=V_n(x)$, since all of the critical points in \eqref{critical_points} (except for $x=0$) are also roots of $V_n$. Thus, $f'(x)$ must have at least one root in between each pair.} in \eqref{critical_points} for $\pm k=1,...,n$. More specifically, for each $k\in\{1,...,n-1\}$, we have exactly one critical point in between $(u_k,v_k)$ and $(u_{k+1},v_{k+1})$ (and their corresponding reflections, that is, in between each pair $(u_{-k-1},v_{-k-1})$ and $(u_{-k},v_{-k})$). Since we have already found $4n-1$ roots, there cannot be any other missing root of $V_n'$. Hence, the two nearest critical points to $(0,0)$ are $(u_{\pm1},v_{\pm 1})$ given in \eqref{critical_points}. Moreover, by standard computations we see that the linearized matrix around each of these points takes the form \begin{align}\label{form_linear_matrix}
M:=\dfrac{1}{\omega}\left(\begin{matrix}
0 & \omega
\\ V_{n}'' & 0
\end{matrix}\right).
\end{align}
Furthermore, from direct computations it follows that, for all $n\in\mathbb{N}$ we have (see \eqref{vnpp} below): \[
V_n''(0)=-4\prod_{k=1}^n\big(k-\tfrac{1}{2}\big)^4\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}<0.
\]
Thus, for $c\in(-1,1)$ or equivalently for $\omega>0$, from the latter inequality, and recalling identity \eqref{form_linear_matrix}, it follows that $(0,0)$ is a stable center point for all $n\in\mathbb{N}$. Even more, from similar computations it is not hard to see that $V_{n}''\big(\tfrac{1}{2}\big)>0$, and hence, $(u_{\pm1},v_{\pm1})$ are both saddle critical points, for all $n\in\mathbb{N}$.
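For instance (a computation not needed in the sequel, but which illustrates the discussion above), in the case $n=2$ one has \[
V_2'(x)=4x\big(x^2-\tfrac{1}{4}\big)\big(x^2-\tfrac{9}{4}\big)\big(2x^2-\tfrac{5}{2}\big),
\]
whose seven roots are $0$, $\pm\tfrac{1}{2}$, $\pm\tfrac{3}{2}$ and $\pm\tfrac{\sqrt{5}}{2}$, the latter pair lying in between $\pm\tfrac{1}{2}$ and $\pm\tfrac{3}{2}$, as predicted. Moreover, in this case the formula above gives \[
V_2''(0)=-4\cdot\tfrac{81}{256}\cdot\big(4+\tfrac{4}{9}\big)=-\tfrac{45}{8}<0,
\]
while a direct computation gives $V_2''\big(\tfrac{1}{2}\big)=8>0$, in agreement with the center/saddle classification.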
\medskip
On the other hand, recalling that the previous system is Hamiltonian, and setting $(0,0)$ as the zero energy level, we obtain that the Hamiltonian associated to \eqref{system} is given by \begin{align}\label{hamilt}
\mathcal{H}(u,v):=\dfrac{1}{2}v^2-\dfrac{1}{\omega}\big(V_{n}(u)-V_{n}(0)\big).
\end{align}
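Indeed, one can directly check that \eqref{system} is the Hamiltonian system generated by $\mathcal{H}$, namely \[
\dot{u}=\partial_v\mathcal{H}(u,v)=v \qquad \hbox{and}\qquad \dot{v}=-\partial_u\mathcal{H}(u,v)=\tfrac{1}{\omega}V_{n}'(u),
\]
and that $\mathcal{H}$ is constant along the orbits of \eqref{system}, with $\mathcal{H}(0,0)=0$ under this normalization.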
Therefore, by the standard ODE theory for Hamiltonian equations (see for example \cite{ChiL}), we know that all periodic solutions of \eqref{system} orbiting around $(0,0)$ correspond to regular level sets of $\mathcal{H}$ given by \begin{align}\label{gamma}
\Gamma_\beta:=\big\{(u,v): \ \mathcal{H}(u,v)=\beta\big\},
\end{align}
with $\beta\in(0,E_\star)$, where the maximal energy level $E_\star$ is given by \[
E_\star:=-\dfrac{1}{\omega}\big(V_n(\tfrac{1}{2})-V_n(0)\big)=\dfrac{1}{\omega}\prod_{k=1}^n\big(k-\tfrac{1}{2}\big)^4.
\]
\begin{figure}[h!]
\centering
\includegraphics[scale=0.26]{phase_phi4.pdf} \qquad
\includegraphics[scale=0.26]{phase_phi8.pdf}
\caption{Phase portrait of the Hamiltonian system \eqref{system} around $(0,0)$ for the first two cases $n=1,2$. On the left we have the phase portrait associated to the $\phi^4$-model while on the right the one associated to $\phi^8$.}\label{fig2}
\end{figure}
Now, with the additional constraint $\phi_c(0)=0$, from the symmetry of these level sets it follows that all solutions associated to these periodic orbits are $\mathrm{odd}$ (other solutions are translations of the same function, and consequently, not necessarily odd). Finally, by using again that each solution is a level curve of $\mathcal{H}$ and the symmetry of the phase portrait, it follows from \eqref{hamilt}-\eqref{gamma} that, for every $\beta\in(0,E_\star)$, the period of the corresponding odd solution satisfies
\begin{equation}
L=\sqrt{2}\int_{x_0}^{x_1}\dfrac{dx}{\sqrt{\beta+\frac{1}{\omega}(V_n(x)-V_n(0))}},\label{eq:peri}
\end{equation}
where $x_0$ and $x_1$ are the left and right intersections of the curve given by $\tfrac{1}{2}v^2-\tfrac{1}{\omega}\big(V_n(u)-V_n(0)\big)=\beta$ with the $u$-axis. We point out that the upper integration limit $x_1$ can also be written as the solution of $V_n(x)=V_n(0)-\omega\beta$ for $x\in(0,\tfrac{1}{2})$, and that $x_0=-x_1$ (note that there is only one solution in this interval). Moreover, from the equation for $x_1$ we also infer that when $\beta$ goes to zero (or $c$ goes to $1$ for fixed $\beta$), $x_1$ goes to zero too. It is worth noting that the period $L$ in \eqref{eq:peri} is given by a convergent improper integral for all values of $\beta\in(0,E_\star)$. Furthermore, notice that \[
\lim_{\beta\to E_\star^-}L(\beta)=+\infty.
\]
On the other hand, when $\beta\to 0^+$ we have\footnote{If the reader prefers, the existence of this limit can be rigorously justified by defining it (the limit) after the proof of the monotonicity of the period. Notice that the period is trivially bounded from below by $0$ and decreases as $\beta\to0^+$ (see the proof of Theorem \ref{thm_monotonicity} below). Hence, $L(\beta)$ has a limit as $\beta\to0^+$.}: \[
\lim_{\beta\to0^+}L(\beta)=\sqrt{2\omega}\lim_{\beta\to0^+}\int_{x_0(\beta)}^{x_1(\beta)}\dfrac{dx}{\sqrt{\omega\beta+V_n(x)-V_n(0)}}=:\sqrt{\omega}\delta_n,
\]
where $\delta_n\in[0,\infty)$ does not depend on $\omega$. The following theorem ensures that, once we fix the period $L\in(0,\infty)$, the previous method produces a non-trivial smooth curve of periodic traveling wave solutions that can be parameterized by their speeds.
\begin{thm}[Smooth curve of periodic solutions]\label{thm_monotonicity}
Consider $n\in\mathbb{N}$ and let $L>0$ be arbitrary but fixed. Then, for any speed $c$ satisfying \[
c\in(-1,1) \quad \hbox{such that}\quad L>\sqrt{\omega}\delta_n,
\]
there exists a unique energy level $\beta=\beta(c)\in(0,E_\star)$ such that the periodic wave solution $\vec{\phi}(x-ct)$ to the $\phi^{4n}$-equation \eqref{phif} constructed above has fundamental period $L$. Furthermore, the map $c\mapsto \phi_c(t=0,x)\in H^1(\mathbb{T}_L)$ is smooth.
\end{thm}
\begin{rem}
Notice that, by choosing $L>\delta_n$ we are able to consider standing wave solutions. These standing waves are related (in some sense) to the odd Kink solution of the $\phi^{4n}$-model. Additionally, when $c=0$, the corresponding solution is $(\phi_1,\phi_2)=(\mathrm{odd},\mathrm{odd})$, while when $c\neq 0$, the solution is $(\phi_1,\phi_2)=(\mathrm{odd},\mathrm{even})$, a property that is not preserved by the flow.
\end{rem}
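Although its precise value is not needed in the sequel, a formal small-amplitude computation suggests the value of $\delta_n$ (we stress that this is only a heuristic consistency check, not a proof): near the center $(0,0)$ one has $V_n(u)-V_n(0)\approx \tfrac{1}{2}V_n''(0)u^2$, so that the small orbits of \eqref{system} are approximately those of the linear oscillator $\omega u''=V_n''(0)u$, whose (amplitude-independent) period is $2\pi\sqrt{\omega/\vert V_n''(0)\vert}$. This suggests that \[
\delta_n=\dfrac{2\pi}{\sqrt{\vert V_n''(0)\vert}}=\pi\left(\prod_{k=1}^n\big(k-\tfrac{1}{2}\big)^4\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}\right)^{-1/2},
\]
which is, in particular, strictly positive and independent of $\omega$, in agreement with the discussion above.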
The main theorem in \cite{Chi} ensures that, with our current notations, if $-(V_n(x)-V_n(0))/(V_n'(x))^2$ is strictly convex for $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, then the period $L=L(\beta)$ defines a strictly increasing function of $\beta$. Besides, notice that the proof of the theorem follows once we show the strict monotonicity of $L$ with respect to the energy level $\beta$. Thus, in order to conclude the proof of the theorem, it is enough to study the sign of the following function:
\[
-\dfrac{d^2}{dx^2}\dfrac{V_{n}(x)-V_n(0)}{(V_{n}'(x))^2}=\dfrac{3V_{n}''\big( (V_{n}')^2-2(V_{n}-V_n(0))V_{n}''\big)+2(V_{n}-V_n(0))V_{n}'V_{n}'''}{(V_{n}')^4}.
\]
Then, our first goal is to show the non-negativity of the latter quantity. Since the denominator is always non-negative, for this first step it is enough to show that \begin{align*}
\mathcal{V}_n(x):=3V_{n}''(x)\big( (V_{n}'(x))^2-2(V_{n}(x)-V_n(0))V_{n}''(x)\big)+2(V_{n}(x)-V_n(0))V_{n}'(x)V_{n}'''(x)\geq 0.
\end{align*}
In order to show that the latter inequality holds, we start by doing several basic computations needed in our analysis. First of all, by directly differentiating $V_n$ we have
\begin{align}\label{comp_v_x}
V_{n}'=4x\prod_{k=1}^n\left(x^2-\big(k-\tfrac{1}{2}\big)^2\right)\sum_{P\in\mathcal{P}_{n-1}^n}\prod_{i\in P}\left(x^2-\big(i-\tfrac{1}{2}\big)^2\right),
\end{align}
where $\mathcal{P}_{m}^n$ denotes the set\footnote{We call an $m$-combination of a set $E$ any subset of $m$ distinct elements of $E$. For example, \[\mathcal{P}_2^4=\big\{\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\}\big\}.\]} of $m$-combinations of $\{1,...,n\}$, without repetitions and with no permutations allowed. In particular, each $P\in\mathcal{P}_{n-1}^n$ is a set of $(n-1)$ elements. For the sake of clarity, let us introduce some notation that shall be useful in the sequel. From now on we shall denote by $\Pi_{n}$, $\Pi_{n,0}$ and $\Sigma_{i}$ the following quantities\footnote{By convention $\Sigma_0=1$ and $\Sigma_m=0$ for $m<0$.}
\[
\Pi_n:=\prod_{k=1}^n\left(x^2-\big(k-\tfrac{1}{2}\big)^2\right), \ \ \Pi_{n,0}:=\prod_{k=1}^n\big(k-\tfrac{1}{2}\big)^2 \ \ \hbox{and}\ \ \Sigma_i:=\sum_{P\in\mathcal{P}_{i}^n}\prod_{j\in P}\left(x^2-\big(j-\tfrac{1}{2}\big)^2\right).
\]
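To fix ideas, in the case $n=2$ these quantities read \[
\Pi_2=\big(x^2-\tfrac{1}{4}\big)\big(x^2-\tfrac{9}{4}\big), \qquad \Pi_{2,0}=\tfrac{1}{4}\cdot\tfrac{9}{4}=\tfrac{9}{16}, \qquad \Sigma_1=2x^2-\tfrac{5}{2}, \qquad \Sigma_0=1.
\]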
Hence, by taking advantage of the previous notation we can write, for example, $V_n'=4x\Pi_n\Sigma_{n-1}$. Then, performing similar direct computations, we are able to express $V_n''$ and $V_n'''$ as:
\begin{align}
V_n''&=4\Pi_n\Sigma_{n-1}+8x^2\Sigma_{n-1}^2+16x^2\Pi_n\Sigma_{n-2}, \label{vnpp}
\\ V_n'''&=24x\Sigma_{n-1}^2+48x\Pi_n\Sigma_{n-2}+96x^3\Sigma_{n-1}\Sigma_{n-2}+96x^3\Pi_n\Sigma_{n-3}.\nonumber
\end{align}
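As a quick consistency check of these formulas, in the case $n=1$ (recall the conventions $\Sigma_0=1$ and $\Sigma_m=0$ for $m<0$) they reduce to \[
V_1'=4x\big(x^2-\tfrac{1}{4}\big), \qquad V_1''=4\big(x^2-\tfrac{1}{4}\big)+8x^2=12x^2-1, \qquad V_1'''=24x,
\]
in agreement with the direct differentiation of the $n=1$ potential $V_1(x)=\big(x^2-\tfrac{1}{4}\big)^2$.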
Therefore, gathering the identities above and performing some extra direct computations we obtain $\tfrac{1}{96}\mathcal{V}_n =\mathbf{A}+\mathbf{B}x^2+\mathbf{C}x^4$, where \begin{align*}
\mathbf{A}&:=-\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2\Sigma_{n-1}^2,
\\ \mathbf{B}&:=2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3-4(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-1}\Sigma_{n-2},
\\ \mathbf{C}&:=4\Pi_{n,0}^2\Sigma_{n-1}^4
+8\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^2\Sigma_{n-2}+8(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-1}\Sigma_{n-3}
\\ & \quad \ \, -16(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-2}^2.
\end{align*}
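As a sanity check of this decomposition, notice that for $n=1$ (using again the conventions $\Sigma_0=1$ and $\Sigma_{-1}=\Sigma_{-2}=0$) it reduces to \[
\tfrac{1}{96}\mathcal{V}_1=-\big(\Pi_1^2-\tfrac{1}{16}\big)\Pi_1^2+\tfrac{1}{8}\Pi_1x^2+\tfrac{1}{4}x^4,
\]
which can be verified directly from $V_1'=4x\Pi_1$, $V_1''=12x^2-1$ and $V_1'''=24x$; for instance, at $x^2=\tfrac{1}{8}$ both sides equal $\tfrac{11}{4096}$.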
Now, for the sake of clarity we split the analysis into several small lemmas. Moreover, since the case $n=1$ was already treated in \cite{Pa}, from now on we shall only address the case $n>1$. The following lemma gives us the non-negativity of the sum of the second term in $\mathbf{B}$ with the second one in $\mathbf{C}$ (notice that the terms associated to $\mathbf{C}$ in $\mathcal{V}_n$ carry an extra $x^2$ with respect to the ones associated to $\mathbf{B}$).
\begin{lem}\label{mon_lem_1} Let $n\in\mathbb{N}$ with $n\geq 2$. Then, for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$ we have: \begin{align}
\label{first_prop_ineq}
-4\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2\Sigma_{n-1}\Sigma_{n-2}+8x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^2\Sigma_{n-2}\geq0.
\end{align}
\end{lem}
\begin{proof} In fact, first of all, in order to simplify the proof we start by factorizing the left-hand side of inequality \eqref{first_prop_ineq} by $4\Pi_n\Sigma_{n-1}\Sigma_{n-2}$. Indeed, notice that, for all $n\in\mathbb{N}$, if we expand all terms involved in $4\Pi_n\Sigma_{n-1}\Sigma_{n-2}$ by using the definitions of $\Pi_n$, $\Sigma_{n-1}$ and $\Sigma_{n-2}$, it is not difficult to see that each addend in the resulting product is composed of exactly $3n-3$ factors\footnote{In fact, notice that each addend in $\Sigma_{i}$ is composed of exactly $i$ factors, and that $\Pi_n$ contributes $n$ more factors. Hence, each addend in $4\Pi_n\Sigma_{n-1}\Sigma_{n-2}$ has exactly $3n-3$ factors.}, each of which is negative for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$. This latter remark comes from the fact that, for all $k\geq 1$, any factor of the form $x^2-(k-\tfrac{1}{2})^2$ is negative for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$. Thus, inequality \eqref{first_prop_ineq} is equivalent to showing that, for any $n\in\mathbb{N}$ with $n\geq 2$, and all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, the following holds: \begin{align}\label{reduction_first_prop}
-\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n+2x^2\Pi_{n,0}^2 \Sigma_{n-1}\gtreqless0,
\end{align}
where we have to choose the ``$\leq $'' sign in the latter inequality whenever $n$ is even, and the ``$\geq $'' sign otherwise. Of course, the change from ``$\leq$'' to ``$\geq$'' comes from the fact that $3(n-1)$ is even whenever $n$ is odd, and odd whenever $n$ is even. Consequently, if $n$ is even, the function $\Pi_n\Sigma_{n-1}\Sigma_{n-2}$ is non-positive for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, while it is non-negative if $n$ is odd.
\medskip
\textbf{Case $n$ even:} In this case we are led to prove inequality \eqref{reduction_first_prop} with the ``$\leq$''-sign. In fact, let us start by defining \[
f(x):=\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n-2x^2\Pi_{n,0}^2\Sigma_{n-1}.
\]
By definition it immediately follows that $f(0)=0$ and that $f(x)$ is an even function. Thus, it is enough to show that \begin{align}\label{fprop_prime}
f'(x)=6x\big(\Pi_n^2-\Pi_{n,0}^2\big)\Sigma_{n-1}-8x^3\Pi_{n,0}^2\Sigma_{n-2}\geq 0, \quad \,\hbox{for all }\, x\in(0,\tfrac{1}{2}).
\end{align}
Now, in order to prove the latter inequality, it is enough to recall the following basic property: If $a,b,c,d\in\mathbb{R}$ are all positive numbers satisfying $$
a\geq c \quad \hbox{and}\quad b\geq d,
$$
then $ab\geq cd$. Then, from the previous analysis we infer that inequality \eqref{reduction_first_prop} follows if we show the following (stronger) result (recall that $n$ is even): \begin{align}\label{improved_ineq}
\hbox{for all }\,x\in(0,\tfrac{1}{2}), \quad -\big(\Pi_n^2-\Pi_{n,0}^2\big)\geq 4x^2\Pi_{n,0}^2 \quad \hbox{and} \quad -\Sigma_{n-1}\geq \Sigma_{n-2} .
\end{align}
Notice that by gathering both inequalities we obtain \eqref{fprop_prime}. Hence, let us start by proving the first of them. In fact, by a direct re-arrangement of terms, it follows that the first inequality in \eqref{improved_ineq} is equivalent to showing that \[
(1-4x^2)\geq \Pi_{n,0}^{-2}\Pi_{n}^2=:\mathrm{R} \quad \hbox{ where } \quad
\mathrm{R}=\prod_{k=1}^n\big(1-(k-\tfrac{1}{2})^{-2}x^2\big)^2.
\]
Now, on the one hand, notice that the first factor in $\mathrm{R}$ (that is, the term associated with $k=1$) is given by $(1-4x^2)^2$. On the other hand, for all $k\in\{1,...,n\}$ and all $x\in(0,\tfrac{1}{2})$ we have \[
0< \big(1-(k-\tfrac{1}{2})^{-2}x^2\big)^2< 1.
\]
Thus, by plugging the latter inequality into the definition of $\mathrm{R}$, and by using the explicit form of the factor associated to $k=1$, it immediately follows that \[
\mathrm{R}\leq 1-4x^2.
\]
Now we focus on showing the second inequality in \eqref{improved_ineq}, that is, on showing $-\Sigma_{n-1}\geq \Sigma_{n-2}$. First of all notice that, for $x\in(0,\tfrac{1}{2})$, we can re-write these terms as \begin{align*}
\Sigma_{n-1}&=\Pi_n\cdot\sum_{k=1}^n\big(x^2-(k-\tfrac{1}{2})^2\big)^{-1},
\\ \Sigma_{n-2}&=\Pi_n\cdot\sum_{k=1}^{n-1}\sum_{j=k+1}^n\big(x^2-(k-\tfrac{1}{2})^2\big)^{-1}\big(x^2-(j-\tfrac{1}{2})^2\big)^{-1}.
\end{align*}
For the sake of simplicity, from now on we denote by $\Sigma_i^k$ the $k$-th term associated to $\Sigma_{i}$. More specifically, for the cases of $i=n-1$ and $i=n-2$, for each $k\in\{1,...,n\}$ we define
\begin{align*}
\Sigma_{n-1}^k&:= \big(x^2-(k-\tfrac{1}{2})^2\big)^{-1}\cdot\Pi_n
\\ \Sigma_{n-2}^k&:=\big(x^2-(k-\tfrac{1}{2})^2\big)^{-1}\cdot\Pi_n\sum_{j=k+1}^n\big(x^2-(j-\tfrac{1}{2})^2\big)^{-1},
\end{align*}
where in the second case we assume $k\neq n$. Then, in order to show the second inequality in \eqref{improved_ineq}, it is enough to prove that, for each $k\in\{1,...,n-1\}$ and all $x\in (0,\tfrac{1}{2})$, \[
-\Sigma_{n-1}^k\geq\Sigma_{n-2}^k.
\]
In fact, once the latter inequality is proven, it is enough to sum it over $k=1,...,n-1$ and to notice that $-\Sigma_{n-1}^n\geq0$ (recall that $n$ is even), from where we conclude the desired result. Indeed, notice that, since $x\in(0,\tfrac{1}{2})$, we infer \[
\sum_{j=k+1}^n\big\vert x^2-(j-\tfrac{1}{2})^2\big\vert^{-1}\leq \sum_{j=k+1}^n\big\vert\tfrac{1}{4}-(j-\tfrac{1}{2})^2\big\vert^{-1}.
\]
Therefore, recalling that $(j-\tfrac{1}{2})^2-\tfrac{1}{4}=j(j-1)$, we have the following standard telescoping identity: \[
\sum_{j=2}^n\big((j-\tfrac{1}{2})^2-\tfrac{1}{4}\big)^{-1}=\sum_{j=2}^n\Big(\dfrac{1}{j-1}-\dfrac{1}{j}\Big)=\dfrac{n-1}{n}<1.
\]
Thus, by plugging the latter bound into the definition of $\Sigma_{n-2}^k$ we deduce that \[
\Sigma_{n-2}^k\leq -\big(x^2-(k-\tfrac{1}{2})^2\big)^{-1}\cdot\Pi_n=-\Sigma_{n-1}^k.
\]
\medskip
The case $n$ odd follows exactly the same lines (up to obvious modifications) and hence we omit it.
\end{proof}
Now, the following lemma gives us the non-negativity of the sum of the third and fourth addends in the definition of $\mathbf{C}$.
\begin{lem}\label{mon_lem_2} Let $n\in\mathbb{N}$ with $n\geq 2$. Then, for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$ we have: \begin{align}\label{second_prop_ineq}
8\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2\Sigma_{n-1}\Sigma_{n-3}-16\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2\Sigma_{n-2}^2\geq0.
\end{align}
\end{lem}
\begin{proof}
In fact, similarly as before, we start by reducing the problem to an easier one. First of all notice that, for all $n\in\mathbb{N}$ with $n\geq 2$ and all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, we have \[
\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2\leq 0.
\]
Then, it follows that inequality \eqref{second_prop_ineq} is equivalent to proving that, for all $n\in\mathbb{N}$ with $n\geq 2$ and all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, it holds: \begin{align}\label{main_ineq_42}
2\Sigma_{n-2}^2\geq \Sigma_{n-1}\Sigma_{n-3}.
\end{align}
In this case, we shall not split the analysis into two separate comparisons (bounding one factor from the left-hand side by another from the right-hand side and then multiplying both inequalities). Instead, it is easier to consider both factors at the same time. First of all, we rewrite both sides of \eqref{main_ineq_42} as: \begin{align*}
\Sigma_{n-2}^2=\Pi_n^2\cdot\sum_{k=1}^{n-1}\sum_{j=1}^{n-1}\sum_{i=j+1}^n\sum_{\ell=k+1}^n \pi_{k,j,i,\ell} \ \,\hbox{ and }\ \, \Sigma_{n-1}\Sigma_{n-3}=\Pi_n^2\cdot\sum_{k=1}^n\sum_{j=1}^{n-2}\sum_{i=j+1}^{n-1}\sum_{\ell=i+1}^n\pi_{k,j,i,\ell},
\end{align*}
where, \[
\pi_{k,j,i,\ell}:=\big(x^2-(k-\tfrac{1}{2})^2\big)^{-1}\big(x^2-(j-\tfrac{1}{2})^2\big)^{-1}\big(x^2-(i-\tfrac{1}{2})^2\big)^{-1}\big(x^2-(\ell-\tfrac{1}{2})^2\big)^{-1}.
\]
Similarly as before, we shall compare each addend on the right-hand side of \eqref{main_ineq_42} with a corresponding (properly chosen) addend on the left-hand side. The idea of the proof is to show that each quadruple in the list defined by all possible combinations $(k,j,i,\ell)$ associated to the four sums on the right-hand side can be mapped to a proper permutation of itself, so that the resulting quadruple $(\sigma(k),\sigma(j),\sigma(i),\sigma(\ell))$ belongs to the list of possible combinations associated to the four sums on the left-hand side. Of course, the main difficulty in doing this is that the two lists do not coincide, and it is actually not possible to simply match them via the identity map. However, by taking advantage of the factor $2$ in \eqref{main_ineq_42}, together with the fact that all terms on both sides are non-negative\footnote{Since each addend is composed of the product of four simultaneously-non-positive factors.} for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, we shall show that it is possible to map all of these elements from one list to the other, using each element of the left-hand side list at most two times. Notice that the desired inequality follows once this procedure is carried out.
\medskip
In fact, first of all notice that $\pi_{k,j,i,\ell}$ is invariant under permutations, that is, for any quadruple $(k,j,i,\ell)\in\{1,...,n\}^4$ we have \[
\pi_{k,j,i,\ell}=\pi_{\sigma(k),\sigma(j),\sigma(i),\sigma(\ell)},
\]
for any injective function $\sigma:\{k,j,i,\ell\}\to\{k,j,i,\ell\}$. Now, we define $\Gamma_{\mathrm{RHS}}$ and $\Gamma_{\mathrm{LHS}}$, the sets of indexes of all possible combinations associated with each side of \eqref{main_ineq_42}:
\begin{align*}
\Gamma_{\mathrm{RHS}}&:=\big\{(k,j,i,\ell)\in \{1,...,n\}^4: \ k\leq n-1,\ j<i<\ell\big\},
\\ \Gamma_{\mathrm{LHS}}&:= \big\{(k,j,i,\ell)\in\{1,...,n\}^4: \ k< \ell,\ j<i \big\}.
\end{align*}
We remark that we have excluded the case $k=n$ in the definition of $\Gamma_{\mathrm{RHS}}$. The reason behind this is to be able to match (as a first case) both lists more easily (since $k=n$ is not allowed on the left-hand side of \eqref{main_ineq_42}). We shall address this exceptional case at the end of the proof. In this sense, one important (yet trivial) observation is that the cardinalities of the sets of all possible combinations associated to each side are given by \[
\big\vert\Gamma_{\mathrm{LHS}}\big\vert= \dfrac{n^2(n-1)^2}{4},\quad \hbox{and} \quad \big\vert\Gamma_{\mathrm{RHS}}^{+n}\big\vert=\dfrac{n^2(n^2-3n+2)}{6},
\]
where $ \Gamma_{\mathrm{RHS}}^{+n}:=\big\{(k,j,i,\ell)\in\{1,...,n\}^4:\ j<i<\ell\big\}$. Additionally, for all $n\geq 2$, we have $\vert\Gamma_{\mathrm{LHS}}\vert\geq \vert\Gamma_{\mathrm{RHS}}^{+n}\vert$. Having said that, as remarked before, we shall split the set of indexes given by the right-hand side and map them into the set of indexes appearing in the left-hand side. Having all of this in mind, we split the analysis into three main steps.
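Before proceeding, let us record a quick sanity check of these counts: for $n=3$, the left-hand side list consists of the $\binom{3}{2}^2=9$ quadruples with $k<\ell$ and $j<i$, in agreement with $\tfrac{9\cdot4}{4}=9$, while $\big\vert\Gamma_{\mathrm{RHS}}^{+n}\big\vert=\tfrac{9\cdot2}{6}=3$, the three quadruples being $(k,1,2,3)$ with $k\in\{1,2,3\}$.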
\medskip
\textbf{Case $j\geq k-1$:} In this case, by the definition of both sets $\Gamma_{\mathrm{RHS}}$ and $\Gamma_{\mathrm{LHS}}$ we trivially have that: \[
\hbox{if } \, (k,j,i,\ell)\in\Gamma_{\mathrm{RHS}}, \, \hbox{ then } \ (k,j,i,\ell)\in\Gamma_{\mathrm{LHS}}.
\]
In fact, it is enough to notice that, if $(k,j,i,\ell)\in\Gamma_{\mathrm{RHS}}$, then, by the definition of $\Gamma_{\mathrm{RHS}}$ it follows \[
\ell\geq i+1\geq j+2\geq k+1,
\]
where we have used the fact that $j\geq k-1$ to obtain the latter inequality. Hence, we deduce that, in this case, it is enough to map $(k,j,i,\ell)$ to itself.
\medskip
\textbf{Case $k\geq j+2$:} Let us consider any quadruple $(k,j,i,\ell)\in\Gamma_{\mathrm{RHS}}$ with $k\geq j+2$. We split the analysis into three different sub-cases.
\begin{itemize}
\item Case $\ell\geq k+1$. Again, since $\ell\geq k+1$, by definition of $\Gamma_{\mathrm{LHS}}$ it immediately follows that \[
(k,j,i,\ell)\in\Gamma_{\mathrm{LHS}}.
\]
\item Case $\ell=k$. In this case we permute the coordinates in the following way: \[
(k,j,i,k)\mapsto (\widetilde{k},\widetilde{j},\widetilde{i},\widetilde{\ell}) \quad \hbox{where} \quad \widetilde{k}=j,\ \widetilde{j}=i, \ \widetilde{i}=k, \ \widetilde{\ell}=k.
\]
With these definitions it is not hard to see that $(\widetilde{k},\widetilde{j},\widetilde{i},\widetilde{\ell})\in\Gamma_{\mathrm{LHS}}$. In fact, it is enough to notice that, on the one hand, by definition of $\Gamma_{\mathrm{RHS}}$ we have $\ell\geq i+1\geq j+2$, while on the other hand, by hypothesis $\ell=k$. Then, it follows that \[
k=\widetilde{\ell}\geq \widetilde{k}+2=j+2 \quad \hbox{ and } \quad k=\widetilde{i}\geq \widetilde{j}+1=i+1.
\]
We point out that this quadruple $(\widetilde{k},\widetilde{j},\widetilde{i},\widetilde{\ell})$ has already been used in the first case ``$j\geq k-1$''. However, notice that, since $n-1\geq k=\widetilde{\ell}$, in the present situation we never reach a quadruple of the form $(\cdot,\cdot,\cdot,n)$. This fact shall be important at the end of the proof.
\item Case $\ell\leq k-1$. First of all notice that, if $\ell\leq k-1$ and $(k,j,i,\ell)\in\Gamma_{\mathrm{RHS}}$, it transpires that $k\geq j+3$. Consequently, in this case we permute the first and last entry of the quadruple: \[
(k,j,i,\ell)\mapsto \big(\widetilde{k},j,i,\widetilde{\ell}\big) \quad \hbox{ where } \quad \widetilde{k}=\ell \quad \hbox{and}\quad \widetilde{\ell}=k.
\]
Thus, with these definitions we obtain that $\widetilde{\ell}\geq \widetilde{k}+1$, and therefore, $\big(\widetilde{k},j,i,\widetilde{\ell}\big)\in\Gamma_{\mathrm{LHS}}$. Moreover, due to the fact that $\ell\geq j+2$, we infer that $\widetilde{k}\geq j+2$, and hence this quadruple has already been used in the first sub-case of the present case, that is, ``$k\geq j+2$, sub case $\ell\geq k+1$''. Of course, as remarked before, we have \[
\pi_{k,j,i,\ell}=\pi_{\widetilde{k},j,i,\widetilde{\ell}}.
\]
Finally, by the same reason as in the previous case, in the present situation we never reach any quadruple of the form $(\cdot,\cdot,\cdot, n)$.
\end{itemize}
\textbf{Case $k=n$:} By the previous procedure we have used (at most) two times many of the quadruples in the list associated to the left-hand side. However, notice that we have used at most once any quadruple of the form $(\cdot,\cdot,\cdot,n)$. Now, if $(n,j,i,\ell)\in\Gamma_{\mathrm{RHS}}^{+n}$, then $j<n$ and $i<\ell$. Thus, in this case, by taking advantage of the factor $2$ in \eqref{main_ineq_42} again, we map \[
(n,j,i,\ell)\mapsto (j,i,\ell,n)\in\Gamma_{\mathrm{LHS}}.
\]
Therefore, we have mapped each addend of the right-hand side of \eqref{main_ineq_42} to the ``same addend'' (numerically they coincide, due to the invariance of $\pi_{k,j,i,\ell}$ under permutations) appearing in the left-hand side, repeating each addend of the left-hand side at most two times. Finally, notice that any other addend in the left-hand side that has not been used is non-negative. Hence, gathering all the previous analysis we conclude $
2\Sigma_{n-2}^2\geq \Sigma_{n-1}\Sigma_{n-3}$.
\end{proof}
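To illustrate the previous procedure in the simplest non-trivial case $n=3$: the right-hand side list consists of $(1,1,2,3)$ and $(2,1,2,3)$ (both falling into the case $j\geq k-1$, and thus mapped to themselves), together with $(3,1,2,3)$ (the case $k=n$, which is mapped to $(1,2,3,3)\in\Gamma_{\mathrm{LHS}}$). In particular, in this simple case each quadruple on the left-hand side is used at most once.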
Now, before going further and for the sake of simplicity, let us prove the following inequality which shall be useful to treat the remaining addends in $\mathcal{V}_n$.
\begin{lem} Let $n\in\mathbb{N}$ with $n\geq 2$. For all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$ it holds:\begin{align}\label{improimpro}
-(\Pi_n^2-\Pi_{n,0}^2)&\geq 2x^2\Pi_{n,0}^2\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}
\\ & \quad -x^4\Pi_{n,0}^2\left(\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-4}+4\sum_{k=1}^{n-1}\sum_{j=k+1}^n\big(k-\tfrac{1}{2}\big)^{-2}\big(j-\tfrac{1}{2}\big)^{-2}\right).\nonumber
\end{align}
\end{lem}
\begin{proof}
Intuitively, the right-hand side of \eqref{improimpro} corresponds to the first two terms in the expansion of the left-hand side. Moreover, it is not difficult to see (by using the fact that $x\in(-\tfrac{1}{2},\tfrac{1}{2})$) that the right-hand side in \eqref{improimpro} is always non-negative. Now, for the sake of clarity let us start by proving inequality \eqref{improimpro} for the case $n=2$. In fact, in this case the left-hand side becomes \[
-(\Pi_2^2-\Pi_{2,0}^2)=-x^8+5x^6-\tfrac{59}{8}x^4+\tfrac{45}{16}x^2.
\]
On the other hand, when $n=2$ both terms in the right-hand can be simply computed as: \[
2x^2\Pi_{2,0}^2\sum_{k=1}^2\big(k-\tfrac{1}{2}\big)^{-2}=\tfrac{45}{16}x^2,
\]
and \[
-x^4\Pi_{2,0}^2\sum_{k=1}^2\big(k-\tfrac{1}{2}\big)^{-4}-4x^4\Pi_{2,0}^2\sum_{k=1}^1\sum_{j=2}^2\big(k-\tfrac{1}{2}\big)^{-2}\big(j-\tfrac{1}{2}\big)^{-2}=-\tfrac{59}{8}x^4.
\]
Therefore, by noticing that $-x^8+5x^6\geq0$ for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$ we conclude the case $n=2$. For the general case, after trivial rearrangements, we can rewrite inequality \eqref{improimpro} as: \begin{align}\label{equi}
&\Pi_{n,0}^2-2x^2\Pi_{n,0}^2\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}
\\ & \quad +x^4\Pi_{n,0}^2\left(\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-4}+4\sum_{k=1}^{n-1}\sum_{j=k+1}^n\big(k-\tfrac{1}{2}\big)^{-2}\big(j-\tfrac{1}{2}\big)^{-2}\right)\geq \Pi_n^2 \nonumber
\end{align}
Since we have already proved the case $n=2$, from now on we shall assume that $n\geq 3$. Hence, it is enough to prove \eqref{equi}. In order to do that, we express $\Pi_n^2$ as: \begin{align}\label{expansion_pi}
\Pi_n^2=a_0^2-a_1^2x^2+a_2^2x^4\mp...+a_{2n}^2x^{4n}.
\end{align}
By explicit computations it is not difficult to check that, for any $n\in\mathbb{N}$ with $n\geq 3$ we have \begin{align*}
a_0^2&=\Pi_{n,0}^2, \qquad a_1^2=2\Pi_{n,0}^2\sum_{k=1}^n \big(k-\tfrac{1}{2}\big)^{-2},
\\ a_2^2&=\Pi_{n,0}^2\left(\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-4}+4\sum_{k=1}^{n-1}\sum_{j=k+1}^n\big(k-\tfrac{1}{2}\big)^{-2}\big(j-\tfrac{1}{2}\big)^{-2}\right).
\end{align*}
Therefore, by plugging these identities into \eqref{equi}, and after direct cancellations, we deduce that the problem is equivalent to proving that \[
0\geq -a_3^2x^6+a_4^2x^8\mp....+a_{2n}^2x^{4n}=:g(x),
\]
where $a_3^2,...,a_{2n}^2$ are the coefficients appearing in \eqref{expansion_pi}. Now, we group the addends in the definition of $g(x)$ into pairs of ``easier'' addends as:\[
g(x)=(-a_3^2x^6+a_4^2x^8)+(-a_5^2x^{10}+a_6^2x^{12})+...+(-a_{2n-1}^2x^{4n-2}+a_{2n}^2x^{4n}).
\]
Now, we claim that for all $m\in\{2,...,n\}$ the following holds:
\begin{align}\label{equiv_impro_proof}
4a_{2m-1}^2\geq a_{2m}^2.
\end{align}
Notice that, assuming for the moment that the claim is true, and gathering the latter inequality together with the fact that $x\in (-\tfrac{1}{2},\tfrac{1}{2})$, we infer that, for all $m\in\{2,...,n\}$, \[
0\geq -a_{2m-1}^2x^{4m-2}+a_{2m}^2x^{4m}=x^{4m-2}(-a_{2m-1}^2+a_{2m}^2x^2) \quad \hbox{ for all } \, x\in (-\tfrac{1}{2},\tfrac{1}{2}).
\]
Clearly this would conclude the proof of inequality \eqref{equi}, and hence the proof of the lemma. Now, for the sake of clarity let us start by explicitly writing the first two cases ($a_3$ and $a_4$). In fact, by explicit computations we have: \begin{align*}
a_3^2&=2\Pi_{n,0}^2\sum_{i_1=1}^n\sum_{\substack{i_2=1 \\ i_2\neq i_1}}^n\big(i_1-\tfrac{1}{2}\big)^{-4}\big(i_2-\tfrac{1}{2}\big)^{-2}
\\ & \quad +8\Pi_{n,0}^2\sum_{i_1=1}^{n-2}\sum_{i_2=i_1+1}^{n-1}\sum_{i_3=i_2+1}^n\big(i_1-\tfrac{1}{2}\big)^{-2}\big(i_2-\tfrac{1}{2}\big)^{-2}\big(i_3-\tfrac{1}{2}\big)^{-2},
\\ a_4^2&=\Pi_{n,0}^2\sum_{i_1=1}^{n-1}\sum_{i_2=i_1+1}^n\big(i_1-\tfrac{1}{2}\big)^{-4}\big(i_2-\tfrac{1}{2}\big)^{-4}
\\ & \quad + 4\Pi_{n,0}^2\sum_{i_1=1}^n\sum_{\substack{i_2=1 \\ i_2\neq i_1}}^n\sum_{\substack{i_3=i_2+1 \\ i_3\neq i_1}}^n\big(i_1-\tfrac{1}{2}\big)^{-4}\big(i_2-\tfrac{1}{2}\big)^{-2}\big(i_3-\tfrac{1}{2}\big)^{-2}
\\ & \quad +16\Pi_{n,0}^2\sum_{i_1=1}^{n-3}\sum_{i_2=i_1+1}^{n-2}\sum_{i_3=i_2+1}^{n-1}\sum_{i_4=i_3+1}^{n} \big(i_1-\tfrac{1}{2}\big)^{-2}\big(i_2-\tfrac{1}{2}\big)^{-2}\big(i_3-\tfrac{1}{2}\big)^{-2}\big(i_4-\tfrac{1}{2}\big)^{-2}.
\end{align*}
Now, for the general case we distinguish two different cases, each of which in turn comprises two different sub-cases (one for $a_{2m-1}$ and one for $a_{2m}$). The main difference between these inner sub-cases comes from the fact that $2m-1$ is always odd while $2m$ is always even.
\medskip
\textbf{Case $m\leq n$:} By basic combinatorial arguments it is not difficult to see that $a_{2m-1}$ can be explicitly written as the sum of $m$ different types of terms. In fact, in order to do that, let us start by describing the set of indexes that defines each of these terms. Indeed, for $k=1,...,m$ we define the sets $\Gamma_1^{2m-1},...,\Gamma_m^{2m-1}$ as:
\begin{align*}
\Gamma_k^{2m-1}&:=\big\{(i_1,...,i_{m+k-1})\in\mathbb{N}^{m+k-1}: \ 1\leq i_1<...<i_{m-k}\leq n,
\\ & \qquad \ 1\leq i_{m-k+1}<...<i_{m+k-1}\leq n, \ \, i_{m-k+1},...,i_{m+k-1}\notin\{i_1,...,i_{m-k}\} \big\}.
\end{align*}
In other words, each $\Gamma^{2m-1}_k$ is composed of two different types of indexes. First we have $(m-k)$ indexes which are internally ordered. Then, we have the remaining $(2k-1)$ indexes, which are simultaneously internally ordered (and never coincide with the previous ones). Intuitively, the first $(m-k)$ indexes shall be associated to the factors with power $-4$ in the sums below, while the remaining $(2k-1)$ indexes shall be associated to the factors with power $-2$.
Then, taking advantage of the definition of $\Gamma^{2m-1}_k$ we can write $a_{2m-1}$ as\footnote{This can be seen as having two different copies of a list of $n$ elements. If we choose $2m-1$ elements out of the ``extended list'' of $2n$ elements, each element can be chosen in two different ways. The $m$ different types of terms (and the motivation for defining these $\Gamma_k^{2m-1}$) are associated to the number of repeated elements we choose.}:
\begin{align}\label{def_a_2m_1}
a_{2m-1}&= \Pi_{n,0}^2\sum_{k=1}^m 2^{2k-1}\sum_{(i_1,...,i_{m+k-1})\in\Gamma_k^{2m-1}}\big(i_1-\tfrac{1}{2}\big)^{-4}...\, \big(i_{m-k}-\tfrac{1}{2}\big)^{-4}\times \nonumber
\\ & \qquad \times \big(i_{m-k+1}-\tfrac{1}{2}\big)^{-2}...\, \big(i_{m+k-1}-\tfrac{1}{2}\big)^{-2}.
\end{align}
A few words to clarify the limit cases: notice that, when $k=m$, the inner sum is composed only of factors with power $-2$, while in the case $k=1$ there is only one factor with power $-2$ (exactly as in the definition of the sets $\Gamma_m^{2m-1}$ and $\Gamma_1^{2m-1}$, respectively).
\medskip
Now for $a_{2m}$, it is not difficult to see that $a_{2m}$ can be expressed as the sum of $m+1$ different types of terms. Similarly as before, we start by describing the set of indexes for each of these sums. In fact, for $k=0,...,m$ we define the sets $\Gamma_{0}^{2m},...,\Gamma_m^{2m}$ as:
\begin{align*}
\Gamma_k^{2m}&:=\big\{(i_1,...,i_{m+k})\in\mathbb{N}^{m+k}: \ 1\leq i_1<...<i_{m-k}\leq n,
\\ & \qquad \ 1\leq i_{m-k+1}<...<i_{m+k}\leq n, \ \, i_{m-k+1},...,i_{m+k}\notin\{i_1,...,i_{m-k}\}\big\}.
\end{align*}
We emphasize that in this case $k$ starts at $k=0$ (in contrast with the previous case). Of course, in the present case as well as in the previous one, whenever a set of indexes becomes empty, the corresponding constraint simply does not apply. For example, in the latter definition, when $k=0$, the inequality \[
1\leq i_{m-k+1}<...<i_{m+k}\leq n,
\]
always holds (it is vacuously true, since the index $i_{m-k+1}$ does not exist). Then, by taking advantage of the definition of $\Gamma_k^{2m}$, we can express $a_{2m}$ as: \begin{align}\label{def_a_2m}
a_{2m}&:= \Pi_{n,0}^2\sum_{k=0}^m 2^{2k}\sum_{(i_1,...,i_{m+k})\in\Gamma_k^{2m}}\big(i_1-\tfrac{1}{2}\big)^{-4}...\, \big(i_{m-k}-\tfrac{1}{2}\big)^{-4}\times\nonumber
\\ & \qquad \ \times \big(i_{m-k+1}-\tfrac{1}{2}\big)^{-2}...\, \big(i_{m+k}-\tfrac{1}{2}\big)^{-2}.
\end{align}
Finally, it is not too difficult to prove \eqref{equiv_impro_proof} by using the previous expressions and by recalling the following standard (but useful) identities:
\begin{align}\label{tech_ident_proof}
\sum_{i=2}^\infty \big(i-\tfrac{1}{2}\big)^{-2}=\tfrac{\pi^2-8}{2}<1 \quad \hbox{ and }\quad \sum_{i=2}^\infty \big(i-\tfrac{1}{2}\big)^{-4}=\tfrac{\pi^4-96}{6}<1.
\end{align}
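For completeness, let us point out that both identities follow from the classical series $\sum_{i\geq1}(2i-1)^{-2}=\tfrac{\pi^2}{8}$ and $\sum_{i\geq1}(2i-1)^{-4}=\tfrac{\pi^4}{96}$: since $(i-\tfrac{1}{2})^{-2}=4(2i-1)^{-2}$, subtracting the $i=1$ terms we obtain \[
\sum_{i=2}^\infty \big(i-\tfrac{1}{2}\big)^{-2}=\tfrac{\pi^2}{2}-4=\tfrac{\pi^2-8}{2} \qquad \hbox{and}\qquad \sum_{i=2}^\infty \big(i-\tfrac{1}{2}\big)^{-4}=\tfrac{\pi^4}{6}-16=\tfrac{\pi^4-96}{6}.
\]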
In fact, having all of the previous identities and definitions at hand, the idea is to notice that, except for the case $i=1$, all factors $(i-\tfrac{1}{2})^{-1}$ are smaller than $1$. Even more, as the previous identities show, their squares and fourth powers are summable over $i\geq2$, with sums smaller than $1$. This motivates us to compare the sums over the set $\Gamma_k^{2m-1}$ with the ones associated to $\Gamma_{k}^{2m}$. We point out that, for each $k=1,...,m$ (we skip the case $k=0$ for the moment), the vectors in $\Gamma_{k}^{2m}$ have exactly one more coordinate than the ones in $\Gamma_k^{2m-1}$. Then, if $i_{m+k-1}\neq n$, for any $(i_1,...,i_{m+k-1})\in\Gamma_k^{2m-1}$ we define the restriction set \begin{align*}
\Gamma_k^{2m}[i_1,...,i_{m+k-1}]&:=\big\{(j_1,...,j_{m+k})\in\mathbb{N}^{m+k}: \ j_1=i_1,...,\ j_{m+k-1}=i_{m+k-1},
\\ & \qquad \ (j_1,...,j_{m+k})\in \Gamma_k^{2m} \big\}.
\end{align*}
Then, by using \eqref{tech_ident_proof} it immediately follows that \begin{align}\label{restricted_i}
&\big(i_1-\tfrac{1}{2}\big)^{-4}...\big(i_{m-k}-\tfrac{1}{2}\big)^{-4}\big(i_{m-k+1}-\tfrac{1}{2}\big)^{-2}...\big(i_{m+k-1}-\tfrac{1}{2}\big)^{-2}\geq
\\ & \quad \geq \sum_{(j_1,...,j_{m+k})\in \Gamma_k^{2m}[i_1,...,i_{m+k-1}]}\big(j_1-\tfrac{1}{2}\big)^{-4}...\big(j_{m-k}-\tfrac{1}{2}\big)^{-4}\big(j_{m-k+1}-\tfrac{1}{2}\big)^{-2}...\big(j_{m+k}-\tfrac{1}{2}\big)^{-2}\nonumber
\end{align}
Notice that, by gathering inequality \eqref{restricted_i} for all $(i_1,...,i_{m+k-1})\in\Gamma_{k}^{2m-1}$ with $i_{m+k-1}\neq n$, we obtain exactly the sum over $\Gamma_k^{2m}$ on the right-hand side. However, the resulting sum on the left-hand side produced by this procedure is strictly smaller than the sum over all possible indexes in $\Gamma_{k}^{2m-1}$, since we have never used any index with $i_{m+k-1}=n$. Finally, notice that for fixed $k\in\{1,...,m\}$, the corresponding sum over $\Gamma_k^{2m}$ in the definition of $a_{2m}$ (see \eqref{def_a_2m} above) carries an extra factor $2$ with respect to the same term in $a_{2m-1}$ (see \eqref{def_a_2m_1} above). Therefore, taking into account this extra multiplicative factor $2$ on the right-hand side, the analysis above ensures that \begin{align*}
2a_{2m-1}&\geq \Pi_{n,0}^2\sum_{k=1}^m 2^{2k}\sum_{(i_1,...,i_{m+k})\in\Gamma_k^{2m}}\big(i_1-\tfrac{1}{2}\big)^{-4}...\, \big(i_{m-k}-\tfrac{1}{2}\big)^{-4}\times
\\ & \quad \ \times \big(i_{m-k+1}-\tfrac{1}{2}\big)^{-2}...\, \big(i_{m+k}-\tfrac{1}{2}\big)^{-2}.
\end{align*}
Finally, we shall use the remaining $2a_{2m-1}$ in the left-hand side of \eqref{equiv_impro_proof} to bound the sum associated to $\Gamma_0^{2m}$. In fact, it is easy to see from the definitions of $\Gamma_i^{2m}$ that $\Gamma_0^{2m}\subseteq \Gamma_1^{2m-1}$. Then, for any $(i_1,...,i_m)\in\Gamma_0^{2m}$, since the last entry always satisfies $(i_{m}-\tfrac{1}{2})^{-1}<1$, we infer \begin{align}\label{restricted_i_2}\big(i_1-\tfrac{1}{2}\big)^{-4}...\big(i_{m-1}-\tfrac{1}{2}\big)^{-4}\big(i_{m}-\tfrac{1}{2}\big)^{-2}\geq \big(i_1-\tfrac{1}{2}\big)^{-4}...\big(i_{m}-\tfrac{1}{2}\big)^{-4}.
\end{align}
Gathering inequality \eqref{restricted_i_2} associated to all possible $(i_1,...,i_m)\in\Gamma_0^{2m}$ we obtain that \[
2a_{2m-1}\geq \Pi_{n,0}^2\sum_{(i_1,...,i_{m})\in\Gamma_0^{2m}}\big(i_1-\tfrac{1}{2}\big)^{-4}...\, \big(i_{m}-\tfrac{1}{2}\big)^{-4},
\]
and therefore $4a_{2m-1}^2\geq a_{2m}^2$, which finishes the proof of the lemma for the case $m\leq n$. The case $m> n$ follows exactly the same lines (up to obvious modifications) and hence we omit it.
\end{proof}
With this lemma at hand we are able to handle the remaining terms in $\mathcal{V}_n$, that is, the sum of $\mathbf{A}$ with the first addends in $\mathbf{B}$ and $\mathbf{C}$. We recall that, in the definition of $\mathcal{V}_n$, the factors $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ are multiplied by $x^0$, $x^2$ and $x^4$, respectively. Notice that the next proposition concludes the proof of the non-negativity of $\mathcal{V}_n$ in $(-\tfrac{1}{2},\tfrac{1}{2})$.
\begin{prop}\label{mon_prop_1} Let $n\in\mathbb{N}$ with $n\geq 2$. For all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$ it holds:
\begin{align}\label{main_ineq_mon}
-\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2\Sigma_{n-1}^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3+4x^4\Pi_{n,0}^2\Sigma_{n-1}^4\geq0.
\end{align}
\end{prop}
\begin{proof}
In fact, first of all, by factorizing by $\Sigma_{n-1}^2$, we infer that inequality \eqref{main_ineq_mon} is equivalent to proving that, for all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, the following holds: \begin{align}\label{ineq_HH}
\mathrm{H}:=-\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}+4x^4\Pi_{n,0}^2\Sigma_{n-1}^2\geq0.
\end{align}
On the other hand, notice that by using inequality \eqref{improimpro} we have \begin{align*}
\mathrm{H}(x)&\geq x^2\Pi_{n,0}^2\Bigg(2\Pi_n^2\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}+2\Pi_n\Sigma_{n-1}+4x^2\Sigma_{n-1}^2-x^2\Pi_n^2\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-4}
\\ & \quad -4x^2\Pi_n^2\sum_{k=1}^{n-1}\sum_{j=k+1}^n\big(k-\tfrac{1}{2}\big)^{-2}\big(j-\tfrac{1}{2}\big)^{-2}\Bigg)
\\ & = x^2\Pi_{n,0}^2\Pi_n^2\sum_{k=1}^n\Bigg(2\big(k-\tfrac{1}{2})^{-2}-2\big((k-\tfrac{1}{2})^2-x^2\big)^{-1}-x^2\big(k-\tfrac{1}{2}\big)^{-4}
\\ & \quad +4x^2\big((k-\tfrac{1}{2})^2-x^2\big)^{-1}\sum_{j=1}^n\big((j-\tfrac{1}{2})^2-x^2\big)^{-1}
\\ & \quad -4x^2\big(k-\tfrac{1}{2}\big)^{-2}\sum_{j=k+1}^n\big(j-\tfrac{1}{2}\big)^{-2}\Bigg)=:x^2\Pi_{n,0}^2\Pi_n^2\sum_{k=1}^n \Delta_k,
\end{align*}
where the last sum before the last equality (the one indexed by $j=k+1,...,n$) must be understood as zero when $k=n$. Then, in order to conclude inequality \eqref{ineq_HH} it is enough to show that $\Delta_k(x)\geq0$ for all $k\in\{1,...,n\}$ and all $x\in(-\tfrac{1}{2},\tfrac{1}{2})$. In fact, first of all, recalling the first inequality in \eqref{tech_ident_proof} we deduce that, for all $n\in\mathbb{N}$ with $n\geq 2$, and any $k\in\{1,...,n\}$, \[
-\sum_{j=k+1}^n\big(j-\tfrac{1}{2}\big)^{-2}\geq -\sum_{j=2}^\infty\big(j-\tfrac{1}{2}\big)^{-2}>-1 \quad \hbox{ and }\quad \sum_{j=1}^n\big(j-\tfrac{1}{2}\big)^{-2}\geq 4,
\]
where the latter inequality simply follows by noticing that, when $j=1$, the first term in the sum above satisfies $(j-\tfrac{1}{2})^{-2}=4$. Thus, by plugging the last two inequalities into the definition of $\Delta_k$, it follows \begin{align*}
\Delta_k&\geq 2\big(k-\tfrac{1}{2})^{-2}-2\big((k-\tfrac{1}{2})^2-x^2\big)^{-1}-x^2\big(k-\tfrac{1}{2}\big)^{-4}
\\ & \quad +16x^2\big((k-\tfrac{1}{2})^2-x^2\big)^{-1}-4x^2\big(k-\tfrac{1}{2}\big)^{-2}=:\widetilde{\Delta}_k.
\end{align*}
Then, since $\widetilde{\Delta}_k(x)$ is even and $\widetilde{\Delta}_k(0)=0$, we infer that, to prove the non-negativity of each $\Delta_k$, it is enough to prove that $\tfrac{d}{dx}\widetilde{\Delta}_k(x)\geq 0$ for all $x\in (0,\tfrac{1}{2})$ and all $k\in\{1,...,n\}$. In fact, first of all, for the sake of simplicity let us start by re-writing $\widetilde{\Delta}_k$ as \begin{align*}
\widetilde{\Delta}_k&:=2\big(k-\tfrac{1}{2})^{-2}+2(8x^2-1)\big((k-\tfrac{1}{2})^2-x^2\big)^{-1}-x^2\big(k-\tfrac{1}{2}\big)^{-2}\big(4+\big(k-\tfrac{1}{2}\big)^{-2}\big).
\end{align*}
Then, by direct computations we get:
\begin{align*}
\dfrac{d}{dx}\widetilde{\Delta}_k(x)&=\dfrac{4x\big(8(k-\tfrac{1}{2})^2-1\big)}{\big((k-\tfrac{1}{2})^2-x^2\big)^2}-\dfrac{2x\big(4(k-\tfrac{1}{2})^{2}+1\big)}{\big(k-\tfrac{1}{2}\big)^4}=:\dfrac{\mathrm{A}}{\mathrm{B}}-\dfrac{\mathrm{C}}{\mathrm{D}}.
\end{align*}
Notice that $\mathrm{A},\,\mathrm{B},\,\mathrm{C},\,\mathrm{D}\geq0$ for $x\in (0,\tfrac{1}{2})$. Hence, it is enough to prove that \[
\mathrm{A}\geq \mathrm{C}\quad \hbox{and}\quad \mathrm{B}^{-1}\geq \mathrm{D}^{-1}.
\]
We point out that inequality $\mathrm{B}^{-1}\geq \mathrm{D}^{-1}$ follows directly. In fact, recalling that $x\in(-\tfrac{1}{2},\tfrac{1}{2})$, we deduce \[
\dfrac{1}{\mathrm{B}}=\dfrac{1}{\big((k-\tfrac{1}{2})^2-x^2\big)^2}\geq \dfrac{1}{(k-\tfrac{1}{2})^4}=\dfrac{1}{\mathrm{D}}.
\]
Then, it only remains to prove that $\mathrm{A}\geq \mathrm{C}$. In fact, by direct computations it immediately follows that, for $x\in (0,\tfrac{1}{2})$ and $k\in\{1,...,n\}$, we have \[
\mathrm{A}-\mathrm{C}=4x\big(8(k-\tfrac{1}{2})^2-1\big)-2x\big(4(k-\tfrac{1}{2})^2+1\big)=6x\big(4(k-\tfrac{1}{2})^2-1\big)=24kx\big(k-1\big)\geq0.
\]
Therefore, $\tfrac{d}{dx}\widetilde{\Delta}_k(x)\geq 0$ for all $x\in(0,\tfrac{1}{2})$ and all $k\in \{1,...,n\}$, which concludes the proof.
\end{proof}
\begin{proof}[End of the proof of Theorem \ref{thm_monotonicity}]
We start by pointing out that, by gathering Lemmas \ref{mon_lem_1} and \ref{mon_lem_2} together with Proposition \ref{mon_prop_1}, we conclude that, for all $x\in (-\tfrac{1}{2},\tfrac{1}{2})$, it holds: \[
-\dfrac{d^2}{dx^2}\dfrac{V_{n}(x)-V_n(0)}{(V_{n}'(x))^2}=\dfrac{3V_{n}''\big( (V_{n}')^2-2(V_{n}-V_n(0))V_{n}''\big)+2(V_{n}-V_n(0))V_{n}'V_{n}'''}{(V_{n}')^4}\geq 0.
\]
Additionally, it is not difficult to see from the inequalities above that, whenever $x\neq 0$, the latter inequality holds strictly\footnote{It is enough to notice that, for example, some of the previous lemmas are proven by showing that a certain even function $f(x)$ satisfies $f(0)=0$ with $f'(x)>0$ for $x\in(0,\tfrac{1}{2})$.}. However, due to the factor $(V_n')^4$, the latter quantity might have a singularity at $x=0$. Thus, it only remains to prove that, when $x$ goes to zero, the latter quantity is well-defined and strictly positive. Notice that this shall conclude the proof of the theorem by applying the main result in \cite{Chi}. Indeed, let us start by pointing out that, from the explicit formula of $V_{n}'$ in \eqref{comp_v_x}, we infer that $(V_{n}')^4$ vanishes exactly to order $x^4$ at the origin. Then, we must first prove that $\mathcal{V}_{n}$ vanishes at least to order $x^4$ as well. In order to do this, we start by recalling that $\tfrac{1}{96}\mathcal{V}_n=\mathbf{A}+\mathbf{B}x^2+\mathbf{C}x^4$, where $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ are given by: \begin{align*}
\mathbf{A}&:=-\big(\Pi_n^2-\Pi_{n,0}^2\big)\Pi_n^2\Sigma_{n-1}^2,
\\ \mathbf{B}&:=2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3-4(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-1}\Sigma_{n-2},
\\ \mathbf{C}&:=4\Pi_{n,0}^2\Sigma_{n-1}^4
+8\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^2\Sigma_{n-2}+8(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-1}\Sigma_{n-3}
\\ & \quad \ \, -16(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-2}^2.
\end{align*}
Hence, in the sequel we seek to prove that $\lim_{x\to0}\tfrac{1}{x^4}\mathcal{V}_n(x)$ exists and is strictly positive. We split the analysis into two steps. First, we intend to prove that \begin{align}\label{second_order}
\lim_{x\to 0}\dfrac{1}{x^2}\big(\mathbf{A}+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3\big)=0.
\end{align}
It is worth noticing that the latter limit ensures that the quantity inside the parenthesis in \eqref{second_order} behaves (at least) as $x^4$ near zero. In fact, first of all, recall that in the proof of \eqref{improimpro} we have already shown that \begin{align}\label{approx_thetas}
-(\Pi_n^2-\Pi_{n,0}^2)&=2x^2\Pi_{n,0}^2\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}\nonumber
\\ & \quad \,-x^4\Pi_{n,0}^2\left(\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-4}+4\sum_{k=1}^{n-1}\sum_{j=k+1}^n\big(k-\tfrac{1}{2}\big)^{-2}\big(j-\tfrac{1}{2}\big)^{-2}\right)+O(x^6)\nonumber
\\ & =:\Theta_{2}(x)+\Theta_{4}(x)+O(x^6),
\end{align}
where $\Theta_2(x)$ and $\Theta_4(x)$ denote the terms of order $x^2$ and $x^4$ respectively. Then, we gather the term in $\mathbf{A}$ associated with $\Theta_2$ with the first term appearing in $\mathbf{B}$. Specifically, we group \begin{align}\label{theta_2}
\Theta_2\Pi_n^2\Sigma_{n-1}^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3&=2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^2\left(\Pi_n\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}+\Sigma_{n-1}\right).
\end{align}
Then, it is enough to notice the following trivial identities: \begin{align}\label{lim_sigma_1}
&\lim_{x\to0}\Pi_n=(-1)^{n}\Pi_{n,0} \quad \hbox{and} \quad \lim_{x\to0}\Sigma_{n-1}=(-1)^{n-1}\Pi_{n,0}\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}.
\end{align}
By plugging the last two identities into the parenthesis in \eqref{theta_2} we conclude the proof of \eqref{second_order}. In particular, we infer that $
\Theta_2\Pi_n^2\Sigma_{n-1}^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3=O(x^4)$. Similarly, now we gather the last term in $\mathbf{B}$ with the second one in $\mathbf{C}$. Specifically, we group (recall that the terms associated to $\mathbf{C}$ in $\mathcal{V}_n$ have an extra $x^2$ with respect to the ones in $\mathbf{B}$): \begin{align}\label{groupsII}
-4(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-1}\Sigma_{n-2}+8x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^2\Sigma_{n-2}.
\end{align}
However, by using the second identity in \eqref{lim_sigma_1} and \eqref{approx_thetas} again, it is easy to see that \[
\lim_{x\to 0}\dfrac{1}{x^2}\big(4(\Pi_n^2-\Pi_{n,0}^2)\Pi_n^2\Sigma_{n-1}\Sigma_{n-2}-8x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^2\Sigma_{n-2}\big)=0.
\]
Hence, due to the extra $x^2$ factor, the terms appearing in $\mathcal{V}_n$ associated to \eqref{groupsII} are of order $O(x^6)$. It is worth noticing that, except for the first term in $\mathbf{C}$, all the remaining terms appearing in $\mathcal{V}_n$ that we have not treated so far are also of order $O(x^6)$. Consequently, the problem is reduced to the study of the following limit: \[
\lim_{x\to 0}\dfrac{1}{x^4}\big((\Theta_2+\Theta_4)\Pi_n^2\Sigma_{n-1}^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3+4x^4\Pi_{n,0}^2\Sigma_{n-1}^4\big).
\]
For the sake of simplicity let us start with some direct computations. In fact, on the one hand we have: \begin{align*}
\Pi_n=\widehat{a}_0+\widehat{a}_1x^2+O(x^4), \quad &\hbox{where} \quad \widehat{a}_1=\Sigma_{n-1}(0),
\\ \Sigma_{n-1}=\widetilde{a}_0+\widetilde{a}_1x^2+O(x^4), \quad &\hbox{where}\quad \widetilde{a}_1=2\Sigma_{n-2}(0).
\end{align*}
Thus, gathering \eqref{theta_2} with the last identities we infer \begin{align}\label{theta2_x4}
&\lim_{x\to0}\dfrac{1}{x^4}\big(\Theta_2\Pi_n^2\Sigma_{n-1}^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3\big)=\nonumber
\\ & \qquad =2(-1)^n\Pi_{n,0}^3\Sigma_{n-1}^2(0)\left(\Sigma_{n-1}(0)\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-2}+2\Sigma_{n-2}(0)\right).
\end{align}
On the other hand, by taking limit directly in the definition of $\Theta_4(x)$ we obtain
\[
\lim_{x\to0}\dfrac{1}{x^4}\Theta_4(x)\Pi_n^2\Sigma_{n-1}^2=-\Pi_{n,0}^4\Sigma_{n-1}^2(0)\left(\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-4}+4\sum_{k=1}^{n-1}\sum_{j=k+1}^n\big(k-\tfrac{1}{2}\big)^{-2}\big(j-\tfrac{1}{2}\big)^{-2}\right).
\]
Finally, we gather the previous terms, that is, we gather the parenthesis in \eqref{theta2_x4} with the terms associated with the latter limit and the first term associated with $\mathbf{C}(x)$. More specifically, we group \begin{align*}
(\Theta_2+\Theta_4)\Pi_n^2\Sigma_{n-1}^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3+4x^4\Pi_{n,0}^2\Sigma_{n-1}^4.
\end{align*}
By using the last two limits, identity \eqref{lim_sigma_1} once again, and then performing some direct cancellations we obtain \begin{align*}
&\lim_{x\to0}\dfrac{1}{x^4}\left((\Theta_2+\Theta_4)\Pi_n^2\Sigma_{n-1}^2+2x^2\Pi_{n,0}^2\Pi_n\Sigma_{n-1}^3+4x^4\Pi_{n,0}^2\Sigma_{n-1}^4\right)=
\\ & \qquad = \Pi_{n,0}^4\Sigma_{n-1}^2(0)\sum_{k=1}^n\big(k-\tfrac{1}{2}\big)^{-4}+4(-1)^n\Pi_{n,0}^3\Sigma_{n-1}^2(0)\Sigma_{n-2}(0).
\end{align*}
Finally, since $(-1)^n\Sigma_{n-2}(0)>0$, we conclude the proof of the theorem.
\end{proof}
\medskip
\section{Spectral analysis \label{sec:SpecAna}}
In this section, we use the monotonicity of the period map $L$ with respect to the energy level $\beta$ to analyze
the spectrum of the linearized operator associated with the traveling wave
obtained in the previous section. From now on, with no loss of generality and in addition to the hypothesis in Theorem \ref{thm_monotonicity}, we shall assume $0\leq c<1$.
\subsection{Spectrum of the scalar linearized operator}
Our goal now is to study the spectral information associated to the scalar linear operator $\mathcal{L}_c$. Let us start by recalling that the odd traveling wave solution constructed in the last section satisfies
\begin{align}\label{trav_eq}
-\omega\phi_{c}'' +V_{n}'\left(\phi_{c}\right)=0,
\end{align}
where $\omega=1-c^2$. Then, the linearized operator $\mathcal{L}_{c}$ around $\phi_c$ is given by: \begin{align}\label{scalar_linear_op}
\mathcal{L}_{c}:=-\omega \partial_x^2+V_{n}''\left(\phi_{c}\right).
\end{align}
It is worth noticing that $\mathcal{L}_c$ defines a self-adjoint operator on
$L^{2}\left(\mathbb{T}_{L}\right)$ with domain $H^{2}\left(\mathbb{T}_{L}\right)$. According to the Oscillation Theorem, see Magnus-Winkler \cite{MaWi}, the spectrum of $\mathcal{L}_{c}$ is formed by a sequence of real eigenvalues $\{\sigma_m\}_{m\geq0}$, bounded from below and going to $+\infty$ as $m$ goes to $+\infty$. More specifically, we can list the eigenvalues of $\mathcal{L}_c$ as
\[
\sigma_{0}<\sigma_{1}\leq\sigma_{2}<\sigma_{3}\leq\sigma_{4}<\cdots<\sigma_{2m-1}\leq\sigma_{2m}<\cdots.
\]
Moreover, the spectrum of $\mathcal{L}_{c}$ is also characterized by the number
of zeros of the corresponding eigenfunctions. Then, in order to analyze the (in)stability problem of traveling wave solutions, it is helpful to start by studying the spectrum of $\mathcal{L}_c$ in more detail. We start by recalling two results of Floquet theory that shall be useful in the sequel.
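\medskip
To make the above eigenvalue listing concrete, the following short numerical sketch discretizes a Hill operator of the form \eqref{scalar_linear_op} by periodic finite differences and prints the bottom of its spectrum. We stress that the potential $q$ below is only a stand-in for $V_n''(\phi_c)$ (which has no closed form here), so the sketch illustrates the ordering of the periodic eigenvalues rather than the actual spectrum of $\mathcal{L}_c$.
\begin{verbatim}
import numpy as np

# Discretize -omega*y'' + q(x)*y on T_L with periodic finite differences.
# q is a stand-in even potential; V_n''(phi_c) is not explicit here.
L, omega, N = 2 * np.pi, 1.0, 512
x = np.linspace(0.0, L, N, endpoint=False)
h = L / N
q = -1.0 + 2.0 * np.cos(2 * np.pi * x / L) ** 2

D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1))
D2[0, -1] = D2[-1, 0] = 1.0              # periodic wrap-around
Lc = -omega * D2 / h ** 2 + np.diag(q)

sigma = np.linalg.eigvalsh(Lc)           # ordered: sigma_0 <= sigma_1 <= ...
print(sigma[:6])                         # compare with the listing above
\end{verbatim}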
\begin{thm}[\cite{N}, Theorem 2.2] \label{thm:Neves1} Let $p(x)$ be any $L$-periodic solution of $\mathcal{L}_{c}y=0$. Consider any other linearly independent solution $y(x)$ such that the Wronskian $W(p,y)$ satisfies
\[
W(p,y):=\det\left(\begin{matrix}
p & y
\\ p_{x} & y_{x}
\end{matrix}\right)=1.
\]
Then, $y(x+L)=y(x)+\theta p(x)$ for some constant $\theta$ depending only on $y$. In particular, $y(x)$ is $L$-periodic if and only if $\theta=0$.
\end{thm}
\begin{rem}
We point out that the constant $\theta$ can be explicitly computed (see \cite{N}).
\end{rem}
\begin{thm}[\cite{N}, Theorem 3.1]\label{thm:Neves2} Consider any eigenvalue $\sigma_{k}$ of $\mathcal{L}_c$ with $k\geq1$, and its associated eigenfunction $\widetilde{p}(x)$. Let $\theta$ be the constant given in Theorem \ref{thm:Neves1} associated to the operator $\widetilde{\mathcal{L}}:=(\mathcal{L}_{c}-\sigma_{k})$ and $p(x)=\widetilde{p}(x)$. Then, $\sigma_{k}$ is a simple eigenvalue of $\mathcal{L}_c$ if and only if $\theta\neq0$. Furthermore, if $p(x)$ has $2m$ zeros in $[0,L)$, then the following holds: \[
\{\hbox{if } \theta<0, \hbox{ then } \sigma_{k}=\sigma_{2m-1}\} \qquad \hbox{and} \qquad \{\hbox{if } \theta>0, \hbox{ then }\sigma_{k}=\sigma_{2m}\}.\]
\end{thm}
It is worthwhile to notice that, by differentiating the equation \eqref{trav_eq}, we infer that $\phi'_{c}$ belongs to $\ker(\mathcal{L}_{c})$, and hence zero is an eigenvalue of $\mathcal{L}_c$. Next we apply both theorems above to analyze the zero eigenvalue of
$\mathcal{L}_{c}$. By Theorem \ref{thm:Neves1}, for any solution to
$\mathcal{L}_{c}y=0$ linearly independent of $\phi_c'$ satisfying $W\left(\phi'_{c},y\right)=1$, one has
\begin{align}\label{eq_theta}
y\left(x+L\right)=y\left(x\right)+\theta\phi_{c}'\left(x\right),
\end{align}
for some constant $\theta$ depending only on $y$. Moreover, it is not difficult to see\footnote{From equation \eqref{trav_eq}, the oddness of the solution and the fact that $-(V(x)-V(0))$ is strictly increasing in $(0,\tfrac{1}{2})$, for example.} that $\phi'_{c}(x)$ has exactly two zeros in $[0,L)$. Thus, by applying the Oscillation Theorem, we know that $0$ is either $\sigma_1$ or $\sigma_2$. To obtain more precise information about the eigenvalue $0$, by Theorem \ref{thm:Neves2}, we need to know $\theta$. The next lemma connects $\theta$ with $\frac{\partial}{\partial\beta}L$ computed in Theorem \ref{thm_monotonicity}.
\begin{lem}
\label{lem:thetaLbeta}
Under our current hypothesis, we have the following relation between $\frac{d L}{d\beta}$ from Theorem \ref{thm_monotonicity}
and $\theta$ from \eqref{eq_theta}:
\begin{align*}
\theta=-\frac{\partial L}{\partial\beta}.
\end{align*}
\end{lem}
\begin{proof}
Our proof follows a similar spirit to that given in \cite{deNa}. However, notice that this latter one contains some typos that must be corrected (see Section \ref{ext_nata} for further details). In fact, let us start by defining $\mu$ to be the unique solution to the problem
\begin{align*}
\begin{cases}
-\omega\mu''+V_{n}''(\phi_{c})\mu=0\\
\mu(0)=0, \quad \mu'(0)=\dfrac{1}{\phi_{c}'\left(0\right)}.
\end{cases}
\end{align*}
Notice that by the definition of $\mu$ it immediately follows that $W(\phi'_{c},\mu)=1$. Then by Theorem \ref{thm:Neves1}, there is a constant $\theta$, only depending on $\mu$, such that
\begin{align*}
\mu(x+L)=\mu(x)+\theta\phi_{c}'(x).
\end{align*}
Therefore, by evaluating the latter identity at $x=0$, recalling that by construction $\mu(0)=0$, we deduce that $\theta=(\phi_{c}'(0))^{-1}\mu(L)$. On the other hand, since $\phi_{c}$ is odd and periodic it follows that $\phi_{c}\left(0\right)=\phi_{c}\left(L\right)=0$. Thus, by differentiating the latter identity at $x=L$ with respect to $\beta$, we deduce:
\begin{align}\label{dLdb_2}
\phi_{c}'(L)\frac{\partial L}{\partial\beta}+\frac{\partial\phi_{c}}{\partial\beta}(L)=0 \quad\implies \quad \frac{\partial L}{\partial\beta}=-\frac{1}{\phi_{c}'(0)}\frac{\partial\phi_{c}}{\partial\beta}(L),
\end{align}
where we have used the periodicity of the solution $\phi'_c(0)=\phi'_{c}(L)$. Finally, in order to obtain the relation between $\partial_{\beta}\phi_{c}$ and $\mu$, we start by recalling that, from Theorem \ref{thm_monotonicity}, we know that for $L$ and $c$ fixed there exists a unique $\beta(c)\in(0,E_\star)$ such that \begin{align}\label{ham_energy_beta_lvl}
\dfrac{1}{2}\phi_x^2(x)-\dfrac{1}{\omega}\big(V_n(\phi(x))-V_n(0)\big)=\beta.
\end{align}
Hence, by differentiating the latter equation with respect to $\beta$, and then differentiating the resulting equation with respect to $x$ we obtain
\begin{align*}
&\omega\phi_{c}''\partial_{\beta}\phi'_{c}+\omega\phi_{c}'\partial_{\beta}\phi_{c}''-V_{n}''\left(\phi_{c}\right)\phi_{c}'\partial_{\beta}\phi_{c}-V_{n}'\left(\phi_{c}\right)\partial_{\beta}\phi_{c}'\nonumber
\\ & \quad =\partial_{\beta}\phi_{c}'\left(\omega\phi_{c}''-V_{n}'\left(\phi_{c}\right)\right)+\phi_{c}'\left(\omega\partial_{\beta}\phi_{c}''-V_{n}''\left(\phi_{c}\right)\partial_{\beta}\phi_{c}\right)=0.
\end{align*}
Now, on the one hand, by differentiating equation \eqref{trav_eq} with respect to $\beta$ we have
\begin{align}\label{phi_beta_2}
\omega\partial_{\beta}\phi_{c}''-V_{n}''\left(\phi_{c}\right)\partial_{\beta}\phi_{c}=0.\end{align}
On the other hand, evaluating identity \eqref{ham_energy_beta_lvl} at $x=0$, recalling that due to the oddness of the solution $\phi_c(0)=0$, we infer that:
\begin{align}\label{phibeta_origin}
\partial_{\beta}\phi_{c}\left(0\right)=0\quad \text{ and }\quad \frac{1}{2}\phi_{c,x}^{2}\left(0\right)=\beta.
\end{align}
Finally, differentiating the second identity in \eqref{phibeta_origin} with respect to $\beta$ we infer that $\partial_{\beta}\phi_{c,x}(0)=(\phi_{c}'(0)\big)^{-1}$. Therefore, by gathering the latter identity with \eqref{phi_beta_2} and \eqref{phibeta_origin}, we conclude that $\partial_{\beta}\phi_{c}$ satisfies the same ODE as $\mu$ with the same initial data. By the uniqueness of the solution, it follows that $\partial_{\beta}\phi_{c}\equiv\mu$. Therefore, recalling that we have shown that $\theta=(\phi_c'(0))^{-1}\mu(L)$, together with the second identity in \eqref{dLdb_2}, we conclude $\theta=-\partial_{\beta}L$ as desired.
\end{proof}
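\medskip
For the reader's convenience, the relation $\theta=-\partial_{\beta}L$ can also be checked numerically. The following hedged sketch does so for the explicit potential of the $\varphi^{2n+2}$ family of Section \ref{ext_nata} with $n=1$, used here purely as a stand-in (the potential $V_n$ of the present section has no closed form): it integrates the traveling wave ODE together with the ODE defining $\mu$, computes $\theta=\mu(L)/\phi_c'(0)$, and compares it with a finite-difference approximation of $-\partial_\beta L$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in potential (n = 1): V(x) = -x^2/2 + x^4/4.
c = 0.3
omega = 1.0 - c ** 2
Vp  = lambda x: -x + x ** 3             # V'
Vpp = lambda x: -1.0 + 3.0 * x ** 2     # V''

def rhs(t, y):
    phi, dphi, mu, dmu = y
    return [dphi, Vp(phi) / omega, dmu, Vpp(phi) * mu / omega]

def period_and_theta(beta):
    dphi0 = np.sqrt(2.0 * beta)             # (1/2) phi_x(0)^2 = beta
    y0 = [0.0, dphi0, 0.0, 1.0 / dphi0]     # mu(0)=0, mu'(0)=1/phi'(0)
    ev = lambda t, y: y[0]                  # upward zero crossings of phi
    ev.direction = 1.0
    sol = solve_ivp(rhs, [0.0, 100.0], y0, events=ev,
                    rtol=1e-10, atol=1e-12, max_step=0.05)
    t_ev, y_ev = sol.t_events[0], sol.y_events[0]
    keep = t_ev > 1e-6                      # discard the crossing at t = 0
    Lper, mu_L = t_ev[keep][0], y_ev[keep][0][2]
    return Lper, mu_L / dphi0               # theta = mu(L)/phi'(0)

beta = 0.1
L1, theta = period_and_theta(beta)
L2, _ = period_and_theta(beta + 1e-6)
print(theta, -(L2 - L1) / 1e-6)             # the two values should agree
\end{verbatim}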
By computations from Section \ref{existence}, see Theorem \ref{thm_monotonicity}, we know that $\theta=-\partial_{\beta}L<0$. Then as a direct application of Theorem \ref{thm:Neves2}, we can conclude the following spectral information
of $\mathcal{L}_{c}$.
\begin{prop}
\label{prop:linearspec}
Under the hypothesis of Theorem \ref{thm_monotonicity}, the linear operator $\mathcal{L}_{c}$ given by \eqref{scalar_linear_op} above, defined
on $L^{2}\left(\mathbb{T}_{L}\right)$ with domain $H^{2}\left(\mathbb{T}_{L}\right)$, defines a self-adjoint operator with exactly one negative
eigenvalue, a simple eigenvalue at zero and the rest of its spectrum
is positive, discrete and bounded away from zero.
\end{prop}
\subsection{Spectrum of the matrix operator}
Now we seek to use the previous spectral information for the scalar operator $\mathcal{L}_c$ to deduce related spectral properties of the so-called linearized Hamiltonian. In fact, we start by pointing out that the equation solved by the periodic traveling wave solution $\vec{\phi}_c$ constructed in Section \ref{existence} can be re-written in terms of the conserved functionals $\mathcal{E}$ and $\mathcal{P}$ as
\[
\mathcal{E}'\big(\vec{\phi}_{c}\big)+c\mathcal{P}'\big(\vec{\phi}_{c}\big)=0,
\]
where $\mathcal{E}'$ and $\mathcal{P}'$ are the Fréchet derivatives
of $\mathcal{E}$ and $\mathcal{P}$ in $H^{1}\left(\mathbb{T}_{L}\right)\times L^{2}\left(\mathbb{T}_{L}\right)$, respectively. Then, the linearized Hamiltonian around $\vec{\phi}_{c}$ is given by the matrix operator
\begin{equation}
\vec{\mathcal{L}}_{c}:=(\mathcal{E}''+c\mathcal{P}'')\big(\vec{\phi}_c\big)=\left(\begin{matrix}
-\partial_{xx}+V''_{n}\left(\phi_{c}\right) & -c\partial_{x}\\
c\partial_{x} & 1
\end{matrix}\right).\label{eq:mL}
\end{equation}
It is worthwhile to notice that $\vec{\mathcal{L}}_{c}$ defines a self-adjoint operator \[
\vec{\mathcal{L}}_c:H^{2}\left(\mathbb{T}_{L}\right)\times H^{1}\left(\mathbb{T}_{L}\right)\subset L^{2}\left(\mathbb{T}_{L}\right)\times L^{2}\left(\mathbb{T}_{L}\right)\to L^{2}\left(\mathbb{T}_{L}\right)\times L^{2}\left(\mathbb{T}_{L}\right).
\]
Moreover, notice that with these definitions it immediately follows that $\vec{\phi}_{c,x}$ belongs to the kernel of $\vec{\mathcal{L}}_{c}$. On the other hand, the quadratic form $\mathcal{Q}_c$ associated to the matrix operator defined in \eqref{eq:mL} is given by:
\begin{align}
\mathcal{Q}_{c}\big(\varphi_1, \varphi_2\big) & :=\big\langle \vec{\mathcal{L}}_{c}(\varphi_1, \varphi_2),(\varphi_1, \varphi_2)\big\rangle =\int_{\mathbb{T}_{L}}\big(\varphi_{1,x}^{2}+V_{n}''\left(\phi_{c}\right)\varphi_1^{2}+2c\varphi_{1,x}\varphi_2+\varphi_2^{2}\big)\,dx\nonumber \\
&
=\int_{\mathbb{T}_{L}}\big(\omega\varphi_{1,x}^{2}+V_{n}''\left(\phi_{c}\right)\varphi_1^{2}\big)dx+\int_{\mathbb{T}_L}\big(c\varphi_{1,x}+\varphi_2\big)^{2}\,dx.\label{eq:MQ}
\end{align}
It is worth noticing that in the first integral term of the latter identity we recognize the scalar quadratic
form \begin{align}
Q_c(\varphi) :=\left\langle\mathcal{L}_c \varphi,\varphi\right\rangle=\omega\int_{\mathbb{T}_{L}}\varphi_{x}^{2}dx+\int_{\mathbb{T}_L}V_{n}''\left(\phi_{c}\right)\varphi^{2}dx,\label{eq:Q}
\end{align}
which is the quadratic form associated to the linear operator $\mathcal{L}_{c}$ in \eqref{scalar_linear_op}. The following lemma links the spectral information of $\mathcal{L}_c$ derived in the previous subsection with the one of $\vec{\mathcal{L}}_c$.
\begin{lem}
\label{lem:matrixspe} Under the assumptions of Theorem \ref{thm_monotonicity}, the operator $\vec{\mathcal{L}}_{c}$ given in \eqref{eq:mL}, defined on $L^{2}\left(\mathbb{T}_{L}\right)\times L^{2}\left(\mathbb{T}_{L}\right)$
with domain $H^{2}\left(\mathbb{T}_{L}\right)\times H^{1}\left(\mathbb{T}_{L}\right)$,
defines a self-adjoint operator with a unique negative eigenvalue. Furthermore, zero is the second eigenvalue, which is simple, and the rest of the spectrum is discrete and bounded away from zero.
\end{lem}
\begin{proof}
First, from Weyl's essential spectrum theorem, it follows that the essential spectrum of $\vec{\mathcal{L}}_{c}$ is empty. Consequently, $\vec{\mathcal{L}}_{c}$ has only point spectrum. Next, we need to check the signs of the eigenvalues. Recall that by Proposition
\ref{prop:linearspec} we already know that $\mathcal{L}_{c}$ has exactly one simple negative eigenvalue and that zero is also a simple eigenvalue (with $\phi_{c}'$ as its associated eigenfunction). Let $\sigma_{0}$ be the unique negative eigenvalue
of $\mathcal{L}_{c}$ with eigenfunction $Y_{0}$ (notice that $Y_0$ is even). Then, it immediately follows from the definition of $\sigma_0$ that
\begin{align}\label{lambda0}
\mathcal{Q}_{c}\left(Y_{0},-cY_{0}'\right)=\sigma_{0}\int_{\mathbb{T}_{L}}Y_{0}^{2}\,dx+\int_{\mathbb{T}_{L}}\left(cY_{0}'-cY_{0}'\right)^{2}\,dx=\sigma_{0}\int_{\mathbb{T}_{L}}Y_{0}^{2}\,dx<0.
\end{align}
In the same fashion as in the previous subsection, since the spectrum of $\vec{\mathcal{L}}_{c}$ is discrete and bounded from below, we can list its eigenvalues in increasing order as \[
\widetilde{\sigma}_{0}\leq\widetilde{\sigma}_1\leq\widetilde{\sigma}_2\leq\widetilde{\sigma}_3\leq\cdots.
\]
Then, by using the min-max principle (see for example \cite{RS}) and \eqref{lambda0}, we infer that $\widetilde{\sigma}_{0}<0$. Thus, in order to conclude it is enough to show that $\widetilde{\sigma}_{1}=0$ and $\widetilde{\sigma}_{2}>0$. In fact, for the sake of simplicity let us denote by $X=H^{1}\left(\mathbb{T}_{L}\right)\times L^{2}\left(\mathbb{T}_{L}\right)$. Then, by the min-max principle again, we know that $\widetilde{\sigma}_1$ satisfies the following characterization: \begin{align*}
\widetilde{\sigma}_{1}=\max_{\left(\psi_{1},\psi_{2}\right)\in X}\min_{\substack{(\varphi_1, \varphi_2)\in X\backslash\left\{ 0\right\} \\
(\varphi_1, \varphi_2)\perp\left(\psi_{1},\psi_{2}\right)
}
}\frac{\langle \vec{\mathcal{L}}_c(\varphi_1, \varphi_2),(\varphi_1, \varphi_2)\rangle}{\left\Vert (\varphi_1, \varphi_2)\right\Vert _{X}^{2}}.
\end{align*}
Thus, by the Spectral Theorem, recalling the properties deduced in Proposition \ref{prop:linearspec}, it immediately follows that for any function $\varphi\in H^1(\mathbb{T}_L)$ it holds: \[
\langle \varphi,Y_0\rangle=0 \quad \implies \quad \langle \mathcal{L}_c\varphi,\varphi\rangle\geq 0.
\]
Hence, by choosing $\psi_{1}=Y_{0}$ and $\psi_{2}=0$, and by using the explicit form of $\mathcal{Q}_c$ in \eqref{eq:MQ} together with the latter inequality, we infer that
\begin{align*}
\widetilde{\sigma}_{1}\geq\min_{\substack{(\varphi_1, \varphi_2)\in X\backslash\left\{ 0\right\} \\
(\varphi_1, \varphi_2)\perp\left(Y_{0},0\right)
}
}\frac{\langle\vec{\mathcal{L}}_c(\varphi_1, \varphi_2),(\varphi_1, \varphi_2)\rangle}{\left\Vert (\varphi_1, \varphi_2)\right\Vert _{X}^{2}}\geq0.
\end{align*}
Therefore, recalling that $\big\langle \vec{\phi}_{c,x},Y_{0}\big\rangle =0$ and $\vec{\phi}_{c,x}\in\ker\big(\vec{\mathcal{L}}_{c}\big)$, we conclude $\widetilde{\sigma}_{1}=0$. Finally, we follow a similar approach to obtain the needed information about $\widetilde{\sigma}_{2}$. In fact, by using the min-max principle once again, we can write $\widetilde{\sigma}_2$ as
\begin{align*}
\widetilde{\sigma}_{2}=\max_{\substack{\left(\psi_{1},\psi_{2}\right)\in X\\
\left(\psi_{3,}\psi_{4}\right)\in X
}
}\min_{\substack{(\varphi_1, \varphi_2)\in X\backslash\left\{ 0\right\} \\
(\varphi_1, \varphi_2)\perp\left(\psi_{1},\psi_{2}\right)\\
(\varphi_1, \varphi_2)\perp\left(\psi_{3},\psi_{4}\right)
}
}\frac{\langle \vec{\mathcal{L}}_{c}(\varphi_1, \varphi_2),(\varphi_1, \varphi_2)\rangle}{\left\Vert (\varphi_1, \varphi_2)\right\Vert _{X}^{2}}.
\end{align*}
Thus, in the same fashion as before, by taking $\left(\psi_{1},\psi_{2}\right)=\left(Y_{0},0\right)$ as well as $\left(\psi_{3},\psi_{4}\right)=\left(\phi_{c}',0\right)$, as an application of the Spectral Theorem and Proposition \ref{prop:linearspec} we infer that
\begin{align*}
\widetilde{\sigma}_{2}\geq\min_{\substack{(\varphi_1, \varphi_2)\in X\backslash\left\{ 0\right\} \\
\varphi_1\perp Y_{0}, \, \varphi_1\perp\phi_{c}'
}
}\frac{\langle \vec{\mathcal{L}_{c}}(\varphi_1, \varphi_2),(\varphi_1, \varphi_2)\rangle}{\left\Vert (\varphi_1, \varphi_2)\right\Vert _{X}^{2}}>0.
\end{align*}
More precisely, we have used the explicit form of $\mathcal{Q}_c$ in \eqref{eq:MQ}, as well as the fact that $Q_{c}(\varphi_1)\geq\sigma_{2}\left\Vert \varphi_1\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2}$
for any function $\varphi_1\in H^1(\mathbb{T}_L)$ satisfying $\varphi_1\perp Y_{0}$ and $ \varphi_1\perp\phi_{c}'$, where $\sigma_{2}$ is the third eigenvalue of $\mathcal{L}_{c}$ (which is positive by Proposition \ref{prop:linearspec}). Summarizing, we have proven that $\widetilde{\sigma}_{0}<0$ and $\widetilde{\sigma}_{1}=0$, where both eigenvalues are simple, and $\widetilde{\sigma}_{2}>0$, which concludes the proof.
\end{proof}
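\medskip
A quick way to inspect the spectrum of a matrix operator of the form \eqref{eq:mL} numerically is to assemble it blockwise with periodic finite differences, as in the hedged sketch below. The potential $q$ is again a stand-in for $V_n''(\phi_c)$, so the printed eigenvalue count depends on $q$ and merely illustrates the procedure; note that the assembled block matrix is symmetric because the periodic central-difference matrix is skew-symmetric.
\begin{verbatim}
import numpy as np

L, c, N = 2 * np.pi, 0.3, 256
x = np.linspace(0.0, L, N, endpoint=False)
h = L / N
q = -1.0 + 3.0 * np.cos(2 * np.pi * x / L) ** 2  # stand-in potential

# periodic central difference (skew-symmetric) and periodic Laplacian
Dx = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
Dx[0, -1], Dx[-1, 0] = -1.0 / (2 * h), 1.0 / (2 * h)
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h ** 2
D2[0, -1] = D2[-1, 0] = 1.0 / h ** 2

Lmat = np.block([[-D2 + np.diag(q), -c * Dx],
                 [ c * Dx,          np.eye(N)]])
sigma = np.linalg.eigvalsh(Lmat)                 # symmetric: real spectrum
print("negative eigenvalues:", int(np.sum(sigma < -1e-8)))
\end{verbatim}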
To finish this section, we consider the spectrum of $\vec{\mathcal{L}}_{c}$ for the
standing solution $S(x):=\phi_{0}$ restricted onto odd functional
spaces.
\begin{lem}\label{lem:oddspec}Under the assumptions of Theorem \ref{thm_monotonicity}, the operator $\vec{\mathcal{L}}$ with $c=0$, that is $\vec{\mathcal{L}}_{0}$, defined in $L_{\text{odd}}^{2}\left(\mathbb{T}_{L}\right)\times L_{\text{odd}}^{2}\left(\mathbb{T}_{L}\right)$
with domain $H_{\text{odd}}^{2}\left(\mathbb{T}_{L}\right)\times H_{\text{odd}}^{1}\left(\mathbb{T}_{L}\right)$,
defines a self-adjoint operator with no negative eigenvalues and with $\ker(\vec{\mathcal{L}}_0)=\{(0,0)\}$
in $L_{\text{odd}}^{2}\left(\mathbb{T}_{L}\right)\times L_{\text{odd}}^{2}\left(\mathbb{T}_{L}\right)$.
Moreover, the rest of the spectrum is discrete and bounded away from
zero.
\end{lem}
\begin{proof}
First of all, notice that, since $Y_0$ is associated to the smallest eigenvalue of $\mathcal{L}_c$, it immediately follows that $Y_0$ is an even function regarded as a function on the whole line $\mathbb{R}$. On the other hand, we already know that $S'=\phi_{0}'$ is even in $\mathbb{R}$. Thus, neither of these two eigenfunctions can belong to $H^2_{\mathrm{odd}}$. Therefore, gathering this information with Proposition \ref{prop:linearspec}, we infer that the spectrum of $\mathcal{L}_s:=\mathcal{L}_0$, defined in $L^2_{\mathrm{odd}}(\mathbb{T}_L)$ with domain $H^2_{\mathrm{odd}}(\mathbb{T}_L)$, is strictly positive. Hence, by the spectral theorem we deduce that, for all $\varphi_1\in H_{\text{odd}}^{1}\left(\mathbb{T}_{L}\right)$, it holds
\begin{align}\label{coer_c_not}
\langle\mathcal{L}_s\varphi_1, \varphi_1\rangle\geq\sigma\left\Vert \varphi_1\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2},
\end{align}
for some $\sigma\geq \sigma_2>0$. Then the same arguments as in the proof of Lemma \ref{lem:matrixspe} above, using the min-max principle, give us the desired results for the matrix operator
$\vec{\mathcal{L}}$.
\end{proof}
\section{Orbital Stability of standing waves in the odd energy space}\label{stab_standing}
From now on, and for the rest of this section, we shall always assume that $c=0$ and that $L>0$ is arbitrary but satisfies the hypothesis of Theorem \ref{thm_monotonicity}. Our goal here is to use the spectral analysis carried out in the previous section to prove the orbital stability of standing solutions under the additional hypothesis of global (in time) spatial oddness. In fact, one important advantage in this case is given by the preservation of spatial oddness by the periodic flow of the $\phi^{4n}$-equation. That is, if the initial data is $(\mathrm{odd},\mathrm{odd})$, then so is the solution associated to it for all times in the maximal existence interval. Then, recalling that the traveling wave solution $\phi_c(x)$ constructed in Section \ref{existence} is odd, we obtain that if the initial perturbation $\vec\varepsilon_0=(\varepsilon_{0,1},\varepsilon_{0,2})$ is $(\mathrm{odd},\mathrm{odd})$ and $c=0$, then so is the solution with initial data \[
(\phi_{0,1},\phi_{0,2})=(S,0)+(\varepsilon_{0,1},\varepsilon_{0,2}).
\]
Thus, it is natural to study the time evolution of an initial odd perturbation of $(S,0)$ in terms of the evolution of its perturbation $\vec{\varepsilon}(t)$. In other words, for all times we shall write the solution as $\vec{\phi}(t,x)=(S(x),0)+\vec{\varepsilon}(t,x)$. Additionally, by using equation \eqref{phif_2} and Taylor expansion we deduce that $\vec{\varepsilon}(t,x)$ satisfies the first-order system
\begin{align}\label{eps_system}
\begin{cases}
\partial_t\varepsilon_1=\varepsilon_2,
\\ \partial_t\varepsilon_2=-\mathcal{L}_{\mathrm{s}}\varepsilon_1+\mathcal{O}(\varepsilon_1^2),
\end{cases}
\end{align}
where $\mathcal{L}_s$ is the linearized operator around $S$, which is given by: \[
\mathcal{L}_s=-\partial_x^2+V_n''(S).
\]
Now, on the one hand, from the spectral analysis developed in the previous section, we know that there is only one negative eigenvalue associated with the operator $\mathcal{L}_s$. Moreover, we recall that both $Y_0$ and $S'(x)$ are even functions (regarded as functions defined on the whole line $\mathbb{R}$). Furthermore, it is not too difficult to see that periodic odd and even functions (where the parity is regarded as functions defined on the whole line $\mathbb{R}$) belonging to $H^1(\mathbb{T}_L)$ are orthogonal with respect to the $H^1(\mathbb{T}_L)$-inner product. Thus, gathering all the analysis above, we are in a position to establish the following lemma.
\begin{lem}\label{coercivity}
Under the assumptions of Theorem \ref{thm_monotonicity} the following holds: There exists $\gamma>0$ such that for any odd function $\upsilon\in H^1_{\mathrm{odd}}(\mathbb{T}_L)$ we have \[
\langle \mathcal{L}_{s}\upsilon,\upsilon\rangle \geq \gamma\Vert \upsilon\Vert_{H^1_{\mathrm{odd}}}^2.
\]
\end{lem}
\begin{proof}
In fact, first of all notice that we already know that the desired inequality holds if we change the $H^1$-norm for the $L^2$-norm in the right-hand side (see inequality \eqref{coer_c_not}). Now, we shall prove that by lowering the constant $\sigma$ we can \emph{improve} the latter inequality to put the $H^1$-norm in the right-hand side. In fact, by using the definition of $\mathcal{L}_s$ above, it immediately follows that \begin{align}\label{QH}
\left\Vert \upsilon_{x}\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2}= \langle \mathcal{L}_s\upsilon,\upsilon\rangle-\int_{\mathbb{T}_{L}}V''_n\big(S(x)\big)\upsilon^2\,dx.
\end{align}
On the other hand, from the coercivity property \eqref{coer_c_not} it follows that, for any pair of positive numbers $\delta$ and $\eta$, and any odd periodic function $\upsilon \in H^1_{\mathrm{odd}}(\mathbb{T}_L)$, we have
\begin{align*}
\delta\left\Vert \upsilon_{x}\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2}+\eta\left\Vert \upsilon\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2}&\lesssim \delta\left\Vert \upsilon_{x}\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2}+\tfrac{\eta}{\sigma}\langle\mathcal{L}_s\upsilon, \upsilon\rangle
\\ &\lesssim\left(\delta+\tfrac{\eta}{\sigma}\right)\langle\mathcal{L}_s\upsilon, \upsilon\rangle+\delta C_{n}\Vert \upsilon\Vert^2 _{L^{2}\left(\mathbb{T}_{L}\right)},
\end{align*}
where in the latter inequality we have used identity \eqref{QH} and $C_{n}:=\sup_{x\in\mathbb{T}_{L}}\vert V_{n}''(S(x))\vert$. Then, reorganizing the latter inequality we conclude that
\begin{align*}
\delta\Vert \upsilon_{x}\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2}+\left(\eta-\delta C_{n}\right)\left\Vert \upsilon\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2}\lesssim\left(\delta+\frac{\eta}{\sigma}\right)\langle\mathcal{L}_s\upsilon, \upsilon\rangle.
\end{align*}
Choosing $\delta$ small enough and $\eta$ sufficiently large so that $\eta-\delta C_{n}>0$, it follows that there exists $\gamma>0$, depending on $C_{n}$,
such that
\begin{equation}
\langle\mathcal{L}_s\upsilon, \upsilon\rangle\geq\gamma\left\Vert \upsilon\right\Vert _{H^{1}\left(\mathbb{T}_{L}\right)}^{2}.\label{eq:energycoer}
\end{equation}
The proof is complete.
\end{proof}
As an important application of the latter lemma we are able to improve the coercivity of the linearized Hamiltonian in \eqref{eq:mL} in the odd energy space to put the $X$-norm in the right-hand side (exactly as in the previous lemma). In fact, let us start by recalling that, in this case ($c=0$), the matrix quadratic form is given by:
\begin{align*}
\big\langle\vec{\mathcal{L}}_0(\upsilon_1, \upsilon_2),(\upsilon_1, \upsilon_2)\big\rangle =\int_{\mathbb{T}_{L}}\big(\upsilon_{1,x}^{2}+V_{n}''(S)\upsilon_1^{2}+\upsilon_2^{2}\big)dx.
\end{align*}
Applying the coercivity \eqref{eq:energycoer} to the first part of the matrix quadratic form it immediately follows that
\begin{align*}
\big\langle\vec{\mathcal{L}}_0(\upsilon_1, \upsilon_2),(\upsilon_1, \upsilon_2)\big\rangle\geq\gamma\left\Vert \upsilon_1\right\Vert _{H^{1}\left(\mathbb{T}_{L}\right)}^{2}+\left\Vert \upsilon_2\right\Vert _{L^{2}\left(\mathbb{T}_{L}\right)}^{2},
\end{align*}
which in particular implies that, for any odd $(\upsilon_1,\upsilon_2)\in H^1_{\mathrm{odd}}(\mathbb{T}_L)\times L^2_{\mathrm{odd}}(\mathbb{T}_L)$, one has the desired coercivity
\begin{align}\label{matrix_coerc}
\big\langle\vec{\mathcal{L}}_0(\upsilon_1,\upsilon_2),(\upsilon_1,\upsilon_2)\big\rangle\geq\widetilde{\gamma}\left\Vert (\upsilon_1, \upsilon_2)\right\Vert _{X_\text{odd}}^{2}.
\end{align}
With the information above we are in position to establish our orbital stability result.
\begin{thm}
\label{thm:staodd} Consider $n\in\mathbb{N}$ arbitrary but fixed and let $L>\sqrt{2}\delta_n$ so that the hypothesis of Theorem \ref{thm_monotonicity} holds with $c=0$. The periodic standing wave solution $\vec{S}(x)=(S(x),0)$ is orbitally stable in the odd energy space $X_{\mathrm{odd}}$ under the periodic flow of the $\phi^{4n}$-equation. More precisely, there exists $\delta>0$ small enough such that for any initial data
\begin{align*}
\vec{\varepsilon}_{0}=\big(\varepsilon_{0,1},\varepsilon_{0,2}\big)\in H_{\mathrm{odd}}^{1}\left(\mathbb{T}_{L}\right)\times L_{\mathrm{odd}}^{2}\left(\mathbb{T}_{L}\right),
\end{align*}
satisfying $\left\Vert \big(\varepsilon_{0,1},\varepsilon_{0,2}\big)\right\Vert_{X_{\mathrm{odd}}}\leq\delta$ the following holds: There exists a constant $C>0$ such that the solution to equation \eqref{eps_system} associated to $\vec{\varepsilon}_0$ satisfies:
\begin{align*}
\hbox{for all }\,t\in\mathbb{R}, \quad \big\Vert \big(\varepsilon_{1}(t),\varepsilon_{2}(t)\big)\big\Vert_{X_{\text{odd}}}\leq C\delta.
\end{align*}
\end{thm}
\begin{proof}
In fact, since the energy difference is quadratic in the perturbation (see the expansion below), the smallness assumption on the initial data $(\varepsilon_{0,1},\varepsilon_{0,2})$ yields \[
\big\vert\mathcal{E}\big(\vec{\phi}_{0}\big)-\mathcal{E}\big(\vec{S}\big)\big\vert \lesssim\delta^2,
\]
where $\mathcal{E}$ is the conserved energy functional defined in \eqref{energy} with $\lambda_n=1$. Then, for any $t\in\mathbb{R}$, explicitly computing the differences of the energies between $(\phi_1,\phi_2)$ and $(S,0)$, we get
\begin{align}
\mathcal{E}\big(\vec{\phi}\big)-\mathcal{E}\big(\vec{S}\big)=\frac{1}{2}\int_{\mathbb{T}_L} \Big(\varepsilon_{1,x}^{2}+2\varepsilon_{1,x}S'+\varepsilon_{2}^{2}+2V_{n}\big(S+\varepsilon_{1}\big)-2V_n(S)\Big)dx.\label{diffE}
\end{align}
For the last two terms inside the integral in the latter identity, by Taylor expansion we infer
\begin{align*}
V_{n}\big(S+\varepsilon_{1}\big)=V_{n}(S)+V_{n}'(S)\varepsilon_{1}+\frac{1}{2}V_{n}''(S)\varepsilon_{1}^{2}+\mathcal{O}\big(\varepsilon_{1}^{3}\big).
\end{align*}
Hence, applying integration by parts to the second term in \eqref{diffE} and using the equation solved by $S$, that is, replacing $S''=V'_n(S)$, we can write
\begin{align*}
\big\vert\mathcal{E}\big(\vec{\phi}\big)-\mathcal{E}\big(\vec{S}\big)\big\vert & =\frac{1}{2}\int_{\mathbb{T}_L}\big(\varepsilon_{1,x}^{2}+\varepsilon_{2}^{2}+V_{n}''\left(S\right)\varepsilon_{1}^{2}+\mathcal{O}\big(\varepsilon_{1}^{3}\big)\big)dx
\\ & \gtrsim \gamma_{n}\left\Vert \left(\varepsilon_{1},\varepsilon_{2}\right)\right\Vert _{X_\text{odd}}^{2}-\left\Vert \varepsilon_{1}\right\Vert _{H^{1}\left(\mathbb{T}_{L}\right)}^{3} \gtrsim\big\Vert (\varepsilon_{1},\varepsilon_{2})\big\Vert _{X_{\text{odd}}}^{2},\nonumber
\end{align*}
where in the first inequality above we have used the coercivity property \eqref{matrix_coerc} and Sobolev embedding. Therefore, due to the energy conservation we conclude that, for any $t\in\mathbb{R}$ the following holds:
\begin{align*}
\big\Vert (\varepsilon_{1}(t),\varepsilon_{2}(t))\big\Vert_{X_{\text{odd}}}^{2} & \lesssim\big\vert\mathcal{E}\big(\vec{\phi}(t)\big)-\mathcal{E}\big(\vec{S}\big)\big\vert \lesssim\big\vert\mathcal{E}\big(\vec{\phi}_{0}\big)-\mathcal{E}\big(\vec{S}\big)\big\vert\lesssim\delta^{2}.
\end{align*}
The proof is complete.
\end{proof}
\section{Orbital instability of traveling waves in the whole space\label{sec:Instability}}
In this section, we gather the spectral information of the linearized Hamiltonian given in Lemma \ref{lem:matrixspe} and the general result of Grillakis-Shatah-Strauss in \cite{GSS} to establish the orbital instability of traveling wave solutions $\vec{\phi}_{c}\left(x-ct\right)$ under general perturbations in the energy space. It is worthwhile to notice that in order to apply the main result in \cite{GSS} we need to analyze the sign of $\tfrac{d}{dc}\Vert \phi_{c,x}\Vert_{L^2}^2$. However, the fact that no explicit formula for the solution $\vec{\phi}_{c}$ exists presents a hard obstacle to overcome. In order to surpass this difficulty, the monotonicity of the period $L$ with respect to $\beta$ shall play a key role in our analysis. Finally, we remark that, without loss of generality, from now on we shall always assume that $c>0$.
\begin{thm}
\label{thm:instab}
Consider $n\in\mathbb{N}$ and let $L>0$ be arbitrary but fixed. Under the hypothesis of Theorem \ref{thm_monotonicity}, the traveling wave $\vec{\phi}_{c}\left(x-ct\right)$ constructed in Section \ref{existence} is orbitally unstable in the energy space $H^{1}\left(\mathbb{T}_{L}\right)\times L^{2}\left(\mathbb{T}_{L}\right)$ under the periodic flow of the $\phi^{4n}$-equation.
\end{thm}
\begin{proof}
We recall that, by the standard Grillakis-Shatah-Strauss theory (see the main result in \cite{GSS}), we know that, once the existence of the smooth curve of traveling wave solutions and the main spectral information
of the linearized Hamiltonian around $\vec{\phi}_{c}$ are established, the (in)stability problem is reduced to studying the convexity/concavity of the scalar function \begin{align*}
d(c):=\mathcal{E}\big({\vec{\phi_{c}}}\big)+c\mathcal{P}\big(\vec{\phi_{c}}\big).
\end{align*}
In our current setting, the traveling wave $\vec{\phi}_{c}$ is orbitally stable if and only if $d(c)$ is strictly convex and unstable if and only if $d(c)$ is strictly concave. In other words, the orbital instability is equivalent to showing that $d''(c)<0$. Moreover, recalling that $\vec{\phi_{c}}$ is a critical point of the action functional $\mathcal{E}+c\mathcal{P}$, we deduce that
\[
d'(c)=-c\int_{\mathbb{T}_{L}}\phi_{c,x}^{2}\,dx.
\]
Thus, in order to analyze the concavity/convexity of $d(c)$ we differentiate the latter identity with respect to $c$, from which we get
\begin{align}\label{asdasd}
d''(c)=\frac{d}{dc}\left(-c\int_{\mathbb{T}_{L}}\left(\phi_{c,x}\right)^{2}\,dx\right)=-\int_{\mathbb{T}_{L}}\left(\phi_{c,x}\right)^{2}\,dx -c\frac{d}{dc}\int_{\mathbb{T}_{L}}\left(\phi_{c,x}\right)^{2}\,dx.
\end{align}
Now, for the sake of simplicity, from now on we shall denote by $\eta_c$ the derivative of the solution with respect to the velocity $c$, that is, $\eta_{c}:=\frac{d}{dc}\phi_{c}$. We claim that the right-hand side of \eqref{asdasd} is strictly negative. Then, it suffices to analyze
\begin{align}\label{asdasdasd}
-c\frac{d}{dc}\int_{\mathbb{T}_{L}}\left(\phi_{c,x}\right)^{2}\,dx & =-2c\int_{\mathbb{T}_{L}}\eta_{c,x}\phi_{c,x}\,dx.
\end{align}
We intend to prove that the right-hand side of the latter equation is negative. In fact, first of all let us recall that once $L$ and $c$ are fixed, the traveling wave solution satisfies the Hamiltonian equation with energy $\beta$ (see \eqref{hamilt}, \eqref{gamma}): \begin{align}\label{ham_energy_beta}
\dfrac{1}{2}\phi_x^2-\dfrac{1}{\omega}\big(V_n(\phi)-V_n(0)\big)=\beta.
\end{align}
Hence, by differentiating the latter equation with respect to the speed $c$ we obtain
\begin{align*}
\phi_{c}'\eta_{c}'-\frac{2c}{\omega^2}\left(V_{n}\left(\phi_{c}\right)-V_{n}\left(0\right)\right)-\frac{1}{\omega}V_{n}'\left(\phi_{c}\right)\eta_{c}=\frac{d}{dc}\beta.
\end{align*}
Recalling that the traveling wave solution satisfies $\frac{1}{\omega}V_{n}'\left(\phi_{c}\right)=\phi_{c}''$, we can rewrite the latter equation in the more convenient form as
\begin{align*}
\phi_{c}'\eta_{c}'-\frac{2c}{\omega^2}\big(V_{n}\left(\phi_{c}\right)-V_{n}(0)\big)-\phi_{c}''\eta_{c}=\frac{d}{dc}\beta.
\end{align*}
Then, by integrating the latter equation over $\mathbb{T}_{L}$ and performing some integration by parts it follows
\begin{align*}
2\int_{\mathbb{T}_{L}}\phi_{c}'\eta'_{c}\,dx=L\frac{d\beta}{dc}+\frac{2c}{\omega^2}\int_{\mathbb{T}_{L}}\big(V_{n}(\phi_{c})-V_{n}(0)\big)dx.\end{align*}
Plugging identity \eqref{ham_energy_beta} into the last term on the right-hand side above, we deduce
\begin{align}\label{ham_2}
2\int_{\mathbb{T}_{L}}\phi_{c}'\eta'_{c}\,dx=\left(\frac{d}{dc}\beta-\frac{2c}{\omega}\beta\right)L+\frac{c}{\omega}\int_{\mathbb{T}_{L}}\phi_{c,x}^{2}\,dx.
\end{align}
Therefore, it suffices to analyze the sign of the inner parenthesis $\frac{d}{dc}\beta-\frac{2c}{\omega}\beta$. To achieve this, recalling the formula for the period in \eqref{eq:peri}, we define
\begin{align}
L = \sqrt{2\omega}\int_{x_0}^{x_1}\tfrac{dx}{\sqrt{\omega\beta+\big(V_n(x)-V_n(0)\big)}} =:\sqrt{2\omega}\widetilde{L},\label{eq:periodfor1}
\end{align}
where the denominator vanishes precisely at the turning points $x_0,x_1$ by \eqref{ham_energy_beta}. For the sake of simplicity, from now on we shall denote $\widetilde{\beta}:=\omega\beta$. Hence, as a direct application of Theorem \ref{thm_monotonicity} we infer that $\frac{d}{d\widetilde{\beta}}\widetilde{L}>0$. Differentiating formula \eqref{eq:periodfor1} with respect to $c$, recalling that the period $L$ is fixed, we obtain
\begin{align}\label{dldc}
0=-\frac{\sqrt{2}c}{\sqrt{\omega}}\widetilde{L}+\sqrt{2\omega}\frac{d\widetilde{L}}{d\widetilde{\beta}}\frac{d\widetilde{\beta}}{dc}.\end{align}
On the other hand, it easily follows from the definition of $\widetilde{\beta}$ that $\frac{d\widetilde{\beta}}{dc}=-2c\beta+\omega\frac{d\beta}{dc}$. Thus, plugging this relation into \eqref{dldc} and recalling that $d\widetilde{L}/d\widetilde{\beta}>0$ we infer
\[
\omega\sqrt{2\omega}\frac{d\widetilde{L}}{d\widetilde{\beta}}\left(\frac{d\beta}{dc}-\frac{2c}{\omega}\beta\right)=\frac{\sqrt{2}c}{\sqrt{\omega}}\widetilde{L} \quad \implies \quad \dfrac{d\beta}{dc}-\dfrac{2c}{\omega}\beta>0.
\]
Finally, by plugging the latter inequality into \eqref{ham_2} and recalling identity \eqref{asdasdasd} we conclude\footnote{Notice that we arrive at the same conclusion if $c<0$.}
\begin{align}
\frac{d}{dc}\left(-c\int_{\mathbb{T}_{L}}\left(\phi_{c,x}\right)^{2}\,dx\right) \leq-\frac{1}{\omega}\int_{\mathbb{T}_{L}}\left(\phi_{c,x}\right)^{2}\,dx.\nonumber
\end{align}
Therefore, $d''\left(c\right)<0$, and hence, by using the main result in \cite{GSS} we conclude that $\vec{\phi}_{c}\left(x-ct\right)$ is orbitally unstable in the energy space.
\end{proof}
\section{Extension of the main result in \cite{deNa}}\label{ext_nata}
\subsection{The Model} As mentioned in the introduction, as a by-product of our current analysis we are able to extend the main result in \cite{deNa} to general $n\in\mathbb{N}$. In order to avoid confusion with our previous equation, from now on we shall denote the unknown by $\varphi(t,x)$. With this in mind, in the sequel we shall consider the following generalization of the $\phi^{4}$-equation, which we shall call the $\varphi^{2n+2}$-equation:
\begin{align}\label{phi_w}
\partial_t^2\varphi-\partial_x^2\varphi-\varphi+\varphi^{2n+1}=0.
\end{align}
Before recalling the main results in \cite{deNa}, let us start by introducing some notation. To be consistent with our analysis above, we rewrite equation \eqref{phi_w} as
\begin{align}
\partial_{t}^{2}\varphi-\partial_{x}^{2}\varphi+\widetilde{V}_{n}'(\varphi)=0,\label{eq:phi_w_v}
\end{align}
where in this setting the potential is given by
\begin{align}\label{pot2}
\widetilde{V}_{n}(x):=-\frac{x^{2}}{2}+\frac{x^{2n+2}}{2\left(n+1\right)}.
\end{align}
We point out that, in sharp contrast with model \eqref{phif}, the potential in \eqref{pot2} does not develop additional minima as $n$ increases, and hence no new soliton sectors appear. Instead, for all $n\in\mathbb{N}$, the potential \eqref{pot2} has exactly $3$ real roots, which are located at $0,\,(n+1)^{1/2n},\,-(n+1)^{1/2n}$. Notice that $0$ always has multiplicity $2$, while $\pm (n+1)^{1/2n}$ are both simple roots.
\medskip
On the other hand, we can write equation \eqref{eq:phi_w_v} as a first-order system for $\vec{\varphi}=(\varphi_1,\varphi_2)$ as \begin{align}\label{phif_w_v}
\begin{cases}
\partial_t\varphi_1=\varphi_2,
\\ \partial_t\varphi_2=\partial_x^2\varphi_1-\widetilde{V}_n'(\varphi_1).
\end{cases}
\end{align}
As for model \eqref{phif} above, the Hamiltonian structure of the system gives us the energy conservation of \eqref{phif_w_v}, that is, the following functional is conserved along the flow:
\begin{align}\label{energy2}
\widetilde{\mathcal{E}}(\vec{\varphi}(t))&:=\dfrac{1}{2}\int_0^L \big(\varphi_2^2+\varphi_{1,x}^2+2\widetilde{V}_n(\varphi_1)\big)(t,x)dx=\widetilde{\mathcal{E}}(\vec{\varphi}_0).
\end{align}
We also have the conservation of momentum which is given by:
\begin{align}\label{momentum2}
\widetilde{\mathcal{P}}(\vec{\varphi}(t)):=\int_0^L \varphi_2(t,x)\varphi_{1,x}(t,x)dx=\widetilde{\mathcal{P}}(\vec{\varphi}_0).
\end{align}
Regarding the results in \cite{deNa}, in the case of traveling wave solutions orbiting around $(0,0)$, the authors in \cite{deNa} were able to prove the orbital instability in the whole energy space for $n=1,2$. However, the proof of the sign of $d''(c)$ relies on some numerical computations and is argued via the plot of a ``hidden'' function (the authors do not provide the function they are plotting to justify this sign). This is an important remark, since being able to compute the sign of $d''(c)$ is usually a very challenging part of the analysis, and it is the only reason why the authors in \cite{deNa} are not able to extend their result to $n$ larger than $2$. In this section, we intend to extend their orbital instability result to all values of $n\in\mathbb{N}$, that is, to all $\varphi^{2n+2}$-equations.
\medskip
We point out that, at the date of this publication, the proofs of the results in \cite{deNa} contain typos, some of them problematic, but in such a way that they ``cancel each other'', so that the authors end up with the correct conclusions. Thus, in order to extend their result we need to start by fixing some of these typos, since they would otherwise lead to different conclusions in our analysis.
\subsection{Extension of the main result in \cite{deNa}}
We start by recalling the Hamiltonian system satisfied by traveling wave solutions:
\begin{align}\label{system1}
\begin{cases}\dot{u}=v,
\\ \dot{v}=\tfrac{1}{\omega}\widetilde{V}_{n}'(u),
\end{cases}
\end{align}
where $\omega:=1-c^2$.
By direct computations $\widetilde{V}'_n(u)=u(u^n-1)(u^n+1)$, so the system above has three critical points: $(u,v)=(0,0)$, which is a stable center, and $(u,v)=(\pm1,0)$, which are both saddle points.
We also recall that the Hamiltonian associated with system \eqref{system1} is
\begin{align}
\widetilde{\mathcal{{H}}}\left(u,v\right):=\frac{1}{2}v^{2}-\frac{1}{\omega}\widetilde{V}_n(u)=\frac{1}{2}v^{2}+\frac{1}{\omega}\left(\frac{u^{2}}{2}-\frac{u^{2n+2}}{2\left(n+1\right)}\right)\label{hamilt1}
\end{align}
Therefore, by the standard ODE theory for Hamiltonian equations (see for example \cite{ChiL}), we know that all periodic solutions of \eqref{system1} orbiting around $(0,0)$ correspond to regular level sets of $\widetilde{\mathcal{H}}$, with energy $\beta\in(0,\widetilde{E}_\star)$, where the maximal energy level is $\widetilde{E}_\star=\frac{n}{2\omega (n+1)}$. Finally, we recall that the period $L$ can be expressed as
\begin{equation}
L=\sqrt{2}\int_{x_0}^{x_1}\tfrac{dx}{\sqrt{\beta+\frac{1}{\omega}\widetilde{V}_n(x)}}.\label{eq:peri1}
\end{equation}
The main point now is to show the monotonicity of the period $L$ with respect
to the level of the energy $\beta$.
\medskip
\textbf{Monotonicity of the period map:} In Section $2$ of \cite{deNa}, more specifically just below identity $(2.5)$, the authors claim that the period map $L(\beta)$ is strictly decreasing in $\beta$. Consequently, they claim that $L(\beta)$ goes to $+\infty$ when $\beta$ goes to zero, and that $L(\beta)$ converges to a finite constant when $\beta$ goes to the maximal possible energy $\beta_{\mathrm{max}}$. We give three different reasons why this cannot hold. First, in the first line of the proof of Lemma $2.1$ in \cite{deNa} the authors define the function $f(h):=-h+h^{2k+1}$. However, this definition of $f(h)$ does not match the definitions of the result the authors are referring to (see the definition of the Hamiltonian at the beginning of Section $2$ in \cite{deNa}). Specifically, $f(h)$ is missing a minus sign. Secondly, it is a well-known fact that $\varphi^4$ and $\varphi^6$ have explicit odd kink solutions. Moreover, all of these odd kink solutions belong to the separatrix curve, have finite energy and infinite period. This contradicts the fact that $L'(\beta)<0$, since it forces $L(\beta)$ to go to $+\infty$ when $\beta\to\beta_{\mathrm{max}}$. Finally, by taking advantage of the fact that all the involved functions (including the period) are explicit in the case $n=1$, the second author of the present work has explicitly proved in \cite{Pa} that \[L(\beta)\to+\infty \ \hbox{ when }\ \beta\to\beta_{\mathrm{max}} \quad \hbox{ and } \quad L(\beta)\to 2\pi\sqrt{\omega} \ \hbox{ when } \ \beta\to0,
\]
which also contradicts Lemma $2.1$ in \cite{deNa}. Summarizing, we have the following lemma.
\begin{lem}\label{lem:monotonicity2}
Given the formula \eqref{eq:peri1}, one has $\frac{dL}{d\beta}>0$, for all $\beta\in(0,\widetilde{E}_\star)$.
\end{lem}
\begin{proof}
In the same fashion as before, by using the main result in \cite{Chi}, it suffices to show the convexity of $-\widetilde{V}_n(x)/(\widetilde{V}_n'(x))^2$
for all $x\in(-1,1)$.
In fact, by using the explicit form of $\widetilde{V}_n$, and after performing some direct computations we obtain
\[
-\dfrac{d^2}{dx^2}\dfrac{\widetilde{V}_{n}(x)}{(\widetilde{V}_{n}'(x))^2}=\frac{nx^{2n-2}\left(4n^{2}-1+2(1+n+4n^{2})x^{2n}-(1+2n)x^{4n}\right)}{(1+n)(-1+x^{2n})^{4}}.
\]
Since $x^{4n}\leq x^{2n}\leq 1$ for $\vert x\vert <1$, and $n\geq1$, it immediately follows:
\[
4n^{2}-1+2(1+n+4n^{2})x^{2n}-(1+2n)x^{4n}\geq 3n^2+(1+8n^{2})x^{2n}>0,\]
which concludes the proof.
\end{proof}
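\medskip
The displayed second-derivative formula in the above proof can also be verified symbolically. The following sketch spot-checks it for small values of $n$ with the explicit potential \eqref{pot2}; the range of $n$ tested is an arbitrary choice.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
for n in range(1, 6):
    V = -x**2/2 + x**(2*n + 2)/(2*(n + 1))
    lhs = -sp.diff(V / sp.diff(V, x)**2, x, 2)
    rhs = (n * x**(2*n - 2)
           * (4*n**2 - 1 + 2*(1 + n + 4*n**2)*x**(2*n)
              - (1 + 2*n)*x**(4*n))
           / ((1 + n) * (-1 + x**(2*n))**4))
    assert sp.simplify(lhs - rhs) == 0
print("formula verified for n = 1,...,5")
\end{verbatim}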
Then, by using the monotonicity of the period map, we deduce again the existence of a limit as $\beta\to0^+$. We shall recycle the notation and call this limit \[
\lim_{\beta\to 0^+}L(\beta)=:\sqrt{\omega}\widetilde{\delta_n}.
\]
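The monotonicity can also be observed directly from the period integral. The following hedged sketch evaluates \eqref{eq:peri1} by quadrature for the explicit potential \eqref{pot2}; the values $n=2$ and $c=1/2$ are arbitrary choices, and the substitution $x=x_1\sin s$ removes the inverse square-root singularity at the turning point.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

n, c = 2, 0.5
omega = 1.0 - c ** 2
Vt = lambda x: -x**2 / 2 + x**(2*n + 2) / (2 * (n + 1))
E_star = n / (2 * omega * (n + 1))          # maximal energy level

def period(beta):
    # positive turning point: beta + Vt(x)/omega = 0 with x in (0, 1)
    x1 = brentq(lambda x: beta + Vt(x) / omega, 1e-12, 1.0 - 1e-12)
    f = lambda s: (x1 * np.cos(s)
                   / np.sqrt(beta + Vt(x1 * np.sin(s)) / omega))
    val, _ = quad(f, 0.0, np.pi / 2)
    return 2.0 * np.sqrt(2.0) * val         # the orbit is symmetric in x

betas = np.linspace(0.05, 0.95, 19) * E_star
Ls = [period(b) for b in betas]
assert all(a < b for a, b in zip(Ls, Ls[1:]))  # strictly increasing
print(Ls[0], Ls[-1])
\end{verbatim}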
\textbf{Spectral analysis:}
To study (in)stability of traveling waves, in \cite{deNa}, the authors analyzed the linearized operator around the traveling wave which is given by
\[
\mathcal{L}_{c}y:=-\omega y''+\left(-y+(2n+1)\varphi_{c}^{2n}y\right).
\]
In Section 3 of \cite{deNa}, the authors applied Theorem \ref{thm:Neves2} (of the present work) to study the spectral properties of the scalar linearized operator. However, in Lemma 3.2 in \cite{deNa}, the authors obtained the relation $\theta=\frac{dL}{d\beta}$. Nevertheless, notice that after fixing the sign of $\tfrac{dL}{d\beta}$, this latter relation, together with the spectral analysis carried out in \cite{deNa}, would lead to different conclusions (so that it would not be possible to conclude the main theorems in \cite{deNa}). However, it turns out that the relation $\theta=\tfrac{dL}{d\beta}$ is not correct. The essential reason is that, in order to use the quantity $\theta$ defined in Theorem \ref{thm:Neves1}, one has to ensure that the Wronskian determinant equals $1$ with the entries in the right order (notice that they have switched the order of the entries in the Wronskian, obtaining an extra minus sign; see Theorem \ref{thm:Neves1} above or \cite{N} for further details). Thus, with this wrong relation $\theta=\frac{dL}{d\beta}$ and the opposite sign of $\frac{dL}{d\beta}$, the authors could still conclude the correct spectral properties of the scalar linear operator due to this ``cancellation'' of double minus signs (see Theorem \ref{thm:Neves2} above or \cite{N} to see the impact of the sign of $\theta$ on the spectral information). Summarizing, we have the following lemma.
\begin{lem}
Under our current hypothesis, the following relation holds: $\theta=-\frac{\partial L}{\partial\beta}$.
\end{lem}
We point out that after fixing these two typos, the proofs given in \cite{deNa} follow.
\medskip
Finally, with the monotonicity of the period and spectral properties, we are able to extend the main result in \cite{deNa}.
\begin{thm}
Let $n\in\mathbb{N}$ and consider $L>0$ arbitrary but fixed. For any speed $c\in(-1,1)$ such that $L>\lambda_n$, the traveling wave solution $\vec{\varphi}_{c}\left(x-ct\right)$ constructed in Section $2$ in \cite{deNa} is orbitally unstable in the energy space $H^{1}(\mathbb{T}_{L})\times L^{2}(\mathbb{T}_{L})$ under the periodic flow of the $\varphi^{2n+2}$ model.
\end{thm}
\begin{proof}
Again, without loss of generality we shall assume $c>0$. The proof follows in a similar fashion as the one in the previous section, and hence we shall only give its main points. First, recall that by the main result of Grillakis-Shatah-Strauss in \cite{GSS}, the (in)stability problem is reduced to studying the convexity/concavity of the scalar function:
\begin{equation}
d\left(c\right):=\widetilde{\mathcal{E}}\left(\varphi_{c}\right)+c\widetilde{\mathcal{P}}\left(\varphi_{c}\right).\label{eq:dscalar}
\end{equation}
Once again, in our current setting, the traveling wave $\varphi_{c}$ is orbitally
stable if and only if $d\left(c\right)$ is strictly convex. Then, in a similar fashion as before, computing the second derivative of $d(c)$ we deduce that it suffices to analyze
\begin{align}\label{asdasdasd1}
-c\frac{d}{dc}\int_{\mathbb{T}_{L}}\left(\varphi_{c,x}\right)^{2}\,dx & =-2c\int_{\mathbb{T}_{L}}\widetilde{\eta}_{c,x}\varphi_{c,x}\,dx,
\end{align}
where $\widetilde{\eta}_{c}:=\frac{d}{dc}\varphi_{c}$. Again, we will prove that the right-hand side of the equation above is negative. In fact, by differentiating (with respect to the speed $c$) the Hamiltonian equation with energy $\beta$ and performing the same manipulations as in our proof of Theorem \ref{thm:instab}, we obtain
\begin{align}\label{ham_21}
2\int_{\mathbb{T}_{L}}\varphi_{c}'\tilde{\eta}'_{c}\,dx=\left(\frac{d}{dc}\beta-\frac{2c}{\omega}\beta\right)L+\frac{c}{\omega}\int_{\mathbb{T}_{L}}\varphi_{c,x}^{2}\,dx.
\end{align}
On the other hand, proceeding exactly as before we deduce $\frac{d}{dc}\beta-\frac{2c}{\omega}\beta>0$. Therefore, by plugging the latter inequality into \eqref{ham_21} and recalling identity \eqref{asdasdasd1} we conclude\footnote{Notice that we arrive at the same conclusion if $c<0$.}
\begin{align}
\frac{d}{dc}\left(-c\int_{\mathbb{T}_{L}}\varphi_{c,x}^{2}\,dx\right) \leq-\frac{1}{\omega}\int_{\mathbb{T}_{L}}\varphi_{c,x}^{2}\,dx.\nonumber
\end{align}
Thus, $d''(c)<0$, and hence, by using the main result in \cite{GSS} we conclude that $\vec{\varphi}_{c}(x-ct)$ is orbitally unstable in the energy space.
\end{proof}
\subsection*{\centering Abstract}
{\em
A triangulation of a $3$-manifold can be shown to be homeomorphic to the
$3$-sphere by describing a discrete Morse function on it with only two critical
faces, that is, a sequence of elementary collapses from the
triangulation with one tetrahedron removed down to a single vertex.
Unfortunately, deciding whether such a sequence exists is believed to be
very difficult in general.
In this article we present a method, based on uniform spanning trees, to
estimate how difficult it is to collapse
a given $3$-sphere triangulation after removing a tetrahedron. In addition
we show that out of all $3$-sphere triangulations with eight vertices or less,
exactly $22$ admit a sequence of elementary collapses that gets stuck on a contractible non-collapsible
$2$-complex. As a side product we classify all minimal triangulations of the
dunce hat, and all contractible non-collapsible $2$-complexes with at most $18$
triangles. This is complemented by large scale experiments on the collapsing
difficulty of $9$- and $10$-vertex spheres.
Finally, we propose an easy-to-compute
characterisation of $3$-sphere triangulations which experimentally exhibit
a low proportion of collapsing sequences, leading to a heuristic to produce
$3$-sphere triangulations with difficult combinatorial properties.}
\medskip
\noindent
\textbf{MSC 2010: }
{\bf 57Q15};
57N12;
57M15;
90C59.
\medskip
\noindent
\textbf{Keywords: } Discrete Morse theory, uniform spanning trees,
collapsibility, local constructibility, dunce hat, triangulated
manifolds, 3-sphere, complicated triangulations
\input{1_intro}
\input{2_collapsing}
\input{3_expts}
\input{4_nearlynoncollapsing}
\section*{Acknowledgement}
This work is supported by CNPq, FAPERJ, PUC-Rio, CAPES, and
the Department of Industry and Science, Australia under the Australia-India
Strategic Research Fund (project AISRF06660).
\section{Introduction}
\label{sec:intro}
Collapsibility of triangulations and closely related topics, such as local
constructability, have been studied extensively over the past decades
\cite{Benedetti13RandomDMT,Durhuus95LC3Spheres,whitehead1939simplicial,
zeeman1966seminar}. If a triangulation is collapsible, its underlying
topological space is contractible but the converse is not true
\cite{Adiprasito14RDMTII,Benedetti13NonCollapsible,Benedetti11LocConstrBalls,
Lutz04SmallExNonconstrSimplBallsSph,Zeeman64DunceHat}. Thus, collapsibility can
be seen as a measure of how complicated a triangulation of a given contractible
topological space or manifold is. Understanding this
complicatedness of triangulations is a central topic in the field of
combinatorial topology with important consequences
for theory and applications.
For instance, recognising the $n$-dimensional (piecewise linear standard)
sphere or ball -- a major challenge in the field -- is still a very difficult
task in dimension three and even
undecidable in dimensions $\geq 5$ \cite{Volodin743SphereRec}. Nonetheless,
collapsing heuristics together with standard homology calculations and the
Poincar\'e conjecture solve the $n$-sphere recognition problem for many
complicated and large inputs in high dimensions, see for example
\cite{Benedetti13RandomDMT,Joswig14SphereRec}. In fact, when using standard
input, existing heuristics are too effective to admit proper insight into the
undecidability of the underlying problem. To address this issue,
Benedetti and Lutz recently proposed a ``library of complicated triangulations''
providing challenging input to test new methods \cite{Benedetti13RandomDMT}.
Here we focus on the analysis of collapsibility for
triangulations of the $3$-ball (typically given as a $3$-sphere with one
tetrahedron removed). More precisely, we study this question using a
{\em quantifying} approach. Given a triangulation of the $3$-ball, we
analyse the question of how likely it is that a collapsing sequence of the
tetrahedra, chosen uniformly at random, can be extended to collapse the
triangulation down to a single vertex.
A similar question has been studied in \cite{Benedetti13RandomDMT}
using the framework of discrete Morse theory. While the approach in
\cite{Benedetti13RandomDMT} is efficient, provides valuable insights, and
works in much more generality (arbitrary dimension and
arbitrary topology of the triangulations), the probability distributions
involved in the experiments are difficult to control. As a consequence,
the complicatedness of a given triangulation depends on the heuristic in use.
\medskip
In this article we present an approach to quantify the ``collapsing
probability'' of a $3$-ball triangulation which can be phrased independently
of the methods in use. This probability can be estimated effectively
as long as there is a sufficient number of collapsing sequences.
We then use this approach to give an extended study of the ``collapsing
probability'' of small $3$-sphere triangulations with one tetrahedron removed.
\medskip
In addition, we decide for all $39$ $8$-vertex $3$-sphere triangulations
with one tetrahedron removed whether or not they are {\em extendably collapsible},
that is, whether or not every collapsing sequence of the tetrahedra can be
extended to collapse the triangulation down to a single vertex, or whether some
sequence gets stuck on a contractible non-collapsible $2$-dimensional
complex: $17$ of them are, $22$ of them are not, see Theorem~\ref{thm:eight}.
As a side product of this experiment we present a classification of all minimal
triangulations of the Dunce hat, cf. Theorem~\ref{thm:class}.
\medskip
A major motivation for this study is to find new techniques to tackle the famous
$3$-sphere recognition problem.
Recognising the $3$-sphere is decidable due to Rubinstein's algorithm
\cite{Rubinstein953SphereRec} which has since been implemented
\cite{Burton04Regina,Burton09Regina} and
optimised by Burton \cite{Burton14Crushing}.
However, state-of-the-art worst case running times are still exponential
while the problem itself is conjectured to be polynomial time solvable
\cite{Hass123SphereCoNP,Schleimer11SphereRecNP}.
We believe that analysing tools -- such as the ones presented in this article --
and simplification procedures designed to deal
with non-collapsible or nearly non-collapsible $3$-sphere triangulations
(i.e., input with pathological combinatorial features), together with
local modifications of triangulations such as Pachner moves, are one way of
advancing research on this important question.
\subsection*{Contributions}
In Section~\ref{sec:ust}, we describe a procedure to uniformly sample
collapsing sequences in $3$-ball triangulations,
based on the theory of uniform spanning trees
\cite{Aldous90RandomWalkUST,Broder89RandomWalkUST,
Guenoche83RandomSpanningTrees,Wilson96UST}.
\smallskip
In Section~\ref{sec:estimated}, we present extensive experiments on
the collapsing probability of small $3$-sphere triangulations with one
tetrahedron removed. The experiments include a complete classification
of extendably collapsible $8$-vertex $3$-spheres with one tetrahedron removed
and a classification of $17$ and $18$ triangle triangulations of
contractible non-collapsible $2$-complexes.
\smallskip
In Section~\ref{sec:nearlyNonColl} we describe an (experimental)
hint towards triangulations which are difficult to collapse.
The observation translates into heuristics to generate complicated
triangulations.
\subsection*{Software}
Most of the computer experiments which have been carried out in this article
can be replicated using the {\em GAP} package {\em simpcomp}
\cite{simpcomp,simpcompISSAC,simpcompISSAC11,GAP4}. As of Version 2.1.1.,
{\em simpcomp} contains the functionality to produce {\em discrete Morse
spectra} using the techniques developed in this article as well as the techniques
from \cite{Benedetti13RandomDMT}.
The necessary data to replicate all other experiments can be found in
the appendices and/or are available from the authors upon request.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Triangulations}
Most of this work is carried out in the $3$-dimensional simplicial setting.
However, whenever obvious generalisations of our results and methods hold
in higher dimensions, or for more general kinds of triangulations, we
point this out.
By a {\em triangulated $d$-manifold} (or {\em triangulation of a
$d$-manifold}) we mean a $d$-dimensional simplicial complex whose
underlying topological space is a closed $d$-manifold. Note that in dimension
three, the notion of a triangulated $3$-manifold is equivalent to the one
of a combinatorial manifold since every $3$-manifold is equipped with a unique
PL-structure.
A triangulation of a $d$-manifold $M$ is given by a $d$-dimensional, pure,
abstract simplicial complex $C$, i.e., a set of subsets
$\Delta \subset \{ 1 , \ldots , v \}$ each of order $| \Delta | = d+1$, called
the {\em facets} of $M$.
The $i$-skeleton $\operatorname{skel}_i (M)$, that is,
the set of $i$-dimensional faces of $M$ can then be
deduced by enumerating all subsets $\delta$ of order $| \delta | = i+1$ which
occur as a subset of some facet $\Delta \in M$. The $0$-skeleton is called the
{\em vertices} of $M$, denoted $V(M)$, and the $1$-skeleton is referred to as
the {\em edges} of $M$. The {\em $f$-vector} of $M$ is defined to be
$f(M) = (f_0, f_1, \ldots , f_d)$ where $f_i = |\operatorname{skel}_i (M)|$.
Note that in this article we often write $f_0 = v$ and $f_d = n$, and use
$n$ as a measure of input size.
If in a triangulation $M$ every $k$-tuple of vertices spans a $(k-1)$-face
in $\operatorname{skel}_{k-1} (M)$, i.e., if $f_{k-1} = {|V(M)| \choose k}$, then
$M$ is said to be {\em $k$-neighbourly}.
The {\em Hasse diagram} $\mathcal{H} (C)$ of a $d$-dimensional
simplicial complex $C$ is the directed $(d+1)$-partite graph whose nodes are the
$i$-faces of $C$, $0 \leq i \leq d$, and whose arcs point from
a node representing an $(i-1)$-face to a node representing an $i$-face if and only
if the $(i-1)$-face is contained in the $i$-face.
The \textit{dual graph} or {\em face pairing graph} $\Gamma (M)$
of a triangulated $d$-manifold $M$ is the graph whose nodes represent the facets of $M$,
and whose arcs represent pairs of facets of $M$ that are joined
together along a common $(d-1)$-face. It follows that $\Gamma (M)$ is
$(d+1)$-regular.
\subsection{Uniform spanning trees and random walks}
Most graphs in this article occur as the dual graph $\Gamma(M)$
of some triangulated $3$-manifold $M$. To avoid confusion, we denote
the $0$- and $1$-simplices of a triangulation as vertices and edges and we
refer to the corresponding elements of a graph as {\em nodes} and {\em arcs}.
A {\em spanning tree} of a graph $G = (V,E)$ is a tree $T = (V,E')$ such that
$E' \subset E$ covers all nodes in $V$. In other words, a spanning tree
of a graph $G$ is defined by a connected subset $E' \subset E$ of size
$|E'| = |V|-1$ such that all nodes $v \in V$ occur as an endpoint of an
arc in $E'$. A {\em uniform spanning tree} $T = (V,E')$ of $G=(V,E)$
is a spanning tree chosen uniformly at random from the set of all
spanning trees of $G$.
A {\em random walk of length $m$} in a graph $G=(V,E)$ is a sequence of
random variables $( v_0, v_1, v_2, \ldots , v_m )$
taking values in $V$, such that $v_0 \in V$ is
chosen uniformly at random and, for each $v_i$, the node $v_{i+1}$ is chosen
uniformly at random from all nodes adjacent to $v_i$ in $G$.
\subsection{Collapsibility and local constructability}
Given a simplicial complex $C$, an $i$-face $\delta \in C$ is called
{\em free} if its corresponding node in the Hasse diagram $\mathcal{H}(C)$
is of outgoing degree one. Removing a free face $\delta$ of a simplicial complex
is called an {\em elementary collapse} of $C$, denoted by
$C \searrow C \setminus \delta$. A simplicial complex $C$ is called
{\em collapsible} if there exists a sequence of elementary collapses
$$ C \searrow C' \searrow C'' \searrow \ldots \searrow \emptyset ; $$
in this case the above sequence is referred to as a {\em collapsing sequence}
of $C$ (sometimes we omit the last elementary collapse from a single
vertex to the empty set and still refer to the sequence as a collapsing
sequence).
If, for a simplicial complex $C$, {\em every} sequence of removing free faces
leads to a collapsing sequence, $C$ is called {\em extendably collapsible}. If,
on the other hand, no collapsing sequence exists, $C$ is said to be
{\em non-collapsible}.
\medskip
Given a $d$-dimensional simplicial complex $C$, we say that $C$ is {\em locally
constructible} or that $C$ {\em admits a local construction}, if there is a
sequence of pure simplicial complexes $T_1, \ldots , T_n, \ldots , T_N$ such that
(i) $T_1$ is a $d$-simplex, (ii) $T_{i+1}$, $i+1 \leq n$, is constructed from
$T_i$ by gluing a new $d$-simplex to $T_i$ along one of its $(d-1)$-dimensional
boundary faces, (iii) $T_{i+1}$, $i+1 > n$, is constructed from
$T_i$ by identifying a pair of $(d-1)$-faces of $T_i$ whose intersection
contains a common $(d-2)$-dimensional face, and (iv) $T_N = C$.
For $d=3$, locally constructible spheres were introduced by Durhuus and Jonsson
in \cite{Durhuus95LC3Spheres}. Locally constructible triangulations of
$3$-spheres are precisely the ones which are collapsible after removing
a facet due to a result by Benedetti and Ziegler
\cite{Benedetti11LocConstrBalls}.
For the remainder of this article we sometimes call a triangulated
$3$-sphere $S$ {\em collapsible} if it is locally constructible, i.e., if there
exists a facet $\Delta \in S$ such that $S \setminus \Delta$ is collapsible.
This notion is independent of the choice of $\Delta$ (cf.
\cite[Corollary 2.11]{Benedetti11LocConstrBalls}). The idea behind this abuse of
the notion of collapsibility is to refer to those $3$-sphere triangulations as
collapsible which have a chance of being recognised by a collapsing heuristic.
\section{Collapsibility of $2$-complexes and uniform spanning trees}
\label{sec:ust}
In this section, we want to propose a method to quantify collapsibility
of $3$-sphere triangulations (with one tetrahedron removed). Deciding
collapsibility is hard in general
but easy in most cases occurring in practice, and thus methods to measure the
degree to which a triangulation is collapsible are of great help in
the search for pathological, i.e., non-collapsible, $3$-ball triangulations.
The idea is closely related to the concept of the discrete Morse spectrum
as presented in \cite{Benedetti13RandomDMT},
the main difference being that our method is {\em independent} of the
collapsing heuristic in use. This, however, comes at the
cost of only focusing on triangulations of the $3$-sphere
and possibly slight generalisations thereof.
Our method uses the facts that (i) collapsibility of arbitrary $2$-complexes
is easy to decide by a linear time greedy type algorithm
\cite[Proposition 5]{Tancer12CollNPComplete}, (ii) spanning trees
of a graph can efficiently be sampled uniformly at random (see below for more
details), and (iii) the process of collapsing the $3$-cells of a $3$-manifold
triangulation $M$ along a spanning tree $T$ of the dual graph is well defined.
That is, we can collapse all $3$-cells of $M$ along $T$ by first removing the
$3$-cell $\Delta \in M$ corresponding to the root node of $T$ and then
successively collapse all other $3$-cells through the $2$-cells of $M$
corresponding to the arcs of $T$, and this procedure does not depend on the
choice of $\Delta$, see \cite[Corollary 2.11]{Benedetti11LocConstrBalls}.
More precisely, for a $3$-sphere triangulation $S$, we can efficiently
sample a spanning tree in the dual graph $T \subset \Gamma (S)$, collapse all
$3$-cells of $S$ along $T$, and then decide collapsibility of the
remaining $2$-complex in linear time in the number of facets of $S$. Our method
leads to the following notion.
\begin{definition}[Collapsing probability]
\label{def:pcoll}
Let $S$ be a $3$-sphere triangulation and let $p \in [ 0,1 ]$
be a (rational) number between zero and one. We say that $S$ has
{\it collapsing probability $p$} if the number of spanning trees leading to
a collapsing sequence of $S$ divided by the total number of
spanning trees of $S$ equals $p$.
In particular, collapsing probability $0$ is equivalent to
non-collapsibility and collapsing probability $1$ is equivalent to
extendable collapsibility.
\end{definition}
The above definition corresponds to a shortened version of what
is called the {\em discrete Morse spectrum} in \cite{Benedetti13RandomDMT}.
\medskip
Given the notion of collapsing probability, the potential difficulty of deciding
collapsibility for a $3$-sphere triangulation $S$ (with one tetrahedron
removed) must be entirely encapsulated within its
extremely large number of possible spanning trees: if this number were small,
we could simply try all spanning trees of $S$ until we
either find a collapsing sequence or conclude that $S$
(with one tetrahedron removed) is non-collapsible.
But how many spanning trees of $\Gamma (S)$ exist?
The dual graph of any $3$-manifold triangulation is $4$-regular.
Hence, following \cite{McKay83SpanningTreesRegGraphs}
the number of spanning trees of a $3$-sphere triangulation
with $n$ tetrahedra is bounded above by
$$ \# \textrm{ spanning trees }
\,\, < \,\, C \cdot \left ( \frac{27}{8} \right )^n \frac{\log n}{n}
\,\, < \,\, \frac{9}{2} \left ( \frac{27}{8} \right )^n. $$
Thus, enumerating spanning trees to decide collapsibility
does not seem like a viable option.
\medskip
However, the related task of sampling a spanning tree uniformly at
random is efficiently solvable.
The first polynomial time algorithm to uniformly sample spanning trees in an
arbitrary graph was presented by
Gu\'enoche in 1983 \cite{Guenoche83RandomSpanningTrees}. It has a running
time of $O(n^3 m)$ where $n$ is the number of nodes and $m$
is the number of arcs of the graph. Hence, in the case of the $4$-valent
dual graph of a triangulated $3$-manifold,
the running time of Gu\'enoche's algorithm is $O(n^4)$.
Since then many more deterministic algorithms were constructed with
considerably faster running times.
Here, we want to consider a randomised approach. Randomised sampling algorithms
for spanning trees were first presented by Broder
\cite{Broder89RandomWalkUST} and Aldous \cite{Aldous90RandomWalkUST}.
Their approach is based on a simple idea using random walks. Given a graph
$G$, follow a random walk in $G$ until all nodes have been visited,
discarding all arcs on the way
which close a cycle. The result can be shown to be a spanning tree
chosen uniformly at random amongst {\em all} spanning trees of the graph.
The expected running time equals what is called the {\em cover time} of
$G$, i.e., the expected time it takes
a random walk to visit all nodes in $G$, with a worst case
expected running time of $O(n^3)$.
For many graphs, however, the expected running time is as low as $O(n \log n)$.
The algorithm we want to use for our
purposes is an improvement of the random walk construction due to Wilson
\cite{Wilson96UST} which always beats the cover time.
More precisely, the expected running time of Wilson's algorithm is $O(\tau)$
where $\tau$ denotes the expected
number of steps of a random walk until it hits an already determined subtree
$T' \subset G$ starting from a node
which is not yet covered by $T'$.
\begin{observation}
Let $S$ be a $3$-sphere triangulation with $n$ tetrahedra and
collapsing probability $p \in [0,1]$. Sampling
a uniform spanning tree in the dual graph of $S$ and testing
collapsibility of the remaining $2$-complex is a Bernoulli trial
$$ X = \left \{ \begin{array}{ll} 1 &\textrm{ with probability } p, \\
0 &\textrm{ otherwise,} \end{array} \right . $$
with polynomial running time.
Sampling $N$ times yields $N$ such independent Bernoulli distributed random
variables $X_i$, $1\leq i \leq N$, and the maximum likelihood estimator
$$\hat{p} = \frac{1}{N} \sum \limits_{i=1}^{N} X_i $$
follows a normalised Binomial distribution with $\mathbb{E} \hat{p} = p$ and
$\operatorname{Var} \hat{p} = p(1-p)/N$.
By Chebyshev's inequality this translates to
$$ \mathbb{P} \left ( \, \mid \, \hat{p} \,-\, p \, \mid \, \geq \, \epsilon \right ) \,\, \leq \,\, \frac{p(1-p)}{N \epsilon^2} \,\, \leq \,\, \frac{1}{4 N \epsilon^2}. $$
Since we want to decide collapsibility of $S$ (with one tetrahedron removed),
we want to distinguish $p$
from $0$. Thus, setting $\epsilon = p/2$ we get
$$ \mathbb{P} \left ( \, \, \mid \, \hat{p} \,-\, p \, \mid \, \geq \, p/2 \, \right ) \leq \frac{4 (1-p)}{N p} $$
and for $p \to 0$ the error can be controlled by choosing $N$ (super-)linear in $p^{-1}$.
\medskip
Altogether, we note that collapsibility of $S$ (with one tetrahedron removed)
can be rejected with a
high level of confidence by a polynomial procedure as long as
$p^{-1}$ is polynomial in the size of the input $n$.
\end{observation}
Note that, using Wilson's algorithm, the cost of each sample depends on
the size of the triangulation by a factor which, on average, is below
$O(n \log n)$ for many graphs. This makes the approach well-suited for computer
experiments on larger triangulations, where a higher proportion of
triangulations with a low collapsing probability is conjectured.
\section{Estimated collapsing probabilities for small $3$-sphere
triangulations}
\label{sec:estimated}
A small triangulation of a $3$-sphere admits very few or no
spanning trees in the dual graph which lead to
non-collapsing sequences. As the size of the
triangulations increases, we expect the number of
non-collapsing sequences to increase as well. See \cite{Joswig14SphereRec}
for experiments supporting this claim in higher dimensions.
However, given a sequence of somewhat ``averagely complicated'' $3$-sphere
triangulations in increasing size, the question of how exactly the
{\em proportion} of collapsing sequences to non-collapsing sequences changes,
that is, how quickly (if at all) the collapsing probability $p$ decreases,
is an interesting question with deep implications for important problems
in the field of computational $3$-manifold topology. One major difficulty in
this context comes from the fact that it is not at all clear what is meant by an
``averagely complicated'' $3$-sphere triangulation.
\medskip
In this section, we use the method of uniformly sampling spanning trees
described above, together with the classification of $3$-sphere triangulations
up to ten vertices to get a more thorough overview in the case of known small
triangulations.
\subsubsection*{$3$-spheres with up to $8$ vertices}
It is well-known that all $3$-balls with seven or fewer vertices are extendably
collapsible \cite{Bagchi08UniqWalkup9V3DimKleinBottle}.
Hence, all $v$-vertex $3$-sphere triangulations with $v \leq 7$ must have
collapsing probability $1$.
There are $39$ distinct $8$-vertex triangulations of the $3$-sphere, first
listed by Gr\"unbaum and Sreedharan in \cite{Gruenbaum67Enum8vtxSpheres}.
We use their notation for the remainder of this section. Two of them are
non-polytopal, one of them known as {\em Barnette's sphere}
\cite{Barnette69BarnettesSphere}, the other one known as {\em Br\"uckner's sphere}
(see \cite{Gruenbaum67Enum8vtxSpheres} where the sphere, first described in
\cite{Brueckner09FourPolytope}, is first shown to be non-polytopal).
The collapsibility of these $3$-sphere triangulations
(with one tetrahedron removed) was studied in \cite{Benedetti13NonExtColl3Ball},
where the authors showed that three of the $39$ spheres admit a collapsing
sequence onto a dunce hat and hence have collapsing probability $< 1$.
Here, we refine this study by a complete description of $8$-vertex
$3$-spheres with collapsing probability $1$ and a list of non-perfect
discrete Morse functions for the remaining cases. Namely, we
present a computer assisted proof of the following statement.
\begin{theorem}
\label{thm:eight}
There are $17$ $8$-vertex triangulations of the $3$-sphere which, after
removing any tetrahedron, are extendably collapsible. In the notation of
Gr\"unbaum and Sreedharan these are
$$ P_{1}, P_{2}, P_{3}, P_{4}, P_{5}, P_{6}, P_{7}, P_{9}, P_{10}, P_{13}, P_{14}, P_{15}, P_{16}, P_{17}, P_{18}, P_{21}, P_{34}. $$
The remaining $22$ $8$-vertex triangulations of the $3$-sphere
admit a collapsing sequence onto a contractible non-collapsible
$2$-complex.
\end{theorem}
In order to construct a computer assisted proof for Theorem~\ref{thm:eight},
we first need to make some theoretical observations.
\begin{lemma}
\label{lem:2compl}
Let $C$ be a contractible non-collapsible simplicial $2$-complex. Then $C$
must have at least $8$ vertices and $17$ triangles.
\end{lemma}
\begin{proof}
Let $C$ be a contractible, non-collapsible $2$-complex of minimal size
(it suffices to consider such a minimal complex: collapsing free faces
preserves contractibility and non-collapsibility and only decreases the
number of vertices and triangles).
Since $C$ is contractible, we must have vanishing $2$-homology.
In particular $H_2 (C, \mathbb{F}_2) = 0$, where $\mathbb{F}_2$
is the field with two elements. Consider the formal sum $\sigma$ of all
triangles of $C$: it is a $2$-chain whose boundary $\partial \sigma$
consists of all edges of $C$ of odd degree.
If $C$ has no edges of odd degree, then $\sigma$ is a non-vanishing
element of $H_2 (C, \mathbb{F}_2)$, a contradiction.
Hence, $C$ must have edges of odd degree. In addition, since the edges of odd
degree form a boundary, they must form a closed cycle, and since $C$ is
simplicial, this cycle must be of length at least three.
Since $C$ is of minimal size, no edge can be of degree one (otherwise we
could collapse it); altogether, all edges must be of degree at least two and
at least three edges must be of degree at least three.
\medskip
Let $f(C) = (f_0, f_1, f_2)$ be the $f$-vector of $C$.
Since $C$ is contractible, $C$ has Euler characteristic one.
Hence
$$ \begin{array}{rcl}
1 &=& f_0 - f_1 + f_2, \\
f_2 &=& \frac13 \sum \limits_{e \textrm{ edge of } C} \operatorname{deg}(e),
\end{array} $$
where the second equation holds since every triangle contains exactly three
edges. The sum of the edge degrees is minimal if three edges are of degree
three and all other edges are of degree two. Inserting these degrees into the
second equation yields
$$ f_2 \geq \frac23 f_1 + 1 $$
and using the first equation we get
$$ f_2 \geq 2 f_0 +1. $$
Following the results in \cite{Bagchi05CombTrigHomSpheres} we have $f_0 \geq 8$
and thus $C$ must have at least $17$ triangles.
%
%
\end{proof}
\begin{corollary}
\label{cor:extcoll}
Let $S$ be a $3$-sphere triangulation with $f$-vector
$f(S) = (v, v+n, 2n, n)$, and let $B$ be a $3$-ball obtained from
$S$ by removing a tetrahedron. If $v < 8$ or $n < 16$ then
$B$ is extendably collapsible.
\end{corollary}
\begin{proof}
A $3$-ball $B$ is extendably collapsible if, after removing its $3$-cells
along any spanning tree of the dual graph, the remaining contractible
$2$-complex is collapsible.
After collapsing the tetrahedra of $S$ along a spanning tree the remaining
$2$-complex has $(n+1)$ triangles and $v$ vertices. The result now follows
from Lemma~\ref{lem:2compl}.
\end{proof}
The observation made in Corollary~\ref{cor:extcoll} can be extended to
$3$-sphere triangulations $S$ with a larger number $n \geq 16$ of tetrahedra.
If we can show that the $2$-skeleton of $S$ does not contain a contractible
non-collapsible $2$-complex with at most $n+1$ triangles, then $S$ (with one
tetrahedron removed) must
be extendably collapsible. In order for this approach to work, we need to
know all such $2$-complexes up to $n+1$ triangles. Like any other attempt
to exhaustively classify small triangulations, this rapidly becomes infeasible
with $n$ growing larger. However, in the border case of $16 \leq n \leq 17$
this task turns out to be well within reach.
\begin{theorem}
\label{thm:class}
The only non-collapsible contractible simplicial complexes
with $8$ vertices and $17$ triangles are the seven minimal triangulations
of the dunce hat shown in Figure~\ref{fig:dH}.
Furthermore, the only non-collapsible contractible simplicial complexes
with $8$ vertices and $18$ triangles are the $19$ minimal saw-blade complexes
with four, three, and two blades shown in Figures~\ref{fig:sb4},
\ref{fig:sb3}, and \ref{fig:sb2} respectively, and the $61$ triangulations
of the dunce hat listed in Appendix~\ref{app:18}.
\end{theorem}
\begin{proof}
Since the complexes in question have at most $18$ triangles, no edges of
degree other than two and three can exist,
and the edges of degree three must form a simple cycle.
By ungluing the edges of degree three, an $8$-vertex, $17$-triangle
non-collapsible contractible $2$-complex can thus be represented as a
$14$-vertex triangulation of the disk with a nine-gon as boundary plus
some boundary identifications. Analogously, an $8$-vertex, $18$-triangle
non-collapsible contractible $2$-complex can be represented as a $16$-vertex
triangulation of the disk with a $12$-gon as boundary and some boundary
identifications.
%
\begin{center}
\begin{figure}[htb]
\includegraphics[width=\textwidth]{dunceHat}
\caption{A classification of minimal triangulations of the dunce hat.
\label{fig:dH}}
\end{figure}
\end{center}
%
Using the software for planar graph enumeration by Brinkmann and McKay
\cite{Brinkmann07FastGenPlanarGraphs,plantri} together with its enormously
useful plug-in framework, we classify all such disks, only keeping those
which can be folded up to result in an $8$-vertex, $17$- (respectively $18$-)triangle
contractible non-collapsible simplicial complex with $21$ edges of degree two
and $3$ (respectively $4$) edges of degree three. Sorting out isomorphic copies yields
seven minimal triangulations of the dunce hat (drawn in Figure~\ref{fig:dH}),
$61$ dunce hats with $18$ triangles, and $19$ examples of three distinct
types of so-called {\em saw-blade complexes}, cf. \cite{Joswig14SphereRec}
(see Figures~\ref{fig:sb4}, \ref{fig:sb3}, and \ref{fig:sb2} for the $19$ saw
blade complexes and Appendix~\ref{app:18} for a list of all $18$-triangle
complexes).
\end{proof}
%
\begin{center}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.7\textwidth]{sawblade4}}
\caption{A classification of minimal saw-blade complexes with four blades.
\label{fig:sb4}}
\end{figure}
\end{center}
%
With these results in place we can now describe a computer assisted proof
of Theorem~\ref{thm:eight}.
%
\begin{center}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.435\textwidth]{sawblade3}}
\caption{A classification of minimal saw-blade complexes with three
blades. \label{fig:sb3}}
\end{figure}
\end{center}
%
\begin{proof}[Proof of Theorem~\ref{thm:eight}]
Eight of the $39$ $8$-vertex $3$-spheres have $15$ or less
tetrahedra and thus, by Corollary~\ref{cor:extcoll}, must have collapsing
probability $1$ (i.e., they are extendably collapsible after removing a
tetrahedron). In Gr\"unbaum and Sreedharan's labelling these are the
triangulations $P_{1}$ to $P_{7}$, and~$P_{13}$.
Furthermore, using our uniform sampling technique of spanning trees we
are able to collapse $22$ of the remaining $31$ triangulations to a
contractible non-collapsible $2$-complex and thus show that these have
collapsing probability $<1$, i.e.,
that none of them, after removing a tetrahedron, is extendably collapsible.
A certificate for the non-extendable collapsibility of each of the $22$
cases, in the form of a non-perfect discrete Morse function, can be found in
Appendix~\ref{app:eightV}.
The remaining nine cases, triangulations $P_{9}, P_{10}, P_{14}, \ldots ,
P_{18}, P_{21}, P_{34}$ in \cite{Gruenbaum67Enum8vtxSpheres}, have between
$16$ and $17$ tetrahedra.
Following the proof of Corollary~\ref{cor:extcoll}, after collapsing the
tetrahedra of an $n$-tetrahedron $8$-vertex $3$-sphere $S$ along a spanning
tree the remaining $2$-complex $C$, which by construction must be
contractible, has $(n+1)$ triangles and at most $8$ vertices. Combining
Lemma~\ref{lem:2compl} and Theorem~\ref{thm:class} this means that $C$ either
collapses onto a point or it is isomorphic to one of the seven minimal
triangulations of the dunce hat, for $16 \leq n \leq 17$, or it is isomorphic
to one of the $80$ contractible non-collapsible $2$-complexes with $18$
triangles, in the case of $n=17$ only.
%
\begin{center}
\begin{figure}[htb]
\centerline{\includegraphics[width=\textwidth]{sawblade2}}
\caption{A classification of minimal saw-blade complexes with two blades.
\label{fig:sb2}}
\end{figure}
\end{center}
%
An exhaustive search for all labellings of all $87$ complexes in the
$2$-skeleta of all nine $3$-spheres showed that none of the nine remaining
$8$-vertex $3$-spheres contain such a contractible non-collapsible
$2$-complex. Thus all of them are, after removing a tetrahedron,
extendably collapsible.
\end{proof}
In order to complement the qualitative study on collapsibility given by
Theorem~\ref{thm:eight}, we applied our uniform spanning tree heuristic to the
$8$-vertex $3$-spheres with $19$ and $20$ tetrahedra, using $10^7$ samples for
each complex. This gives a more detailed estimate for the collapsing probability
in the $8$-vertex case.
The results are listed in Table~\ref{tab:eight} together with their edge
variance, which will be discussed as an indicator for the collapsing probability
in Section~\ref{sec:nearlyNonColl}.
\begin{center}
\begin{longtable}{|l|l|r|r|}
\caption{Estimated collapsing probabilities and edge variance of
$8$-vertex $3$-sphere triangulations with
$19$ and $20$ tetrahedra, sample size
$N = 10^7$. \label{tab:eight}} \\
\hline
name in \cite{Gruenbaum67Enum8vtxSpheres}&$f$-vector / isomorphism signature*&coll. prob.&edge var. \\
\hline
\hline
\endfirsthead
\multicolumn{4}{l}%
{ \tablename\ \thetable{} -- continued from previous page} \\
\hline
name in \cite{Gruenbaum67Enum8vtxSpheres}&$f$-vector / isomorphism signature*&coll. prob.&edge var. \\
\endhead
\hline \multicolumn{4}{r}{{continued on next page --}} \\
\endfoot
\hline
\endlastfoot
&\multicolumn{3}{l|}{$(8,27,38,19)$} \\
\hline
$B$ &\texttt{deeffag.hbg.hag.hbg.hchdgbh.hehgh}** &$99.99902 \%$ &$0.83951$\\
$P_{32}$ &\texttt{deeffaf.gbg.gbgbh.gbh.hch.hahghcg} &$99.99922 \%$ &$0.83951$\\
$P_{31}$ &\texttt{deeffaf.gbg.gbgbh.hbg.hch.hghbhjh} &$99.99984 \%$ &$0.98765$\\
$P_{30}$ &\texttt{deeffaf.gbg.gbhbg.hbh.gfhahchehgg} &$99.99990 \%$ &$0.98765$\\
$P_{33}$ &\texttt{deeffaf.gbh.hbhbg.gbg.gbhcgegag.g} &$99.99988 \%$ &$1.13580$\\
\hline
&\multicolumn{3}{l|}{$(8,28,40,20)$} \\
\hline
$M$ &\texttt{deeffaf.gbh.gbgbh.gbh.hch.hbg.gehcg}*** &$99.99657 \%$ &$0.91837$\\
$P_{37}$ &\texttt{deeffaf.gbg.gbgbh.hbh.hch.hahehah.h} &$99.99942 \%$ &$1.20408$\\
$P_{36}$ &\texttt{deeffaf.gbg.gbhbg.hbh.hchbhahcheh.h} &$99.99968 \%$ &$1.20408$\\
$P_{35}$ &\texttt{deeffaf.gbg.gbhbh.hbh.gdhahahchfgfg} &$99.99997 \%$ &$1.34694$\\
\hline
\end{longtable}
\end{center}
\small
\noindent
* The isomorphism signature of a combinatorial manifold uniquely determines its isomorphism
type, i.e., two combinatorial manifolds have equal isomorphism signature if and only if
they are isomorphic. The isomorphism signature given in this table coincides with the one used
by simpcomp \cite{simpcomp,simpcompISSAC,simpcompISSAC11}. Use the function
\texttt{SCFromIsoSig(...)} to generate the complexes. See the manual for details.
\smallskip
\noindent
** This is Barnette's sphere.
\smallskip
\noindent
*** This is Br\"uckner's sphere.
\normalsize
\subsubsection*{$9$-vertex spheres}
There are $1296$ triangulated $9$-vertex $3$-spheres, first described
in \cite{Altshuler76CombMnf9VertAll}. We sampled $5\cdot 10^4$ spanning trees
for each of them. The results are summarised below where all $3$-sphere
triangulations with equal number of tetrahedra $n$ are grouped together.
Similar to the $8$-vertex case, a higher number of tetrahedra
correlates with a lower collapsing probability. The rightmost column
shows the smallest estimated collapsing probability within each
class of triangulations. Note that some of the empirical collapsing
probabilities below are too close to $1$ to yield robust estimates.
\newpage
\begin{center}
\begin{longtable}{|r|r|r|r|}
\caption{Estimated collapsing probabilities of $9$-vertex $3$-sphere
triangulations grouped by $f$-vector, and minimal estimated(!) collapsing
probability per group, sample size $N = 5\cdot 10^4$ per complex. \label{tab:nine}} \\
\hline
$n$&$\#$ complexes&avg. coll. prob.&min. coll. prob. \\
\hline
\hline
\endfirsthead
\multicolumn{4}{l}%
{ \tablename\ \thetable{} -- continued from previous page} \\
\hline
$n$&$\#$ complexes&avg. coll. prob.&min. coll. prob. \\
\endhead
\hline \multicolumn{4}{r}{{continued on next page --}} \\
\endfoot
\hline
\endlastfoot
$17$& $7$&$100.00000 \%$ &$100.000 \%$\\
\hline
$18$& $23$&$100.00000 \%$ &$100.000 \%$\\
\hline
$19$& $45$&$100.00000 \%$ &$100.000 \%$\\
\hline
$20$& $84$&$99.99993 \%$ &$99.998 \%$\\
\hline
$21$&$128$&$99.99983 \%$ &$99.998 \%$\\
\hline
$22$&$175$&$99.99952 \%$ &$99.996 \%$\\
\hline
$23$&$223$&$99.99898 \%$ &$99.994 \%$\\
\hline
$24$&$231$&$99.99753 \%$ &$99.980 \%$\\
\hline
$25$&$209$&$99.99443 \%$ &$99.962 \%$\\
\hline
$26$&$121$&$99.99051 \%$ &$99.952 \%$\\
\hline
$27$& $50$&$99.98024 \%$ &$99.920 \%$\\
\hline
\end{longtable}
\end{center}
In Section~\ref{sec:nearlyNonColl} below we discuss how the average squared
difference between the edge degrees and the average edge degree of a
triangulation, the {\em edge variance} (cf. Definition~\ref{def:edgevar}), influences
the collapsing probability of a triangulation. Essentially, the findings of
Section~\ref{sec:nearlyNonColl} suggest that a smaller edge variance correlates
with a lower collapsing probability. The following table lists empirical
collapsing probabilities for the triangulation with minimum edge variance
within each class of triangulations with fixed $f$-vector, with a much higher
number of $10^6$ samples per complex. Compare the estimated collapsing
probabilities with the values from the table above and with the values
given for the $8$- and $10$-vertex cases.
\begin{center}
\begin{longtable}{|r|l|@{\,}l@{\,}l@{\,}l@{\,}l@{\,}l@{\, }|r|}
\caption{Estimated collapsing probabilities of $9$-vertex, $n$-tetrahedron
$3$-sphere triangulations with minimum edge variance, $23\leq n \leq 27$,
sample size $N = 10^6$ per complex. \label{tab:ninemin}} \\
\hline
$n$&isomorphism signature*&\multicolumn{5}{l|}{edge degrees}&coll. prob. \\
\hline
\hline
\endfirsthead
\multicolumn{8}{l}%
{ \tablename\ \thetable{} -- continued from previous page} \\
\hline
$n$&isomorphism signature*&\multicolumn{5}{l|}{edge degrees}&coll. prob. \\
\endhead
\hline \multicolumn{8}{r}{{continued on next page --}} \\
\endfoot
\hline
\endlastfoot
$23$& \texttt{deeffag.hbg.iag.ibh.ichbidh.ibhbi.hfipijh} & &$3^{5}$&$4^{14}$&$5^{11}$&$6^{2}$ &$99.9981 \%$\\
\hline
$24$& \texttt{deeffaf.gbh.gbgbi.gbh.hch.ibg.geicg.hgigiji} & &$3^{6}$&$4^{13}$&$5^{10}$&$6^{4}$ &$99.9912 \%$\\
\hline
$25$& \texttt{deeffaf.gbh.gbgbi.gbh.ici.ibi.gbibhchchcikhdg} & &$3^{6}$&$4^{12}$&$5^{12}$&$6^{4}$ &$99.9766 \%$\\
\hline
$26$& \texttt{deeffaf.gbh.gbgbh.gbh.ici.hbi.ibibhaiag.ifihhjg} & $3^{6}$&$4^{13}$&$5^{11}$&$6^{4}$&$7^{1}$ &$99.9485 \%$\\
\hline
$27$& \texttt{deeffaf.gbh.gbgbi.gbh.ici.hbi.ibibhaiahahahcihhjg} & &$3^{6}$&$4^{12}$&$5^{15}$&$7^{3}$ &$99.9007 \%$\\
\hline
\end{longtable}
\end{center}
The full list of complexes and their empirical collapsing probabilities for
$5\cdot 10^4$ samples is available from the authors upon request.
\subsubsection*{$10$-vertex spheres}
There are $247\,882$ triangulated $10$-vertex $3$-spheres, first described
in \cite{Lutz08ThreeMfldsWith10Vertices}. We sampled $5\cdot 10^3$ spanning trees
for each of them. The results grouped by number of tetrahedra are summarised
in the table below.
\begin{center}
\begin{longtable}{|r|r|r|r|}
\caption{Estimated collapsing probabilities of $10$-vertex $3$-sphere triangulations
grouped by $f$-vector, and minimal estimated(!) collapsing probability per group,
sample size $N = 5\cdot 10^3$ per complex. \label{tab:ten}} \\
\hline
$n$&$\#$ complexes&avg. coll. prob.&min. coll. prob. \\
\hline
\hline
\endfirsthead
\multicolumn{4}{l}%
{ \tablename\ \thetable{} -- continued from previous page} \\
\hline
$n$&$\#$ complexes&avg. coll. prob.&min. coll. prob. \\
\endhead
\hline \multicolumn{4}{r}{{continued on next page --}} \\
\endfoot
\hline
\endlastfoot
$20$& $30$&$100.00000 \%$& $100.00 \%$\\
\hline
$21$& $124$&$100.00000 \%$& $100.00 \%$\\
\hline
$22$& $385$&$99.99990 \%$&$99.98 \%$\\
\hline
$23$& $952$&$99.99989 \%$&$99.98 \%$\\
\hline
$24$& $2142$&$99.99966 \%$&$99.96 \%$\\
\hline
$25$& $4340$&$99.99936 \%$&$99.96 \%$\\
\hline
$26$& $8106$&$99.99860 \%$&$99.94 \%$\\
\hline
$27$&$13853$&$99.99750 \%$&$99.92 \%$\\
\hline
$28$&$21702$&$99.99521 \%$&$99.90 \%$\\
\hline
$29$&$30526$&$99.99144 \%$&$99.86 \%$\\
\hline
$30$&$38553$&$99.98578 \%$&$99.80 \%$\\
\hline
$31$&$42498$&$99.97656 \%$&$99.72 \%$\\
\hline
$32$&$39299$&$99.96899 \%$&$99.52 \%$\\
\hline
$33$&$28087$&$99.94089 \%$&$99.40 \%$\\
\hline
$34$&$13745$&$99.91159 \%$&$99.16 \%$\\
\hline
$35$& $3540$&$99.87571 \%$&$99.20 \%$\\
\hline
\end{longtable}
\end{center}
Again, for each number of tetrahedra, we ran $10^6$ samples on the
triangulation with minimum edge variance.
\begin{center}
\begin{longtable}{|r|l|@{\,}l@{\,}l@{\,}l@{\,}l@{\,}l@{\, }|r|}
\caption{Estimated collapsing probability of $10$-vertex, $n$-tetrahedron
$3$-sphere triangulation with minimum edge variance, $27\leq n \leq 35$,
sample size $N = 10^6$ per complex. \label{tab:tenmin}} \\
\hline
$n$&isomorphism signature*&\multicolumn{5}{l|}{edge degrees}&coll. prob. \\
\hline
\hline
\endfirsthead
\multicolumn{8}{l}%
{ \tablename\ \thetable{} -- continued from previous page} \\
\hline
$n$&isomorphism signature*&\multicolumn{5}{l|}{edge degrees}&coll. prob. \\
\endhead
\hline \multicolumn{8}{r}{{continued on next page --}} \\
\endfoot
\hline
\endlastfoot
$27$& {\footnotesize \texttt{deeffaf.gbh.gbibj.ibh.jcj.jbg.iciahcigjghcidijjki}}&
&$3^{4}$&$4^{18}$&$5^{12}$&$6^{3}$&$99.9974 \%$\\
\hline
$28$& {\footnotesize \texttt{deeffaf.gbh.gbibj.ibh.hch.jbg.iciajci.hgjgjbidibjwi}}&
&$3^{4}$&$4^{19}$&$5^{10}$&$6^{5}$&$99.9917 \%$\\
\hline
$29$& {\footnotesize \texttt{deefgaf.hbg.hbi.iai.j.iaj.hdiaj.j.ibj.jajchcjkhgjdirh}}&
&$3^{5}$&$4^{16}$&$5^{13}$&$6^{5}$&$99.9739 \%$\\
\hline
$30$& {\footnotesize \texttt{deeffag.hbi.jag.ibh.jbi.ibj.ichchbj.hbj.jbgcgdjchcggjDh}}&
&&&$3^{10}$&$5^{30}$&$99.9612 \%$\\
\hline
$31$& {\footnotesize \texttt{deeffaf.gbh.ibjbh.ibh.hbi.i.hbi.jeg.ibgcgdgdifjcjcjbgcitj}}&
&$3^{7}$&$4^{10}$&$5^{19}$&$6^{5}$&$99.8560 \%$\\
\hline
$32$& {\footnotesize \texttt{deefgaf.hbg.ibj.jah.i.jag.jbi.jci.jeigf.jbfgi.hbicj.fjhdhah}}&
$3^{5}$&$4^{14}$&$5^{18}$&$6^{4}$&$7^{1}$&$99.7397 \%$\\
\hline
$33$& {\footnotesize \texttt{deefgaf.hbg.hbi.iah.j.iag.icicj.ibj.jajgf.ibfgj.hbjcjghgfdh.h}}&
$3^{5}$&$4^{15}$&$5^{16}$&$6^{6}$&$7^{1}$&$99.5714 \%$\\
\hline
$34$& {\footnotesize \texttt{deefgaf.hbi.gbh.iahaiag.jbj.jbg.g.hafcg.ibjcf.jbjgcdigjfjhjbccc}}&
$3^{6}$&$4^{12}$&$5^{19}$&$6^{6}$&$7^{1}$&$99.2457 \%$\\
\hline
$35$& {\footnotesize \texttt{deefgaf.hbi.gbh.hajajai.hbgaibj.hbjbjeiafaiciaj.gbjcj.ijghg.ggi.i}}&
$3^{5}$&$4^{17}$&$5^{16}$&$6^{2}$&$7^{5}$&$99.1755 \%$\\
\hline
\end{longtable}
\end{center}
The full list of complexes and their empirical collapsing probabilities for
$5\cdot 10^3$ samples is available from the authors upon request.
\bigskip
Altogether, if we restrict ourselves to $v$-vertex $2$-neighbourly $3$-sphere
triangulations, $v\leq 10$, we obtain the following average empirical
collapsing probabilities:
\begin{center}
\begin{tabular}{|r|r|r|}
\hline
$v$&$\#$ complexes&$1 - $ avg. coll. prob. \\
\hline
\hline
$\leq 7$& $3$&$0.0000000$\\
\hline
$8$& $4$&$0.0000109$\\
\hline
$9$& $50$&$0.0002064$\\
\hline
$10$& $3540$&$0.0012429$\\
\hline
\end{tabular}
\end{center}
This very limited amount of data already exhibits a rapid increase in the
proportion of non-collapsing sequences for triangulations of increasing size.
This supports speculations that pathologically complicated combinatorial
objects become rapidly more common as the size of a triangulation increases.
However, whether or not these numbers actually suggest that the average
collapsing probability approaches $0$ as $v \to \infty$ remains unclear.
\section{Producing nearly non-collapsible $3$-sphere triangulations}
\label{sec:nearlyNonColl}
In this section, we propose a heuristic to pre-evaluate whether a
$3$-sphere triangulation has complicated
combinatorial properties (such as very few or no collapsing sequences) or not,
based on the simple combinatorial property of the edge degrees of the
triangulation. Of course, there must be theoretical limits to how effective
this pre-evaluation can be due to the potential hardness of the underlying
problem, but a deeper understanding of the impact of
simple, local combinatorial structures on (non-)collapsing sequences
gives valuable insights into triangulations with pathological
combinatorial characteristics.
\medskip
Let $S$ be a triangulation of the $3$-sphere with $f$-vector
$f(S) = (v,v+n,2n,n)$ and for any edge $e \in \operatorname{skel}_1 (S)$
let $\operatorname{deg}_S (e)$ be the edge degree of $e$ in $S$ (i.e., the
number of tetrahedra of $S$ containing $e$).
Furthermore, let $\Gamma(S)$ be the face pairing graph of $S$ and let
$T \subset \Gamma(S)$ be a spanning tree.
The $2$-dimensional simplicial complex obtained by collapsing $S$ along $T$
is denoted by ${S}_T$, and $e$ is called {\em free} in ${S}_T$ if
$e$ has degree one in ${S}_T$ (i.e., it is only contained in one triangle of
$S_T$).
For any given spanning tree $T \subset \Gamma(S)$, we expect the number of
free edges in ${S}_T$ (more precisely, the number of triangles with free edges)
to strongly correlate with whether or not $T$ leads to a collapsing sequence:
triangles of ${S}_T$ can be removed as long as there are free edges left and the
removal of any triangle has a clear tendency to produce new free edges. The
more free edges there are to begin with, the higher we expect the chances to be
that in this process all triangles can be removed.
In more concrete terms, let $ \operatorname{skel}_1 (S) = \{
e_1 , \ldots , e_{n+v} \} $ be the set of edges of $S$ and let
$$p_i \,\, = \,\, \frac{ | \{ T \textrm{ spanning tree of } S \, \mid \, e_i \textrm{ free in } S_T \} | }{ | \{ \textrm{spanning trees of } S \} | } $$
be the probability that the edge $e_i$ is free in $S_T$ for $T$ sampled
uniformly at random. Then, the vector $ (p_1, \ldots , p_{n+v} ) $
contains information about the collapsing probability of $S$.
Obviously, the exact value of the $p_i$ depends on the structure
of $S$ and getting a precise estimate for the $p_i$ involves sampling
spanning trees (which, at the same time, gives us an estimator for the
collapsing probability -- the quantity we want to pre-evaluate).
Instead we argue that the degree of the edge $e_i$, a quantity which can be
very easily extracted from $S$, influences the value of $p_i$ by virtue of the
following observation.
\begin{theorem}
\label{thm:freedegdedges}
Let $S$ be a triangulation of the $3$-sphere and let
$e_i \in \operatorname{skel}_1 (S)$ be an edge of $S$ of degree
$\operatorname{deg}_S (e_i) = k$. Furthermore, let $p_i$ be the proportion of
spanning trees of $\Gamma (S)$ for which $e_i$ is free in $S_T$. Then
$$ p_i \quad \geq \quad \frac{4}{7} \cdot \left ( \frac{4}{13} \right )^{k-2}.$$
\end{theorem}
\begin{proof}
Since $S$ is a simplicial complex, the star of $e_i$ is represented in
$\Gamma (S)$ as the boundary of a $k$-gon $C = \langle \Delta_1 , \Delta_2 ,
\ldots , \Delta_{k} \rangle$ with no chords. To see this note that
a chord in the $k$-gon represents an identification of triangular
boundary faces of the star of $e_i$ in $S$ which contradicts the
simplicial complex property (cf. Figure~\ref{fig:spindles} on the left hand
side). Moreover, for any spanning tree $T \subset \Gamma (S)$, $e_i$ is
free in $S_T$ if and only if $T$ intersects $C$ in a (connected) path of
length $k-1$.
A spanning tree $T \subset \Gamma (S)$ can be found by following a random walk
in $\Gamma (S)$ discarding all arcs on the way which close a cycle.
Here, we only concentrate on the probability of one particular class
of random walks which always result in a $2$-complex $S_T$
in which $e_i$ is free.
W.l.o.g., let $\Delta_1$ be the first node of the cycle $C$ which is visited
by the random walk, say in step $m$. One way for
$T \cap C$ to be a path of length $k-1$ is if the random walk travels
through all arcs $ (\Delta_j , \Delta_{j+1} )$, $1 \leq j \leq k-1$,
travelling back and forth between nodes $\Delta_j$ and $u_j$, $v_j$ or
$\Delta_{j-1}$ on the way (cf. Figure~\ref{fig:spindles} on the right hand
side). In step $m+1$ the walk is at one of $\Delta_2$ or $\Delta_k$ with
probability $\frac{1}{2}$. If the random walk does not
choose one of these two options, there is an overall $\frac{1}{8}$ chance
that it revisits $\Delta_1$ in step $m+2$ without visiting any of
the $\Delta_{\ell}$ first (remember: there are no chords in the $k$-cycle).
Moreover, there is an at least $\frac{1}{64}$ chance that it
revisits $\Delta_1$ in step $m+4$, etc. Altogether, there is a
$$ \frac{1}{2} \cdot \sum \limits_{i \geq 0} 8^{-i} \quad = \quad \frac{4}{7} $$
chance that the random walk hits $\Delta_2$ or $\Delta_k$ such that
it is still in the class of random walks we are considering (and, in
particular, can still produce a spanning tree leading to a free edge $e_i$).
\begin{figure}
\begin{center}
\includegraphics[width=.8\textwidth]{spindles}
\caption{Left: star of an edge in a simplicial $3$-dimensional
triangulation. Right: the corresponding feature in the
dual graph. \label{fig:spindles}}
\end{center}
\end{figure}
W.l.o.g., let the random walk be now at $\Delta_2$ (if it is at
$\Delta_k$ we relabel). There is a $\frac{1}{4}$ chance that the random walk
is at $\Delta_3$ in the next step and a $\frac{3}{16}$ chance
that the random walk revisits $\Delta_2$ after two steps (without hitting
any node other than $u_2$, $v_2$ or $\Delta_1$). Hence, there is an overall
chance of
$$ \frac{1}{4} \cdot \sum \limits_{i\geq 0} \left ( \frac{3}{16} \right )^{i}
\quad = \quad \frac{4}{13} $$
that the random walk reaches $\Delta_3$ and still is in the class of
random walks we are considering (and, in particular, still has the chance to
produce a spanning tree leading to a free edge $e_i$).
Iterating the argument proves the theorem.
\end{proof}
The above lower bounds are not expected to be sharp, especially not for higher
degrees. However, finding effective upper bounds for the $ p_i $'s is
difficult due to the unknown global structure of $S$.
Nonetheless, Theorem \ref{thm:freedegdedges} supports the intuitive
assumption that edges of small degrees have a much higher chance of becoming
free than higher degree edges.
\medskip
To see how the lower bounds given in Theorem~\ref{thm:freedegdedges} compare
to the actual values of the $p_i$, we take a closer look at the set of
$2$-neighbourly $9$-vertex $3$-sphere triangulations. Namely, for
every $9$-vertex $2$-neighbourly $3$-sphere $S$ and for every edge $e_i \in S$,
$1 \leq i \leq n+v$,
we uniformly sample $10^5$ spanning trees $T$ and record how often
$e_i$ is a free edge in $S_T$ -- the $2$-complex obtained by collapsing all
tetrahedra of $S$ along $T$. We then compare this number to the degree
$\operatorname{deg}_S(e_i)$ of $e_i$.
The results of this experiment are shown in Figure~\ref{fig:degExpts}.
On the horizontal axis all $50$ $2$-neighbourly $9$-vertex triangulations
of the $3$-sphere are listed. The vertical axis lists the estimators
$\hat{p}_i$, $1 \leq i \leq n+v$, of the probability of the edge $e_i$ to
be free in $S_T$ with spanning tree $T$ chosen uniformly at random.
Note how the estimators $\hat{p}_i$ display an exponential decay in the degree
of the edges, as suggested by Theorem~\ref{thm:freedegdedges}. Moreover,
for most edges $e_i$, the estimator satisfies
$\hat{p}_i \sim 2^{2-\operatorname{deg}_S(e_i)}$.
\begin{figure}
\begin{center}
\input{freeEdges}
\caption{Every dot represents the estimated probability
$\hat{p}_i$ of an edge $e_i$, $1 \leq i \leq 36$, in one of the
$2$-neighbourly $9$-vertex $3$-sphere triangulations being free after
collapsing the $3$-cells along a uniformly sampled spanning tree.
The lines indicate the average probability of an edge being free
grouped by degree. Sample size $10^5$. \label{fig:degExpts}}
\end{center}
\end{figure}
\subsection*{The edge variance of a triangulation}
Assuming the above observations about the quantities $p_i$ hold in reasonable
generality, this motivates the following strategy to produce complicated
triangulations of the $3$-sphere (i.e., triangulations of the $3$-sphere
with collapsing probability near zero).
\medskip
Let $S$ be an $n$-tetrahedron, $v$-vertex triangulation of the $3$-sphere. Every
tetrahedron has six edges and the total number of edges in $S$ is given
by $n+v$. Hence, the average edge degree of an edge in $S$ is given by
$$ \overline{\operatorname{deg}}_{S} = \frac{6 n}{n+v}. $$
In addition, since $S$ is simplicial, every edge degree must be at least three.
We have seen in the above section that, given a spanning tree
$T \subset \Gamma (S)$ chosen uniformly at random, the probability $p_i$ of an
edge $e_i$ of degree $k$ being free can be assumed to decay exponentially in $k$.
At the same time, it is reasonable to assume that, at least on average, more
free edges lead to more collapsing sequences.
Combining these two statements, we can expect that $3$-sphere
triangulations with few edges of degree three and four should be, on average,
more difficult to collapse than triangulations with many low degree edges.
To quantify this property we define the following combinatorial invariant.
\begin{definition}
\label{def:edgevar}
Let $S$ be an $n$-tetrahedron, $v$-vertex simplicial triangulation of
the $3$-sphere, and let $\overline{\operatorname{deg}}_{S} = \frac{6 n}{n+v}$
be its average edge degree. The quantity
$$ \operatorname{var} (S) = \frac{1}{n+v} \sum \limits_{i=1}^{n+v}
(\overline{\operatorname{deg}}_{S} - \operatorname{deg}_S (e_i))^2 $$
is referred to as the {\it edge variance} of $S$.
\end{definition}
Given a $3$-sphere triangulation $S$, computing $\operatorname{var} (S)$
is a very simple procedure but might give away valuable hints towards
the collapsing probability of $S$.
Note that this measure must fail in general, since non-collapsibility
(i.e., collapsing probability zero) can be a local feature of a
triangulation and thus cannot always be picked up by the
edge variance, which is a global quantity. Nonetheless, first experiments
with heuristics based on the edge variance seem promising (see below).
\begin{remark}
For experimental evidence that the edge variance is indeed a valuable
measure of complicatedness, compare the average collapsing probabilities
of $8$-, $9$- and $10$-vertex $3$-spheres in Tables~\ref{tab:eight},
\ref{tab:nine} and \ref{tab:ten}
with the collapsing probabilities for spheres with minimal edge variance
in Tables~\ref{tab:eight}, \ref{tab:ninemin}, and \ref{tab:tenmin}, respectively.
Keep in mind that some of the complexes admit very few non-collapsing
sequences and thus only the estimators of near-neighbourly triangulations
can be assumed to be reasonably robust.
\end{remark}
\subsection*{A heuristic to produce complicated $3$-sphere triangulations}
The heuristic to produce complicated $3$-sphere triangulations is
straightforward: Given a $3$-sphere triangulation $S$, we perform bistellar
one- and two-moves in order to reduce the edge variance.
The heuristic follows a simulated annealing approach where phases of
reducing the edge variance are followed by phases where the edge variance is
deliberately increased. For a much more detailed discussion of simulated
annealing type simplification heuristics based on bistellar moves see
\cite{Bjoerner00SimplMnfBistellarFlips}. The complex with the smallest
edge variance encountered so far is stored and returned after a maximum
number of moves is performed.
\medskip
As a proof of concept, we are able to produce a $15$-vertex $3$-sphere
triangulation $S_{15}$ with only $2.5903 \pm 0.0618 \%$ collapsing sequences,
at error probability $0.01 \%$.
Its facet list is given in Appendix~\ref{app:fiveteen}. See
the table below for a comparison of this number with
known small and complicated $3$-sphere triangulations from
\cite{Benedetti13RandomDMT}.
\bigskip
\begin{center}
\begin{tabular}{|l|l|r|}
\hline
triangulation&$f$-vector&exp. coll. prob. ($N = 10^6$) \\
\hline
\hline
$S_{15}$&$(15,105,180,90)$&$0.025903 \pm 0.000618$ \\
\hline
\texttt{trefoil}&$(13,69,112,56)$&$0.839725 \pm 0.001427$ \\
\hline
\texttt{double\_trefoil}&$(16,108,184,92)$&$0.193914 \pm 0.001538$ \\
\hline
\texttt{triple\_trefoil}&$(18,143,250,125)$&$0.000000 \pm 0.000000$ \\
\hline
\hline
\end{tabular}
\end{center}
\bigskip
Extending this approach to a more sophisticated heuristic with optimised
parameters is work in progress.
The potential of this framework lies in applying it to
the inverse problem of producing a collapsible triangulation.
Given a $3$-manifold triangulation $M$, one of the most basic approaches to
prove that $M$ is a $3$-sphere is to describe a collapsing sequence of
$M$. This approach, however, cannot work for non-collapsible $3$-sphere
triangulations.
Using the idea of the edge variance, and given a complicated triangulation
of a $3$-manifold, we first try to increase its edge variance
and then try to collapse it.
\medskip
Implementing such a heuristic is simple. However, testing its effectiveness is
not: as of today there are simply not enough small but complicated $3$-spheres
known to test this approach against existing heuristics.
\section{Non-collapsing sequences of $22$ of the $39$ $3$-sphere triangulations with $8$ vertices}
\label{app:eightV}
\begin{center}
\begin{longtable}{|@{}l@{}|@{}l@{}|@{}l@{}|@{}l@{}|}
\caption{Non-collapsing sequences (non-perfect discrete Morse functions)
of $22$ of the $39$ $8$-vertex $3$-sphere triangulations. \label{tab:test}} \\
\hline
name&facets&collapsing of $3$-skeleton&contr. non-coll. $2$-compl. \\
\hline
\hline
\endfirsthead
\multicolumn{4}{l}%
{ \tablename\ \thetable{} -- continued from previous page} \\
\hline
name&facets&collapsing of $3$-skeleton&contr. non-coll. $2$-compl. \\
\endhead
\hline \multicolumn{4}{r}{{continued on next page --}} \\
\endfoot
\endlastfoot
$P_{25}$&
\footnotesize $\begin{array}{lll}
\langle 1,2,3,4 \rangle, & \langle 1,3,6,8 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 2,3,5,6 \rangle, & \langle 3,5,6,8 \rangle, & \langle 4,5,6,8 \rangle,\\
\langle 3,5,7,8 \rangle, & \langle 4,5,7,8 \rangle, & \langle 4,6,7,8 \rangle,\\
\langle 1,4,5,7 \rangle, & \langle 1,4,6,7 \rangle, & \langle 1,2,4,6 \rangle,\\
\langle 1,6,7,8 \rangle, & \langle 1,3,7,8 \rangle, & \langle 1,3,5,7 \rangle,\\
\langle 1,3,4,5 \rangle, & \langle 2,3,4,5 \rangle, & \langle 2,4,5,6 \rangle.
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 2,4,5,6 \rangle, &
\langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle,\\
\langle 3,4,5 \rangle&\to&\langle 3,4,5,4 \rangle, & \langle 1,3,5 \rangle&\to&\langle 1,3,5,5 \rangle,\\
\langle 1,3,7 \rangle&\to&\langle 1,3,7,7 \rangle, & \langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle,\\
\langle 2,4,6 \rangle&\to&\langle 2,4,6,4 \rangle, & \langle 1,4,6 \rangle&\to&\langle 1,4,6,6 \rangle,\\
\langle 1,4,7 \rangle&\to&\langle 1,4,7,5 \rangle, & \langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle,\\
\langle 4,5,7 \rangle&\to&\langle 4,5,7,7 \rangle, & \langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle,\\
\langle 4,5,8 \rangle&\to&\langle 4,5,8,6 \rangle, & \langle 5,6,8 \rangle&\to&\langle 5,6,8,6 \rangle,\\
\langle 3,5,6 \rangle&\to&\langle 3,5,6,5 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle,\\
\langle 1,3,8 \rangle&\to&\langle 1,3,8,6 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 4,6,7 \rangle, & \langle 4,7,8 \rangle, & \langle 4,6,8 \rangle,\\
\langle 1,3,6 \rangle, & \langle 1,3,4 \rangle, & \langle 3,6,8 \rangle,\\
\langle 1,6,7 \rangle, & \langle 3,7,8 \rangle, & \langle 2,3,4 \rangle,\\
\langle 1,2,4 \rangle, & \langle 3,5,7 \rangle, & \langle 2,3,5 \rangle,\\
\langle 1,2,6 \rangle, & \langle 2,5,6 \rangle, & \langle 4,5,6 \rangle,\\
\langle 1,4,5 \rangle, & \langle 1,5,7 \rangle.&
\end{array}$\\
\hline\hline
$P_{32}$&
\footnotesize $\begin{array}{lll}
\langle 1,4,6,7 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 2,5,7,8 \rangle, & \langle 1,2,4,6 \rangle, & \langle 4,5,6,8 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 2,5,6,8 \rangle, & \langle 2,3,6,8 \rangle,\\
\langle 1,2,3,6 \rangle, & \langle 1,3,6,7 \rangle, & \langle 1,3,5,7 \rangle,\\
\langle 1,4,5,7 \rangle, & \langle 4,5,7,8 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 4,6,7,8 \rangle, & \langle 3,6,7,8 \rangle, & \langle 2,3,7,8 \rangle,\\
\langle 2,3,5,7 \rangle.&&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 2,3,5,7 \rangle, &
\langle 2,3,7 \rangle&\to&\langle 2,3,7,7 \rangle,\\
\langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle, & \langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle,\\
\langle 2,3,5 \rangle&\to&\langle 2,3,5,4 \rangle, & \langle 4,7,8 \rangle&\to&\langle 4,7,8,7 \rangle,\\
\langle 4,5,7 \rangle&\to&\langle 4,5,7,5 \rangle, & \langle 1,5,7 \rangle&\to&\langle 1,5,7,5 \rangle,\\
\langle 1,3,7 \rangle&\to&\langle 1,3,7,6 \rangle, & \langle 1,3,6 \rangle&\to&\langle 1,3,6,3 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,6 \rangle, & \langle 2,6,8 \rangle&\to&\langle 2,6,8,6 \rangle,\\
\langle 2,4,5 \rangle&\to&\langle 2,4,5,5 \rangle, & \langle 4,5,6 \rangle&\to&\langle 4,5,6,6 \rangle,\\
\langle 2,4,6 \rangle&\to&\langle 2,4,6,4 \rangle, & \langle 2,5,8 \rangle&\to&\langle 2,5,8,7 \rangle,\\
\langle 3,4,5 \rangle&\to&\langle 3,4,5,4 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle,\\
\langle 1,4,6 \rangle&\to&\langle 1,4,6,6 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 4,6,8 \rangle, & \langle 4,6,7 \rangle, & \langle 1,4,7 \rangle,\\
\langle 1,6,7 \rangle, & \langle 4,5,8 \rangle, & \langle 1,2,6 \rangle,\\
\langle 3,6,8 \rangle, & \langle 1,4,5 \rangle, & \langle 1,3,5 \rangle,\\
\langle 3,6,7 \rangle, & \langle 3,5,7 \rangle, & \langle 5,6,8 \rangle,\\
\langle 2,5,6 \rangle, & \langle 5,7,8 \rangle, & \langle 1,2,3 \rangle,\\
\langle 2,3,8 \rangle, & \langle 2,5,7 \rangle, & \langle 2,7,8 \rangle.
\end{array}$\\
\hline\hline
$P_{31}$&
\footnotesize $\begin{array}{lll}
\langle 2,5,7,8 \rangle, & \langle 5,6,7,8 \rangle, & \langle 4,5,6,7 \rangle,\\
\langle 1,2,4,6 \rangle, & \langle 1,4,6,7 \rangle, & \langle 1,6,7,8 \rangle,\\
\langle 1,3,6,8 \rangle, & \langle 1,4,5,7 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 1,2,3,4 \rangle, & \langle 1,2,3,6 \rangle, & \langle 2,3,6,8 \rangle,\\
\langle 2,5,6,8 \rangle, & \langle 2,4,5,6 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 2,3,5,7 \rangle, & \langle 2,3,7,8 \rangle, & \langle 1,3,7,8 \rangle,\\
\langle 1,3,5,7 \rangle.&&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,3,5,7 \rangle, &
\langle 1,3,7 \rangle&\to&\langle 1,3,7,7 \rangle,\\
\langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle, & \langle 2,3,7 \rangle&\to&\langle 2,3,7,5 \rangle,\\
\langle 2,3,5 \rangle&\to&\langle 2,3,5,4 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,5 \rangle,\\
\langle 2,5,6 \rangle&\to&\langle 2,5,6,6 \rangle, & \langle 2,6,8 \rangle&\to&\langle 2,6,8,6 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle,\\
\langle 1,3,4 \rangle&\to&\langle 1,3,4,4 \rangle, & \langle 1,4,5 \rangle&\to&\langle 1,4,5,5 \rangle,\\
\langle 1,3,6 \rangle&\to&\langle 1,3,6,6 \rangle, & \langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle,\\
\langle 1,6,7 \rangle&\to&\langle 1,6,7,6 \rangle, & \langle 1,4,6 \rangle&\to&\langle 1,4,6,4 \rangle,\\
\langle 4,5,7 \rangle&\to&\langle 4,5,7,6 \rangle, & \langle 5,6,7 \rangle&\to&\langle 5,6,7,7 \rangle,\\
\langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 2,3,8 \rangle, & \langle 1,3,8 \rangle, & \langle 1,6,8 \rangle,\\
\langle 1,3,5 \rangle, & \langle 1,2,6 \rangle, & \langle 3,4,5 \rangle,\\
\langle 2,3,4 \rangle, & \langle 2,4,6 \rangle, & \langle 1,2,4 \rangle,\\
\langle 4,6,7 \rangle, & \langle 6,7,8 \rangle, & \langle 1,4,7 \rangle,\\
\langle 4,5,6 \rangle, & \langle 1,5,7 \rangle, & \langle 2,7,8 \rangle,\\
\langle 5,6,8 \rangle, & \langle 2,5,7 \rangle, & \langle 2,5,8 \rangle.
\end{array}$\\
\hline\hline
$P_{37}$&
\footnotesize $\begin{array}{lll}
\langle 2,4,5,6 \rangle, & \langle 2,3,5,7 \rangle, & \langle 1,3,6,8 \rangle,\\
\langle 1,6,7,8 \rangle, & \langle 1,2,4,6 \rangle, & \langle 1,4,6,7 \rangle,\\
\langle 1,3,5,7 \rangle, & \langle 1,3,7,8 \rangle, & \langle 2,3,7,8 \rangle,\\
\langle 2,5,7,8 \rangle, & \langle 4,5,7,8 \rangle, & \langle 1,4,5,7 \rangle,\\
\langle 1,3,4,5 \rangle, & \langle 2,3,4,5 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 1,2,3,6 \rangle, & \langle 2,3,6,8 \rangle, & \langle 2,5,6,8 \rangle,\\
\langle 4,5,6,8 \rangle, & \langle 4,6,7,8 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 4,6,7,8 \rangle, &
\langle 4,6,8 \rangle&\to&\langle 4,6,8,6 \rangle,\\
\langle 5,6,8 \rangle&\to&\langle 5,6,8,6 \rangle, & \langle 2,6,8 \rangle&\to&\langle 2,6,8,6 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle,\\
\langle 2,3,4 \rangle&\to&\langle 2,3,4,4 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,4 \rangle,\\
\langle 1,4,5 \rangle&\to&\langle 1,4,5,5 \rangle, & \langle 4,5,7 \rangle&\to&\langle 4,5,7,7 \rangle,\\
\langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle, & \langle 2,7,8 \rangle&\to&\langle 2,7,8,7 \rangle,\\
\langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle, & \langle 1,3,7 \rangle&\to&\langle 1,3,7,5 \rangle,\\
\langle 4,6,7 \rangle&\to&\langle 4,6,7,6 \rangle, & \langle 1,4,6 \rangle&\to&\langle 1,4,6,4 \rangle,\\
\langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle, & \langle 1,6,8 \rangle&\to&\langle 1,6,8,6 \rangle,\\
\langle 2,3,5 \rangle&\to&\langle 2,3,5,5 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,5 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 2,4,6 \rangle, & \langle 2,5,6 \rangle, & \langle 1,2,4 \rangle,\\
\langle 1,4,7 \rangle, & \langle 4,7,8 \rangle, & \langle 4,5,6 \rangle,\\
\langle 2,5,7 \rangle, & \langle 4,5,8 \rangle, & \langle 1,5,7 \rangle,\\
\langle 1,2,6 \rangle, & \langle 2,3,7 \rangle, & \langle 2,5,8 \rangle,\\
\langle 3,5,7 \rangle, & \langle 2,3,8 \rangle, & \langle 1,6,7 \rangle,\\
\langle 6,7,8 \rangle, & \langle 3,6,8 \rangle, & \langle 1,3,5 \rangle,\\
\langle 1,3,6 \rangle.&&
\end{array}$\\
\hline\hline
$P_{30}$&
\footnotesize $\begin{array}{lll}
\langle 1,3,7,8 \rangle, & \langle 1,2,4,6 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 1,3,6,8 \rangle, & \langle 1,4,6,8 \rangle, & \langle 1,4,7,8 \rangle,\\
\langle 4,5,7,8 \rangle, & \langle 2,5,6,7 \rangle, & \langle 2,3,5,7 \rangle,\\
\langle 1,3,5,7 \rangle, & \langle 2,4,5,6 \rangle, & \langle 1,4,5,7 \rangle,\\
\langle 1,3,4,5 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 2,3,6,7 \rangle, & \langle 3,6,7,8 \rangle, & \langle 5,6,7,8 \rangle,\\
\langle 4,5,6,8 \rangle.&&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 4,5,6,8 \rangle, &
\langle 5,6,8 \rangle&\to&\langle 5,6,8,7 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle, & \langle 3,6,7 \rangle&\to&\langle 3,6,7,6 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle,\\
\langle 1,3,4 \rangle&\to&\langle 1,3,4,4 \rangle, & \langle 1,4,5 \rangle&\to&\langle 1,4,5,5 \rangle,\\
\langle 4,5,6 \rangle&\to&\langle 4,5,6,5 \rangle, & \langle 1,5,7 \rangle&\to&\langle 1,5,7,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 2,5,7 \rangle&\to&\langle 2,5,7,6 \rangle,\\
\langle 4,5,7 \rangle&\to&\langle 4,5,7,7 \rangle, & \langle 4,7,8 \rangle&\to&\langle 4,7,8,7 \rangle,\\
\langle 1,4,8 \rangle&\to&\langle 1,4,8,6 \rangle, & \langle 1,6,8 \rangle&\to&\langle 1,6,8,6 \rangle,\\
\langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle, & \langle 2,4,6 \rangle&\to&\langle 2,4,6,4 \rangle,\\
\langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 1,2,6 \rangle, & \langle 2,3,4 \rangle, & \langle 2,5,6 \rangle,\\
\langle 1,3,6 \rangle, & \langle 2,6,7 \rangle, & \langle 5,6,7 \rangle,\\
\langle 1,2,4 \rangle, & \langle 1,3,8 \rangle, & \langle 2,3,7 \rangle,\\
\langle 1,3,7 \rangle, & \langle 1,7,8 \rangle, & \langle 3,6,8 \rangle,\\
\langle 1,4,6 \rangle, & \langle 2,3,5 \rangle, & \langle 5,7,8 \rangle,\\
\langle 4,6,8 \rangle, & \langle 3,4,5 \rangle, & \langle 4,5,8 \rangle.
\end{array}$\\
\hline\hline
$P_{36}$&
\footnotesize $\begin{array}{lll}
\langle 3,6,7,8 \rangle, & \langle 2,5,7,8 \rangle, & \langle 1,4,5,7 \rangle,\\
\langle 1,4,6,8 \rangle, & \langle 1,2,4,6 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 2,6,7,8 \rangle, & \langle 2,5,6,8 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 4,5,6,8 \rangle, & \langle 4,5,7,8 \rangle,\\
\langle 1,4,7,8 \rangle, & \langle 1,3,7,8 \rangle, & \langle 1,3,6,8 \rangle,\\
\langle 1,2,3,6 \rangle, & \langle 2,3,6,7 \rangle, & \langle 2,3,5,7 \rangle,\\
\langle 1,3,5,7 \rangle, & \langle 1,3,4,5 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,3,4,5 \rangle, &
\langle 1,3,5 \rangle&\to&\langle 1,3,5,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 2,3,7 \rangle&\to&\langle 2,3,7,6 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle, & \langle 1,3,6 \rangle&\to&\langle 1,3,6,6 \rangle,\\
\langle 1,3,8 \rangle&\to&\langle 1,3,8,7 \rangle, & \langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle,\\
\langle 4,7,8 \rangle&\to&\langle 4,7,8,7 \rangle, & \langle 4,5,8 \rangle&\to&\langle 4,5,8,6 \rangle,\\
\langle 4,5,6 \rangle&\to&\langle 4,5,6,5 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle,\\
\langle 2,5,6 \rangle&\to&\langle 2,5,6,6 \rangle, & \langle 2,6,8 \rangle&\to&\langle 2,6,8,7 \rangle,\\
\langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle, & \langle 1,2,4 \rangle&\to&\langle 1,2,4,4 \rangle,\\
\langle 1,4,6 \rangle&\to&\langle 1,4,6,6 \rangle, & \langle 1,5,7 \rangle&\to&\langle 1,5,7,5 \rangle,\\
\langle 2,5,7 \rangle&\to&\langle 2,5,7,7 \rangle, & \langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 3,6,7 \rangle, & \langle 3,7,8 \rangle, & \langle 3,6,8 \rangle,\\
\langle 1,6,8 \rangle, & \langle 2,7,8 \rangle, & \langle 2,3,5 \rangle,\\
\langle 2,5,8 \rangle, & \langle 2,6,7 \rangle, & \langle 1,2,6 \rangle,\\
\langle 1,2,3 \rangle, & \langle 1,3,7 \rangle, & \langle 1,4,8 \rangle,\\
\langle 1,4,7 \rangle, & \langle 4,6,8 \rangle, & \langle 5,7,8 \rangle,\\
\langle 4,5,7 \rangle, & \langle 2,4,6 \rangle, & \langle 3,4,5 \rangle,\\
\langle 2,3,4 \rangle.&&
\end{array}$\\
\hline\hline
$M$&
\footnotesize $\begin{array}{lll}
\langle 1,3,4,5 \rangle, & \langle 1,4,5,8 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 3,6,7,8 \rangle, & \langle 4,6,7,8 \rangle, & \langle 4,5,6,8 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 2,3,4,5 \rangle, & \langle 2,3,7,8 \rangle,\\
\langle 1,4,6,7 \rangle, & \langle 1,2,4,6 \rangle, & \langle 2,3,5,7 \rangle,\\
\langle 1,3,5,7 \rangle, & \langle 1,3,6,7 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 2,3,6,8 \rangle, & \langle 2,5,6,8 \rangle, & \langle 2,5,7,8 \rangle,\\
\langle 1,5,7,8 \rangle, & \langle 1,4,7,8 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,4,7,8 \rangle, &
\langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle,\\
\langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle, & \langle 2,5,8 \rangle&\to&\langle 2,5,8,6 \rangle,\\
\langle 2,6,8 \rangle&\to&\langle 2,6,8,6 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle,\\
\langle 1,3,6 \rangle&\to&\langle 1,3,6,6 \rangle, & \langle 1,3,7 \rangle&\to&\langle 1,3,7,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 1,2,6 \rangle&\to&\langle 1,2,6,4 \rangle,\\
\langle 1,4,6 \rangle&\to&\langle 1,4,6,6 \rangle, & \langle 2,3,7 \rangle&\to&\langle 2,3,7,7 \rangle,\\
\langle 2,3,5 \rangle&\to&\langle 2,3,5,4 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,5 \rangle,\\
\langle 4,5,6 \rangle&\to&\langle 4,5,6,6 \rangle, & \langle 4,6,8 \rangle&\to&\langle 4,6,8,7 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle, & \langle 2,3,4 \rangle&\to&\langle 2,3,4,3 \rangle,\\
\langle 1,4,8 \rangle&\to&\langle 1,4,8,5 \rangle, & \langle 1,4,5 \rangle&\to&\langle 1,4,5,4 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 3,7,8 \rangle, & \langle 2,7,8 \rangle, & \langle 2,5,7 \rangle,\\
\langle 3,6,7 \rangle, & \langle 3,6,8 \rangle, & \langle 1,5,7 \rangle,\\
\langle 1,4,7 \rangle, & \langle 2,3,8 \rangle, & \langle 4,6,7 \rangle,\\
\langle 1,2,3 \rangle, & \langle 1,2,4 \rangle, & \langle 2,4,6 \rangle,\\
\langle 1,3,5 \rangle, & \langle 2,5,6 \rangle, & \langle 1,3,4 \rangle,\\
\langle 3,4,5 \rangle, & \langle 5,6,8 \rangle, & \langle 4,5,8 \rangle,\\
\langle 4,7,8 \rangle.&&
\end{array}$\\
\hline\hline
$P_{27}$&
\footnotesize $\begin{array}{lll}
\langle 1,2,4,6 \rangle, & \langle 4,5,6,8 \rangle, & \langle 1,4,6,8 \rangle,\\
\langle 1,4,5,8 \rangle, & \langle 2,5,7,8 \rangle, & \langle 1,5,7,8 \rangle,\\
\langle 1,6,7,8 \rangle, & \langle 1,3,6,7 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 1,2,3,4 \rangle, & \langle 2,3,4,5 \rangle, & \langle 2,4,5,6 \rangle,\\
\langle 2,5,6,8 \rangle, & \langle 2,6,7,8 \rangle, & \langle 2,3,6,7 \rangle,\\
\langle 2,3,5,7 \rangle, & \langle 1,3,5,7 \rangle, & \langle 1,3,4,5 \rangle.
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,3,4,5 \rangle, &
\langle 1,3,5 \rangle&\to&\langle 1,3,5,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 2,3,7 \rangle&\to&\langle 2,3,7,6 \rangle,\\
\langle 2,6,7 \rangle&\to&\langle 2,6,7,7 \rangle, & \langle 2,6,8 \rangle&\to&\langle 2,6,8,6 \rangle,\\
\langle 2,5,6 \rangle&\to&\langle 2,5,6,5 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle,\\
\langle 2,3,4 \rangle&\to&\langle 2,3,4,3 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle,\\
\langle 1,3,6 \rangle&\to&\langle 1,3,6,6 \rangle, & \langle 1,6,7 \rangle&\to&\langle 1,6,7,7 \rangle,\\
\langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle, & \langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle,\\
\langle 1,4,5 \rangle&\to&\langle 1,4,5,5 \rangle, & \langle 1,4,8 \rangle&\to&\langle 1,4,8,6 \rangle,\\
\langle 4,6,8 \rangle&\to&\langle 4,6,8,6 \rangle, & \langle 1,4,6 \rangle&\to&\langle 1,4,6,4 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 2,7,8 \rangle, & \langle 2,5,7 \rangle, & \langle 2,5,8 \rangle,\\
\langle 1,5,8 \rangle, & \langle 1,6,8 \rangle, & \langle 6,7,8 \rangle,\\
\langle 1,5,7 \rangle, & \langle 3,6,7 \rangle, & \langle 1,2,6 \rangle,\\
\langle 1,3,7 \rangle, & \langle 1,3,4 \rangle, & \langle 1,2,4 \rangle,\\
\langle 2,3,6 \rangle, & \langle 2,3,5 \rangle, & \langle 3,4,5 \rangle,\\
\langle 5,6,8 \rangle, & \langle 2,4,6 \rangle, & \langle 4,5,6 \rangle.
\end{array}$\\
\hline\hline
$P_{28}$&
\footnotesize $\begin{array}{lll}
\langle 1,3,6,8 \rangle, & \langle 2,5,6,7 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 1,4,6,8 \rangle, & \langle 1,4,5,8 \rangle, & \langle 1,5,7,8 \rangle,\\
\langle 1,3,7,8 \rangle, & \langle 2,3,5,7 \rangle, & \langle 1,3,5,7 \rangle,\\
\langle 1,3,4,5 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 2,3,6,7 \rangle, & \langle 3,6,7,8 \rangle, & \langle 5,6,7,8 \rangle,\\
\langle 4,5,6,8 \rangle, & \langle 2,4,5,6 \rangle, & \langle 1,2,4,6 \rangle.
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,2,4,6 \rangle, &
\langle 2,4,6 \rangle&\to&\langle 2,4,6,5 \rangle,\\
\langle 4,5,6 \rangle&\to&\langle 4,5,6,6 \rangle, & \langle 5,6,8 \rangle&\to&\langle 5,6,8,7 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle, & \langle 3,6,7 \rangle&\to&\langle 3,6,7,6 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle,\\
\langle 1,3,4 \rangle&\to&\langle 1,3,4,4 \rangle, & \langle 1,3,5 \rangle&\to&\langle 1,3,5,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle,\\
\langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle, & \langle 1,4,5 \rangle&\to&\langle 1,4,5,5 \rangle,\\
\langle 1,4,8 \rangle&\to&\langle 1,4,8,6 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle,\\
\langle 2,5,7 \rangle&\to&\langle 2,5,7,6 \rangle, & \langle 1,6,8 \rangle&\to&\langle 1,6,8,6 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 2,3,5 \rangle, & \langle 2,6,7 \rangle, & \langle 2,3,7 \rangle,\\
\langle 1,3,7 \rangle, & \langle 2,5,6 \rangle, & \langle 3,4,5 \rangle,\\
\langle 2,3,4 \rangle, & \langle 5,6,7 \rangle, & \langle 1,3,6 \rangle,\\
\langle 1,2,6 \rangle, & \langle 1,5,7 \rangle, & \langle 1,2,4 \rangle,\\
\langle 1,3,8 \rangle, & \langle 1,4,6 \rangle, & \langle 3,6,8 \rangle,\\
\langle 1,5,8 \rangle, & \langle 4,6,8 \rangle, & \langle 4,5,8 \rangle.
\end{array}$\\
\hline\hline
$P_{33}$&
\footnotesize $\begin{array}{lll}
\langle 1,3,6,7 \rangle, & \langle 1,3,5,8 \rangle, & \langle 2,5,6,7 \rangle,\\
\langle 2,3,6,7 \rangle, & \langle 1,2,3,6 \rangle, & \langle 1,3,7,8 \rangle,\\
\langle 1,6,7,8 \rangle, & \langle 1,4,6,8 \rangle, & \langle 4,6,7,8 \rangle,\\
\langle 1,2,4,6 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 1,4,5,8 \rangle, & \langle 4,5,7,8 \rangle, & \langle 3,5,7,8 \rangle,\\
\langle 2,3,5,7 \rangle, & \langle 2,3,4,5 \rangle, & \langle 2,4,5,6 \rangle,\\
\langle 4,5,6,7 \rangle.&&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 4,5,6,7 \rangle, &
\langle 4,5,6 \rangle&\to&\langle 4,5,6,5 \rangle,\\
\langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle, & \langle 2,3,5 \rangle&\to&\langle 2,3,5,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,7 \rangle, & \langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle,\\
\langle 4,5,8 \rangle&\to&\langle 4,5,8,5 \rangle, & \langle 1,4,5 \rangle&\to&\langle 1,4,5,4 \rangle,\\
\langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle, & \langle 1,2,4 \rangle&\to&\langle 1,2,4,4 \rangle,\\
\langle 4,6,7 \rangle&\to&\langle 4,6,7,7 \rangle, & \langle 4,6,8 \rangle&\to&\langle 4,6,8,6 \rangle,\\
\langle 1,6,8 \rangle&\to&\langle 1,6,8,7 \rangle, & \langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle,\\
\langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,6 \rangle,\\
\langle 2,6,7 \rangle&\to&\langle 2,6,7,6 \rangle, & \langle 1,3,8 \rangle&\to&\langle 1,3,8,5 \rangle,\\
\langle 1,3,6 \rangle&\to&\langle 1,3,6,6 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 1,3,7 \rangle, & \langle 4,5,7 \rangle, & \langle 3,4,5 \rangle,\\
\langle 2,3,7 \rangle, & \langle 2,5,7 \rangle, & \langle 2,3,4 \rangle,\\
\langle 2,4,6 \rangle, & \langle 1,3,5 \rangle, & \langle 3,5,8 \rangle,\\
\langle 3,7,8 \rangle, & \langle 1,5,8 \rangle, & \langle 2,5,6 \rangle,\\
\langle 5,6,7 \rangle, & \langle 4,7,8 \rangle, & \langle 1,4,8 \rangle,\\
\langle 1,4,6 \rangle, & \langle 1,6,7 \rangle.&
\end{array}$\\
\hline\hline
$B$&
\footnotesize $\begin{array}{lll}
\langle 1,2,3,6 \rangle, & \langle 3,6,7,8 \rangle, & \langle 1,4,7,8 \rangle,\\
\langle 1,4,5,7 \rangle, & \langle 2,4,5,7 \rangle, & \langle 2,3,5,8 \rangle,\\
\langle 1,3,4,5 \rangle, & \langle 4,6,7,8 \rangle, & \langle 2,4,6,7 \rangle,\\
\langle 2,3,6,7 \rangle, & \langle 2,3,7,8 \rangle, & \langle 2,5,7,8 \rangle,\\
\langle 1,5,7,8 \rangle, & \langle 1,3,5,8 \rangle, & \langle 1,3,6,8 \rangle,\\
\langle 1,4,6,8 \rangle, & \langle 1,2,4,6 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 2,3,4,5 \rangle.&&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 2,3,4,5 \rangle, &
\langle 2,3,4 \rangle&\to&\langle 2,3,4,3 \rangle,\\
\langle 1,2,4 \rangle&\to&\langle 1,2,4,4 \rangle, & \langle 1,4,6 \rangle&\to&\langle 1,4,6,6 \rangle,\\
\langle 1,6,8 \rangle&\to&\langle 1,6,8,6 \rangle, & \langle 1,3,8 \rangle&\to&\langle 1,3,8,5 \rangle,\\
\langle 1,5,8 \rangle&\to&\langle 1,5,8,7 \rangle, & \langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle,\\
\langle 2,7,8 \rangle&\to&\langle 2,7,8,7 \rangle, & \langle 2,3,7 \rangle&\to&\langle 2,3,7,6 \rangle,\\
\langle 2,6,7 \rangle&\to&\langle 2,6,7,6 \rangle, & \langle 4,6,7 \rangle&\to&\langle 4,6,7,7 \rangle,\\
\langle 1,3,5 \rangle&\to&\langle 1,3,5,4 \rangle, & \langle 2,3,5 \rangle&\to&\langle 2,3,5,5 \rangle,\\
\langle 2,4,5 \rangle&\to&\langle 2,4,5,5 \rangle, & \langle 4,5,7 \rangle&\to&\langle 4,5,7,5 \rangle,\\
\langle 1,4,7 \rangle&\to&\langle 1,4,7,7 \rangle, & \langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 2,3,8 \rangle, & \langle 1,2,6 \rangle, & \langle 1,2,3 \rangle,\\
\langle 1,3,6 \rangle, & \langle 3,6,8 \rangle, & \langle 1,3,4 \rangle,\\
\langle 2,5,8 \rangle, & \langle 2,4,6 \rangle, & \langle 4,6,8 \rangle,\\
\langle 3,5,8 \rangle, & \langle 3,4,5 \rangle, & \langle 2,4,7 \rangle,\\
\langle 1,4,5 \rangle, & \langle 2,5,7 \rangle, & \langle 1,5,7 \rangle,\\
\langle 4,7,8 \rangle, & \langle 1,7,8 \rangle, & \langle 1,4,8 \rangle.
\end{array}$\\
\hline\hline
$P_{23}$&
\footnotesize $\begin{array}{lll}
\langle 2,5,6,7 \rangle, & \langle 2,3,4,5 \rangle, & \langle 2,4,5,6 \rangle,\\
\langle 1,2,4,6 \rangle, & \langle 1,3,6,8 \rangle, & \langle 3,6,7,8 \rangle,\\
\langle 1,4,6,8 \rangle, & \langle 1,4,7,8 \rangle, & \langle 1,3,7,8 \rangle,\\
\langle 1,3,5,7 \rangle, & \langle 2,3,5,7 \rangle, & \langle 2,3,6,7 \rangle,\\
\langle 1,2,3,6 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 1,4,5,7 \rangle, & \langle 4,5,6,7 \rangle, & \langle 4,6,7,8 \rangle.
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 4,6,7,8 \rangle, &
\langle 4,6,7 \rangle&\to&\langle 4,6,7,6 \rangle,\\
\langle 4,5,7 \rangle&\to&\langle 4,5,7,5 \rangle, & \langle 1,4,5 \rangle&\to&\langle 1,4,5,4 \rangle,\\
\langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,6 \rangle, & \langle 2,3,7 \rangle&\to&\langle 2,3,7,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 1,3,7 \rangle&\to&\langle 1,3,7,7 \rangle,\\
\langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle, & \langle 1,4,8 \rangle&\to&\langle 1,4,8,6 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle, & \langle 3,6,8 \rangle&\to&\langle 3,6,8,6 \rangle,\\
\langle 1,4,6 \rangle&\to&\langle 1,4,6,4 \rangle, & \langle 2,4,6 \rangle&\to&\langle 2,4,6,5 \rangle,\\
\langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle, & \langle 2,6,7 \rangle&\to&\langle 2,6,7,6 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 1,6,8 \rangle, & \langle 4,5,6 \rangle, & \langle 2,3,5 \rangle,\\
\langle 1,3,8 \rangle, & \langle 3,4,5 \rangle, & \langle 1,3,6 \rangle,\\
\langle 4,6,8 \rangle, & \langle 4,7,8 \rangle, & \langle 3,7,8 \rangle,\\
\langle 1,3,5 \rangle, & \langle 2,5,6 \rangle, & \langle 2,3,4 \rangle,\\
\langle 1,2,6 \rangle, & \langle 1,2,4 \rangle, & \langle 1,4,7 \rangle,\\
\langle 1,5,7 \rangle, & \langle 3,6,7 \rangle, & \langle 5,6,7 \rangle.
\end{array}$\\
\hline\hline
$P_{8}$&
\footnotesize $\begin{array}{lll}
\langle 2,3,4,5 \rangle, & \langle 1,2,7,8 \rangle, & \langle 1,5,7,8 \rangle,\\
\langle 5,6,7,8 \rangle, & \langle 3,5,6,7 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 1,3,5,7 \rangle, & \langle 1,3,6,7 \rangle, & \langle 1,2,6,7 \rangle,\\
\langle 2,6,7,8 \rangle, & \langle 2,5,6,8 \rangle, & \langle 1,2,5,8 \rangle,\\
\langle 2,3,5,6 \rangle, & \langle 1,2,3,6 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 1,2,4,5 \rangle.&&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,2,4,5 \rangle, &
\langle 1,2,4 \rangle&\to&\langle 1,2,4,3 \rangle,\\
\langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,5 \rangle,\\
\langle 1,2,5 \rangle&\to&\langle 1,2,5,5 \rangle, & \langle 2,5,8 \rangle&\to&\langle 2,5,8,6 \rangle,\\
\langle 2,6,8 \rangle&\to&\langle 2,6,8,7 \rangle, & \langle 2,6,7 \rangle&\to&\langle 2,6,7,6 \rangle,\\
\langle 1,6,7 \rangle&\to&\langle 1,6,7,6 \rangle, & \langle 1,3,7 \rangle&\to&\langle 1,3,7,5 \rangle,\\
\langle 1,3,5 \rangle&\to&\langle 1,3,5,4 \rangle, & \langle 3,5,6 \rangle&\to&\langle 3,5,6,6 \rangle,\\
\langle 5,6,7 \rangle&\to&\langle 5,6,7,7 \rangle, & \langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle,\\
\langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle, & \langle 3,4,5 \rangle&\to&\langle 3,4,5,4 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 1,4,5 \rangle, & \langle 2,3,4 \rangle, & \langle 1,5,8 \rangle,\\
\langle 1,3,4 \rangle, & \langle 5,6,8 \rangle, & \langle 2,4,5 \rangle,\\
\langle 2,3,5 \rangle, & \langle 2,5,6 \rangle, & \langle 3,5,7 \rangle,\\
\langle 1,5,7 \rangle, & \langle 1,2,6 \rangle, & \langle 1,3,6 \rangle,\\
\langle 1,2,8 \rangle, & \langle 1,2,7 \rangle, & \langle 3,6,7 \rangle,\\
\langle 2,7,8 \rangle, & \langle 6,7,8 \rangle.&
\end{array}$\\
\hline\hline
$P_{12}$&
\footnotesize $\begin{array}{lll}
\langle 1,2,6,8 \rangle, & \langle 3,5,6,8 \rangle, & \langle 3,6,7,8 \rangle,\\
\langle 1,6,7,8 \rangle, & \langle 1,2,7,8 \rangle, & \langle 1,2,5,7 \rangle,\\
\langle 1,2,4,5 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 1,3,6,7 \rangle, & \langle 1,3,4,5 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 2,3,5,6 \rangle, & \langle 2,5,6,8 \rangle, & \langle 1,3,5,7 \rangle,\\
\langle 3,5,7,8 \rangle, & \langle 2,5,7,8 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 2,5,7,8 \rangle, &
\langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 2,5,8 \rangle&\to&\langle 2,5,8,6 \rangle,\\
\langle 2,5,6 \rangle&\to&\langle 2,5,6,5 \rangle, & \langle 2,3,5 \rangle&\to&\langle 2,3,5,4 \rangle,\\
\langle 3,4,5 \rangle&\to&\langle 3,4,5,4 \rangle, & \langle 1,3,7 \rangle&\to&\langle 1,3,7,6 \rangle,\\
\langle 1,3,6 \rangle&\to&\langle 1,3,6,3 \rangle, & \langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle,\\
\langle 1,2,4 \rangle&\to&\langle 1,2,4,4 \rangle, & \langle 1,2,5 \rangle&\to&\langle 1,2,5,5 \rangle,\\
\langle 1,2,7 \rangle&\to&\langle 1,2,7,7 \rangle, & \langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle, & \langle 3,6,8 \rangle&\to&\langle 3,6,8,6 \rangle,\\
\langle 2,6,8 \rangle&\to&\langle 2,6,8,6 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 1,4,5 \rangle, & \langle 1,3,4 \rangle, & \langle 2,4,5 \rangle,\\
\langle 1,3,5 \rangle, & \langle 1,5,7 \rangle, & \langle 2,5,7 \rangle,\\
\langle 1,6,7 \rangle, & \langle 5,6,8 \rangle, & \langle 3,5,6 \rangle,\\
\langle 2,3,4 \rangle, & \langle 2,3,6 \rangle, & \langle 3,6,7 \rangle,\\
\langle 1,6,8 \rangle, & \langle 3,5,8 \rangle, & \langle 3,7,8 \rangle,\\
\langle 2,7,8 \rangle, & \langle 1,2,8 \rangle, & \langle 1,2,6 \rangle.
\end{array}$\\
\hline\hline
$P_{19}$&
\footnotesize $\begin{array}{lll}
\langle 4,6,7,8 \rangle, & \langle 3,6,7,8 \rangle, & \langle 3,5,6,8 \rangle,\\
\langle 3,5,7,8 \rangle, & \langle 1,3,5,7 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 2,3,4,5 \rangle, & \langle 2,3,5,6 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 1,3,6,7 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,2,4,6 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 4,5,6,8 \rangle, & \langle 4,5,7,8 \rangle,\\
\langle 1,4,5,7 \rangle, & \langle 1,4,6,7 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,4,6,7 \rangle, &
\langle 1,4,7 \rangle&\to&\langle 1,4,7,5 \rangle,\\
\langle 4,5,7 \rangle&\to&\langle 4,5,7,7 \rangle, & \langle 4,5,8 \rangle&\to&\langle 4,5,8,6 \rangle,\\
\langle 4,5,6 \rangle&\to&\langle 4,5,6,5 \rangle, & \langle 2,4,6 \rangle&\to&\langle 2,4,6,4 \rangle,\\
\langle 1,2,4 \rangle&\to&\langle 1,2,4,3 \rangle, & \langle 1,6,7 \rangle&\to&\langle 1,6,7,6 \rangle,\\
\langle 1,3,6 \rangle&\to&\langle 1,3,6,3 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,5 \rangle,\\
\langle 2,3,5 \rangle&\to&\langle 2,3,5,4 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,4 \rangle,\\
\langle 1,3,5 \rangle&\to&\langle 1,3,5,5 \rangle, & \langle 3,5,7 \rangle&\to&\langle 3,5,7,7 \rangle,\\
\langle 3,5,8 \rangle&\to&\langle 3,5,8,6 \rangle, & \langle 3,6,8 \rangle&\to&\langle 3,6,8,7 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 1,2,6 \rangle, & \langle 2,5,6 \rangle, & \langle 2,4,5 \rangle,\\
\langle 5,6,8 \rangle, & \langle 4,6,8 \rangle, & \langle 5,7,8 \rangle,\\
\langle 1,4,6 \rangle, & \langle 4,7,8 \rangle, & \langle 1,4,5 \rangle,\\
\langle 1,5,7 \rangle, & \langle 4,6,7 \rangle, & \langle 1,2,3 \rangle,\\
\langle 1,3,7 \rangle, & \langle 3,6,7 \rangle, & \langle 3,5,6 \rangle,\\
\langle 2,3,4 \rangle, & \langle 3,4,5 \rangle.&
\end{array}$\\
\hline\hline
$P_{20}$&
\footnotesize $\begin{array}{lll}
\langle 3,6,7,8 \rangle, & \langle 2,3,4,5 \rangle, & \langle 5,6,7,8 \rangle,\\
\langle 1,5,7,8 \rangle, & \langle 1,4,5,7 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 1,2,3,4 \rangle, & \langle 1,2,3,6 \rangle, & \langle 2,3,5,6 \rangle,\\
\langle 3,5,6,8 \rangle, & \langle 1,3,5,8 \rangle, & \langle 1,3,7,8 \rangle,\\
\langle 1,3,6,7 \rangle, & \langle 1,4,6,7 \rangle, & \langle 1,2,4,6 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 4,5,6,7 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 4,5,6,7 \rangle, &
\langle 4,5,6 \rangle&\to&\langle 4,5,6,5 \rangle,\\
\langle 2,4,6 \rangle&\to&\langle 2,4,6,4 \rangle, & \langle 1,4,6 \rangle&\to&\langle 1,4,6,6 \rangle,\\
\langle 1,6,7 \rangle&\to&\langle 1,6,7,6 \rangle, & \langle 1,3,7 \rangle&\to&\langle 1,3,7,7 \rangle,\\
\langle 1,3,8 \rangle&\to&\langle 1,3,8,5 \rangle, & \langle 3,5,8 \rangle&\to&\langle 3,5,8,6 \rangle,\\
\langle 3,5,6 \rangle&\to&\langle 3,5,6,5 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle,\\
\langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,4 \rangle,\\
\langle 1,4,5 \rangle&\to&\langle 1,4,5,5 \rangle, & \langle 1,5,7 \rangle&\to&\langle 1,5,7,7 \rangle,\\
\langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 3,7,8 \rangle, & \langle 1,3,5 \rangle, & \langle 1,5,8 \rangle,\\
\langle 3,6,8 \rangle, & \langle 3,6,7 \rangle, & \langle 1,3,6 \rangle,\\
\langle 5,6,8 \rangle, & \langle 2,5,6 \rangle, & \langle 1,7,8 \rangle,\\
\langle 1,2,6 \rangle, & \langle 2,3,5 \rangle, & \langle 5,6,7 \rangle,\\
\langle 3,4,5 \rangle, & \langle 4,5,7 \rangle, & \langle 1,4,7 \rangle,\\
\langle 1,2,4 \rangle, & \langle 2,3,4 \rangle.&
\end{array}$\\
\hline\hline
$P_{22}$&
\footnotesize $\begin{array}{lll}
\langle 4,5,6,8 \rangle, & \langle 1,2,4,6 \rangle, & \langle 1,5,7,8 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 2,3,4,5 \rangle, & \langle 5,6,7,8 \rangle,\\
\langle 2,5,6,7 \rangle, & \langle 2,3,6,7 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 1,2,3,4 \rangle, & \langle 1,3,4,5 \rangle, & \langle 1,4,5,8 \rangle,\\
\langle 1,4,6,8 \rangle, & \langle 1,6,7,8 \rangle, & \langle 2,3,5,7 \rangle,\\
\langle 1,3,5,7 \rangle, & \langle 1,3,6,7 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,3,6,7 \rangle, &
\langle 1,3,7 \rangle&\to&\langle 1,3,7,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle, & \langle 1,6,7 \rangle&\to&\langle 1,6,7,7 \rangle,\\
\langle 1,6,8 \rangle&\to&\langle 1,6,8,6 \rangle, & \langle 1,4,8 \rangle&\to&\langle 1,4,8,5 \rangle,\\
\langle 1,4,5 \rangle&\to&\langle 1,4,5,4 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle,\\
\langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,6 \rangle,\\
\langle 2,6,7 \rangle&\to&\langle 2,6,7,6 \rangle, & \langle 5,6,7 \rangle&\to&\langle 5,6,7,7 \rangle,\\
\langle 2,3,5 \rangle&\to&\langle 2,3,5,4 \rangle, & \langle 2,4,5 \rangle&\to&\langle 2,4,5,5 \rangle,\\
\langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle, & \langle 2,4,6 \rangle&\to&\langle 2,4,6,4 \rangle,\\
\langle 5,6,8 \rangle&\to&\langle 5,6,8,6 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 4,5,8 \rangle, & \langle 4,6,8 \rangle, & \langle 6,7,8 \rangle,\\
\langle 4,5,6 \rangle, & \langle 1,5,8 \rangle, & \langle 3,6,7 \rangle,\\
\langle 1,7,8 \rangle, & \langle 1,5,7 \rangle, & \langle 3,4,5 \rangle,\\
\langle 1,3,5 \rangle, & \langle 1,3,6 \rangle, & \langle 2,5,7 \rangle,\\
\langle 2,5,6 \rangle, & \langle 2,3,7 \rangle, & \langle 1,2,6 \rangle,\\
\langle 1,4,6 \rangle, & \langle 1,2,4 \rangle, & \langle 2,3,4 \rangle.
\end{array}$\\
\hline\hline
$P_{35}$&
\footnotesize $\begin{array}{lll}
\langle 1,3,6,8 \rangle, & \langle 1,2,4,6 \rangle, & \langle 2,3,5,7 \rangle,\\
\langle 2,3,4,5 \rangle, & \langle 5,6,7,8 \rangle, & \langle 2,4,5,6 \rangle,\\
\langle 2,5,6,7 \rangle, & \langle 1,3,5,7 \rangle, & \langle 2,6,7,8 \rangle,\\
\langle 1,4,7,8 \rangle, & \langle 1,3,7,8 \rangle, & \langle 2,3,7,8 \rangle,\\
\langle 2,3,6,8 \rangle, & \langle 1,2,3,6 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 1,3,4,5 \rangle, & \langle 1,4,5,7 \rangle, & \langle 4,5,7,8 \rangle,\\
\langle 4,5,6,8 \rangle, & \langle 1,4,6,8 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,4,6,8 \rangle, &
\langle 4,6,8 \rangle&\to&\langle 4,6,8,6 \rangle,\\
\langle 4,5,8 \rangle&\to&\langle 4,5,8,7 \rangle, & \langle 4,5,7 \rangle&\to&\langle 4,5,7,5 \rangle,\\
\langle 1,4,5 \rangle&\to&\langle 1,4,5,4 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle,\\
\langle 1,2,3 \rangle&\to&\langle 1,2,3,3 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,6 \rangle,\\
\langle 2,3,8 \rangle&\to&\langle 2,3,8,7 \rangle, & \langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle,\\
\langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle, & \langle 2,6,8 \rangle&\to&\langle 2,6,8,7 \rangle,\\
\langle 1,3,5 \rangle&\to&\langle 1,3,5,5 \rangle, & \langle 2,6,7 \rangle&\to&\langle 2,6,7,6 \rangle,\\
\langle 2,5,6 \rangle&\to&\langle 2,5,6,5 \rangle, & \langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle,\\
\langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle, & \langle 3,5,7 \rangle&\to&\langle 3,5,7,5 \rangle,\\
\langle 1,4,6 \rangle&\to&\langle 1,4,6,4 \rangle, & \langle 1,6,8 \rangle&\to&\langle 1,6,8,6 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 2,3,5 \rangle, & \langle 2,3,4 \rangle, & \langle 3,4,5 \rangle,\\
\langle 4,5,6 \rangle, & \langle 2,5,7 \rangle, & \langle 2,4,6 \rangle,\\
\langle 2,3,7 \rangle, & \langle 1,2,6 \rangle, & \langle 5,6,8 \rangle,\\
\langle 1,2,4 \rangle, & \langle 1,3,6 \rangle, & \langle 5,7,8 \rangle,\\
\langle 1,3,7 \rangle, & \langle 1,4,7 \rangle, & \langle 3,6,8 \rangle,\\
\langle 4,7,8 \rangle, & \langle 1,4,8 \rangle, & \langle 1,3,8 \rangle.
\end{array}$\\
\hline\hline
$P_{29}$&
\footnotesize $\begin{array}{lll}
\langle 4,5,6,8 \rangle, & \langle 1,4,5,8 \rangle, & \langle 1,2,3,6 \rangle,\\
\langle 3,5,7,8 \rangle, & \langle 1,2,3,4 \rangle, & \langle 1,3,4,5 \rangle,\\
\langle 1,3,5,8 \rangle, & \langle 1,3,7,8 \rangle, & \langle 1,3,6,7 \rangle,\\
\langle 2,3,6,7 \rangle, & \langle 2,3,5,7 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 1,2,4,6 \rangle, & \langle 1,4,6,8 \rangle,\\
\langle 1,6,7,8 \rangle, & \langle 5,6,7,8 \rangle, & \langle 2,5,6,7 \rangle.
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 2,5,6,7 \rangle, &
\langle 5,6,7 \rangle&\to&\langle 5,6,7,7 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle, & \langle 1,6,8 \rangle&\to&\langle 1,6,8,6 \rangle,\\
\langle 1,4,6 \rangle&\to&\langle 1,4,6,4 \rangle, & \langle 2,4,6 \rangle&\to&\langle 2,4,6,5 \rangle,\\
\langle 2,4,5 \rangle&\to&\langle 2,4,5,4 \rangle, & \langle 2,3,5 \rangle&\to&\langle 2,3,5,5 \rangle,\\
\langle 2,3,7 \rangle&\to&\langle 2,3,7,6 \rangle, & \langle 3,6,7 \rangle&\to&\langle 3,6,7,6 \rangle,\\
\langle 1,3,7 \rangle&\to&\langle 1,3,7,7 \rangle, & \langle 1,3,8 \rangle&\to&\langle 1,3,8,5 \rangle,\\
\langle 1,3,5 \rangle&\to&\langle 1,3,5,4 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle,\\
\langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle, & \langle 1,2,6 \rangle&\to&\langle 1,2,6,3 \rangle,\\
\langle 1,5,8 \rangle&\to&\langle 1,5,8,5 \rangle, & \langle 4,5,8 \rangle&\to&\langle 4,5,8,6 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 4,5,6 \rangle, & \langle 3,4,5 \rangle, & \langle 2,5,6 \rangle,\\
\langle 2,3,6 \rangle, & \langle 2,5,7 \rangle, & \langle 2,3,4 \rangle,\\
\langle 5,6,8 \rangle, & \langle 1,3,6 \rangle, & \langle 3,5,7 \rangle,\\
\langle 4,6,8 \rangle, & \langle 2,6,7 \rangle, & \langle 1,6,7 \rangle,\\
\langle 1,2,3 \rangle, & \langle 1,2,4 \rangle, & \langle 1,4,8 \rangle,\\
\langle 1,7,8 \rangle, & \langle 3,7,8 \rangle, & \langle 3,5,8 \rangle.
\end{array}$\\
\hline\hline
$P_{24}$&
\footnotesize $\begin{array}{lll}
\langle 2,3,4,5 \rangle, & \langle 4,6,7,8 \rangle, & \langle 2,3,5,6 \rangle,\\
\langle 1,3,7,8 \rangle, & \langle 3,6,7,8 \rangle, & \langle 3,5,6,8 \rangle,\\
\langle 1,2,3,6 \rangle, & \langle 1,3,6,7 \rangle, & \langle 1,4,6,7 \rangle,\\
\langle 1,4,5,7 \rangle, & \langle 4,5,7,8 \rangle, & \langle 4,5,6,8 \rangle,\\
\langle 2,4,5,6 \rangle, & \langle 1,2,4,6 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 1,5,7,8 \rangle, & \langle 1,3,5,8 \rangle, & \langle 1,3,4,5 \rangle.
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,3,4,5 \rangle, &
\langle 1,3,5 \rangle&\to&\langle 1,3,5,5 \rangle,\\
\langle 1,5,8 \rangle&\to&\langle 1,5,8,7 \rangle, & \langle 1,3,4 \rangle&\to&\langle 1,3,4,3 \rangle,\\
\langle 1,2,4 \rangle&\to&\langle 1,2,4,4 \rangle, & \langle 2,4,6 \rangle&\to&\langle 2,4,6,5 \rangle,\\
\langle 4,5,6 \rangle&\to&\langle 4,5,6,6 \rangle, & \langle 4,5,8 \rangle&\to&\langle 4,5,8,7 \rangle,\\
\langle 4,5,7 \rangle&\to&\langle 4,5,7,5 \rangle, & \langle 1,4,7 \rangle&\to&\langle 1,4,7,6 \rangle,\\
\langle 1,6,7 \rangle&\to&\langle 1,6,7,6 \rangle, & \langle 1,3,6 \rangle&\to&\langle 1,3,6,3 \rangle,\\
\langle 3,5,8 \rangle&\to&\langle 3,5,8,6 \rangle, & \langle 3,6,8 \rangle&\to&\langle 3,6,8,7 \rangle,\\
\langle 1,3,8 \rangle&\to&\langle 1,3,8,7 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,5 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle, & \langle 2,3,5 \rangle&\to&\langle 2,3,5,4 \rangle.
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 1,4,5 \rangle, & \langle 2,4,5 \rangle, & \langle 1,4,6 \rangle,\\
\langle 4,6,8 \rangle, & \langle 3,5,6 \rangle, & \langle 2,5,6 \rangle,\\
\langle 5,6,8 \rangle, & \langle 3,4,5 \rangle, & \langle 5,7,8 \rangle,\\
\langle 2,3,4 \rangle, & \langle 1,2,6 \rangle, & \langle 4,7,8 \rangle,\\
\langle 4,6,7 \rangle, & \langle 1,5,7 \rangle, & \langle 3,6,7 \rangle,\\
\langle 1,2,3 \rangle, & \langle 1,3,7 \rangle.&
\end{array}$\\
\hline\hline
$P_{26}$&
\footnotesize $\begin{array}{lll}
\langle 4,6,7,8 \rangle, & \langle 3,6,7,8 \rangle, & \langle 3,5,7,8 \rangle,\\
\langle 2,3,5,7 \rangle, & \langle 1,3,4,5 \rangle, & \langle 1,3,5,8 \rangle,\\
\langle 1,3,6,8 \rangle, & \langle 1,2,3,6 \rangle, & \langle 2,3,6,7 \rangle,\\
\langle 2,3,4,5 \rangle, & \langle 2,4,6,7 \rangle, & \langle 2,4,5,7 \rangle,\\
\langle 4,5,7,8 \rangle, & \langle 1,4,5,8 \rangle, & \langle 1,4,6,8 \rangle,\\
\langle 1,2,4,6 \rangle, & \langle 1,2,3,4 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,2,3,4 \rangle, &
\langle 1,2,4 \rangle&\to&\langle 1,2,4,4 \rangle,\\
\langle 1,4,6 \rangle&\to&\langle 1,4,6,6 \rangle, & \langle 1,4,8 \rangle&\to&\langle 1,4,8,5 \rangle,\\
\langle 4,5,8 \rangle&\to&\langle 4,5,8,7 \rangle, & \langle 4,5,7 \rangle&\to&\langle 4,5,7,5 \rangle,\\
\langle 2,4,7 \rangle&\to&\langle 2,4,7,6 \rangle, & \langle 2,3,4 \rangle&\to&\langle 2,3,4,4 \rangle,\\
\langle 2,6,7 \rangle&\to&\langle 2,6,7,6 \rangle, & \langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle,\\
\langle 1,3,6 \rangle&\to&\langle 1,3,6,6 \rangle, & \langle 1,3,8 \rangle&\to&\langle 1,3,8,5 \rangle,\\
\langle 1,3,5 \rangle&\to&\langle 1,3,5,4 \rangle, & \langle 2,3,5 \rangle&\to&\langle 2,3,5,5 \rangle,\\
\langle 3,5,7 \rangle&\to&\langle 3,5,7,7 \rangle, & \langle 3,7,8 \rangle&\to&\langle 3,7,8,7 \rangle,\\
\langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 3,4,5 \rangle, & \langle 1,3,4 \rangle, & \langle 1,2,3 \rangle,\\
\langle 1,4,5 \rangle, & \langle 1,2,6 \rangle, & \langle 2,4,5 \rangle,\\
\langle 2,4,6 \rangle, & \langle 2,5,7 \rangle, & \langle 5,7,8 \rangle,\\
\langle 1,5,8 \rangle, & \langle 3,5,8 \rangle, & \langle 1,6,8 \rangle,\\
\langle 4,6,8 \rangle, & \langle 2,3,7 \rangle, & \langle 4,7,8 \rangle,\\
\langle 3,6,8 \rangle, & \langle 4,6,7 \rangle, & \langle 3,6,7 \rangle.
\end{array}$\\
\hline\hline
$P_{11}$&
\footnotesize $\begin{array}{lll}
\langle 3,5,6,8 \rangle, & \langle 3,5,7,8 \rangle, & \langle 1,2,3,4 \rangle,\\
\langle 1,5,7,8 \rangle, & \langle 1,2,7,8 \rangle, & \langle 1,2,6,7 \rangle,\\
\langle 1,2,3,6 \rangle, & \langle 2,3,5,6 \rangle, & \langle 1,2,4,5 \rangle,\\
\langle 1,2,5,8 \rangle, & \langle 2,5,6,8 \rangle, & \langle 2,6,7,8 \rangle,\\
\langle 3,6,7,8 \rangle, & \langle 1,3,6,7 \rangle, & \langle 2,3,4,5 \rangle,\\
\langle 1,3,4,5 \rangle, & \langle 1,3,5,7 \rangle.&
\end{array}$&
\tiny
$\begin{array}{llllll}
&&\langle 1,3,5,7 \rangle, &
\langle 1,3,5 \rangle&\to&\langle 1,3,5,4 \rangle,\\
\langle 3,4,5 \rangle&\to&\langle 3,4,5,4 \rangle, & \langle 1,3,7 \rangle&\to&\langle 1,3,7,6 \rangle,\\
\langle 3,6,7 \rangle&\to&\langle 3,6,7,7 \rangle, & \langle 6,7,8 \rangle&\to&\langle 6,7,8,7 \rangle,\\
\langle 2,6,8 \rangle&\to&\langle 2,6,8,6 \rangle, & \langle 2,5,8 \rangle&\to&\langle 2,5,8,5 \rangle,\\
\langle 1,2,5 \rangle&\to&\langle 1,2,5,4 \rangle, & \langle 2,3,5 \rangle&\to&\langle 2,3,5,5 \rangle,\\
\langle 2,3,6 \rangle&\to&\langle 2,3,6,3 \rangle, & \langle 1,2,6 \rangle&\to&\langle 1,2,6,6 \rangle,\\
\langle 1,2,7 \rangle&\to&\langle 1,2,7,7 \rangle, & \langle 1,7,8 \rangle&\to&\langle 1,7,8,7 \rangle,\\
\langle 1,2,4 \rangle&\to&\langle 1,2,4,3 \rangle, & \langle 5,7,8 \rangle&\to&\langle 5,7,8,7 \rangle,\\
\langle 3,5,8 \rangle&\to&\langle 3,5,8,6 \rangle.&&&
\end{array}$&
\footnotesize
$\begin{array}{lll}
\langle 3,5,6 \rangle, & \langle 3,6,8 \rangle, & \langle 2,6,7 \rangle,\\
\langle 2,7,8 \rangle, & \langle 3,7,8 \rangle, & \langle 5,6,8 \rangle,\\
\langle 3,5,7 \rangle, & \langle 1,5,7 \rangle, & \langle 1,5,8 \rangle,\\
\langle 1,6,7 \rangle, & \langle 1,3,6 \rangle, & \langle 1,4,5 \rangle,\\
\langle 1,2,8 \rangle, & \langle 1,3,4 \rangle, & \langle 2,5,6 \rangle,\\
\langle 1,2,3 \rangle, & \langle 2,4,5 \rangle, & \langle 2,3,4 \rangle.
\end{array}$\\
\hline\hline
\end{longtable}
\end{center}
\normalsize
\newpage
\section{A classification of $18$-triangle $8$-vertex triangulations of
contractible non-collapsible $2$-complexes}
\label{app:18}
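Each complex in this appendix is specified by its list of $18$
triangles on the vertex set $\{1,\dots,8\}$; individual complexes are
separated by horizontal rules.

To experiment with these lists, one can feed them to a short script.
The following minimal Python sketch performs greedy free-edge
collapses: it repeatedly removes a triangle together with an edge
that lies in no other triangle.  If the triangle list empties, the
complex collapses onto a $1$-dimensional subcomplex; a stuck run is
consistent with non-collapsibility but, since the order of collapses
may matter in general, does not by itself prove it.  The function
name \texttt{free\_edge\_collapse} and the encoding of triangles as
integer tuples are ad hoc choices for this sketch.

\begin{verbatim}
from itertools import combinations

def free_edge_collapse(triangles):
    """Greedily remove (triangle, free edge) pairs; return leftovers."""
    tris = {frozenset(t) for t in triangles}
    while True:
        edges = {}          # edge -> triangles of tris containing it
        for t in tris:
            for e in combinations(sorted(t), 2):
                edges.setdefault(frozenset(e), []).append(t)
        free = [(ts[0], e) for e, ts in edges.items() if len(ts) == 1]
        if not free:
            return tris     # empty set: collapsed onto a 1-complex
        tris.remove(free[0][0])

# first saw-blade complex of Type I below; every edge lies in at
# least two of its triangles, so the greedy run stops immediately
example = [(1,2,3),(1,2,4),(1,2,5),(1,3,5),(1,4,6),(1,4,7),
           (1,6,7),(2,3,6),(2,3,8),(2,4,7),(2,5,7),(2,6,8),
           (3,4,5),(3,4,6),(3,4,8),(4,5,8),(5,6,7),(5,6,8)]
print(len(free_edge_collapse(example)), "triangles remain")
\end{verbatim}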
\subsection*{Saw-blade complexes}
\subsubsection*{Type I}
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 8 \rangle$, \\
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 5, 7 \rangle$, &
$\langle 2, 6, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 7 \rangle$, &
$\langle 5, 6, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 8 \rangle$, \\
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 5, 7 \rangle$, &
$\langle 2, 6, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 7, 8 \rangle$, &
$\langle 6, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 8 \rangle$, &
$\langle 1, 7, 8 \rangle$, &
$\langle 2, 3, 6 \rangle$, \\
$\langle 2, 3, 8 \rangle$, &
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 5, 7 \rangle$, &
$\langle 2, 6, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 7, 8 \rangle$.
\end{tabular}
\subsubsection*{Type II}
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 6 \rangle$, &
$\langle 1, 3, 7 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 5, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 4, 7 \rangle$, \\
$\langle 2, 4, 8 \rangle$, &
$\langle 2, 5, 6 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 5, 8 \rangle$, &
$\langle 4, 5, 6 \rangle$, &
$\langle 5, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 6 \rangle$, &
$\langle 1, 3, 7 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 5, 8 \rangle$, &
$\langle 1, 7, 8 \rangle$, &
$\langle 2, 3, 6 \rangle$, \\
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 4, 8 \rangle$, &
$\langle 2, 5, 6 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 5, 8 \rangle$, &
$\langle 4, 5, 6 \rangle$.
\end{tabular}
\subsubsection*{Type III}
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 7 \rangle$, \\
$\langle 2, 4, 6 \rangle$, &
$\langle 2, 5, 7 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 6, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 7 \rangle$, &
$\langle 5, 6, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 7 \rangle$, \\
$\langle 2, 4, 6 \rangle$, &
$\langle 2, 5, 7 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 6, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 7, 8 \rangle$, &
$\langle 6, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 7 \rangle$, \\
$\langle 2, 4, 6 \rangle$, &
$\langle 2, 5, 8 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 6, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 6, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 7 \rangle$, \\
$\langle 2, 4, 8 \rangle$, &
$\langle 2, 5, 8 \rangle$, &
$\langle 2, 6, 7 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 7, 8 \rangle$, &
$\langle 4, 5, 7 \rangle$, &
$\langle 5, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 8 \rangle$, \\
$\langle 2, 4, 5 \rangle$, &
$\langle 2, 6, 7 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 5, 7 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 8 \rangle$, \\
$\langle 2, 4, 5 \rangle$, &
$\langle 2, 6, 8 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 5, 7 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 7 \rangle$, &
$\langle 5, 6, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 8 \rangle$, \\
$\langle 2, 4, 5 \rangle$, &
$\langle 2, 6, 8 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 5, 7 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 7, 8 \rangle$, &
$\langle 6, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 8 \rangle$, \\
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 5, 6 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 7 \rangle$, &
$\langle 5, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 7 \rangle$, &
$\langle 2, 3, 6 \rangle$, &
$\langle 2, 3, 8 \rangle$, \\
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 5, 6 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 8 \rangle$, &
$\langle 6, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 8 \rangle$, &
$\langle 1, 7, 8 \rangle$, &
$\langle 2, 3, 6 \rangle$, \\
$\langle 2, 3, 7 \rangle$, &
$\langle 2, 4, 6 \rangle$, &
$\langle 2, 5, 7 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 6, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 7, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 8 \rangle$, &
$\langle 1, 7, 8 \rangle$, &
$\langle 2, 3, 6 \rangle$, \\
$\langle 2, 3, 7 \rangle$, &
$\langle 2, 4, 6 \rangle$, &
$\langle 2, 5, 8 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 6, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 6, 8 \rangle$, &
$\langle 1, 7, 8 \rangle$, &
$\langle 2, 3, 6 \rangle$, \\
$\langle 2, 3, 8 \rangle$, &
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 5, 6 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 4 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 3, 6 \rangle$, &
$\langle 1, 3, 7 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 5, 7 \rangle$, &
$\langle 2, 3, 8 \rangle$, &
$\langle 2, 4, 6 \rangle$, \\
$\langle 2, 4, 7 \rangle$, &
$\langle 2, 5, 8 \rangle$, &
$\langle 2, 6, 7 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 4, 8 \rangle$, &
$\langle 3, 5, 6 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 7 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1, 2, 3 \rangle$, &
$\langle 1, 2, 5 \rangle$, &
$\langle 1, 2, 6 \rangle$, &
$\langle 1, 3, 5 \rangle$, &
$\langle 1, 4, 6 \rangle$, &
$\langle 1, 4, 7 \rangle$, &
$\langle 1, 4, 8 \rangle$, &
$\langle 1, 7, 8 \rangle$, &
$\langle 2, 3, 7 \rangle$, \\
$\langle 2, 3, 8 \rangle$, &
$\langle 2, 5, 6 \rangle$, &
$\langle 2, 7, 8 \rangle$, &
$\langle 3, 4, 5 \rangle$, &
$\langle 3, 4, 6 \rangle$, &
$\langle 3, 4, 7 \rangle$, &
$\langle 3, 6, 8 \rangle$, &
$\langle 4, 5, 8 \rangle$, &
$\langle 5, 6, 8 \rangle$.
\end{tabular}
\subsection*{Dunce hats}
The dunce hat is the classical example of a contractible but
non-collapsible $2$-complex; the triangulations listed below realize
it with $18$ triangles on $8$ vertices.

\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$,\\
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,7 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$,\\
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,6 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$,\\
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,8 \rangle$, &
$\langle 4\,5\,7 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,7 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$,\\
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,8 \rangle$, &
$\langle 4\,5\,7 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,7 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,6 \rangle$,\\
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,6 \rangle$,\\
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,6 \rangle$,\\
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,6 \rangle$,\\
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,5\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 2\,6\,7 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,5\,7 \rangle$, &
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,6 \rangle$, &
$\langle 4\,5\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$,\\
$\langle 2\,5\,8 \rangle$, &
$\langle 2\,6\,7 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,8 \rangle$, &
$\langle 4\,5\,7 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,5\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,6 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,6 \rangle$, &
$\langle 4\,5\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,5\,6 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 2\,6\,7 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,5\,6 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,8 \rangle$,\\
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 2\,6\,7 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,5 \rangle$, &
$\langle 2\,4\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,5 \rangle$, &
$\langle 2\,4\,7 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,7 \rangle$, &
$\langle 1\,6\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,5 \rangle$,\\
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,4\,5 \rangle$, &
$\langle 2\,4\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,7 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,7 \rangle$, &
$\langle 4\,5\,7 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 2\,3\,7 \rangle$, &
$\langle 2\,4\,6 \rangle$, &
$\langle 2\,4\,8 \rangle$,\\
$\langle 2\,5\,8 \rangle$, &
$\langle 2\,6\,7 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,6\,8 \rangle$, &
$\langle 4\,5\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,2\,6 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,4\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,4\,8 \rangle$, &
$\langle 1\,7\,8 \rangle$, &
$\langle 2\,3\,7 \rangle$,\\
$\langle 2\,3\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,6\,8 \rangle$, &
$\langle 3\,4\,6 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,7 \rangle$, &
$\langle 2\,7\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,7 \rangle$, &
$\langle 5\,7\,8 \rangle$.
\end{tabular}
\bigskip
\hrule
\bigskip
\noindent
\begin{tabular}{lllllllll}
$\langle 1\,2\,3 \rangle$, &
$\langle 1\,2\,4 \rangle$, &
$\langle 1\,2\,5 \rangle$, &
$\langle 1\,3\,5 \rangle$, &
$\langle 1\,3\,6 \rangle$, &
$\langle 1\,4\,7 \rangle$, &
$\langle 1\,6\,7 \rangle$, &
$\langle 2\,3\,6 \rangle$, &
$\langle 2\,4\,6 \rangle$,\\
$\langle 2\,4\,8 \rangle$, &
$\langle 2\,5\,8 \rangle$, &
$\langle 3\,4\,5 \rangle$, &
$\langle 3\,4\,7 \rangle$, &
$\langle 3\,4\,8 \rangle$, &
$\langle 3\,7\,8 \rangle$, &
$\langle 4\,5\,6 \rangle$, &
$\langle 5\,6\,8 \rangle$, &
$\langle 6\,7\,8 \rangle$.
\end{tabular}
\newpage
\section{A complicated $15$-vertex triangulation of the $3$-sphere}
\label{app:fiveteen}
\noindent
$f$-vector: $(15, 105, 180, 90)$ \\
\noindent
Collapsing probability: $2.5903 \pm 0.0618 \%$ (for error probability $0.01 \%$)
\bigskip
\noindent
Isomorphism signature: \\
\noindent
\texttt{deefgaf.hbi.gbj.kalamai.nbo.n.lbl.oaj.hbg.k.oaoamaj.ibi.hbl.hbm.obg} \\
\texttt{.l.kam.jcicn.mak.nbkakcmak.lahanbm.nbg.mclcmbncn.jbnchambkbccc.jbfa} \\
\texttt{kaf.fcifkbiekgmbchikm.nbnfboddf.iecsjanOd} \\
\bigskip
\begin{center}
\begin{tabular}{lllll}
$\langle 1\,2\,3\,4 \rangle$, &
$\langle 1\,2\,3\,7 \rangle$, &
$\langle 1\,2\,4\,6 \rangle$, &
$\langle 1\,2\,6\,11 \rangle$, &
$\langle 1\,2\,7\,13 \rangle$,\\
$\langle 1\,2\,11\,13 \rangle$, &
$\langle 1\,3\,4\,5 \rangle$, &
$\langle 1\,3\,5\,7 \rangle$, &
$\langle 1\,4\,5\,9 \rangle$, &
$\langle 1\,4\,6\,10 \rangle$,\\
$\langle 1\,4\,9\,15 \rangle$, &
$\langle 1\,4\,10\,15 \rangle$, &
$\langle 1\,5\,7\,8 \rangle$, &
$\langle 1\,5\,8\,13 \rangle$, &
$\langle 1\,5\,9\,12 \rangle$,\\
$\langle 1\,5\,12\,13 \rangle$, &
$\langle 1\,6\,10\,11 \rangle$, &
$\langle 1\,7\,8\,13 \rangle$, &
$\langle 1\,9\,12\,14 \rangle$, &
$\langle 1\,9\,14\,15 \rangle$,\\
$\langle 1\,10\,11\,14 \rangle$, &
$\langle 1\,10\,14\,15 \rangle$, &
$\langle 1\,11\,12\,13 \rangle$, &
$\langle 1\,11\,12\,14 \rangle$, &
$\langle 2\,3\,4\,5 \rangle$,\\
$\langle 2\,3\,5\,8 \rangle$, &
$\langle 2\,3\,7\,12 \rangle$, &
$\langle 2\,3\,8\,12 \rangle$, &
$\langle 2\,4\,5\,6 \rangle$, &
$\langle 2\,5\,6\,14 \rangle$,\\
$\langle 2\,5\,8\,14 \rangle$, &
$\langle 2\,6\,11\,15 \rangle$, &
$\langle 2\,6\,14\,15 \rangle$, &
$\langle 2\,7\,9\,12 \rangle$, &
$\langle 2\,7\,9\,13 \rangle$,\\
$\langle 2\,8\,9\,10 \rangle$, &
$\langle 2\,8\,9\,12 \rangle$, &
$\langle 2\,8\,10\,14 \rangle$, &
$\langle 2\,9\,10\,13 \rangle$, &
$\langle 2\,10\,13\,15 \rangle$,\\
$\langle 2\,10\,14\,15 \rangle$, &
$\langle 2\,11\,13\,15 \rangle$, &
$\langle 3\,5\,7\,10 \rangle$, &
$\langle 3\,5\,8\,15 \rangle$, &
$\langle 3\,5\,10\,11 \rangle$,\\
$\langle 3\,5\,11\,15 \rangle$, &
$\langle 3\,6\,12\,13 \rangle$, &
$\langle 3\,6\,12\,15 \rangle$, &
$\langle 3\,6\,13\,14 \rangle$, &
$\langle 3\,6\,14\,15 \rangle$,\\
$\langle 3\,7\,10\,12 \rangle$, &
$\langle 3\,8\,12\,15 \rangle$, &
$\langle 3\,9\,10\,11 \rangle$, &
$\langle 3\,9\,10\,13 \rangle$, &
$\langle 3\,9\,11\,15 \rangle$,\\
$\langle 3\,9\,13\,14 \rangle$, &
$\langle 3\,9\,14\,15 \rangle$, &
$\langle 3\,10\,12\,13 \rangle$, &
$\langle 4\,5\,6\,9 \rangle$, &
$\langle 4\,6\,7\,8 \rangle$,\\
$\langle 4\,6\,7\,10 \rangle$, &
$\langle 4\,6\,8\,9 \rangle$, &
$\langle 4\,7\,8\,14 \rangle$, &
$\langle 4\,7\,10\,12 \rangle$, &
$\langle 4\,7\,12\,14 \rangle$,\\
$\langle 4\,8\,9\,11 \rangle$, &
$\langle 4\,8\,11\,14 \rangle$, &
$\langle 4\,9\,11\,15 \rangle$, &
$\langle 4\,10\,12\,13 \rangle$, &
$\langle 4\,10\,13\,15 \rangle$,\\
$\langle 4\,11\,12\,13 \rangle$, &
$\langle 4\,11\,12\,14 \rangle$, &
$\langle 4\,11\,13\,15 \rangle$, &
$\langle 5\,6\,9\,12 \rangle$, &
$\langle 5\,6\,12\,13 \rangle$,\\
$\langle 5\,6\,13\,14 \rangle$, &
$\langle 5\,7\,8\,15 \rangle$, &
$\langle 5\,7\,10\,11 \rangle$, &
$\langle 5\,7\,11\,15 \rangle$, &
$\langle 5\,8\,13\,14 \rangle$,\\
$\langle 6\,7\,8\,15 \rangle$, &
$\langle 6\,7\,10\,11 \rangle$, &
$\langle 6\,7\,11\,15 \rangle$, &
$\langle 6\,8\,9\,12 \rangle$, &
$\langle 6\,8\,12\,15 \rangle$,\\
$\langle 7\,8\,13\,14 \rangle$, &
$\langle 7\,9\,12\,14 \rangle$, &
$\langle 7\,9\,13\,14 \rangle$, &
$\langle 8\,9\,10\,11 \rangle$, &
$\langle 8\,10\,11\,14 \rangle$.
\end{tabular}
\end{center}
\section{Introduction}
Asymptotic analysis, the theory and practice of asymptotic estimates for various\,---\,often discrete\,---\,quantities, is among the main applications of the integral calculus. An archetypal example is the Stirling formula $n!\sim\sqrt{2\pi n}(\frac{n}{e})^n$ for $n\in\mathbb{N}=\{1,2,\dots\}$, $n\to\infty$. Here $n!$, the factorial of $n$, is the product $1\cdot2\cdot\ldots\cdot n$ of the first $n$ positive integers; it also equals the number of $n$-tuples $(a_1,a_2,\dots,a_n)$ in $\{1,2,\dots,n\}^n$
such that the cardinality $|\{a_1,a_2,\dots,a_n\}|=n$. In Section 3 we present two proofs of the Stirling formula by integrals. But what kind of integrals does asymptotic analysis use, or should it use?
The integrals most often used are the Riemann integral $(R)\int$, the Riemann--Stieltjes integral $(RS)\int$, the Lebesgue integral $(L)\int$,
the Cauchy integral $(C)\int$ in $\mathbb{C}$ (see, for example, G.\,P. Egorychev \cite{egor} and M.\,R.~Riedel \cite{ried}), and their multivariate
versions, especially the multivariate Cauchy integral $(MC)\int=(MC)\int\int\dots\int$
(see, for example, B.\,D.~McKay \cite{mcka} and R. Pemantle and M.\,C. Wilson \cite{pema_wils}). We give four expressions of $n!$ by an integral.
The first two are
\begin{eqnarray*}
\log(n!)&=&c+O(1/n)+(R)\int_{1/2}^{n+1/2}\log x,\mbox{ for a $c\in\mathbb{R}$ and all $n\in\mathbb{N}$, and}\\
n!&=&(R)\int_0^{+\infty}x^ne^{-x}\\
&:=&\lim_{y\to+\infty}(R)\int_0^y x^ne^{-x},\;\mbox{ for all $n\in\mathbb{N}_0=\{0,\,1,\,2,\,\dots\}$}\;.
\end{eqnarray*}
We obtain them below in Propositions~\ref{appr_sum_logs} and \ref{gamma}, respectively, as Newton integrals $(N)\int$. The third and fourth expression
of $n!$ by an $\int$ are ($n\in\mathbb{N}$)
\begin{eqnarray*}
\frac{1}{n!}&=&\frac{1}{2\pi i}\cdot(C)\int\frac{e^z}{z^{n+1}}\;\mbox{ and}\\
n!&=&\frac{1}{(2\pi i)^n}\cdot(MC)\int\int\dots\int\frac{(z_1+z_2+\dots+z_n)^n}{(z_1z_2\dots z_n)^2}
\end{eqnarray*}
where we integrate along counter-clockwise oriented circles in $\mathbb{C}$, centered at the origins. We will not consider in detail these two expressions, which are
easy to establish by the Cauchy residue theorem. Three features set the last formula apart. Its integrand is a rational and not a transcendental function. It computes $n!$ combinatorially (as the number of permutations of an $n$-element set) and not arithmetically
(as the product of the first $n$ natural numbers). Finally, the first three integral expressions for $n!$ are well known, but we have not encountered the fourth one in the literature.
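As a quick sanity check of the second expression, take $n=1$ and read the integral as a Newton integral: $F(x)=-(x+1)e^{-x}$ is on $(0,+\infty)$ primitive to $xe^{-x}$, with $F(0^+)=-1$ and $F(x)\to0$ for $x\to+\infty$, so the integral equals $0-(-1)=1=1!$.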
We wonder whether there are further simple integral representations of $n!$. One can give many more, not so simple, relations involving integrals and factorials. For example, F. Qi and B.-N. Guo \cite{qi_guo} present many
integral representations of the Catalan numbers $C_n=\frac{(2n)!}{(n+1)!n!}$, yielding relations like (\cite[Theorem 3]{qi_guo})
$$
(2n)!=(n+1)!\cdot n!\cdot\frac{1}{\pi}(R)\int_0^2 x^{2n}\sqrt{4-x^2}\,dx\;.
$$
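As a quick check, for $n=1$ the substitution $x=2\sin t$ gives
$$
(R)\int_0^2 x^{2}\sqrt{4-x^2}\,dx=16\,(R)\int_0^{\pi/2}\sin^2t\,\cos^2t\,dt=\pi\;,
$$
and the displayed relation reduces to the true identity $2=2\cdot\frac{1}{\pi}\cdot\pi$.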
Texts on asymptotic analysis, like the books by N.\,G. de Bruijn \cite{debr} or by P.~Flajolet and R. Sedgewick \cite{flaj_sedg}, usually do not devote much attention to the exact definition
and properties of the integrals they use and take their theory for granted, which is understandable; but some books do. For example, the monograph \cite{mont_vaug}
by H.\,L. Montgomery and R.\,C. Vaughan has an appendix on the $(RS)\int$ in which its definition and basic properties are given. In our article we want to present derivations
of the Stirling formula with all their integral details and we aim at logical simplicity.
Thus we need a theoretically simple integral. For example, not to take the $(C)\int$ for granted and instead to develop this powerful and versatile
integral from scratch is not a straightforward task. It is not enough to open one of the many textbooks on complex analysis, because they all reach the Cauchy integral formula only after
several tens of pages. Does this mean that a proof of this formula has to be 50 pages long?\,---\,see M.~Klazar \cite{klaz}. Thus we will not discuss derivations of the Stirling formula based on the third and fourth expressions.
Speaking of the $(C)\int$, it is often defined by reduction to the $(R)\int$ or $(RS)\int$ for the real and imaginary parts. But it seems sensible (integration contours are
usually composed only of straight segments and circular arcs) to integrate these parts just by the (generalized) $(N)\int$, as it is done for example in the textbook \cite[Kapitola 1.6]{vese_komp} of
J. Vesel\'y.
The simplest integral sufficient for our task is the historically first integral, the $(N)\int$ of I. Newton. This is not a big surprise, because in practice we compute most $(R)\int$s and $(RS)\int$s by the $(N)\int$. Our contribution to the debate (see, for example, B.\,S. Thomson \cite{thom,thom1}) about the merits of the $(R)\int$, the $(RS)\int$, the $(L)\int$ or the
Henstock--Kurzweil $(HK)\int$ is that the primordial $(N)\int$ is in its way superior because it completely suffices without any further sophistication
for deducing the fundamental Stirling formula. We develop all properties of the $(N)\int$ needed for these deductions in Section 2. The
simplest version of the $(N)\int$ for continuous functions suffices for our purposes, for a more general $(N)\int$ with generalized primitives see J. Vesel\'y \cite{vese}
or B.\,S. Thomson \cite{thom,thom1}.
The two derivations of the Stirling formula in Section 3 are well known to researchers in asymptotic analysis, and so are the results on the
$(N)\int$ in Section 2 to real analysts, except possibly for Theorems~\ref{fubi_infi_inter} and \ref{fubi_genthm} which are Fubini-type results for iterated Newton integrals over infinite intervals.
Hopefully their combination, presented here with all details, together with the fact that the $(N)\int$ suffices for these derivations, may be of interest to both groups and may constitute our original contribution to the subject.
We want to present all relevant details and are inspired in this by formalized
mathematics; see \cite[90. Stirling's formula]{formal} for formalizations of the Stirling formula. For example, the Coq formalization builds on the $(R)\int$.
Because of space and effort limitations we also take some things for granted. These include the following basic results from real analysis: the definition and properties of the real numbers $\mathbb{R}$,
the properties of derivatives (the Leibniz formula, differentiation of composite and inverse functions), Lagrange's mean value theorem, uniform continuity of continuous functions on compact sets, and especially the definitions and properties of the functions $\log x$, $e^x$ and
$\cos x$, and of the number $\pi$. The Stirling formula is a popular topic, and many proofs and derivations can be found in the literature, most of them using integrals. We mention a sample of ten: A.\,J.~Coleman \cite{cole}, P. Diaconis and D.~Freedman
\cite{diac_free}, C. Impens \cite{impe}, G.\,J.\,O.~Jameson \cite{jame}, H. Lou \cite{lou}, R. Michel \cite{mich}, M.\,R. Murty and K. Sampath \cite{murt_samp}, S. Niizeki
and M.~Araki \cite{niiz_arak}, J.\,M. Patin \cite{pati}, and T. Tao \cite{tao}. This list could be much extended.
\section{The Newton integral}\label{sect_intr_newt}
We use the extended reals $\mathbb{R}^*=\mathbb{R}\cup\{-\infty,+\infty\}$ where $\mathbb{R}$ are the real numbers and $-\infty<a<+\infty$ for every $a\in\mathbb{R}$. By an {\em interval $I$} we mean any subset $I\subset\mathbb{R}$ containing more than one element
and such that $a\le x\le b$ with $a,b\in I$ and $x\in\mathbb{R}$ implies $x\in I$. For $a,b\in\mathbb{R}^*$ with $a<b$ we write $(a,b)=\{x\in\mathbb{R}\;|\;a<x<b\}$ for the {\em open intervals}. The {\em compact intervals} are $[a,b]=\{x\in\mathbb{R}\;|\;a\le x\le b\}$ for $a,b\in\mathbb{R}$, $a<b$. Recall that if $I$ is an interval, $F,f\colon I\to\mathbb{R}$ are two functions, and
$F'(x)=f(x)$ for every $x\in I$, where $F'(x)$ means the corresponding one-sided derivative of $F$ if $x$ is an endpoint of $I$, then $F$ is called a {\em primitive to $f$ (on $I$)}.
For the discussion of complexity of finding or recovering primitives see the studies \cite{doug_kech} by R. Dougherty and A.\,S. Kechris and \cite{frei} by Ch. Freiling, or the surveys \cite{bull,bull1} by P.\,S. Bullen.
\begin{defi}[the Newton integral]\hspace{-1.6mm}{\bf. }\label{defi_newt}
Suppose that $a,b\in\mathbb{R}^*$, $a<b$, and that $f\colon(a,b)\to\mathbb{R}$ is a real function. The Newton integral of $f$ over
$(a,b)$ is the real number
$$
(N)\int_a^bf:=F(b^-)-F(a^+)
$$
where $F$ is on $(a,b)$ primitive to $f$ and both limits $F(a^+):=\lim_{x\to a^+}F(x)$ and $F(b^-):=\lim_{x\to b^-}F(x)$ are finite. We set
$$
(N)\int_b^af:=-\;(N)\int_a^bf\;.
$$
\end{defi}
If $f$ does not have a primitive on $(a,b)$ or one of the limits of $F$ does not exist or is infinite, the Newton integral of $f$ is undefined. It is well known and easy to prove by Lagrange's
mean value theorem that any two primitives to the same function only differ by a constant shift, and thus the definition is correct (independent of the choice of $F$). The functions $F$ and $f$ may be defined also outside $(a,b)$, and therefore the limits of $F$ have to be marked as one-sided. We use the traditional notation
$$
\int_a^b f\;dx=\int_a^b f(x)\;dx
$$
only in situations when it is necessary to identify the integration variable ($x$ in this case).
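For illustration, $F(x)=2\sqrt{x}$ is on $(0,1)$ primitive to $f(x)=1/\sqrt{x}$ and $F(0^+)=0$, $F(1^-)=2$, so the $(N)\int_0^1 x^{-1/2}=2$ exists even though $f$ is unbounded. On the other hand, $\log x$ is on $(0,1)$ primitive to $1/x$ but $\log(0^+)=-\infty$, and the $(N)\int_0^1 x^{-1}$ is undefined.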
Existence of the $(N)\int_a^bf$ for any finite $a,b$ and any $f$ continuous on $[a,b]$ follows from the next theorem. Recall that a sequence of functions $f_n$, $n\in\mathbb{N}$, defined on a set $M\subset\mathbb{R}$ {\em converges on $M$ locally uniformly to a function
$f\colon M\to\mathbb{R}$}, briefly written {\em locally $f_n\rightrightarrows f$ on $M$}, if for every $a\in M$ there is an open interval $I\ni a$ such that for every $\varepsilon>0$ there is an $n_0\in\mathbb{N}$ such that if $n\ge n_0$ and $x\in I\cap M$
then $|f_n(x)-f(x)|<\varepsilon$. If one may always set $I=\mathbb{R}$, we say that {\em $f_n$ converge on $M$ uniformly to $f$} and write briefly {\em $f_n\rightrightarrows f$ on $M$}.
\begin{thm}[primitive by limit transition]\hspace{-1.6mm}{\bf. }\label{thm_prim_limi}
Let $I$ be an interval, $a$ in $I$ be arbitrary but fixed, and functions $f,f_n\colon I\to\mathbb{R}$, $n\in\mathbb{N}$, be such that (i) locally $f_n\rightrightarrows f$ on $I$ and (ii) each $f_n$ has on $I$ a primitive. Then the
primitives $F_n$ to $f_n$ satisfying $F_n(a)=0$, $n\in\mathbb{N}$, converge on $I$ locally uniformly to a primitive $F$ to $f$.
\end{thm}
\par\medskip\noindent{\em Proof. }
First we show that the sequence $F_n(x)$, $n=1,2,\dots$, is Cauchy, uniformly in $x\in J$ for any compact interval $J\subset I$ containing $a$. Indeed, if $m\ge n$ and $x\in J$ then, by Lagrange's mean value theorem,
\begin{eqnarray*}
|F_m(x)-F_n(x)|&\le&|(F_m-F_n)(x)-(F_m-F_n)(a)|+|F_m(a)-F_n(a)|\\
&=&|(x-a)\cdot(f_m-f_n)(b)|\;,
\end{eqnarray*}
for some $b$ lying between $x$ and $a$ and thus in $J$. By (i) and the compactness of $J$, $f_n\rightrightarrows f$ on $J$ and the last absolute value is for large $n$ uniformly small. Thus for any $\varepsilon>0$
there is an $n_0\in\mathbb{N}$ such that if $m\ge n\ge n_0$ then $|F_m(x)-F_n(x)|<\varepsilon$ for every $x\in J$. It follows that for some $F\colon I\to\mathbb{R}$ we have $F_n\rightrightarrows F$ on $J$, and hence locally $F_n\rightrightarrows F$ on $I$.
Next we show that $F$ is on $I$ primitive to $f$. Let an $x_0\in I$ be given and let $J\subset I$ be a compact interval containing $x_0$ in its relative interior. Let an $\varepsilon>0$ be given. Since $f_n\rightrightarrows f$ on $J$, we can take an
$n_0\in\mathbb{N}$ such that if $m\ge n\ge n_0$ then $|f_m(x)-f_n(x)|<\varepsilon$ for every $x\in J$. We fix an $n\ge n_0$ such that $|f_n(x_0)-f(x_0)|<\varepsilon$. Since $F_n'=f_n$ on $I$, we can take a relatively open interval $K\subset J$ such that $x_0\in K$
and for every $x\in K$, $x\ne x_0$, we have $|\frac{F_n(x)-F_n(x_0)}{x-x_0}-f_n(x_0)|<\varepsilon$. Let an $x\in K$, $x\ne x_0$, be given. We fix an $m\ge n$ such that
$\left|\frac{F(x)-F(x_0)}{x-x_0}-\frac{F_m(x)-F_m(x_0)}{x-x_0}\right|<\varepsilon$.
Then for the given $x\in K$ we have, by the previous choices, by Lagrange's mean value theorem, and by the triangle inequality,
\begin{eqnarray*}
&&\left|\frac{F(x)-F(x_0)}{x-x_0}-f(x_0)\right|\le\left|\frac{F(x)-F(x_0)}{x-x_0}-\frac{F_m(x)-F_m(x_0)}{x-x_0}\right|+\\
&&+\;\left|\frac{(F_m-F_n)(x)-(F_m-F_n)(x_0)}{x-x_0}\right|+\left|\frac{F_n(x)-F_n(x_0)}{x-x_0}-f_n(x_0)\right|+\\
&&+\;|f_n(x_0)-f(x_0)|\\
&&<\varepsilon+|(f_m-f_n)(y)|+\varepsilon+\varepsilon<4\varepsilon\;,
\end{eqnarray*}
for some $y$ lying between $x_0$ and $x$ and thus in $J$. Hence $F'(x_0)=f(x_0)$.
\hfill{$\Box$}\bigskip
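\noindent
A classical illustration of the theorem: on $I=(-1,1)$ with $a=0$, the partial sums $f_n(x)=\sum_{k=0}^{n}x^k$ converge locally uniformly to $f(x)=\frac{1}{1-x}$, and the primitives $F_n(x)=\sum_{k=1}^{n+1}\frac{x^k}{k}$ with $F_n(0)=0$ converge locally uniformly to $-\log(1-x)$, which is indeed on $(-1,1)$ primitive to $f$.
\bigskip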
\noindent
The following is an existence theorem for the $(N)\int$ on which we rely in the case of bounded intervals.
\begin{cor}[existence of the $(N)\int$]\hspace{-1.6mm}{\bf. }\label{prop_ex_newt}
Let $a,b\in\mathbb{R}$, $a<b$, and $f\colon[a,b]\to\mathbb{R}$ be a continuous function. Then $f$ has a primitive $F$ on $[a,b]$ and the $(N)\int_a^bf$ exists.
\end{cor}
\par\medskip\noindent{\em Proof. }
It suffices to prove the existence of $F$ because $F(a^+)=F(a)$ and $F(b^-)=F(b)$ by its continuity. Due to compactness of $[a,b]$ the function $f$ is uniformly continuous.
So for every $n\in\mathbb{N}$ there is a partition $a=a_0<a_1<\dots<a_k=b$ of $[a,b]$ (we do not mark the dependence on $n$) such that
$$
a_i\le x\le a_{i+1}\Rightarrow|f(x)-f(a_i)|<\frac{1}{n}\;\mbox{ and }\;|f(x)-f(a_{i+1})|<\frac{1}{n}
$$
for every $i=0,1,\dots,k-1$. Let $f_n\colon[a,b]\to\mathbb{R}$ be the piecewise linear continuous function whose graph is the broken line with breaks exactly at the points $(a_i,f(a_i))$, $i=0,1,\dots,k$.
We check that $f_n$ and $f$ satisfy both hypotheses in Theorem~\ref{thm_prim_limi}. By the definition of $f_n$, if $x\in[a_i,a_{i+1}]$ then the value $f_n(x)$ lies between
$f(a_i)$ and $f(a_{i+1})$, thus $|f(x)-f_n(x)|<\frac{2}{n}$ and we see that in fact $f_n\rightrightarrows f$ (uniformly) on $[a,b]$, so (i) holds. Since
for every $u,v,w\in\mathbb{R}$ the function $(u/2)x^2+vx+w$ is primitive on any interval to the linear function $ux+v$, it is easy, by adjusting the shifts $w$, to patch together from the local primitives of the linear pieces of $f_n$ on
the intervals $[a_i,a_{i+1}]$, $i=0,1,\dots,k-1$, a function $g_n$ that is primitive to $f_n$ on the whole interval $[a,b]$. In this we use the fact that for any real function
$h$, if $h'_-(x)=y$ and $h'_+(x)=y$ then $h'(x)=y$. Thus (ii) holds. By Theorem~\ref{thm_prim_limi}, $f$ has on $[a,b]$ a primitive function $F$.
\hfill{$\Box$}\bigskip
\noindent
As is well known, one can obtain a primitive $F$ to $f$ also as the Riemann integral $F(x)=(R)\int_a^x f$, but this goes against the spirit of our article.
Similar limit constructions of primitives appear, for example, in J. Jost \cite[Chapter 6]{jost} or B.\,S. Thomson \cite[Chapter 1.2]{thom}. If one is interested only in proving
the existence of a primitive to any continuous $f$, then the proof in \cite[Chapter 1.2]{thom} is simpler than our argument Theorem~\ref{thm_prim_limi} $\to$ Corollary~\ref{prop_ex_newt}.
By a historical note in \cite[Chapter 1.2]{thom} quoting F.\,A. Medvedev \cite[p. 66]{medv}, it was only in 1905 that H.~Lebesgue provided in \cite{lebe} a $(R)\int$-free
construction of primitives to continuous functions; up to then some arguments justifying their existence were logically circular, as they obtained a primitive in terms of
an integral that they had earlier defined in terms of a primitive.
\begin{prop}[Hake's theorem]\hspace{-1.6mm}{\bf. }\label{Hake_thm}
Let $a,b\in\mathbb{R}^*$, $a<b$, and $f\colon(a,b)\to\mathbb{R}$ be a function. Then in
$$
(N)\int_a^b f=\lim_{c\to b^-}(N)\int_a^c f
$$
if one side is defined and finite, so is the other side and the equality holds. A similar result holds for the limit $c\to a^+$.
\end{prop}
\par\medskip\noindent{\em Proof. }
If the left side is defined and finite, it is $F(b^-)-F(a^+)$ where $F$ is on $(a,b)$ primitive to $f$. For any $c\in(a,b)$ then, for the restricted $f$ and $F$, the
$(N)\int_a^c f$ exists and equals $F(c^-)-F(a^+)=F(c)-F(a^+)$ by the continuity of $F$ at $c$. The limit transition $c\to b^-$ then shows that the right
side equals $F(b^-)-F(a^+)$ too.
If the right side is defined and finite, for every $c\in(a,b)$ we have on $(a,c)$ a primitive $F_c$ to the restricted $f$, and $(N)\int_a^c f=F_c(c^-)-F_c(a^+)$.
By the property of primitives, we can choose $F_c$ so that $F_c(a^+)=0$ for every $c$ in $(a,b)$. Then $a<c<d<b\Rightarrow F_c\subset F_d$ (i.e., $F_d$ extends $F_c$)
and $F=\bigcup_{c\in(a,b)}F_c$ is a primitive to $f$ on $(a,b)$. Then
\begin{eqnarray*}
\lim_{c\to b^-}(N)\int_a^c f&=&\lim_{c\to b^-}(F_c(c^-)-F_c(a^+))=\lim_{c\to b^-}(F(c)-F(a^+))\\
&=&\lim_{c\to b^-}F(c)-F(a^+)=F(b^-)-F(a^+)\\
&=&(N)\int_a^b f\;.
\end{eqnarray*}
\hfill{$\Box$}\bigskip
\noindent
Since $(N)\int_a^c f$ is not defined for $c>b$, we could write $\lim_{c\to b}$ in the statement.
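For illustration, $-e^{-x}$ is on every $(0,c)$, $c\in\mathbb{R}$, primitive to $e^{-x}$, so $(N)\int_0^c e^{-x}=1-e^{-c}$, and Hake's theorem (or directly Definition~\ref{defi_newt}) gives $(N)\int_0^{+\infty}e^{-x}=\lim_{c\to+\infty}(1-e^{-c})=1$.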
\begin{prop}[linearity and additivity]\hspace{-1.6mm}{\bf. }\label{prop_linear}
Suppose that $a,b\in\mathbb{R}^*$, $a<b$, and that the integrals $(N)\int_a^bf$ and $(N)\int_a^bg$ exist. Then the following holds.
\begin{enumerate}
\item For every $\alpha,\beta\in\mathbb{R}$ the function $h=\alpha f+\beta g$ has Newton integral over $(a,b)$ and
$$
(N)\int_a^bh=\alpha\cdot(N)\int_a^bf+\beta\cdot(N)\int_a^bg\;.
$$
\item For every $c\in(a,b)$ the integrals $(N)\int_a^cf$ and $(N)\int_c^bf$ exist and
$$
(N)\int_a^bf=(N)\int_a^cf+(N)\int_c^bf\;.
$$
\end{enumerate}
\end{prop}
\par\medskip\noindent{\em Proof. }
1. This follows from the fact that if $F$ and $G$ are on $(a,b)$ primitive to $f$ and $g$, respectively, then $\alpha F+\beta G$ is on $(a,b)$ primitive to $\alpha f+\beta g$, and from linearity of functional limits at $a^+$ and $b^-$.
2. If $F$ is on $(a,b)$ primitive to $f$, it (its restriction) is primitive to (the restricted) $f$ also on $(a,c)$ and on $(c,b)$. The integrals $(N)\int_a^cf$ and $(N)\int_c^bf$ exist because
$$
\lim_{x\to c^+}F(x)=\lim_{x\to c^-}F(x)=F(c)
$$
by the continuity of $F$ at $c$. Also, $F(b^-)-F(a^+)=(F(b^-)-F(c^+))+(F(c^-)-F(a^+))$ gives the stated equality.
\hfill{$\Box$}\bigskip
\noindent
Manipulations of integrals very often use part 1, and we will not always acknowledge it.
\begin{prop}[monotonicity]\hspace{-1.6mm}{\bf. }\label{prop_mono_newt}
Suppose that $a,b\in\mathbb{R}^*$, $a<b$, the integrals $(N)\int_a^bf$ and $(N)\int_a^bg$ exist, and that
$f(x)\le g(x)$ for every $x\in(a,b)$. Then
$$
(N)\int_a^bf\le(N)\int_a^bg\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
Let $a<a'<b'<b$ where $a',b'\in\mathbb{R}$ and let $F$ and $G$ be on $(a,b)$ primitive to $f$ and $g$, respectively. By Lagrange's mean value theorem we have, with some $c\in(a',b')$,
\begin{eqnarray*}
G(b')-G(a')-(F(b')-F(a'))&=&(G-F)(b')-(G-F)(a')\\
&=&(b'-a')\cdot(G-F)'(c)\\
&=&(b'-a')(g(c)-f(c))\\&\ge&0
\end{eqnarray*}
because $g-f\ge0$ on $(a,b)$. So
$$
F(b')-F(a')\le G(b')-G(a')\;,
$$
and limit transitions $a'\to a^+$ and $b'\to b^-$ give the stated inequality.
\hfill{$\Box$}\bigskip
\noindent
As a corollary we obtain the most often used estimate in the integral calculus.
\begin{cor}[ML bound]\hspace{-1.6mm}{\bf. }\label{ML_bound}
Suppose that $a,b,c\in\mathbb{R}$, $a<b$, the integral $(N)\int_a^bf$ exists, and $f(x)\le c$ (resp. $f(x)\ge c$) for every $x\in(a,b)$. Then
$$
(N)\int_a^bf\le c(b-a)\ \bigg(\mbox{resp. }(N)\int_a^bf\ge c(b-a)\bigg)\;.
$$
\end{cor}
\par\medskip\noindent{\em Proof. }
Apply the proposition to $f(x)$ and the constant function $c$, and compute the $(N)\int$ of a constant function.
\hfill{$\Box$}\bigskip
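\noindent
For illustration, since $\log n\le\log x\le\log(n+1)$ for $x\in(n,n+1)$, the two bounds give
$$
\log n\le(N)\int_n^{n+1}\log x\,dx\le\log(n+1)=\log n+\log(1+1/n)\le\log n+\frac{1}{n}\;,
$$
so this integral approximates $\log n$ with an error of at most $1/n$.
\bigskip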
The next theorem can be found in a more general form with generalized primitives in J. Vesel\'y \cite[V\v{e}ta 11.3.13]{vese}.
\begin{thm}[integration by parts]\hspace{-1.6mm}{\bf. }\label{prop_pp_newt}
Let $a,b\in\mathbb{R}^*$, $a<b$, and $F$, resp. $G$, be on $(a,b)$ primitive to $f$, resp. to $g$. Then in
$$
(N)\int_a^b fG=\big((FG)(b^-)-(FG)(a^+)\big)-(N)\int_a^b Fg
$$
if two of the three terms are defined and finite then so is the third one and the equality holds.
\end{thm}
\par\medskip\noindent{\em Proof. }
Suppose that the first term $E(b^-)-E(a^+)$, where $E$ is on $(a,b)$ primitive to $fG$, and the second term $(FG)(b^-)-(FG)(a^+)$ are defined and finite. By the Leibniz rule,
$$
(FG-E)'=fG+Fg-fG=Fg
$$
on $(a,b)$ and $FG-E$ is primitive to $Fg$. Also, by the assumptions, $(FG-E)(b^-)=(FG)(b^-)-E(b^-)$ and $(FG-E)(a^+)=(FG)(a^+)-E(a^+)$. The stated equality therefore follows by subtraction and rearrangement. If the third term and the second
term are defined and finite, the argument is similar. Suppose that the first term $E(b^-)-E(a^+)$ and the third term $D(b^-)-D(a^+)$, where $E$ and $D$ are on $(a,b)$ primitive to $fG$ and $Fg$, respectively, are defined
and finite. By the Leibniz rule,
$$
(E+D)'=fG+Fg=(FG)'
$$
on $(a,b)$. Thus (by Lagrange's mean value theorem) $E+D$ and $FG$ only differ by a constant shift $c$. Hence $(FG)(b^-)=E(b^-)+D(b^-)+c$ and $(FG)(a^+)=E(a^+)+D(a^+)+c$. The stated equality again follows by subtraction
and rearrangement.
\hfill{$\Box$}\bigskip
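\noindent
For illustration take $(a,b)=(0,1)$, $f=1$, $F(x)=x$, $g(x)=1/x$ and $G(x)=\log x$. The second term equals $1\cdot\log 1-\lim_{x\to0^+}x\log x=0$ and the third term is $(N)\int_0^1 1=1$, both finite, so the theorem yields $(N)\int_0^1\log x=0-1=-1$.
\bigskip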
\begin{prop}[substitution rule]\hspace{-1.6mm}{\bf. }\label{prop_subst}
Suppose that $a,b,c,d\in\mathbb{R}^*$, $a<b$ and $c<d$, $g\colon (c,d)\to(a,b)$, $g(x)\to a$ for $x\to c$, $g(x)\to b$ for $x\to d$, $g$ is differentiable on $(c,d)$, $f\colon(a,b)\to\mathbb{R}$, and the
$(N)\int_a^bf$ exists. Then the next integral exists and
$$
(N)\int_c^d (f\circ g)g'=(N)\int_a^b f=(N)\int_{g(c)}^{g(d)} f\;.
$$
Here we extend $g$ by setting $g(c)=a$ and $g(d)=b$ via the limit transitions, and understand this as mere notation when $c=-\infty$ or $d=+\infty$.
\end{prop}
\par\medskip\noindent{\em Proof. }
Let $F$ be on $(a,b)$ primitive to $f$. Then
$$
(N)\int_a^b f=F(b^-)-F(a^+)=(F\circ g)(d^-)-(F\circ g)(c^+)=(N)\int_c^d (f\circ g)g'
$$
because $(F\circ g)'=(f\circ g)g'$ on $(c,d)$ and therefore $F\circ g$ is on $(c,d)$ primitive to $(f\circ g)g'$.
\hfill{$\Box$}\bigskip
\noindent
In the situation when the substitution $g$ flips the interval, with $g(x)\to b$ for $x\to c$ and $g(x)\to a$ for $x\to d$, and we modify the hypotheses accordingly, we obtain an identical formula:
\begin{eqnarray*}
(N)\int_{g(c)}^{g(d)} f&=&(N)\int_b^a f=-\;(N)\int_a^b f
=-F(b^-)+F(a^+)\\
&=&-\;(F\circ g)(c^+)+(F\circ g)(d^-)=(N)\int_c^d (f\circ g)g'\;.
\end{eqnarray*}
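\noindent
For illustration take $g(x)=x^2$ on $(c,d)=(0,+\infty)$, so that $(a,b)=(0,+\infty)$, and $f(u)=e^{-u}$. Since the $(N)\int_0^{+\infty}e^{-u}\,du=1$ exists, the rule gives
$$
(N)\int_0^{+\infty}2xe^{-x^2}\,dx=(N)\int_0^{+\infty}e^{-u}\,du=1\;.
$$
A substitution of the same kind ($y=xz$ at a fixed $x$) is used at the end of this section.
\bigskip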
For the second derivation of the Stirling formula we need a Fubini-type result for the Newton integral. We obtain it in the next two theorems. The first one appears, with a different proof, in P. Walker \cite[Theorem A.9 (i)]{walk}
for the $(W)\int$. The integral, which we tentatively call Walker's, is introduced in \cite[Chapter 4]{walk}, which was not available to us, so we could not determine its relation to other integrals. Later we will see that $(W)\int\ne(N)\int$.
\begin{thm}[Fubini \`a la Newton]\hspace{-1.6mm}{\bf. }\label{thm_fubi}
Suppose that $a,b,c,d\in\mathbb{R}$, $a<b$ and $c<d$, and
$$
f=f(x,\,y)\colon[a,\,b]\times[c,\,d]\to\mathbb{R}
$$
is a continuous function. Then the following two iterated Newton integrals exist and are equal:
$$
(N)\int_a^b\left((N)\int_c^d f(x,\,y)\,dy\right)\,dx=(N)\int_c^d\left((N)\int_a^b f(x,\,y)\,dx\right)\,dy\;.
$$
\end{thm}
\par\medskip\noindent{\em Proof. }
Each inner integral $I(x)=(N)\int_c^d f(x,y)\,dy$ exists by Corollary~\ref{prop_ex_newt}. If $x_1,x_2\in[a,b]$ then
$$
I(x_1)-I(x_2)=(N)\int_c^d(f(x_1,\,y)-f(x_2,\,y))\,dy\;.
$$
By the uniform continuity of $f(x,y)$ on the compact rectangle $[a,\,b]\times[c,\,d]$ we see that for close $x_1$ and $x_2$ the value
$|f(x_1,y)-f(x_2,y)|$ is small for any $y$, and thus by Corollary~\ref{ML_bound} also the integral and $|I(x_1)-I(x_2)|$ are small\,---\,$I(x)$ is continuous. Thus the $(N)\int_a^b I(x)$ exists by Corollary~\ref{prop_ex_newt}. A similar argument shows the existence of the integrals
$J(y)=(N)\int_a^bf(x,y)\,dx$ and $(N)\int_c^d J(y)$ on the right side of the formula.
We prove the equality by showing that the two iterated Newton integrals are arbitrarily close. Let $\varepsilon>0$ be given. By the uniform continuity of $f$ on the rectangle there exist a partition $a=a_0<a_1<\dots<a_k=b$ of $[a,b]$, a partition $c=b_0<b_1<\dots<b_l=d$ of $[c,d]$,
and constants $c_{i,j}\in\mathbb{R}$, $i=0,1,\dots,k-1$ and $j=0,1,\dots,l-1$, such that if $(x,y)$ lies in $[a_i,a_{i+1}]\times[b_j,b_{j+1}]$ then $|f(x,y)-c_{i,j}|<\varepsilon$. Let $f_{i,j}$ be the restriction of $f$ to $[a_i,a_{i+1}]\times[b_j,b_{j+1}]$ and let $c_{i,j}$
also denote the constant function $c_{i,j}$ on this rectangle. Then
\begin{eqnarray*}
I_{i,j}&:=&(N)\int_{a_i}^{a_{i+1}}\left((N)\int_{b_j}^{b_{j+1}}c_{i,j}\,dy\right)\,dx=(N)\int_{a_i}^{a_{i+1}}c_{i,j}(b_{j+1}-b_j)\,dx\\
&=&c_{i,j}(a_{i+1}-a_i)(b_{j+1}-b_j)=(N)\int_{b_j}^{b_{j+1}}c_{i,j}(a_{i+1}-a_i)\,dy\\
&=&(N)\int_{b_j}^{b_{j+1}}\left((N)\int_{a_i}^{a_{i+1}}c_{i,j}\,dx\right)\,dy=:J_{i,j}\;.
\end{eqnarray*}
By parts 1 and 2 of Proposition~\ref{prop_linear},
\begin{eqnarray*}
(N)\int_a^b\left((N)\int_c^d f\,dy\right)\,dx&=&(N)\int_a^b\left(\sum_{j=0}^{l-1}(N)\int_{b_j}^{b_{j+1}} f\,dy\right)\,dx\\
&=&\sum_{j=0}^{l-1}(N)\int_a^b\left((N)\int_{b_j}^{b_{j+1}} f\,dy\right)\,dx\\
&=&\sum_{j=0}^{l-1}\sum_{i=0}^{k-1}(N)\int_{a_i}^{a_{i+1}}\left((N)\int_{b_j}^{b_{j+1}} f_{i,j}\,dy\right)\,dx\;.
\end{eqnarray*}
A similar computation shows that
$$
(N)\int_c^d\left((N)\int_a^b f\,dx\right)\,dy=\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}(N)\int_{b_j}^{b_{j+1}}\left((N)\int_{a_i}^{a_{i+1}} f_{i,j}\,dx\right)\,dy\;.
$$
Since $|f_{i,j}-c_{i,j}|<\varepsilon$ on $[a_i,a_{i+1}]\times[b_j,b_{j+1}]$, it follows by Corollary~\ref{ML_bound} that the first iterated Newton integral in the equality we are proving differs from $\sum_{j=0}^{l-1}\sum_{i=0}^{k-1}I_{i,j}$
by less than $\varepsilon(b-a)(d-c)$, and the second one differs from $\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}J_{i,j}$ by less than $\varepsilon(d-c)(b-a)$. Since always $I_{i,j}=J_{i,j}$, these two double sums are equal and the two iterated Newton integrals differ by less
than $2\varepsilon(b-a)(d-c)$, as we need.
\hfill{$\Box$}\bigskip
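\noindent
As an aside (not needed in the sequel), the equality of the two iterated integrals is easy to confirm numerically. The following sketch, with an arbitrarily chosen smooth integrand and a hand-rolled composite Simpson rule, is purely illustrative:
\begin{verbatim}
# Numerical sanity check of the rectangle Fubini theorem; the integrand
# exp(x*y) on [0,1] x [0,2] and the grid size are arbitrary choices.
import math

def simpson(g, a, b, n=200):            # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1))
    s += 2 * sum(g(a + 2*k*h)       for k in range(1, n//2))
    return s * h / 3

f = lambda x, y: math.exp(x * y)
I_xy = simpson(lambda x: simpson(lambda y: f(x, y), 0, 2), 0, 1)
I_yx = simpson(lambda y: simpson(lambda x: f(x, y), 0, 1), 0, 2)
print(I_xy, I_yx)    # both approx 3.68387, equal up to quadrature error
\end{verbatim}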
\noindent
The theorem inverts the well known result that $\partial_x\partial_yf=\partial_y\partial_xf$ at a point if both second order partial derivatives exist in a neighborhood of
the point and are continuous at it.
But what we need in Section 3 is an extension of the previous theorem with infinite $b$ and $d$. Then \cite[Theorem A.9 (ii)]{walk} says:
\begin{quote}
(ii) If $f$ is continuous on $I\times J$ where $I,J$ are intervals in $R$ which may be finite or infinite, and if $f$ is positive on $E$
[$=I\times J$] then the integrals in (A.1) [the two iterated integrals] are either all infinite, or all finite and equal.
\end{quote}
This does not hold for the $(N)\int$. Consider a continuous function
$$
f=f(x,\,y)\colon[0,\,+\infty)^2\to(0,\,1]
$$
such that $f(x,1)=1$ for every $x\ge0$, and such that for
each fixed $x\ge0$ the section $f(x,y)$ first increases for $0\le y\le 1$ from $0^+$ to $1$ and then for $y\ge 1$ rapidly decreases
from $1$ to $0^+$ in such a way that the width of the base of this peak decreases for $x\to+\infty$ to $0$ fast enough so that each $J(x)=(N)\int_0^{+\infty}f(x,y)\,dy$ exists, $J\colon[0,+\infty)\to(0,+\infty)$ is
continuous, and
$$
(N)\int_0^{+\infty}J(x)
$$
exists. Then the iterated Newton integral
$$
(N)\int_0^{+\infty}\left((N)\int_0^{+\infty} f(x,\,y)\,dy\right)\,dx=(N)\int_0^{+\infty}J(x)
$$
exists. However, the other iterated Newton integral
$$
(N)\int_0^{+\infty}\left((N)\int_0^{+\infty} f(x,\,y)\,dx\right)\,dy
$$
is undefined because for $y=1$ the inner Newton integral is not defined. The reader will have no problem supplying the numerical details.
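\noindent
For instance (a sketch only, not claimed to match the informal description in every detail: the section $f(x,\cdot)$ starts at the small positive value $e^{-e^{2x}}$ rather than exactly at $0^+$), one concrete candidate is
$$
f(x,\,y)=\exp(-e^{2x}(y-1)^2)\;.
$$
Then $0<f\le1$, $f(x,1)=1$ for every $x\ge0$, the section $f(x,\cdot)$ increases on $[0,\,1]$ and decreases on $[1,\,+\infty)$, the width of the peak at $y=1$ is of order $e^{-x}$, and the Gaussian majorization gives $J(x)\le\sqrt{\pi}\,e^{-x}$; the required existence and continuity properties can be checked along the lines of the proof of Theorem~\ref{fubi_infi_inter} below.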
We do not give a general Fubini-type theorem for the $(N)\int$ over infinite intervals strong enough to prove
Proposition~\ref{gauss_int} because we could not find such a theorem. Instead we directly establish only the needed instance
for the function $ue^{-u^2(1+v^2)}$.
\begin{thm}[a (N) Fubini result]\hspace{-1.6mm}{\bf. }\label{fubi_infi_inter}
If $f(x,z)=xe^{-x^2(1+z^2)}=xe^{-x^2}e^{-x^2z^2}$ then the following two iterated Newton integrals exist and are equal:
$$
(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}f(x,\,z)\,dz\right)\,dx=(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}f(x,\,z)\,dx\right)\,dz\;.
$$
\end{thm}
\par\medskip\noindent{\em Proof. }
The $(N)\int_0^{+\infty} e^{-x^2}=:c>0$ exists by the majorization $e^{-x^2}\le e^{-x}$ for $x\ge1$, Corollary~\ref{prop_ex_newt}, and Propositions~\ref{Hake_thm}, \ref{prop_linear} (part 2), and \ref{prop_mono_newt}. The calculation
\begin{eqnarray*}
(N)\int_0^{+\infty} e^{-x^2}\cdot(N)\int_0^{+\infty} e^{-y^2}&=&(N)\int_0^{+\infty} e^{-x^2}\left((N)\int_0^{+\infty} e^{-y^2}\right)\,dx\\
&=&(N)\int_0^{+\infty}\left((N)\int_0^{+\infty} e^{-x^2-y^2}\,dy\right)\,dx\\
&=&(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}f(x,\,z)\,dz\right)\,dx
\end{eqnarray*}
then shows that the first iterated Newton integral $A$ exists. On the first two lines we multiplied an integral by a constant
according to part 1 of Proposition~\ref{prop_linear} and we used that $e^a e^b=e^{a+b}$. On the third line we used Proposition~\ref{prop_subst} with the substitution $y\leftarrow z$, $y=xz$. To prove the equality we estimate how much the last iterated integral $A$
differs from its finite approximation
$$
A(b):=(N)\int_0^b\left((N)\int_0^b f(x,\,z)\,dz\right)\,dx,\ \mathbb{R}\ni b\ge1\;.
$$
The integrals $A(b)$ exist by Theorem~\ref{thm_fubi}.
For any $b\ge1$ (we justify the estimates after the computation),
\begin{eqnarray*}
&&0\le A-A(b)=(N)\int_0^b\left((N)\int_b^{+\infty}f(x,z)\,dz\right)\,dx\,+\\
&&+\,(N)\int_b^{+\infty}\left((N)\int_0^{+\infty}f(x,z)\,dz\right)\,dx\\
&&\le(N)\int_0^b\left((N)\int_{bx}^{+\infty}e^{-x^2}e^{-y^2}\,dy\right)\,dx+(N)\int_b^{+\infty}\left((N)\int_0^1 c_0x^{-2}\,dz\,+\right.\\
&&\left.+\,(N)\int_1^{+\infty} c_0x^{-2}e^{-z}\,dz\right)\,dx\\
&&\le(N)\int_0^{1/b^{1/2}} c\,dx+(N)\int_{1/b^{1/2}}^b\left((N)\int_{b^{1/2}}^{+\infty}e^{-y}\right)\,dx+\\
&&+\,(N)\int_b^{+\infty}(c_0x^{-2}+c_0x^{-2}e^{-1})\,dx\\
&&\le cb^{-1/2}+be^{-b^{1/2}}+c_0(1+e^{-1})b^{-1}\;.
\end{eqnarray*}
In the initial $=$ we used part 2 of Proposition~\ref{prop_linear}. In the next $\le$ we returned from $z$ to the variable $y$ and set $c_0=\max_{x\ge 1}xe^{-x^2}/x^{-2}$. In the penultimate $\le$ we invoked the existence of $(N)\int_0^{+\infty}e^{-y^2}$. We also used part 2 of
Proposition~\ref{prop_linear}, Proposition~\ref{prop_mono_newt}, Definition~\ref{defi_newt}, and the majorizations $e^{-a}\le1$ for $a\ge0$ and $e^{-a^2}\le e^{-a}$ for $a\ge1$. Thus $A-A(b)\to0$ as $b\to+\infty$.
By Theorem~\ref{thm_fubi},
$$
A(b)=B(b):=(N)\int_0^b\left((N)\int_0^b f(x,\,z)\,dx\right)\,dz
$$
for any $b\in\mathbb{R}$ with $b>0$. We complete the proof by showing that
$$
B:=(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}f(x,\,z)\,dx\right)\,dz
$$
exists and that $B-B(b)\to0$ as $b\to+\infty$.
For any $z,b>0$ we define $I(z,b)=(N)\int_0^b f(x,z)\,dx$, this integral exists by Corollary~\ref{prop_ex_newt}. Since $f(x,z)$ is for $x\ge1$ majorized by $xe^{-x}$, the inner integral
$$
I(z):=(N)\int_0^{+\infty}f(x,\,z)\,dx=\lim_{b\to+\infty}I(z,\,b)
$$
exists for any $z\ge0$ by Propositions~\ref{prop_linear} (part 2), \ref{prop_mono_newt}, and \ref{Hake_thm}. We prove that $I(z)$ is continuous for
$z\ge0$. By the uniform continuity of $f(x,z)$ on compact sets, for any given $z_0\ge0$, $b\ge1$, and $\varepsilon>0$ there is a $\delta>0$ such that if $z\ge0$ satisfies $|z-z_0|<\delta$ then
$|f(x,z)-f(x,z_0)|<\varepsilon$ for any $x\in[0,b]$. Then
$$
|I(z)-I(z_0)|<(N)\int_0^b\varepsilon\,dx+(N)\int_b^{+\infty}xe^{-x}=b\varepsilon+be^{-b}+e^{-b}\;,
$$
which shows that $I(z)$ is continuous at $z_0$. Thus $(N)\int_0^bI(z)$ exists for any $b>0$ by Corollary~\ref{prop_ex_newt}.
Let $c_1=\max_{x\ge0}xe^{-x^2}>0$. For any $z\ge1$ we have
\begin{eqnarray*}
0\le I(z)&<&(N)\int_0^{1/z^{2/3}}x\cdot1\,dx+(N)\int_{1/z^{2/3}}^{+\infty}xe^{-x^2}e^{-xz}\,dx\\
&<&z^{-4/3}+c_1e^{-z^{1/3}}/z<c_2z^{-4/3}\;,
\end{eqnarray*}
for an absolute constant $c_2$. This majorization implies that the integral
$$
B=(N)\int_0^{+\infty}I(z)
$$
exists.
It remains to estimate its distance from $B(b)$. For any $b\ge1$ we have, using again part 2 of Proposition~\ref{prop_linear}, that
\begin{eqnarray*}
&&0\le B-B(b)=(N)\int_0^b\left((N)\int_b^{+\infty}f(x,z)\,dx\right)\,dz+\\
&&+\,(N)\int_b^{+\infty}\left((N)\int_0^{+\infty}f(x,z)\,dx\right)\,dz\\
&&<(N)\int_0^b\left((N)\int_b^{+\infty}xe^{-x}\right)\,dz+(N)\int_b^{+\infty}I(z)\\
&&<(N)\int_0^b(be^{-b}+e^{-b})\,dz+(N)\int_b^{+\infty}c_2z^{-4/3}\\
&&=(b^2+b)e^{-b}+3c_2b^{-1/3}\;,
\end{eqnarray*}
and again $B-B(b)\to0$ as $b\to+\infty$. Thus $A=B$, the two iterated Newton integrals are equal.
\hfill{$\Box$}\bigskip
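\noindent
The convergence $A(b),B(b)\to\pi/4$ (the value that will emerge in Proposition~\ref{gauss_int}) can be watched numerically. The sketch below, with an ad hoc Simpson rule, is only an illustration; note the slow, polynomial approach, in line with the bound $cb^{-1/2}+\dots$ obtained in the proof.
\begin{verbatim}
# Sketch: both iterated integrals of f(x,z) = x*exp(-x^2*(1+z^2)) over
# [0,b]^2 agree and creep up to pi/4 = 0.785398... as b grows.
import math

def simpson(g, a, b, n=300):            # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1))
    s += 2 * sum(g(a + 2*k*h)       for k in range(1, n//2))
    return s * h / 3

f = lambda x, z: x * math.exp(-x*x*(1 + z*z))
for b in (2.0, 4.0, 8.0):
    A_b = simpson(lambda x: simpson(lambda z: f(x, z), 0, b), 0, b)
    B_b = simpson(lambda z: simpson(lambda x: f(x, z), 0, b), 0, b)
    print(b, A_b, B_b)   # approx 0.55, 0.66, 0.72; error roughly 1/(2b)
\end{verbatim}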
\noindent
We hoped to prove Proposition~\ref{gauss_int} as an instance of a general Fubini theorem for iterated Newton integrals of $f(x,y)$ over $[0,+\infty)^2$, when $f(x,y)$
satisfies a symmetric decay condition for $x,y\to+\infty$. But we could only prove (again as a corollary of Theorem~\ref{thm_fubi}) the following theorem, which unfortunately does not apply to
$f(x,y)=xe^{-x^2}e^{-x^2y^2}$: along the hyperbola $x=1/y$ one has $f(1/y,\,y)=e^{-1-1/y^2}/y$, which for large $y$ is much bigger than $\max(x,y)^{-3}=y^{-3}$. We omit the proof.
\begin{thm}[a (N) Fubini theorem]\hspace{-1.6mm}{\bf. }\label{fubi_genthm}
If $c>0$ is a constant and $f=f(x,y)\colon[0,+\infty)^2\to\mathbb{R}$ is a continuous function such that
$$
|f(x,y)|\le c\max(x,y)^{-3}\mbox{ for }\max(x,y)\ge1\;,
$$
then the next two iterated Newton integrals exist and are equal,
$$
(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}f(x,\,y)\,dy\right)\,dx=(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}f(x,\,y)\,dx\right)\,dy\;.
$$
\end{thm}
\section{The Stirling formula}
With the help of the properties of the $(N)\int$ in Section 2 we prove in two ways the next basic asymptotic formula.
\begin{thm}[Stirling formula]\hspace{-1.6mm}{\bf. }\label{thm_stir}
For $n\in\mathbb{N}$ one has
\begin{eqnarray*}
n!&=&\prod_{i=1}^n i=\#\{(a_1,\,\dots,\,a_n)\in\{1,\,2,\,\dots,\,n\}^n\;|\;|\{a_1,\,\dots,\,a_n\}|=n\}\\
&=&(1+o(1))\sqrt{2\pi n}\left(\frac{n}{e}\right)^n
\end{eqnarray*}
where $\pi=3.14159\dots$ and $e=2.71828\dots$ are well known constants.
\end{thm}
Here the asymptotic notation $f=o(g)$ for $f,g\colon M\to\mathbb{R}$, $M\subset\mathbb{R}$ and $\sup(M)=+\infty$, means that
$$
\lim_{x\to+\infty}\frac{f(x)}{g(x)}=0\;.
$$
We start the first proof by borrowing an estimate, but not its proof, from G.~Tenenbaum \cite[Theorem I.0.4]{tene}.
There it is proven via integration by parts in a $(RS)\int$. We actually learned this proof of Theorem~\ref{thm_stir} from
G.~Tenenbaum \cite[Exercise 3 on p. 8]{tene}. We could do without the next proposition, see the remark on telescoping after Corollary~\ref{suma},
but we keep it as a basic result on the interplay of sums and integrals. $\mathbb{Z}$ denotes the ring of integers.
\begin{prop}[basic estimate]\hspace{-1.6mm}{\bf. }\label{basi_esti}
Let $a,b\in\mathbb{Z}$, $a<b$, and $f\colon[a,b]\to\mathbb{R}$ be a continuous monotonic function. Then there exists a number $\theta\in[0,1]$ such that
$$
\sum_{a<n\le b}f(n)=(N)\int_a^bf+\theta(f(b)-f(a))\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
Suppose that $f$ is nondecreasing; the proof for nonincreasing $f$ is similar (by reversing the next two inequalities). The equality we need to prove is equivalent to the estimate
$$
0\le\sum_{a<n\le b}f(n)-(N)\int_a^bf\le f(b)-f(a)\;.
$$
Note that by part 2 of Proposition~\ref{prop_linear}, the estimate is additive: if $c$ is an integer with $a<c<b$ and we have the estimate for both pairs
$a,c$ and $c,b$ (in place of $a,b$), then by summing we get it for $a,b$. Therefore it suffices to prove it only for $b=a+1$ (one can partition $[a,b]$ into unit
intervals $[x,x+1]$, $x\in\mathbb{Z}$ with $a\le x<b$). For $b=a+1$ the estimate becomes
$$
0\le f(a+1)-(N)\int_a^{a+1}f\le f(a+1)-f(a)\;.
$$
By Corollary~\ref{ML_bound},
$$
f(a)\cdot1\le(N)\int_a^{a+1}f\le f(a+1)\cdot1\;,
$$
and the instance $a,a+1$ of the estimate follows.
\hfill{$\Box$}\bigskip
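\noindent
A quick numerical illustration of the proposition (with the arbitrary choice $f=\log$, whose primitive is explicit, so no quadrature is needed):
\begin{verbatim}
# theta = (sum_{a<n<=b} f(n) - int_a^b f) / (f(b) - f(a)) for f = log.
import math

a, b = 1, 50
F = lambda x: x * math.log(x) - x       # a primitive of log on (0, oo)
S = sum(math.log(n) for n in range(a + 1, b + 1))
theta = (S - (F(b) - F(a))) / (math.log(b) - math.log(a))
print(theta)                            # approx 0.48, inside [0, 1]
\end{verbatim}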
\noindent
The more general result \cite[Theorem I.0.4]{tene} drops the continuity of $f$ and uses the $(R)\int_a^b f$. Then one can prove it easily by lower and upper Riemann--Darboux sums, which seems to be the simplest of the three arguments (if one has already built the theory of the $(R)\int$).
We learned the additive estimate trick used
in the previous proof from E.\,C. Titchmarsh \cite[p. 13/14]{titc_rf}. By it he gives a simple few-line proof of the more precise formula ($a,b,c\in\mathbb{R}$ with $a<b$, $f=f(t)\colon[a,b]\to\mathbb{R}$ is continuously differentiable,
and $\{x\}=x-\lfloor x\rfloor\in[0,1)$ is the fractional part of $x\in\mathbb{R}$)
\begin{eqnarray*}
\sum_{a<n\le b}f(n)&=&(R)\int_a^b f+(R)\int_a^b (\{t\}+c)f'(t)\,dt\\
&&+\,(\{a\}+c)f(a)-(\{b\}+c)f(b)\;.
\end{eqnarray*}
Another proof in a monograph on analytic number theory takes $1\frac{1}{2}$ pages. The Euler--Maclaurin summation formula (EMSF), see for example
\cite[Chapter I.0.2]{tene}, is much more precise. An alternative to EMSF, using only integrals and with derivatives only in the error term, was recently proposed by I. Pinelis \cite{pine}.
We use the standard asymptotic notation $O$ and $\ll$: if $f,g\colon M\to\mathbb{R}$, $M\subset\mathbb{R}$, then $f=O(g)$ (on $M$) and $f\ll g$ (on $M$) both mean that there is a constant $c>0$ such that
for every $x\in M$ one has $|f(x)|\le c|g(x)|$.
\begin{cor}[reciprocal squares]\hspace{-1.6mm}{\bf. }\label{suma}
For all $n\in\mathbb{N}$ one has
$$
\sum_{m=1}^n O(m^{-2})=c+O(n^{-1})\;,
$$
for a constant $c\in\mathbb{R}$.
\end{cor}
\par\medskip\noindent{\em Proof. }
The claim is that if $f\colon\mathbb{N}\to\mathbb{R}$ satisfies $f(m)=O(m^{-2})$ then the sum $\sum_{m=1}^n f(m)$ has the stated asymptotics. We have
$$
\sum_{m=1}^n f(m)=\lim_{N\to\infty}\sum_{m=1}^N f(m)-\lim_{N\to\infty}\sum_{m=n+1}^N f(m)=:\lim_{N\to\infty}S(N)-\lim_{N\to\infty}S(n,N)
$$
provided, of course, that both limits exist and are finite. But for $M>N$ we have by the previous proposition that
$$
|S(M)-S(N)|\ll\sum_{m=N+1}^M m^{-2}\le(N)\int_N^M x^{-2}=N^{-1}-M^{-1}<N^{-1}\;.
$$
Thus $S(N)$, $N=1,2,\dots$, is a Cauchy sequence and has a finite limit $c$. A similar argument shows, for each $n$, the existence and finiteness of the second limit. By the previous proposition we again have ($N>n$)
$$
|S(n,N)|\ll\sum_{m=n+1}^N m^{-2}\le(N)\int_n^Nx^{-2}=n^{-1}-N^{-1}<n^{-1}\;.
$$
Therefore the second limit is $O(1/n)$.
\hfill{$\Box$}\bigskip
\noindent
Alternatively, we can bound finite sums of reciprocal squares without any integral by using telescoping sums together with the majorization $m^{-2}<(m-1)^{-1}-m^{-1}$, valid for $m\ge2$.
The previous proof is in a way remarkable. Usually one obtains infinite sums (products, integrals, $\dots$) as
limit cases of finite approximations, one of the best known examples being ($|q|<1$)
$$
\sum_{n=0}^{\infty}q^n=\lim_{n\to\infty}(1+q+q^2+\dots+q^n)=\lim_{n\to\infty}\frac{1-q^{n+1}}{1-q}=\frac{1}{1-q}\;.
$$
In contrast, the previous proof reverses this process and expresses a finite sum by two infinite ones. There seems to be no other way to deduce this
asymptotics, which apparently involves no infinite expression (the infinity, however, hides in ``all $n\in\mathbb{N}$''), than via the limits at infinity.
In the following proposition we use the Taylor expansion $\log(1+x)=x-\frac{x^2}{2}+O(x^3)$ ($x\in[-\frac{1}{2},2]$) which is yet another application of Lagrange's
mean value theorem.
\begin{prop}[first expression of $n!$ by an $\int$]\hspace{-1.6mm}{\bf. }\label{appr_sum_logs}
There is a real constant $c$ such that for all $n\in\mathbb{N}$ we have
$$
\log(n!)=c+O(1/n)+(N)\int_{\frac{1}{2}}^{n+\frac{1}{2}}\log x\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
We prove that for all $m\in\mathbb{N}$,
$$
(N)\int_{m-\frac{1}{2}}^{m+\frac{1}{2}}\log x=\log m+O(m^{-2})\;.
$$
Indeed, by Definition~\ref{defi_newt} and by the expansion of $\log(1+x)$ the integral equals
\begin{eqnarray*}
&& (x\log x-x)(m+1/2)-(x\log x-x)(m-1/2)\\
&&=m\log\left(1+\frac{1}{m-1/2}\right)+\log m+\frac{\log(1-1/4m^2)}{2}-1\\
&&=\log m+\frac{2m(m-1/2)-m-2(m-1/2)^2}{2(m-1/2)^2}+O(m^{-2})+O(m^{-2})\\
&&=\log m+\frac{-1/2}{2(m-1/2)^2}+O(m^{-2})=\log m+O(m^{-2})\;.
\end{eqnarray*}
Using equation $\log(n!)=\sum_{m=1}^n\log m$, part 2 of Proposition~\ref{prop_linear} and Corollary~\ref{suma} we get the first expression.
\hfill{$\Box$}\bigskip
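\noindent
The statement is easy to test numerically, as in the following sketch (once the full Stirling formula is available one can show that $c=\log\sqrt{2\pi}+F(1/2)\approx0.0724$ with $F(x)=x\log x-x$, but we do not need this value):
\begin{verbatim}
# Sketch: log(n!) - (N)int_{1/2}^{n+1/2} log x should equal c + O(1/n).
import math

F = lambda x: x * math.log(x) - x       # primitive of log
for n in (10, 100, 1000, 10000):
    print(n, math.lgamma(n + 1) - (F(n + 0.5) - F(0.5)))
# The printed values stabilize quickly near c ~ 0.0724.
\end{verbatim}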
Now the Stirling formula with an undetermined constant follows easily. We use another Taylor expansion $e^x=1+O(x)$ ($x\in[-c,c]$ for any $c>0$) which implies that
$e^{O(1/n)}=1+O(1/n)$ (for $n\in\mathbb{N}$).
\begin{prop}[incomplete Stirling formula]\hspace{-1.6mm}{\bf. }\label{stir_unde}
There is a real constant $d>0$ such that for all $n\in\mathbb{N}$ we have
$$
n!=(d+O(1/n))\sqrt{n}\left(\frac{n}{e}\right)^n\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
We compute the integral in the previous proposition in the same way as in its proof and get that
\begin{eqnarray*}
\log(n!)&=&c+O(1/n)+(x\log x-x)(n+1/2)-(x\log x-x)(1/2)\\
&=&n\log(n+1/2)-n+\frac{\log(n+1/2)}{2}+c_0+O(1/n)\\
&=&n\log n-n+\frac{\log n}{2}+n\log(1+1/2n)+\frac{\log(1+1/2n)}{2}+\\
&&+\;c_0+O(1/n)\\
&=&n\log n-n+\frac{\log n}{2}+c_1+O(1/n)\;.
\end{eqnarray*}
We used the above expansion of $\log(1+x)$, collected in the $c_i$ several constant contributions to $c$, and merged several $O(1/n)$ terms into one.
Applying the exponential function we get the expression for $n!$, with $d=e^{c_1}$.
\hfill{$\Box$}\bigskip
\noindent
We remark that if one is content in Proposition~\ref{stir_unde} with $o(1)$ in place of $O(1/n)$, then the argument so far can be shortened and made integral-free by
simply proving that the sequence $(n!/\sqrt{n}e^{-n}n^n)$ is monotonic and bounded (see for example \cite{impe}).
It remains to prove that $d=\sqrt{2\pi}$. We do it by another and quite unexpected, at least to the author, application of (Newton) integrals.
\begin{prop}[resolving a recurrence by $\int$]\hspace{-1.6mm}{\bf. }\label{wn}
Suppose that the sequence $(W_n)$ of positive real numbers is given by the recurrence
$$
W_0=\frac{\pi}{2},\ W_1=1,\mbox{ and for $n\ge2$,}\;W_n=\frac{n-1}{n}W_{n-2}\;.
$$
Then
$$
\lim_{n\to\infty}\frac{W_n}{W_{n-1}}=1\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
The trick is to prove that
$$
W_n=(N)\int_0^{\pi/2}(\cos x)^n,\ n\in\mathbb{N}_0\;.
$$
By Definition~\ref{defi_newt}, $(N)\int_0^{\pi/2}(\cos x)^0=x(\pi/2)-x(0)=\pi/2$ and $(N)\int_0^{\pi/2}\cos x=\sin(\pi/2)-\sin(0)=1$. For $n\ge2$
one has by Theorem~\ref{prop_pp_newt} (Corollary~\ref{prop_ex_newt} shows that in the integration by parts identity below both the first and the third term are defined and finite) and by part 1 of Proposition~\ref{prop_linear}
that, denoting $h(x)=(\sin x)(\cos x)^{n-1}$ and using that $\sin^2x=1-\cos^2x$,
\begin{eqnarray*}
&&(N)\int_0^{\pi/2}(\cos x)^n=(N)\int_0^{\pi/2}(\sin x)'(\cos x)^{n-1}\\
&&=h(\pi/2)-h(0)-(N)\int_0^{\pi/2}(\sin x)((\cos x)^{n-1})'\\
&&=0-0+(n-1)\cdot(N)\int_0^{\pi/2}(\sin x)^2(\cos x)^{n-2}\\
&&=(n-1)\left((N)\int_0^{\pi/2}(\cos x)^{n-2}-(N)\int_0^{\pi/2}(\cos x)^n\right)\;.
\end{eqnarray*}
Thus the sequences $\left((N)\int_0^{\pi/2}(\cos x)^n\right)$ and $(W_n)$, $n=0,1,2,\dots$, follow the same recurrence and coincide. Crucially\,---\,this is hard to get
from the original definition of $W_n$ but it follows easily from the integral representation\,---\,the sequence $(W_n)$ is nonincreasing; in fact it decreases. Indeed, since
$0\le \cos^n\le\cos^{n-1}$ on $[0,\pi/2]$, Proposition~\ref{prop_mono_newt} shows that $W_n\le W_{n-1}$. Thus for $n\ge2$ we have, by the monotonicity of $W_n$ and the recurrence,
$$
1=\frac{W_{n-1}}{W_{n-1}}\ge\frac{W_n}{W_{n-1}}\ge\frac{W_{n+1}}{W_{n-1}}=\frac{n}{n+1}\;\mbox{ and }\;\frac{W_n}{W_{n-1}}\to1,\ n\to\infty\;.
$$
\hfill{$\Box$}\bigskip
\noindent
Before we complete the determination of $d$ we contemplate for a while the function $\cos x$ used in the previous proof. If to define $\cos x$
one needed, say, the $(R)\int$, our undertaking would be less convincing. (We were in a similar situation at the beginning when we needed primitives of continuous functions.)
This function is defined by a limit process, but without the Riemann integral,
$$
\cos x=\sum_{n=0}^{\infty}\frac{(-1)^nx^{2n}}{(2n)!},\ \ x\in\mathbb{R}\;.
$$
From this formula one derives without using the $(R)\int$ all properties of $\cos x$ needed for the proof, such as the related function $\sin x$, the identity $\sin^2+\cos^2=1$, the relations $\sin'x=\cos x$ and $\cos'x=-\sin x$, and the fact that $\pi/2$ is the smallest positive zero of $\cos x$, which actually serves
as the definition of $\pi$ for our article. If the adopted definition of $\pi$ were that
$$
\pi=\lim_{n\to\infty}\frac{n!^2}{2n\left(\frac{n}{e}\right)^{2n}}\;,
$$
we would be done after Proposition~\ref{stir_unde}.
The recurrence for $W_n$ has for $n\in\mathbb{N}$ another explicit solution:
\begin{eqnarray*}
W_{2n}&=&\frac{(2n-1)(2n-3)\cdots1}{2n(2n-2)\cdots2}\cdot\frac{\pi}{2}=\frac{(2n)!}{(2^n n!)^2}\cdot\frac{\pi}{2}\ \mbox{ and}\\
W_{2n+1}&=&\frac{2n(2n-2)\cdots2}{(2n+1)(2n-1)\cdots3}\cdot1=\frac{(2^n n!)^2}{(2n+1)!}\;.
\end{eqnarray*}
Employing the asymptotic notation $f\sim g$ which for $f,g\colon M\to\mathbb{R}$, $M\subset\mathbb{R}$ and $\sup(M)=+\infty$, means that
$$
\lim_{x\to+\infty}\frac{f(x)}{g(x)}=1\;,
$$
we get by Propositions~\ref{wn} and \ref{stir_unde} that
$$
1\sim\frac{W_{2n+1}}{W_{2n}}\sim\frac{(2^n n!)^4}{2n(2n)!^2}\cdot\frac{2}{\pi}\sim\frac{2^{4n}\cdot d^4\cdot n^2\cdot(n/e)^{4n}}{2n\cdot d^2\cdot 2n\cdot(2n/e)^{4n}}\cdot\frac{2}{\pi}=\frac{d^2}{2\pi}\;.
$$
Thus $d=\sqrt{2\pi}$ and the first proof of Theorem~\ref{thm_stir} is complete.\hfill{$\Box$}\bigskip
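\noindent
A direct numerical check of the completed formula (an illustration only):
\begin{verbatim}
# Sketch: n!/(sqrt(n)*(n/e)^n) should tend to sqrt(2*pi) = 2.50663...
import math

for n in (10, 100, 1000, 10000):
    log_ratio = math.lgamma(n + 1) - (n + 0.5)*math.log(n) + n
    print(n, math.exp(log_ratio))       # 2.5275, 2.5087, 2.5068, ...
print(math.sqrt(2 * math.pi))
\end{verbatim}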
We turn to the second proof of Theorem~\ref{thm_stir}, by the so-called Laplace method, and we follow N.\,G. de Bruijn \cite[Chapter 4]{debr}. We start with a classical formula, due essentially but not entirely to L. Euler. By V.\,S. Varadarajan \cite[p. 100]{vara},
L. Euler would write the gamma function integral below as
$$
\int_0^1(-\log x)^n\;dx\;,
$$
and it was A.-M. Legendre who wrote it in the familiar form in the infinite range. Our next calculation is less anachronistic than some of the others
because in the times of L. Euler and A.-M. Legendre there were only Newton integrals. The second proof uses integrals over infinite intervals and for their existence we cannot rely on Corollary~\ref{prop_ex_newt}.
\begin{prop}[second expression of $n!$ by an $\int$]\hspace{-1.6mm}{\bf. }\label{gamma}
For all $n$ in $\mathbb{N}_0$ we have
$$
n!=(N)\int_0^{+\infty}x^n e^{-x}=(N)\int_0^{+\infty}x^n e^{-x}\,dx\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
We denote the integral by $I_n$. We prove its existence and compute its value by induction on $n$. First, $I_0=(-e^{-x})(+\infty^-)-(-e^{-x})(0^+)=0-(-1)=1$. For $n>0$ we get by Theorem~\ref{prop_pp_newt}
(in the integration by parts identity below the second term is clearly defined and finite, and so is the third by the inductive assumption) and by part 1 of Proposition~\ref{prop_linear} that
\begin{eqnarray*}
I_n&=&(N)\int_0^{+\infty}x^n(-e^{-x})'=(-x^ne^{-x})(+\infty^-)-(-x^ne^{-x})(0^+)+\\
&&+\;(N)\int_0^{+\infty}(x^n)'e^{-x}=0-0+n\cdot(N)\int_0^{+\infty}x^{n-1}e^{-x}\\
&=&nI_{n-1}\;.
\end{eqnarray*}
By induction, $I_n$ exists for every $n\in\mathbb{N}_0$ and $I_n=n!$.
\hfill{$\Box$}\bigskip
Let $n\in\mathbb{N}$. The substitution $x\leftarrow y$, $x=n(1+y)$, by Proposition~\ref{prop_subst} gives
$$
(N)\int_0^{+\infty}x^n e^{-x}=e^{-n}n^{n+1}\cdot(N)\int_{-1}^{+\infty}(e^{-y}(1+y))^n\;.
$$
Let $f(y)=e^{-y}(1+y)$. Then $f'(y)=-e^{-y}y$ is $>0$ on $[-1,0)$ and $<0$ on $(0,+\infty)$, and we see that $f(y)$ increases from $0$ to $1$ on $[-1,0]$ and decreases from
$1$ to $0^+$ on $[0,+\infty)$. We identify intervals around $0$ in which the bulk of the last integral is concentrated, and replace the integrand with a neater function.
\begin{prop}[concentration of the $\int$]\hspace{-1.6mm}{\bf. }\label{prop_bulk_conce}
If $\delta=\delta(n)\colon\mathbb{N}\to(0,1)$ is a~sequence such that $n\delta^3\to0$
as $n\to\infty$, then for all $n\in\mathbb{N}$ one has
$$
(N)\int_{-1}^{+\infty}(e^{-y}(1+y))^n=(1+O(n\delta^3))\cdot(N)\int_{-\delta}^{\delta}e^{-ny^2/2}+O(e^{-n\delta^2/2})\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
Using again the expansion of $\log(1+x)$ we have
\begin{eqnarray*}
f(y)=e^{-y}(1+y)&=&\exp(-y+\log(1+y))=\exp(-y^2/2+O(y^3))\\
&=&e^{-y^2/2}(1+O(y^3))\ (y\in[-1/2,\,2],\mbox{ say})\;.
\end{eqnarray*}
If $\delta=\delta(n)$ is as stated then
$$
(1+O(\delta^3))^n=\exp(n\log(1+O(\delta^3)))=\exp(O(n\delta^3))=1+O(n\delta^3)
$$
and $f(-\delta)^n, f(\delta)^n=O(e^{-n\delta^2/2})$. Using part 2 of Proposition~\ref{prop_linear} we define the decomposition
\begin{eqnarray*}
(N)\int_{-1}^{+\infty}f(y)^n&=&(N)\int_{-1}^{-\delta}+\;(N)\int_{-\delta}^{\delta}+\;(N)\int_{\delta}^4+\;(N)\int_4^{+\infty}\\
&=:&I_1+I_2+I_3+I_4\;.
\end{eqnarray*}
Since $0\le f(y)^n\le f(-\delta)^n$ on $[-1,-\delta]$ and $0< f(y)^n\le f(\delta)^n$ on $[\delta,+\infty)$, the above estimates and Corollary~\ref{ML_bound} show that both
$I_1,I_3=O(e^{-n\delta^2/2})$. Since $1+y\le 1+y/2+y^2/8\le e^{y/2}$ for $y\ge4$, $f(y)\le e^{-y/2}$ for $y\ge4$ and by Proposition~\ref{prop_mono_newt},
$$
I_4\le(N)\int_4^{+\infty}e^{-ny/2}=\frac{2e^{-2n}}{n}\;.
$$
So $I_4=O(e^{-n\delta^2/2})$ too. The remaining integral satisfies
\begin{eqnarray*}
I_2&=&(N)\int_{-\delta}^{+\delta}f(y)^n=(N)\int_{-\delta}^{+\delta}e^{-ny^2/2}(1+O(ny^3))\\
&=&(1+O(n\delta^3))\cdot(N)\int_{-\delta}^{\delta}e^{-ny^2/2}
\end{eqnarray*}
(the last equality follows by Proposition~\ref{prop_mono_newt}) and we are done.
\hfill{$\Box$}\bigskip
\begin{prop}[reduction to the Gauss $\int$]\hspace{-1.6mm}{\bf. }\label{redu_to_gauss}
If $\delta=\delta(n)$ is as in the previous proposition and $n\in\mathbb{N}$ then
$$
(N)\int_{-\delta}^{\delta}e^{-ny^2/2}=\sqrt{\frac{2}{n}}\cdot(N)\int_{-\infty}^{+\infty}e^{-t^2}+O(e^{-n\delta^2/2})\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
We define, using part 2 of Proposition~\ref{prop_linear}, the evenness of the integrand, and the version of Proposition~\ref{prop_subst} with the
flipping substitution $g(y)=-y$, the decomposition
\begin{eqnarray*}
(N)\int_{-\delta}^{\delta}e^{-ny^2/2}&=&(N)\int_{-\infty}^{+\infty}e^{-ny^2/2}-2\cdot(N)\int_{\delta}^{+\infty}e^{-ny^2/2}\\
&=:&I_5-2I_6\;,
\end{eqnarray*}
provided that $I_5$ exists. We are in a situation similar to the one at the beginning of the proof of Corollary~\ref{suma}. But $I_5$ exists by the majorization $e^{-a^2}\le e^{-a}$ for $a\ge1$ (as we already know from the beginning of the proof of Theorem~\ref{fubi_infi_inter}). We estimate $I_6$ in the same way as we estimated $I_3$ and $I_4$ in the previous proof and get
the same bound $I_6=O(e^{-n\delta^2/2})$. Proposition~\ref{prop_subst} with the substitution
$y\leftarrow t$, $y=t\sqrt{2/n}$, yields
$$
I_5=\sqrt{\frac{2}{n}}\cdot(N)\int_{-\infty}^{+\infty}e^{-t^2}\;.
$$
\hfill{$\Box$}\bigskip
It remains to compute the Gauss integral $(N)\int_{-\infty}^{+\infty}e^{-t^2}$ and to select the sequence $\delta=\delta(n)$.
\begin{prop}[the Gauss $\int$]\hspace{-1.6mm}{\bf. }\label{gauss_int}
We have the identity
$$
(N)\int_{-\infty}^{+\infty}e^{-t^2}=\sqrt{\pi}\;.
$$
\end{prop}
\par\medskip\noindent{\em Proof. }
By part 2 of Proposition~\ref{prop_linear} and the version of Proposition~\ref{prop_subst} with the flipping substitution $g(t)=-t$, we need to prove that
$$
I_7:=(N)\int_0^{+\infty}e^{-t^2}=\frac{\sqrt{\pi}}{2}
$$
(in the beginning of the proof of Theorem~\ref{fubi_infi_inter} we proved that $I_7$ exists). This is equivalent to $I_7^2=\pi/4$. Indeed, we compute (we justify each of the eight
steps after the computation)
\begin{eqnarray*}
I_7^2&=&(N)\int_0^{+\infty}e^{-t^2}\cdot(N)\int_0^{+\infty}e^{-u^2}=(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}e^{-t^2}\right)e^{-u^2}\\
&=&(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}e^{-t^2-u^2}\;dt\right)\;du\\
&=&(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}ue^{-u^2(1+v^2)}\;dv\right)\;du\\
&=&(N)\int_0^{+\infty}\left((N)\int_0^{+\infty}ue^{-u^2(1+v^2)}\;du\right)\;dv=(N)\int_0^{+\infty}\frac{1}{2(1+v^2)}\\
&=&\frac{\arctan(+\infty^-)-\arctan(0^+)}{2}=\frac{\pi}{4}\;.
\end{eqnarray*}
The first four steps repeat the computation from the beginning of the proof of Theorem~\ref{fubi_infi_inter}, and the crucial fifth step is this theorem.
In the sixth step we compute the inner integral according to Definition~\ref{defi_newt} by the primitive (for fixed $v\in\mathbb{R}$)
$$
\frac{d}{du}\left(\frac{-e^{-u^2(1+v^2)}}{2(1+v^2)}\right)=ue^{-u^2(1+v^2)}\;.
$$
In the last two steps we compute the integral according to Definition~\ref{defi_newt} by the primitive $(\arctan v)'=\frac{1}{1+v^2}$.
\hfill{$\Box$}\bigskip
\noindent
The number $\pi/2$ came about again as the smallest positive root of $\cos x$. Computation of the Gauss integral just by
the Newton integration may be of some interest, for in B. Conrad \cite[p. 1]{conr} we read:
``$\Phi(u)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^ue^{-u^2/2}\mathrm{d}u$ (\dots) so the evaluation of $\Phi(\infty)$ must proceed by a
method different from the calculation of anti-derivatives as in calculus.'' The point of our article is that anti-derivatives
(primitives) fully suffice for such evaluation. But the difficulties with Theorem~\ref{fubi_infi_inter} show that it is not as straightforward as one might think.
To finish, we set $\delta=\delta(n)=n^{-1/2+\varepsilon/3}$ where $\varepsilon\in(0,1/2)$. Combining Propositions~\ref{gamma}--\ref{gauss_int}
we obtain the asymptotics
\begin{eqnarray*}
n!&=&e^{-n}n^{n+1}\left((1+O(n\delta^3))\cdot(\sqrt{2\pi/n}+O(e^{-n\delta^2/2}))+O(e^{-n\delta^2/2})\right)\\
&=&\sqrt{2\pi n}\left(\frac{n}{e}\right)^n(1+O(n^{-1/2+\varepsilon}))
\end{eqnarray*}
because $O(e^{-n\delta^2/2})=O(e^{-n^{2\varepsilon/3}/2})$ goes to $0$ for $n\to\infty$ faster than $n^{-c}$ for any $c>0$. This completes the second proof of Theorem~\ref{thm_stir}. \hfill{$\Box$}\bigskip
\section{Primordial vorticities?}
\label{sec1}
The observed Universe might originate from a strongly coupled electromagnetic plasma,
existing prior to photon decoupling, in which the angular momentum transfer between
ions, electrons and photons in an expanding space-time geometry leads
to the formation of large-scale vortices, as speculated by
various authors including, with slightly different perspectives,
Hoyle \cite{hoyle}, Harrison \cite{harrison1,harrison2,harrison3}, Mishustin and Ruzmaikin \cite{mishustin}, Ozernoy and Chernin \cite{ozernoy1,ozernoy2,ozernoy3}
and others (see also \cite{peebles}). The primordial vorticity, if present in the
pre-decoupling plasma, might lead eventually to the
formation of large-scale magnetic fields possibly relevant for galactic magnetogenesis.
The physical description of the angular momentum
exchange between ions, electrons and photons can be realized by
appropriately translating the evolution equations describing ionized
gases \cite{spitzer,krall} to an expanding geometry supplemented
by its own relativistic fluctuations \cite{mg1,mg1aa}. In the context of the $\Lambda$CDM
paradigm\footnote{The acronym $\Lambda$CDM (where
$\Lambda$ denotes the dark energy component and CDM stands for cold dark matter) and the
terminology concordance paradigm will be used interchangeably.}, it
is both reasonable and justified to assume that the background
geometry is conformally flat and that its inhomogeneities stem from the relativistic
fluctuations of the spatial curvature described either in gauge-invariant terms or in an appropriate
gauge. The latter assumption rests
exactly on the absence of large-scale vorticity which is assumed to be vanishing at least
within the current observational precision.
A rough argument suggests that the vorticity must be negligible for $\Lambda$CDM initial conditions, since it is
the curl of a velocity: thanks to the momentum constraint (connecting the first derivatives of the linearized fluctuations of the geometry to the peculiar velocities),
the total velocity field is subleading in comparison with the density contrasts or with the curvature perturbation for typical scales larger than the Hubble radius, in the case of the conventional adiabatic initial conditions postulated in the vanilla $\Lambda$CDM scenario. The latter argument suggests that the treatment
of large-scale vorticity assumes, more or less tacitly, a correct treatment of the spatial gradients.
To transform this incomplete observation into a more rigorous approach
it is necessary to introduce a description of the vorticity which does not rely on the
purported smallness or largeness of the gravitational fluctuations. It is
rather desirable to describe the angular momentum exchange
between ions, electrons and photons in a gravitating plasma which is also fully inhomogeneous. By a
fully inhomogeneous plasma we mean the situation where
not only the concentrations of charged and neutral species
depend, in an arbitrary manner, upon the spatial coordinates,
but also the geometry and the electromagnetic
fields are inhomogeneous. It has been recently
argued \cite{mg2} that such a description can be rather effective
for the analysis of a wide range of phenomena including the
physics of pre-decoupling plasmas. In the present
paper the results of Ref. \cite{mg2} shall be first extended and then applied to a
concrete situation with the
purpose of obtaining an explicit set of equations describing the evolution of the vorticities of the various species of the plasma.
The proposal of \cite{mg2} is built on the fully inhomogeneous description
of the geometry in terms of the Adler-Misner-Deser (ADM) variables \cite{ADM1,ADM2} which are customarily exploited for the implementation of the general relativistic gradient expansion \cite{gr1,gr2,gr3,gr4,gr5,gr6}.
The second key ingredient of Ref. \cite{mg2} is the fully inhomogeneous description of cold plasmas in flat space, which is the starting point of the analysis of nonlinear effects in kinetic theory and in magnetohydrodynamics
(see, e.g. \cite{gr7,gr8,gr9}).
Consequently, the vorticity exchange
between ions, electrons and photons can be analyzed in gravitating plasmas
with the help of an expansion scheme which involves not only the gradients of the geometry but also
the gradients of the electromagnetic sources, thus combining the general relativistic gradient expansion and the drift approximation (sometimes dubbed guiding center approximation) typical of cold plasmas.
The approach pursued in this paper reproduces, in the conformally flat limit,
the conventional treatment which will be made more precise in section \ref{sec2}.
The evolution of the vorticity in gravitating plasmas which are also fully inhomogeneous will be
discussed in section \ref{sec3}. In sections \ref{sec4} and \ref{sec5} the total vorticity
of the geometry will be computed within the gradient expansion and estimated in the
framework of the $\Lambda$CDM paradigm. The maximal magnetic field
induced by the total vorticity will be computed in section \ref{sec6}.
Section \ref{sec7} contains our concluding remarks. In appendix \ref{APPA}
some useful complements have been included to make the paper self-contained while
in appendix \ref{APPB} useful details on the calculations of correlation
functions of multiple fields in real space have been included for the technical
benefit of the interested readers.
\renewcommand{\theequation}{2.\arabic{equation}}
\setcounter{equation}{0}
\section{Vorticities in conventional perturbative expansions}
\label{sec2}
The treatment proposed here differs slightly from the one of
Refs. \cite{harrison1,harrison2,harrison3,mishustin} for three reasons:
(i) the conformal time coordinate is preferred to the cosmic time;
(ii) the relativistic fluctuations of the geometry are included in the longitudinal gauge;
(iii) the three-fluid, two-fluids and one fluid descriptions are
discussed more explicitly within the appropriate temperature ranges where
they are applicable.
The conformal
flatness of the geometry does not imply the invariance
of the system under the Weyl rescaling of the metric. Such a potential symmetry is broken
by the masses of the electrons and ions which are crucial in the large-scale
evolution of the vorticity. The considerations of the present section can also be formulated in the case of a geometry which is not spatially flat; this is not essential since the subsequent generalizations will automatically
include also geometries which are not necessarily spatially flat.
Consider first the case of a conformally flat background geometry characterized
by a metric tensor $\overline{g}_{\mu\nu}= a^2(\tau)\eta_{\mu\nu}$ and supplemented
by the corresponding relativistic fluctuations which we write in the longitudinal gauge
\begin{equation}
\delta_{\mathrm{s}} g_{00}(\vec{x},\tau) = 2\, a^2(\tau) \phi(\vec{x},\tau),\qquad \delta_{\mathrm{s}} g_{ij}(\vec{x},\tau) = 2\,a^2(\tau)
\psi(\vec{x},\tau) \delta_{ij};
\label{met1}
\end{equation}
note that $\delta_{\mathrm{s}}$ describes a metric perturbation which preserves the scalar nature of the
fluctuation since, in the $\Lambda$CDM paradigm, the dominant source of inhomogeneity
comes from the scalar modes of the geometry.
By defining the comoving electromagnetic fields $\vec{E}$ and $\vec{B}$ as well
as the comoving concentrations of electrons and ions (i.e. $n_{\mathrm{e}}$ and $n_{\mathrm{i}}$)
\begin{eqnarray}
&& \vec{E}(\vec{x},\tau) = a^2(\tau) \vec{{\mathcal E}}(\vec{x},\tau), \qquad \vec{B}(\vec{x},\tau) = a^2(\tau) \vec{{\mathcal B}}(\vec{x},\tau),
\nonumber\\
&& n_{\mathrm{i}}(\vec{x},\tau) = a^3(\tau) \tilde{n}_{\mathrm{i}}(\vec{x},\tau),\qquad
n_{\mathrm{e}}(\vec{x},\tau) = a^3(\tau) \tilde{n}_{\mathrm{e}}(\vec{x},\tau),
\label{S1}
\end{eqnarray}
Maxwell's equations read
\begin{eqnarray}
&& \vec{\nabla} \cdot \vec{E} = 4 \pi e (n_{\mathrm{i}} - n_{\mathrm{e}}),\qquad\vec{\nabla} \cdot \vec{B} =0,
\label{S2}\\
&& \vec{\nabla} \times \vec{E} = - \partial_{\tau} \vec{B}, \qquad \vec{\nabla}\times \vec{B} = 4\pi e (n_{\mathrm{i}}\, \vec{v}_{\mathrm{i}} -
n_{\mathrm{e}}\, \vec{v}_{\mathrm{e}} ) + \partial_{\tau}\vec{E}.
\label{S3}
\end{eqnarray}
In Eqs. (\ref{S1}), (\ref{S2}) and (\ref{S3}) all the fields are appropriately rescaled so that
the resulting equations are formally equivalent to the ones of the flat space-time. The peculiar velocities of the ions, electrons and photons obey the following set of
equations\footnote{As usual ${\mathcal H} = \partial_{\tau} \ln{a}$ and its relation
with the Hubble rate is simply $ {\mathcal H} = a H$.}
\begin{eqnarray}
&& \partial_{\tau}\vec{v}_{\mathrm{e}} + {\mathcal H}\,\vec{v}_{\mathrm{e}} = - \frac{e n_{\mathrm{e}}}{\rho_{\mathrm{e}} \, a^{4}} [ \vec{E} + \vec{v}_{\mathrm{e}} \times \vec{B}] - \vec{\nabla} \phi
+
\frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{e}}} a
\Gamma_{\gamma \, \mathrm{e}} (\vec{v}_{\gamma} - \vec{v}_{\mathrm{e}}) + a \Gamma_{\mathrm{e\,i}} ( \vec{v}_{\mathrm{i}} - \vec{v}_{\mathrm{e}}),
\label{SA}\\
&& \partial_{\tau} \vec{v}_{\mathrm{i}} + {\mathcal H}\,\vec{v}_{\mathrm{i}} = \frac{e n_{\mathrm{i}}}{\rho_{\mathrm{i}} \, a^{4}}[ \vec{E} + \vec{v}_{\mathrm{i}} \times \vec{B}] - \vec{\nabla} \phi
+
\frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{i}}} a
\Gamma_{\gamma \, \mathrm{i}} (\vec{v}_{\gamma}-\vec{v}_{\mathrm{i}} ) + a \Gamma_{\mathrm{e\,i}} \frac{\rho_{\mathrm{e}}}{\rho_{\mathrm{i}}}( \vec{v}_{\mathrm{e}} - \vec{v}_{\mathrm{i}}),
\label{SB}\\
&& \partial_{\tau} \vec{v}_{\gamma} = - \frac{1}{4} \vec{\nabla} \delta_{\gamma} - \vec{\nabla} \phi
+ a \Gamma_{\gamma\mathrm{i}} (\vec{v}_{\mathrm{i}} - \vec{v}_{\gamma}) +
a \Gamma_{\gamma\mathrm{e}} ( \vec{v}_{\mathrm{e}} - \vec{v}_{\gamma}).
\label{SC}
\end{eqnarray}
In Eqs. (\ref{SA})--(\ref{SC}) the relativistic fluctuations of the geometry are included from the very beginning in terms of the longitudinal gauge variables of Eq. (\ref{met1}); the electron-photon, electron-ion and ion-photon
rates of momentum exchange appearing in Eqs. (\ref{SA})--(\ref{SC}) are given by\footnote{Note that $T$ denotes the temperature and $\Lambda_{\mathrm{C}}$ is the Coulomb logarithm \cite{spitzer,krall}.}:
\begin{eqnarray}
&&\Gamma_{\gamma\mathrm{e}} = \tilde{n}_{\mathrm{e}}
\sigma_{\mathrm{e}\gamma},\qquad
\Gamma_{\gamma\mathrm{i}} = \tilde{n}_{\mathrm{i}}
\sigma_{\mathrm{i}\gamma},\qquad \sigma_{\mathrm{e}\gamma}
= \frac{8}{3}\pi \biggl(\frac{e^2}{m_{\mathrm{e}}}\biggr)^2, \qquad
\sigma_{\mathrm{i}\gamma}
= \frac{8}{3}\pi \biggl(\frac{e^2}{m_{\mathrm{i}}}\biggr)^2,
\label{S9}\\
&& \Gamma_{\mathrm{e\,i}} = \tilde{n}_{\mathrm{e}} \sqrt{\frac{T}{m_{\mathrm{e}}}} \, \sigma_{\mathrm{e\,i}} = \Gamma_{\mathrm{i\, e}},\qquad \sigma_{\mathrm{e\,i}} =
\frac{e^4}{T^2} \ln{\Lambda_{\mathrm{C}}},\qquad \Lambda_{\mathrm{C}} = \frac{3}{2 e^3} \sqrt{\frac{T^3}{\tilde{n}_{\mathrm{e}}\pi}}.
\label{S10}
\end{eqnarray}
Note that, in Eqs. (\ref{S9}) and (\ref{S10}), $T$ and $\tilde{n}$ are, respectively,
physical temperatures and physical concentrations. If the rates and the cross sections
were consistently expressed in terms of comoving temperatures $\overline{T} = a T$ and
comoving concentrations $n = a^3 \, \tilde{n}$, the corresponding rates would inherit a scale factor for each
mass. For instance $a\Gamma_{\mathrm{e\,i}}$ becomes $n_{\mathrm{e}} \, \sqrt{\overline{T}/(m_{\mathrm{e}} a)} \, (e^4/\overline{T}^2) \, \ln{\Lambda_{\mathrm{C}}}$, if comoving temperatures and concentrations are used.
Let us then define the vorticities associated with the peculiar velocities of the various species
\begin{equation}
\vec{\omega}_{\mathrm{e}}(\vec{x},\tau) = \vec{\nabla} \times \vec{v}_{\mathrm{e}},
\qquad \vec{\omega}_{\mathrm{i}}(\vec{x},\tau)= \vec{\nabla} \times \vec{v}_{\mathrm{i}}, \qquad
\vec{\omega}_{\gamma}(\vec{x},\tau) = \vec{\nabla} \times \vec{v}_{\gamma},
\label{S4}
\end{equation}
and their corresponding three-divergences:
\begin{equation}
\theta_{\mathrm{e}}(\vec{x},\tau) = \vec{\nabla} \cdot \vec{v}_{\mathrm{e}}, \qquad \theta_{\mathrm{i}}(\vec{x},\tau) = \vec{\nabla}\cdot \vec{v}_{\mathrm{i}}, \qquad
\theta_{\gamma}(\vec{x},\tau) = \vec{\nabla} \cdot \vec{v}_{\gamma}.
\label{S5}
\end{equation}
The evolution equations of the vorticities and of the divergences can be obtained by taking, respectively, the
curl and the divergence of Eqs. (\ref{SA})--(\ref{SC}) and by using Eqs. (\ref{S2}) and (\ref{S3}). To simplify
the obtained expressions it is useful to introduce the total comoving charge density and the comoving current density.
\begin{equation}
\rho_{\mathrm{q}}= e(n_{\mathrm{i}} - n_{\mathrm{e}}),\qquad
\vec{J} = e ( n_{\mathrm{i}} \vec{v}_{\mathrm{i}} - n_{\mathrm{e}} \vec{v}_{\mathrm{e}}).
\label{th1a}
\end{equation}
Thus, the evolution of the vorticities and of the divergences of the electrons are, respectively,
\begin{eqnarray}
\partial_{\tau}\vec{\omega}_{\mathrm{e}}+ {\mathcal H}\,\vec{\omega}_{\mathrm{e}} &=& \frac{e n_{\mathrm{e}}}{\rho_{\mathrm{e}} \, a^{4}} \biggl[ \partial_{\tau} \vec{B}
+ (\vec{v}_{\mathrm{e}}\cdot\vec{\nabla}) \vec{B} + \theta_{\mathrm{e}} \vec{B} - (\vec{B} \cdot\vec{\nabla})\vec{v}_{\mathrm{e}}\biggr]
\nonumber\\
&+&
\frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{e}}} a
\Gamma_{\gamma \, \mathrm{e}} (\vec{\omega}_{\gamma} - \vec{\omega}_{\mathrm{e}}) + a \Gamma_{\mathrm{e\,i}} ( \vec{\omega}_{\mathrm{i}} - \vec{\omega}_{\mathrm{e}}),
\label{S6}\\
\partial_{\tau} \theta_{\mathrm{e}} + {\mathcal H} \theta_{\mathrm{e}} &=& - \frac{e n_{\mathrm{e}}}{\rho_{\mathrm{e}} \, a^{4}} \biggl[ 4 \pi \rho_{\mathrm{q}} +
\vec{\omega}_{\mathrm{e}} \cdot \vec{B} - 4 \pi \vec{v}_{\mathrm{e}} \cdot \vec{J} - \vec{v}_{\mathrm{e}} \cdot \partial_{\tau} \vec{E}\biggr] - \nabla^2 \phi
\nonumber\\
&+& \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{e}}} a \Gamma_{\gamma\mathrm{e}} (\theta_{\gamma} - \theta_{\mathrm{e}})+
a \Gamma_{\mathrm{e}\,\mathrm{i}} (\theta_{\mathrm{i}} - \theta_{\mathrm{e}}),
\label{th1}
\end{eqnarray}
Conversely the vorticity and the three-divergence of the ions evolve as:
\begin{eqnarray}
\partial_{\tau} \vec{\omega}_{\mathrm{i}} + {\mathcal H}\,\vec{\omega}_{\mathrm{i}} &=& - \frac{e n_{\mathrm{i}}}{\rho_{\mathrm{i}} \, a^{4}}
\biggl[ \partial_{\tau} \vec{B} + (\vec{v}_{\mathrm{i}}\cdot\vec{\nabla}) \vec{B} + \theta_{\mathrm{i}} \vec{B} - (\vec{B} \cdot\vec{\nabla})\vec{v}_{\mathrm{i}}\biggr]
\nonumber\\
&+& \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{i}}} a
\Gamma_{\gamma \, \mathrm{i}} (\vec{\omega}_{\gamma}-\vec{\omega}_{\mathrm{i}} )
+ a \Gamma_{\mathrm{e\,i}} \frac{\rho_{\mathrm{e}}}{\rho_{\mathrm{i}}}( \vec{\omega}_{\mathrm{e}} - \vec{\omega}_{\mathrm{i}}),
\label{S7}\\
\partial_{\tau} \theta_{\mathrm{i}} + {\mathcal H} \theta_{\mathrm{i}} &=& \frac{e n_{\mathrm{i}}}{\rho_{\mathrm{i}} \, a^{4}} \biggl[ 4 \pi \rho_{\mathrm{q}} +
\vec{\omega}_{\mathrm{i}} \cdot \vec{B} - 4 \pi \vec{v}_{\mathrm{i}} \cdot \vec{J} - \vec{v}_{\mathrm{i}} \cdot \partial_{\tau} \vec{E}\biggr] - \nabla^2 \phi
\nonumber\\
&+& \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{i}}} a \Gamma_{\gamma\mathrm{i}} (\theta_{\gamma} - \theta_{\mathrm{i}})+
a \Gamma_{\mathrm{e}\,\mathrm{i}} \frac{\rho_{\mathrm{e}}}{\rho_{\mathrm{i}}} (\theta_{\mathrm{e}} - \theta_{\mathrm{i}}).
\label{th2}
\end{eqnarray}
Finally, the evolution equations for the photons are given by:
\begin{eqnarray}
\partial_{\tau}\vec{\omega}_{\gamma} &=& a \Gamma_{\gamma\mathrm{i}} (\vec{\omega}_{\mathrm{i}} - \vec{\omega}_{\gamma}) +
a \Gamma_{\gamma\mathrm{e}} ( \vec{\omega}_{\mathrm{e}} - \vec{\omega}_{\gamma}),
\label{S8}\\
\partial_{\tau} \theta_{\gamma} &=& - \frac{1}{4}\nabla^2 \delta_{\gamma} - \nabla^2 \phi +
a \Gamma_{\gamma\mathrm{i}} (\theta_{\mathrm{i}} - \theta_{\gamma}) +
a \Gamma_{\gamma\mathrm{e}} ( \theta_{\mathrm{e}} - \theta_{\mathrm{i}}).
\label{th3}
\end{eqnarray}
The system described by the set of equations deduced so far will be considered as globally neutral. In particular, prior to photon decoupling,
the electron and ion (comoving) concentrations have a common value $n_{0}$, i.e. $n_{\mathrm{i}} = n_{\mathrm{e}} = n_{0}$ where\footnote{If not otherwise stated the pivotal values of the cosmological parameters will be the ones determined
from the WMAP 7yr data alone in the light of the $\Lambda$CDM paradigm.}
\begin{equation}
n_{0}= \eta_{\mathrm{b}0} n_{\gamma}, \qquad \eta_{\mathrm{b}0}=6.177 \times 10^{-10}
\biggl(\frac{h_{0}^2\Omega_{\mathrm{b}0}}{0.02258}\biggr) \biggl(\frac{2.725\, \mathrm{K}}{T_{\gamma 0}}
\biggr)^{3},
\label{th4}
\end{equation}
and $T_{\gamma 0}$ is the present value of the CMB temperature determining the concentration of the photons; $\Omega_{\mathrm{b}0}$ is the present value of the critical fraction of baryons, while $h_{0}$ is the Hubble constant in units
of $100\, \mathrm{Km}/(\mathrm{Mpc} \times \mathrm{sec})$.
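The quoted value of $\eta_{\mathrm{b}0}$ follows from elementary arithmetic; the sketch below reproduces it with commonly adopted CGS values of the physical constants (the specific numbers are inserted purely for illustration):
\begin{verbatim}
# Sketch of the arithmetic behind eta_b0: ratio of the baryon to the
# photon concentration today (CGS units; constants are standard values).
import math

h0sq_Ob  = 0.02258                 # h_0^2 * Omega_b0 (WMAP 7yr)
T_gamma0 = 2.725                   # K
k_B      = 1.3807e-16              # erg/K
hbar_c   = 3.1615e-17              # erg*cm
rho_c_h2 = 1.8785e-29              # critical density/h^2, g/cm^3
m_p      = 1.6726e-24              # proton mass, g
zeta3    = 1.2020569

n_gamma = 2*zeta3/math.pi**2 * (k_B*T_gamma0/hbar_c)**3   # ~411 cm^-3
n_b     = h0sq_Ob * rho_c_h2 / m_p
print(n_b / n_gamma)               # ~6.18e-10, as quoted above
\end{verbatim}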
The system of Eqs. (\ref{S6})--(\ref{th3}) is coupled
with the evolution of the density contrasts of the electrons, ions and photons (i.e.
$\delta_{\mathrm{e}}$, $\delta_{\mathrm{i}}$ and $\delta_{\gamma}$)
\begin{eqnarray}
&& \delta_{\mathrm{e}}' = - \theta_{\mathrm{e}} + 3 \psi'
- \frac{e n_{\mathrm{e}}}{\rho_{\mathrm{e}} a^4}\vec{E} \cdot \vec{v}_{\mathrm{e}},\qquad \delta_{\mathrm{i}}' = - \theta_{\mathrm{i}} + 3 \psi'
+ \frac{e n_{\mathrm{i}}}{\rho_{\mathrm{i}} a^4}\vec{E} \cdot \vec{v}_{\mathrm{i}},
\label{dc1}\\
&& \delta_{\gamma}' = 4\psi' - \frac{4}{3} \theta_{\gamma}.
\label{dc2}
\end{eqnarray}
Finally the metric fluctuations, the density contrasts and the divergences of the peculiar
velocities are both determined and constrained by the perturbed Einstein equations
(see, e.g. Eqs. (2.43)--(2.46) in the first article of Ref. \cite{mg1}).
Concerning the system of Eqs. (\ref{S6})--(\ref{th3}) two comments are in order:
\begin{itemize}
\item{} Eqs. (\ref{S6})--(\ref{th1}) (as well as Eqs. (\ref{S7})--(\ref{th2})) couple together the evolution of the vorticities, the evolution of the
divergences and the gradients of the magnetic field; while in the linearized approximation
the spatial gradients are simply neglected, in the forthcoming sections the
evolution of the vorticity will be studied
to a given order in the spatial gradients;
\item{} the electron and ion masses break the Weyl rescaling of the whole system
of equations; this aspect can be appreciated by noticing that the prefactor appearing in front of the square brackets at the right hand side of Eqs. (\ref{S6})--(\ref{th1})
and Eqs. (\ref{S7})--(\ref{th2})
is, respectively, $e/(m_{\mathrm{e}} a)$ and $e/(m_{\mathrm{i}} a)$.
\end{itemize}
Equations (\ref{S6})--(\ref{th3}) have three different
scales of vorticity exchange: the photon-ion, the photon-electron and the electron-ion rates, whose
respective magnitudes determine the subleading terms and the different dynamical regimes. By taking the ratios of the two rates appearing at the right hand side of Eqs. (\ref{S6}) and
(\ref{S7}) the following two dimensionless ratios can be constructed\footnote{Note that $\rho_{\mathrm{i}}$ cancels out
when taking the ratio of the two rates in Eq. (\ref{S7}).}:
\begin{eqnarray}
\frac{3 \rho_{\mathrm{e}} \, \Gamma_{\mathrm{e\, i}}}{4 \, \rho_{\gamma} \Gamma_{\gamma\mathrm{e}}} &=& \frac{135 \, \zeta(3)}{16 \, \pi^5} \biggl(\frac{T}{m_{\mathrm{e}}}\biggr)^{-5/2}\, \eta_{\mathrm{b} 0} \, \ln{\Lambda_{\mathrm{C}}} \equiv
\biggl(\frac{T}{T_{\mathrm{e}\gamma}}\biggr)^{-5/2},
\label{R1}\\
\frac{3 \rho_{\mathrm{e}} \, \Gamma_{\mathrm{e\, i}}}{4 \, \rho_{\gamma} \Gamma_{\gamma\mathrm{i}}} &=&
\biggl(\frac{m_{\mathrm{p}}}{m_{\mathrm{e}}}\biggr)^2 \,\biggl(\frac{T}{T_{\mathrm{e}\gamma}}\biggr)^{-5/2} \equiv
\biggl(\frac{T}{T_{\mathrm{i}\gamma}}\biggr)^{-5/2},
\label{R2}
\end{eqnarray}
where $\zeta(3) =1.202...$ and the ion mass has been estimated through the proton mass; the effective temperatures
$T_{\mathrm{e}\gamma}$ and $T_{\mathrm{i}\gamma}$ introduced in the second equality of Eqs. (\ref{R1})
and (\ref{R2}) are defined as:
\begin{equation}
T_{\mathrm{e}\gamma} = m_{\mathrm{e}} \, {\mathcal N}^{2/5} \, \eta_{\mathrm{b}0}^{2/5}, \qquad
T_{\mathrm{i}\gamma} = m_{\mathrm{e}}^{-1/5} m_{\mathrm{p}}^{4/5} {\mathcal N}^{2/5} \, \eta_{\mathrm{b}0}^{2/5},
\qquad {\mathcal N} = \frac{270 \zeta(3)}{32 \, \pi^5} \ln{\Lambda_{\mathrm{C}}}.
\label{R3}
\end{equation}
In explicit terms and for the fiducial set of cosmological parameters determined on the basis of the WMAP 7yr data
alone in the light of the $\Lambda$CDM scenario \cite{wmap7a,wmap7b}
\begin{equation}
T_{\mathrm{e}\gamma} = 88.6\, \biggl(\frac{h_{0}^2 \Omega_{\mathrm{b}0}}{0.02258}\biggr)^{2/5} \, \mathrm{eV},\qquad
T_{\mathrm{i}\gamma} = 36.08\, \biggl(\frac{h_{0}^2 \Omega_{\mathrm{b}0}}{0.02258}\biggr)^{2/5} \, \mathrm{keV}.
\label{R4}
\end{equation}
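The figures quoted above can be reproduced from the definitions of $T_{\mathrm{e}\gamma}$ and $T_{\mathrm{i}\gamma}$, as in the following illustrative sketch; the Coulomb logarithm, which enters only through a weak power, is set to an assumed representative value $\ln{\Lambda_{\mathrm{C}}}\simeq 19$:
\begin{verbatim}
# Sketch evaluating the effective temperatures T_{e gamma}, T_{i gamma}.
import math

zeta3  = 1.2020569
m_e    = 0.511e6                   # electron mass, eV
m_p    = 938.272e6                 # proton mass, eV
eta_b0 = 6.177e-10
lnLam  = 19.0                      # assumed representative value
N      = (270*zeta3/(32*math.pi**5)) * lnLam

T_eg = m_e * (N * eta_b0)**0.4
T_ig = T_eg * (m_p / m_e)**0.8
print(T_eg, T_ig/1e3)              # ~88 eV and ~36 keV, as quoted above
\end{verbatim}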
On the basis of Eq. (\ref{R4}) there are three different dynamical regimes. When $T> T_{\mathrm{i}\gamma}$ the
ion-photon and the electron-photon rates dominate over the Coulomb rate: in this regime the photons, electrons
and ions are all coupled together and form a unique physical fluid with the same effective velocity. When
$T_{\mathrm{e}\gamma}< T < T_{\mathrm{i}\gamma}$ the electron-photon rate dominates over the Coulomb
rate, which is anyway larger than the ion-photon rate. Finally, for $T< T_{\mathrm{e} \gamma}$ the Coulomb
rate is always dominant, which means that the ion-electron fluid represents a unique entity characterized
by a single velocity, customarily referred to as the baryon velocity.
The effective temperatures $T_{\mathrm{e}\gamma}$ and $T_{\mathrm{i}\gamma}$ determine the hierarchies
between the different rates and should not be confused with the kinetic temperatures of the electrons and of the ions
which coincide approximately with the photon temperature $T_{\gamma} \simeq T_{\mathrm{e}} \simeq T_{\mathrm{i}}$.
For instance after matter-radiation equality $(T_{\mathrm{e}} - T_{\gamma})/T_{\gamma} \simeq {\mathcal O}(H/\Gamma_{\mathrm{e}\gamma})$ and $(T_{\mathrm{i}} - T_{\mathrm{e}})/T_{\gamma} \simeq
{\mathcal O}(H/\Gamma_{\mathrm{e}\mathrm{i}})$ where $H$ is the standard Hubble rate at the corresponding epoch.
Depending on the range of temperatures the effective evolution equations for the vorticities will change.
In the regime $T> T_{\mathrm{i}\gamma}$ the Coulomb rate can be neglected in comparison
with the Thomson rates and the vorticities of photons, electrons and ions approximately
coincide. For $T_{\mathrm{e}\gamma}< T < T_{\mathrm{i}\gamma}$ the Ohm law can be easily
obtained from Eq. (\ref{SA}) and it is given by
\begin{equation}
\vec{E} + \vec{v}_{\mathrm{e}} \times \vec{B} = \frac{\vec{J}}{\sigma} +
\frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} \frac{m_{\mathrm{i}}}{e} a^2 \Gamma_{\gamma\mathrm{e}}(\vec{v}_{\gamma}
- \vec{v}_{\mathrm{e}}),
\label{dc5}
\end{equation}
where it has been used that the baryon density $\rho_{\mathrm{b}} = (m_{\mathrm{i}} + m_{\mathrm{e}}) \tilde{n}_{0}$
coincides approximately with the ion density in the globally neutral case and that $n_{0} = a^3 \tilde{n}_{0}$; furthermore, in Eq. (\ref{dc5}), $\sigma$ denotes
the electric conductivity \cite{mg1aa}
\begin{equation}
\sigma = \frac{\omega_{\mathrm{p\, e}}^2}{4 \pi a \Gamma_{\mathrm{e i}}}, \qquad \omega_{\mathrm{p}\, e} = \sqrt{\frac{4 \pi e^2\, n_{\mathrm{e}}}{m_{\mathrm{e}} a}},
\label{dc5aa}
\end{equation}
expressed in terms of the Coulomb rate and in terms of the electron
plasma frequency\footnote{The electron plasma frequency of Eq. (\ref{dc5aa})
must not be confused with the vorticity.} $\omega_{\mathrm{p\,e}}$. By taking the curl of both sides of Eq. (\ref{dc5}) the following relation can be easily derived:
\begin{equation}
\vec{\nabla}\times \vec{E} + \vec{\nabla} \times (\vec{v}_{\mathrm{e}}\times \vec{B})=
\frac{\vec{\nabla}\times \vec{J}}{\sigma} + \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} \frac{m_{\mathrm{i}}}{e}
a^2 \Gamma_{\gamma\mathrm{e}} (\vec{\omega}_{\gamma} - \vec{\omega}_{\mathrm{e}}).
\label{dc6}
\end{equation}
Recalling now Eq. (\ref{S2}) and (\ref{S3}), Eq. (\ref{dc6}) becomes:
\begin{eqnarray}
\frac{\partial \vec{B}}{\partial \tau} = \vec{\nabla} \times (\vec{v}_{\mathrm{e}} \times \vec{B}) + \frac{\nabla^2 \vec{B}}{4\pi \sigma} - \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} \, a^2 \, \frac{m_{\mathrm{i}}}{e} \Gamma_{\mathrm{e}\gamma} (\vec{\omega}_{\gamma} - \vec{\omega}_{\mathrm{e}}).
\label{dc6a}
\end{eqnarray}
In the same regime the evolution equation for the vorticities of the ions and of the photons are, up to spatial gradients,
\begin{eqnarray}
&& \partial_{\tau} \vec{\omega}_{\mathrm{i}} + {\mathcal H} \vec{\omega}_{\mathrm{i}} =
- \frac{e n_{\mathrm{i}}}{\rho_{\mathrm{i}} a^4} \partial_{\tau} \vec{B},
\label{dc7}\\
&& \partial_{\tau} \vec{\omega}_{\gamma} =a \Gamma_{\gamma\mathrm{e}} ( \vec{\omega}_{\mathrm{e}}-\vec{\omega}_{\gamma}).
\label{dc8}
\end{eqnarray}
By eliminating the electron-photon rate between Eqs. (\ref{dc7}) and (\ref{dc8}) and by neglecting the
spatial gradients in Eq. (\ref{dc6a}), the following pair of approximate conservation laws can be obtained
\begin{eqnarray}
\partial_{\tau} \biggl( a \vec{\omega}_{\mathrm{i}} + \frac{e}{m_{\mathrm{i}}} \vec{B}\biggr) =0,
\label{dc10a}\\
\partial_{\tau} \biggl( \frac{e}{m_{\mathrm{i}}} \vec{B} - \frac{a}{R_{\mathrm{b}}} \vec{\omega}_{\gamma}\biggr) =0,
\label{dc10b}
\end{eqnarray}
where the ratio $R_{\mathrm{b}}$ is given by:
\begin{equation}
R_{\mathrm{b}} = \frac{3}{4} \frac{\rho_{\mathrm{b}}}{\rho_{\gamma}} = 30.36 \, \biggl(\frac{10^{3}}{z}\biggr) \, h_{0}^2
\Omega_{\mathrm{b}0}.
\label{dc11}
\end{equation}
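The numerical prefactor can be checked against $h_{0}^2\Omega_{\gamma0}\simeq2.47\times10^{-5}$ (an assumed standard value of the present photon density parameter), with $1+z\simeq z$ at the relevant redshifts:
\begin{verbatim}
# Sketch: R_b = (3/4)*(Omega_b0/Omega_gamma0)/(1+z); the coefficient
# multiplying h_0^2*Omega_b0 at z = 10^3 is approximately 30.4.
Og_h2 = 2.47e-5                    # h_0^2 * Omega_gamma0 (assumed)
z = 1.0e3
print(0.75 / (Og_h2 * z))          # ~30.4, matching the prefactor above
\end{verbatim}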
By further combining the relations of Eqs. (\ref{dc10a}) and (\ref{dc10b}) the vorticity of the photons can be directly related
to the vorticity of the ions since $\partial_{\tau}[ R_{\mathrm{b}} \vec{\omega}_{\mathrm{i}} + \vec{\omega}_{\gamma}] =0$. By assuming that
at a given time $\tau_{\mathrm{r}}$ the primordial value of the vorticity in the
electron-photon system is $\vec{\omega}_{\mathrm{r}}$ and that $\vec{B}(\tau_{\mathrm{r}})=0$
we shall have that
\begin{equation}
a_{\mathrm{r}} \vec{\omega}_{\mathrm{i}}(\tau_{\mathrm{r}}) +
\frac{4}{3} \frac{\rho_{\gamma}(\tau_{\mathrm{r}})}{\rho_{\mathrm{b}}(\tau_{\mathrm{r}})}
a_{\mathrm{r}} \vec{\omega}_{\gamma}(\tau_{\mathrm{r}}) = \vec{\omega}_{\mathrm{r}}.
\label{dc11a}
\end{equation}
Thus the solution of Eqs. (\ref{dc10a}) and (\ref{dc10b}) with the initial condition (\ref{dc11a}) can be written as:
\begin{eqnarray}
&& \vec{\omega}_{\mathrm{i}}(\vec{x},\tau) = - \frac{e}{m_{\mathrm{i}}} \frac{\vec{B}(\vec{x},\tau)}{a(\tau)} + \frac{a_{\mathrm{r}}}{a(\tau)} \vec{\omega}_{\mathrm{r}},
\label{dc12}\\
&& \vec{\omega}_{\gamma}(\vec{x},\tau) = \frac{R_{\mathrm{b}}(\tau)}{a(\tau)} [ \vec{\omega}_{\mathrm{r}} - a(\tau)
\vec{\omega}_{\mathrm{i}}(\vec{x},\tau)].
\label{dc13}
\end{eqnarray}
The approximate conservation laws of Eqs. (\ref{dc10a})--(\ref{dc10b}) can also be phrased in terms of the physical vorticities
$\vec{\Omega}_{X}(\vec{x},\tau) = a(\tau) \vec{\omega}_{X}(\vec{x},\tau)$ where $X$ denotes a generic subscript\footnote{Note that while $\vec{\omega}_{X}$ is related to $\vec{B}$, the physical vorticity $\vec{\Omega}_{X}$ is directly proportional to $\vec{{\mathcal B}}$. For instance, in the treatment of \cite{harrison1,harrison2,harrison3} the use of the physical vorticity and of the physical magnetic field is preferred.}.
For typical temperatures $T< T_{\mathrm{e}\gamma}$ the electrons and the ions are more strongly coupled than the
electrons and the photons. This means that the effective evolution can be described in terms of the one-fluid magnetohydrodynamical
(MHD in what follows) equations where, on top of the total current $\vec{J}$, the
center-of-mass vorticity of the electron-ion system is introduced
\begin{equation}
\vec{\omega}_{\mathrm{b}} = \frac{m_{\mathrm{i}} \vec{\omega}_{\mathrm{i}} + m_{\mathrm{e}} \vec{\omega}_{\mathrm{e}}}{m_{\mathrm{e}} + m_{\mathrm{i}}}.
\label{dc3}
\end{equation}
Equation (\ref{S6}) (multiplied by $m_{\mathrm{e}}$) and Eq. (\ref{S7}) (multiplied
by $m_{\mathrm{i}}$) can therefore be summed up with the result that
\begin{equation}
\partial_{\tau}\vec{\omega}_{\mathrm{b}} + {\mathcal H} \vec{\omega}_{\mathrm{b}} = \frac{\vec{\nabla}\times(\vec{J}\times
\vec{B})}{a^4 \rho_{\mathrm{b}}} + \frac{4}{3}
\frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} a \Gamma_{\gamma\, \mathrm{e}} (\vec{\omega}_{\gamma} - \vec{\omega}_{\mathrm{b}}).
\label{dc4}
\end{equation}
The evolution equation for the total current can be obtained from the difference of Eqs. (\ref{SA}) and (\ref{SB}). Since
the interaction rates are typically much larger than the expansion rates, the Ohm equation can be simplified and becomes
\begin{equation}
\vec{E} + \vec{v}_{\mathrm{b}} \times \vec{B} = \frac{\vec{J}}{\sigma} +
\frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} \frac{m_{\mathrm{i}}}{e} a^2 \Gamma_{\gamma\mathrm{e}}(\vec{v}_{\gamma}
- \vec{v}_{\mathrm{b}}),
\label{dc4a}
\end{equation}
where $\vec{v}_{\mathrm{b}}$ is the baryon velocity related to the baryon vorticity as
$\vec{\omega}_{\mathrm{b}} = \vec{\nabla} \times \vec{v}_{\mathrm{b}}$.
The similarity of Eqs. (\ref{dc6}) and (\ref{dc4}) should not be
misunderstood: while Eq. (\ref{dc6}) follows from the right hand side
of Eq. (\ref{SA}), Eq. (\ref{dc4}) follows by taking the difference of Eq. (\ref{SB}) (multiplied by $n_{\mathrm{i}}$) and of Eq. (\ref{SA}) (multiplied by $n_{\mathrm{e}}$).
The expression obtained by means of the latter difference is rather lengthy and can be found, in its full generality, in Ref. \cite{mg1aa} (see, in particular, Eqs. (7) and (10)). Here the expression has been simplified by neglecting
higher orders in $(m_{\mathrm{e}}/m_{\mathrm{i}})$.
The effective set of evolution equations can then be written, in this regime, as
\begin{eqnarray}
&& \partial_{\tau} \vec{\omega}_{\mathrm{b}} + {\mathcal H} \vec{\omega}_{\mathrm{b}} =
\frac{\vec{\nabla}\times(\vec{J} \times \vec{B})}{a^4 \, \rho_{\mathrm{b}}} + \frac{\epsilon'}{R_{\mathrm{b}}}
(\vec{\omega}_{\gamma} - \vec{\omega}_{\mathrm{b}}),
\label{dc4b}\\
&& \partial_{\tau} \vec{B} = \vec{\nabla}\times(\vec{v}_{\mathrm{b}}\times \vec{B}) + \frac{\nabla^2 \vec{B}}{4 \pi \sigma}
+ \frac{m_{\mathrm{i}} a}{e \, R_{\mathrm{b}}} \epsilon' (\vec{\omega}_{\mathrm{b}} - \vec{\omega}_{\gamma}),
\label{dc4c}\\
&& \partial_{\tau} \vec{\omega}_{\gamma} = \epsilon' (\vec{\omega}_{\mathrm{b}} - \vec{\omega}_{\gamma}),
\label{dc4d}
\end{eqnarray}
where $\epsilon' = a \Gamma_{\mathrm{e}\gamma}$ is the differential optical depth; as usual, the contribution of the ions has been neglected. In the tight coupling limit Eqs. (\ref{dc4b}), (\ref{dc4c}) and (\ref{dc4d})
imply that $\vec{\omega}_{\mathrm{b}\gamma} \simeq \vec{\omega}_{\mathrm{b}} \simeq \vec{\omega}_{\gamma}$
while $\vec{\omega}_{\mathrm{b}\gamma}$ obeys
\begin{equation}
\partial_{\tau} \vec{\omega}_{\mathrm{b}\gamma} + \frac{{\mathcal H} R_{\mathrm{b}}}{R_{\mathrm{b}} + 1}
\vec{\omega}_{\mathrm{b}\gamma} = R_{\mathrm{b}}\frac{\vec{\nabla}\times (\vec{J} \times \vec{B})}{\rho_{\mathrm{b}} \, a^4
(R_{\mathrm{b}} + 1)}.
\label{dcde}
\end{equation}
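For the sake of completeness we note that Eq. (\ref{dcde}) follows by multiplying Eq. (\ref{dc4b}) by $R_{\mathrm{b}}$ and adding Eq. (\ref{dc4d}), so that the terms containing $\epsilon'$ cancel:
\begin{equation}
R_{\mathrm{b}}\, \partial_{\tau} \vec{\omega}_{\mathrm{b}} + \partial_{\tau} \vec{\omega}_{\gamma} + {\mathcal H}\, R_{\mathrm{b}}\, \vec{\omega}_{\mathrm{b}} = R_{\mathrm{b}} \frac{\vec{\nabla}\times (\vec{J}\times\vec{B})}{a^4 \rho_{\mathrm{b}}};
\end{equation}
setting $\vec{\omega}_{\mathrm{b}} \simeq \vec{\omega}_{\gamma} \simeq \vec{\omega}_{\mathrm{b}\gamma}$ (and neglecting, for simplicity, the time dependence of $R_{\mathrm{b}}$ over the relevant timescales), the left hand side becomes $(R_{\mathrm{b}} + 1)\partial_{\tau}\vec{\omega}_{\mathrm{b}\gamma} + {\mathcal H}\, R_{\mathrm{b}}\, \vec{\omega}_{\mathrm{b}\gamma}$ and Eq. (\ref{dcde}) is recovered after dividing by $(R_{\mathrm{b}}+1)$.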
In analogy with what has been done before, the conservation laws can be derived by combining Eqs. (\ref{dc4b}) and (\ref{dc4c})
\begin{equation}
\partial_{\tau} \biggl( \vec{B} + \frac{m_{\mathrm{i}}}{e}\, a \,\vec{\omega}_{\mathrm{b}}\biggr) =
\vec{\nabla} \times (\vec{v}_{\mathrm{b}} \times \vec{B}) + \frac{\nabla^2 \vec{B}}{4\pi \sigma} +
\frac{m_{\mathrm{i}}}{e} \frac{\vec{\nabla} \times (\vec{J} \times \vec{B})}{a^3 \rho_{\mathrm{b}}}.
\label{dcdf}
\end{equation}
From Eqs. (\ref{dc4c}) and (\ref{dc4d}) and by neglecting the spatial
gradients it also follows
\begin{equation}
\partial_{\tau} \biggl( \vec{B} - \frac{a}{R_{\mathrm{b}}} \frac{m_{\mathrm{i}}}{e} \vec{\omega}_{\gamma} \biggr) = 0.
\label{dcdg}
\end{equation}
Equations (\ref{dcdf}) and (\ref{dcdg})
are separately valid, but, taken together and in the limit of tight baryon-photon coupling,
they imply that the magnetic field must be zero when the tight coupling is exact (i.e. $\vec{\omega}_{\gamma} =\vec{\omega}_{\mathrm{b}}$).
In spite of the various physical regimes encountered in the analysis of the evolution of the vorticity, the key point is to find a suitable source of large-scale vorticity which could be converted, in some way, into a large-scale magnetic field \cite{reviewmax} (see also \cite{cov1,cov2}). The conversion can occur not only prior to matter-radiation equality but also after it \cite{mishustin}, in the
regime where, as explained, the baryon-photon coupling becomes weak. Indeed, Eqs. (\ref{dc10a}) and (\ref{dcdf}) have the same
dynamical content when the spatial gradients are neglected and the only difference involves the coupling to the photons.
There have been, through the years, suggestions involving primordial turbulence
(see the interesting accounts of Refs. \cite{B1a}) and cosmic strings with small-scale structure (see, e.g., \cite{vort1,shellard1,dm2}). Since
the matter flow in baryonic wakes is turbulent, velocity gradients will be induced in the flow by the small-scale wiggles of the string, ultimately producing the vorticity. Dynamical friction between cosmic strings and matter may provide a further source of vorticity \cite{shellard1}. There have been also studies
trying to generate large-scale magnetic fields in the context of superconducting cosmic strings (see, for instance, \cite{dm2} and references therein). The possible generation of large-scale magnetic fields prior to hydrogen
recombination has been discussed in \cite{dolr1,dolr2,hoganr} (see also \cite{dolr3}). The vorticity required in order to produce the magnetic fields is generated, according to \cite{dolr1}, by the photon diffusion at second order in the temperature fluctuations. In a similar perspective Hogan \cite{hoganr} obtained less optimistic estimates which, according to \cite{dolr1,dolr2}, should be attributed to the different approximation schemes employed in the analysis. Along this perspective various analyses
discussed higher-order effects using the conventional
perturbative expansion in the presence of the relativistic fluctuations
of the geometry \cite{vort2}. In the present paper, as already mentioned, we are going to follow a different route
since we intend to use the gradient expansion for a direct estimate of the vorticity.
\renewcommand{\theequation}{3.\arabic{equation}}
\setcounter{equation}{0}
\section{Vorticity evolution in gradient expansion}
\label{sec3}
The conservation laws derived in section \ref{sec2} hold under the hypothesis that the spatial gradients are neglected in the evolution equations of the vorticity.
The logic of the gradient expansion \cite{gr1,gr2,gr3,gr4,gr5,gr6} can be combined
with the tenets of the drift approximation \cite{gr7,gr8,gr9} in the context of the ADM decomposition \cite{ADM1,ADM2}. It will be shown hereunder that the resulting formalism \cite{mg2} provides a more general description of the angular
momentum transfer between the various species of the plasma.
Consider therefore the standard ADM decomposition where the shift vectors are set to zero while the lapse function is kept arbitrary, i.e. $g_{00}(\vec{x},\tau) = N^2(\vec{x},\tau)$ and $g_{ij}(\vec{x},\tau) = - \gamma_{ij}(\vec{x},\tau)$. In this case the Maxwell equations can be written as
\begin{eqnarray}
&& \vec{\partial} \cdot \vec{E} = 4 \pi e [n_{\mathrm{i}} - n_{\mathrm{e}}],
\qquad \vec{\partial} \cdot \vec{B}=0,
\label{NH1}\\
&&\partial_{\tau} \vec{B} + \vec{\partial} \times \vec{E} =0,\qquad \vec{\partial} \times \vec{B} = 4 \pi e \biggl[n_{\mathrm{i}} \, \vec{v}_{\mathrm{i}} - n_{\mathrm{e}} \, \vec{v}_{\mathrm{e}}\biggr] + \partial_{\tau}\vec{E},
\label{NH2}
\end{eqnarray}
where the rescaled electric and magnetic fields are given by:
\begin{equation}
E^{i}(\vec{x},\tau) = \biggl(\frac{\sqrt{\gamma}}{N}\biggr)_{(\vec{x},\tau)} {\mathcal E}^{i}(\vec{x},\tau),\qquad B^{i}(\vec{x},\tau) = \biggl(\frac{\sqrt{\gamma}}{N}\biggr)_{(\vec{x},\tau)} {\mathcal B}^{i}(\vec{x},\tau);
\label{NH3}
\end{equation}
in Eq. (\ref{NH3}) the subscripts specify that the rescaling is space-time dependent. The rescaled concentrations are
\begin{equation}
n_{\mathrm{i}}(\vec{x},\tau) = \sqrt{\gamma} \, \tilde{n}_{\mathrm{i}}(\vec{x},\tau),
\qquad n_{\mathrm{e}}(\vec{x},\tau) = \sqrt{\gamma} \, \tilde{n}_{\mathrm{e}}(\vec{x},\tau).
\label{NH3a}
\end{equation}
The shorthand notation\footnote{Note that the operators introduced in Eqs. (\ref{NH1})--(\ref{NH3}) are the generalized curl, divergence and gradient operators; they reduce to the conventional curl, divergence and gradient operators in the conformally flat limit.} employed in Eqs. (\ref{NH1})--(\ref{NH3}) implies
for a generic vector $A^{i}$,
\begin{equation}
\vec{\partial}\cdot \vec{A} \equiv \partial_{i} A^{i},\qquad
(\vec{\partial}\times \vec{A})^{i} = \partial_{j}\biggl[N \gamma^{i k} \, \gamma^{j n} \, \eta_{n m k} \, A^{m}\biggr].
\label{NH4}
\end{equation}
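As a quick consistency check, in the conformally flat limit (i.e. $N \to a$ and $\gamma_{ij} \to a^2 \delta_{ij}$, so that $\eta_{n m k} = \sqrt{\gamma}\, \epsilon_{n m k} = a^3 \epsilon_{n m k}$) all the powers of $a(\tau)$ cancel in Eq. (\ref{NH4}) and
\begin{equation}
(\vec{\partial}\times\vec{A})^{i} = \partial_{j}\bigl[ a\, a^{-2}\delta^{i k}\, a^{-2} \delta^{j n}\, a^3\, \epsilon_{n m k}\, A^{m}\bigr] = \epsilon_{i j m}\, \partial_{j} A^{m},
\end{equation}
i.e. the ordinary flat-space curl, as anticipated in the footnote above.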
In appendix \ref{APPA} some relevant complements on this formalism have been collected to avoid
a digression from the main line of arguments contained in the present section. Two relevant
aspects must anyway be borne in mind:
\begin{itemize}
\item{} in the conformally flat limit (i.e. $N(\vec{x}, \tau) \to a(\tau)$ and $\gamma_{ij}(\vec{x}, \tau) \to
a^2(\tau) \delta_{ij}$) Eqs. (\ref{NH1}) and (\ref{NH2}) reproduce exactly Eqs. (\ref{S1}) and (\ref{S2});
\item{} the same comment holds for all the other fields (i.e. comoving or physical) involved
in the fully inhomogeneous description.
\end{itemize}
Using the generalized curl operator of Eq. (\ref{NH4}) the vorticity of the ions, of the electrons and of the
photons can be written as
\begin{equation}
\omega^{i}_{\mathrm{i}} = \partial_{j} \bigl( \Lambda^{ij}_{m} \, v_{\mathrm{i}}^{m}\bigr),\qquad
\omega^{i}_{\mathrm{e}} = \partial_{j} \bigl( \Lambda^{ij}_{m} \, v_{\mathrm{e}}^{m}\bigr),
\qquad
\omega^{i}_{\gamma} = \partial_{j} \bigl( \Lambda^{ij}_{m} \, v_{\gamma}^{m}\bigr),
\label{NH5}
\end{equation}
where $K_{ij}$ is the extrinsic curvature (see appendix \ref{APPA}) while $\Lambda_{m}^{ij}$ and $\overline{\Lambda}^{ij}_{m}$ are defined as\footnote{Recall that $\eta_{a b c} = \sqrt{\gamma} \, \epsilon_{a b c}$ and that $\eta^{a b c} = \epsilon^{a b c}/\sqrt{\gamma}$.}:
\begin{equation}
\Lambda^{ij}_{m} = N \gamma^{ik}\,\gamma^{j n} \, \eta_{n m k},\qquad
\overline{\Lambda}^{ij}_{m} = 2 N^2 [ K^{i k} \, \gamma^{j n} + K^{j n} \, \gamma^{i k} ] \eta_{n m k}.
\label{NH5a}
\end{equation}
Using Eqs. (\ref{NH5})--(\ref{NH5a}) as well as the evolution equations of the velocities (see, Eqs. (\ref{Av1})--(\ref{Av2})), the evolution for the vorticity of the electrons and of the ions can be written, respectively, as\footnote{We shall focus, without loss of generality, on the situation where the lapse function is homogeneous, i.e. $N(\vec{x},\tau) = N(\tau)$; in this case
the already lengthy expressions will be more manageable since the spatial derivatives of the lapse function
will vanish.}
\begin{eqnarray}
&& \partial_{\tau} \omega_{\mathrm{e}}^{i} + \biggl( N K - \frac{\partial_{\tau} N}{N}\biggr) \omega_{\mathrm{e}}^{i} -
{\mathcal G}^{i}_{k} \omega^{k}_{\mathrm{e}} - {\mathcal F}_{\mathrm{e}}^{i} =
\nonumber\\
&&- \frac{e \tilde{n}_{\mathrm{e}} N^2}{ \rho_{\mathrm{e}} \sqrt{\gamma}} \biggl\{ (\vec{\partial} \times \vec{E})^{i} + [ \vec{\partial}\times (\vec{v}_{\mathrm{e}} \times \vec{B})]^{i} \biggr\} + N \Gamma_{\mathrm{e}\mathrm{i}} ( \omega^{i}_{\mathrm{i}} -
\omega^{i}_{\mathrm{e}})
+ \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{e}}} N \Gamma_{\mathrm{e} \gamma}(\omega_{\gamma}^{i} - \omega_{\mathrm{e}}^{i}),
\label{NH6}\\
&& \partial_{\tau} \omega_{\mathrm{i}}^{i} + \biggl( N K - \frac{\partial_{\tau} N}{N}\biggr) \omega_{\mathrm{i}}^{i} -
{\mathcal G}^{i}_{k} \omega^{k}_{\mathrm{i}} - {\mathcal F}_{\mathrm{i}}^{i} =
\nonumber\\
&& \frac{e \tilde{n}_{\mathrm{i}} N^2}{ \rho_{\mathrm{i}} \sqrt{\gamma}} \biggl\{ (\vec{\partial} \times \vec{E})^{i} + [ \vec{\partial}\times (\vec{v}_{\mathrm{i}} \times \vec{B})]^{i} \biggr\} + N \Gamma_{\mathrm{i}\mathrm{e}} \frac{\rho_{\mathrm{e}}}{\rho_{\mathrm{i}}}( \omega^{i}_{\mathrm{e}} -
\omega^{i}_{\mathrm{i}})
+ \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{i}}} N \Gamma_{\mathrm{i} \gamma}(\omega_{\gamma}^{i} - \omega_{\mathrm{i}}^{i}).
\label{NH7}
\end{eqnarray}
Similarly, from Eq. (\ref{Av2a}) the evolution equation for the vorticity of the photons can be written as
\begin{equation}
\partial_{\tau} \omega_{\gamma}^{i} + \biggl[ \frac{4}{3} N K - \frac{\partial_{\tau} N}{N} \biggr] \omega_{\gamma}^{i} -
{\mathcal G}^{i}_{k} \omega_{\gamma}^{k} - {\mathcal F}^{i}_{\gamma}
= N \Gamma_{\gamma\mathrm{e}} (\omega_{\mathrm{e}}^{i} -
\omega_{\gamma}^{i}) + N \Gamma_{\gamma\mathrm{i}} ( \omega_{\mathrm{i}}^{i} - \omega_{\gamma}^{i}).
\label{NH7a}
\end{equation}
The quantities ${\mathcal F}_{\mathrm{e}}^{i}$,
${\mathcal F}_{\mathrm{i}}^{i}$ and ${\mathcal F}_{\gamma}^{i}$ appearing in Eqs. (\ref{NH6}), (\ref{NH7}) and (\ref{NH7a}) are of the same order as the other terms appearing in the equations and they are defined as
\begin{eqnarray}
{\mathcal F}_{\mathrm{e}}^{i} &=& \partial_{j} \biggl( \overline{\Lambda}^{ij}_{m} v^{m}_{\mathrm{e}}\biggr) +
\frac{4}{3} N \Gamma_{\gamma\mathrm{e}} \partial_{j} \biggl(\frac{\rho_{\gamma}}{\rho_{\mathrm{e}}}\biggr) \Lambda^{ij}_{m}
(v^{m}_{\gamma} - v^{m}_{\mathrm{e}}),
\nonumber\\
&+& \partial_{j} {\mathcal G}^{m}_{a} \Lambda^{ij}_{m} v_{\mathrm{e}}^{a} - N \partial_{j} K \Lambda^{ij}_{m} v^{m}_{\mathrm{e}} - \partial_{j} \biggl( \frac{e \tilde{n}_{\mathrm{e}} N^2}{\rho_{\mathrm{e}} \sqrt{\gamma}} \biggr) \Lambda^{ij}_{m}\biggl[ E^{m} + (\vec{v}_{\mathrm{e}}\times \vec{B})^{m} \biggr],
\label{NH8}\\
{\mathcal F}^{i}_{\mathrm{i}} &=& \partial_{j} \biggl( \overline{\Lambda}^{ij}_{m} v^{m}_{\mathrm{i}}\biggr) +
\frac{4}{3} N \Gamma_{\gamma\mathrm{i}} \partial_{j} \biggl(\frac{\rho_{\gamma}}{\rho_{\mathrm{i}}}\biggr) \Lambda^{ij}_{m}
(v^{m}_{\gamma} - v^{m}_{\mathrm{i}}) + N \partial_{j}\biggl(\frac{\rho_{\mathrm{e}}}{\rho_{\mathrm{i}}}\biggr) \, \Lambda^{ij}_{m}\, \Gamma_{\mathrm{i e}} (v^{m}_{\mathrm{e}} - v^{m}_{\mathrm{i}}),
\nonumber\\
&+& \partial_{j} {\mathcal G}^{m}_{a} \Lambda^{ij}_{m} v_{\mathrm{i}}^{a} - N \partial_{j} K \Lambda^{ij}_{m} v^{m}_{\mathrm{i}} + \partial_{j} \biggl( \frac{e \tilde{n}_{\mathrm{i}} N^2}{\rho_{\mathrm{i}} \sqrt{\gamma}} \biggr) \Lambda^{ij}_{m} \biggl[ E^{m} + (\vec{v}_{\mathrm{i}}\times \vec{B})^{m} \biggr],
\label{NH9}\\
{\mathcal F}_{\gamma}^{i} &=& \partial_{j} \biggl(\overline{\Lambda}^{i j}_{k} \, v^{k}_{\gamma}\biggr) + \Lambda^{i j}_{k} \, v^{q}_{\gamma} \partial_{j} {\mathcal G}^{k}_{q} - \frac{4}{3} N\partial_{j} K \, \Lambda^{i j}_{k} \, v^{k}_{\gamma} - \frac{N^2}{4} \partial_{j}\biggl\{ \frac{\Lambda^{i j}_{k}}{\rho_{\gamma}} \partial_{m}\biggl[ \rho_{\gamma} \gamma^{m k} \biggr]\biggr\}.
\label{NH9a}
\end{eqnarray}
The generalized scalar and vector products appearing in Eqs. (\ref{NH8}), (\ref{NH9}) and (\ref{NH9a}) are defined as
\begin{equation}
\vec{F} \cdot \vec{G} = \gamma_{m n} F^{m} G^{n},\qquad (\vec{F} \times \vec{G})^{k} = \frac{\gamma_{i n} \gamma_{m \ell}}{N}
F^{n} G^{m} \eta^{i \,\ell\, k},
\label{NH9b}
\end{equation}
and coincide with the ordinary scalar and vector products in the conformally
flat limit introduced after Eq. (\ref{NH4}).
The velocity fields appearing in Eqs. (\ref{NH6}) and (\ref{NH7}) are all subjected to the fully inhomogeneous
form of the momentum constraint implying, from Eq. (\ref{0i}),
\begin{equation}
\frac{1}{N} \biggl( \nabla_{i} K - \nabla_{k} K^{k}_{i} \biggr) = \ell_{\mathrm{P}}^2 (p + \rho) u^{0} u_{i},\qquad u^{0} = \frac{1}{N}
\sqrt{1 + u^2},
\label{CON1}
\end{equation}
where $u^2 = u^{i} u^{j} \gamma_{ij}$ and where $u^{0}$ and $u^{i}$ can also be defined in terms of the total velocity field
$v^{i}$ which turns out to be the weighted sum of the velocity fields
of the electrically charged and of the electrically neutral species, i.e.
\begin{equation}
(p + \rho) v^{k} = \sum_{a} ( p_{\mathrm{a}} + \rho_{\mathrm{a}}) v^{k}_{\mathrm{a}} =
\rho_{\mathrm{e}} v^{k}_{\mathrm{e}} + \rho_{\mathrm{i}} v^{k}_{\mathrm{i}} + \frac{4}{3} \rho_{\gamma} v^{k}_{\gamma}
+ \frac{4}{3} \rho_{\nu} v^{k}_{\nu} + \rho_{\mathrm{c}} v_{\mathrm{c}}^{k},
\label{CON4}
\end{equation}
where the contribution of the cold dark matter particles and of the massless
neutrinos has also been added.
The explicit connection between $u^{0}$, $u^{i}$ and $v^{i}$ is given by:
\begin{equation}
u^{0} = \frac{\cosh{y}}{N}, \qquad u^{i} = \frac{v^{i}}{N} \cosh{y}, \qquad \cosh{y} = \frac{1}{\sqrt{1 - v^2/N^2}},
\label{CON2}
\end{equation}
where $v^2 = v^{i} v^{j} \gamma_{ij}$. In terms of $v^{i}$ and $v^2$ the momentum constraint of Eq. (\ref{CON1}) can also be written as
\begin{equation}
\ell_{\mathrm{P}}^2 ( p + \rho) \frac{v^{i}}{N} = \biggl(1 - \frac{v^2}{N^2} \biggr) \, \nabla_{k} \biggl(K^{k i} - K \gamma^{k i}\biggr).
\label{CON3}
\end{equation}
All the discussion of section \ref{sec2} can be generalized to the fully inhomogeneous
case and we shall be particularly interested in the generalization
of the conservation laws determining the angular momentum exchange
between the various species. Consider then the situation where
the electron-photon rate dominates against the Coulomb rate. In this
case the fully inhomogeneous form of the Ohm law reads
\begin{equation}
- E^{k} - (\vec{v}_{\mathrm{e}} \times \vec{B})^{k} + \frac{J^{k}}{\sigma} +
\frac{4}{3\, e} \, \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} m_{\mathrm{i}} \Gamma_{\mathrm{e} \gamma}
\, \frac{\sqrt{\gamma}}{N} ( v_{\gamma}^{k} - v_{\mathrm{e}}^{k}) =0.
\label{D1}
\end{equation}
By taking the generalized curl of Eq. (\ref{D1}) (see Eq. (\ref{NH4})) the following
equation can be obtained
\begin{eqnarray}
&& - \vec{\partial} \times \vec{E} - \vec{\partial}\times(\vec{v}_{\mathrm{e}} \times \vec{B}) + \vec{\partial}\times(\vec{J}/\sigma)
\nonumber\\
&& +
\frac{4}{3\, e} \, \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} m_{\mathrm{i}} \Gamma_{\mathrm{e} \gamma}
\, \frac{\sqrt{\gamma}}{N} ( \vec{\omega}_{\gamma} - \vec{\omega}_{\mathrm{e}})
- \frac{4}{3} \,\frac{m_{\mathrm{i}}}{e} \,N^2 (\vec{v}_{\gamma} -\vec{v}_{\mathrm{e}}) \times \vec{\partial}
\biggl[ \Gamma_{\mathrm{e}\gamma} \frac{\sqrt{\gamma}}{N} \, \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}}\biggr]=0,
\label{D2}
\end{eqnarray}
where, consistently with Eq. (\ref{NH9b}), the last term at the left hand side is defined in terms of the generalized
vector product and it vanishes exactly in the conformally flat limit.
By assuming, as physically plausible prior to decoupling, that the conductivity is
homogeneous, Eqs. (\ref{NH1}) and (\ref{NH2}) can be used inside
Eq. (\ref{D2}) and the final equation will then be:
\begin{eqnarray}
&& \partial_{\tau} \vec{B} = \vec{\partial} \times (\vec{v}_{\mathrm{e}}\times \vec{B})
- \frac{1}{4\pi \sigma} \vec{\partial} \times (\vec{\partial} \times \vec{B}) -
\frac{4}{3\, e} \, \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} m_{\mathrm{i}} \Gamma_{\mathrm{e} \gamma}
\, \frac{\sqrt{\gamma}}{N} ( \vec{\omega}_{\gamma} - \vec{\omega}_{\mathrm{e}})
\nonumber\\
&& + \frac{4}{3} N^2 \frac{m_{\mathrm{i}}}{e} (\vec{v}_{\gamma} -\vec{v}_{\mathrm{e}}) \times \vec{\partial} \biggl[ \Gamma_{\mathrm{e}\gamma} \frac{\sqrt{\gamma}}{N}\, \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}}\biggr].
\label{D3}
\end{eqnarray}
Equation (\ref{D3}) reduces, in the conformally flat limit, to Eq. (\ref{dc6a}).
The same logic can be applied to all the other derivations; expanding the obtained results to first order in the spatial gradients, the generalized system for the evolution of the vorticities reads
\begin{eqnarray}
&& \partial_{\tau} \omega_{\mathrm{i}}^{k} = \biggl( N K + 2 \frac{\partial_{\tau} N}{N}
\biggr) \omega_{\mathrm{i}}^{k} - \frac{e \tilde{n}_{\mathrm{i}}}{\rho_{\mathrm{i}} \sqrt{\gamma}} \, N^2 \, \partial_{\tau} B^{k},
\label{D4}\\
&& \partial_{\tau} B^{k} = -\frac{4}{3\, e} \Gamma_{\mathrm{e}\gamma} \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} m_{\mathrm{i}} \frac{\sqrt{\gamma}}{N} (\omega_{\gamma}^{k} -
\omega_{\mathrm{e}}^{k}),
\label{D5}\\
&& \partial_{\tau} \omega_{\gamma}^{k} = \biggl( \frac{2}{3} N K + 2 \frac{\partial_{\tau} N}{N} \biggr) \omega_{\gamma}^{k} + N \Gamma_{\mathrm{e}\gamma}
(\omega_{\mathrm{e}}^{k} - \omega_{\gamma}^{k}).
\label{D6}
\end{eqnarray}
Equations (\ref{D4}), (\ref{D5}) and (\ref{D6}) reduce, respectively, to Eqs.
(\ref{dc6a}), (\ref{dc7}) and (\ref{dc8}) in the conformally flat limit.
Equations (\ref{D4}), (\ref{D5}) and (\ref{D6}) apply in the situation where
the magnetic fields are initially zero and do not contribute to the extrinsic curvature so that
$K_{i}^{j} = K/3 \delta_{i}^{j} + \overline{K}_{i}^{j}$ with $\overline{K}_{i}^{j}=0$. In this case Eqs. (\ref{D4})--(\ref{D6}) reduce to a pair
of remarkable conservation laws whose explicit expression, up to
spatial gradients, is
\begin{eqnarray}
&&\partial_{\tau} \biggl[ \frac{\sqrt{\gamma}}{N^2} \omega_{\mathrm{i}}^{k} +
\frac{ e \tilde{n}_{\mathrm{i}}}{\rho_{\mathrm{b}}} B^{k} \biggr] =0,
\label{D7}\\
&& \partial_{\tau} \biggl[ B^{k} - \frac{m_{\mathrm{i}}}{e \, \overline{R}_{\mathrm{b}}}
\frac{\gamma^{1/3}}{N^2} \omega_{\gamma}^{k} \biggr] =0,
\label{D8}
\end{eqnarray}
where $\overline{R}_{\mathrm{b}}(\vec{x},\tau_{1})$ is a constant in time (but not in space) and follows from the inhomogeneous generalization of $R_{\mathrm{b}}(\vec{x},\tau)$:
\begin{equation}
R_{\mathrm{b}}(\vec{x},\tau) = \frac{3}{4} \frac{\rho_{\mathrm{b}}(\vec{x},\tau)}
{\rho_{\gamma}(\vec{x},\tau)} = \overline{R}_{\mathrm{b}}(\vec{x},\tau_{1})\gamma^{1/6}.
\label{D9}
\end{equation}
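A minimal way to understand the scaling of Eq. (\ref{D9}) is to note that, to lowest order in the gradient expansion, the continuity equations imply $\partial_{\tau} \rho_{\mathrm{b}} \simeq N K \rho_{\mathrm{b}}$ while, for the radiation-like photons, $\partial_{\tau} \rho_{\gamma} \simeq (4/3)\, N K \rho_{\gamma}$; since $N K = - \partial_{\tau} \ln{\sqrt{\gamma}}$ it follows that
\begin{equation}
\rho_{\mathrm{b}} \propto \gamma^{-1/2},\qquad \rho_{\gamma} \propto \gamma^{-2/3},\qquad
R_{\mathrm{b}} = \frac{3}{4}\, \frac{\rho_{\mathrm{b}}}{\rho_{\gamma}} \propto \gamma^{1/6}.
\end{equation}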
The evolution of the vorticity of the baryons
as well as the tight coupling between the baryons and the photons can be discussed
in full analogy with the considerations already developed above in the
case of the electron-photon coupling. The inhomogeneous generalization
of the Ohm law when the Coulomb scattering dominates against both the electron-photon and the ion-photon coupling has been derived in Ref. \cite{mg2}
(see Eq. (3.34)). To leading order in the gradient expansion
the evolution of the baryon vorticity can be written as
\begin{eqnarray}
&& \partial_{\tau} \omega_{\mathrm{b}}^{k} = \biggl( N K + 2 \frac{\partial_{\tau} N}{N} \biggr) \omega_{\mathrm{b}}^{k} + \frac{\epsilon'}{R_{\mathrm{b}}}(\omega_{\gamma}^{k} - \omega_{\mathrm{b}}^{k}),
\label{D10}\\
&& \partial_{\tau} B^{k} = - \frac{m_{\mathrm{i}}}{e} \frac{\epsilon'}{R_{\mathrm{b}}}
\frac{\sqrt{\gamma}}{N^2} (\omega_{\gamma}^{k} - \omega_{\mathrm{b}}^{k}),
\label{D11}
\end{eqnarray}
where $\epsilon'= N \Gamma_{\mathrm{e}\gamma}$ is the inhomogeneous
generalization of the optical depth. By eliminating
$\epsilon'$ between Eqs. (\ref{D10}) and (\ref{D11}) the following equation
\begin{equation}
\partial_{\tau} \biggl[ \frac{\sqrt{\gamma}}{N^2} \omega_{\mathrm{b}}^{k} +
\frac{ e \tilde{n}_{\mathrm{i}}}{\rho_{\mathrm{b}}} B^{k} \biggr] =0
\label{D12}
\end{equation}
is readily obtained. Note that Eq. (\ref{D12}) coincides, up to spatial
gradients and in the conformally flat limit, with Eq. (\ref{dcdf}).
\renewcommand{\theequation}{4.\arabic{equation}}
\setcounter{equation}{0}
\section{Maximal vorticity induced by the geometry}
\label{sec4}
In this paper the expansion is organized not in terms of the relative magnitude of the gravitational and electromagnetic fluctuations but in terms of the number of gradients carried by each order of the expansion. From the momentum constraint (see Eq. (\ref{CON3})), the total velocity field can be written, formally, as
\begin{eqnarray}
v^{i} &=& - \frac{N\, S^{i}}{ 2 S^2} \biggl[ 1 - \sqrt{1 + 4 S^2} \biggr] \simeq N S^{i} \biggl[ 1 - S^2 + {\mathcal O}(\epsilon^3)\biggr] +
{\mathcal O}(\epsilon^4),
\label{S0}\\
S^{i} &=& \frac{1}{\ell_{\mathrm{P}}^2 ( p + \rho)}
\nabla_{k}\biggl( K^{k i} - K \gamma^{k i}\biggr),
\label{ES1}
\end{eqnarray}
where the orders of the expansion appearing in Eq. (\ref{S0}) are defined by the number
of gradients. From Eqs. (\ref{S0}) and (\ref{NH5})--(\ref{NH5a}) the total vorticity can be written as
\begin{equation}
\omega_{\mathrm{tot}}^{i} = \partial_{j}\biggl\{ N \Lambda^{ij}_{m} S^{m} \biggl[ 1 - S^2 + {\mathcal O}(\epsilon^3)\biggr]\biggr\}.
\label{ES2}
\end{equation}
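We also note, for completeness, that the first equality of Eq. (\ref{S0}) follows from Eq. (\ref{CON3}): since $v^{i}$ is directed along $S^{i}$, Eq. (\ref{CON3}) reduces to the quadratic relation
\begin{equation}
S\, \frac{v^2}{N^2} + \frac{v}{N} - S = 0, \qquad S^2 = S^{i}\, S^{j}\, \gamma_{ij},
\end{equation}
whose root vanishing in the limit $S \to 0$ is exactly the one reported in Eq. (\ref{S0}).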
To implement the gradient expansion let us parametrize the geometry as
\begin{equation}
\gamma_{ij}(\vec{x},\tau) = a^2(\tau)[ \alpha_{ij}(\vec{x}) + \beta_{ij}(\vec{x},\tau) ],
\qquad \gamma^{ij}(\vec{x},\tau) = \frac{1}{a^2(\tau)}[ \alpha^{i j}(\vec{x}) - \beta^{ij}(\vec{x},\tau)],
\label{ES3}
\end{equation}
and keep the lapse function homogeneous, i.e. $N(\tau) = a(\tau)$; $\alpha_{ij}(\vec{x})$ does not contain any spatial gradient while $\beta_{ij}(\vec{x},\tau)$ contains at least one spatial gradient. The extrinsic curvature becomes:
\begin{equation}
K_{i}^{j} = - \biggl(\frac{{\mathcal H}}{a} \delta_{i}^{j} + \frac{1}{2} \frac{\partial_{\tau} \beta_{i}^{j}}{a} \biggr),\qquad
K^{i k} = - \frac{1}{a^3}\biggl[{\mathcal H}( \alpha^{i k} - \beta^{ik}) + \frac{1}{2} \partial_{\tau} \beta^{ik} \biggr].
\label{ES4}
\end{equation}
Furthermore, the spatial Christoffel connections are:
\begin{eqnarray}
\Gamma^{k}_{k a} &=& \partial_{a} \ln{\sqrt{\gamma}} = \frac{1}{2} \biggl( \frac{\partial_{a} \alpha}{\alpha} + \partial_{a} \beta\biggr),
\label{ES5}\\
\Gamma^{m}_{a b} &=& \frac{1}{2}\biggl[ \alpha^{m n} \lambda_{n a b} + \alpha^{m n} \overline{\lambda}_{n a b} -
\beta^{m n} \lambda_{n a b}\biggr],
\label{ES6}
\end{eqnarray}
where $\lambda_{n a b}$ and $\overline{\lambda}_{n a b}$ are defined as
\begin{eqnarray}
\lambda_{n a b} &=& - \partial_{n} \alpha_{a b} + \partial_{b} \alpha_{n a} + \partial_{a}\alpha_{b n},
\label{ES7}\\
\overline{\lambda}_{n a b} &=& - \partial_{n} \beta_{a b} + \partial_{b} \beta_{n a} + \partial_{a}\beta_{b n}.
\label{ES8}
\end{eqnarray}
The relevant term appearing in the momentum constraint then becomes
\begin{equation}
\nabla_{k}\biggl( K^{ k m} - K \gamma^{k m} \biggr) = \nabla_{k} K^{k m} + \frac{\alpha^{k m} \partial_{\tau} \partial_{k} \beta}{2 a^3},
\label{ES9}
\end{equation}
where
\begin{eqnarray}
\nabla_{k} K^{k m} &=& - \frac{1}{a^3} \biggl[\frac{{\mathcal H}}{ 2 \alpha} (\partial_{a} \alpha) \alpha^{a m}
+ {\mathcal H} \partial_{k} \alpha^{k m} + {\mathcal H} \frac{\alpha^{m i} \alpha^{a b}}{2} \lambda_{i a b}\biggr]
\nonumber\\
&+& \frac{1}{2a^3} \biggl[- \partial_{k} \partial_{\tau} \beta^{km} - \frac{1}{2}\biggl(\partial_{\tau} \beta^{a b} \alpha^{m i}\biggr) \lambda_{i a b}
- \biggl(\frac{\partial_{a} \alpha}{2 \alpha}\biggr) \partial_{\tau} \beta^{a m}
\nonumber\\
&-& {\mathcal H} \partial_{a} \beta \alpha^{a m}
+ {\mathcal H} \biggl(\frac{\partial_{a} \alpha}{\alpha}\biggr) \beta^{am}
+ 2 {\mathcal H} \partial_{k} \beta^{k m}
- {\mathcal H} \alpha^{m i} \alpha^{a b} \overline{\lambda}_{i a b}
\nonumber\\
&+& {\mathcal H} \biggl( \alpha^{m i} \beta^{ab} + \alpha^{a b} \beta^{m i}\biggr) \lambda_{i a b}
\biggr].
\label{ES10}
\end{eqnarray}
Equation (\ref{ES9}) can therefore be written as
\begin{eqnarray}
\nabla_{k}\biggl( K^{ k m} - K \gamma^{k m} \biggr) &=& - \frac{{\mathcal H}}{a^3} \biggl[\frac{1}{ 2 \alpha} (\partial_{a} \alpha) \alpha^{a m}
+ \partial_{k} \alpha^{k m} + \frac{\alpha^{m i} \alpha^{a b}}{2} \lambda_{i a b}\biggr]
\nonumber\\
&+& \frac{1}{2a^3} \biggl\{\alpha^{k m} \partial_{k} \partial_{\tau} \beta - \partial_{k} \partial_{\tau} \beta^{km}
\nonumber\\
&-& \biggl[\frac{1}{2}\biggl(\partial_{\tau} \beta^{a b} \alpha^{m i}\biggr) \lambda_{i a b}
+ \biggl(\frac{\partial_{a} \alpha}{2 \alpha}\biggr) \partial_{\tau} \beta^{a m}\biggr]
\nonumber\\
&+& {\mathcal H}\biggl[2 \partial_{k} \beta^{k m}
- \partial_{a} \beta \alpha^{a m} + \biggl(\frac{\partial_{a} \alpha}{\alpha}\biggr) \beta^{am}
- \alpha^{m i} \alpha^{a b} \overline{\lambda}_{i a b}
\nonumber\\
&+& \biggl( \alpha^{m i} \beta^{ab} + \alpha^{a b} \beta^{m i}\biggr) \lambda_{i a b} \biggr]
\biggr\}.
\label{ES11}
\end{eqnarray}
The previous expression can also be recast in a more convenient form:
\begin{equation}
\nabla_{k}\biggl( K^{ k m} - K \gamma^{k m} \biggr) = - \frac{{\mathcal H}}{a^3} {\mathcal Z}^{m}(\alpha) +
\frac{1}{2 a^3} \biggl[ {\mathcal I}_{1}^{m}(\alpha,\beta) - {\mathcal I}_{2}^{m}(\alpha,\beta) + {\mathcal H} {\mathcal I}^{m}_{3}(\alpha,\beta)\biggr],
\label{ES12}
\end{equation}
where the three functionals of $\alpha_{ij}(\vec{x})$ and $\beta_{ij}(\vec{x},\tau)$ are defined as
\begin{eqnarray}
{\mathcal Z}^{m}(\alpha) &=& \frac{1}{2} \frac{\partial_{a} \alpha}{\alpha} \alpha^{a m} + \partial_{q} \alpha^{q m} +
\frac{\alpha^{m q} \alpha^{a b}}{2} \lambda_{q a b},
\label{ES13}\\
{\mathcal I}_{1}^{m}(\alpha,\beta) &=&\alpha^{q m} \partial_{q} \partial_{\tau} \beta - \partial_{\tau} \partial_{q} \beta^{q m},
\label{ES14}\\
{\mathcal I}_{2}^{m}(\alpha,\beta) &=& \frac{\alpha^{q m}}{2} (\partial_{\tau} \beta^{a b}) \lambda_{q a b} + \frac{\partial_{a} \alpha}{2 \alpha} \partial_{\tau} \beta^{a m},
\label{ES15}\\
{\mathcal I}_{3}^{m}(\alpha,\beta) &=& 2 \partial_{q} \beta^{q m} - (\partial_{a} \beta) \alpha^{a m} + \frac{\partial_{a} \alpha}{\alpha}
\beta^{a m} + \lambda_{q a b} \biggl( \alpha^{q m} \beta^{a b} + \alpha^{a b} \beta^{m q} \biggr)
\nonumber\\
&-& \alpha^{m q} \alpha^{a b} \overline{\lambda}_{q a b}.
\label{ES16}
\end{eqnarray}
With the result of Eq. (\ref{ES12}) we can compute the first relevant part of the final expression, namely:
\begin{eqnarray}
&& N^2 \gamma^{a j} \gamma^{i n} \eta_{a m n} \nabla_{k}\biggl( K^{ k m} - K \gamma^{k m} \biggr) =
\frac{\sqrt{\alpha}}{a^2} \biggl( 1 + \frac{\beta}{2}\biggr) \biggl\{ - {\mathcal H} \alpha^{k j} \alpha^{i n} {\mathcal Z}^{m}(\alpha)
\epsilon_{k m n}
\nonumber\\
&+& {\mathcal H} \biggl( \alpha^{k j} \beta^{i n} + \alpha^{i n} \beta^{k j}\biggr) {\mathcal Z}^{m}(\alpha) \epsilon_{k m n}
+ \frac{\alpha^{k j} \alpha^{i n}}{2} \epsilon_{k m n} \biggl[{\mathcal I}_{1}^{m}(\alpha,\beta) - {\mathcal I}^{m}_{2}(\alpha,\beta)
\nonumber\\
&+& {\mathcal H} {\mathcal I}_{3}^{m}(\alpha,\beta) \biggr]\biggr\}.
\label{ES17}
\end{eqnarray}
Recall furthermore that\footnote{We shall assume that $w$, the dominant barotropic index of the fluid sources, is constant.}
\begin{equation}
\ell_{\mathrm{P}}^2 ( p + \rho) a^2 = \frac{3 {\mathcal H}_{1}^2 ( 1 + w)}{\alpha^{(w +1)/2} \, ( 1 + \beta/2)^{w +1}}
\biggl(\frac{a_{1}}{a}\biggr)^{3 w+ 1},\qquad \ell_{\mathrm{P}}^2 \overline{\rho}_{1} a^2_{1} = 3 {\mathcal H}_{1}^2.
\label{ES18}
\end{equation}
Putting all the various parts of the calculation together we have, from Eq. (\ref{ES2}),
\begin{equation}
\omega_{\mathrm{tot}}^{i} = \partial_{j} {\mathcal A}^{i j}, \qquad
{\mathcal A}^{ij} = \frac{N^2 \, \gamma^{k j} \, \gamma^{i n}\, \eta_{k m n}}{\ell_{\mathrm{P}}^2 ( p + \rho)} \nabla_{a}\biggl(
K^{a m} - \gamma^{a m} K\biggr),
\label{ES19}
\end{equation}
and the quantity ${\mathcal A}^{ij}$ then becomes:
\begin{eqnarray}
{\mathcal A}^{ij}(\alpha,\beta) &=& \frac{ \alpha^{(w + 2)/2}}{3 {\mathcal H}_{1}^2 (w + 1)} \biggl(\frac{a}{a_{1}}\biggr)^{3 w+1}
\biggl\{ - {\mathcal H} \alpha^{k j} \alpha^{i n} {\mathcal Z}^{m}(\alpha) \epsilon_{k m n}
\nonumber\\
&+& {\mathcal H} \biggl[ \alpha^{k j} \beta^{i n} + \alpha^{i n} \beta^{k j}\biggr] {\mathcal Z}^{m}(\alpha) \epsilon_{k m n}
+ \frac{\alpha^{k j} \alpha^{i n}}{2} \epsilon_{k m n} \biggl[ {\mathcal I}_{1}^{m}(\alpha,\beta)
\nonumber\\
&-& {\mathcal I}^{m}_{2}(\alpha,\beta) + {\mathcal H} {\mathcal I}_{3}^{m}(\alpha,\beta) \biggr]
- \frac{{\mathcal H}}{2} (w + 2) \beta \alpha^{k j} \alpha^{i n} {\mathcal Z}^{m}(\alpha) \epsilon_{k m n}\biggr\}.
\label{ES20}
\end{eqnarray}
The first line on the right hand side of Eq. (\ref{ES20}) only involves $\alpha_{ij}(\vec{x})$ and it is therefore ${\mathcal O}(\alpha)$. The remaining part of the expression on the right hand side of Eq. (\ref{ES20}) is instead ${\mathcal O}(\beta)$.
Sticking to the situation treated in the present paper, the explicit form of $\beta_{ij}(\vec{x},\tau)$ can be determined in terms
of $\alpha_{ij}(\vec{x})$ by solving the remaining Einstein equations written in terms of the ADM decomposition
\cite{ADM1,ADM2}. For this purpose Eqs. (\ref{00}) and (\ref{ij}) can be written, respectively, as
\begin{eqnarray}
&&\partial_{\tau} K - N {\mathrm Tr} K^2 = \frac{N \ell^2_{\mathrm{P}}}{2}(3 p + \rho),
\label{FORM2}\\
&& \partial_{\tau} K_{i}^{j} - N K K_{i}^{j} - N r_{i}^{j} =
\frac{N \ell_{\mathrm{P}}^2}{2} (p - \rho) \delta_{i}^{j}.
\label{FORM3}
\end{eqnarray}
Inserting Eqs. (\ref{ES3}) and (\ref{ES4}) into Eq. (\ref{FORM2}), the following pair of conditions is obtained
\begin{eqnarray}
&& \partial_{\tau}\biggl(\frac{\partial_{\tau} \beta}{2 a }\biggr) + \frac{{\mathcal H}}{a} \partial_{\tau} \beta = -
\frac{a \ell_{\mathrm{P}}^2}{2} (3 p^{(1)} + \rho^{(1)}),
\label{SOL1}\\
&& \partial_{\tau} {\mathcal H} = - \frac{a^2 \ell_{\mathrm{P}}^2}{2} ( \rho^{(0)} + 3 p^{(0)}).
\label{SOL1a}
\end{eqnarray}
To obtain Eqs. (\ref{SOL1}) and (\ref{SOL1a}) the total pressure and the total energy density
have been separated as:
\begin{equation}
p(\vec{x},\tau) = p^{(0)}(\tau) + p^{(1)}(\vec{x},\tau), \qquad \rho(\vec{x},\tau) = \rho^{(0)}(\tau) + \rho^{(1)}(\vec{x},\tau),
\label{SOL1b}
\end{equation}
where $p^{(1)}(\vec{x},\tau)$ and $\rho^{(1)}(\vec{x},\tau)$ vanish in the conformally flat limit.
Inserting Eqs. (\ref{ES3}) and (\ref{ES4}) into Eq. (\ref{FORM3}), two further equations are obtained:
\begin{eqnarray}
&& \partial_{\tau} \biggl( \frac{\partial_{\tau} \beta_{i}^{j}}{2 a} \biggr) +
{\mathcal H} \frac{\partial_{\tau} \beta}{2 a} \delta_{i}^{j} +
\frac{3 {\mathcal H}}{2 a } \partial_{\tau} \beta_{i}^{j} + a r_{i}^{j} = - \frac{a \ell_{\mathrm{P}}^2}{2} (p^{(1)} - \rho^{(1)}) \delta_{i}^{j},
\label{SOL2}\\
&& \partial_{\tau} {\mathcal H} + 2 {\mathcal H}^2 = -
\frac{\ell_{\mathrm{P}}^2 a^2}{2} (p^{(0)} - \rho^{(0)}).
\label{SOL2a}
\end{eqnarray}
Solving Eqs. (\ref{SOL1a}) and (\ref{SOL2a}) under the hypothesis of constant barotropic index (already
assumed in Eq. (\ref{ES18})), $ p^{(1)}$ and $\rho^{(1)}$ can be eliminated
between Eqs. (\ref{SOL1}) and (\ref{SOL2}) and it turns out that $\beta_{ij}(\vec{x},\tau)$ obeys
the following evolution equation:
\begin{equation}
\partial_{\tau}^2 \beta_{i}^{j} + 2 {\mathcal H} \partial_{\tau} \beta_{i}^{j} +
\delta_{i}^{j} \biggl( \frac{1 - w}{1 + 3 w} \partial_{\tau}^2 \beta + 2 \frac{1+w}{1 + 3 w}
{\mathcal H} \partial_{\tau} \beta\biggr)
+ 2 a^2 r_{i}^{j} =0.
\label{FORM3a}
\end{equation}
By solving Eq. (\ref{FORM3a}) the explicit form of $\beta_{ij}$ can be written in a separable form as
$\beta_{i}^{j}(\vec{x},\tau) = g(\tau) \mu_{i}^{j}(\vec{x})$ where:
\begin{eqnarray}
&& g(\tau) = a^{3 w +1},
\label{FORM4}\\
&& \mu_{i}^{j}(\vec{x}) = - \frac{4}{H_{\mathrm{i}}^2 ( 3 w + 5) ( 3 w +1)} \biggl[ P_{i}^{j}(\vec{x})
+ \frac{3 w^2 - 6 w - 5}{4 ( 9 w + 5)} P(\vec{x}) \delta_{i}^{j} \biggr].
\label{FORM5}
\end{eqnarray}
Note that $P_{i}^{j}(\vec{x}) = r_{i}^{j}(\vec{x},\tau) a^2(\tau)$ accounts for the
intrinsic curvature computed from $\alpha_{ij}(\vec{x})$. In Eqs. (\ref{FORM2}) and (\ref{FORM3}) the contributions of the velocity fields and of the magnetic fields have been neglected since they are subleading with respect to the ${\mathcal O}(\beta)$ terms.
In the following two sections we will therefore present the full estimate
of the vorticity to first-order in the gradient expansion. If needed the first-order
result, together with Eqs. (\ref{FORM4}) and (\ref{FORM5}) can be used to estimate the
vorticity to higher order.
\renewcommand{\theequation}{5.\arabic{equation}}
\setcounter{equation}{0}
\section{Vorticity to first-order in the gradient expansion}
\label{sec5}
The simplest parametrization of $\alpha_{ij}(\vec{x})$ which does
not contain spatial gradients can be written as
\begin{equation}
\alpha_{ij}(\vec{x}) = e^{- 2 \Psi(\vec{x})} \delta_{ij}, \qquad \alpha = \mathrm{det}\,\alpha_{ij} = e^{- 6 \Psi(\vec{x})}.
\label{F0}
\end{equation}
In this case it is easy to show that ${\mathcal Z}^{m}(\alpha)= 0$ and therefore the first-order contribution in the gradient expansion vanishes identically; a minimal check of this statement is reported right below.
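Indeed, with $\alpha_{ij} = e^{-2\Psi}\,\delta_{ij}$ one has $\partial_{a}\alpha/\alpha = - 6\, \partial_{a}\Psi$ and $\lambda_{q a b} = 2\, e^{-2\Psi}( \partial_{q}\Psi\, \delta_{a b} - \partial_{b}\Psi\, \delta_{q a} - \partial_{a}\Psi\, \delta_{b q})$; the three terms of Eq. (\ref{ES13}) then contribute, respectively, $-3$, $+2$ and $+1$ times $e^{2\Psi}\,\partial^{m}\Psi$, i.e.
\begin{equation}
{\mathcal Z}^{m}(\alpha) = e^{2\Psi}\, \bigl( - 3 + 2 + 1 \bigr)\, \partial^{m}\Psi = 0.
\end{equation}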
In the $\Lambda$CDM scenario the scalar mode appearing in Eq. (\ref{F0}) leads to $|\Psi(\vec{x})| \ll 1$ and therefore, in practice, $\alpha_{ij}(\vec{x})$ is accurately estimated by $\delta_{ij} - 2 \Psi(\vec{x}) \delta_{ij}$. To have ${\mathcal Z}^{m}(\alpha) \neq 0$ the contribution of the tensor modes must be included and $\alpha_{ij}(\vec{x})$ will then be given by:
\begin{equation}
\alpha_{ij}(\vec{x}) = \biggl[ \delta_{ij} + h_{ij}(\vec{x})\biggr],\qquad
\alpha^{ij}(\vec{x}) = \biggl[ \delta^{ij} - h^{ij} + h^{i k} h_{k}^{j}\biggr],\qquad\sqrt{\alpha} = \biggl[ 1 - \frac{1}{4} h_{i}^{k} \, h_{k}^{i} \biggr],
\label{F0a}
\end{equation}
where $h_{ij}$ is divergenceless and traceless, i.e. $\partial_{i} h^{i j}= h_{i}^{i}= 0$. It must be borne in mind that
the scalar and the tensor modes, in the $\Lambda$CDM scenario and in its tensor extension, are defined in terms
of the conventional perturbative expansion. As a consequence of the latter statement, the information on the spatial inhomogeneities of the model is not specified by assigning the analog
$\alpha_{ij}( \vec{x})$ (or $\gamma_{ij}(\vec{x},\tau)$ to a given order in the spatial gradients).
On the contrary, as it is more natural, the scalar and tensor modes of the geometry are
specified by assigning the corresponding power spectra at a given pivot scale. To evaluate
the appropriate correlators defining the vorticity we shall need first to obtain
the fluctuations in real space (as opposed to Fourier space). Therefore, as we will show in the
present and in the following section, the idea will be first to compute the fluctuations in real space
and then to use the obtained result for the determination of the correlators defining the vorticity.
This procedure will circumvent the calculation of complicated convolutions and will also be perfectly
suitable for the applications described in section \ref{sec6}.
Using then Eq. (\ref{ES13}) we have that
\begin{equation}
{\mathcal Z}^{m}(\alpha) = \partial_{q} \alpha^{q m} + \alpha^{m q} \alpha^{a b} \partial_{b} \alpha_{q a}
= h^{ q m} \, h^{a b}\, \partial_{b} h_{q a} + h^{a p}\, h_{p}^{b}\, \partial_{b} h_{a}^{m}.
\label{F0b}
\end{equation}
From Eq. (\ref{ES20}) the tensor ${\mathcal A}^{ij}(\alpha,\beta)$ can be computed to lowest order
(i.e. by setting $\beta=0$) and the result will therefore be written, using Eq. (\ref{F0b}), as
\begin{equation}
{\mathcal A}^{ij}(\alpha) =- \frac{{\mathcal H}}{3 {\mathcal H}_{1}^2 (w + 1)} \biggl(\frac{a}{a_1}\biggr)^{3 w +1} \, \epsilon^{m i j}
\biggl[ h^{a \ell} h_{\ell}^{b} \partial_{b} h_{a m} + h_{m q} h^{b a } \partial_{b} h^{q}_{a} \biggr]
+ {\mathcal O}(\epsilon^2).
\label{F2}
\end{equation}
Finally, the total vorticity can be derived directly from Eq. (\ref{ES19})
\begin{eqnarray}
\omega^{i}_{\mathrm{tot}} &=& - {\mathcal L}(\tau,w) \, \epsilon^{m i j} \partial_{j}\biggl[ h^{a \ell} h_{\ell}^{b} \partial_{b} h_{a m} + h_{m q} h^{b a } \partial_{b} h_{q a} \biggr] + {\mathcal O}(\epsilon^3),
\nonumber\\
{\mathcal L}(\tau,w) &=& \frac{{\mathcal H}}{3 {\mathcal H}_{1}^2 (w + 1)} \biggl(\frac{a}{a_1}\biggr)^{3 w +1}.
\label{F3}
\end{eqnarray}
To give an explicit estimate of the primordial vorticity the relevant cosmological parameters will be taken to be the
ones determined on the basis of the WMAP 7yr data alone \cite{wmap7a,wmap7b}.
In the $\Lambda$CDM paradigm the sole source of curvature inhomogeneities is represented by the
standard adiabatic mode whose associated power spectrum is assigned
at the comoving pivot scale $k_{\mathrm{p}} = 0.002\, \mathrm{Mpc}^{-1}$
with characteristic amplitude ${\mathcal A}_{{\mathcal R}}$
\begin{equation}
\langle {\mathcal R}(\vec{k},\tau) {\mathcal R}(\vec{p},\tau) \rangle = \frac{2 \pi^2}{k^3}
{\mathcal P}_{{\mathcal R}}(k) \delta^{(3)}(\vec{k} + \vec{p}), \qquad {\mathcal P}_{{\mathcal R}}(k) = {\mathcal A}_{{\mathcal R}} \biggl(\frac{k}{k_{\mathrm{p}}}\biggr)^{n_{\mathrm{s}}-1},
\label{AD1}
\end{equation}
where $n_{\mathrm{s}}$ denotes the spectral index associated with the fluctuations of the spatial curvature. According to the WMAP 7yr data alone analyzed in the light of the $\Lambda$CDM paradigm and without tensor modes \cite{wmap7a,wmap7b} the determinations
of ${\mathcal A}_{{\mathcal R}}$ and of $n_{\mathrm{s}}$ lead, respectively, to
${\mathcal A}_{{\mathcal R}} = (2.43 \pm 0.11)\times 10^{-9} $ and to $n_{\mathrm{s}} = 0.963 \pm 0.014$.
The standard $\Lambda$CDM scenario, sometimes dubbed vanilla $\Lambda$CDM, is defined by six pivotal parameters
whose specific values are, in the absence of tensor modes\footnote{Following the standard notations (slightly modified to
avoid possible clashes with previously defined variables)
$\Omega_{\mathrm{b}}, \, \Omega_{\mathrm{c}}, \Omega_{\mathrm{de}}$ denote, respectively, the present critical fractions
of the baryons, of the dark matter, of the dark energy; $h_{0}$ is the Hubble constant in units of $100\, \mathrm{km}/(\mathrm{sec}\, \mathrm{Mpc})$, $n_{\mathrm{s}}$ is the scalar spectral index while $\epsilon_{\mathrm{re}}$ denotes
the optical depth at recombination.}
\begin{equation}
( \Omega_{\mathrm{b}}, \, \Omega_{\mathrm{c}}, \Omega_{\mathrm{de}},\, h_{0},\,n_{\mathrm{s}},\, \epsilon_{\mathrm{re}}) \equiv
(0.0449,\, 0.222,\, 0.734,\,0.710,\, 0.963,\,0.088).
\label{PP1}
\end{equation}
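Note, for later convenience, that the fiducial combinations $h_{0}^2 \Omega_{\mathrm{M}0}$ and $h_{0}^2 \Omega_{\mathrm{b}0}$ used in the explicit estimates of section \ref{sec6} follow directly from Eq. (\ref{PP1}); a minimal numerical sketch (assuming $\Omega_{\mathrm{M}0} = \Omega_{\mathrm{b}} + \Omega_{\mathrm{c}}$) is:
\begin{verbatim}
# Derived combinations from Eq. (PP1); it is assumed here that
# Omega_M0 = Omega_b + Omega_c (baryons plus cold dark matter).
h0, Omega_b, Omega_c = 0.710, 0.0449, 0.222

print(h0**2 * (Omega_b + Omega_c))  # h0^2 Omega_M0 ~ 0.134
print(h0**2 * Omega_b)              # h0^2 Omega_b0 ~ 0.0226
\end{verbatim}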
To estimate the correlation functions associated with Eqs. (\ref{F2}) and (\ref{F3})
it is mandatory to know in detail the numerical value
of the correlation function of the tensor modes of the geometry which have not been
detected so far but whose specific upper limits will determine the maximal magnetic field
obtainable from the vorticity of the geometry. The tensor modes of the geometry
are described in terms of a rotationally and parity invariant two-point function
\begin{equation}
\langle h_{ij}(\vec{x},\tau) \, h_{ij}(\vec{y},\tau) \rangle = \int \frac{dk}{k} {\mathcal P}_{\mathrm{T}}(k,\tau) \, \frac{\sin{k r}}{kr},
\label{PP2}
\end{equation}
where the tensor power spectrum at the generic time $\tau$ is given by the primordial spectrum multiplied by the appropriate transfer function:
\begin{equation}
{\mathcal P}_{\mathrm{T}}(k,\tau) = {\mathcal M}(k,\, k_{\mathrm{eq}},\, \tau) \overline{{\mathcal P}}_{\mathrm{T}}(k), \qquad
\overline{{\mathcal P}}_{\mathrm{T}}(k) = {\mathcal A}_{\mathrm{T}} \biggl(\frac{k}{k_{\mathrm{p}}}\biggr)^{n_{\mathrm{T}}};
\label{PP2a}
\end{equation}
note that ${\mathcal A}_{\mathrm{T}}$ is the amplitude of the tensor power spectrum and $n_{\mathrm{T}}$ is the tensor
spectral index. The transfer function ${\mathcal M}(k,\, k_{\mathrm{eq}},\, \tau)$ can be computed under several approximations depending
upon the required accuracy. The transfer function for the amplitude of the tensor modes can be numerically
computed by solving the evolution of the tensor fluctuations across the matter-radiation equality and the result is \cite{mgt1,mgt2}
\begin{equation}
{\mathcal M}(k,\, k_{\mathrm{eq}},\, \tau)= \frac{9\,j^2_{1}(k\tau)}{|k\tau|^2}
\biggl[ 1 + c_{1} \biggl(\frac{k}{k_{\mathrm{eq}}}\biggr) + c_{2}\biggl(\frac{k}{k_{\mathrm{eq}}}\biggr)^2\biggr],
\label{PP2b}
\end{equation}
where\footnote{The analysis of \cite{tt1} gave $c_{1}= 1.34$ and $c_{2}= 2.50$
which is fully compatible with the results of \cite{mgt1,mgt2}. In the approach of \cite{tt1} (see also \cite{tur1}) the calculation of the amplitude transfer function, in fact, involves a delicate matching of the phases of the tensor mode
functions. Conversely, if the transfer function is computed directly for the spectral energy density, the oscillatory contributions are suppressed as the wavelengths get shorter than the Hubble radius (see below).},
according to \cite{mgt1,mgt2}, $c_{1} = 1.26$ and $c_{2} = 2.68$.
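A minimal numerical sketch of Eqs. (\ref{PP2a})--(\ref{PP2b}) is reported below; the function names and the sample parameters are assumptions of this sketch, with $c_{1}$ and $c_{2}$ fixed to the values of Refs. \cite{mgt1,mgt2} quoted above.
\begin{verbatim}
import numpy as np

# Illustrative sketch of Eqs. (PP2a)-(PP2b); function names and
# default parameters are assumptions of this sketch.
c1, c2 = 1.26, 2.68      # transfer-function coefficients of [mgt1, mgt2]

def j1(y):
    # spherical Bessel function of the first kind, j_1(y)
    return np.sin(y) / y**2 - np.cos(y) / y

def M(k, k_eq, tau):
    # amplitude transfer function of Eq. (PP2b)
    return 9.0 * j1(k * tau)**2 / (k * tau)**2 * (
        1.0 + c1 * (k / k_eq) + c2 * (k / k_eq)**2)

def P_T(k, k_eq, tau, A_T, n_T, k_p=0.002):
    # tensor power spectrum at time tau, Eq. (PP2a); k in Mpc^{-1}
    return M(k, k_eq, tau) * A_T * (k / k_p)**n_T
\end{verbatim}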
In Eq. (\ref{PP2b}) $j_{1}(y) = (\sin{y}/y^2 - \cos{y}/y)$ is the spherical Bessel function of the first kind which is related to the approximate solution of the evolution equations for the tensor mode functions whenever the solutions are computed deep in the matter-dominated phase (i.e. $a(\tau) \simeq \tau^2$). Instead of working directly with ${\mathcal A}_{\mathrm{T}}$ it
is often preferred to introduce the quantity customarily called $r_{\mathrm{T}}$ denoting the
ratio between the tensor and the scalar amplitude at the pivot scale $k_{\mathrm{p}}$
\begin{equation}
r_{\mathrm{T}} = \frac{{\mathcal A}_{\mathrm{T}}}{{\mathcal A}_{{\mathcal R}}}= \frac{\overline{{\mathcal P}}_{\mathrm{T}}(k_{\mathrm{p}})}{{\mathcal P}_{{\mathcal R}}(k_{\mathrm{p}})}.
\label{PP3}
\end{equation}
In principle $n_{\mathrm{T}}$ can be taken to be independent of $r_{\mathrm{T}}$ and this possibility will also
be contemplated in the present discussion. At the same time, if the scalar and the tensor modes
are both of inflationary origin, $n_{\mathrm{T}}$ is related to $r_{\mathrm{T}}$ and to the slow-roll
parameter $\epsilon$ which measures the rate of decrease of the Hubble parameter
during the conventional inflationary stage of expansion:
\begin{equation}
n_{\mathrm{T}} = - \frac{r_{\mathrm{T}}}{8} = - 2 \epsilon, \qquad \epsilon = - \frac{\dot{H}}{H^2};
\label{PP4}
\end{equation}
the overdot denotes the usual derivative with respect to the cosmic time coordinate; in Eq. (\ref{PP4})
the spectral index is frequency-independent but there exist situations where more general possibilities
can be contemplated such as, for instance
\begin{equation}
n_{\mathrm{T}} = - 2 \epsilon + \frac{\alpha_{\mathrm{T}}}{2} \ln{(k/k_{\mathrm{p}})}, \qquad \alpha_{\mathrm{T}} = \frac{r_{\mathrm{T}}}{8}\biggl[ (n_{\mathrm{s}} -1) + \frac{r_{\mathrm{T}}}{8}\biggr].
\label{PP5}
\end{equation}
If $\alpha_{\mathrm{T}} =0$ the tensor spectral index $n_{\mathrm{T}}$ does not depend upon the frequency and this is the case which is, somehow, endorsed when introducing gravitational waves in the minimal tensor extension of the $\Lambda$CDM. If a tensor component is allowed in the analysis
of the WMAP 7yr data alone the relevant cosmological parameters are determined to be
\begin{equation}
( \Omega_{\mathrm{b}}, \, \Omega_{\mathrm{c}}, \Omega_{\mathrm{de}},\, h_{0},\,n_{\mathrm{s}},\, \epsilon_{\mathrm{re}}) \equiv
(0.0430,\, 0.200,\, 0.757,\,0.735,\, 0.982,\,0.091).
\label{Par2}
\end{equation}
In the case of Eq. (\ref{PP1}) the amplitude of the scalar modes is ${\mathcal A}_{{\mathcal R}} =
(2.43 \pm 0.11) \times 10^{-9}$ while in the case of Eq. (\ref{Par2}) the corresponding values of ${\mathcal A}_{{\mathcal R}}$ and of $r_{\mathrm{T}}$ are given by
\begin{equation}
{\mathcal A}_{{\mathcal R}} = (2.28 \pm 0.15)\times 10^{-9},\qquad r_{\mathrm{T}} < 0.36,
\label{Par3}
\end{equation}
to $95$ \% confidence level. To avoid confusion it is appropriate to spend a word of care on the figures implied by Eqs. (\ref{Par2}) and (\ref{Par3}), which have been used in the numerical analysis just for the sake of accuracy. The qualitative features
of the effects discussed here do not change if, for instance, one
would endorse the parameters drawn from the comparison of the minimal tensor extension
of the $\Lambda$CDM with the WMAP 5yr data release \cite{wmap5a,wmap5b}, implying, for instance, ${\mathcal A}_{{\mathcal R}} = 2.1^{+2.2}_{-2.3}\times 10^{-9}$, $n_{\mathrm{s}} =0.984$ and $r_{\mathrm{T}} < 0.65$ (95 \% confidence level). Similar orders
of magnitude can be also obtained from even older releases \cite{wmap3,wmapfirst}.
\renewcommand{\theequation}{6.\arabic{equation}}
\setcounter{equation}{0}
\section{Magnetic field induced by the total vorticity}
\label{sec6}
The total vorticity derived in the previous sections is larger than the vorticity of the ions. Therefore, the total magnetic field derived on the basis of $\omega_{\mathrm{tot}}^{i}$ is larger than the one derived on the basis of the ion contribution. Of course this statement holds in an averaged sense since what matters
is not the vorticity itself but rather its two-point function which will be explicitly
computed in the present section. Using Eq. (\ref{F3}) the maximal obtainable magnetic field
will be the one given by Eqs. (\ref{D7})--(\ref{D8}) or (\ref{D12}) where the total vorticity induced by the geometry is given by Eq. (\ref{F3})
\begin{equation}
B_{\mathrm{max}}^{i}(\vec{x},\tau) = - \frac{\rho_{\mathrm{i}} \sqrt{\gamma} }{ e\,N^2 \tilde{n}_{\mathrm{i}}} \omega^{i}_{\mathrm{tot}}(\vec{x},\tau),
\label{Fmax1}
\end{equation}
which can also be written, by explicitly keeping track of the number of gradients, as
\begin{equation}
B_{\mathrm{max}}^{i}(\vec{x},\tau) = \biggl\{{\mathcal L}(\tau,w) \, \epsilon^{m i j} \partial_{j}\biggl[ h^{a \ell} h_{\ell}^{b} \partial_{b} h_{a m} + h_{m q} h^{b a } \partial_{b} h^{q}_{a} \biggr] + {\mathcal O}(\epsilon^3) \biggr\} \, a(\tau)\, \biggl[ 1 + {\mathcal O}(\epsilon^2) \biggr].
\label{Fmax2}
\end{equation}
The prefactor appearing in Eq. (\ref{Fmax1}) has been estimated, in Eq. (\ref{Fmax2}), by recalling
that, to lowest order in the gradient expansion
\begin{equation}
\partial_{\tau} \rho_{\mathrm{i}} = N K \rho_{\mathrm{i}}, \qquad \partial_{\tau} \tilde{n}_{\mathrm{i}} = N K \tilde{n}_{\mathrm{i}},
\label{pref1}
\end{equation}
implying that $\rho_{\mathrm{i}}$ and $\tilde{n}_{\mathrm{i}}$ scale in the same way with $\sqrt{\gamma}$ since
$ N K = - \partial_{\tau} \ln{\sqrt{\gamma}}$. But then, from Eqs. (\ref{FORM4}), (\ref{FORM5}) and (\ref{F0a}):
\begin{equation}
\frac{\rho_{\mathrm{i}} \sqrt{\gamma} }{ N^2 \tilde{n}_{\mathrm{i}}} = a(\tau) \biggl[ 1 + {\mathcal O}(\epsilon^2)\biggr],
\label{pref2}
\end{equation}
where the first correction is ${\mathcal O}(\epsilon^2)$ and depends on $\beta$ (see, e.g. Eqs. (\ref{FORM4}) and (\ref{FORM5})) but it will be immaterial for the present ends.
From now on the subscripts will be dropped but it will always be understood that we are referring here to the total vorticity and to the maximally achievable magnetic field. As a consequence of Eq. (\ref{Fmax1}) the correlation function of the magnetic field can be related to the correlation function of the vorticity.
To estimate the correlation of the vorticity and to obtain an explicit expression
the key point is to reduce the six-point function of the
tensor modes to the product of two-point functions. For this purpose it is not sufficient to consider
the trace of the two-point function introduced in Eq. (\ref{PP2}) but it is rather necessary
to proceed with the full tensorial structure of the correlator whose general parity and rotationally-invariant form will be denoted as
\begin{equation}
G_{i j m n}(r) = \langle h_{ij}(\vec{x},\tau) \, h_{m n}(\vec{y},\tau) \rangle,
\label{F6}
\end{equation}
where $G_{i j m n}(r)$ is only a function of $r = |\vec{r}|$, with $\vec{r} = \vec{x} - \vec{y}$. Since both
$h_{i j}(\vec{x},\tau)$ and $h_{m n}(\vec{y},\tau)$ are transverse and traceless, $G_{i j m n}(r)$ will have to share
the same properties. In particular, $G_{i j m n}(r)$ must be symmetric under $i\leftrightarrow j$, $m \leftrightarrow n$, $(i\,j)\leftrightarrow(m\, n)$ and satisfy
the following properties
\begin{eqnarray}
&& \frac{\partial}{\partial r^{i}} G_{i j m n}=0, \qquad G_{i i m n} = G_{i j m m} = 0
\label{F7}\\
&& \mathrm{Tr}[ G_{i j m n} ] = G_{i j i j}= \int \frac{dk}{k} {\mathcal P}_{\mathrm{T}}(k) \, \frac{\sin{k r}}{kr}.
\label{F8}
\end{eqnarray}
The properties of Eqs. (\ref{F7}) and (\ref{F8}) are a reflection of the divergenceless and traceless
nature of $h_{ij}(\vec{x},\tau)$ while the requirement on the trace follows from the consistency with Eq. (\ref{PP2}).
The general form of $G_{i j m n}$ can therefore be written as
\begin{eqnarray}
G_{i j m n}(r) &=& \biggl(\delta_{i m} \delta_{n j} + \delta_{m j} \delta_{n i} \biggr)\, G_{1}(r) + \delta_{i j} \delta_{m n}\, G_{2}(r)
\nonumber\\
&+& \biggl(\delta_{i j} \, r_{m}\, r_{n}\, + \delta_{m n}\, r_{i}\, r_{j}\biggr) G_{3}(r)
\nonumber\\
&+& \biggl(\delta_{j n} r_{i} r_{m} +
\delta_{i m} r_{j} r_{n} + \delta_{j m} r_{i} r_{n} + \delta_{i n} r_{j} r_{m}\biggr) G_{4}(r)
\nonumber\\
&+& r_{i}\, r_{j}\, r_{m}\,r_{n} G_{5}(r),
\label{F9A}
\end{eqnarray}
where the various independent functions appearing in Eq. (\ref{F9A}) are determined in appendix \ref{APPB}.
The methods used to analyze the real-space correlators are the ones exploited in usual
applications of statistical fluid mechanics \cite{monin,stoc}.
To evaluate Eq. (\ref{Fmax2}) one can then proceed as follows. Using Eq. (\ref{F3}), the explicit form of the correlator
of the vorticity becomes
\begin{eqnarray}
\langle \omega^{i}(\vec{x},\tau)\, \omega^{i}(\vec{y},\tau) \rangle &=& {\mathcal L}^2(\tau,w) \epsilon^{j m i}\, \epsilon^{j' m' i} \frac{\partial^2}{\partial y^{j'}\, \partial y^{b'}} \frac{\partial^2 }{\partial x^{j} \, \partial x^{b}} {\mathcal T}_{b m b' m'}
\nonumber\\
{\mathcal T}_{b m b' m'} &=&
\langle\biggl[ h_{a\, b} \, h_{ a \, q} \, h_{q \,m}\biggr]_{\vec{x}} \, \biggl[ h_{a'\, b'} h_{a' \, q'} h_{q' \, m'}\biggr]_{\vec{y}} \rangle.
\label{F12}
\end{eqnarray}
By defining $\langle \omega^2(r) \rangle = \langle \omega^{i}(\vec{x},\tau)\, \omega^{i}(\vec{y},\tau) \rangle$ and by
recalling the notations of appendix \ref{APPB}, we shall have that\footnote{Recall that $\vec{r} = \vec{x} - \vec{y}$, i.e. $ r= |\vec{r}| = | \vec{x} - \vec{y}|$.}
\begin{equation}
\langle\omega^2(r)\rangle = {\mathcal L}^2(\tau,w) \epsilon^{j m i}\, \epsilon^{j' m' i} \frac{\partial^2}{\partial r^{j'}\, \partial r^{b'}}
\frac{\partial^2 }{\partial r^{j} \, \partial r^{b}} {\mathcal T}_{m' b' m b},
\label{F13}
\end{equation}
where the quantity ${\mathcal T}_{m' b' m b}$ is a function of $r$; the explicit form of ${\mathcal T}_{m' b' m b}$ is given in
appendix \ref{APPB} in terms of the two-point functions $G_{i j m n}$. Furthermore, ${\mathcal T}_{m' b' m b}$ can be written, in general terms, as
\begin{eqnarray}
{\mathcal T}_{m' b' m b} &=& T_{1}(r) (\delta_{m' b} \delta_{m b'} + \delta_{m m'} \delta_{b b'})
+ T_{2}(r) \delta_{m' b'} \delta_{m b}
\nonumber\\
&+& T_{3}(r) ( \delta_{m' b'} r_{m} r_{b} + \delta_{m b} r_{m'} r_{b'}) + T_{4}(r) ( \delta_{b m'} r_{m} r_{b'} +
\delta_{m b'} r_{b} r_{m'}
\nonumber\\
&+& \delta_{m m'} r_{b} r_{b'} + \delta_{b b'} r_{m} r_{m'}) + r_{m'} r_{b'} r_{m} r_{b} T_{5}(r).
\label{F15}
\end{eqnarray}
By using the results of appendix \ref{APPB} the explicit values of the five $T_{i}(r)$ can be expressed in terms
of the two-point functions $G_{i j m n}$ (see Eqs. (\ref{AB2}) and (\ref{AB3})--(\ref{AB7})).
There are two physically complementary regimes where the primordial vorticity and hence
the magnetic field can be evaluated. Comoving lengths $r_{\mathrm{G}}$ ranging between $1$ and $100$ Mpc are smaller than the Hubble radius at equality since\footnote{The quantity $r_{\mathrm{T}}$ (denoting, in sec. \ref{sec5}, the tensor
to scalar ratio) must not be confused with $r_{\mathrm{G}}$ and $r_{\mathrm{eq}}$ which denote specific
values of the radial coordinate.}
\begin{equation}
r_{\mathrm{eq}} =
\frac{2 (\sqrt{2} -1)}{H_{0}} \frac{\sqrt{ \Omega_{\mathrm{R}0}}}{ \Omega_{\mathrm{M}0}}
= 119.397 \biggl(\frac{h_{0}^2 \Omega_{\mathrm{M}0}}{0.134}\biggr)^{-1} \biggl(\frac{h_{0}^2 \Omega_{\mathrm{R}0}}{4.15 \times 10^{-5}}\biggr)^{1/2}\,\,\mathrm{Mpc},
\label{TA13}
\end{equation}
where $H_{0}$ is the present value of the Hubble rate, $\Omega_{\mathrm{M}0}$ is the
present value of the critical fraction in matter and $\Omega_{\mathrm{R}0}$ is the
present value of the critical fraction in radiation. The pivot length $r_{\mathrm{p}} = 500\, \mathrm{Mpc}$
at which the tensor amplitudes are assigned is such that $r_{\mathrm{G}} < r_{\mathrm{eq}} < r_{\mathrm{p}
}$. Therefore, after matter-radiation equality and, in particular, at photon decoupling, the correlation function
of the magnetic field can be estimated as
\begin{eqnarray}
\langle B^2(r) \rangle &=& 6.348 \times 10^{-76} \biggl(\frac{r_{\mathrm{T}}}{0.32}\biggr)^3 \biggl(\frac{{\mathcal A}_{{\mathcal R}}}{2.43 \times 10^{-9}}\biggr)^{3} \biggl(\frac{z_{\mathrm dec} +1}{1089.2}\biggr)^2
\nonumber\\
&\times& \biggl(\frac{h_{0}^2 \Omega_{\mathrm{M}0}}{0.134}\biggr)^{6}\,
\biggl(\frac{h_{0}^2 \Omega_{\mathrm{R}0}}{4.15 \times 10^{-5}}\biggr)^{-6}\,\, {\mathcal C}(n_{\mathrm{T}}, r)
\,\, \mathrm{G}^2,
\nonumber\\
{\mathcal C}(n_{\mathrm{T}}, r) &=& c(n_{\mathrm{T}}) \biggl(\frac{r}{r_{\mathrm{p}}}\biggr)^{ 8 - 3 n_{\mathrm{T}}}
+ d(n_{\mathrm{T}}),
\label{TA14}
\end{eqnarray}
in units of $\mathrm{G}^2\equiv \mathrm{Gauss}^2$ and
where the constants $c(n_{\mathrm{T}})$ and $d(n_{\mathrm{T}})$ are given by\footnote{Recall that because of the relation (\ref{PP4}) $n_{\mathrm{T}} < 0$ and $r_{\mathrm{T}}>0$.}
\begin{eqnarray}
c(n_{\mathrm{T}}) &=& - 2 (n_{\mathrm{T}} - 4 ) (n_{\mathrm{T}} - 3 )[ 2 n_{\mathrm{T}}(n_{\mathrm{T}} - 6) + 19]
\cos^3{\biggl(\frac{n_{\mathrm{T}} \pi}{2}\biggr)} \Gamma^3(n_{\mathrm{T}} - 5),
\nonumber\\
d(n_{\mathrm{T}}) &=& - \frac{36 + 14 n_{\mathrm{T}} (n_{\mathrm{T}} - 4)}{ 45 n_{\mathrm{T}} ( n_{\mathrm{T}}^2 - 6n_{\mathrm{T}} + 8 )^2}.
\label{TA15}
\end{eqnarray}
The typical values of $n_{\mathrm{T}}$ are negative and ${\mathcal O}(10^{-2})$. Indeed,
assume, consistently with Eq. (\ref{Par3}), that $r_{\mathrm{T}} \sim 0.32$. Then, according to Eq. (\ref{PP4}), $n_{\mathrm{T}} \sim - 0.04$
and $\epsilon \sim 0.02$.
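For reference, the numbers just quoted follow from the standard single-field consistency relations, presumably the content of Eq. (\ref{PP4}); a hedged sketch:
\begin{verbatim}
# The numbers quoted above, assuming the standard single-field
# consistency relations r_T = 16*epsilon and n_T = -r_T/8.
def tensor_tilt(r_T):
    return -r_T / 8.0

def slow_roll_epsilon(r_T):
    return r_T / 16.0

print(tensor_tilt(0.32), slow_roll_epsilon(0.32))   # -0.04 0.02
\end{verbatim}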
Concerning the results of Eqs. (\ref{TA14}) and (\ref{TA15}), a few comments are in order:
\begin{itemize}
\item{} the prefactor ${\mathcal L}(\tau,w)$ is estimated under the hypothesis
$w=0$, $a_{1} = a_{\mathrm{eq}}$ and ${\mathcal H}_{1} = {\mathcal H}_{\mathrm{eq}}$ since
we ought to estimate the field prior to photon decoupling;
\item{} recalling that ${\mathcal H} = a H$, the value of the Hubble rate at the equality time can be
estimated as:
\begin{equation}
H_{\mathrm{eq}} = \sqrt{ 2 \,\,\Omega_{\mathrm{M}0}} \, H_{0} \, \biggl(\frac{a_{0}}{a_{\mathrm{eq}}}\biggr)^{3/2} \equiv 1.65 \times 10^{-56} \biggl(\frac{h_{0}^2 \Omega_{\mathrm{M}0}}{0.134}\biggr)^2\, M_{\mathrm{P}};
\end{equation}
\item{} the result of Eq. (\ref{TA14}) holds for comoving scales $r < r_{\mathrm{p}} = 500$ Mpc
(which are the ones relevant for the gravitational collapse of the protogalaxy) and it is not sensitive to the variation of $r$ provided $n_{\mathrm{T}}$ is nearly scale-invariant;
\item{} if $r_{\mathrm{T}} \simeq 0.32$, then $n_{\mathrm{T}} = - 0.04$; from Eq. (\ref{TA15}), ${\mathcal C}(-0.04, r_{\mathrm{G}}) \simeq 0.07$ while for $r= 100\, r_{\mathrm{G}}$ we have
that ${\mathcal C}(-0.04, 100\, r_{\mathrm{G}}) \simeq 0.01$.
\end{itemize}
By thus approximating ${\mathcal C}(n_{\mathrm{T}}, r) \simeq {\mathcal O}(1)$
in the range $r= 1$--$100$ Mpc and for
$0.2 < r_{\mathrm{T}} < 0.3$ we get the following value for $B_{\mathrm{max}}= \sqrt{\langle B^2(r) \rangle}$
\begin{eqnarray}
B_{\mathrm{max}} &=& 2.519 \times 10^{-38} \biggl(\frac{r_{\mathrm{T}}}{0.32}\biggr)^{3/2} \biggl(\frac{{\mathcal A}_{{\mathcal R}}}{2.43 \times 10^{-9}}\biggr)^{3/2} \biggl(\frac{z_{\mathrm{dec}} +1}{1089.2}\biggr)
\nonumber\\
&\times& \biggl(\frac{h_{0}^2 \Omega_{\mathrm{M}0}}{0.134}\biggr)^{3}\,
\biggl(\frac{h_{0}^2 \Omega_{\mathrm{R}0}}{4.15 \times 10^{-5}}\biggr)^{-3}\,\,\mathrm{G}.
\label{TA16}
\end{eqnarray}
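As a sanity check of Eq. (\ref{TA16}), note that its central value is simply the square root of the prefactor of Eq. (\ref{TA14}) when ${\mathcal C}(n_{\mathrm{T}}, r) \simeq 1$ and all ratios are at their fiducial values:
\begin{verbatim}
# With all ratios at their fiducial values and C(n_T, r) ~ 1, B_max is
# the square root of the prefactor of the <B^2> expression above.
import math

mean_B2 = 6.348e-76         # Gauss^2
print(math.sqrt(mean_B2))   # ~2.519e-38 G
\end{verbatim}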
The result of Eq. (\ref{TA16}) does not seem to be even remotely relevant for galactic magnetogenesis
or for cluster magnetogenesis. In spite of the intricacy and the ramifications of the
galactic dynamo hypothesis, it is useful to compare Eq. (\ref{TA16}) with the minimal
requirements stemming from what we would call an optimal or ideal dynamo, namely
a process where the kinetic energy of the protogalaxy is converted into magnetic energy with
maximal efficiency. Let us denote with $N_{\mathrm{rot}}$ the number of (effective) rotations performed
by the galaxy since gravitational collapse and with $\rho_{\mathrm{a}}$ and $\rho_{\mathrm{b}}$
the matter density after and before the gravitational collapse.
The typical rotation period of a spiral galaxy is of the order of $3\times10^{8}$ yrs which should be compared with $10^{10}$ yrs, i.e. the approximate age of the galaxy. The maximal number of rotations
performed by the galaxy since its origin is then of the order of $N_{\mathrm{rot}}\sim 30$.
Under the hypothesis that the kinetic energy of the plasma
is transferred to the magnetic energy with maximal efficiency, the protogalactic field
will be amplified by one efold during each rotation. The effective
number of efolds is, however, always smaller than $30$ for various reasons. Typically the dynamo quenches prematurely because some of the higher wavenumbers
of the magnetic field become critical (i.e. comparable with the kinetic energy of the plasma) before the smaller ones. Other sources of quenching have been recently discussed in the literature (see, for an introduction to this topic, section 4.2 of \cite{dyn} and references therein). There is also a further source of amplification of the primordial magnetic field, namely compressional
amplification. At the time of the gravitational collapse of the
protogalaxy the conductivity of the plasma was sufficiently high
to justify the neglect of nonlinear corrections in the equations
expressing the conservation of the magnetic flux and of the
magnetic helicity. The conservation of the magnetic flux
implies that, during the gravitational collapse, the magnetic field
should undergo compressional amplification, i.e. the same
kind of mechanism which is believed to be the source of the
large magnetic fields of the pulsars. Taking into account the two previous
observations, the estimate of Eq. (\ref{TA16}) must be compared with the bound
\begin{equation}
B_{\mathrm{bound}} \simeq 3 \times10^{3}\, e^{- N_{\mathrm{rot}}} \biggl(\frac{\rho_{\mathrm{b}}}{\rho_{\mathrm{a}}}\biggr)^{2/3} \,\, \mathrm{nG}
\label{TA17}
\end{equation}
Even assuming $N_{\mathrm{rot}} = 30$, $\rho_{\mathrm{a}} \simeq 10^{-24} \, \mathrm{g}/\mathrm{cm}^3$, and $\rho_{\mathrm{b}} \simeq 10^{-29} \, \mathrm{g}/\mathrm{cm}^3$, the minimal value of $B_{\mathrm{bound}}$ is ${\mathcal O}(10^{-25})\,\mathrm{G}$. Clearly, by comparing Eq. (\ref{TA16})
with Eq. (\ref{TA17}), $B_{\mathrm{max}} \ll B_{\mathrm{bound}}$.
Going then to cluster magnetogenesis, the typical scale of the gravitational collapse of a cluster is larger (roughly by one order of magnitude) than the scale of gravitational collapse of the protogalaxy. The mean mass density
within the Abell radius ($\simeq 1.5\, h_{0}^{-1}$ Mpc) is roughly
$10^{3}$ times larger than the critical density since clusters are
formed from peaks in the density field. Moreover, clusters
rotate much less than galaxies, even if it is somewhat
hard to disentangle, observationally, the global (coherent)
rotation of the cluster from the rotation curves of the
constituent galaxies. By assuming, for instance, $N_{\mathrm{rot}}=5$, a density gradient of $10^{3}$ and $500$ nG as the final field, Eq. (\ref{TA17}) demands an initial seed of the order of $0.15$ nG.
Another application of the results obtained in the previous sections can be the estimate
of the magnetic field induced by the total vorticity for scales which are
larger than the Hubble radius prior to matter-radiation equality. To conduct this estimate
the explicit form of the correlators will change. First of all in the pre-factor
${\mathcal L}(\tau, w)$ we shall choose $ w= 1/3$ and $a_{1} = a_{\mathrm{r}}$ and
${\mathcal H}_{1} = {\mathcal H}_{\mathrm{r}}$ with $H_{\mathrm{r}} \simeq 10^{-5} \, M_{\mathrm{P}}$.
Thus for typical length-scales larger than the Hubble radius at equality and for typical times
of the order of the equality time the analog of Eq. (\ref{TA14}) can be written as
\begin{eqnarray}
\langle B^2(r) \rangle &=& 2.915 \times 10^{-79} \biggl(\frac{r_{\mathrm{T}}}{0.32}\biggr)^3 \biggl(\frac{{\mathcal A}_{{\mathcal R}}}{2.43 \times 10^{-9}}\biggr)^{3} \biggl(\frac{z_{\mathrm{dec}} +1}{1089.2}\biggr)^2
\nonumber\\
&\times& \biggl(\frac{h_{0}^2 \Omega_{\mathrm{M}0}}{0.134}\biggr)^{-4}\, {\mathcal C}(n_{\mathrm{T}}, r)
\,\, \mathrm{G}^2,
\nonumber\\
{\mathcal C}(n_{\mathrm{T}}, r) &=& \tilde{c}(n_{\mathrm{T}}) \biggl(\frac{r}{r_{\mathrm{p}}}\biggr)^{ -4 - 3 n_{\mathrm{T}}}
+ \tilde{d}(n_{\mathrm{T}}),
\label{TA18}
\end{eqnarray}
where the numerical constants $\tilde{c}(n_{\mathrm{T}})$ and $\tilde{d}(n_{\mathrm{T}})$ are given by
\begin{eqnarray}
\tilde{c}(n_{\mathrm{T}}) &=& - 2 n_{\mathrm{T}} ( n_{\mathrm{T}} + 1) [ 2 n_{\mathrm{T}}(n_{\mathrm{T}} + 2) + 3]
\cos^3{\biggl(\frac{n_{\mathrm{T}} \pi}{2}\biggr)} \Gamma^3(n_{\mathrm{T}} -1),
\nonumber\\
\tilde{d}(n_{\mathrm{T}}) &=& - \frac{2( 7 n_{\mathrm{T}}^2 + 28 n_{\mathrm{T}} + 18)}{45 n_{\mathrm{T}}^2 (n_{\mathrm{T}}+ 2)^2 (n_{\mathrm{T}} + 4) }.
\label{TA19}
\end{eqnarray}
Equation (\ref{TA18}) holds under the assumption $r < r_{\mathrm{p}}$ which means, in practice, that
it applies only for a narrow range of scales $ 120 \, \mathrm{Mpc} < r < 500 \, \mathrm{Mpc}$. If
$r \simeq 250 \, \mathrm{Mpc}$, then $r/r_{\mathrm{p}} = 0.5$, ${\mathcal C}(n_{\mathrm{T}}, r) \simeq
163$, and
\begin{eqnarray}
B_{\mathrm{max}} &=& 4.3 \times 10^{-39} \biggl(\frac{r_{\mathrm{T}}}{0.32}\biggr)^{3/2} \biggl(\frac{{\mathcal A}_{{\mathcal R}}}{2.43 \times 10^{-9}}\biggr)^{3/2} \biggl(\frac{h_{0}^2 \Omega_{\mathrm{M}0}}{0.134}\biggr)^{-2}\,
\,\,\mathrm{G}.
\label{TA20}
\end{eqnarray}
\renewcommand{\theequation}{7.\arabic{equation}}
\setcounter{equation}{0}
\section{Concluding remarks}
\label{sec7}
The idea explored in this paper has been to compute the vorticity by employing a recently devised
framework for the treatment of fully inhomogeneous plasmas which are
also gravitating. The latter description brings a new perspective to the study of the evolution
of the vorticity exchange in the electron-ion-photon system without
postulating the customary separation between a (preferably
conformally flat) background geometry and its relativistic fluctuations.
A set of general conservation laws has been derived on the
basis of the fully inhomogeneous equations in different
temperature regimes, depending on the hierarchy among the rates
of vorticity exchange between electrons, ions and photons.
After expanding the Einstein equations as well as the vorticity
equations to a given order in the spatial gradients, the total vorticity
has then been estimated to lowest order in the gradient expansion.
The maximal comoving magnetic field induced in the $\Lambda$CDM paradigm
depends upon the tensor to scalar ratio and it is, at most, of the order
of $10^{-37}$ G over the typical comoving scales ranging between $1$ and
$10$ Mpc. The obtained results are irrelevant for seeding a reasonable
galactic dynamo action and they demonstrate how the proposed fully inhomogeneous treatment
can be used for a systematic scrutiny of pre-decoupling plasmas beyond the conventional
perturbative expansions. The estimate of the primordial vorticity induced in the
$\Lambda$CDM scenario can also turn out to be relevant in related contexts
such as the ones contemplated by non-conventional paradigms of galaxy formation.
\newpage
\begin{appendix}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section{Gradient expansion and pre-decoupling physics}
\label{APPA}
In this appendix we are going to recap the essentials of the fully inhomogeneous
description of pre-decoupling plasmas already introduced in Eqs. (\ref{NH1})--(\ref{NH4}). We
will follow here the formalism developed in Ref. \cite{mg2} and describe
the fully inhomogeneous geometry in terms of the ADM decomposition \cite{ADM1,ADM2}:
\begin{eqnarray}
&& g_{00} = N^2 - N_{k} N^{k},\qquad g_{ij} = - \gamma_{ij},\qquad g_{0i} = - N_{i},
\nonumber\\
&& g^{00} = \frac{1}{N^2},\qquad g^{ij} = \frac{N^{i} \, N^{j}}{N^2}- \gamma^{ij},\qquad
g^{0i} = - \frac{N^{i}}{N^2}.
\label{ADM1}
\end{eqnarray}
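The internal consistency of Eq. (\ref{ADM1}) can be verified numerically; the following toy check (all variable names are ours) builds $g_{\mu\nu}$ from random $N$, $N_{i}$ and $\gamma_{ij}$ and confirms that the stated $g^{\mu\nu}$ is indeed its inverse:
\begin{verbatim}
# Toy numerical check of the ADM decomposition above: build g_{mu nu}
# from random N, N_i, gamma_ij and verify that the stated g^{mu nu}
# is indeed its inverse (all variable names are ours).
import numpy as np

rng = np.random.default_rng(0)
N = 1.3
gamma = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
gamma = 0.5 * (gamma + gamma.T)          # symmetric spatial metric
Ni = rng.standard_normal(3)              # covariant shift N_i
Nup = np.linalg.solve(gamma, Ni)         # contravariant shift N^i

g = np.zeros((4, 4))
g[0, 0] = N**2 - Nup @ Ni
g[1:, 1:] = -gamma
g[0, 1:] = g[1:, 0] = -Ni

ginv = np.zeros((4, 4))
ginv[0, 0] = 1 / N**2
ginv[1:, 1:] = np.outer(Nup, Nup) / N**2 - np.linalg.inv(gamma)
ginv[0, 1:] = ginv[1:, 0] = -Nup / N**2

print(np.allclose(g @ ginv, np.eye(4)))  # True
\end{verbatim}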
In the ADM variables the extrinsic curvature
$K_{ij}$ and the spatial components of the Ricci tensor $r_{ij}$ become:
\begin{eqnarray}
K_{ij} &=& \frac{1}{2 N} \biggl[- \partial_{\tau}\gamma_{ij} + {}^{(3)}\nabla_{i}N_{j} + {}^{(3)}\nabla_{j} N_{i}
\biggr],
\label{ADM1a}\\
r_{ij} &=& \partial_{m} \, {}^{(3)}\Gamma^{m}_{ij} -\partial_{j}\, {}^{(3)}\Gamma_{i m}^{m} + {}^{(3)}\Gamma_{i j}^{m}
\,{}^{(3)}\Gamma_{m n}^{n} - {}^{(3)}\Gamma_{j n}^{m} \,{}^{(3)}\Gamma_{i m}^{n}.
\label{ADM1b}
\end{eqnarray}
Defining $T_{\mu\nu}$ as the total energy-momentum tensor of the fluid sources, the
contracted form of the Einstein equations reads
\begin{equation}
R_{\mu}^{\nu} = \ell_{\mathrm{P}}^2 \biggl[\biggl(T_{\mu}^{\nu} - \frac{T}{2} \delta_{\mu}^{\nu}\biggr) \biggr], \qquad T= g^{\mu\nu} T_{\mu\nu} = T_{\mu}^{\mu}.
\label{FORM1}
\end{equation}
As in the bulk of the paper we are now going to focus
on the situation where the shift vectors vanish and the lapse function
is homogeneous but time dependent (i.e. $N(\vec{x}, \tau) = N(\tau)$).
The $(0\,0)$, $(i\,j)$ and $(0\,i)$ components of Eq. (\ref{FORM1}) become then:
\begin{eqnarray}
&& \partial_{\tau} K - N \mathrm{Tr} K^2 + \nabla^2 N = N \ell_{\mathrm{P}}^2 \biggl\{ \frac{3 p + \rho}{2}
+ ( p + \rho) \, u^2 \biggr\},
\label{00}\\
&& \nabla_{i} K - \nabla_{k} K^{k}_{i} = N \ell_{\mathrm{P}}^2 u^{0} \,u_{i} ( p + \rho),
\label{0i}\\
&& \partial_{\tau} K_{i}^{j} - N K K_{i}^{j} - N r_{i}^{j} + \nabla_{i} \nabla^{j} N = \ell_{\mathrm{P}}^2 N\biggl[ \frac{p - \rho}{2} \delta_{i}^{j} - ( p + \rho) u_{i} u^{j}\biggr],
\label{ij}
\end{eqnarray}
where $u^2 = u^{i} \, u^{j} \gamma_{i j}$. The electron and ion velocities appearing in Eqs. (\ref{NH1}) and (\ref{NH2})
reduce, in the conformally flat case (i.e. $N(\tau) \to a(\tau)$ and $\gamma_{ij}(\vec{x}, \tau) \to
a^2(\tau) \delta_{ij}$) to the velocity fields appearing in Eqs. (\ref{S2}), (\ref{S6}) and (\ref{S7}).
In the fully inhomogeneous case,
the evolution equations for the velocities of the electrons, ions and photons can be written, respectively, as
\begin{eqnarray}
\partial_{\tau} v_{\mathrm{e}}^{k} + N \partial^{k} N - {\mathcal G}^{k}_{j} v_{\mathrm{e}}^{j} &=& - \frac{e \tilde{n}_{\mathrm{e}} N^2}{ \rho_{\mathrm{e}} \sqrt{\gamma}} \biggl[ E^{k} + (\vec{v}_{\mathrm{e}} \times \vec{B})^{k} \biggr]
\nonumber\\
&+& N \Gamma_{\mathrm{ei}} (v_{\mathrm{i}}^{k} - v_{\mathrm{e}}^{k}) + \frac{4}{3}
\frac{\rho_{\gamma}}{\rho_{\mathrm{e}}} N \Gamma_{\mathrm{e} \gamma}(v_{\gamma}^{k} - v_{\mathrm{e}}^{k}),
\label{Av1}\\
\partial_{\tau} v_{\mathrm{i}}^{k} + N \partial^{k} N - {\mathcal G}^{k}_{j} v_{\mathrm{i}}^{j} &=& \frac{e \tilde{n}_{\mathrm{i}} N^2}{ \rho_{\mathrm{i}} \sqrt{\gamma}} \biggl[ E^{k} + (\vec{v}_{\mathrm{i}} \times \vec{B})^{k} \biggr]
\nonumber\\
&+& N\, \frac{\rho_{\mathrm{e}}}{\rho_{\mathrm{i}}}\, \Gamma_{\mathrm{ie}} (v_{\mathrm{e}}^{k} - v_{\mathrm{i}}^{k}) + \frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{i}}} N \Gamma_{\mathrm{i} \gamma}(v_{\gamma}^{k} - v_{\mathrm{i}}^{k}),
\label{Av2}\\
\partial_{\tau} v_{\gamma}^{k} + N \partial^{k} N- \biggl[ {\mathcal G}_{j}^{k} - \frac{N K}{3} \delta_{j}^{k}\biggr] v_{\gamma}^{j} &=& - \frac{N^2}{4 \rho_{\gamma}} \partial_{m} \biggl(\rho_{\gamma} \gamma^{m k} \biggr)
\nonumber\\
&+&
N \Gamma_{\gamma \mathrm{e}} ( v_{\mathrm{e}}^{k} - v_{\gamma}^{k}) +
N \Gamma_{\gamma \mathrm{i}} ( v_{\mathrm{i}}^{k} - v_{\gamma}^{k}),
\label{Av2a}
\end{eqnarray}
where
\begin{equation}
{\mathcal G}^{k}_{j}= \biggl[\frac{\partial_{\tau} N}{N} \delta_{j}^{k} + 2 N K_{j}^{k}\biggr].
\label{Av3}
\end{equation}
As in the conformally flat case the evolution equations of the electrons and of the ions
can be combined by defining the center of mass velocity of the electron-ion system
$v_{\mathrm{b}}^{k} = (m_{\mathrm{e}} v_{\mathrm{e}}^{k} + m_{\mathrm{i}} v_{\mathrm{i}}^{k})/(m_{\mathrm{e}} + m_{\mathrm{i}})$ so that the effective evolution equations for the baryon-lepton-photon fluid become
\begin{eqnarray}
\partial_{\tau} \rho_{\gamma} &=& \frac{4}{3} K N \rho_{\gamma} - \frac{4}{3} N \partial_{k}\biggl( \frac{\rho_{\gamma}}{N}\,v_{\gamma}^{k}\biggr),
\label{Av4}\\
\partial_{\tau} v_{\mathrm{b}}^{k} &=& {\mathcal G}_{j}^{k} v_{\mathrm{b}}^{j} - N \partial^{k} N + \frac{(\vec{J} \times \vec{B})^{k} N^2}{ \gamma\, \rho_{\mathrm{b}} ( 1 + m_{\mathrm{e}}/m_{\mathrm{i}})} +
\frac{4}{3} \frac{\rho_{\gamma}}{\rho_{\mathrm{b}}} N \Gamma_{\gamma\mathrm{e}} (v_{\gamma}^{k} - v_{\mathrm{b}}^{k}),
\label{Av5}\\
\partial_{\tau} v_{\gamma}^{k} &=& \biggl[ {\mathcal G}_{j}^{k} - \frac{N K}{3} \delta_{j}^{k}\biggr] v_{\gamma}^{j} - \frac{N^2}{4 \rho_{\gamma}} \partial_{m} \biggl(\rho_{\gamma} \gamma^{m k} \biggr) - N \partial^{k} N +
N \Gamma_{\gamma \mathrm{e}} ( v_{\mathrm{b}}^{k} - v_{\gamma}^{k}),
\label{Av6}
\end{eqnarray}
where $v_{\gamma}^{k}$ and $\rho_{\gamma}$ denote, respectively, the photon velocity and the photon
energy density.
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
\section{Some relevant correlators}
\label{APPB}
The correlator appearing in Eq. (\ref{F12}), i.e.
\begin{equation}
{\mathcal T}_{b m b' m'} = \langle \biggl[ h_{a b} \, h_{ a q} \, h_{q m}\biggr]_{\vec{x}} \, \biggl[ h_{a'\, b'} h_{a' \,q'} h_{q' m'}\biggr]_{\vec{y}} \rangle,
\label{AB1}
\end{equation}
must be computed in terms of the corresponding two-point functions in real space (see Eq.
(\ref{F6})). The general form of the two-point function in real space has been already mentioned in Eq. (\ref{F9A})
and the functions $G_{i}(r)$ (with $i = 1, \ldots, 5$) are given by:
\begin{eqnarray}
G_{1}(r) &=& F_{1}(r) + \frac{2}{r} \frac{\partial F_{2}}{\partial r}+ \frac{1}{r} \frac{\partial}{\partial r} \biggl( \frac{1}{r} \frac{\partial F_{3}}{\partial r}\biggr),
\label{F9B}\\
G_{2}(r) &=& \frac{1}{r} \frac{\partial}{\partial r} \biggl( \frac{1}{r} \frac{\partial F_{3}}{\partial r}\biggr) - F_{1}(r) - \frac{2}{r} \frac{\partial F_{2}}{\partial r},
\label{F9C}\\
G_{3}(r) &=& \frac{1}{r} \frac{\partial}{\partial r}\biggl[ \frac{1}{r} \frac{\partial}{\partial r} \biggl( \frac{1}{r} \frac{\partial F_{3}}{\partial r}\biggr)\biggr]
- \frac{1}{r} \frac{\partial}{\partial r} \biggl( \frac{1}{r} \frac{\partial F_{2}}{\partial r}\biggr),
\label{F9D}\\
G_{4}(r) &=& \frac{1}{r} \frac{\partial}{\partial r}\biggl[ \frac{1}{r} \frac{\partial}{\partial r} \biggl( \frac{1}{r} \frac{\partial F_{3}}{\partial r}\biggr)\biggr]
+ \frac{1}{r} \frac{\partial}{\partial r} \biggl( \frac{1}{r} \frac{\partial F_{2}}{\partial r}\biggr),
\label{F9E}\\
G_{5}(r) &=& \frac{1}{r} \frac{\partial}{\partial r}\biggl\{ \frac{1}{r} \frac{\partial}{\partial r}\biggl[ \frac{1}{r} \frac{\partial}{\partial r}
\biggl( \frac{1}{r}\frac{\partial F_{3}}{\partial r} \biggr)\biggr]\biggr\},
\label{F9F}
\end{eqnarray}
where $F_{1}(r)$, $F_{2}(r)$ and $F_{3}(r)$ are fully determined once the power spectrum is known and are
defined as:
\begin{eqnarray}
F_{1}(r) &=& \frac{1}{4} \int \, \frac{d k}{k} \, {\mathcal P}_{\mathrm{T}}(k,\tau) \frac{\sin{k r}}{k r}, \qquad F_{2}(r)=
\frac{1}{4} \int \, \frac{d k}{k^3} \, {\mathcal P}_{\mathrm{T}}(k,\tau) \frac{\sin{k r}}{k r},
\nonumber\\
F_{3}(r) &=& \frac{1}{4} \int \, \frac{d k}{k^5} \, {\mathcal P}_{\mathrm{T}}(k,\tau) \frac{\sin{k r}}{k r}.
\label{F9G}
\end{eqnarray}
Using Eq. (\ref{F9G}) inside Eqs. (\ref{F9B})--(\ref{F9F}) we have that
\begin{eqnarray}
G_{1}(r) &=& \frac{1}{4} \int \frac{d k}{k} \, {\mathcal P}_{\mathrm{T}}(k,\tau) \biggl[ \biggl( 1 - \frac{1}{k^2 r^2}\biggr) j_{0}(k r) + \biggl( \frac{3}{k^2 r^2} - 2\biggr)\frac{j_{1}(k r)}{k r}
\biggr],
\label{F10B}\\
G_{2}(r) &=& \frac{1}{4} \int \frac{d k}{k} \, {\mathcal P}_{\mathrm{T}}(k,\tau) \biggl[\biggl( 2 + \frac{3}{k^2 r^2}\biggr) \frac{j_{1}(k r)}{k r} - \biggl(1 + \frac{1}{k^2 r^2}\biggr) j_{0}(k r)
\biggr],
\label{F10C}\\
G_{3}(r) &=& \frac{1}{4} \int k\, d k {\mathcal P}_{\mathrm{T}}(k,\tau) \biggl[ \frac{j_{0}(k r)}{k^2 r^2} \biggl( 1 + \frac{5}{k^2 r^2} \biggr) -
\frac{j_{1}(k r)}{k^3 r^3}\biggl( 2 + \frac{15}{k^2 r^2}\biggr)\biggr],
\label{F10D}\\
G_{4}(r) &=& \frac{1}{4} \int k\, d k{\mathcal P}_{\mathrm{T}}(k,\tau) \biggl[ \frac{j_{0}(k r)}{k^2 r^2} \biggl( -1 + \frac{5}{k^2 r^2}\biggr) + \frac{j_{1}(k r)}{k^3 r^3}\biggl( 4 - \frac{15}{k^2 r^2}\biggr)\biggr],
\label{F10E}\\
G_{5}(r) &=& \frac{1}{4} \int k^3 d k {\mathcal P}_{\mathrm{T}}(k,\tau) \biggl[ \frac{j_{0}(k r)}{k^4 r^4}\biggl( 1 -\frac{35}{k^2 r^2} \biggr)
- \frac{5 j_{1}(k r)}{k^5 r^5} \biggl( 2 - \frac{21}{k^2 r^2}\biggr)\biggr],
\label{F10F}
\end{eqnarray}
where $j_{0}(k r)$ and $j_{1}(k r)$ are the spherical Bessel functions of zeroth and first order
\cite{abr1,abr2}:
\begin{equation}
j_{0}(k r) = \frac{\sin{k r}}{k r}, \qquad j_{1}(k r) = \frac{\sin{k r}}{k^2 r^2} - \frac{\cos{k r}}{k r}.
\label{BS}
\end{equation}
It is useful to compare the two different asymptotic limits of the various $G_{i}(r)$, i.e. for $k r < 1$ and for $k r > 1$.
In the limit $k r > 1$ we have that:
\begin{eqnarray}
G_{1}(r) &\to& \frac{1}{4} \int \frac{d k}{k} \, {\mathcal P}_{\mathrm{T}}(k,\tau) j_{0}(k r), \qquad G_{2}(r) \to - G_{1}(r),
\label{LIM1}\\
G_{3}(r) &\to& \frac{1}{4} \int k\, d k\, {\mathcal P}_{\mathrm{T}}(k,\tau) \frac{j_{0}(k r)}{k^2 r^2}, \qquad G_{4}(r) \to - G_{3}(r),
\label{LIM2}\\
G_{5}(r) &\to& \frac{1}{4} \int k^{3} d k \, {\mathcal P}_{\mathrm{T}}(k,\tau) \frac{j_{0}(k r)}{k^4 r^4}.
\label{LIM3}
\end{eqnarray}
Conversely, in the limit $k r < 1$, Eqs. (\ref{F10B})--(\ref{F10F}) imply:
\begin{eqnarray}
G_{1}(r) &\to& \frac{1}{10} \int \frac{d k}{k} {\mathcal P}_{\mathrm{T}}(k,\tau)\biggl[ 1 - \frac{11}{42} k^2 r^2\biggr],
\label{LIM4}\\
G_{2}(r) &\to& - \frac{1}{15} \int \frac{d k}{k} {\mathcal P}_{\mathrm{T}}(k,\tau)\biggl[ 1 - \frac{5 k^2 r^2}{14}\biggr],
\label{LIM5}\\
G_{3}(r) &\to& - \frac{2}{105} \int k d k {\mathcal P}_{\mathrm{T}}(k,\tau)\biggl[ 1 - \frac{5}{72} k^2 r^2\biggr],
\label{LIM6}\\
G_{4}(r) &\to& \frac{1}{70} \int k d k {\mathcal P}_{\mathrm{T}}(k,\tau)\biggl[ 1 - \frac{2\, k^2 r^2}{27}\biggr],
\label{LIM7}\\
G_{5}(r) &\to& \frac{1}{3780} \int k^3\,d k {\mathcal P}_{\mathrm{T}}(k,\tau)\biggl[ 1 - \frac{k^2 r^2}{22}\biggr].
\label{LIM8}
\end{eqnarray}
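These limits can be checked numerically; for instance, the following minimal sketch (using the spherical Bessel functions of scipy; not part of the original derivation) verifies that the $k$-integrand of Eq. (\ref{F10B}) reduces to $(1/10)[1 - (11/42)\, k^2 r^2]$, as in Eq. (\ref{LIM4}):
\begin{verbatim}
# Check that the k-integrand of G1 above reduces, for k*r << 1, to the
# limit (1/10)*[1 - (11/42)*(k*r)^2] quoted in the text.
import numpy as np
from scipy.special import spherical_jn

def g1_integrand(x):                     # x = k*r
    j0, j1 = spherical_jn(0, x), spherical_jn(1, x)
    return 0.25 * ((1 - 1/x**2) * j0 + (3/x**2 - 2) * j1 / x)

for x in (1e-2, 1e-1):
    print(g1_integrand(x), 0.1 * (1 - (11/42) * x**2))  # columns agree
\end{verbatim}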
The explicit form of the two-point function $G_{i j m n}$ implies that the six-point function
appearing in the correlator of the vorticity can be expressed as
\begin{equation}
{\mathcal T}_{b m b' m'} = \sum_{\nu=1}^{5} {\mathcal T}^{(\nu)}_{b\, m\, b'\, m'},
\label{AB2}
\end{equation}
where the $5$ distinct contributions correspond to
\begin{eqnarray}
{\mathcal T}^{(1)}_{b\, m\, b'\, m'} &=& \overline{G}_{a b a q} \biggl[ G_{q m a' b'} \overline{G}_{a' q' q' m'}
+ G_{q m a' q'} \overline{G}_{b' a' q' m'} + G_{q m q' m'} \overline{G}_{a' q' a' b'}\biggr],
\label{AB3}\\
{\mathcal T}^{(2)}_{b\, m\, b'\, m'} &=& \overline{G}_{a b q m} \biggl[ G_{a q a' b'} \overline{G}_{a' q' q' m'}
+ G_{a q a' q'} \overline{G}_{a' b' q' m'} + G_{a q q' m'} \overline{G}_{a' b' a' q'}\biggr],
\label{AB4}\\
{\mathcal T}^{(3)}_{b\, m\, b'\, m'} &=& G_{a b a' b'} \overline{G}_{a q q m} \overline{G}_{a' q' q' m'}
\nonumber\\
&+& G_{a b a' b'} \biggl( G_{a q a' q'} G_{q m q' m'} + G_{a q q' m'} G_{q m a' q'}\biggr),
\label{AB5}\\
{\mathcal T}^{(4)}_{b\, m\, b'\, m'} &=& G_{a b q' a'} \overline{G}_{a q q m} \overline{G}_{a' b' q' m'}
\nonumber\\
&+& G_{a b q' a'}\biggl( G_{a q a' b'} G_{q m q' m'} + G_{a q q' m'} G_{q m a' b'}\biggr),
\label{AB6}\\
{\mathcal T}^{(5)}_{b\, m\, b'\, m'} &=& G_{a b q' m'} \overline{G}_{a q q m} \overline{G}_{a' b' a' q'}
\nonumber\\
&+& G_{a b q' m'} \biggl( G_{a q a' b'} G_{q m a' q'} + G_{a q a' q'} G_{q m a' b'}\biggr).
\label{AB7}
\end{eqnarray}
The overline signifies that the corresponding correlator is evaluated in the limit $r\to 0$. According to
Eqs. (\ref{LIM4})--(\ref{LIM8}) this limit is non-singular.
Notice finally that in terms of ${\mathcal T}_{b m b' m'}$ the correlation function
of the magnetic field can also be written, with shorthand notation, as
\begin{eqnarray}
\langle B^2(r) \rangle &=& {\mathcal J}(\tau,w) \epsilon^{j m i}\, \epsilon^{j' m' i} \frac{\partial^2}{\partial y^{j'}\, \partial y^{b'}} \frac{\partial^2 }{\partial x^{j} \, \partial x^{b}} {\mathcal T}_{b m b' m'}
\nonumber\\
{\mathcal J}(\tau, w) &=& \frac{m_{\mathrm{i}}^2}{\alpha_{\mathrm{em}}} \frac{{\mathcal H}^2\, a_{1}^2}{ 9 {\mathcal H}_{1}^4 (w+ 1)^2} \biggl(\frac{a}{a_{1}}\biggr)^{6 w + 4}.
\label{short}
\end{eqnarray}
The real space
approach is more effective and convenient for an explicit estimate of the vorticity
and the idea is therefore to express the correlation functions in real space, take the
appropriate derivatives and then expand the result in the desired limit.
Denoting with $R = r/r_{\mathrm{p}}$ and with $x = k/k_{\mathrm{p}}$,
the integrals over $k$ appearing in Eqs. (\ref{F10B})--(\ref{F10F})
can be computed explicitly by changing variable and by using the following
pair of relations \cite{grad}:
\begin{eqnarray}
\int_{1}^{\infty} x^{n - m} \sin{x R} \, d x&=& \frac{R}{m - n -2} \,\, _{1}F_{2}\biggl[ a_{1}; b_{1}, b_{2}; - \frac{R^2}{4} \biggr]
\nonumber\\
&+& R^{m - n -1} \cos{\biggl[\frac{\pi(m - n)}{2}\biggr]}
\Gamma[1 - m + n],
\label{AB8a}\\
\int_{1}^{\infty} x^{n - m} \cos{x R}\, d x &=& \frac{1}{m - n -2} \,\, _{1}F_{2}\biggl[ \tilde{a}_{1}; \tilde{b}_{1}, \tilde{b}_{2}; - \frac{R^2}{4} \biggr]
\nonumber\\
&+& R^{m - n -1} \sin{\biggl[\frac{\pi(m - n)}{2}\biggr]} \Gamma[1 - m + n],
\label{AB8b}
\end{eqnarray}
where $n < m$ (i.e. $(n - m)$ is negative).
In Eqs. (\ref{AB8a}) and (\ref{AB8b}) $_{p}F_{q}[a_{1}, \ldots, a_{p};\, b_{1}, \ldots, b_{q};\, z]$
denotes the generalized hypergeometric function of argument $z$; in the
case of Eqs. (\ref{AB8a}) and (\ref{AB8b}), $p= 1$, $q=2$ and
\begin{eqnarray}
&&a_{1} = 1 + \frac{n - m}{2},\qquad b_{1} = \frac{3}{2},\qquad b_{2} = 2 + \frac{n - m}{2},
\label{AB9a}\\
&& \tilde{a}_{1} = a_{1} - \frac{1}{2},\qquad \tilde{b}_{1} = b_{1} -1,\qquad \tilde{b}_{2} = b_{2} - \frac{1}{2}.
\label{AB9b}
\end{eqnarray}
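Equation (\ref{AB8a}) is easy to spot-check numerically with the parameters of Eq. (\ref{AB9a}); a minimal mpmath sketch for the (arbitrary) choice $m - n = 3/2$ and $R = 0.7$ reads:
\begin{verbatim}
# Spot-check of the sine relation above for one non-integer exponent,
# using mpmath: the integral is done by oscillatory quadrature and the
# closed form uses 1F2 with a1, b1, b2 as defined in the text.
import mpmath as mp

s, R = mp.mpf('1.5'), mp.mpf('0.7')   # s = m - n > 0, chosen arbitrarily
lhs = mp.quadosc(lambda x: x**(-s) * mp.sin(x * R),
                 [1, mp.inf], period=2 * mp.pi / R)
rhs = (R / (s - 2) * mp.hyper([1 - s/2], [mp.mpf(3)/2, 2 - s/2], -R**2/4)
       + R**(s - 1) * mp.cos(mp.pi * s / 2) * mp.gamma(1 - s))
print(lhs, rhs)                       # the two numbers coincide
\end{verbatim}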
The integrals are taken from $1$ to infinity since the integral over $k$ starts from $k_{\mathrm{p}}$, implying that the lower limit of integration in $x$ is $1$. Equations (\ref{AB8a}) and (\ref{AB8b})
can be used to derive the real space form of the correlator of Eq. (\ref{F6}). Using Eqs.
(\ref{AB8a}) and (\ref{AB8b}), the explicit form of Eqs. (\ref{F10B})--(\ref{F10F}) can be derived and inserted
into Eq. (\ref{F9A}) whose explicit form determines the real-space expression of the two-point functions
of the vorticities through Eqs. (\ref{AB3})--(\ref{AB7}). After taking the appropriate derivatives
the obtained result can be expanded in the desired limits (e.g. $R\gg 1$ or $R\ll 1$).
The explicit real-space expressions of Eqs. (\ref{F10B})--(\ref{F10F}) are typically rather lengthy but they are
conceptually straightforward. This is why they will be omitted and only an example will be given.
Even though the scales relevant for section \ref{sec6} are the ones close to the galactic scale, consider, for instance, the expression of $G_{1}(R)$ in the opposite limit, i.e. for comoving scales much larger
than $r_{\mathrm{eq}}$. In this case the expressions simplify since ${\mathcal M}(k, k_{\mathrm{eq}}, \tau) \to 1$.
Therefore, using Eqs. (\ref{AB8a}) and (\ref{AB8b}), Eq. (\ref{F10B}) becomes:
\begin{eqnarray}
G_{1}(R) &=& \frac{{\mathcal A}_{\mathrm{T}}}{4}\biggl\{ \frac{1}{n_{\mathrm{T}}} \, _{1}F_{2}\biggl[
\frac{n_{\mathrm{T}}}{2};\, \frac{3}{2}, \frac{n_{\mathrm{T}}}{2}; - \frac{R^2}{4}\biggr]
\nonumber\\
&+& \frac{3}{4(n_{\mathrm{T}} -4) R^4}\biggl[\, _{1}F_{2}\biggl[ -2 + \frac{n_{\mathrm{T}}}{2}; \frac{1}{2}, -1 +
\frac{n_{\mathrm{T}}}{2}; - \frac{R^2}{4}\biggr]
\nonumber\\
&-&
\, _{1} F_{2}\biggl[ -2 + \frac{n_{\mathrm{T}}}{2}; \frac{3}{2}, -1 +
\frac{n_{\mathrm{T}}}{2}; - \frac{R^2}{4}\biggr]\,\biggr]
\nonumber\\
&+& \frac{1}{4(n_{\mathrm{T}} -2) R^2}\biggl[\, 3\, _{1}F_{2}\biggl[ -1 + \frac{n_{\mathrm{T}}}{2}; \frac{3}{2},
\frac{n_{\mathrm{T}}}{2}; - \frac{R^2}{4}\biggr]
\nonumber\\
&-& 2\, _{1}F_{2}\biggl[ -1 + \frac{n_{\mathrm{T}}}{2}; \frac{1}{2},
\frac{n_{\mathrm{T}}}{2}; - \frac{R^2}{4}\biggr] \, \biggr]
\nonumber\\
&-& \frac{1}{4 R^{n_{\mathrm{T}}}} \cos{\biggl(\frac{\pi n_{\mathrm{T}}}{2} \biggr)}\biggl[ \Gamma[n_{\mathrm{T}}- 4] + \Gamma[n_{\mathrm{T}} -2] + 3 \Gamma[n_{\mathrm{T}} - 5]
\nonumber\\
&+& 3 \Gamma[n_{\mathrm{T}} - 3] +
\Gamma[n_{\mathrm{T}}-1]\biggr]\, \biggr\}.
\label{EXG}
\end{eqnarray}
To conclude this appendix let us show that
the expression of $G_{i j m n}$ given in Eq. (\ref{F6}) coincides with the result directly
obtainable in the particular case where the tensor modes of the geometry are quantized in terms of gravitons. When $h_{i j}(\vec{x},\tau)$ is a field operator its expression
can be written as \cite{mgt1,mgt2}
\begin{equation}
\hat{h}_{ij}(\vec{x},\tau)= \frac{\sqrt{2} \ell_{\mathrm{P}}}{(2\pi)^{3/2} a(\tau)} \sum_{\lambda}
\int d^{3}k\,\, \epsilon_{ij}^{(\lambda)}(\hat{k})\,
\biggl[ \hat{a}_{\vec{k},\lambda} \,f_{k,\lambda}(\tau) e^{- i \vec{k} \cdot \vec{x}} + \hat{a}_{\vec{k},\lambda}^{\dagger}\, f_{k,\lambda}^{*}(\tau)
e^{ i \vec{k} \cdot \vec{x}}\biggr],
\label{F10}
\end{equation}
which also implies, using the properties of the creation and annihilation operators,
\begin{equation}
G_{i j m n}(r)= \langle \hat{h}_{i j}(\vec{x},\tau) \, \hat{h}_{m n}(\vec{y},\tau) \rangle =
\int \frac{d k }{k} {\mathcal P}_{\mathrm{T}}(k, \tau) \, {\mathcal Q}_{i j m n}(\hat{k}) \, j_{0}(k r),
\label{F11A}
\end{equation}
where
\begin{eqnarray}
{\mathcal P}_{\mathrm{T}}(k,\tau) &=& 4\ell_{\mathrm{P}}^2\frac{k^3}{\pi^2 a^2(\tau)} |f_{k}(\tau)|^2,
\label{F11AA}\\
{\mathcal Q}_{i j m n} &=& \frac{1}{4} \sum_{\lambda} \epsilon_{ij}^{(\lambda)} \, \epsilon_{m n}^{(\lambda)} =
\frac{1}{4} \biggl[P_{m i}(\hat{k}) P_{n j}(\hat{k}) + P_{m j}(\hat{k}) P_{n i}(\hat{k}) - P_{i j}(\hat{k}) P_{m n}(\hat{k}) \biggr];
\label{F11B}
\end{eqnarray}
$P_{ij}(\hat{k}) = (\delta_{i j} - \hat{k}_{i} \hat{k}_{j})$ with $\hat{k}^{i} = k^{i}/|\vec{k}|$. In Eq. (\ref{F11A}) it has been used that
\begin{equation}
\langle 0| \, \hat{a}_{\vec{k},\mu}\, \hat{a}^{\dagger}_{\vec{p},\,\lambda}|0 \rangle = \delta^{(3)}(\vec{k} - \vec{p})\, \delta_{\lambda\mu}.
\label{Q1}
\end{equation}
Furthermore, to derive Eq. (\ref{F11B}), the two tensor polarizations can be written, in explicit terms, as
\begin{equation}
\epsilon_{ij}^{(\oplus)}(\hat{k}) = (\hat{a}_{i} \hat{a}_{j} - \hat{b}_{i} \hat{b}_{j}), \qquad
\epsilon_{ij}^{(\otimes)}(\hat{k}) = (\hat{a}_{i} \hat{b}_{j} + \hat{b}_{i} \hat{a}_{j}),
\label{F9}
\end{equation}
where $\hat{a}$, $\hat{b}$ and $\hat{k}$ are three mutually orthogonal unit vectors.
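The polarization sum of Eq. (\ref{F11B}) can be checked numerically for a fiducial orientation of the triplet $(\hat{a}, \hat{b}, \hat{k})$; a minimal sketch:
\begin{verbatim}
# Check of the polarization sum: with khat = zhat, ahat = xhat and
# bhat = yhat, the sum over the two tensor modes reproduces the
# projector combination built from P_ij = delta_ij - khat_i khat_j.
import numpy as np

a, b, k = np.eye(3)                      # ahat, bhat, khat
eps_plus = np.outer(a, a) - np.outer(b, b)
eps_cross = np.outer(a, b) + np.outer(b, a)
P = np.eye(3) - np.outer(k, k)

lhs = 0.25 * (np.einsum('ij,mn->ijmn', eps_plus, eps_plus)
              + np.einsum('ij,mn->ijmn', eps_cross, eps_cross))
rhs = 0.25 * (np.einsum('mi,nj->ijmn', P, P)
              + np.einsum('mj,ni->ijmn', P, P)
              - np.einsum('ij,mn->ijmn', P, P))
print(np.allclose(lhs, rhs))             # True
\end{verbatim}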
\end{appendix}
\newpage
\section{Introduction} \label{introduction}
The temperature fluctuations of the Cosmic Microwave Background radiation (CMB),
recently released by the {\em Planck collaboration}~\cite{PLA-I}, confirmed with outstanding precision the concordance cosmological model, $\Lambda$CDM~\cite{PLA-XV,PLA-XVI,WMAP9}.
Such an exquisite set of cosmological information allows us to test two fundamental properties
of the universe expected after the standard inflationary
phase~\cite{Bartolo04,Komatsu1,Bassett2006,Linde2008}, namely that the CMB field is, at
large-angles, nearly Gaussian and statistically isotropic
(see, e.g.,~\cite{Abramo10,PLA-XXIII} and refs. therein).
Previous studies using WMAP data indicate significant departures from either Gaussianity or
statistical isotropy at the largest angular scales -- an unexpected result in the $\Lambda$CDM
model~\cite{Abramo06,Abramo09,Aghanim:2013suk,Pereira09,Bernui06,Bernui07,Bernui2008b,Bernui2009a,Bielewicz04,Copi04a,Copi04b,
Copi06,Copi07,Copi10,Copi13a,Copi13b,Cruz05,
Eriksen04,Eriksen07,Gordon,Gruppuso10,Gruppuso11,Mandolesi,
Hansen04a,Hansen04b,Huterer,Jaffe,Kahniashvili,Koivisto,Land05,Land07,OCTZ,TOCH,Paci,Rath,%
Samal08,Samal09,Vielva04,Vielva06,Vielva07,Urban,Zhao,%
Fabre,Hansen12,Kashino,Rath13,Rassat14}, though possibly disputable \cite{WMAP7}.
These phenomena, also called {\it anomalies},
have been now confirmed with similar high confidence levels, $\sim 3 \sigma$, by the
{\em Planck collaboration} with CMB foreground-cleaned maps~\cite{PLA-XXIII}.
On the other hand, only small-magnitude deviations from Gaussianity of primordial origin have been
detected in Planck data~\cite{PLA-XXII,PLA-XXIV}.
However, there are more potential sources of non-Gaussianity (NG) in the CMB data than
just primordial NG~\cite{Komatsu2,PNG-Liguori,Chen-2010,WMAP-7yr-Jarosik,%
Su-Yadav,Komatsu03}.
These include galactic foregrounds remnants and secondary anisotropies coming from
processes after the last scattering
surface~\cite{WMAP-7yr-Gold,PLA-XXIV,Munshi,Aghanim,Novaes12,%
Chiang03,Naselsky,Delabrouille08,Novaes,Saha,Pietrobon09,Pietrobon10a,%
Pietrobon10b,Pratten,Smith,Vielva09,Zhao12}.
In particular, Gaussianity analyses at large angular scales are delicate because galactic
foreground contamination is not completely understood and, as a consequence, galactic
cut-sky masks are still necessary in CMB data analyses~\cite{PLA-XXIV}.
Monteser\'{\i}n et al. (2008)~\cite{Monteserin08} reported an anomalously low variance
distribution in WMAP3 maps at $98.7\%$~CL.
Cruz et al. (2011)\cite{Cruz11} confirmed this result in WMAP5 and WMAP7 data, also pointing out that some regions near the galactic plane present an anomalously high variance ($95.6\%$ CL) in the south ecliptic hemisphere. Their analyses, using various galactic cut-sky masks, suggest that foreground residuals could explain the results; in addition, a possible connection with the CMB quadrupole-octopole alignment was
investigated. Gruppuso et al. (2013) \cite{Gruppuso13}, using a different estimator, also found a low variance at large scales in WMAP9 data, basically in agreement with~\cite{Monteserin08,Cruz11}.
More recently, the {\em Planck collaboration}~\cite{PLA-XXIII} and Akrami et al.
(2014)~\cite{Akrami} studied the local variance in hemispheres and disks, finding again an anomalously high variance in the south ecliptic hemisphere.
In recent works~\cite{BR09,BR10,BR12} one of us has proposed two large-angle NG
estimators based on skewness and kurtosis moments evaluated on spherical caps on the
CMB sphere. We found that this directional mapping approach is suitable when a cut-sky mask has to be used because it minimizes the effect of incomplete data in the CMB sky.
These indicators provide a directional map of local NG due to its possible non-uniform distribution in the CMB maps, also giving information about the angular scale dependence of such contributions. Results obtained in previous analyses~\cite{BR09,BR10} using WMAP maps suggest that the NG captured there is not of primordial origin, although it might have a primordial component.
The aim of the present work is to conduct an analysis of the local variance in Planck
foreground-cleaned maps, using a prescription similar to that of Refs.~\cite{BR09,BR10,BR12}. For this we implement a simple estimator of statistical variance, applying it to patches of the CMB sky.
The information from all the patches is then used to produce an associated {\em Variance}-map, or simply $V\!-$map, which contains the signatures of the analysed CMB map. Our analyses investigate the possibility that foreground remnants in the galactic region could be the source of departures from Gaussianity and statistical
isotropy, by considering several cut-sky Planck masks and three frequency band Planck
maps, in addition to the four foreground-cleaned maps. To calculate the confidence level of our results we shall compare properties of these $V\!-$maps from Planck data with $V\!-$maps from simulated Monte Carlo (MC) CMB maps. These maps are obtained as Gaussian and statistically isotropic realisations from a seed angular power spectrum corresponding to the $\Lambda$CDM concordance model. Accordingly, the masking procedure applied to Planck CMB data is also applied to the MC maps.
In section~\ref{pla-maps} we briefly review the main features of the four foreground-cleaned {\em Planck} maps and the masks to be used in the analyses.
In section~\ref{method} we describe our variance estimator and explain the methodology to study the statistical Gaussian and isotropy attributes of Planck maps.
The procedure delineated in this section will be used, in section~\ref{results}, to investigate directional large-angle deviations from the standard statistical scenario of the Planck data as compared with simulated maps. Our analysis includes realistic features of the Planck data, like their inhomogeneous noise maps and galactic cut-sky masks. Finally, in section~\ref{conclusion}, we summarize our main results, present our conclusions and
final remarks.
\section{Foreground-reduced Planck maps} \label{pla-maps}
The Planck satellite observed the sky in nine frequency bands, from 30 to 857
GHz~\cite{PLA-I,PLA-XII}. The use of four {\em component separation techniques}, which efficiently identify the sources of contaminating emissions present in the data set, has allowed the {\em Planck collaboration} to produce four high resolution and almost full sky foreground-cleaned CMB maps~\cite{PLA-XII}. They are: the Spectral Matching Independent Component Analysis ({\sc smica})~\cite{Cardoso08}, the Needlet Internal Linear Combination ({\sc nilc})~\cite{Delabrouille08}, the Internal Template
Fitting Spectral Estimation Via Expectation Maximization ({\sc sevem})~\cite{Fernandez-Cobos12}, and the combined approach termed Commander-Ruler ({\sc CR})~\cite{Eriksen06,PLA-XII}.
Each of the foreground cleaning methods provides a CMB map with its Component Separation Confidence mask --also termed {\it validation} mask or simply {\sc val}mask-- outside which the corresponding CMB data is considered to be foreground-cleaned, and also a noise map containing an estimate of the real inhomogeneous pixel's noise. In addition, the {\sc smica} and {\sc nilc} maps were released with their own {\it inpainting} mask, or simply {\sc inp}mask. Regarding the masks, there also exist the component separation minimum mask, termed M82, and the U73 mask, which is the union of the confidence galactic masks plus the point sources
mask~\cite{PLA-XII} (see Table~\ref{table1}).
The effect of realistic anisotropic noise, due to the different number of times that each pixel was observed by the probe, is taken into account according to the specifications of each foreground-cleaned method.
For this, each component separation procedure provides an estimate of the pixel's noise in the output CMB map, information released together with each foreground-cleaned Planck map as a full-sky map, termed {\em noise map}~\cite{PLA-XII,PLA-XXIII,PLA-XXIV}. Thus, we use the noise maps in a ``signal + noise'' analysis to find their effect on $V\!-$maps, i.e., we apply our estimator after adding a noise map to its corresponding foreground-cleaned Planck map. In section 4 we consider the analyses with and without including this realistic anisotropic noise component, which is done by adding to the foreground-cleaned map its corresponding noise map, and also considering different Planck masks.
\begin{table}[h]
\begin{center}
\begin{tabular}{lc}
\hline \hline
\ \ \ \ Planck mask \ \ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ $f_{\mbox{sky}}$ \ \ \ \ \ \\
\hline
\ \ \ {\sc smica}~--~{\sc inpmask} & \ \ \ \ \ \ \ \ \ 0.97 \\
\ \ \ {\sc smica}~--~{\sc valmask} & \ \ \ \ \ \ \ \ \ 0.89 \\
\ \ \ {\sc nilc}~--~{\sc inpmask} & \ \ \ \ \ \ \ \ \ 0.97 \\
\ \ \ {\sc nilc}~--~{\sc valmask} & \ \ \ \ \ \ \ \ \ 0.93 \\
\ \ \ {\sc sevem}~--~{\sc valmask} & \ \ \ \ \ \ \ \ \ 0.76 \\
\ \ \ {\sc CR}~--~{\sc valmask} & \ \ \ \ \ \ \ \ \ 0.75 \\
\ \ \ M82~--~ {\sc minimal mask} & \ \ \ \ \ \ \ \ \ 0.82 \\
\ \ \ U73~--~ {\sc union mask} & \ \ \ \ \ \ \ \ \ 0.73 \\
\hline \hline
\end{tabular}
\end{center}
\caption {The $f_{\mbox{\footnotesize sky}}$ values give the available sky fraction of
a CMB map when a Planck mask is applied to it.}
\label{table1}
\end{table}
\section{The variance estimator} \label{method}
We start this section by explaining the procedure for constructing the variance map ($V\!-$map) of a
given CMB map. Let $\Omega_j \equiv \Omega(\theta_j,\phi_j) \in \mathbb{S}^2$ be a hemisphere on
the celestial sphere, with center at the $j^{\,\rm{th}}$ pixel, $j=1, \ldots, N_{\mbox{\footnotesize hem}}$,
where $(\theta_j,\phi_j)$ are the angular coordinates of the $j^{\,\rm{th}}$ pixel, and
$N_{\mbox{\footnotesize hem}}$ is the number of hemispheres.
The number of hemispheres and the coordinates of their centers are defined using the HEALPix
pixelization scheme~\cite{Gorski05}.
Moreover, the hemisphere's centers are uniformly distributed on $\mathbb{S}^2$
and the union of them completely covers the celestial sphere.
The variance of the data inside each hemisphere can be calculated simply by
\begin{eqnarray} \label{V-map}
&&V_j = \frac{1}{ n_{\rm p} } \sum_{i=1}^{n_{\rm p}}
\left(\, T_j^i\, - \overline{T_j} \,\right)^2 \, , \label{Vdef}
\end{eqnarray}
where $n_{\rm p}$ is the number of pixels in the $j^{\,\rm{th}}$ hemisphere, $T_j^i$ is the temperature fluctuation at the $i^{\,\rm{th}}$ pixel and $\overline{T_j}$ is the mean CMB temperature fluctuation of the $j^{\,\rm{th}}$ hemisphere.
The values $V_j$ obtained in this way give a local measure of the variance in the direction
$(\theta_j, \phi_j)$. Patching together the set of values $\{ V_j, \, j=1,...,N_{\mbox{\footnotesize hem}} \}$ in a sphere with $N_{\mbox{\footnotesize hem}}$ pixels we obtain a colored (pixelized) celestial sphere. The Mollweide projection of this sphere is termed the $V\!-$map: it is the final product of the application of our variance estimator to a given CMB map. According to the scale of colors, the minimum (maximum) value of the set $\{ V_j \}$ corresponds to the bluest (reddest) pixel.
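For concreteness, the estimator of Eq. (\ref{V-map}) can be sketched with healpy as follows; this is a minimal illustration (the toy input map and all names are ours), not the pipeline actually used for the analyses:
\begin{verbatim}
# Minimal sketch of the hemispherical variance estimator: for each of
# the N_hem hemisphere centers (the pixels of a HEALPix sphere at
# resolution nside_out, N_hem = 12*nside_out**2) take the variance of
# the unmasked temperature pixels inside the corresponding hemisphere.
import numpy as np
import healpy as hp

def variance_map(cmb_map, nside_out=16, mask=None):
    nside_in = hp.get_nside(cmb_map)
    good = np.ones(cmb_map.size, bool) if mask is None else mask.astype(bool)
    v = np.zeros(hp.nside2npix(nside_out))
    for j in range(v.size):
        center = hp.pix2vec(nside_out, j)
        pix = hp.query_disc(nside_in, center, np.pi / 2)   # hemisphere
        t = cmb_map[pix][good[pix]]
        v[j] = np.mean((t - t.mean())**2)
    return v

# toy usage: white-noise map at nside 64, V-map with N_hem = 3072
vmap = variance_map(np.random.standard_normal(hp.nside2npix(64)))
\end{verbatim}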
With the above prescription one can obtain a quantitative measure of anomaly of a real map by simply calculating the angular power spectrum of its corresponding $V\!-$map, and then comparing it with the mean power spectra obtained from MC simulations.
Because the $V\!-$map assigns a real value to each pixel in the celestial sphere
$\mathbb{S}^2$, that is $V = V(\theta,\phi)$, one can expand it in spherical harmonics:
$V(\theta,\phi) = \sum_{L, M} A_{L M} Y_{L M}(\theta,\phi)$,
where the set of values $\{ v_{L},\, L = 0,1,2,\cdots \}$, given by
\begin{eqnarray} \label{V-spectrum}
v_{L} \,\equiv\, \frac{1}{2 L + 1}\, \sum_{M={\mbox{\small -}}L}^{L} \, |A_{L M}|^2 \, ,
\end{eqnarray}
is the angular power spectrum of the $V\!-$map.
Given that we are interested in the large-scale NG deviations, we shall concentrate on the low-$L$ angular power spectrum $\{ v_{L}, \,\mbox{for}\,\, L = 1,2,\cdots,10 \}$ of the $V\!-$map.
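In practice, the spectrum of Eq. (\ref{V-spectrum}) can be obtained directly with healpy; continuing the sketch above:
\begin{verbatim}
# Angular power spectrum of the V-map via healpy, keeping L = 1,...,10.
import healpy as hp

v_L = hp.anafast(vmap, lmax=10)[1:]   # drop the monopole L = 0
\end{verbatim}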
Before proceeding, let us clarify some points of our whole prescription. In what concerns the implementation of a local variance estimator, note that Eq. (\ref{V-map}) is the simplest mathematical possibility. On the other hand, the choice of spherical caps with an aperture of $90^{\circ}$ is motivated by the fact that, when scanning a map to calculate its $V\!-$map, the procedure considers caps whose centers are close or even within the masked region. In these cases, the variance of the data inside the cap is performed with a smaller number of pixels, which introduces additional statistical noise as compared to caps away from the masked region. As it turns out,
this effect can be minimized by choosing spherical caps having aperture of $90^{\circ}$, that is, hemispheres~\cite{BR09,BR10,BR12}. Note that our prescription is thus different from the one adopted in~\cite{Akrami}, where not only the size of the caps is allowed to vary, but also masked pixels are excluded. As we will see, our results are compatible with their findings.
In the next section we use the above procedure to generate $V\!-$maps from the set of 1,000 Gaussian and statistically isotropic simulated maps -- from now on called
$V^{\mbox{\footnotesize\sc g}}\!-$maps -- from which we obtain the corresponding spectra, $v_L^{\mbox{\footnotesize\sc g}}$, and mean values, $\overline{v}_{L}^{\mbox{\footnotesize\sc g}}$. We then compare the spectra of the $V\!-$maps produced from the Planck maps, from now on called $V^{\mbox{\footnotesize\sc pla}}\!-$maps, with the mean value $\overline{v}_{L}^{\mbox{\footnotesize\sc g}}$. We finally emphasize that, although it might be clear from the context, the $V^{\mbox{\footnotesize\sc g}}\!-$maps themselves are not normally distributed.
\subsection{Statistically Isotropic Gaussian maps} \label{MC-maps}
Our Gaussian and statistically isotropic MC maps were obtained as random realizations
from a seed angular power spectrum, which corresponds to the $\Lambda$CDM concordance
model~\cite{PLA-I}, and the map-making process is performed using the {\sc synfast} facility from the HEALPix package~\cite{Gorski05}. We test the robustness of our outcomes with two different angular resolutions of the
$V\!-$maps, that is, with $N_{\mbox{\footnotesize hem}} = 768$ and with
$N_{\mbox{\footnotesize hem}} = 3,\!072$.
For illustration, we show in Fig.~\ref{fig1} two representative
$V^{\mbox{\footnotesize\sc g}}\!-$maps from the MC simulations; notice in these figures the minimum and the maximum values.
\begin{figure}
\includegraphics[angle=90,scale=0.3]
{figure1a.pdf}
\includegraphics[angle=90,scale=0.3]
{figure1b.pdf}
\caption{Variance maps from Gaussian and statistically isotropic simulations.
We show $V^{\mbox{\footnotesize\sc g}}\!-$maps with the minimum (left, smaller than the data) and the
maximum (right, larger than the data) variance dipole values produced from the set of 1,000 MC CMB maps,
in units of $\mu{\rm K}^2$.}
\label{fig1}
\end{figure}
In order to generate a $V\!-$map, either from a MC or foreground-cleaned Planck map, we first choose the angular scales to be included in analysis, $\ell \in [\ell_\text{min},\ell_\text{max}]$, which in turn determine the angular resolution ($N_{\mbox{\footnotesize side}}$) of the map. Planck and simulated CMB maps are analysed under the same conditions: mask, angular-scale's interval, $N_{\mbox{\footnotesize side}}$, and $N_{\mbox{\footnotesize hem}}$.
Thus, given the set of 1,000 MC maps, we produce their corresponding 1,000
$V^{\mbox{\footnotesize\sc g}}\!-$maps and calculate their associated power spectra, namely,
$\{ \{ v_{\,L} \}^{\,\mathbf{i} } \}$, for $\mathbf{i} = 1, \cdots, 1,\!000$ and $\,L=1, \cdots, 10\,$.
Finally, we compute the mean angular power spectra of the
$V^{\mbox{\footnotesize\sc g}}\!-$maps
\begin{eqnarray} \label{mean-spectra}
\overline{v}_{L}^{\mbox{\footnotesize\sc g}} \,=\,
\frac{1}{1000} \,\sum_{\mathbf{i} = 1}^{1000} v_{\,L}^{\,\mathbf{i}} \, .
\end{eqnarray}
These values are then used to obtain the statistical significance
(i.e., the goodness-of-fit) of the spectrum $v_{L}^{\mbox{\footnotesize {\sc pla}}}$, obtained from the $V^{\mbox{\footnotesize\sc pla}}\!-$map.
\section{Statistical estimator applied to Planck maps} \label{results}
In this section we perform variance analyses of the four foreground-cleaned Planck maps.
We first calculate their angular power spectra $v_{L}^{\mbox{\footnotesize\sc pla}}$, and the
corresponding statistical confidence level by comparison with the mean spectra
$\overline{v}_{L}^{\mbox{\footnotesize\sc g}}$.
We find that the dipolar term of the $v_{L}^{\mbox{\footnotesize\sc pla}}$ spectra is the dominant one, and that it appears robust under the foreground-cleaning procedures (i.e., a similar result is obtained for the four Planck maps), the cut-sky masks, the inhomogeneous pixel's noise, and different values of the estimator's parameter $N_{\mbox{\footnotesize hem}}$. Then we investigate several angular-scale intervals looking for the origin of this variance dipole phenomenon. At the end of this section we discuss the possibility that residual foregrounds could be causing it.
\subsection{Angular power spectra analyses of $V$-maps}%
\label{planck-maps}
The CMB maps analysed in this subsection contain the angular scales
$\ell \in[2,\, 1,\!000]$. We shall study two cases: the `pure signal' case and the `signal + inhomogeneous noise' case. Initially we use $N_{\mbox{\footnotesize side}} = 512$,
$N_{\mbox{\footnotesize hem}} = 3,\!072$, but to validate our results of this subsection we also consider other pixelization scheme's parameters as robustness tests.
The result of the application of our estimator on the four foreground-cleaned Planck maps can be observed in Fig.~\ref{fig2}, where we show the $V^{\mbox{\footnotesize\sc pla}}\!-$maps obtained using the four Planck maps and diverse masks.
A common interesting feature noticed in these $V\!-$maps is the strong dipolar signal, independent of the maps and masks used to produce the $V^{\mbox{\footnotesize\sc pla}}\!-$map. Similar results are obtained in all the other cases investigated, which we do not show in Fig.~\ref{fig2} to avoid repetition.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.29,angle=90]{figure2a.pdf}\vspace{2mm}
\includegraphics[scale=0.29,angle=90]{figure2b.pdf}
\includegraphics[scale=0.29,angle=90]{figure2c.pdf}\vspace{2mm}
\includegraphics[scale=0.29,angle=90]{figure2d.pdf}
\includegraphics[scale=0.29,angle=90]{figure2e.pdf}
\includegraphics[scale=0.29,angle=90]{figure2f.pdf}
\caption{\label{fig2}
Variance maps of four different foreground-cleaned Planck maps.
In the top row we have the $V\!-$maps obtained from the foreground-cleaned
{\sc smica} Planck map using their {\sc inp}mask (left) and {\sc val}mask
(right). Similarly, in the middle row we show the $V\!-$maps obtained from the foreground-cleaned {\sc nilc} Planck map using their {\sc inp}mask (left) and {\sc val}mask (right), respectively. Finally, in the bottom row we have the $V\!-$maps obtained from the foreground-cleaned {\sc sevem} (left) and {\sc CR} (right) Planck maps using their corresponding {\sc val}masks.}
\end{center}
\end{figure*}
In Fig.~\ref{fig3} we give a quantitative measure of the spectra of the
$V^{\mbox{\footnotesize\sc pla}}\!-$maps as compared with the average spectra of the
$V^{\mbox{\footnotesize\sc g}}\!-$maps (for simplicity we present only the {\sc smica} + {\sc val}mask case; the other cases show similar results). This comparison measures the possible departure of the Planck data from
the standard statistical scenario. In fact, an overall assessment of the statistical
significance of the spectrum $v_{L}^{\mbox{\footnotesize {\sc pla}}}$ as compared with the average spectra
$\overline{v}_{L}^{\mbox{\footnotesize\sc g}}$ data, given by the $\chi^2$ goodness-of-fit, supplies a
measure of the statistical features present in Planck data.
For instance, in Fig.~\ref{fig3} the goodness-of-fit test gives $\chi^2 \,=\,7.3$, for 9 d.o.f. (degrees of
freedom), which means a good agreement between Planck data and Gaussian MCs,
having in mind the large cosmic variance existent at these scales.
Similar numerical analyses can be obtained for other cases.
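Although the precise construction of the $\chi^2$ statistic is not spelled out here, one plausible reading (an assumption on our part, consistent with the quoted values $\chi^2 = 7.3$ and $p \simeq 0.61$ for 9 d.o.f.) is a diagonal $\chi^2$ built from the MC mean and dispersion of the $v_{L}$:
\begin{verbatim}
# A plausible (assumed) reading of the goodness-of-fit used in the text:
# a diagonal chi^2 built from the MC mean and dispersion of the v_L's.
import numpy as np
from scipy.stats import chi2

def chi2_pvalue(v_pla, v_mc, dof=9):  # v_mc: MC spectra, shape (1000, 10)
    mean, std = v_mc.mean(axis=0), v_mc.std(axis=0)
    x2 = np.sum(((v_pla - mean) / std)**2)
    return x2, chi2.sf(x2, dof)       # e.g. chi2 = 7.3 -> p ~ 0.61
\end{verbatim}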
As suggested by the Planck team~\cite{PLA-XXIV}, realistic experimental features such as galactic masks and anisotropic noise distribution should be used in Planck data analyses to test the robustness of the results. Moreover, because these noise maps have a large quadrupolar signal roughly aligned with the ecliptic, it is pertinent to evaluate the effect of such real anisotropic noise by comparing the cases with and without noise. For this, we calculate the $\chi^2$ values, for 9 d.o.f., for the four foreground-cleaned Planck maps in two situations: with and without the addition of their corresponding anisotropic noise maps, applying in both cases either the M82 or the U73 mask. Our results are shown in Table~\ref{table2}, where we emphasize that the $\chi^2$ values were not divided by the number of d.o.f.
The conclusion is that the Planck data, after being cut with either M82 or U73 masks, and independent of the addition of the inhomogeneous pixel's noise, are fully consistent with the Gaussian and statistically isotropy hypotheses of the standard cosmological model.
\begin{figure
\begin{center}
\mbox{\hspace{-1.cm}
\includegraphics[scale=0.52]%
{figure3.pdf}}
\caption{\label{fig3}
Angular power spectrum of the $V^{\mbox{\footnotesize\sc smica}}\!-$map, generated
applying our estimator to the {\sc smica} Planck map, after cutting the sky patch corresponding
to its {\sc val}mask.
The average spectrum $\overline{v}_{L}^{\mbox{\footnotesize\sc g}}$ generated from
Gaussian and statistically isotropic CMB data is represented as dots with 1$\sigma$
error bars. The $\chi^2$ goodness-of-fit test gives 7.3, for 9 d.o.f., corresponding to a $p$-value equal to 0.61, which leads us to conclude that this spectrum is fully consistent with the null hypothesis (i.e., a Gaussian and statistically isotropic universe).
The $V^{\mbox{\footnotesize\sc smica}}\!-$map for this case, i.e., {\sc smica} map
+ {\sc val}mask, is seen in the right panel of the first row in Fig.~\ref{fig2},
where the variance dipole points towards $(l, b) \,\simeq\, (245^{\circ},-35^{\circ})$.
}
\end{center}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{lcc}
\hline \hline
\ \ \ Planck map + mask \ \
& \ \ \ \ \ \ \ \ \ \ \ \ $\chi^2$ \ \ & \ \ \ \ \ \ \ \ \ $\chi^2_{\text \, inh. noise}$ \ \ \ \\
\hline
\ \ \ \ \ \ {\sc smica} + M82 & \ \ \ \ \ \ \ \ \ \ \ 8.6 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ \ 8.2 \ \ \ \ \ \\
\ \ \ \ \ \ {\sc smica} + U73 & \ \ \ \ \ \ \ \ \ \ \ 8.8 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ \ 8.6 \ \ \ \ \ \\
\ \ \ \ \ \ {\sc nilc} + M82 & \ \ \ \ \ \ \ \ \ \ \ 8.4 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ \ 8.1 \ \ \ \ \ \\
\ \ \ \ \ \ {\sc nilc} + U73 & \ \ \ \ \ \ \ \ \ \ \ 9.0 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ \ 8.7 \ \ \ \ \ \\
\ \ \ \ \ \ {\sc sevem} + M82 & \ \ \ \ \ \ \ \ \ \ \ 8.4 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ \ 8.1 \ \ \ \ \ \\
\ \ \ \ \ \ {\sc sevem} + U73 & \ \ \ \ \ \ \ \ \ \ \ 8.6 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ \ 8.3 \ \ \ \ \ \\
\ \ \ \ \ \ {\sc CR} + M82 & \ \ \ \ \ \ \ \ \ \ \ 11.6 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ 11.1 \ \ \ \ \ \\
\ \ \ \ \ \ {\sc CR} + U73 & \ \ \ \ \ \ \ \ \ \ \ 12.9 \ \ \ & \ \ \ \ \ \ \ \ \ \ \ 12.4 \ \ \ \ \ \\
\hline \hline
\end{tabular}
\end{center}
\caption{$\chi^2$ values, for 9 d.o.f.,
obtained when each $v_{L}^{\mbox{\footnotesize\sc pla}}$ spectrum is fitted to the
$\overline{v}_{L}^{\mbox{\footnotesize\sc g}}$ spectrum, considering the M82 and U73 masks,
in two situations: the `pure signal' case and the `signal + inhomogeneous noise' case.
The first (second) column corresponds to the case without (with) the addition of
inhomogeneous pixel's noise to the Planck maps before the variance analysis.
Our results show a good agreement between the large-angle spectra of
$V^{\mbox{\footnotesize\sc pla}}\!-$maps as compared with
$V^{\mbox{\footnotesize\sc g}}\!-$maps.
We stress that the $V^{\mbox{\footnotesize\sc g}}\!-$maps are not normally distributed.}
\label{table2}
\end{table}
One should note that the overall good agreement -- as evinced by the $\chi^2$ test
(Table~\ref{table2}) --
between the spectrum $v_{L}^{\mbox{\footnotesize\sc pla}}$ and
$\overline{v}_{L}^{\mbox{\footnotesize\sc g}}$ for the scale's range $\,L=1, \cdots, 10\,$,
does not exclude the possibility that a particular scale $L$ could be anomalous with respect to the Gaussian and statistically isotropic scenario.
Indeed, one observes in Fig.~\ref{fig3}, that the dipole $v_{1}^{\mbox{\footnotesize\sc smica}}$ is the largest multipole value of the spectra, being $\sim$ 50 times greater than the sum of the other multipoles. This dominant dipole term, $L = 1$, similarly observed in the other three Planck maps, reflects what is being observed in the $V^{\mbox{\footnotesize\sc pla}}\!-$maps displayed in
Fig.~\ref{fig2}.
Moreover, this dipolar variance asymmetry seems to be related to the anomalous variance distribution found in WMAP maps~\cite{Cruz11}, and more recently in Planck
data~\cite{PLA-XXIII,Akrami}.
For these reasons, this suspicious dipolar phenomenon deserves detailed angular scale's analyses, which shall be done in the next subsection.
To end this subsection, we notice that these results are robust under the four Planck foreground-cleaning procedures, the set of Planck masks (Table~\ref{table2}), the inhomogeneous pixel's noise (released by the Planck team), and the pixelization scheme parameters (that is, $N_{\mbox{\footnotesize side}} = 256;\, 512$,
$N_{\mbox{\footnotesize hem}}=768;\, 3,\!072$).
\subsection{Angular-scale analyses of the Variance dipole}%
\label{angular-scales}
The observed direction of the dipolar variance maps, which appears close to the hemispherical NS-asymmetry~\cite{Land05,Eriksen07}, raises the question of which CMB angular scales are related to this phenomenon.
Thus, we apply our variance estimator to the Planck and MC maps containing multipoles
in a given range: $\ell \in[\ell_\text{min},\ell_\text{max}]$, that is, only contributions in such an angular-scale interval. We consider the {\sc smica} + {\sc val}mask case. The resulting confidence levels and variance dipole directions are summarized in Table~\ref{table3}. Some of the $V^{\mbox{\footnotesize\sc pla}}\!-$maps corresponding to these angular-scale analyses are shown in Fig.~\ref{fig4}.
We found the following outcomes:
\begin{itemize}[leftmargin=0.3cm]
\item For angular scales in the interval $\ell\in[2,\, 1,\!000]$, i.e. $\ell_{\text{max}}$=1,000,
we found that the statistical significance of the variance dipole has only 83.2 \% CL, as compared
to the dipoles from the $V^{\mbox{\footnotesize\sc g}}\!-$maps, and points in the direction
$(l, b) \,\simeq\, (245^{\circ},-35^{\circ})$.
In other words, 168 dipole values, in the set of 1,000 $\{ v_{1}^{\mbox{\footnotesize\sc g}} \}$,
have a larger value than $v_{1}^{\mbox{\footnotesize\sc smica}}$.
\item For the scales $\ell\in[2,500]$ the variance dipole is, again, not statistically
significant, 82.8 \% CL; the direction being $(l, b) \,\simeq\, (245^{\circ},-37^{\circ})$.
\item For $\ell \in[2,40]$ we obtain a statistical confidence value of
71.0 \% CL, with the dipole direction towards $(l, b) \,\simeq\, (250^{\circ},-40^{\circ})$.
\item More interestingly, for the CMB scales $\ell \in[41,500]$ the phenomenon is more significant,
94.3 \% CL, i.e., $\sim 2 \sigma$, with direction towards $(l,b) \,\simeq\, (225^{\circ},-15^{\circ})$,
close to the NS-asymmetry phenomenon direction.
\item Remarkably, for the angular scales $\ell \in[4,500]$ the variance dipole
phenomenon is definitely statistically significant: 98.1 \% CL, and the dipolar direction
$(l,b) \,\simeq\, (220^{\circ},-32^{\circ})$ remains close to the NS-asymmetry phenomenon.
Notice that removing the $\ell= 2, 3$ multipoles increases the significance, turning it into a
statistically significant phenomenon: 82.8 \% $\rightarrow$ 98.1\%, contrary to what was
expected~\cite{Cruz11}.
\item Furthermore, for the case with only the quadrupole ($\ell=2$) plus octopole ($\ell=3$)
CMB components, we found a low statistical significance:
14.0 \% CL, with the dipole direction $(l,b) \,\simeq\, (310^{\circ},-25^{\circ})$.
In addition, the analysis for the largest scales $\ell\in[2,10]$ exhibits again a low significance:
44.3 \% CL, with dipole direction $(l,b) \,\simeq\, (260^{\circ},-47^{\circ})$.
\end{itemize}
Contrary to the NS hemispherical asymmetry, where the signal comes from the low CMB
multipoles (that is, large angular scales; see, e.g.,~\cite{Eriksen07}, and references
therein), our results show that the power of the variance dipole asymmetry phenomenon
does not come only from the lowest multipoles ($\ell \le 40$).
Instead, we observe that the contribution from small scales $\ell \in[41,500]$ is far from being
negligible. The statistical evidence indicates that the contribution of the CMB multipoles $\ell \in[4,500]$
to the variance dipole phenomenon is highly significant: $\sim 2.4 \sigma$.
\begin{table}[h]
\begin{center}
\begin{tabular}{lcc}
\hline \hline
\ \ CMB angular-scales\ \ & \ \ \ \ \ \ \ \ \ CL (\%) \ \ & \ \ \ \ \ \ $(l, b)$ \ \ \ \\
\hline
\ \ \ $\ell \in [2,\,1,\!000]$ & \ \ \ \ \ \ \ 83.2 \ \ \ & \ \ \ \ \ \ \ $(245^{\circ},-35^{\circ})$ \ \ \ \\
\ \ \ $\ell \in [2,500]$ & \ \ \ \ \ \ \ 82.8 \ \ \ & \ \ \ \ \ \ \ $(245^{\circ},-37^{\circ})$ \ \ \ \\
\ \ \ $\ell \in [2,40]$ & \ \ \ \ \ \ \ 71.0 \ \ \ & \ \ \ \ \ \ \ $(250^{\circ},-40^{\circ})$ \ \ \ \\
\ \ \ $\ell \in [41,500]$ & \ \ \ \ \ \ \ 94.3 \ \ \ & \ \ \ \ \ \ \ $(225^{\circ},-15^{\circ})$ \ \ \ \\
\ \ \ $\ell \in [4,40]$ & \ \ \ \ \ \ \ 92.2 \ \ \ & \ \ \ \ \ \ \ $(220^{\circ},-37^{\circ})$ \ \ \ \\
\ \ \ $\ell \in [4,500]$ & \ \ \ \ \ \ \ 98.1 \ \ \ & \ \ \ \ \ \ \ $(220^{\circ},-32^{\circ})$ \ \ \ \\
\ \ \ $\ell \in [2,10]$ & \ \ \ \ \ \ \ 44.3 \ \ \ & \ \ \ \ \ \ \ $(260^{\circ},-47^{\circ})$ \ \ \ \\
\ \ \ $\ell \in [2,3]$ & \ \ \ \ \ \ \ 14.0 \ \ \ & \ \ \ \ \ \ \ $(310^{\circ},-25^{\circ})$ \ \ \ \\
\hline \hline
\end{tabular}
\end{center}
\caption{Statistical angular-scale analyses, showing the confidence level and the variance dipole
direction for each interval investigated.
Remarkably, for $\ell \in[4,500]$ the variance dipole phenomenon is
highly significant: $\sim 2.4 \sigma$.
Some of the $V^{\mbox{\footnotesize\sc pla}}\!-$maps corresponding to these angular-scales
analyses are shown in Fig.~\ref{fig4}.}
\label{table3}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[scale=0.3,angle=90]{figure4a.pdf}\vspace{1mm}
\includegraphics[scale=0.3,angle=90]{figure4b.pdf}\vspace{1mm}
\includegraphics[scale=0.3,angle=90]{figure4c.pdf}
\caption{\label{fig4}
Variance dipole direction from the $V^{\mbox{\footnotesize\sc smica}}\!-$map,
obtained using its {\sc val}mask. In the upper panel we display the case $\ell \in[2, 500]$, in the
middle panel the case $\ell \in [4, 500]$, and in the bottom panel we show the $\ell \in[2, 3]$ case.
}
\end{center}
\end{figure}
\subsection{Masks and foreground residuals effects on Variance dipole}%
\label{foregrounds}
The four foreground-cleaned Planck maps are expected to be free of
contamination in the region outside their validation masks (i.e., outside the
{\sc val}masks)~\cite{PLA-XII}.
Now we analyse the effect of the cut-sky masks on the variance dipole values obtained from different foreground-cleaned maps. In Fig.~\ref{fig5} it is shown that, for different values of $f_{\mbox{\footnotesize sky}}$, the mean variance dipole from simulations, $\overline{v}_{1}^{\mbox{\footnotesize\sc g}}$, remains almost constant under the different cut-sky masks employed (see Table~\ref{table1}).
However, this plot also shows that the values $v_{1}^{\mbox{\footnotesize\sc pla}}$ decrease when larger masks are used. Two notable pieces of information are contained in this plot. The first one points against the foreground-residuals hypothesis: because all foreground-cleaned maps behave identically when the same mask
is used in their analysis, this is a strong indication that foreground residuals are absent in these maps. Second, from Fig.~\ref{fig5} there is a clear inference that the intensity of the dipole value from the Planck maps is concentrated near the galactic plane, and for this reason much power is cut off when larger masks are used.
We now investigate the frequency dependence of the $V^{\mbox{\footnotesize\sc pla}}\!-$maps, since it is well-known that galactic foregrounds depend on the electromagnetic
frequency~\cite{PLA-XII} and therefore their foreground residuals, if present,
would manifest differently according to the individual frequency of the map under study.
In accordance with this, and using the most severe cut-sky, i.e., the U73 mask, we produced the $V^{\,\nu}\!-$maps for the individual frequency Planck maps of 70, 100, and 143 GHz. We found that their dipole values are quite similar: 2.44, 2.33, and
$1.85 \times 10^{5} \mu \mbox{K}^2$, respectively.
Moreover, the dipole directions are also analogous:
$(l, b) \,\simeq\, (245^{\circ},-35^{\circ})$,
$(l, b) \,\simeq\, (240^{\circ},-40^{\circ})$,
and $(l, b) \,\simeq\, (235^{\circ},-35^{\circ})$, respectively.
Therefore, the fact that the variance dipoles $v_{1}^{\,\nu}$ for the individual-frequency Planck maps, $\nu = 70, 100, 143$ GHz, are very similar suggests that galactic foreground residuals are not causing the variance dipole effect.
Our conclusion is, in accordance with~\cite{PLA-XXIII} but in disagreement with a previous report~\cite{Cruz11}, that galactic foregrounds are unlikely to be the cause of the variance dipole phenomenon. Although this conclusion is not new~\cite{PLA-XXIII}, its confirmation by a different statistical procedure is reassuring.
To end this subsection, we find it interesting to explain why removing the quadrupole and octopole components from the data and simulations increases the statistical significance, instead of decreasing it as claimed in~\cite{Cruz11}.
The behavior of two quantities is relevant in this examination: the intensity,
$v_{1}^{\mbox{\footnotesize\sc pla}}$, and the direction, $(l, b)$, of the variance dipole for the angular scales under analysis.
Regarding the role of the dipole direction: going from the case $\ell \in[2,500]$ to the case $\ell \in [4,500]$ the dipole direction changes from $(245^{\circ},-37^{\circ})$ to $(220^{\circ},-32^{\circ})$, that is, the net effect on the dipole direction after removing the $\ell = 2 - 3$ components is to point closer to the galactic plane region (see Table~\ref{table3}). Regarding the intensity of the dipole: as commented above, from Fig.~\ref{fig5} one deduces that the strength of the dipoles $v_{1}^{\mbox{\footnotesize\sc pla}}$ is concentrated near the galactic plane~\footnote{It was shown in~\cite{Cruz11} that the variance of the CMB data is not stable
against the Galactic masks used.}, and due to this fact much power is cut off when larger masks are used. In this way, considering galactic cuts like $|b| \simeq 30^{\circ}$, used in~\cite{Cruz11}, it is not difficult to understand that the dipole intensity is cut off by a larger fraction in the case
$\ell \in[4,500]$ as compared with the original case $\ell \in[2,500]$.
Differently from those galactic cuts, the {\sc val}mask used here is not so large near the region
$(l, b) \sim (220^{\circ},-32^{\circ})$, so in our case the dipole intensity cut is moderate.
It is also worth illustrating the role of the CMB multipole intensities in the growth of the statistical
confidence level when going from $\ell \in[2,500]$ (82.8\% CL) to the case $\ell\in[4,500]$ (98.1\% CL).
The reason seems to lie in the large difference between the low multipole moments
(in $\mu K^2$ units) $C_2 \simeq 250$ and $C_3 \simeq 480$ in the Planck CMB maps, as
compared with the corresponding values in the $\Lambda$CDM and MC spectra, namely,
$C_2^{\Lambda\text{CDM}} = 1158$ and
$C_3^{\Lambda\text{CDM}} = 545$~\footnote{http://pla.esac.esa.int/pla/aio/planckProducts.html}.
Because the quadrupole and octopole are markedly lower for the Planck maps as compared with those of the MCs produced from the $\Lambda$CDM spectrum, it makes intuitive sense that the variance dipoles from data and simulations are better compared in the range $\ell \in[4,500]$ than in the interval $\ell \in[2,500]$.
To test this intuitive argument, we make the following experiment.
Consider the {\sc smica} map with its $\{ a_{2 m} \}$ and $\{ a_{3 m} \}$ components now multiplied by a numerical factor in such a way that its new quadrupole and octopole moments are $C_2^{\text{new}} = C_2^{\Lambda\text{CDM}}$ and
$C_3^{\text{new}} = C_3^{\Lambda\text{CDM}}$.
Then, we repeat the variance analysis for this map, considering the angular scales
$\ell \in[2,500]$, now finding that the statistical confidence level is 98.3 \%, very similar to the 98.1 \% CL found for the case $\ell \in[4,500]$.
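A minimal sketch of this rescaling experiment, again assuming {\tt healpy} and using the $\Lambda$CDM moments quoted above (the helper name is ours, not part of the released pipeline), reads:
\begin{verbatim}
# Sketch: rescale a_{2m} and a_{3m} so that C_2 and C_3 match
# target values (assumes the healpy package).
import healpy as hp
import numpy as np

def rescale_low_ells(cmb_map, targets, nside=256):
    # targets: dict {ell: C_ell_new}, same units as the map
    lmax = 3 * nside - 1
    alm = hp.map2alm(cmb_map, lmax=lmax)
    cl = hp.alm2cl(alm)
    fl = np.ones(lmax + 1)
    for ell, c_new in targets.items():
        fl[ell] = np.sqrt(c_new / cl[ell])
    return hp.alm2map(hp.almxfl(alm, fl), nside)

# Hypothetical usage with the LambdaCDM moments (muK^2):
# new_map = rescale_low_ells(smica_map, {2: 1158.0, 3: 545.0})
\end{verbatim}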
\begin{figure}
\begin{center}
\hspace{-0.85cm}
\includegraphics[scale=0.48]%
{figure5.pdf}
\caption{\label{fig5}
Plot of the $v_{1}^{\mbox{\footnotesize\sc pla}}$ and
$\overline{v}_{1}^{\mbox{\footnotesize\sc g}}$
values vs. the $f_{\mbox{\footnotesize sky}}$ (see Table~\ref{table1} for details).
As observed, given a mask (i.e., $f_{\mbox{\footnotesize sky}}$), all the four
foreground-cleaned Planck maps have almost equal
$v_{1}^{\mbox{\footnotesize\sc pla}}$ values.
\,In other words, all Planck maps produce $V^{\mbox{\footnotesize\sc pla}}\!-$maps
with the same dipole when the same mask is applied to them.}
\end{center}
\end{figure}
\section{Concluding remarks} \label{conclusion}
The Gaussian and statistically isotropic scenario, on which the $\Lambda$CDM concordance model is based, can be rigorously tested with precise CMB data from the four foreground-cleaned Planck maps. Although questions regarding the statistical homogeneity of the universe's large-scale structure await future large and deep surveys~\cite{JPAS}, other stimulating questions can be addressed with the
highly precise CMB data from the Planck satellite.
Using a directional variance estimator, based on the {\em variance} statistical moment, we performed statistical analyses of the four foreground-cleaned Planck maps in several angular-scale intervals. In all the intervals investigated our results reveal a net dipolar distribution. In particular, in the angular
scales $\ell \in[4,40]$ and $\ell \in[41,500]$ the significance is moderate, $\sim 2 \sigma$. Moreover, for the multipoles range $\ell \in[4,500]$, the result is highly significant $\sim 2.4 \sigma$ (see Table~\ref{table3}), with the variance dipole pointing in the direction $(l, b) \,\simeq\, (220^{\circ},-32^{\circ})$, close to the direction of the NS-asymmetry phenomenon.
Additionally, we found that the Planck variance dipole magnitude takes lower values for larger sky-cut masks, independently of the map analysed.
This fact is consistent with the result that the variance dipole direction, for all the angular-scale intervals analysed, points relatively near the galactic plane (see Table~\ref{table3}): in such a case, larger masks (i.e., lower $f_{\mbox{\footnotesize sky}}$) cut off larger regions near this plane where most of the power is located.
Moreover, we found that foreground residuals are absent in our analyses because, considering the same mask, all the foreground-cleaned maps have essentially the same variance dipole value (with a slight dispersion of $\pm 3\%$, see Fig.~\ref{fig5}).
On the other hand, this variance dipole remains robust in magnitude and direction against frequency dependence, in the 70, 100, and 143 GHz maps, disfavouring the foreground-residuals cause, in agreement with~\cite{PLA-XXIII}.
Furthermore, an important part of the analyses of the foreground-cleaned Planck maps that validates our results is the set of robustness tests, where such examinations considered realistic features of the data like the inhomogeneous noise maps and galactic cut-sky masks (information released by the {\em Planck collaboration}~\footnote{Based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.}). The inhomogeneous noise comes out as a result of the non-uniform way the CMB sky is measured by the Planck probe. In fact, the regions near the ecliptic poles were observed by the probe many more times than others. The inhomogeneous pixel noise data were released together with the foreground-cleaned Planck
maps and are crucial to assess their influence on the hemispherical asymmetry
found in the $V^{\mbox{\footnotesize\sc pla}}\!-$maps. In fact, although this noise has a small magnitude, i.e. $|T| \lesssim 18 \, \mu$K at the 1$\sigma$ level, the inhomogeneous noise has to be included in the analyses in order to quantify its
impact on the results and, most importantly, to test the robustness of our outcomes.
Our results show (see Table~\ref{table2}) that including inhomogeneous pixel noise in the data analyses does not appreciably modify our confidence-level calculations. For instance, for the {\sc smica+valmask} map, with and without noise, we obtain: $v_1^{\text{smica+noise}} / v_1^{\text{smica}} = 0.985$.
Summarizing, we conclude that our directional variance estimator shows a clear dipolar structure in the four foreground-cleaned and the individual-frequency Planck maps, results that appear robust against the component-separation algorithms, the various Planck masks, the map pixelization parameters, and the addition of inhomogeneous pixel noise.
The magnitude of this dipole is highly significant, $\sim 2.4 \sigma$, in the angular scale interval $\ell \in[4,500]$, attaining less significant values in the scales $\ell \in[4,40]$ and $\ell \in[41,500]$ (Table~\ref{table3}).
We also find that this significance is not so high in the range $\ell \in[2,500]$ just because the $C_2$ and $C_3$ values in the Planck CMB maps are manifestly lower than the corresponding values in the MC maps we use for the analyses, which are based on the $\Lambda$CDM spectrum. As a matter of fact, if we increase these multipole values in the Planck maps to be equal to those in the MC data, we find that the statistical significance in this interval increases from
82.8\% to 98.3\%, i.e., $\sim 2.4 \sigma$.
\vspace{4mm}
\begin{acknowledgments}
\noindent
TSP and AB acknowledge the support of Conselho Nacional de Desenvolvimento
Cient\'{\i}fico e Tecnol\'{o}gico (CNPq) -- Brasil.
We acknowledge the use of the Planck data.
Some of the results in this paper were derived using the HEALPix
package~\cite{Gorski05}.
\end{acknowledgments}
\bibliographystyle{JHEP}
\phantomsection\addcontentsline{toc}{section}{\refname}
\section{Introduction}
\IEEEPARstart{S}{uperconducting} devices are revolutionizing a wide range of research and technological fields including quantum computing \cite{Barends2014SuperconductingTolerance, Barends2016DigitizedCircuit, Ofek2016ExtendingCircuits, Gu2017MicrowaveCircuits}, nanowire single-photon detectors \cite{Wollman2019KilopixelDetectors}, X-ray microcalorimeters \cite{Ulbricht2015}, submillimeter bolometers \cite{Holland2013SCUBA-2:Telescope}, and Microwave Kinetic Inductance Detectors (MKIDs) \cite{Calvo2016, Mazin2013, Meeker2015, Meeker2018DARKNESS:Astronomy}. These applications require increasingly large superconducting arrays, which present the common technical challenge of transporting microwave signals from the cold device stage to room temperature without losing or corrupting the signal or conducting excess heat to the cold stage. Low thermal conductivity is especially important for detector arrays in the field or in space using adiabatic demagnetization refrigerators (ADRs) which have less cooling power than dilution refrigerators but offer smaller form factors and simpler operation.
Commercially available superconducting coaxial cables are often used below 4 K; however, they are either semi-rigid and cumbersome to use in small cryogenic volumes, have large cross-sections yielding excessive heat loads, or both. Another option is flexible superconducting circuits fabricated using lithography techniques. These laminated cable technologies boast low thermal conductivity and high-density interconnects but lack the length, durability, and signal isolation needed for many applications \cite{Pappas2016High-DensityACTPol, Tuckerman2016FlexibleApplications, Zou2019Low-lossLines, Walter2018, Gupta2019Thin-filmCables}.
\begin{figure}
\includegraphics[width=3.4in]{smith1.pdf}
\caption{Photographs showing a.) Close-up of cable end where NbTi center conductors connect to center trace of GCPW transition board via stainless steel capillary tubing. b.) Fully assembled cable end with protruding micro spot-welded, shared Nb47Ti ground shield. c.) Fully assembled cable spanning the 3.4 K stage and the 90 mK cold ADR stage with a thermal sink at 800 mK halfway down the length of the cable in the MKID Exoplanet Camera (MEC) experiment\cite{Walter2018, Walter2020TheSCExAO}. }\label{fig:picture}
\end{figure}
An optimal solution should be made from a superconducting material with a transition temperature well above 4 K to maximize transmission, and should have an encompassing ground shield to minimize cross talk and pickup. It must have a small cross-section and be made from a low thermal conductivity material. Lastly, it should be flexible, durable, and ideally cheap and easy to manufacture. Such a structure is difficult to realize because few materials have the desired properties, and those that do are often difficult to work with and to interface with connectors.
In this paper we present a superconducting FLexible coAXial ribbon cable (FLAX) which uniquely satisfies the aforementioned criteria. We developed this solution to carry broadband signals for 10 000+ pixel multiplexed Microwave Kinetic Inductance Detector (MKID) arrays for exoplanet detection operating at 90 mK \cite{Walter2020TheSCExAO, Walter2019MEC:Telescope}. We expect this technology to be especially relevant for superconducting technologies requiring high detector isolation and low thermal load.
\begin{figure}
\includegraphics[width=3.5in]{smith2.pdf}
\caption{Exploded view of cable-end assembly diagram with key dimensions shown in mm. Drawing is not to scale. From top/back to bottom/front: G3PO half-shell connectors are soldered to the transition board. Two ground tabs with via borders and an intervening signal trace create a 50 $\Omega$ grounded coplanar waveguide. The FLAX cable center conductors are crimped into stainless steel capillary tubing and soldered to the center traces. The FLAX ground shield is spot welded to the ground tabs. The cable cross-section shows the PFA (blue) insulated NbTi (grey) wire set in semicylindrical crimps made in the shared Nb47Ti foil ground shield. The two sides of the shield are mechanically and electrically bonded with micro spot welds less than $\lambda/16\simeq$ 2 mm (at 8 GHz) apart which run in-between the traces down the length of the cable.}\label{fig:Crosssection}
\end{figure}
\section{FLAX Design and Manufacture}
The FLAX cables are fabricated using ${\diameter}$0.076 mm [0.003"] NbTi center conductor insulated with ${\diameter}$0.28 mm [0.011"] PFA wire obtained from Supercon\footnote{Supercon Inc., 830 Boston Turnpike, Shrewsbury, MA.}. The shared outer coaxial conductor is formed with 0.025 mm [0.001"] Nb47Ti foil purchased and rolled by ATI\footnote{ATI Specialty Alloys \& Components, 1600 Old Salem Rd., Albany, OR.} and HPM\footnote{Hamilton Precision Metals, 1780 Rohrerstown Rd., Lancaster, PA.}. The wires are held in ten, ${\diameter}$0.28 mm semicylindrical crimps made 3.56 mm apart in the foil to achieve a $\sim$50 $\Omega$ characteristic impedance and 3.56 mm standard trace pitch density used by G3PO connectors available from Corning Gilbert\footnote{Corning Optical Communications, 4200 Corning Place, Charlotte, NC.} (compatible with SMP-S) (see Fig.~\ref{fig:picture},~\ref{fig:Crosssection}). The two sides of the ground shield are mechanically and electrically bonded by micro spot welds which run the length of the cable between each trace. The welds are approximately every 2 mm which is less than $\lambda/16=$ 2.3 mm at 8 GHz (see Fig.~\ref{fig:picture},~\ref{fig:Crosssection}).
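The $\sim$50 $\Omega$ target is consistent with the ideal coaxial-line estimate $Z_0 = (59.96/\sqrt{\epsilon_r})\ln(D/d)$. A minimal sketch of this check, assuming $\epsilon_r \approx 2.06$ for PFA and treating the semicylindrical crimp as a true coaxial outer conductor:
\begin{verbatim}
# Sketch: ideal coax impedance Z0 = 59.96/sqrt(eps_r) * ln(D/d).
# The crimped shield is approximated as a perfect coaxial outer
# conductor, and eps_r ~ 2.06 for PFA is an assumed value.
import math

def coax_z0(d_inner_mm, d_outer_mm, eps_r):
    return 59.96 / math.sqrt(eps_r) * math.log(d_outer_mm / d_inner_mm)

print(coax_z0(0.076, 0.28, 2.06))  # ~54 Ohm, near the 50 Ohm target
\end{verbatim}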
At the ends of the cable, the protruding center conductors are threaded into ${\diameter}$1.6 mm, 0.13 mm thick stainless steel capillary tubing. The tubing is crimped onto the center conductor before the assembly is soldered to the center traces of the transition board using a stainless steel soldering flux (see Fig.~\ref{fig:picture}a,~\ref{fig:Crosssection}). The transition board is a 0.25 mm thick RT/Duroid6010LM PCB with 50 $\Omega$ grounded coplanar waveguide (GCPW) geometry for increased signal isolation. Between each trace, the Nb47Ti outer conductor foil is micro spot welded to the ground tabs of the transition board while surface mount coaxial G3PO push-on connectors are soldered to the other end of the GCPW (Fig.~\ref{fig:picture}a). The cable end assembly is clamped in a 3$\times$7 cm gold-plated copper box which provides strain relief and allows for easy push-on connection of all ten traces with G3PO blind-mate bullet connectors (Fig.~\ref{fig:picture}b).
\section{Performance Characterization}
Transmission loss ($S_{21}$), cross talk ($S_{41}$), and time domain reflectometry measurements were performed in a dilution refrigerator under vacuum at 4 K with a Keysight N9917A network analyzer. The device under test circuit consisted of the assembled FLAX cable with a 3 dB cryo-attenuator obtained from XMA\footnote{XMA Corporation-Omni Spectra, 7 Perimeter Road, Manchester, NH. \vskip1pt \hskip4pt P/N: 2082-6040-03-CRYO} and a 25 cm nonmagnetic SMA-to-G3PO adapter coaxial cable obtained from Koaxis\footnote{Koaxis RF Cable Assemblies, 2081 Lucon Road, Schwenksville, PA. \vskip1pt \hskip4pt P/N: AO10-CC047C-YO18} on either end (see Fig.~\ref{fig:DUT}). A Crystek\footnote{Obtained through Digikey. P/N: CCSMA18-MM-141-12} braided, semi-rigid coax through line was used as a calibration reference. Repeated handling through the testing process revealed the cables have a minimum inside bend radius close to 2 mm and are robust to cryogenic cycling.
\begin{figure}
\includegraphics[width=3.4in]{smith3.pdf}
\caption{Schematic diagram depicting FLAX attachment to the G3PO push-on connectors via a capillary tube soldered to a coplanar waveguide transition board and the device under test (DUT) circuit at 4 K. }\label{fig:DUT}
\end{figure}
\subsection{Transmission}
\begin{figure*}
\includegraphics[width=\textwidth]{smith4.pdf}
\caption{Top: $S_{21}$ (transmission) measurement of sample FLAX traces from various cables at 4 K. Bottom: $S_{41}$ (nearest neighbor forward cross talk) measurement of sample FLAX traces from the same cable at 4 K. The average cross talk level is given by the dashed red line.}\label{fig:transmission}
\end{figure*}
Ripples in the FLAX transmission suggest standing wave modes are present on the traces which is indicative of an impedance mismatch between the FLAX cable and the 50 $\Omega$ circuit (see Fig.~\ref{fig:transmission}). The transmission ripples are not uniformly harmonic which suggests the impedance is changing with length along each trace. This could be explained by flaws in micro spot welding placements along the cable which determine the distance between the inner and outer coaxial conductors and therefore the characteristic impedance. The characteristic impedance of the traces were probed using a time domain reflectometry measurement adjusted for loss (see \cite{Gisin2017CharacterizingInstrument} for details on loss correction) which confirmed the impedance varies from 55--65$\pm$3 $\Omega$ along the traces (see Fig.~\ref{fig:TDR}). This mismatch at various points in the cable launches reflected waves which contribute to the observed ripple.
We hypothesize an additional factor contributing to the impedance mismatch originates in the intermediate regions of the cable ends where the center conductor exits the foil sheath and transitions onto the GCPW transition board (see Fig.~\ref{fig:picture}, a.). After exiting the ground shield, the exposed wire can act as an inductor. Previous work done by our group shows inductance on the input and output of a transmission line causes ripples which increase in magnitude at higher frequencies \cite{Walter2018}. This is because the impedance of a perfect inductor grows linearly with frequency, i.e., $Z_L = j\omega L$. With each successive cable iteration, manufacturing techniques improved, the length of exposed wire was shortened, and the frequency-dependent ripple amplitude diminished. The use of a capillary tube to pin the hair-like center conductor close to the transition board dramatically reduced the cable end inductance.
\begin{figure}
\includegraphics[width=3.4in]{smith5.pdf}
\caption{A typical time domain reflectometry (TDR) measurement of the cryogenic signal path showing the characteristic impedance at lengths along the signal path. Commercially available 50 $\Omega$ standard coaxial cables border the FLAX cable highlighted by the double arrow. Note the TDR measurement is accurate to $\pm$ 3 $\Omega$. }\label{fig:TDR}
\end{figure}
Using the peak of the ripple, we report the loss of the 30 cm cable at 8 GHz to be roughly 1 dB which is slightly higher than the 0.5 dB/m loss reported by commercially available superconducting coaxial cables \cite{CryoCoaxCryogenicCryoCoax, KEYCOMSuperconductingCables}. This difference cannot be explained by a difference in cable materials or geometry \cite{Emerson2017PTFEDifferences}. Likely, the source of our additional loss is the impedance mismatch caused by manufacturing imperfections which produce reflections in the cable and off the ends as described above.
\begin{table*}[!htbp]
\caption{ Summary of thermal, mechanical, and microwave properties of superconducting coaxial ribbon cable, laminated microstrip cable, and best commercially available superconducting coaxial cables}\label{table:properties}
\vskip2pt
\centerline{
\vbox{\offinterlineskip
\hrule
\halign{&\vrule#&
\strut\quad#\hfil\quad\cr
&\strut&&\multispan3\hfil {\bf Thermal Load\footnote{} }\hfil&&\multispan9\hfil {\bf Mechanical}\hfil &&\multispan3\hfil {\bf Microwave}\hfil &\cr
&{\bf Cable}&&\multispan3\hfil per trace [nW] \hfil&&\multispan9\hfil All Dimensions [mm] \hfil && \multispan3\hfil Values at 8GHz \hfil &\cr
& &&\multispan3\hfil 100mK to \hfil&& Trace && OD && Min. Inside && Conductor && Dielectric && Cross Talk \hfil &&
Attenuation\footnote{} &\cr
&\omit && 1 K & &4 K && Pitch && (${\diameter}$) && Bend Radius && Material && Material && [dB] && [dB] &\cr
height2pt&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&\cr
\noalign{\hrule}
height2pt&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&&\omit&\cr
& FLAX && $16$ && $800$ && $3.556$ && $0.376$ && $2$ && Nb47Ti && PFA && $-60$ && $1$ &\cr
& CryoCoax && $26$ && $1400$ && $>$13 && $0.900$ && $3.2$ && NbTi && PTFE && N/A && $<0.5$ &\cr
& KEYCOM && $34$ && $1800$ && $>$13 && $0.860$ && $8$ && NbTi && PTFE && N/A && $<0.5$ &\cr
& Nikaflex && $16$ && $460$ && $3.556$ && $0.198$\footnote{} && $6.4$ && Nb47Ti && Nikaflex\footnote{} && $-25$ && $1$ &\cr}
\hrule}}
\hskip20pt \footnotesize{$^8$ Computed using dimensions available from \cite{CryoCoaxCryogenicCryoCoax, KEYCOMSuperconductingCables, Walter2018} and assuming a cable length of 30 cm. }
\vskip1pt \hskip20pt \footnotesize{$^9$ Estimated with ripple peak.}
\vskip1pt \hskip17pt \footnotesize{$^{10}$ For the microstrip geometry this is the total cable thickness.}
\vskip1pt \hskip17pt \footnotesize{$^{11}$ Kapton polyimide film manufactured by Dupont, see \cite{Walter2018} for details.}
\end{table*}
\subsection{Cross Talk}
We found the average nearest-neighbor forward cross talk to be -60 dB (see Fig.~\ref{fig:transmission}). This is roughly 30 dB lower than what we previously achieved using flexible laminated NbTi-on-Kapton microstrip cables \cite{Walter2018}. Since the cable's installation in the MKID Exoplanet Camera (MEC) at Subaru Observatory, this enhanced isolation has increased our pixel yield $\sim$20\% \cite{Walter2020TheSCExAO}. We suspect this large improvement arises because the exposed microstrip geometry allows trace-to-trace coupling, whereas the coaxial nature of the FLAX shields the center conductors, thereby preventing signal corruption. In early iterations of the cable, we found that infrequent or failed micro spot welds in the ground shield led to much higher levels of cross talk. This leads us to conclude that incorporating micro spot welds less than $\lambda/16$ apart between the traces reduces electromagnetic coupling.
\subsection{Thermal Conductivity}
\begin{figure}
\includegraphics[width=3.4in]{smith6.pdf}
\caption{We computed a cable thermal conductivity $G(T)$ in units of $\mu \mathrm{W\,cm\,K}^{-1}$ by summing the thermal conductivity of each constituent material weighted by its cross-section \cite{Kushino2005ThermalDetectors, Walter2018}. The cable previously developed by our lab (Nikaflex, gold) is compared with the subject of this paper (FLAX, salmon), and two commercial options by KEYCOM (P/N: NbTiNbTi034, burgundy) and Cryocoax (P/N 5139-P1NN-611-100P, pink). Solid lines are computed using literature values for Nb47Ti\cite{Daal2019PropertiesApplications, Olson1993}, PTFE\cite{Kushino2005ThermalDetectors}, Nikaflex (Kapton polyimide film)\cite{Walter2018, Kellaris2014Sub-kelvinExperiments}, and Pyralux \cite{Daal2019PropertiesApplications}. PTFE values were used to estimate the PFA dielectric in the FLAX cable\cite{Emerson2017PTFEDifferences}. Dashed lines indicate extrapolation.}\label{fig:heatload}
\end{figure}
Following previous convention, a cable thermal conductivity, $G(T)$, was computed by summing literature values of constituent materials weighted by their cross-sections (see Fig.~\ref{fig:heatload}) \cite{Kushino2005ThermalDetectors,Walter2018}. We compare our superconducting coaxial ribbon cable to two commercially available superconducting coaxial cables as well as our lab's previously developed laminated NbTi-on-Kapton microstrip cables \cite{Walter2018}. We estimate the thermal conductivity of the PFA dielectric present in the flexible coaxial ribbon cables using PTFE; the same dielectric used in the two commercial solutions \cite{Emerson2017PTFEDifferences}. The smallest commercially available superconducting coaxial cables from KEYCOM\footnote{KEYCOM Corp. 3-40-2 Minamiotsuka,Toshima-ku Tokyo. \vskip1pt \hskip5pt P/N: NbTiNbTi034} and CryoCoax\footnote{CryoCoax - Intelliconnect, 448 Old Lantana Road, Crossville, TN. \vskip1pt \hskip5ptP/N: 5139-P1NN-611-100P} were chosen for comparison. The electrical and thermal properties of the cables are summarized in table~\ref{table:properties}.
The heat load from one temperature stage to another can be computed by integrating values in Fig.~\ref{fig:heatload} from $T_1$ to $T_2$ ($T_1<T_2$) and dividing by the cable length. The ten-trace FLAX cables are currently installed in the MEC experiment where they span 33 cm from the 3.4 K stage to the 90 mK cold ADR stage with a thermal sink at 800 mK about halfway down the length of the cable \cite{Walter2019MEC:Telescope}. We estimate they generate a thermal load of $\sim$200 nW on the 90 mK cold ADR stage. This is about equivalent to the thermal load created by the Nikaflex cables and approximately half the computed heat load of either commercial option.
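A sketch of this estimate is given below; the power-law form of $G(T)$ is a placeholder to be replaced by fits to the literature values plotted in Fig.~\ref{fig:heatload}:
\begin{verbatim}
# Sketch: per-cable heat load Q = (1/L) * int_{T1}^{T2} G(T) dT.
# Each material contributes kappa(T)*A, with kappa modeled here by
# a placeholder power law a*T**n in uW/(cm*K) and A the
# cross-section in cm^2, so G has units of uW*cm/K.
from scipy.integrate import quad

def G_total(T, materials):
    # materials: list of (area_cm2, a, n) tuples
    return sum(area * a * T**n for area, a, n in materials)

def heat_load_uW(T1, T2, length_cm, materials):
    val, _ = quad(lambda T: G_total(T, materials), T1, T2)
    return val / length_cm

# Hypothetical usage for a 33 cm span between 0.8 K and 3.4 K,
# with made-up coefficients:
# print(heat_load_uW(0.8, 3.4, 33.0, materials=[(1e-4, 0.5, 1.8)]))
\end{verbatim}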
\section{Conclusion}
We have manufactured a superconducting flexible coaxial cable capable of delivering microwave signals between temperature stages with minimal loss, cross talk, and heat conduction. Strong signal isolation is especially important for our application of moving 4--8 GHz signals servicing 10 000+ multiplexed sensors across temperature stages. The FLAX cable represents a 30 dB improvement in cross talk as compared to our group's previously developed NbTi-on-Kapton microstrip cables. This enhanced isolation facilitated a $\sim$20\% increase in MKID pixel yield in the MEC experiment\cite{Walter2020TheSCExAO}. We expect these results will be especially useful for high-density microwave superconducting detector arrays requiring strong signal isolation.
The cable technology presented in this paper also has very low thermal conductivity. For a given thermal budget, the FLAX cables allow for twice as many detectors as the leading commercial option. The reduced heat load combined with the push-on, small form factor connectors and reduced trace pitch allow for increased detector density in a cryogenic system.
We found an attenuation of $1$ dB at 8 GHz with $\sim$3 dB ripples which is at worst 2x more loss than commercial options. This magnitude of ripples and loss do not impact our array on the input side as we can drive microwave resonators (MKIDs) located at transmission dips with higher power than their frequency neighbors. However, these features degrade the overall signal to noise ratio on the output. Ripples and loss may become prohibitive for systems operating at frequencies over 8 GHz or systems constrained by amplifier dynamic range. Insertion loss and ripples can be reduced by improving manufacturing precision in the forming of the NbTi foil crimps and location of micro spot welds. Alternative methods to join the push on connectors and traces, e.g., brush plating the NbTi center conductor with an easily solderable material such as nickel may also improve the impedance match.
Lastly, we note these cables are relatively easy to fabricate. Many components, most notably the fine NbTi center conductor wire, are commercially available. All cable iterations were manufactured in-house at the University of California, Santa Barbara. A ten-trace FLAX cable can be assembled in two days. Overall, we find this cable technology to be superior to commercial options for our applications building high-density superconducting detector arrays.
\section*{Acknowledgment}
J. P. Smith is supported by a NASA Space Technology Research Fellowship under grant number 80NSSC19K1126.
\bibliographystyle{IEEEtran}
\section{Introduction}
Graphene is a 2-dimensional honeycomb lattice of single atomic
carbon layer and has a special band structures. With more and more
experimental discoveries and theoretical
predictions\cite{geim,zhang,zhang2,VPG1,peres,louie}, there is
currently a intense interest on electronic properties on the
graphene sheet. Especially the spin Hall effect(SHE) has the
potential to provide a purely electrical means to control the spins
of electron in the absence of non-ferromagnetic materials and
magnetic field\cite{sheng}. This is because the spin-orbit
interaction in the Graphene exerts a torque on the spin of electron
whose precessing leads to a spin polarized current. In a four probe
device, this spin polarized current can lead to a pure spin current
without accompanying charge current\cite{hank}. It has been proposed
by Haldane\cite{haldane} that a quantum Hall effect may exist in the
absence of magnetic field. Similarly, integer quantum spin-Hall
effect can exist on a honeycomb lattice when the intrinsic spin
orbit interaction is present\cite{sheng,kane}. In the presence of
disorder the charge conductance of mesoscopic conductors show
universal features with a universal conductance
fluctuation\cite{lee85} and the spin-Hall conductance also
fluctuates with a universal value\cite{ren} in the presence of spin
orbit interaction. The presence of disorder can also destroy the
integer quantum spin-Hall effect and quantum Hall
effect\cite{sheng1} for a Graphene system with intrinsic spin orbit
interaction\cite{sheng}. Hence it is of interest to map out the
phase diagram for the integer quantum spin-Hall effect. In this
paper, we investigate the disorder effect on the spin Hall current
for a four-probe Graphene system in the presence of intrinsic and/or
Rashba SO interactions, denoted as $V_{so}$ and $V_r$, respectively.
For such a system, the conventional transfer matrix method can not
be used. So the direct matrix inversion method must be used to
obtain the Green's function that is needed for the transport
properties. As a result, the simulation of a multi-probe system
using the direct method is very calculational demanding.
In this paper, we developed an algorithm based on the idea of
transfer matrix that is much faster than the direct method. As an
application, we have numerically mapped out the phase diagram for a
two dimensional honeycomb lattice in the presence of the intrinsic
and/or Rashba SO interactions and disorders. When turning on the
Rashba SO interaction, we found that the energy gap needed for the
IQSHE is $|E|<t$ for $V_{so} \ge 0.2t$ and decreases linearly when
$V_{so}<0.2t$. In the presence of Rashba SO interaction, the phase
diagram $(E, V_r)$ is asymmetric about the Fermi energy. The IQSHE
is more difficult to destroy at the largest energy of the energy
gap. In the presence of disorder, the phase diagram $(E, W)$ is
again asymmetric about the Fermi energy but it is the smallest
energy of the energy gap that is robust against the disorder
fluctuation.
\section{theoretical formalism}
In the tight-binding representation, the Hamiltonian for the 2D
honeycomb lattice of the graphene structure can be written
as\cite{haldane,sheng}:
\begin{eqnarray}
H&=&-t
\sum_{<{ij}>}c^\dagger_{i}c_{j}+\frac{2i}{\sqrt{3}}V_{so}\sum_{\ll{ij}\gg}
{c^\dagger_{i}{\sigma}{\cdot}(\mathbf{d}_{kj}{\times}\mathbf{d}_{ik})c_{j}} \nonumber \\
&+&{i} V_{r}\sum_{<{ij}>}{ c^\dagger_{i}
\hat{\mathbf{e}}_{z}{\cdot}(\sigma{\times}{\mathbf{d}}_{ij})c_{j}}+\sum_{i}{\epsilon_{i}
c^\dagger_{i} c_{i}}
\end{eqnarray}
where $c^\dag_{i}$($c_{i}$) is electron creation
(annihilation) operator and $\sigma$ are Pauli matrices. The first
term is due to the nearest hopping. The second term is the intrinsic
spin-orbit interaction that involves the next nearest sites. Here $i$ and $j$
are two next nearest neighbor sites, $k$ is the common nearest
neighbor of $i$ and $j$, and ${\mathbf{d}}_{ik}$ describes a vector
pointing from $k$ to $i$. The third term is due to the Rashba spin-orbit
coupling. The last term is the on-site energy where $\epsilon_{i}$ is a
random on-site potential uniformly distributed in the interval $[-W/2,W/2]$.
In this Hamiltonian, we have set the lattice constant to be unity.
We consider a four-probe device as shown schematically in FIG.1a.
The four probes are simply extensions of the central
scattering region, i.e., the probes are graphene ribbons. The number
of sites in the scattering region is denoted as $N=n_x{\times}n_y$,
where there are $n_x=8{\times}n+1$ sites on $n_y=4{\times}n$ chains
(FIG.1a shows the cell for $n=1$)\cite{foot2}. We apply external
bias voltages $V_i$ with $i=1,2,3,4$ at the four different probes as
$V_{i}=(v/2,0,-v/2,0)$. In the presence of Rashba SO interaction,
the spin is not a good quantum number. As a result, the spin current
is not conserved using the conventional definition. Hence we switch
off the Rashba SO interaction in the 2nd and 4th probes. Similar to
the setup of Ref.\cite{sheng}, our setup can generate the integer quantum
spin Hall effect. The difference between the setup of
Ref.\cite{sheng} and ours is that the lead in Ref.\cite{sheng} is a
square lattice without SO interactions while our lead is still
honeycomb lattice with SO interactions except that the Rashba SO
interaction has been switched off in lead 2 and 4. The use of the
square lattice as a lead has two consequences. It provides
additional interfacial scattering between scattering region and the
lead due to the lattice mismatch and the mismatch in SO
interactions. In addition, the dimension of the self-energy matrix
for the square lattice lead with SO interaction is much smaller. The
spin-Hall conductance $G_{sH}$ can be calculated from the
multi-probe Landauer-Buttiker formula\cite{hank,ren}:
\begin{eqnarray}
G_{sH}=(e/8{\pi})[(T_{2{\uparrow},1}-T_{2{\downarrow},1})-(T_{2{\uparrow},3}-T_{2{\downarrow},3})]
\end{eqnarray}
where the transmission coefficient is given by
$T_{2{\sigma,1}}=Tr(\Gamma_{2{\sigma}}G^{r}\Gamma_{1}G^{a})$ with
$G^{r,a}$ being the retarded and advanced Green's functions of the
central disordered region, which can be evaluated numerically. The
quantities $\Gamma_{i{\sigma}}$ are the linewidth functions
describing the coupling of the probes to the scattering region and are
obtained by calculating the self-energies $\Sigma^r$ due to the
semi-infinite leads using a transfer matrix method\cite{lopez84}.
In the following, our numerical data are mainly for a system with
$n=8$, i.e., ${32\times}65$ sites. To fix units,
throughout this paper, we define the Fermi-energy $E$, disorder
strength $W$, intrinsic spin-orbit coupling $V_{so}$ and Rashba
spin-orbit coupling $V_{r}$ in terms of the hopping energy $t$.
For the four-probe device, the conventional transfer matrix that is
suitable for two-probe devices can no longer be used. Below, we
provide a modified transfer matrix method for the four-probe device.
Note that the self-energy $\Sigma^r$ is a matrix with non-zero
elements at those positions corresponding to the interface sites
between a lead and the scattering region\cite{foot1}. Because
evaluating the Green's function $G^r$ corresponds to the inversion
of a matrix, a reasonable numbering scheme for the lattice sites can
minimize the bandwidth of the matrix and thus reduce the cost of
numerical computation. For example, to obtain the narrowest
bandwidth for our system we partition the system into layers shown
in FIG.1b so that there is no coupling between the next nearest
layers. We then label each site layer by layer from the center of
the system (see FIG.1a). As a result, the matrix $E-H-\Sigma^r$
becomes a block tri-diagonal matrix:
$$
E-H-\Sigma^r = \left(
\begin{array}{ccccccc}
A_1 & C_1 & . & . & . & . \\
B_2 & A_2 & C_2 & . & . & . \\
. & . & . & . & . & . \\
. & . & . & . & . & . \\
. & . & . & . & A_{m-1} & C_{m-1} \\
. & . & . & . & B_m & A_m
\end{array}
\right)
$$
where $A_n$ is a $(128n-56) \times (128n-56)$ matrix, $C_n$ is a
$(128n-56) \times (128n+72)$ matrix, and $B_n$ is a $(128n-56)
\times (128n-184)$ matrix. Here $n=1$ corresponds to the innermost
layer and $n=m$ is for the outermost layer. A direct inversion of
this block tri-diagonal matrix is already faster than the other
labeling schemes. However, if we are interested in the transmission
coefficient, it is not necessary to invert the whole matrix. This is
because the self-energies of the leads couple only to $A_m$ of
the outermost layer; from the Landauer-Buttiker formula it is enough
to calculate the Green's function $G_{mm}^r$, which satisfies the
following equation,
$$
(E-H-\Sigma^r )
\left(
\begin{array}{c}
G^r_{1m} \\
G^r_{2m} \\
. \\
. \\
G^r_{m-1 m} \\
G^r_{m m}
\end{array}
\right)=\left(
\begin{array}{c}
0 \\
0 \\
. \\
. \\
0 \\
I_m
\end{array}
\right)$$ where $I_m$ is the identity matrix of the same dimension as $A_m$. In general,
the solution $X_i$ of the following equation with block tri-diagonal
matrix can be easily obtained.
$$\left(
\begin{array}{cccccccccc}
A_1 & C_1 & . & . & . & . \\
B_2 & A_2 & C_2 & . & . & . \\
. & . & . & . & . & . \\
. & . & . & . & . & . \\
. & . & . & . & A_{m-1} & C_{m-1} \\
. & . & . & . & B_m & A_m
\end{array}
\right) \left(
\begin{array}{c}
X_1 \\
X_2 \\
. \\
. \\
X_{m-1} \\
X_m
\end{array}
\right)=\left(
\begin{array}{c}
R_1 \\
R_2 \\
. \\
. \\
R_{m-1} \\
R_m
\end{array}
\right).$$
From the first row
$$
A_1X_1+C_1X_2=R_1,
$$
we have%
$$
X_1+A_1^{-1}C_1X_2=A_1^{-1}R_1.
$$
From the 2$^{nd}$ row,%
$$
B_2X_1+A_2X_2+C_2X_3=R_2,
$$
eliminating $X_1$, we have
$$
(A_2-B_2A_1^{-1}C_1)X_2+C_2X_3=R_2-B_2A_1^{-1}R_1.
$$
This equation can be written as%
$$
F_2X_2+C_2X_3=D_2,
$$
where
$$
F_2=A_2-B_2A_1^{-1}C_1,D_2=R_2-B_2A_1^{-1}R_1.
$$
From the 3$^{rd}$ row,
$$
B_3X_2+A_3X_3+C_3X_4=R_3,
$$
eliminating $X_2,$ we have
$$
F_3X_3+C_3X_4=D_3,
$$
where%
$$
F_3=A_3-B_3F_2^{-1}C_2,D_3=R_3-B_3F_2^{-1}D_2.
$$
Therefore, we have the following recursion relation,
$$
\begin{array}{ccc}
F_1= & A_1, & initial \\
F_i= & A_i-B_iF_{i-1}^{-1}C_{i-1}, & i=2,3,\cdots ,m \\
D_1= & R_1, & initial \\
D_i= & R_i-B_iF_{i-1}^{-1}D_{i-1}, & i=2,3,\cdots ,m
\end{array}
.
$$
Finally, we have
$$\left(
\begin{array}{cccccccccc}
F_1 & C_1 & . & . & . & . \\
. & F_2 & C_2 & . & . & . \\
. & . & . & . & . & . \\
. & . & . & . & . & . \\
. & . & . & . & F_{m-1} & C_{m-1} \\
. & . & . & . & . & F_m
\end{array}
\right) \left(
\begin{array}{c}
X_1 \\
X_2 \\
. \\
. \\
X_{m-1} \\
X_m
\end{array}
\right) =\left(
\begin{array}{c}
D_1 \\
D_2 \\
. \\
. \\
D_{m-1} \\
D_m
\end{array}
\right) .$$
From the last row, we can solve for $X_m:$
$$
X_m=F_m^{-1}D_m.
$$
We can cancel $X_m$ in the last but one equation%
$$
X_{m-1}=F_{m-1}^{-1}(D_{m-1}-C_{m-1}X_m).
$$
In our case, $X_i=G^r_{i m}$ and $R_i=\delta_{i m} I_m$, and we are
only interested in the solution $G^r_{mm}$. Hence we have the solution
$$
G^r_{mm}=F_m^{-1}
$$
where
$$
\begin{array}{ccc}
F_1= & A_1,\\
F_i= & A_i-B_iF_{i-1}^{-1}C_{i-1}, & i=2,3,\cdots ,m \\
\end{array}
$$
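This recursion translates directly into code. A minimal sketch (in Python/numpy rather than the MATLAB used for the timings below; the block lists are assumed to be assembled already):
\begin{verbatim}
# Sketch: G^r_{mm} = F_m^{-1} for a block tri-diagonal matrix.
# A[0..m-1], B[0..m-2], C[0..m-2] are numpy arrays holding the
# text's blocks A_1..A_m, B_2..B_m, C_1..C_{m-1}.
import numpy as np

def outermost_green_function(A, B, C):
    F = A[0]                              # F_1 = A_1
    for i in range(1, len(A)):
        # F_i = A_i - B_i F_{i-1}^{-1} C_{i-1}; solve() avoids
        # explicitly inverting the intermediate F's.
        F = A[i] - B[i - 1] @ np.linalg.solve(F, C[i - 1])
    return np.linalg.inv(F)               # G_mm = F_m^{-1}
\end{verbatim}
Given $G^r_{mm}$ and the linewidth matrices restricted to the outermost layer, a transmission coefficient then follows as {\tt np.trace(Gamma2 @ G @ Gamma1 @ G.conj().T).real}.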
To test the speed of this algorithm, we have calculated the spin
Hall conductance for the four-probe graphene system with different
system sizes labeled by $n$ on a MATLAB platform. The calculation is
done at a fixed energy and for 1000 random configurations. The CPU
times are listed in Table 1, where the speeds of direct matrix
inversion and of the algorithm just described are compared. We see
that the speed-up factor increases as the system size increases. For instance, for
$n=8$ which corresponds to 2080 sites (amounts to a $4016 \times
4016$ matrix) in the scattering region, a factor of 100 is gained in
speed. We note that in the presence of intrinsic SO interaction the
coupling involves next nearest neighbor interaction. This is the
major factor that slows down our algorithm. As shown in TABLE 1, for
a square lattice without intrinsic SO interaction but with Rashba SO
interaction, the speed up factor is around 200 for a $40 \times 40$
system (matrix dimension $3200$). The new algorithm is particular
useful when the large number of disorder samples and different
sample sizes are needed for the calculation of the conductance
fluctuation and its scaling with size. Finally, we wish to mention
that this algorithm also applies to multi-probe systems such as
six-probe systems.
\section{numerical results}
It has been shown that in the presence of disorder or Rashba SO
interaction the QSHE may be destroyed\cite{sheng}. As an application
of our algorithm, we study the phase boundary between the integer
QSHE regime and the QSH liquid in the presence of
disorder. For this purpose, we set a criterion for the QSHE, i.e., if
$G_{sH}{\geq}0.999$ we say the system reaches an integer quantum spin Hall
plateau (IQSH). Since the integer QSHE is due to the presence of
intrinsic SOI, we first study the phase diagram of a clean sample in
the absence of Rashba SOI, i.e., the two-component Haldane's
model\cite{haldane}. For this model, there is an energy gap within
which the IQSH effect exists. FIG.2 depicts the phase diagram in the
($E$,$V_{so}$) plane, with a curve separating the integer QSHE and the
QSH liquid. We see that the phase diagram is symmetric about the Fermi
energy $E$ and the integer QSHE exists only for energies $|E|<1$,
which corresponds to the energy gap. FIG.2 shows that the energy gap
depends on the strength of the intrinsic SO interaction. When $V_{so}
\ge 0.2$ the energy gap is largest, spanning $E\in[-1,1]$, while for
$V_{so}<0.2$ the energy gap gradually diminishes to zero in a
linear fashion. Our numerical data show that for $V_{so}<0.025$ the
IQSHE disappears (see FIG.2). For $V_{so}\in[0.025,0.18]$ the
phase boundary is a linear curve. When $V_{so}>0.20$, the phase
boundary becomes a sharp vertical line.
For Haldane's model, $\sigma_z$ is a good quantum number.
However, in the presence of Rashba SOI the spin experiences a
torque while traversing the system. This can destroy the IQSHE at
large enough Rashba SOI strength $V_r$. In FIG.3, we show the
spin-Hall conductance $G_{sH}$ vs Fermi energy at different $V_r$
for $V_{so}=0.1,0.2$. In FIG.3a we see that when $V_{r}=0$, the
spin-Hall conductance is quantized between $E=-0.52$ and $+0.52$. As
$V_{r}$ increases to $0.1$, the energy gap decreases to $[-0.22,
0.51]$. Upon further increasing $V_{r}$ to $0.2$ and $0.3$, the
gaps shrink to, respectively, $[0.06,0.50]$ and $[0.34,0.46]$. In
Ref.\cite{sheng} the IQSHE is completely destroyed when $V_r=0.3$,
which is different from our result. The difference is due to the
lead used in Ref.\cite{sheng}, which causes additional scattering. The
larger the intrinsic SO interaction strength $V_{so}$, the more
difficult it is to destroy the integer QSHE, as can be seen from FIG.3b.
In the presence of the Rashba SO interaction, the phase diagram in the
$(E,V_r)$ plane at different intrinsic SO interaction strengths is
shown in FIG.4. We see that the phase diagram is asymmetric about
the Fermi energy and it is more difficult to destroy the integer
QSHE for the largest positive energies within the energy gap, e.g., near
$E=0.51$ when $V_{so}=0.1$. Similar to FIG.2, we see that when
$V_{so}>0.2$ the integer QSHE can exist for all energies as long as
$|E|<1$. Roughly speaking, the energy gap decreases linearly with
increasing Rashba SOI, and there is a threshold $V_r$ beyond which
the integer QSHE disappears. For instance, when $V_r>0.3$ and
$V_{so}=0.1$, the integer QSHE is destroyed.
From the above analysis, we see that $V_{so}=0.2$ is an important
point separating two different behaviors in $(E,V_{so})$ and
$(E,V_r)$ phase diagrams. Now we examine the effect of disorder on
the QSHE. FIG.5 shows the phase diagram of the integer QSHE in the $(E,W)$
plane at two typical intrinsic SO interaction strengths $V_{so}=0.1$ and
$V_{so}=0.2$. The phase diagrams are asymmetric about the Fermi
energy. Generally speaking, the larger the Rashba SO interaction
strength $V_{r}$, the smaller the energy gap supporting the integer
QSHE. We have already seen from FIG.4 that the integer QSHE is more robust
against the Rashba SO interaction strength $V_r$ at positive Fermi
energies within the energy gap. In contrast, it is small Fermi
energies within the energy gap that are stable against disorder
fluctuations, especially for large Rashba SO interaction strength. In
addition, the phase boundary at positive Fermi energies is not very
sensitive to the variation of the Rashba SO interaction strength. The
larger the intrinsic SO interaction, the larger the disorder
strength $W_c$ needed to destroy the integer QSHE. In FIG.6, we
estimate this critical disorder strength $W_c$ and plot it vs
$V_{so}$ for $E=0.01$ and $V_r=0$.
If we replace the Rashba SO interaction by the Dresselhaus SO
interaction, we have numerically confirmed that the phase diagram of
the IQSHE in the $(E,W)$ plane is the same if we replace $E$ by $-E$.
In summary, we have developed a variant transfer matrix method that is
suitable for multi-probe systems. With this algorithm, the speed
gain is a factor of 100 for a system of 2080 sites with the next-nearest
SO interaction on a honeycomb lattice. For the square
lattice with Rashba SO interaction, the speed-up is around 200
for a $40 \times 40$ system. Using this algorithm, we have studied
the phase diagrams of graphene with intrinsic and Rashba SO
interactions in the presence of disorder.
\bigskip
\section{acknowledgments}
This work was financially supported by RGC grant (HKU 7048/06P) from
the government SAR of Hong Kong and LuXin Energy Group. Computer
Center of The University of Hong Kong is gratefully acknowledged for
the High-Performance Computing facility.
\section{Introduction}
In quantum systems, the indistinguishability of identical particles has been tested using the interference effect of a Hong-Ou-Mandel-type scheme \cite{HOM87}. Such a test assumes that indistinguishable particles scatter independently, e.g., on a beam splitter, and do not interact. One cannot, however, exclude the possibility that the resultant bunching or anti-bunching effect actually originates from an interaction between distinguishable particles.
Therefore, in order to verify true particle indistinguishability one needs to
preclude the possibility of inter-particle interaction.
One such way is to prepare an entangled state of identical particles and probe if the entanglement encoded in a certain degree of freedom (DOF) can be converted to one in another DOF.
This interchangeability of entanglement between different DOFs is studied in Ref. \cite{BH13}
and dubbed ``duality in entanglement.''
Since such duality does not arise in case of distinguishable (e.g., different species of) particles,
it is considered a novel way to manifest quantum indistinguishability.
For example, if two single photons are generated in a parametric down-conversion and thereby are entangled in polarization DOF ($H,V$), the entanglement can be accessed because the two particles are effectively distinguishable by their path DOF, say, (1, 2). However, if one decides to effectively distinguish the identical particles by their polarizations, one will observe entanglement in the path DOF. This phenomenon would not occur for distinguishable particles and hence it can be used in testing their indistinguishability.
To date, such tests of quantum indistinguishability based on entanglement of two identical particles were performed in two scenarios: the first case utilizes a polarization/path entangled state \cite{BH13} and the duality is tested by the violation of a Bell's inequality; the second case considers a spin/orbital angular momentum entangled state \cite{BZA15} and the duality is tested by an entanglement witness.
Both scenarios are implemented on photonic setups, and the corresponding demonstrations remain only at the single-photon level.
Therefore, one research direction in indistinguishability tests based on entanglement duality is to survey schemes that employ other kinds of identical particles (e.g., other bosonic particles or fermions);
another direction is to examine if the notion of entanglement duality can also apply to a multi-particle system beyond the aforementioned single-particle level.
We are interested in the latter direction and here propose a scenario for entanglement duality test in a macroscopic bipartite system which surpasses single-particle level and even does not have a fixed number of particles. Note that this situation differs from the Hong-Ou-Mandel scenario in which one tests distinguishability of only two microscopic particles.
An additional motivation for our studies comes from the fact that it was conjectured in \cite{BH13} that the entanglement duality scenario can be used to test the indistinguishability of complex macroscopic objects. Such macroscopic systems have many internal degrees of freedom, and two macro-molecules that are initially identical will most probably evolve in different ways, leading to effective distinguishability. In this case duality in entanglement would be unobservable. This can be considered a kind of transition from the quantum to the classical domain. However, if duality in entanglement occurs for macroscopic objects, then we can confirm that despite their large size the systems are still quantum, since they exhibit indistinguishability, which is a truly quantum feature.
We consider macroscopic light field states that are entangled in polarization DOF and by entanglement duality can also be regarded as entangled in parity DOF.
Specifically, we consider coherent states---in principle their size can be arbitrarily large---which can be effectively distinguished by parity (the former case) or by polarization (the latter):
\begin{eqnarray}
&&\frac{1}{\sqrt{2}}(|H\rangle_{even}|V\rangle_{odd}\pm|V\rangle_{even}|H\rangle_{odd})\nonumber\\
&& = \frac{1}{\sqrt{2}}(|even\rangle_H|odd\rangle_V\pm|odd\rangle_H|even\rangle_V).
\label{eq:entangledstate1}
\end{eqnarray}
Here, $|even\rangle=N_e(|\alpha\rangle+|-\alpha\rangle)$ and $|odd\rangle=N_o(|\alpha\rangle-|-\alpha\rangle)$ are even and odd coherent states with normalization constants $N_e$ and $N_o$ respectively.
We adopt the orthogonal coherent-state basis $\{|even\rangle,~|odd\rangle \}$ instead of the non-orthogonal one $\{|\alpha\rangle,~|-\alpha\rangle \}$.
Notice that an even coherent state is orthogonal to an odd coherent state since an even (odd) coherent state is a superposition of even- (odd-) number Fock states. The entanglement of (macroscopic) coherent states encoded in polarization/parity DOF is interchangeable between these two DOFs and accordingly its duality in entanglement can be identified.
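As a quick numerical aside (ours, not part of the proposal itself), the orthonormality and definite parity of this basis can be checked in a truncated Fock space; the cutoff \texttt{NDIM}, the (real) amplitude \texttt{ALPHA}, and the standard closed forms $N_{e,o}=[2(1\pm e^{-2|\alpha|^2})]^{-1/2}$ are our illustrative assumptions:
\begin{verbatim}
import numpy as np
from math import factorial

NDIM, ALPHA = 40, 2.0   # illustrative Fock cutoff and (real) amplitude

def coherent(beta):
    """Coherent state |beta> in the truncated Fock basis."""
    n = np.arange(NDIM)
    fact = np.array([float(factorial(k)) for k in n])
    return np.exp(-abs(beta)**2 / 2) * beta**n / np.sqrt(fact)

N_e = 1 / np.sqrt(2 * (1 + np.exp(-2 * ALPHA**2)))  # normalizations
N_o = 1 / np.sqrt(2 * (1 - np.exp(-2 * ALPHA**2)))
even = N_e * (coherent(ALPHA) + coherent(-ALPHA))   # even Fock support
odd  = N_o * (coherent(ALPHA) - coherent(-ALPHA))   # odd Fock support

print(even @ odd)                    # ~0 : orthogonal
print(even @ even, odd @ odd)        # ~1 : normalized (up to truncation)
print(np.allclose(even[1::2], 0.0),  # True: only even Fock states
      np.allclose(odd[::2], 0.0))    # True: only odd Fock states
\end{verbatim}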
To obtain the entangled states in Eq. \eqref{eq:entangledstate1}, it is required first to prepare a single-mode even (odd) coherent state which is experimentally within reach in a trapped $^9$Be$^+$ ion system \cite{MMKW96}, a high \textit{Q} microwave cavity \cite{B96}, a Bose-Einstein condensate with Rb atoms \cite{GMHB02}, and an optical system using homodyne detection and Fock states \cite{OJTG07}.
To identify entanglement in each DOF, one can consider quantum information protocols such as CHSH-Bell-type inequality test based on displaced parity detector \cite{BW99} or interaction-free measurement scheme \cite{EV93}.
Furthermore, instead of using the definite-parity coherent states one can consider another macroscopic state basis, namely squeezed vacuum and a single-photon-subtracted squeezed vacuum state, which also comprises a parity-based orthogonal basis:
Squeezed vacuum state is a superposition of even-number Fock states whereas its single-photon-subtracted version consists of odd-number Fock states.
This paper is organized as follows. We begin with the generation scheme of the entangled states in Eq. \eqref{eq:entangledstate1}. Then, we discuss how to identify entanglement in each DOF. Next, we discuss a similar scenario using squeezed vacuum states. Finally, we summarize our results and list open questions.
\section{State generation scheme}
\begin{figure}
\centerline{\scalebox{0.35}{\includegraphics[angle=0]{generation}}}
\vspace{-1.6in}
\caption{State generation scheme of Eq. (1). H (V) represents an H- (V-) polarizer.
PBS is a polarizing beam splitter which transmits horizontal polarization and reflects vertical polarization.
}
\label{fig:fig1}
\end{figure}
The desired entangled states in Eq. \eqref{eq:entangledstate1} can be generated by injecting an odd coherent state into a 50:50 beam splitter as in Fig. \ref{fig:fig1}.
The output state is simply reformulated in an orthonormal basis $\{|even\rangle,~|odd\rangle \}$ and it is given by
\begin{eqnarray}
&&\hat{B}_{ab}(|\sqrt{2}\alpha\rangle_1-|-\sqrt{2}\alpha\rangle_1)|0\rangle_2\nonumber\\
&&=|\alpha\rangle_1|-\alpha\rangle_2-|-\alpha\rangle_1|\alpha\rangle_2\nonumber\\
&&\approx |even\rangle_1|odd\rangle_2-|odd\rangle_1|even\rangle_2.
\end{eqnarray}
Notice that if we adjust the phase of a beam splitting operator we can also obtain
$|even\rangle_1|odd\rangle_2 + |odd\rangle_1|even\rangle_2$ from an odd coherent state.
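The identity above is easy to verify numerically in a truncated Fock space. The following sketch is ours; the truncation \texttt{NDIM}, the amplitude, and the beam-splitter convention $\hat{B}=\exp[\theta(\hat{a}^{\dag}\hat{b}-\hat{a}\hat{b}^{\dag})]$ with $\theta=\pi/4$ (which reproduces $\hat{B}|\beta\rangle_1|0\rangle_2=|\beta/\sqrt{2}\rangle_1|\!-\!\beta/\sqrt{2}\rangle_2$) are illustrative assumptions:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from math import factorial

NDIM, ALPHA = 20, 1.5   # illustrative truncation and amplitude

def coherent(beta):
    n = np.arange(NDIM)
    v = np.exp(-abs(beta)**2/2) * beta**n \
        / np.sqrt([float(factorial(k)) for k in n])
    return v / np.linalg.norm(v)   # re-normalize against truncation

a1 = np.diag(np.sqrt(np.arange(1, NDIM)), 1)   # annihilation operator
a = np.kron(a1, np.eye(NDIM))                  # mode 1
b = np.kron(np.eye(NDIM), a1)                  # mode 2
U = expm((np.pi/4) * (a.T @ b - a @ b.T))      # 50:50 beam splitter

vac = np.zeros(NDIM); vac[0] = 1.0
odd_in = coherent(np.sqrt(2)*ALPHA) - coherent(-np.sqrt(2)*ALPHA)
psi_in = np.kron(odd_in / np.linalg.norm(odd_in), vac)

target = (np.kron(coherent(ALPHA), coherent(-ALPHA))
          - np.kron(coherent(-ALPHA), coherent(ALPHA)))
target /= np.linalg.norm(target)

print(abs(target @ (U @ psi_in))**2)   # fidelity ~ 1
\end{verbatim}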
After applying an H- (V-) polarizer to path-mode $1$ ($2$), the entangled state can be represented by
\begin{eqnarray}
|even\rangle_{H,1}|odd\rangle_{V,2}-|odd\rangle_{H,1}|even\rangle_{V,2}.
\end{eqnarray}
Now combining both path modes with a polarizing beam splitter, we obtain one of the entangled states in Eq. \eqref{eq:entangledstate1},
\begin{eqnarray}
|\psi\rangle&\equiv&\frac{1}{\sqrt{2}}(|even\rangle_H|odd\rangle_V-|odd\rangle_H|even\rangle_V)\nonumber\\
&=&\frac{1}{\sqrt{2}}(|H\rangle_{even}|V\rangle_{odd}-|V\rangle_{even}|H\rangle_{odd}),
\label{eq:entangledstate2}
\end{eqnarray}
where the entangled state is on a single path-mode ($1$ or $2$).
Note that we cannot produce the dual entanglement with an even coherent state since it is not interchangeable between the two DOFs.
If one uses an even coherent state in Fig. \ref{fig:fig1}, an entangled coherent state $|\alpha\rangle_1|- \alpha\rangle_2 + |- \alpha\rangle_1| \alpha\rangle_2$ or $|\alpha\rangle_1| \alpha\rangle_2 + |- \alpha\rangle_1| -\alpha\rangle_2$ will be obtained, which does not have a full \textit{ebit} and hence cannot be converted to a maximally entangled polarization state as \eqref{eq:entangledstate2}.
In more detail, the final state would be
$(1/N_e^2) |even\rangle_H|even\rangle_V \pm (1/N_o^2) |odd\rangle_H|odd\rangle_V $ which cannot be converted to a coefficient-balanced entangled state as \eqref{eq:entangledstate2}.
\section{Accessing Entanglement}
Now we illustrate how to access entanglement in Eq. \eqref{eq:entangledstate2} by directing the two mode variables of DOF to different path modes; see Fig. \ref{fig:fig2}.
Note that placing detectors at both ends as in the figure allows for testing the duality in entanglement;
the different types of detection in the two DOF setups verify the duality.
Alternatively, the duality can be tested indirectly by using these path-divided entangled states in disparate information protocols, which will be presented in the next section.
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{sorting}}}
\vspace{-0.6in}
\caption{Detecting entanglement in each degree of freedom (DOF). (a) Parity entanglement. (b) Polarization entanglement. PBS is a polarizing beam splitter which transmits horizontal polarization and reflects vertical polarization. HWP is a half-wave plate which changes polarization from $V$ to $H$ and vice versa.
}
\label{fig:fig2}
\end{figure}
To observe entanglement in parity DOF one just needs a polarizing beam splitter (PBS) and a half-wave plate ahead of detections, as shown in Fig. \ref{fig:fig2}(a).
For the case of polarization-DOF entanglement, however, the procedure is more complicated, and we denote it by the box in Fig. \ref{fig:fig2}(b).
In the rest of this section, we elaborate on what kinds of processes are included in the box.
As hinted at by the two parts of Fig. \ref{fig:fig3}, the box is composed of two stages.
First, we introduce an additional mode $3$ to control the target modes 1 and 2, as shown in Fig. \ref{fig:fig3}(a). The entangled state of Eq. \eqref{eq:entangledstate2} passes through a PBS such that the horizontal (vertical) polarization state moves to path mode $1$ ($2$). Then, impinging each path mode on a 50:50 beam splitter with additional modes $3$ and $4$ which are in vacuum state, we get
\begin{eqnarray}
&&\frac{1}{\sqrt{2}}(|A\rangle_{H,1}|\!-\!A\rangle_{V,2}|A\rangle_{H,3}|\!-\!A\rangle_{V,4}\nonumber\\
&&-|\!-\!A\rangle_{H,1}|A\rangle_{V,2}|\!-\!A\rangle_{H,3}|A\rangle_{V,4}),
\end{eqnarray}
where $A=\alpha/\sqrt{2}$.
After applying displacement operations $\hat{D}_{3,H}(A)$ and $\hat{D}_{4,V}(A)$ to modes $3$ and $4$ respectively, and guiding those modes into another PBS, we obtain the following state:
\begin{eqnarray}
\frac{1}{\sqrt{2}}(|A\rangle_{H,1}|A\rangle_{V,2}|2A\rangle_{H,3}-
|\!-\!\!A\rangle_{H,1}|\!-\!\!A\rangle_{V,2}|2A\rangle_{V,3}),\nonumber\\
\end{eqnarray}
where a phase-shift operation $e^{i\pi\hat{a}^{\dag}_{2,V}\hat{a}^{\phantom{\dag}}_{2,V}}$ was applied to the mode $2$.
Now we are ready to manipulate the target modes $1$ and $2$ under the control mode $3$.
That is, their polarizations are flipped if the control mode $3$ is vertically polarized, as shown symbolically in Fig. \ref{fig:fig3}(b),
\begin{eqnarray}
\frac{1}{\sqrt{2}}(|A\rangle_{H,1}|A\rangle_{V,2}|2A\rangle_{H,3}-
|\!-\!A\rangle_{V,1}|\!-\!A\rangle_{H,2}|2A\rangle_{V,3}).\nonumber\\
\end{eqnarray}
It is implemented by two controlled-NOT-type gates that have been realized in the optical frequency regime \cite{Zhou11}. Then, controlled phase-shift operations are applied to the modes $1$ and $2$ if the mode $3$ is again vertically polarized. Note that this process can even be implemented deterministically in superconducting qubits \cite{V13}.
Finally, by selecting out a click event on mode $3$, we can sort out the polarization-DOF entanglement in the same coherent state mode,
$(|H\rangle_{A,1}|V\rangle_{A,2}-|V\rangle_{A,1}|H\rangle_{A,2})/\sqrt{2}$.
What if the complicated procedure of Fig. 3 does not operate perfectly? A common experimental imperfection is that optical components are not set precisely to their design values \cite{Zhou11}. Here we consider imperfections of the displacement operations or of the controlled operations. In Fig. 3 (a), if the displacement operations do not produce the appropriate displacement amplitudes, acting instead as $\hat{D}_{3,H}(B)$ and $\hat{D}_{4,V}(B)$ with $B$ not equal to $A$, then the control mode 3 of Eq. (6) contains both horizontal and vertical polarization components simultaneously. Thus, without perfect displacement operations, one cannot control the target modes 1 and 2 with either of the polarizations in mode 3. In Fig. 3 (b), if the vertical polarization of the control mode 3 is not properly identified, the polarizations of the modes 1 and 2 cannot be flipped perfectly. Likewise, the controlled phase-shift operations cannot control the phases of the modes 1 and 2 perfectly. Thus, imperfection of the controlled operations produces a non-maximal polarization entangled state.
\begin{figure}
\centerline{\scalebox{0.3}{\includegraphics[angle=0]{box}}}
\vspace{-0.6in}
\caption{Box of Fig. 2 (b) to observe polarization entanglement. (a) First, an additional mode $3$ is added to control target modes $1$ and $2$, where $|\psi\rangle=\frac{1}{\sqrt{2}}(|H\rangle_{even}|V\rangle_{odd}-|V\rangle_{even}|H\rangle_{odd})$.
Displacement operation ($|\beta\sqrt{1-T}\rangle$) is achieved with strong coherent light ($|\beta\rangle$) and a beam splitter with high transmittance ($T\sim 1$) \cite{P96,LB02}.
(b) Next, two controlled NOT gates flip the polarization of the target modes $1$ and $2$ if the control mode $3$ is vertically polarized.
}
\label{fig:fig3}
\end{figure}
\section {Entanglement Observation}
We can observe entanglement in each DOF by means of different types of quantum information protocols. Parity entanglement can be verified by the CHSH-Bell type inequality which utilizes displaced parity measurement \cite{BW99}.
The state violates the inequality up to Tsirelson's bound $2\sqrt{2}$ with increasing average photon number \cite{WJK02}.
Since the state
\begin{equation}
|even\rangle_{1}|odd\rangle_{2}-|odd\rangle_{1}|even\rangle_{2}
\label{eq:entangledstate3}
\end{equation}
has a faster oscillating amplitude in phase space with increasing average photon number,
the possibility of violating the inequality increases.
Let us conceive another application of the entangled state \eqref{eq:entangledstate3}. Applying a local displacement operation $\hat{D}(\alpha)$ to each mode, we obtain a NOON-type coherent state without changing the degree of entanglement. It is known to provide the Heisenberg limit in local quantum phase estimation, specifically using photon-number-resolving detection in a Mach-Zehnder interferometer \cite{LLLN15}. We observe that the phase sensitivity increases with increasing average photon number, and it degrades to the shot-noise limit as the entanglement decreases.
Next, polarization entanglement can be tested by interaction-free measurement (IFM) in the Mach-Zehnder (MZ) interferometer, also known as the Elitzur-Vaidman bomb tester \cite{EV93}. Injecting a single photon into the MZ interferometer without any obstacle (a bomb) in it, the quantum interference mechanism leads to detection only at the first detector. If a bomb is placed in one of the internal arms of the MZ interferometer, the interference is disturbed and the photon is either detected by the first or the second detector, or it hits the bomb---which then explodes. The efficiency of this detection strategy is formulated as $\eta=P_{IFM}/(P_{bomb}+P_{IFM})$, where $P_{IFM}$ is the probability of detecting the presence of the bomb and $P_{bomb}$ the probability of bomb explosion. For a single run, the efficiency of detection without explosion is up to $50\%$.
In Fig. \ref{fig:fig4}, we consider an IFM setup that is composed of the MZ interferometer using PBSs. Note that a single-photon based IFM setup uses beam splitters instead.
Assuming that our polarization entangled state satisfies the constraint $|A|^2=1$, we can treat our entangled state $(|H\rangle_{A,1}|V\rangle_{A,2}-|V\rangle_{A,1}|H\rangle_{A,2})/\sqrt{2}$ as the two-particle entangled state $(|H\rangle_1|V\rangle_2-|V\rangle_1|H\rangle_2)/\sqrt{2}$.
Injecting the polarization entangled state $|\Psi\rangle_{12}=(|H\rangle_{1}|V\rangle_{2}+|V\rangle_{1}|H\rangle_{2})/\sqrt{2}$ into the MZ interferometer without a bomb, the state is given by $|\Psi\rangle_{12}$ after the 2nd polarizing beam splitter. Then, applying a 45-degree polarizer to each mode, we obtain the output state $(|V\rangle_{1}|V\rangle_{2}-|H\rangle_{1}|H\rangle_{2})/\sqrt{2}$ and observe the same polarization states on each detector simultaneously.
If, however, there is a bomb in one of the arms, the output state is given by $(|V\rangle_{1}|V\rangle_{2}-|H\rangle_{1}|H\rangle_{2}+|H\rangle_1|V\rangle_2-|V\rangle_1|H\rangle_2)/2\sqrt{2}$.
As shown in Fig. 4, we can discriminate the four different events by a combination of PBSs and on-off detectors.
Then, we observe different polarization states on both detectors simultaneously with $25\%$ probability. Thus, the polarization entangled state attains a $33.3\%$ efficiency of the IFM, since the probability of detecting the presence of the bomb is half that of bomb explosion. If the state is not maximally entangled, all four detection events also occur in the no-bomb scenario; we then cannot discriminate between the no-bomb and bomb scenarios, and the efficiency of detection without explosion drops to zero.
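For bookkeeping (our own arithmetic, consistent with the figures quoted above): each of the four amplitudes in the bomb scenario is $1/(2\sqrt{2})$, so the anti-correlated outcomes $(H,V)$ and $(V,H)$ occur with total probability $P_{IFM}=1/8+1/8=1/4$, while the photon is absorbed with $P_{bomb}=1/2$, giving
\[
\eta=\frac{P_{IFM}}{P_{bomb}+P_{IFM}}=\frac{1/4}{1/2+1/4}=\frac{1}{3}\approx 33.3\%.
\]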
\begin{figure}
\centerline{\scalebox{0.32}{\includegraphics[angle=0]{IFM2}}}
\vspace{-1.6in}
\caption{Observing indistinguishability of polarization entangled photons by interaction-free measurement (IFM), where
$|\Psi\rangle_{12}=\frac{1}{\sqrt{2}}(|H\rangle_1|V\rangle_2+|V\rangle_1|H\rangle_2)$. Pol is a 45-degree polarizer ($|H\rangle\rightarrow \frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$ and $|V\rangle\rightarrow \frac{1}{\sqrt{2}}(|V\rangle-|H\rangle)$). PBS is a polarizing beam splitter.
D1 (D2) is composed of a PBS and two on-off detectors, and we can discriminate four different events [$(H,H),~(V,V),~(H,V),~(V,H)$] by the two detection setups.
}
\label{fig:fig4}
\end{figure}
\section{Alternative implementation}
We can also consider an alternative implementation which uses a squeezed vacuum (SV) state ($|S\rangle\equiv \hat{S}(r)|0\rangle$) instead of a coherent state. In the state generation stage, the SV case is more feasible than the coherent state one; however, accessing the entanglement is more difficult in the SV case.
First, an entangled state based on squeezed vacuum (SV) state is produced by applying a coherent superposition operation of photon subtractions to two single-mode SV states. Its output state is derived as
\begin{eqnarray}
(\hat{a}_1+\hat{a}_2)|S\rangle_1|S\rangle_2=\hat{a}_1|S\rangle_1|S\rangle_2+|S\rangle_1\hat{a}_2|S\rangle_2,
\end{eqnarray}
where the photon-subtracted SV state $\hat{a}|S\rangle$ is a superposition of odd number states and the SV state $|S\rangle$ is a superposition of even number states. Then, sequentially applying polarizers (mode 1 $\rightarrow$ H, mode 2 $\rightarrow$ V) and a PBS to the output state, we obtain an entangled state which exhibits \emph{entanglement duality},
\begin{eqnarray}
\frac{1}{\sqrt{2}\sinh{r}}(\hat{a}_H|S\rangle_H|S\rangle_V+|S\rangle_H\hat{a}_V|S\rangle_V),
\label{eq:svpolarization}
\end{eqnarray}
where $r$ is a squeezing parameter.
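A small numerical aside (ours): the parity structure claimed here is immediate in a truncated Fock space. The cutoff and the squeezing parameter below are illustrative, and we use the standard Fock expansion of the squeezed vacuum (the sign convention of $\hat{S}(r)$ is irrelevant for the parity argument):
\begin{verbatim}
import numpy as np
from math import factorial, cosh, tanh

NDIM, r = 60, 0.8   # illustrative Fock cutoff and squeezing

# Standard Fock expansion of the squeezed vacuum S(r)|0>:
# only even photon numbers appear.
sv = np.zeros(NDIM)
for m in range(NDIM // 2):
    sv[2*m] = (tanh(r)**m) * np.sqrt(float(factorial(2*m))) \
              / (2**m * factorial(m))
sv /= np.sqrt(cosh(r))

a = np.diag(np.sqrt(np.arange(1, NDIM)), 1)   # annihilation operator
sv_minus = a @ sv                             # photon-subtracted SV
sv_minus /= np.linalg.norm(sv_minus)

print(np.allclose(sv[1::2], 0))        # True: SV is even
print(np.allclose(sv_minus[::2], 0))   # True: a|S> is odd
print(abs(sv @ sv_minus))              # ~0 : orthogonal basis
print(np.linalg.norm(sv))              # ~1 (up to truncation error)
\end{verbatim}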
Note that the entangled state of Eq. (9) has been implemented in the optical frequency range in order to mimic entangled coherent states \cite{OFTG09}.
It is expected to be easier to generate than the entangled state of Eq. (4), which requires a preliminary stage of preparing an odd coherent state. Furthermore, replacing $\hat{a}_1+\hat{a}_2$ with a modified coherent superposition operation $\sqrt{T}\hat{a}_1+\sqrt{1-T}\hat{a}_2$ \cite{LN10}, we can control the amount of indistinguishability of the entangled polarization-SV state.
This is implemented by adjusting the transmittance of a beam splitter \cite{LN10}.
The amount of indistinguishability in Eq. (4), by contrast, cannot be controlled simply through a beam-splitting parameter:
the beam-splitting parameter merely changes the amplitude of the coherent state basis.
Despite the simple state preparation, it is not simple to access the corresponding DOF to observe polarization entanglement: the SV case requires an additional process of distinguishing even and odd number states without destroying them.
The details of the process are given in Appendix A. The complication arises because a single-mode squeezed vacuum state impinged on a beam splitter produces a two-mode squeezing operation, which makes the access of entanglement more complicated than in the coherent state basis.
Provided we can access the entanglement, we can test its duality by the same quantum information protocols as in the coherent state case. Moreover, by applying a single-mode anti-squeezing operation $\hat{S}(-r)$ to each mode, we obtain the single-photon entangled state $\frac{1}{\sqrt{2}}(|1\rangle_1|0\rangle_2+|0\rangle_1|1\rangle_2)$, without changing the degree of entanglement. This can be used for the quantum teleportation protocol based on single-photon entanglement \cite{LK00}.
\section{Summary and Discussion}
We have proposed a scheme to test the indistinguishability of macroscopic entangled states of light. It has been shown that the duality in entanglement between polarization and parity DOFs can be accessed under current technology. Then we have mentioned that parity entanglement and polarization entanglement can be verified by CHSH-Bell type inequality and interaction-free measurement, respectively. Furthermore, we have proposed an alternative implementation scheme using a squeezed vacuum state.
Here, we have considered many particles in a bipartite system. It would be interesting to extend our scenario to many particles in a multi-partite system. In order to observe full indistinguishability of the multi-partite system, we need more degrees of freedom for each party.
Full indistinguishability is confirmed by full interchangeability of degrees of freedom, which is achieved when the number of degrees of freedom is equal to the number of the parties.
A candidate is time-frequency modes in optical frequency combs \cite{R16} and multi-headed coherent states \cite{R10,L13,K15,LLNK15}.
From the squeezed vacuum case, we expect to find a way of discriminating even and odd photon numbers without destroying the states. Furthermore, this could suggest a generalized parity measurement, i.e., a modulo operation.
\begin{acknowledgments}
SYL thanks Dr Changsuk Noh and Prof Hyunseok Jeong for useful comments.
SYL, CWL, and JK were partly supported by the IT R$\&$D program of MOTIE/KEIT [$10043464$]. PK was supported by the National Science Centre in Poland through the NCN Grant No. 2014/14/E/ST2/00585.
\end{acknowledgments}
\section{Introduction}
Chest pain is responsible for one of the highest rates of emergency hospital visits in industrialized countries~\cite{Sanchez2007} and accounts for a large proportion of hospital admissions.
Statistics show that around 75\% of patients who present at the Emergency Department with chest pain do not have a cardiac-related condition~\cite{Six2012,Backus2013,Backus2011,Rohacek2012}, yet they still need to go through a full diagnostic pathway which can take more than 10 hours~\cite{Six2012}.
This leads to several thousand people occupying bed spaces, placing an additional burden on health care systems. A diagnostic that is capable of rapidly stratifying the cases and removing those patients who do not need an overnight stay is therefore valuable in both triage and cost saving~\cite{Six2012}.
Magnetocardiography (MCG) involves capturing Magnetic Field Maps (MFM's) of current distributions resulting from cardiac action potentials~\cite{Eskola1987,Korhonen2000,Korhonen2002,Malmivuo1995,McFee1972,Moshage1996,Ramon1998,Saarinen1974,Smith2001}.
It has been shown that MCG gives significant improvements in diagnostic capability over an ECG~\cite{Agarwal2012,Fenici2013,Gapelyuk,Hailer2005,Korhonen2006,Leithauser2011,Lim2007,Lim2009,Park2005,Smith2006b,Steinisch2013,Tolstrup2006}.
Significantly, in this respect, it has been demonstrated that MCG is capable of reliable detection of Non-ST-Elevated Myocardial Infarction (NSTEMI)~\cite{Agarwal2012,Lim2009}, which are by definition difficult to detect using ECG\@.
For this reason all ECG negative chest pain patients are treated as having an NSTEMI until other diagnostic results can be obtained~\cite{Backus2013}.
Hence, the short time to produce a MCG (typically \textless10 minute measurement) dramatically reduces the time for diagnosis and removes otherwise healthy patients earlier in the process and is therefore a tool with obvious clinical benefits.
The principal focus of the current research was the creation of a portable MCG device that would be capable of providing a rapid assessment of acute coronary syndrome (ACS) in an Emergency Department. To meet this goal, the device requirements are sensitivity to magnetic fields between 0.1pT and 300pT, in the frequency range of around 1--40 Hz~\cite{Bison2009}, and a spatial resolution sufficient to detect anomalies with a spacing of 10--15cm (for a sensor operated 10cm from the chest wall)~\cite{Guofa2011}.
Cardiac MFM devices typically use an array of sensitive magnetometer detectors to collect the magnetic field of the heart by simultaneously sampling at many positions across the chest.
Sensors include liquid-helium-cooled SQUID detectors, which have been used in commercially available devices for over 40 years~\cite{Stroink2010}; atomic physics detectors and giant magnetoresistance detectors have also been developed~\cite{Bison2009,Pannetier2011, Shah2013}.
These devices are not always suitable for an Emergency Department as the associated apparatus is bulky, they often require liquid helium, specialist training to use, they are fixed in place and typically require an electromagnetically shielded room.
In contrast, induction coil magnetometers have been used for cardiac magnetic field detection by several authors.
They meet the demands of signal sensitivity~\cite{Baule1970,Baule1963,Cohen1969,Cohen1967,Estola1982,Tashiro2006}, they are inexpensive, do not require cooling and can be run from batteries.
However, earlier efforts required noisy high-gain amplifiers, large and heavy coils unsuitable for magnetic field mapping, and fixed, electronically implemented gradiometer arrangements.
Here we present a more compact coil design which when combined with modern analog-to-digital converters (ADC's) and digital signal processing (DSP) produces a device capable of detecting the cardiac magnetic field with an array of 19 sensors.
We first present the design of the sensors, array and DSP routine. Then we show that this design has the capability of resolving the field of the heart within both shielded and unshielded environments.
\section{Apparatus}
\subsection{Coil Design}
The most important aspect of this device is the construction of the sensor elements to achieve both the sensitivity and required spatial resolution.
An induction coil magnetometer will have an output voltage determined by
\begin{equation}
\label{outputVoltage}
V=AN\frac{{\rm d}B(t)}{{\rm d}t}=ANB2\pi f
\end{equation}
where $N$ is the number of windings, $A$ is the effective cross sectional area of the coil, $B(t)$ is the time varying magnetic field, with a magnitude $B$, and $f$ is the frequency of oscillation of the field.
The smallest field, $S$, that can be detected given thermal Johnson noise resulting from the winding resistance is given by:
\begin{equation}
\label{sensitivity}
S=\frac{\sqrt{4k_{B}TR_{a}}}{2\pi fNA}
\end{equation}
where $k_{B}$ is Boltzmann's constant, $T$ is the temperature, and $R_a$ is the antenna wire resistance given by:
\begin{equation}
R_{a}=\frac{2N\rho\, r_{coil}}{a^{2}}
\end{equation}
where $a$ is the radius and $\rho$ is the resistivity of the wire used in the windings of a circular coil of average winding radius $r_{coil}$. Equations~(\ref{outputVoltage}) and~(\ref{sensitivity}) can be used to find the coil structure with the lowest noise level given the design constraints.
If the coil parameters are the length $L$, the coil outer diameter, $D$, and the coil inner diameter, $D_{i}$, the dimensions that give the lowest noise level have the ratio $D_{i}:D = 0.425:1$.
In addition to this we primarily want to measure the component of the magnetic field aligned to the axis of the coil.
Zijlstra~\cite{Zijlstra1967} notes that the optimum coil structure to measure the axial component of the magnetic field is achieved when $L/D= 0.69$ for the above ratio $D_{i}/D$.
The coil diameter itself is determined by the desired device resolution, leaving the radius of the wire as the only remaining free parameter in the coil design.
The output voltage of the coil is determined by $N$; as all other parameters are now fixed, $N$, and hence the voltage, is determined exclusively by the wire radius $a$.
A thinner wire increases the voltage output (for a fixed winding volume, $N\propto a^{-2}$) at the expense of increased coil resistance ($R_a\propto a^{-4}$) and subsequently increased noise, leading to a signal-to-noise ratio that is independent of the wire diameter.
\Tref{tabone} presents outputs from a simulation of the coil design presented here (MFM Coil) compared to a Brooks coil of the same outside dimension and wire radius, $a=0.23mm$. A Brooks coil is a special case in which the dimensions are chosen to optimise inductance, with ratios $D:D_{i}:L=4:2:1$.
The current MFM Coil design has a higher voltage and a lower noise equivalent field at the target frequency of 30Hz than the Brooks coil.
The noise equivalent field is the smallest field strength that could be measured above Johnson noise of the detector.
The gains are a factor of 1.6 in signal to noise and a factor of about 2.9 in output voltage.
The increased signal size plays a role in the subsequent electronics, especially when thermal and Johnson noise in the electronics is similar in size to the cardiac signal.
While the individual gains are modest, together they reduce the data collection time by more than an order of magnitude (a factor of about 20) when cycle averaging is used.
\begin{table}
\caption{\label{tabone}Table comparing the classic Brooks coil design with the MFM coil design presented in this paper.}
\begin{indented}
\item[]
\begin{tabular}{llllcc}
\toprule
Coil & $D$ & $D_{i}$ & $L$ & $V_{out}$ at 1pT (40Hz) & Noise Equivalent Field \\
\midrule
MFM & 12cm & 5.1cm & 8.28cm & 616nV & 57fT \\
Brooks & 12cm & 6cm & 3cm & 211nV & 96fT \\
MFM & 8.5cm & 3.6cm & 5.87cm & 155nV & 136fT \\
Brooks & 8.5cm & 4.25cm & 2.125cm & 53nV & 227fT \\
MFM & 4.25cm & 1.8cm & 2.9cm & 9.5nV & 773fT \\
Brooks & 4.25cm & 2.125cm & 1.0625cm & 3.3nV & 1.3pT \\
\bottomrule
\end{tabular}
\end{indented}
\end{table}
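As an illustration of how Eqs.~(\ref{outputVoltage}) and~(\ref{sensitivity}) are evaluated, the following sketch (ours) estimates air-core figures for the largest MFM geometry; the packing fraction and the copper resistivity are assumed values, and the flux gain of the soft iron core is not modelled, so the absolute numbers differ from those in \Tref{tabone}:
\begin{verbatim}
import numpy as np

kB, T, rho = 1.38e-23, 300.0, 1.7e-8   # J/K, K, Ohm m (copper, assumed)
D, Di, Lc = 0.12, 0.051, 0.0828        # MFM outer/inner diameter, length (m)
a = 0.23e-3                            # wire radius (m)
f, B = 40.0, 1e-12                     # evaluation frequency (Hz), field (T)

r_coil = (D + Di) / 4                  # average winding radius
pack = 0.8                             # assumed packing fraction
N = pack * ((D - Di) / 2) * Lc / (2 * a) ** 2   # turns in winding window
A = np.pi * r_coil ** 2                # effective turn area (air core)

V = N * A * B * 2 * np.pi * f          # output voltage, Eq. (1)
R = 2 * N * rho * r_coil / a ** 2      # winding resistance, rho*l/(pi a^2)
S = np.sqrt(4 * kB * T * R) / (2 * np.pi * f * N * A)   # Eq. (2)

print(f"N~{N:.0f}  R~{R:.0f} Ohm  V~{V*1e9:.0f} nV  "
      f"S~{S*1e15:.0f} fT/sqrt(Hz)")
\end{verbatim}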
\subsection{Mapping Array Construction}
The commercial analog to digital converter (ADC) we used has 16 channels. One channel was reserved for the ECG trigger, leaving 15 cardiac magnetometer channels.
MFM coils with a diameter of 7cm were chosen to cover the measurement area of $\sim 25\times25$cm in a hexagonal array, arranged in order to detect the principal components of the heart's magnetic dipole field.
\begin{figure}[p]
\centering
\includegraphics[width=0.8\textwidth]{./CoilSensors.eps}
\caption{Photograph of the 7cm diameter coil sensors used. The 2cm diameter core is visible in the centre of the coil bobbin. The pre-amplifier circuit board is mounted to the coil to minimise the unshielded signal path. DC battery power is provided via the 4-way header. The amplified signals are output via an SMB connector into a coaxial cable.}\label{CoilSensors}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=0.6\textwidth]{./comsol.eps}
\caption{COMSOL model of the sensor array measuring an ideal cardiac dipole field. Figure a shows the cardiac dipole field, which is sampled at the center of the soft iron cores shown as black spots. Figure b shows the interpolated measurement of the field samples. The actual angle and the reconstructed rotation angle are closely matched indicating that an accurate reconstruction is possible with this sensor array.}\label{comsol}
\end{figure}
To evaluate the mapping fidelity of this arrangement a COMSOL model of the array was created. The interaction of the soft iron cores with a static magnetic dipole field comparable in size to the cardiac dipole was simulated. Measurements of the flux at the coil centers were taken as readings equivalent to sensor output. These outputs were spatially interpolated using the same technique as used with the actual sensor signals to produce an MFM\@. From the MFM the field map angle (FMA) was measured from the vector between the two dipole poles. \Fref{comsol} shows the actual and measured MFM's at a fixed angle.
The simulated cardiac dipole was rotated in small increments. At each increment the difference between the actual angle and the measured angle was calculated. The maximum difference observed was $15^{\circ}$, with a typical error of $8^{\circ}$. This uncertainty can be reduced by taking the vector between pole centroids; this spatially averaged measurement of the dipole vector has a considerably lower uncertainty of $<1^{\circ}$.
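The angle-reconstruction step can be illustrated with a stand-alone script (ours, not the COMSOL model itself): sample the axial field of an ideal horizontal dipole on a hexagonal grid, interpolate, and read the angle from the vector between the two poles. The pitch, dipole depth and dipole angle below are arbitrary illustrative values:
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

# approximate hexagonal sensor layout: centre + two rings (19 positions)
pitch = 0.07                                   # ~7 cm coil pitch (m)
pts = [(0.0, 0.0)]
for ring, n in [(1, 6), (2, 12)]:
    for t in np.arange(n) * 2 * np.pi / n:
        pts.append((ring * pitch * np.cos(t), ring * pitch * np.sin(t)))
xy = np.array(pts)

def bz(xy, phi, depth=0.10):
    """Axial field of a unit horizontal dipole at angle phi, 10 cm deep."""
    m = np.array([np.cos(phi), np.sin(phi)])
    r2 = (xy ** 2).sum(axis=1) + depth ** 2
    return 3.0 * depth * (xy @ m) / r2 ** 2.5  # B_z up to mu0/(4 pi)

phi_true = np.deg2rad(35.0)
samples = bz(xy, phi_true)                     # 19 'sensor readings'

g = np.linspace(-0.14, 0.14, 281)              # stay inside the sensor hull
GX, GY = np.meshgrid(g, g)
F = griddata(xy, samples, (GX, GY), method='cubic', fill_value=0.0)
imax, imin = np.argmax(F), np.argmin(F)        # the two dipole poles
ang = np.arctan2(GY.flat[imax] - GY.flat[imin],
                 GX.flat[imax] - GX.flat[imin])
print(np.rad2deg(ang) % 180.0, np.rad2deg(phi_true))  # reconstructed, true
\end{verbatim}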
To collect cardiac magnetic fields we designed a mount to hold up to 19 coils with a hexagonal close packed layout of sensor locations, see \Fref{arrayphoto}. The layout, the approximate location against the body and a system level diagram for data acquisition are shown in \fref{flowchart}.
Since movement of the coils within the Earth's field will induce a current in the coils, they must be stiffly coupled so that acoustically induced signals become common mode and therefore removable by gradiometry. To this end, the mount was manufactured from a single piece of Acetal engineering plastic and the coils were securely potted in place. It was supported above the supine participant by a four-legged aluminium frame coupled to the floor. The large mass provided inertial damping.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{./Flowchart.eps}
\caption{System level diagram; Individual pre-amplified sensor signals are acquired by the ADC, then processed within a computer to create a magnetocardiogram.}\label{flowchart}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{./ArrayPhoto.eps}
\caption{Photograph of the sensor array. The array was machined out of Acetyl engineering plastic, which stiffly couples the sensors together. This array was then bolted to an aluminium frame which supported it above a supine participant in order to capture MCG.}\label{arrayphoto}
\end{figure}
\subsection{Data Acquisition and DSP}
To extract signal from the coils with minimum interference a low noise pre-amplifier ($3.5nV/\sqrt{Hz}$ at 10Hz) with a gain of $1000\times$ was placed immediately above each MFM Coil, see \Fref{sensorschematic} for details.
The signal was then digitized using a National Instruments 16-channel 2kS/s 24-bit AC-coupled ADC with a rail-to-rail voltage of $316mV$ ($37nV$ sensitivity).
Cycle averaging, filtering and gradiometry were performed in digital post processing~\cite{ipython}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{./SensorSchematic.eps}
\caption{Schematic of a single sensor channel. The DC offset control loop is used to remove drift generated by the amplifier, it functions as a high pass filter with a cut-off of 1.6Hz. The amplifier power supply (V+, V-) is supplied by a pair of batteries with local fixed LDO voltage regulators.}\label{sensorschematic}
\end{figure}
Cycle averaging used the ECG R-wave rising edge as a fiducial. The magnetometer signals were sliced into intervals of $\pm500ms$ about the fiducial; these intervals were then averaged.
A moving average filter was applied by convolution of the signal with a 20ms wide top hat distribution~\cite{numpy}.
In the normal mode of operation of a magnetic gradiometer it is considered necessary to have extensive shielding or closely matched gradiometer coils.
But matching wire-wound induction coils to sufficient accuracy is effectively impossible.
However, in an array of coils, differences in coil sensitivity are reduced by taking the average and, since the spatial average over a dipole cross-section is zero, the cardiac signal is not present in this average.
Hence, the array average acts as a bucket detector, and subtracting the bucket detector signal from a sensor signal produces a gradiometric signal.
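A compact sketch (ours) of this post-processing chain is given below, assuming \texttt{mag} is an array of shape (sensors, samples) recorded at 2kS/s and \texttt{ecg} is the synchronously sampled trigger channel; the names and the threshold value are placeholders:
\begin{verbatim}
import numpy as np

FS = 2000                  # ADC sample rate (S/s)
HALF = FS // 2             # +/- 500 ms window about each fiducial

def cycle_average(mag, ecg, thresh):
    """Average +/-500 ms epochs sliced at the ECG R-wave rising edges."""
    rise = np.flatnonzero((ecg[1:] > thresh) & (ecg[:-1] <= thresh)) + 1
    rise = rise[(rise >= HALF) & (rise <= mag.shape[1] - HALF)]
    return np.mean([mag[:, r - HALF:r + HALF] for r in rise], axis=0)

def gradiometer(avg):
    """Subtract the synthetic bucket detector (the array average)."""
    return avg - avg.mean(axis=0, keepdims=True)

def smooth(sig, width_ms=20):
    """Top-hat moving average; a 20 ms width notches 50 Hz and harmonics."""
    n = FS * width_ms // 1000
    return np.apply_along_axis(np.convolve, 1, sig, np.ones(n) / n, 'same')

# usage: mcg = smooth(gradiometer(cycle_average(mag, ecg, thresh=0.5)))
\end{verbatim}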
Induction coils measure the time derivative of the magnetic field and not the static field. It is not possible to accurately reconstruct the static field components by integrating their signals since the constant of integration is unknown; attempting to integrate numerically yields signals with large baseline wander. This makes induction coil magnetometers poorly suited to low frequency field measurement, though their performance can be improved with a fluxgate arrangement~\cite{fluxgate} or mechanical dithering of the sensors~\cite{MEMs}.
The derivative signals do not contain reliable absolute amplitude information, but they do contain the same relative amplitude information, and the normalised spatial measurement is therefore unaffected.
Thus the majority of the diagnostic information is preserved.
\section{Results And Discussion}
\subsection{Sensor Response and Sensitivity}
The sensor response was measured by placing one coil in the center of a Helmholtz coil pair, applying a calibrated sinusoidal field and measuring the sensor output amplitude, as shown in \Fref{response}. The applied magnetic field amplitude was measured using a calibrated fluxgate magnetometer.
The response is dependent on the field frequency and, for this coil, it is linear between 1Hz and 1kHz. The measured response was $290fT/\mu V$ at 30Hz and $813fT/\mu V$ at 10Hz.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{./CalibrationExp.eps}
\caption{Experiment to measure sensor response. A sinusoidal field was created by the Helmholtz coils being driven by the signal generator; the peak field amplitude was measured using a calibrated fluxgate magnetometer. The fluxgate probe was then replaced by an MFM coil sensor and the peak amplitude output by the sensor was recorded using the oscilloscope.}\label{response}
\end{figure}
The sensitivity is determined by the inherent sensor noise. To measure the inherent noise the sensor was placed in a shielded room and 10 minutes of signal were recorded. Computing an FFT of this timeseries gives the amplitude spectral density of the sensor, see \Fref{FFT}. This voltage amplitude spectrum was converted into magnetic field amplitude by factoring in the coil's frequency response. The resulting noise floor was $104fT/\sqrt{Hz}$ at 10Hz and $36fT/\sqrt{Hz}$ at 30Hz.
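Schematically (our sketch, not the analysis code used for \Fref{FFT}), the conversion can be performed with a Welch estimate and the two quoted spot calibrations, assuming the response scales linearly with frequency between them:
\begin{verbatim}
import numpy as np
from scipy.signal import welch

FS = 2000                          # sample rate (S/s)
RESP_30HZ = 290e-15 / 1e-6         # 290 fT/uV -> 2.9e-7 T/V at 30 Hz

def field_asd(v):
    """Amplitude spectral density of a voltage record, in T/sqrt(Hz).

    Assumes the induction-coil response is linear in frequency
    (V ~ f*B), so the T/V conversion scales as 30 Hz / f about the
    calibration point; v should be at least 8 s long.
    """
    f, pxx = welch(v, fs=FS, nperseg=8 * FS)   # 8 s segments
    conv = RESP_30HZ * 30.0 / np.where(f > 0, f, np.inf)
    return f, np.sqrt(pxx) * conv
\end{verbatim}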
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{./LargeCoilFFT.eps}
\caption{Amplitude spectral density of the sensor in shielded (YNIC) and unshielded (LGI) environments, computed by FFT\@. The sensor voltage spectral density was computed then the frequency dependent response was used to convert to units of magnetic field amplitude (fT).}\label{FFT}
\end{figure}
\subsection{First MCG Measurements}
Now we show that these coils can detect the cardiac magnetic field with sufficient sensitivity and sufficiently low inherent noise. An MCG was taken in a shielded environment, where the environmental noise was low enough for the sensor noise to dominate.
\Fref{signals}d shows a raw MCG sensor signal taken from one of us in a shielded room which had an attenuation of 40dB at 1Hz and a maximum attenuation of 104dB at 30Hz. Approximately 10 minutes of data were recorded containing 485 cardiac cycles over 12 coils. The noise amplitude was approximately 0.6mV RMS ($<73pT$).
The synchronously recorded 3-lead ECG shown in~\Fref{signals}b was thresholded to find the R-wave rising edge, which was used as the fiducial for cycle averaging; this reduced the noise amplitude by a factor of 35 (to $<2.1pT$). Then the gradient was calculated by subtracting the synthetic bucket detector. However, this did not reduce the noise amplitude, since the noise was composed of Johnson thermal noise, which is uncorrelated across the array. Finally a 20ms wide moving average filter was applied to notch out the remaining 50Hz noise and smooth the remaining thermal noise.
The resulting signal has no significant noise ($<150fT$) and corresponds with the anti-derivative of previously observed MCG signals~\cite{koch_reference_2011,kandori_space-time_2008}. It has a maximum amplitude during cardiac depolarisation of 0.05mV ($30pT$).
To analyse the device performance in an unshielded environment an MCG was recorded at Leeds General Infirmary; the results are shown alongside the shielded data in~\Fref{signals}. Similarly, 10 minutes of data were recorded, yielding 482 cardiac cycles. The noise amplitude is much larger than in the shielded room, at 80mV ($\sim20nT$), and highly correlated across the array. Cycle averaging reduces this by a factor of 12. Application of synthetic gradiometry provides a further $10\times$ rejection. The same 20ms wide moving average filter is highly effective at removing the remaining 50Hz noise and its harmonics, leading to $500\times$ suppression.
The resulting signal is the same amplitude as acquired in the shielded room, however there is a large coloured noise content.
This coloured noise could be removed by the application of more advanced DSP techniques such as wavelet denoising~\cite{ishikawa_noise_2014}, reference-data-based non-linear denoising as an alternative improvement to gradiometry~\cite{sternickel_nonlinear_2001}, and EEMD for baseline wander removal~\cite{colominas_2012}. A second layer of coils on top could provide an improvement in gradiometer performance, as the second coils would be coaxial with the first but receive reduced cardiac magnetic flux~\cite{kang_simple_2012}.
The sensitivity to low frequency could be increased by lock-in to a global excitation field provided by a fluxgate or mechanical dithering arrangement~\cite{nakayama_pulse-driven_2011,fluxgate,MEMs,paz_room_2014,jahns_sensitivity_2012}.
MFM's represent the magnetic field at a chosen instant in the cardiac cycle. They are created by spatially interpolating the sensor amplitudes from a common time sample.
\Fref{MFM} compares the shielded and unshielded MFM's at -15ms, during the magnetic R wave peak activity.
The dipole angle is consistent between the two MFM's. The dipole position is translated between the MFM's since the array was only subjectively aligned in the coronal plane relative to the Xiphoid process. Also, the angle between the coronal and transversal planes was not precisely controlled, since each bed had a different distribution of padding material. Ideally the MFM would be precisely referenced to the individual's cardiac geometry. This would be invaluable for solving the inverse problem: estimating the structure of the underlying current distribution corresponding to the observed MFM\@.
The observed dipole angle and size are in good agreement with past MCG observations of healthy normals~\cite{Lim2009}. We therefore anticipate that the signal has similar diagnostic value.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{./signals.eps}
\caption{Comparison of two magnetometer signals from the same person taken with an identical device, with similarly positioned sensors but in different environments. The shielded MCG was acquired in the York Neuroimaging center (YNIC). The unshielded MCG was acquired at Leeds General Infirmary (LGI), 16 months later. The environmental noise is $100\times$ larger in LGI\@. Averaging reduces the noise amplitude by $20\times$. Gradiometry (subtracting the synthetic bucket detector signal) provides $10\times$ noise reduction at LGI but does not affect the noise amplitude at YNIC; in this case it reduces the signal amplitude as the dipole measurement was not symmetric. A final stage FIR filter notches out 50Hz, reducing the noise by $500\times$ at LGI and acting mostly as a smoothing filter at YNIC with $20\times$ reduction. The final signal amplitudes differ by 40\% which could be explained by a difference in sensor positioning or physiological differences. The unshielded signal has a similar level of white noise, but a much larger coloured noise component. }\label{signals}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{./ShieldingComparisonMFMs.eps}
\caption{MFM's of the author, during the rising edge of the ECG r-wave. MFM on the left was taken in the YNIC shielded room. The MFM on the right was taken in an unshielded room at Leeds General Infirmary. The shielded measurement used 12 coils as the pre-amplifier wiring broke on three of the coils during transportation of the device to YNIC\@.
}\label{MFM}
\end{figure}
\section{Conclusion}
We have presented a new design for a device to perform magnetic field mapping and demonstrated that the device collects useful magnetocardiography data in shielded and unshielded environments.
The shielded measurements prove that the coil sensor system has sufficiently low inherent noise for cycle averaged MCG and sufficient spatial resolution for field map angle measurement.
However, operation of the device within an unshielded environment imposes coloured noise on the signal of an amplitude comparable to the repolarisation signals (ECG T wave). This potentially limits the diagnostic capability of our device within unshielded environments, though the depolarisation signals (QRS) are reliably observable above this noise. This result may be improved by the application of recent developments in denoising algorithms~\cite{ishikawa_noise_2014,sternickel_nonlinear_2001,colominas_2012}.
Further clinical testing will be required to determine if it is capable of detecting recent onset of NSTEMI in patients with the same accuracy as previous devices.
A future device may want to use more sensors to increase the measurement area and may also consider smaller coils to achieve a higher resolution. The addition of a second layer of coils would provide a vertical baseline for synthetic gradiometry which may improve environmental noise suppression~\cite{kang_simple_2012}.
The low frequency performance could be improved to reach DC by lock-in to a global excitation field~\cite{nakayama_pulse-driven_2011,fluxgate,MEMs,paz_room_2014,jahns_sensitivity_2012}.
\section{Acknowledgments}
Permission to use human subjects in the collection of data was granted by the University of Leeds ethics committee (ref.\ no. MEEC 12--034). This research would not have been possible without the assistance of L. Falk, R. Byrom, M. Williamson, D. Brettle, Prof.\ M Kearney and S. Smye at LGI and M. Everitt, S. Brown, L. Burgin, F. Ridgeon, B. Gibbs, P. Thornton, D. Grimmond, M. Moran, and S. Mann at the University of Leeds. We thank Prof.\ G. Green at York Neuroimaging for access to the shielded room. We acknowledge funding from the NHS National Innovation Center, the IKC for Medical Devices, the BHRC, HEIF, IPGroup, Quantum Imaging Limited and the University of Leeds. CS thanks Wellcome Trust and Nuffield for summer project support.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
The effects of a stochastic primordial magnetic field on the CMB are many: spectral distortions of the monopole~\cite{Puy:1998sv,Jedamzik:1999bm}, the generation of scalar, vector and tensor perturbations in the metric affecting both the temperature and the polarization spectra, see e.g. \cite{Durrer:1999bk,Caprini:2003vc,Paoletti:2012bb,Paoletti:2010rx,finelli,Shaw:2010ea,shaw,Lewis:2004ef,Kunze:2010ys,Giovannini:2009zq,Giovannini:2007qn,Yamazaki:2011eu,Yamazaki:2012pg,Durrer:2006pc}, the production of non-Gaussian signatures leading to a non-zero temperature bispectrum and trispectrum, see e.g. \cite{Seshadri:2009sy,Caprini:2009vk,Shiraishi:2012rm,Shiraishi:2010yk,Cai:2010uw}, Faraday rotation of CMB polarization, see e.g. \cite{Kosowsky:2004zh,Yadav:2012uz}. The most recent observational constraint from CMB temperature anisotropies on the amplitude and the spectral index of a primordial magnetic field has been established using Planck data \cite{Ade:2013zuv}:
\begin{equation}
B_{1\,{\rm Mpc}}<3.4~{\rm nG}~~~{\rm with}~~~n_B<0\,.
\label{Planckconstr}
\end{equation}
This limit has been derived including the magnetic contribution to scalar and vector perturbations, and performing a Markov chain Monte Carlo (MCMC) analysis of the temperature angular power spectrum.
In general, the magnetic field is modeled as a Gaussian random field, statistically homogeneous and isotropic, with a power law spectrum
\begin{eqnarray}
\vev{B_i({\mathbf k},\eta)B^*_j({\mathbf q},\eta)}&=&(2\pi)^3\delta^3({\mathbf k}-{\mathbf q}) P_{ij} P_B(k,\eta) ~~~~~\label{mpowerspectrum}\\
P_B(k,\eta)&=& \left\{ \begin{array}{ll}
A_B(\eta)\,k^{n_B} & k\leq k_D(\eta) \\
0 & k> k_D(\eta)\, , \end{array} \right.
\label{PB}
\end{eqnarray}
where $P_{ij}=\delta_{ij}-\hat{k}_i\hat{k}_j$ and $k_D(\eta)$ is a cutoff scale due to the dissipation of magnetic energy in the cosmic plasma, which has been first calculated in Refs.~\cite{Jedamzik:1996wp,Subramanian:1997gi}. The quantity in terms of which the CMB bounds are customarily expressed is the magnetic field amplitude smoothed over a comoving scale $\lambda$, set to $1$ Mpc in the Planck analysis~\cite{Ade:2013zuv}:
\begin{eqnarray}
B_\lambda^2 &=&\frac{1}{\pi^2}\int dk\,k^2\,P_B(k,\eta_0)\,e^{-k^2\lambda^2} \nonumber \\
&= &\frac{A_B(\eta_0)}{2\pi^2}\frac{\Gamma[(n_B+3)/2]}{\lambda^{n_B+3}} \label{Bla}\,.
\end{eqnarray}
Here $\eta_0$ denotes the conformal time today.
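The Gamma-function reduction above is easily checked numerically; the sketch below is ours, with $A_B$ set to unity and illustrative values of $n_B$ and $\lambda$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

nB, lam = -2.0, 1.0                      # illustrative index and scale
num, _ = quad(lambda k: k**(nB + 2) * np.exp(-(k * lam)**2), 0, np.inf)
ana = gamma((nB + 3) / 2) / (2 * lam**(nB + 3))
print(num, ana)                          # both ~ sqrt(pi)/2 here
\end{verbatim}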
The magnetic field, its power spectrum, and the upper cutoff, $k_D$, depend on time, due not only to the expansion of the universe but also to the interaction of the magnetic field with the cosmic plasma (for a review, see \cite{ruthreview}). The contribution to the time evolution from interactions with the cosmic plasma, i.e. MHD cascades, small scale damping by viscosity and so on, is usually neglected in CMB analyses (with the exception of \cite{Shaw:2010ea}). In fact, these processes operate mainly at small scales, as opposed to the large scales probed by the CMB. On the other hand, the non-trivial time evolution may affect CMB analyses in the large $\ell$ range, as probed by Planck, or whenever one accounts for helical magnetic fields undergoing large scale inverse cascades (see \cite{ruthreview} and references therein).
The magnetic field model presented above has been adopted in CMB analyses since it has the advantage to be simple and general. Only two parameters enter in the magnetic field description: once the choice of $\lambda$ has been made, these can be cast in the couple ($B_\lambda\,,\,n_B$). Effectively, when relevant, the damping scale $k_D$ can be expressed in terms of these - see e.g. \cite{Caprini:2009vk}. Therefore the constraints on the couple ($B_\lambda\,,\,n_B$) are, at least in principle, model independent. In particular, there is no need to specify the mechanism of generation of the magnetic field, nor its generation epoch.
The generation of a primordial magnetic field of the order of the nanogauss is severely constrained. Causal generation mechanisms give rise to blue magnetic spectra $P_B(k)\propto k^2$ peaked on very small scales, which, accordingly, are very small at typical CMB scales~\cite{Durrer:2003ja}. Generation based on the violation of conformal invariance during inflation can lead to scale invariant spectra and relevant magnetic field amplitudes, but has in general other problems (such as strong coupling, gauge symmetry breaking or the presence of ghosts, c.f. \cite{demozzi,Himmetoglu:2009qi}).
Nonetheless, the aim of the CMB analyses carried out so far is to constrain the presence of a primordial magnetic field in a {\it model independent way}, regardless of what is easy to produce or natural to expect; therefore, in these analyses one adopts a magnetic field model which is sufficiently general.
In practice, however, the situation is somewhat more involved. The generation mechanism of the magnetic field affects not only its spectral index, which is one of the parameters constrained by MCMC analyses of the CMB, but also the initial conditions of the Boltzmann hierarchy right after neutrino decoupling. Up to now, CMB data analyses have only taken into account the so-called {\em compensated mode} \cite{Paoletti:2010rx,finelli,shaw,Shaw:2010ea,Paoletti:2012bb,Giovannini:2008yz}: a particular set of initial conditions giving rise to an isocurvature ($\zeta=0$) mode, in which the fluid and magnetic energy densities and anisotropic stresses are compensated. This mode is one of the possible solutions of Einstein's equations with free-streaming neutrinos, and it is independent of the way the magnetic field is generated.
However, previous analyses have shown that there are other perturbation modes from magnetic fields that add to the compensated mode. There is first the so-called {\em passive mode}, which is an adiabatic-like mode that depends logarithmically on the time of generation of the magnetic field $\eta_*$. Its contribution to the CMB spectrum is in general larger than the one of the compensated mode (c.f. section \ref{sec:angular} and Refs.~\cite{shaw,magcaus}). Accounting for this mode would therefore change the bound in Eq.~\eqref{Planckconstr}. Or at least, given the Planck bound for the compensated mode, it adds a constraint on the new parameter: $\eta_*$, i.e. it constrains the time of magnetic field generation.
The purpose of this paper is to show the effect on the CMB of yet another mode, which is also adiabatic, and is present only if the magnetic field is generated during inflation. We call it the {\em inflationary magnetic mode}. The existence of this mode has been demonstrated in Refs.~\cite{maginf,barnaby,seery}, therefore this paper is intended as a follow-up and complement of Ref.~\cite{maginf}.
The inflationary magnetic mode is distinctively different from the compensated and the passive modes, since the curvature is dynamically generated only during inflation, and remains constant in the radiation/matter era (just as the usual inflationary mode due to the quantum fluctuations of the inflaton field). Moreover, it does not depend directly on the magnetic field power spectrum: we will show that through this new mode, even a magnetic field which is far from scale invariance can leave a detectable imprint on the CMB at large scales. The distinction of such a mode from the usual inflationary mode comes mainly from its statistics which is non-Gaussian~\cite{barnaby}. Another possible difference with respect to the adiabatic mode from inflation is that this mode can have logarithmic corrections to scale invariance, see Eq.~(\ref{zetainf}).
In this paper we show that this mode, if it is present, generically dominates the contributions to the CMB temperature perturbations due to the magnetic field. Therefore, it should be taken into account when constraining primordial magnetism with CMB data. As we shall see, this implies, however, to insert the magnetic field generation time, or the redshift of reheating, as an extra parameter in the magnetic field model, and diversify the CMB constraints depending on the mechanism of generation of the magnetic field. On the other hand, if magnetic fields are generated during inflation and this mode is present, it provides in principle a new way to determine the energy scale of inflation, which (in the simple approximation used here) directly determines the reheating temperature.
The remainder of the paper is structured as follows: in~Section \ref{sec:inf}, we revisit the model of inflationary magnetogenesis which we adopt in the following. We basically summarise the results of Ref.~\cite{maginf}, showing the effect of an inflationary magnetic field on the comoving curvature perturbation $\zeta$ at super-horizon scales. In Section~\ref{sec:psirec}, we compute the metric perturbations analytically at super-horizon scales until recombination. In Section~\ref{sec:SW}, we evaluate analytically the Sachs Wolfe effect, i.e. the temperature anisotropy at large scale and at recombination time, from the compensated, passive and the new inflationary magnetic mode. In Section \ref{sec:angular} we present the different CMB spectra evaluated both analytically and using the CAMB code~\cite{shaw}, and we compare the contributions of the different modes. In Section~\ref{sec:discussion} we discuss our results and we conclude in Section~\ref{sec:con}.
{\bf Notation:} Throughout this paper we use conformal time $\eta$, comoving space coordinates ${\bf x}$ and wave vectors ${\bf k}$ with the spatially flat metric $ds^2=a^2(\eta)(-d\eta^2+ \delta_{ij}dx^i dx^j)$; greek letters denote 4d spacetime indices while latin letters denote 3d spatial indices and spatial vectors are denoted in bold face. For the metric and scalar field perturbations we follow the conventions of~ \cite{mukhanov}. An overdot denotes derivatives with respect to conformal time $\eta$, and a prime with respect to the variable $x=|k\eta|$. We define the Planck mass by $m_P = (\sqrt{8\pi G})^{-1}$.
\section{Inflationary generation mechanism}
\label{sec:inf}
In this section we show that a primordial magnetic field which is generated during inflation leads to a new mode in the initial conditions for the evolution of metric perturbations after inflation. Specific examples of inflationary generation mechanism are worked out, e.g., in
Refs.~ \cite{turner,ratra,Giovannini:2000dj,Dimopoulos:2001wx,bamba1,Anber:2006xt,bamba2,yokoyama,subramanian,Durrer:2010mq}.
The existence of this mode is due to the fact that the magnetic energy momentum tensor gravitates and perturbs the metric. The mode is therefore present for any model of magnetic field generation from inflation, and choosing a specific model only changes the details but not the substance of this analysis, which can be easily generalized to other generation mechanisms. We consider the simplest existing model for magnetic field generation from inflation~\cite{ratra}, even though it has been shown to suffer from a strong coupling problem~\cite{demozzi} (a possible way to avoid the strong coupling problem is proposed e.g. in~\cite{Ferreira:2013sqa}). As shown in the following, the specific form of the coupling determines the power spectra of the magnetic energy density and of the anisotropic stress. We keep their amplitude and spectral index as free parameters throughout the analysis (specific examples of inflationary generation mechanism for which the metric perturbations have also been calculated are found in~\cite{Anber:2006xt,maginf,barnaby,Barnaby:2011vw}).
The simplest existing model for inflationary magnetogenesis consists in breaking conformal invariance through a coupling between the electromagnetic field and the inflaton $\varphi$ with an action of the form~\cite{turner,ratra}
\begin{equation}
\label{actionem}
S=-\frac{1}{16\pi}\int d^4x \sqrt{-g}\, f^2(\varphi)F^{\mu \nu}F_{\mu \nu} + \, S_{\varphi,g} + \cdots \; ,
\end{equation}
where $F_{\mu\nu}=A_{\nu,\mu}-A_{\mu,\nu}$ is the Faraday tensor, and $A_\nu$ is the electromagnetic 4-vector potential.
The time evolution of the electromagnetic field depends on the coupling function $f(\varphi)$. We parametrize it directly as a function of conformal time obtained by inverting the background inflaton evolution $\bar\varphi(\eta)$. We consider the simple case
\begin{equation}
\label{f}
f(\eta)=f_1\left(\frac{\eta}{\eta_1}\right)^\gamma\; ,
\end{equation}
where we restrict $\gamma$ to the values $-2\leq\gamma\leq 2$. This ensures that the electromagnetic field
remains subdominant and does not back react on the background expansion during inflation~\cite{yokoyama,subramanian}.
As shown in~\cite{maginf}, for super-horizon scales $|k\eta|<1$, the magnetic field power spectrum then scales as in Eq.~\eqref{PB}, and the magnetic field spectral index $n_B$ is related to $\gamma$ through $n_B=2\gamma +1$ for $\gamma<1/2$, and $n_B=3-2\gamma$ for $\gamma>1/2$. When $\gamma=-2$ the magnetic field is scale invariant ($n_B=-3$), and when $\gamma=1/2$, $n_B= 2$.
In~\cite{maginf} we have computed the energy density and anisotropic stress of the electromagnetic field during inflation (here and in the following a sub- or super-script~$-$ denotes the quantities during inflation while $+$ indicates the radiation era directly after inflation). On super-horizon scales, $x=|k\eta|<1$, we have found
\begin{eqnarray}
{\Omega^{-}_\Pi}&\equiv&\sqrt{\frac{k^3 P_\Pi}{\bar{\rho}_\varphi^2}}=\frac{H^2}{3 m_P^2}C_\Pi(\gamma)\, x^\alpha\;, \label{omp} \\
{\Omega^{-}_{\rm em}}&\equiv&\sqrt{\frac{k^3 P_{\rm em}}{\bar{\rho}_\varphi^2}}=\frac{H^2}{3 m_P^2}C_{\rm em}(\gamma)\, x^\alpha\;, \label{omem}
\end{eqnarray}
where $P_\Pi(k, \eta)$ and $P_{\rm em}(k, \eta)$ are respectively the power spectra of the electromagnetic anisotropic stress and energy density as defined in~\cite{maginf}, $\bar{\rho}_\varphi$ is the background energy density during inflation, $H$ is the physical Hubble scale during inflation, and
\begin{equation}
\alpha=\left\{\begin{array}{ll} 4+2\gamma=n_B+3 & \mbox{if} \quad -2\leq \gamma \leq -5/4\,, \\
3/2 & \mbox{if} \quad -5/4\leq \gamma \leq 5/4\,, \\
4-2\gamma=n_E+3 & \mbox{if} \quad 5/4\leq \gamma \leq 2 \,, \end{array} \right.
\label{alphanB}
\end{equation}
where $n_E$ denotes the spectral index of the electric field. The coefficients $C_\Pi(\gamma)$ and $C_{\rm em}(\gamma)$ are constants which depend on the value of $\gamma$. For the simple coupling of Eq.~\eqref{f} they are expected to be of order 1, but one could imagine specific models that would enhance or reduce their value.
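For quick reference, the piecewise maps $\gamma\mapsto n_B$ and $\gamma\mapsto\alpha$ can be encoded in a few lines of Python; the following minimal sketch simply tabulates Eq.~\eqref{alphanB} together with the relation between $\gamma$ and $n_B$ quoted above (the sampled values of $\gamma$ are illustrative):
\begin{verbatim}
# Cross-check of the gamma -> (n_B, alpha) map; illustrative only.
def n_B(gamma):                 # magnetic spectral index, Sec. II
    return 2*gamma + 1 if gamma < 0.5 else 3 - 2*gamma

def alpha(gamma):               # Eq. (alphanB)
    if gamma <= -1.25:
        return 4 + 2*gamma      # = n_B + 3
    if gamma <= 1.25:
        return 1.5
    return 4 - 2*gamma          # = n_E + 3

for g in (-2, -1.25, 0.5, 1.25):
    print(g, n_B(g), alpha(g))  # e.g. gamma = -2 -> (n_B, alpha) = (-3, 0)
\end{verbatim}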
Note that an electric field is also generated by the coupling in Eq.~\eqref{f}. For $\gamma<-5/4$ the contribution of the electric field to the energy density $\Omega^{-}_{\rm em}$ and to the anisotropic stress $\Omega^{-}_\Pi$ is subdominant with respect to the magnetic one and can therefore be neglected. However, for $-5/4<\gamma<5/4$ the electric contribution is of the same order as the magnetic one and enters the coefficients $C_\Pi(\gamma)$ and $C_{\rm em}(\gamma)$. Finally, for $\gamma>5/4$ the electric field dominates and virtually no magnetic field is generated by the coupling under consideration. In this case, the electromagnetic anisotropic stress and energy density depend on the spectral index of the electric field $n_E$.
The energy density and anisotropic stress of the electromagnetic field act as sources for the curvature perturbation during inflation. In~\cite{maginf} we have calculated the comoving curvature perturbation and found that (c.f. Appendix~\ref{app:curvature} for a discussion)
\begin{eqnarray}
\label{zetainf}
\zeta_-(x) &\simeq& \frac{H^2}{9\, m_P^2 \, \epsilon} \Big[(\alpha-6) C_{\rm em}(\gamma)+
\alpha \,C_\Pi(\gamma) \Big] \nonumber \\
&\times& \left\{\begin{array}{cc} -\log(x) &\mbox{if}~\alpha=0 ~ ~~ (|\gamma|=2)\\
\alpha^{-1} & \mbox{if} ~\alpha\neq 0 \hspace{1.65cm}\end{array} \right.
\end{eqnarray}
where $\epsilon=({\cal H}^2-\dot{\cal H})/{\cal H}^2$ is the slow roll parameter, ${\cal H} = \dot a/a=aH$. The above expression for $\zeta_-(x)$ is valid at super-horizon scales $x<1$. Note that the prefactor $H^2/(\epsilon m_P^2)$ is of the order of the adiabatic power spectrum. Parametrically therefore, the power spectrum of this contribution is of the order of the square of the adiabatic one. However, the log can be large on large scales and also the term in square brackets in (\ref{zetainf}) may be substantially larger than 1.
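Note also that under the anti-correlation assumption $C_{\rm em}(\gamma)=-C_\Pi(\gamma)$, which we adopt later in Section~\ref{sec:angular}, the square bracket in Eq.~\eqref{zetainf} reduces to $6\,C_\Pi$ for any $\alpha$; a one-line symbolic check (a sketch using sympy) reads:
\begin{verbatim}
# Check that (alpha-6)*C_em + alpha*C_Pi -> 6*C_Pi when C_em = -C_Pi
import sympy as sp
alpha, C_Pi = sp.symbols('alpha C_Pi')
print(sp.simplify((alpha - 6)*(-C_Pi) + alpha*C_Pi))   # -> 6*C_Pi
\end{verbatim}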
We now study the impact of this inflationary magnetic mode on the CMB, and compare it with the passive and compensated modes.
\section{The curvature and metric perturbations after inflation}
\label{sec:psirec}
At the end of inflation the standard electromagnetic action is recovered. The magnetic field stops growing and is simply transferred to the radiation era, where it scales as a radiation component, i.e., $B^2\sim 1/a^4$. The electric field on the other hand is almost immediately dissipated due to the very high conductivity of the cosmic plasma~\cite{AE}.
The energy density and anisotropic stress of the magnetic field continue to source the curvature perturbation $\zeta$ and the metric potentials $\Phi$ and $\Psi$ also after inflation. The metric in the longitudinal gauge reads
\begin{equation}
ds^2 = a^2\left[-(1+2\Phi)d\eta^2 + (1-2\Psi)d{\mathbf x}^2\right] \,,
\end{equation}
and $\zeta$ is related to $\Psi$ and $\Phi$ by
\begin{equation}
\label{zetadef}
\zeta=\Psi+\frac{2}{3{\cal H}(1+w)}\Big({\cal H}\Phi+\dot{\Psi} \Big)\; .
\end{equation}
Combining Einstein's equations and the conservation equations one can write a second-order evolution equation for the curvature $\zeta$ valid after inflation, for the pre-recombination phase (we have set the total entropy perturbation in the fluids to zero and consider standard adiabatic initial conditions):
\begin{align}
\label{evolzeta}
&\ddot{\zeta} +\left[2{\cal H}+3{\cal H}\big(w-c_s^2 \big)-2\frac{\dot{c}_s}{c_s} \right]\dot{\zeta}+c_s^2 k^2 \zeta=\\
& \frac{2w{\cal H}}{1+w}\Bigg\{\left[\frac{2\dot{c}_s}{c_s}-\frac{{\cal H}}{2}\big(1+3w+6 c_s^2 \big) \right] \Big({\Omega^{+}_\Pi}+\frac{R_\nu}{3}\pi_\nu\Big)\nonumber\\
&+\left[\frac{{\cal H}}{4}\big(1-3c_s^2 \big)\big(1+3w\big)-\frac{\dot{c}_s}{c_s} \right]{\Omega^{+}_{\rm B}}-\frac{R_\nu}{3}\dot{\pi}_\nu\Bigg\}\; ,\nonumber
\end{align}
where $\pi_\nu$ denotes the anisotropic stress of the neutrinos which starts to develop after neutrino decoupling, $R_\nu=\bar{\rho}_\nu/\bar{\rho}_{\rm rad}$, $c_s$ and $w$ are the background fluid sound speed and equation of state parameter and a dot denotes the derivative with respect to conformal time.
${\Omega^{+}_{\rm B}}(k)$ is a function denoting the fractional energy density of the magnetic field at wavenumber $k$ and ${\Omega^{+}_\Pi}(k)$ correspondingly denotes the fractional anisotropic stress. They are both constant in time, since they are normalized to the radiation energy density $\bar\rho_{\rm rad}$. We define the magnetic field energy density and anisotropic stress power spectra as
\begin{align}
\vev{\rho_B({\mathbf k},\eta)\rho_B^*({\mathbf q},\eta)}&=(2\pi)^3 P_{\rho_B}(k, \eta)\delta({\mathbf k}-{\mathbf q})\,,\\
\vev{\Pi_B({\mathbf k},\eta)\Pi_B^*({\mathbf q},\eta)}&=(2\pi)^3 P_{\Pi_B}(k, \eta)\delta({\mathbf k}-{\mathbf q})\,.
\end{align}
The quantities above have to be understood in terms of these spectra, as in Eqs.~\eqref{omp} and \eqref{omem}:
\begin{equation}
{\Omega^{+}_{\rm B}}(k)\equiv \sqrt{\frac{k^3P_{\rho_B}}{\bar\rho_{\rm rad}^2}}~~~~~{\rm and}~~~~~{\Omega^{+}_\Pi}(k)\equiv \sqrt{\frac{k^3P_{\Pi_B}}{\bar\rho_{\rm rad}^2}}\,.
\label{PrhoB}
\end{equation}
If the magnetic field has its origin during inflation, the relation between ${\Omega^{-}_{\rm em}}$, ${\Omega^{-}_\Pi}$ and ${\Omega^{+}_{\rm B}}$, ${\Omega^{+}_\Pi}$ is not completely trivial, since these quantities are not necessarily continuous at the transition from inflation to the radiation dominated era. During the radiation era many charged particles are present and the conductivity of the universe is high, so that the electric field is rapidly damped. As previously mentioned, in Ref.~\cite{maginf} we have found that the magnetic field dominates if $\gamma<-5/4$ while the electric field dominates if $\gamma>5/4$. If the electric field dominates ${\Omega^{-}_{\rm em}}$, ${\Omega^{-}_\Pi}$ during inflation, then ${\Omega^{+}_{\rm B}}$, ${\Omega^{+}_\Pi}$ can be significantly smaller than ${\Omega^{-}_{\rm em}}$, ${\Omega^{-}_\Pi}$ because the electric field is dissipated after reheating. In the following, we shall however disregard this possibility, since we know that it cannot lead to significant magnetic fields. We therefore consider only $\gamma\le 5/4$, corresponding to $n_B\le 2$ (note that the maximal $n_B$ is obtained for $\gamma=1/2$ where $n_B=3-2\gamma=2$).
Eqs.~\eqref{omp} and~\eqref{omem} are valid at super-horizon scales $x<1$, for which the transition between the inflationary and the radiation dominated phase can be considered instantaneous. We assume that the transition occurs at the instant $\eta_*$, for which ${\cal H}_{\rm inf}(\eta=-\eta_*) = \eta_*^{-1} = {\cal H}_{\rm rad}(\eta=\eta_*)\equiv {\cal H}_*=a_*H_*$, while the background equation of state parameter jumps from $w\simeq -1$ to $w=1/3$. Afterwards, we know that the components of the magnetic field energy momentum tensor decay in time as radiation. We therefore simply set for the fractional energy density and anisotropic stress
\begin{eqnarray}
{\Omega^{+}_{\rm B}}&\simeq&\frac{H_*^2}{3 m_P^2}C_{\rm em}(\gamma)\, x_*^\alpha\;, \label{ombprad}\\
{\Omega^{+}_\Pi}&\simeq&\frac{H_*^2}{3 m_P^2} C_\Pi(\gamma)\, x_*^\alpha\; , \label{ompprad}
\end{eqnarray}
where $x_*=|k\eta_*|$ denotes the end of inflation, and we neglect the contribution from the electric field.
Knowing ${\Omega^{+}_{\rm B}}$ and ${\Omega^{+}_\Pi}$, the evolution equation of the comoving curvature can be solved. Moreover, in order to evaluate the CMB anisotropies we want to determine also the metric perturbation $\Psi$, which is related to the curvature $\zeta$ by
\begin{eqnarray}
\label{evolpsi}
\lefteqn{\dot{\Psi}+\frac{{\cal H}}{2}(5+3w)\Psi=\frac{3{\cal H}}{2}(1+w)\zeta} \nonumber\\
& & \hspace*{2cm} +\, 9w{\cal H}\left(\frac{{\cal H}}{k}\right)^2 \Big({\Omega^{+}_\Pi}+\frac{R_\nu}{3}\pi_\nu\Big)\; .
\end{eqnarray}
This equation has been derived from Eq.~\eqref{zetadef} using the Einstein equation with indices $i\neq j$,
\begin{equation}
\Psi-\Phi=9w\left(\frac{{\cal H}}{k}\right)^2\left[{\Omega^{+}_\Pi}+\frac{R_\nu\,\pi_\nu}{3}\right]\,.
\end{equation}
To solve Eqs.~\eqref{evolzeta} and \eqref{evolpsi}, we further need to specify the background evolution and the neutrino anisotropic stress $\pi_\nu(k,\eta)$. Note that in this work we make the simplifying assumption that neutrinos are massless, since this does not affect our results substantially. For a treatment of CMB magnetic field anisotropies (passive and compensated modes) with massive neutrinos see Ref.~\cite{shaw}.
\subsection{Before neutrino decoupling}
For $T\geq1$ MeV, neutrinos are coupled to the photons and have negligible anisotropic stress $\pi_\nu=0$. Moreover at these temperatures the universe is dominated by radiation and $w=c_s^2=1/3,\; {\cal H}=1/\eta$. The evolution equations for $\zeta$ and $\Psi$ take the simple form
\begin{equation}
\label{evolzetarad}
\zeta''+\frac{2}{x}\zeta'+\frac{\zeta}{3}=-\frac{{\Omega^{+}_\Pi}}{x^2}\; ,
\end{equation}
where a prime denotes a derivative with respect to $x=k\eta$; and
\begin{equation}
\label{evolpsirad}
\Psi'+\frac{3}{x}\Psi=\frac{2}{x}\zeta+\frac{3}{x^3}{\Omega^{+}_\Pi}\; .
\end{equation}
Hence the curvature $\zeta$ is sourced by the magnetic field anisotropic stress ${\Omega^{+}_\Pi}$ at order $x^0$. The metric potential $\Psi$ is sourced by ${\Omega^{+}_\Pi}$ at order $x^{-2}$ and by the curvature at order $x^0$.
Eq.~\eqref{evolzetarad} can easily be solved. The initial conditions for $\zeta$ are fixed by the matching conditions at the end of inflation. They ensure the continuity of the induced 3-metric and the extrinsic curvature at the transition from the inflationary era to the radiation era \cite{muk_deruelle,maginf}. Without a magnetic field, this matching is trivial as all the relevant quantities $\zeta$, $\Psi$ and $\Phi =\Psi$ turn out to be continuous at the transition. In the presence of a magnetic field, however, these conditions imply that both $\zeta$ and $\Psi$ are continuous at the transition for large scales $x<1$, while $\Phi$ is not~\cite{maginf}. Also the time derivative of $\zeta$ is not continuous. To determine $\zeta'$ at the beginning of the radiation era, we need a relation between $\zeta$ and $\Psi$. Combining Einstein's equations with the derivative of $\zeta$ in Eq.~\eqref{zetadef} we find (c.f. Eq.~(16) of \cite{maginf})
\begin{eqnarray}
\zeta'_{-}(x)&=&\frac{1}{\epsilon x}\left(x^2 \Psi_{-} +{\Omega^{-}_\Pi}+{\Omega^{-}_{\rm em}}-9\Omega^{-}_Q \right)\label{zetapmoins}\; ,\\
\zeta'_{+}(x)&=&-\frac{1}{2x}\left(\frac{x^2 \Psi_{+}}{3}+{\Omega^{+}_\Pi} \right)\; .\label{zetapplus}
\end{eqnarray}
In Eq.~\eqref{zetapmoins}, we keep only the lowest order terms in $\epsilon$. $\Omega^{-}_Q$ represents the Poynting vector contribution; it is defined as in Eqs.~\eqref{omp} and \eqref{omem}, $\Omega^{-}_Q=\sqrt{k^3P_Q/\bar\rho_\varphi^2}$, where $P_Q$ is the power spectrum of the quantity $i{\cal H}\hat{k}^j{T^0}_j/k$, and ${T^0}_j$ is the Poynting vector component of the electromagnetic energy-momentum tensor \cite{maginf}. It is negligible when the electric field is subdominant ($\gamma<-5/4$) but of the same order of magnitude as ${\Omega^{-}_\Pi}$ and ${\Omega^{-}_{\rm em}}$ for larger values of $\gamma$.
Using that $\Psi$ is continuous at the transition, Eqs.~\eqref{zetapmoins} and \eqref{zetapplus} imply
\begin{equation}
\zeta'_{+}(x_*)=-\frac{\epsilon}{6}\zeta'_{-}(x_*) -\frac{{\Omega^{+}_\Pi}}{2x_*}+\frac{{\Omega^{-}_\Pi}+{\Omega^{-}_{\rm em}}-9\Omega^{-}_Q}{6 x_*}\;,
\end{equation}
where $x_*=|k\eta_*|$ denotes the end of inflation, and $\zeta'_{-}(x_*)$ is obtained by deriving Eq.~\eqref{zetainf} with respect to $x$.
This condition, together with the continuity of $\zeta$ for which we know the solution (Eq.~\eqref{zetainf}), provides the initial conditions for Eq.~\eqref{evolzetarad} and allows us to find the curvature during the radiation era. We obtain
\begin{eqnarray}
\hspace*{-3cm}\zeta_{+}&=& \zeta_{\rm inf}+\zeta_* +{\Omega^{+}_\Pi} \log\left({\frac{\eta_*}{\eta}}\right)
\nonumber\\ \label{e:zeta+}
& & \qquad +\left(1-\frac{\eta_*}{\eta} \right)\left[\frac{{\Omega^{+}_\Pi}}{2}+\frac{\Omega_*}{6} \right]\,,
\end{eqnarray}
where $\zeta_*\equiv \zeta_{-}(x_*)$ [c.f. Eq.~\eqref{zetainf}] and
\begin{equation}
\label{omstar}
\Omega_*=\frac{1}{3}\left\{\begin{array}{ll} {\Omega^{-}_\Pi}+{\Omega^{-}_{\rm em}}-9\Omega^{-}_Q & \mbox{if} \;\alpha \neq 0\\
{\Omega^{-}_\Pi}+{\Omega^{-}_{\rm em}}-9\Omega^{-}_Q-\frac{\epsilon\, \zeta_*}{\log\left(x_*\right)} & \mbox{if}\; \alpha= 0\,.\end{array} \right.
\end{equation}
There are four distinct contributions to $\zeta_{+}$:
\begin{enumerate}[(i)]
\item $\zeta_{\rm inf}$ is the standard scalar adiabatic mode.
\item The contribution proportional to ${\Omega^{+}_\Pi}$ is a dynamical mode generated by the magnetic field anisotropic stress \textit{after} the end of inflation. This mode has already been computed in~\cite{magcaus} and in~\cite{shaw} where it is called the passive magnetic mode. It vanishes at the transition, $\eta=\eta_*$.
\item $\zeta_*\equiv \zeta_{-}(x_*)$ is a new contribution, computed for the first time in~\cite{maginf}. It is directly transmitted from inflation to ensure the continuity of the curvature. It is therefore a remnant of the inflationary period and is absent if the magnetic field is generated causally after inflation~\cite{magcaus}. This {\em inflationary magnetic mode} is constant in time. It is not affected by the behavior of the magnetic field after inflation.
\item The term proportional to $\Omega_*$ is also a new contribution, generated by the continuity of $\Psi$ at the transition. It is however negligible with respect to $\zeta_*$. First of all, it is one order higher in the slow roll parameter $\epsilon\ll 1$ (therefore, it is of the same order in $\epsilon$ as other terms which we have neglected in the derivation of expression~(\ref{e:zeta+})). Moreover, it is further reduced by other factors. From Eqs.~\eqref{zetainf} and \eqref{omstar} we see that the first part of $\Omega_*$ is of the order ${\Omega^{-}_\Pi}(x_*)\sim \epsilon\, x_*^\alpha \zeta_*\ll\zeta_*$ since $x_*\ll1$, $\alpha\geq 0$ and $\epsilon\sim0.01$ at the end of inflation. For $\alpha=0$ it contains a part $(\epsilon\,/\log x_*)\,\zeta_*$, with $|\log x_*|\gg 1$. Note that the amplitude of the first part of $\Omega_*$ is in principle of the same order as ${\Omega^{+}_\Pi}$. However, it is a non-dynamical component arising directly from inflation, of the same nature as the inflationary magnetic mode $\zeta_*$: consistently, we compare it only to the latter.
\end{enumerate}
We neglect the subdominant $\Omega_*$--contribution and the decaying mode so that well into the radiation era, for $\eta\gg \eta_*$, the curvature takes the simple form ($\eta_\nu$ denotes the time of neutrino decoupling)
\begin{equation}
\label{zetasolrad}
\zeta\simeq \zeta_{\rm inf}+\zeta_*+ \left[ \frac{1}{2}+\log\left({\frac{\eta_*}{\eta}}\right) \right]{\Omega_\Pi}\,, ~~~~~\eta_*\ll \eta\leq \eta_\nu
\end{equation}
where here and in the following we drop the superscript~$+$ for simplicity. Here, ${\Omega_\Pi}$ is the magnetic anisotropic stress ratio in the radiation era.
Inserting the solution~\eqref{zetasolrad} into Eq.~\eqref{evolpsirad} and integrating, we find the Bardeen potential during the radiation era
\begin{eqnarray}
\label{psisolrad}
\Psi \simeq \frac{2}{3}{\zeta_{\rm inf}} +\frac{2}{3}\zeta_* +\frac{3{\Omega_\Pi}}{x^2}+\left[\frac{5}{9}+\frac{2}{3} \log\left({\frac{\eta_*}{\eta}}\right) \right]{\Omega_\Pi}\, , \\
& & \hspace*{-3cm} \eta_*\ll\eta\leq \eta_\nu\,. \nonumber
\end{eqnarray}
The inflationary magnetic mode, $\zeta_*$, generated by the electromagnetic field during inflation, contributes to the Bardeen potential in the same way as the standard inflationary mode, $\zeta_{\rm inf}$. As the large scale CMB anisotropies are determined by the Bardeen potentials\footnote{Note that the CMB temperature fluctuation $\Delta T/T$ from the Sachs Wolfe effect can be determined directly by the curvature perturbation $\zeta$ only in the standard adiabatic case for which $\Phi=\Psi$ and the potentials are almost constant during the matter era. In the presence of non-zero anisotropic stresses at large scales, i.e. when a magnetic field is present, $\Delta T/T$ depends on both Bardeen potentials, as given in Eq.~\eqref{SWV}.}, we expect the inflationary magnetic mode to generate CMB temperature anisotropies in a similar way as the standard inflationary term, although with a smaller amplitude (c.f. Eq.~\eqref{zetainf}). To determine the CMB temperature anisotropies, we need to evolve the metric perturbations further, until recombination, accounting for neutrino decoupling.
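Equations~\eqref{evolzetarad} and \eqref{evolpsirad} are straightforward to integrate numerically. The following minimal Python sketch (with illustrative parameter values of our choosing, keeping only the leading matching term $\zeta'_+(x_*)\simeq -{\Omega^{+}_\Pi}/(2x_*)$ and seeding $\Psi$ on the analytic solution) reproduces the asymptotic forms \eqref{zetasolrad} and \eqref{psisolrad}; the integration variable $s=\log x$ is used for numerical stability:
\begin{verbatim}
# Numerical check of Eqs. (zetasolrad) and (psisolrad); all
# parameter values below are illustrative assumptions, not fits.
import numpy as np
from scipy.integrate import solve_ivp

Om_Pi, zeta0 = 0.01, 1.0        # Omega_Pi^+ and zeta_inf + zeta_*
x_st, x_end = 1e-8, 1e-3        # super-horizon range of x = k*eta
s0, s1 = np.log(x_st), np.log(x_end)

def rhs(s, y):                  # y = (zeta, dzeta/ds, Psi)
    x = np.exp(s)
    z, zs, P = y
    dzs = -zs - (x**2/3)*z - Om_Pi     # from Eq. (evolzetarad)
    dP = -3*P + 2*z + 3*Om_Pi/x**2     # from Eq. (evolpsirad)
    return [zs, dzs, dP]

P0 = 2*zeta0/3 + 3*Om_Pi/x_st**2 + 5*Om_Pi/9
sol = solve_ivp(rhs, (s0, s1), [zeta0, -Om_Pi/2, P0],
                rtol=1e-10, atol=1e-12)

L = np.log(x_st/x_end)          # = log(eta_*/eta)
print(sol.y[0, -1], zeta0 + (0.5 + L)*Om_Pi)   # zeta: both ~ 0.89
print(sol.y[2, -1],                            # Psi:  both ~ 3.0e4
      2*zeta0/3 + 3*Om_Pi/x_end**2 + (5/9 + 2*L/3)*Om_Pi)
\end{verbatim}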
\subsection{After neutrino decoupling}
To evolve the curvature and metric potentials until recombination and evaluate the effect of the inflationary magnetic mode $\zeta_*$ on the temperature power spectrum we need to take into account that at a temperature $T_\nu\sim 1$~MeV neutrinos decouple from the electron-positron-photon plasma and start to free stream. They develop an anisotropic stress that acts as a source for the curvature and metric perturbations in~Eqs.~\eqref{evolzeta} and~\eqref{evolpsi}. Moreover, at~$T_{\rm eq}\simeq 0.73$~eV the universe evolves from radiation domination to matter domination. The equation of state $w$ and the sound speed $c_s^2$ depart from their radiation values. As a result, not only the anisotropic stress but also the magnetic energy density, ${\Omega_{\rm B}}$, sources the curvature perturbation, see Eq.~\eqref{evolzeta}.
As studied in detail in~\cite{magcaus} (see also~\cite{shaw, Kojima:2009gw,const}), once neutrinos decouple, their anisotropic stress quickly adjusts at large scales to compensate that of the magnetic field. The precise dynamical behaviour of $\pi_\nu$ can be determined by combining the Boltzmann hierarchy with Einstein's equations. It is well approximated by~\cite{magcaus}
\begin{equation}
\pi_\nu(\eta>\eta_\nu)\simeq -\frac{3{\Omega_\Pi}}{R_\nu}\left(1-\frac{\eta_\nu^2}{\eta^2} \right) + \mathcal{O}(k\eta)^2\,.
\label{pinuapp}
\end{equation}
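The effect of the compensation is transparent at the level of the sources: inserting the approximation \eqref{pinuapp} into the combination ${\Omega_\Pi}+R_\nu\pi_\nu/3$ entering Eqs.~\eqref{evolzeta} and \eqref{evolpsi} leaves only a decaying residual. A short symbolic check (a sketch using sympy):
\begin{verbatim}
# The combination sourcing the metric perturbations after decoupling
import sympy as sp
eta, eta_nu, Om_Pi, R_nu = sp.symbols(
    'eta eta_nu Omega_Pi R_nu', positive=True)
pi_nu = -3*Om_Pi/R_nu*(1 - eta_nu**2/eta**2)   # Eq. (pinuapp)
print(sp.simplify(Om_Pi + R_nu*pi_nu/3))
# -> Omega_Pi*eta_nu**2/eta**2, i.e. a decaying residual
\end{verbatim}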
This compensation affects the subsequent evolution of $\zeta$ and $\Psi$. Indeed, after compensation the magnetic anisotropic stress ${\Omega_\Pi}$ does not source the curvature perturbation anymore. This stops the logarithmic growth of $\zeta$ at $\eta\simeq \eta_\nu$ (c.f. Eq.~\eqref{zetasolrad}). The only remaining source term in Eq.~\eqref{evolzeta} is then the magnetic energy density ${\Omega_{\rm B}}$ with a prefactor which vanishes in the radiation era. The inflationary magnetic mode $\zeta_*$ is not affected by this compensation and it simply goes through neutrino decoupling without alteration. The full solution for $\zeta$ is rather complicated, hence we only write its behaviour in the limits $\eta_\nu\ll\eta\ll \eta_{\rm eq}$ and $\eta\gg \eta_{\rm eq}$; details can be found in Ref.~\cite{magcaus}:
\begin{eqnarray}
\zeta&\simeq &{\zeta_{\rm inf}}+\zeta_*+\left[\frac{\eta_\nu}{\eta}-\frac{1}{2}+\log\left(\frac{\eta_*}{\eta_\nu}\right) \right]{\Omega_\Pi}\,, \label{zetasmall}\\
& & \hspace*{2.5cm} \eta_\nu\ll\eta\ll \eta_{\rm eq}\,,\nonumber\\
& & \nonumber \\
\zeta&\simeq &{\zeta_{\rm inf}}+\zeta_*+\frac{{\Omega_{\rm B}}}{4}+\left[-\frac{1}{2}+\log\left(\frac{\eta_*}{\eta_\nu}\right) \right]{\Omega_\Pi}\,, \label{zetabig}\\
& & \hspace*{3.5cm} \eta\gg \eta_{\rm eq} \,.\nonumber
\end{eqnarray}
The analytical solution for $\zeta$, together with the exact solution obtained by integrating Eq.~\eqref{evolzeta} numerically with the correct expression for the neutrino anisotropic stress instead of approximation \eqref{pinuapp} (c.f. Appendix B of~\cite{magcaus}), are shown in Fig.~\ref{fig:curv}. Approximation \eqref{pinuapp} adjusts to the value $-3{\Omega_\Pi} / R_\nu$ in a way which differs somewhat from how the true $\pi_\nu(k,\eta)$ adjusts itself to the same value. This is reflected in the curvature, leaving an offset between the approximated solution and the true one even at late times. Since this offset is much smaller than the amplitude of the passive mode in $\zeta$, we neglect it and keep using the analytical solution in the following.
\begin{figure}
\includegraphics{curvature.eps}
\caption{The comoving curvature $\zeta$ generated by an inflationary magnetic field. Solid, black: the correct numerical solution with the true $\pi_\nu$. Dashed, black: the approximated solution calculated from approximation \eqref{pinuapp}. Dotted, magenta: the asymptotic expressions given in Eqs.~\eqref{zetasmall} (lower line) and \eqref{zetabig} (upper line). Dash-dotted, green: the approximation given in Eq.~(86) of Ref.~\cite{shaw}. For this figure, we have set ${\Omega_{\rm B}}={\Omega^{+}_\Pi}\simeq {\Omega^{-}_\Pi}(x_*)\simeq \epsilon \,\zeta_*\simeq 0.01\,\zeta_*$, $\eta_*/\eta_\nu=10^{-22}$, $\eta_\nu/\eta_{\rm rec}=10^{-6}$ (the plot shows $\zeta$ after neutrino decoupling). Note also the vertical axes: the differences are small, on the level of 1\%.}
\label{fig:curv}
\end{figure}
The metric perturbation $\Psi$ is also affected by the compensation, which cancels the anisotropic source term scaling as $({\cal H}/k)^2$ in Eq.~\eqref{evolpsi}. As a result, the term $3{\Omega_\Pi}/x^2$ in Eq.~\eqref{psisolrad} drops out. Hence the metric perturbations $\Psi$ and $\Phi$ contribute to the large scale CMB anisotropies only at next-to-leading order $(k\eta)^0$. This has already been found in \cite{magcaus} for the case of a magnetic field generated by a causal process (e.g. a primordial phase transition).
Solving Eq.~\eqref{evolpsi} at order $(k\eta)^0$ requires us to know the evolution of the neutrino anisotropic stress at order $(k\eta)^2$. Combining the Boltzmann hierarchy with Einstein's and the conservation equations in the radiation era gives the following expression for $\pi_\nu$ at next-to-leading order
\begin{eqnarray}
\label{pinunext}
\pi_\nu(k,\eta)&=&\Bigg\{\frac{4}{15+4R_\nu}\left[\zeta_*+\log\left(\frac{\eta_*}{\eta_\nu}\right){\Omega_\Pi}\right]+\\
& &\frac{165{\Omega_\Pi}-42R_\nu{\Omega_{\rm B}}}{14R_\nu (15 + 4 R_\nu)}\Bigg\}k^2\eta(\eta-\eta_\nu) ~~~{\rm at}~\mathcal{O}(k\eta)^2\, .\nonumber
\end{eqnarray}
This solution is strictly valid only during the radiation era and it receives corrections at the transition into the matter era. In our analytical approximation we nevertheless use it until recombination, since an exact analytical solution cannot be derived. We do not expect those corrections to change the result significantly. In the numerical calculations we just use it for the initial conditions, which are set in the radiation era but after neutrino decoupling.
Inserting the solutions for $\pi_\nu$ and $\zeta$ into Eq.~\eqref{evolpsi} we find $\Psi$ after neutrino decoupling (the initial condition is given by Eq.~\eqref{psisolrad}). Here we only write the inflationary magnetic contribution $\zeta_*$ to $\Psi$:
\begin{eqnarray}
\Psi^{\rm im}&=& \frac{2(5+2R_\nu)}{15+4 R_\nu}\zeta_* \;, \quad \eta \ll \eta_{\rm eq} \label{psinu}\\
\Psi^{\rm im}&=&\frac{3}{5}\zeta_*\;, \hspace{1.9cm} \eta\gg \eta_{\rm eq}\,.
\end{eqnarray}
The inflationary magnetic mode is affected by neutrino decoupling. It evolves from $\Psi^{\rm im}=2\zeta_*/3$ before decoupling (see Eq.~\eqref{psisolrad}) to expression~\eqref{psinu} after decoupling. This behaviour is analogous to that of the standard adiabatic curvature perturbation ${\zeta_{\rm inf}}$, which is also affected by the neutrino anisotropic stress. Then, well into the matter era, $\Psi^{\rm im}$ tends to $3\zeta_*/5$. In addition to the inflationary magnetic mode, $\Psi$ also contains the passive and compensated modes already calculated in~\cite{magcaus, shaw, finelli}.
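Numerically, the transfer of the inflationary magnetic mode through decoupling is mild; a minimal sketch, assuming $N_{\rm eff}=3.046$ massless neutrino species, evaluates the three stages:
\begin{verbatim}
# Psi^im / zeta_* at the three stages; N_eff = 3.046 is assumed.
N_eff = 3.046
r = N_eff*(7/8)*(4/11)**(4/3)        # rho_nu / rho_gamma
R_nu = r/(1 + r)                     # ~ 0.409
print(2/3)                           # before decoupling, Eq. (psisolrad)
print(2*(5 + 2*R_nu)/(15 + 4*R_nu))  # after decoupling, Eq. (psinu): ~0.70
print(3/5)                           # deep in the matter era
\end{verbatim}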
\section{The Sachs Wolfe effect, analytic approximation }
\label{sec:SW}
We are now able to calculate the magnetic field contribution to the CMB anisotropy. In this section we present an analytical estimate of the Sachs Wolfe amplitude in order to gain insight into the relative amplitudes of the terms and their scaling. Following~\cite{magcaus} we express the Sachs Wolfe contribution as
\begin{eqnarray}
\label{SWV}
\frac{\Delta T}{T}&=&\frac{D_{g\,\gamma}}{4} +\Psi+\Phi\\
&=&\frac{1}{k}\dot{V}_\gamma-\frac{{\Omega_{\rm B}}-2{\Omega_\Pi}}{4(1-R_\nu)}\; ,\nonumber
\end{eqnarray}
where the last term after the second equality sign is the contribution from the Lorentz force.
The photon velocity $V_\gamma$ obeys the second order differential equation
\begin{equation}
\label{Vgamma}
\ddot{V}_\gamma+\frac{k^2}{3}V_\gamma=k(\dot{\Phi}+\dot{\Psi})\; .
\end{equation}
Since we are interested in the large-scale solution, we can neglect the $k^2$-term in the above equation, so that
\begin{equation}
\label{Vsol}
\dot{V}_\gamma=k(\Phi+\Psi)+c\; .
\end{equation}
In Eq.~\eqref{SWV} we have used $\dot{V}_\gamma =k[\Psi+\Phi + D_{g\,\gamma}/4 +({\Omega_{\rm B}}-2{\Omega_\Pi})/(4(1-R_\nu))]$.
This implies that on large scales $D_{g\,\gamma}/4 +({\Omega_{\rm B}}-2{\Omega_\Pi})/(4(1-R_\nu)) =c/k$ is constant. However, analytically there is no way to determine the quantity $D_{g\,\gamma}$ directly other than via the evolution equation of $V_\gamma$. The constant of integration $c$ can in fact be determined from the initial conditions, using that at neutrino decoupling $\dot{V}_\gamma(\eta_\nu)=\dot{V}_\nu(\eta_\nu)=\dot{V}(\eta_\nu)$. Up to neutrino decoupling, photons and neutrinos indeed behave as a single fluid and share the same velocity. Inserting the solution~\eqref{Vsol} into Eq.~\eqref{SWV} we find at $\eta=\eta_{\rm rec}$
\begin{eqnarray}
\frac{\Delta T}{T}&\simeq& \frac{505 + 108 R_\nu}{135(15 + 4 R_\nu)}\left[{\zeta_{\rm inf}}+\zeta_*+ {\Omega_\Pi}\log\left(\frac{\eta_*}{\eta_\nu} \right) \right]\nonumber \\
&+&\frac{(765 + 244 R_\nu)}{270 (15 + 4 R_\nu)}{\Omega_{\rm B}} -\frac{184005 + 48188 R_\nu}{ 5670 (15 + 4 R_\nu)}{\Omega_\Pi}\nonumber\\
&-&\frac{{\Omega_{\rm B}}-2{\Omega_\Pi}}{4(1-R_\nu)}\; .
\label{SWstar}
\end{eqnarray}
The first term is the standard adiabatic contribution to the Sachs Wolfe term. The second term is our inflationary magnetic mode, equivalent to the adiabatic one. The third term is the contribution from the passive mode. Note that this part receives corrections if one relaxes the assumption of Eq.~\eqref{pinuapp}. The way in which the neutrino anisotropic stress adjusts to the compensation value~$-3\Omega_\Pi/R_\nu$ in fact introduces a correction of the order $\Omega_\Pi$ in the passive mode (as shown in Fig.~\ref{fig:curv} for the curvature perturbation). This correction is however negligible with respect to the logarithmic term. The second line represents the contribution from the compensated mode~\footnote{The Sachs Wolfe term from the compensated mode is slightly different from the one given in Eq.~(6.11) of \cite{maginf}, due to the fact that \cite{maginf} uses a different initial expression for $\pi_\nu$ at next-to-leading order.}. Finally, the third line contains the contribution from the Lorentz force.
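For orientation, the numerical weights multiplying the various sources in Eq.~\eqref{SWstar} can be evaluated directly; a minimal sketch, assuming $R_\nu\simeq0.409$:
\begin{verbatim}
# Coefficients of Eq. (SWstar) for R_nu = 0.409 (assumed value).
R = 0.409
print((505 + 108*R)/(135*(15 + 4*R)))     # ~ 0.245 (zeta terms)
print((765 + 244*R)/(270*(15 + 4*R)))     # ~ 0.193 (Omega_B, compensated)
print(-(184005 + 48188*R)/(5670*(15 + 4*R)))  # ~ -2.16 (Omega_Pi)
print(-1/(4*(1 - R)), 1/(2*(1 - R)))      # ~ -0.42, 0.85 (Lorentz force)
\end{verbatim}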
As the Bardeen potentials from the compensated mode are not constant in time, there is also an integrated Sachs Wolfe term which is not accounted for in this analytical approximation.
The inflationary magnetic mode $\zeta_*$ contributes to the Sachs Wolfe effect differently from the passive and compensated modes:
the temperature angular power spectrum of the passive and compensated modes depends on the magnetic field spectral index, which determines the $k$-dependence of ${\Omega_{\rm B}}$ and ${\Omega_\Pi}$ as $(k\eta_*)^\alpha$ (c.f. Eqs.~\eqref{ombprad} and~\eqref{ompprad}).
On the other hand, since $\zeta_*$ is independent of $k$, i.e. scale invariant (up to a possible log correction if $\alpha=0$) for any spectral index $n_B$ (c.f. Eq.~\eqref{zetainf}), its impact on the temperature power spectrum is always scale invariant like the inflationary one: $\ell(\ell+1)C_\ell\propto\ell^0$. Hence, through this inflationary magnetic mode, even a blue magnetic field can leave a detectable imprint on the CMB at large scales, if it has a sufficient amplitude.
In the following we compare the amplitude and scaling of the inflationary magnetic mode with the passive and compensated modes, using both analytical estimates and numerical evaluations: the latter are obtained using the modified CAMB code of Ref.~\cite{shaw}.
\section{The angular power spectrum}
\label{sec:angular}
Let us estimate the CMB anisotropy generated by the different modes. We use the Fourier convention
\begin{equation}
f({\mathbf k})=\int d^3x\, e^{-i {\mathbf x}\cdot{\mathbf k}}f({\mathbf x})\; \label{fourier},
\end{equation}
so that, as shown in \cite{magcaus} (for details see~\cite{mybook}), the temperature power spectrum from the Sachs Wolfe effect at large scales can be approximated by
\begin{eqnarray}
C_\ell \simeq \frac{2}{\pi}\int_0^\infty dk\,k^2 g^2(\eta_{\rm rec}) j_\ell^2(k,\eta_0) P_\frac{\Delta T}{T}(k, \eta_{\rm rec})\, ,
\end{eqnarray}
where $g(\eta_{\rm rec})$ is the visibility function, $j_\ell(k,\eta_0)$ the spherical Bessel function and $P_\frac{\Delta T}{T}(k, \eta_{\rm rec})$ the spectrum of the temperature perturbation, the square of the terms in Eq.~\eqref{SWstar}.
The amplitude of the inflationary magnetic mode is given by
\begin{eqnarray}
\zeta^2_* &=&k^3 P_{\zeta_*}= \left(\frac{H_*^2}{m_P^2\epsilon}\right)^2\frac{1}{81} \Big[(\alpha-6) C_{\rm em}+
\alpha \,C_\Pi \Big]^2 \nonumber \\
&\times& \left\{\begin{array}{cc} \log^2(x_*) &\mbox{if}~\alpha=0\\
1/\alpha^2 & \mbox{if} ~\alpha\neq 0\end{array} \right. \label{zetastarspectrum}
\end{eqnarray}
so that the CMB power spectrum at large scales is effectively flat, just as the inflationary one:
\begin{eqnarray}
\frac{\ell(\ell+1)C_\ell^{\rm im}}{2\pi} \simeq \frac{g^2(\eta_{\rm rec})}{2\pi^2} \left[ \frac{505 + 108 R_\nu}{135(15 + 4 R_\nu)} \right]^2\!\!
\left(\frac{H_*^2}{m_P^2\epsilon}\right)^2\times \nonumber \\
\frac{1}{81} \Big[(\alpha-6) C_{\rm em}+ \alpha \,C_\Pi \Big]^2
\!\!\left\{\begin{array}{cc} \log^2\big(\eta_*/\eta_0\big) &\mbox{if}~\alpha=0\\
1/\alpha^{2} & \mbox{if} ~\alpha\neq 0\end{array} \right. \hspace*{0.4cm} \label{e:Clim}
\end{eqnarray}
For the passive mode, the CMB power spectrum depends instead on the magnetic field spectral index. One has to consider two cases: for $-3<n_B<-3/2$, the anisotropic stress power spectrum behaves as $k^3P_\Pi\propto k^{2n_B+6}$ (c.f. Eqs.~\eqref{ompprad} and \eqref{alphanB}). To evaluate the integral of the Bessel function, one can use approximations (A3) and (A4) of \cite{Caprini:2009vk}. Restricting for example to the case $n_B<-2$, using Eq.~\eqref{ompprad} one finds
\begin{align}
\frac{\ell(\ell+1)C_\ell^{\rm pass}}{2\pi} \simeq \frac{g^2(\eta_{\rm rec})}{4 \pi^{3/2}} \left[ \frac{505 + 108 R_\nu}{135(15 + 4 R_\nu)} \right]^2 \log^2&\left(\frac{\eta_*}{\eta_\nu}\right) \nonumber\\
\times\left(\frac{H_*^2}{3m_P^2}\right)^2 {C_\Pi}^2(\gamma) \left( \frac{\eta_*}{\eta_0}\right)^{2n_B+6}\frac{\Gamma[-n_B-2]}{\Gamma[-n_B-3/2]} \,\,& \ell^{2n_B+6} \nonumber \\
\hspace{-2cm} {\rm for}~~n_B <-2\,. \label{Clpassivered}
\end{align}
If the magnetic spectral index is not close to scale invariance, $n_B\not\simeq -3$, the amplitude of the CMB spectrum is severely suppressed by the factor $(\eta_*/\eta_0)^{2n_B+6}$.
For $n_B>-2$, the above integral diverges in the UV and we have to introduce the cutoff $k<k_D$ (c.f. Eq.~\eqref{PB}). If $n_B>-3/2$, the anisotropic stress power spectrum is flat, $k^3P_\Pi\propto k^{3}$. One can use approximation (A2) of \cite{Caprini:2009vk}. The value of the integral of the Bessel function depends on the upper cutoff of the magnetic spectrum $k_D$, and in the limit $k_D\eta_0\gg 1$~\cite{Caprini:2009vk}, the CMB spectrum increases as $\ell^2$ for any value of the magnetic spectral index in this range:
\begin{align}
\frac{\ell(\ell+1)C_\ell^{\rm pass}}{2\pi}& \simeq \frac{g^2(\eta_{\rm rec})}{2\pi^2} \left[ \frac{505 + 108 R_\nu}{135(15 + 4 R_\nu)} \right]^2 \log^2\left(\frac{\eta_*}{\eta_\nu}\right) \nonumber \\
&\times\left(\frac{H_*^2}{3m_P^2}\right)^2 {C_\Pi}^2(\gamma) \left( \frac{\eta_*}{\eta_0}\right)^{3} (k_D\eta_0) \,\ell^2 \nonumber\\
&\hspace{3cm}{\rm for}~~n_B >-3/2\,. \label{Clpassiveblue}
\end{align}
Also in this case the result is severely suppressed by the factor $(\eta_*/\eta_0)^3$ (note that $k_D\eta_0\ll (\eta_0/\eta_*)^3$).
The contributions from the compensated mode and the Lorentz force in Eq.~\eqref{SWstar} are also proportional to $\Omega_\Pi$ and $\Omega_{\rm B}$. The Sachs Wolfe term from these modes has therefore the same $\ell$-dependence as the Sachs Wolfe term from the passive mode, shown in Eqs.~\eqref{Clpassivered} and~\eqref{Clpassiveblue}. The amplitude is reduced due to the absence of the logarithmic term in Eq.~\eqref{SWstar}. However, in addition to the Sachs Wolfe effect, we also expect a significant integrated Sachs Wolfe at large scales, since the compensated mode and the Lorentz force generate non-constant metric potentials $\Phi$ and $\Psi$. Our approximations, which only take into account the Sachs Wolfe term, are consequently not expected to be accurate for these contributions (we therefore do not show them here).
From these rough analytic estimates we infer that the contribution of the inflationary magnetic mode to the CMB anisotropies always dominates over the passive and the compensated modes. Comparing Eqs.~\eqref{ompprad} and \eqref{zetastarspectrum}, one sees that, at a given $k$, the amplitude of the passive mode is suppressed by a factor $\epsilon^2$ and by $x_*^{2\alpha}\ll1$ with respect to the inflationary magnetic mode. In the $C_\ell$ spectrum the second suppression is converted to a factor $(\eta_*/\eta_0)^{2\alpha}$. The suppression is stronger for bluer magnetic fields with a larger $\alpha$. Assuming that the magnetic anisotropic stress and energy density are perfectly anti-correlated, $C_{\rm em}(\gamma)=-C_\Pi(\gamma)$ (as suggested by the numerical analysis of~\cite{shaw, finelli}), and restricting to the case $\gamma<-5/4$, so that the magnetic field dominates and its energy density and anisotropic stress are continuous at the transition to the radiation era, one finds
\begin{eqnarray}
\frac{C_\ell^{\rm pass}}{C_\ell^{\rm im}} \simeq
\epsilon^2 \left[\log^2\left( \frac{\eta_*}{\eta_\nu}\right)\right] \left( \frac{\eta_*}{\eta_0}\right)^{2n_B+6} \frac{\Gamma[-n_B-2]}{\Gamma[-n_B-3/2]}
\nonumber\\
\ell^{2n_B+6} \left\{\begin{array}{cc} 1/\log^2\big( \eta_*/\eta_0 \big) & \mbox{if}~n_B\rightarrow -3 \\
(n_B+3)^{2} \, & \mbox{if} ~-3<n_B<-2\end{array} \right. \nonumber
\end{eqnarray}
It appears that $C_\ell^{\rm pass}\ll C_\ell^{\rm im}$. The same applies if $n_B>-3/2$, c.f. Eqs.~\eqref{e:Clim} and \eqref{Clpassiveblue}.
Note that this analysis applies to the case of a magnetic field entirely generated during inflation and transmitted to the radiation era without amplification. However, processes during reheating may lead to significant amplification of the magnetic field~\cite{ruthreview}. Moreover, phase transitions in the early universe are expected to generate primordial magnetic fields as well~\cite{xx}. Such processes would amplify the passive and compensated mode, but they are not expected to modify the inflationary magnetic mode. They may consequently change the above ratio between the passive mode and the inflationary magnetic mode.
\begin{figure}[!ht]
\centerline{\epsfig{figure=cl_nbm299.eps,height=5.2cm}}
\centerline{\epsfig{figure=cl_nbm25.eps,height=5.2cm}}
\centerline{\epsfig{figure=cl_nbm21.eps,height=5.2cm}}
\caption{ \label{fig:cl} Temperature angular power spectrum (in $\mu {\rm K}^2$) for a magnetic field with spectral index: $n_B=-2.99$ (upper panel), $n_B=-5/2$ (middle panel) and $n_B=-2.1$ (bottom panel). For each case we plot (from top to bottom) the standard adiabatic mode (red), the inflationary magnetic mode (blue), the passive mode (green) and the compensated mode with Lorentz force (black). The solid lines are the numerical results from CAMB and the dashed lines are our analytical approximations to the Sachs Wolfe effect given in Eqs.~\eqref{e:Clim} to \eqref{Clpassiveblue}, valid at low $\ell$. Note that in the middle and bottom panel we do not show the 24 (respectively 44) orders of magnitude between the inflationary magnetic mode and the passive mode.}
\end{figure}
In Fig.~\ref{fig:cl} we show some examples of CMB spectra for the standard inflationary mode and the magnetic modes, derived using the modified CAMB code of~\cite{shaw}, and we compare them with our analytical approximations at large scale. The amplitude of the standard adiabatic mode is
\begin{equation}
\zeta^2_{\rm inf}=k^3P_{{\zeta_{\rm inf}}}=\frac{2\pi H^2_*}{m_P^2\epsilon}\simeq 2\pi^2\cdot 2.1\times10^{-9}\;,\label{zetainfvalue}
\end{equation}
where the factor $2\pi^2$ accounts for our Fourier convention~\eqref{fourier} and power spectrum convention~\eqref{mpowerspectrum} that are different from those used in CAMB.
From Eq.~\eqref{zetastarspectrum} we see that the inflationary magnetic mode is suppressed by a factor $H^2_*/(m_P^2\epsilon)\simeq 6\times10^{-9}$ with respect to the standard adiabatic mode of Eq.~\eqref{zetainfvalue}. In order to leave an impact on the CMB, the coefficients $C_\Pi^2/\alpha^2$ and $C_{\rm em}^2/\alpha^2$ therefore need to be large. In Fig.~\ref{fig:cl} we choose $C_\Pi=-C_{\rm em}$ such that $\zeta_*^2\sim 0.01\cdot\zeta^2_{\rm inf}$. We consider three different cases:
\begin{itemize}
\item $n_B=-2.99$, so that $\alpha=0.01$, and $C_\Pi=46$\;,
\item $n_B=-5/2$, so that $\alpha=1/2$ and $C_\Pi=2315$\;,
\item $n_B=-2.1$, so that $\alpha=0.9$ and $C_\Pi=4166$\;.
\end{itemize}
Clearly, blue spectra require large, unphysical values of $C_\Pi$ in order to leave a $\sim1\%$~impact on the CMB. This requires fine-tuning of the generation mechanism: our simple model indeed produces $C_\Pi\sim 20$ for $n_B=-2.99$ and values of order unity for $n_B=-5/2$ and $n_B=-2.1$. In Fig.~\ref{fig:cl} we see that the inflationary magnetic mode dominates over the passive and compensated modes at all multipoles by several orders of magnitude. It therefore leads to much stronger constraints on the primordial amplitude of the magnetic field than the passive and compensated modes considered in~\cite{shaw, finelli}. We also compare the numerical angular power spectrum with our analytical expressions Eqs.~\eqref{e:Clim} and \eqref{Clpassivered} (dashed line), which provide a good approximation at large scales $\ell \lesssim 100$. Note that for these approximations we have chosen $\epsilon\simeq 0.01$ and $\eta_*$ is determined from Eq.~\eqref{zetainfvalue} so that $\eta_*/\eta_0\simeq 3.1\times 10^{-28}$.
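These values of $C_\Pi$ follow from imposing $\zeta_*^2=0.01\,\zeta_{\rm inf}^2$: with $C_{\rm em}=-C_\Pi$ the square bracket in Eq.~\eqref{zetastarspectrum} reduces to $6C_\Pi$, and using $H_*^2/(m_P^2\epsilon)=\pi\cdot2.1\times10^{-9}$ from Eq.~\eqref{zetainfvalue} one recovers the numbers quoted above. A minimal cross-check in Python:
\begin{verbatim}
# Recover C_Pi from zeta_*^2 = 0.01 * zeta_inf^2, using Eqs.
# (zetastarspectrum) and (zetainfvalue); C_em = -C_Pi is assumed.
import numpy as np
H2eps = np.pi*2.1e-9             # H_*^2/(m_P^2 eps)
for nB in (-2.99, -2.5, -2.1):
    a = nB + 3                   # alpha = n_B + 3 for gamma < -5/4
    C_Pi = a*np.sqrt(0.01*9*np.pi/(2*H2eps))
    print(nB, round(C_Pi))       # -> 46, 2315, 4166
\end{verbatim}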
\section{Discussion}
\label{sec:discussion}
We have demonstrated that the inflationary magnetic mode always dominates the passive and compensated ones. It should therefore be taken into account when constraining primordial magnetism with the CMB. A full MCMC analysis is beyond the scope of this paper, but it is possible to predict analytically the constraints on the late time magnetic field that would result from the inflationary mode. In order to do so, we want to relate the amplitude of $\zeta_*$ to the magnetic field amplitude $B_\lambda$ and the spectral index $n_B$.
Under the hypothesis that the energy density and the anisotropic stress of the magnetic field are perfectly anti-correlated, $C_{\rm em}(\gamma)=-C_\Pi(\gamma)$, $\zeta_*$ can be directly related to the magnetic field energy density ${\Omega_{\rm B}}$, c.f. Eqs.~\eqref{zetastarspectrum} and~\eqref{ombprad}. The magnetic energy density power spectrum at the beginning of the radiation era is
\begin{equation}
k^3 P_{\rho_B}(k,\eta_*)=[3 m_P^2 H_*^2\, {\Omega_{\rm B}}]^2\,,
\label{PrhoBeta*}
\end{equation}
obtained from Eq.~\eqref{PrhoB} where we have set the energy density at the beginning of the radiation era $\bar\rho_{\rm rad}(\eta_*)\equiv \rho_*=3H_*^2m_P^2=g_{\rm eff}^*a_{\rm SB} T_*^4$. The energy density power spectrum today becomes then $P_{\rho_B}(k,\eta_0)=(a_*/a_0)^8 P_{\rho_B}(k,\eta_*)$, which can be expressed in terms of $B_\lambda$. In order to do this, we use the definitions and the approximated spectra given in Sec. II of Ref.~\cite{Caprini:2009vk}, which read:
\begin{eqnarray}
P_{\rho_B}(k,\eta_0) \simeq \left\{
\begin{array}{ll}
\frac{A^2_B(\eta_0)\,k_D^{2n_B+3}}{32\pi^4(2n_B+3)}\,, & {\rm if}~n_B<-\frac{3}{2} \\
\\
\frac{3A^2_B(\eta_0)\,n_B\,k^{2n_B+3} }{128\pi^4(2n_B+3)(n_B+3)}\,, & {\rm if}~n_B>-\frac{3}{2} \\
\end{array}\right.
\end{eqnarray}
where $A_B(\eta_0)$ is related to $B_\lambda$ through Eq.~\eqref{Bla}. Together with Eq.~\eqref{PrhoBeta*}, we then have all the ingredients to relate ${\Omega_{\rm B}}$, and therefore $C_\Pi(\gamma)$, to $B_\lambda$ and $n_B$, and to substitute it into Eq.~\eqref{zetastarspectrum}. We finally obtain ($T_0$ denotes the temperature today):
\begin{eqnarray}
&&\zeta_*= \frac{B_\lambda^2}{\rho_*}\, \frac{1}{\epsilon}\, \left(\frac{T_*}{T_0}\right)^4 \label{zetaBlambda}\\
&&\left\{\begin{array}{ll}
\sqrt{\frac{3n_B}{8\Gamma^2[\frac{n_B+3}{2}] (n_B+3)^3(2n_B+3)}} \left(\frac{\lambda}{\eta_*}\right)^{n_B+3}\,, & {\rm if}~ n_B<-\frac{3}{2} \\
\\
\frac{2}{\sqrt{32}\Gamma[\frac{n_B+3}{2}] \sqrt{2n_B+3}} \left(\frac{\lambda}{\eta_*}\right)^{3/2} (k_D\lambda)^{n_B+\frac{3}{2}}\,, & {\rm if} ~ n_B>-\frac{3}{2}
\end{array}
\nonumber
\right.
\end{eqnarray}
A rough constraint on $B_\lambda$ as a function of $n_B$ can be derived by imposing that the amplitude of the CMB spectrum from the magnetic field at large scales must not overcome the observed value. Inserting the above equation~\eqref{zetaBlambda} into the CMB spectrum originating from the inflationary magnetic mode, Eq.~\eqref{e:Clim}, and setting
\begin{equation} \label{e:Cltrue}
\ell^2 C_\ell^{\rm im} \leq \ell^2 C_\ell \simeq 7.9 \, \times \,10^{-10}\, ,
\end{equation}
we derive the constraint shown in Fig.~\ref{fig:Blambda}.
\begin{figure}[ht]
\centerline{\epsfig{figure=blambda.eps,height=5.3cm}}
\caption{ \label{fig:Blambda} Upper bound on the magnetic field amplitude today smoothed on a scale of $1$ Mpc, as a function of the magnetic field spectral index $n_B$. The bound is obtained from the effect on the CMB of the inflationary magnetic mode Eq.~\eqref{e:Clim}: it applies to magnetic fields generated during inflation. For $n_B>-3/2$, we have set $k_D\eta_0=3000$ (c.f. Ref.~\cite{Caprini:2009vk}).}
\end{figure}
This figure shows that only in the nearly scale invariant case, $n_B\sim -3$, is it possible to have large (nG) magnetic fields on the scale of $\lambda=1$ Mpc without generating too large a contribution from the inflationary magnetic mode in the CMB. This is due to the huge factor
\begin{equation}
\frac{\lambda}{\eta_*} \simeq 1.4 \times 10^{23} \frac{T_*}{10^{16}{\rm GeV}} \frac{\lambda}{{\rm Mpc}} \,,
\label{laetastar}
\end{equation}
which enters the amplitude of the inflationary mode when $\zeta_*$ is expressed in terms of $B_\lambda$ (c.f. Eq.~\eqref{zetaBlambda}). If $n_B<-3/2$, the constraint on $B_\lambda$ varies as $(\eta_*/\lambda)^{(n_B+3)/2}$. Therefore, it becomes more stringent as the spectral index increases. If $n_B> -3/2$, the dependence on the spectral index is much weaker, since $B_\lambda$ varies as $(\eta_*/\lambda)^{3/4}/(k_D\lambda)^{(2n_B+3)/4} = (\eta_*k_D)^{3/4}/(k_D\lambda)^{(n_B+3)/2}$ with $k_D^{-1}\gg \eta_*$. This change in the slope is clearly visible in Fig.~\ref{fig:Blambda}.
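To make the steepness of the constraint explicit, one can evaluate how fast the bound degrades away from scale invariance. A rough sketch, with the fiducial numbers of Eq.~\eqref{laetastar} (so $T_*=10^{16}$~GeV and $\lambda=1$~Mpc are assumed):
\begin{verbatim}
# B_lambda bound relative to the n_B = -3 case, from the scaling
# (eta_*/lambda)^((n_B+3)/2); valid for n_B < -3/2.
lam_over_eta = 1.4e23                       # Eq. (laetastar)
for nB in (-3.0, -2.9, -2.5, -2.0):
    print(nB, lam_over_eta**(-(nB + 3)/2))  # 1, 7e-2, 2e-6, 3e-12
\end{verbatim}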
The dependence on $B_\lambda$ and on $\eta_*$ is a general feature of the CMB spectrum generated by an inflationary magnetic field, and it does not depend on the details of the generation model that we have chosen in this analysis.
A somewhat stronger constraint is obtained when taking into account that the CMB fluctuations from a magnetic field are non-Gaussian and lead to a bispectrum $B_\ell^{\rm im}$ which is parametrically of the order of
$\left(C_\ell^{\rm im}\right)^{3/2}$ such that
$$
f_{\rm NL}\sim \ell^3B^{\rm im}_\ell/(\ell^2C_\ell)^2 \sim \left(\ell^2C_\ell^{\rm im}\right)^{3/2}/(\ell^2C_\ell)^2 \stackrel{<}{\sim} 10\,.
$$
The upper limit on $f_{\rm NL}$ is the Planck limit from~\cite{Ade:2013xxiv} on $f_{\rm NL}^{\rm local}$.
Inserting (\ref{e:Cltrue}) for $\ell^2C_\ell$, we obtain
\begin{equation}
\ell^2C_\ell^{\rm im}\ \stackrel{<}{\sim} 4\times 10^{-3} \; \ell^2C_\ell \,.
\end{equation}
Since $C_\ell^{\rm im}$ scales with $B_\lambda^4$ this reduces the limit shown in Fig.~\ref{fig:Blambda} only by about a factor 5. On the other hand, this is a very rough estimate taking into account only the parametric dependence. The true value may well contain a pre-factor which differs considerably from $1$, see discussion of Ref.~\cite{barnaby} below. We therefore have plotted only the much safer limit $C_\ell^{\rm im} \le C_\ell$.
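The numbers in this estimate follow from elementary arithmetic; schematically (a sketch of the two steps):
\begin{verbatim}
# l^2 C_l^im <= [10 (l^2 C_l)^2]^(2/3) from f_NL < 10, then the
# impact on B_lambda follows from C_l^im ~ B_lambda^4.
Cl = 7.9e-10                      # Eq. (e:Cltrue)
Cl_im_max = (10*Cl**2)**(2/3)
print(Cl_im_max/Cl)               # ~ 4e-3
print((Cl_im_max/Cl)**0.25)       # ~ 0.26: B_lambda bound tightens
                                  #   by a factor ~ 4-5
\end{verbatim}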
Strictly speaking, the CMB only puts a constraint on the parameter $C_{\rm em}(\gamma)=-C_{\Pi}(\gamma)$. In the context of the particular model given by Eqs.~\eqref{actionem} and~\eqref{f}, this parameter is known once the value of $\gamma$ is fixed \cite{maginf}. The CMB constraint can therefore be translated into a constraint, e.g. on the energy scale of inflation \cite{energyscale}. In this case, the amount of magnetic field generated is completely determined by the choice of $\gamma$. It has been shown in previous analyses (and can be inferred from Eq. \eqref{ombprad}) that if $n_B$ is significantly larger than $-3$, ${\Omega_{\rm B}}$ is strongly suppressed on large scales and one cannot expect a large field amplitude $B_\lambda$ for $\lambda$ of the order of the Mpc \cite{yokoyama,subramanian}. However, the spirit of this paper is to put the inflationary magnetic mode on the same footing as the passive and the compensated modes, and to use it to set model independent constraints on the magnetic field amplitude today using the CMB.
The constraint on $B_\lambda$ shown in Fig.~\ref{fig:Blambda} is model independent in the sense that it is mainly set by the dependence of Eq.~\eqref{zetaBlambda} on the factor $\lambda/\eta_*$. The numerical pre-factor in Eq.~\eqref{zetaBlambda} depends somewhat on the choice of the model of Eqs.~\eqref{actionem} and~\eqref{f}, but this dependence is negligible compared to the main feature of the constraint, i.e., its strong dependence on $n_B$.
Basically, the difference from the passive and compensated modes is that, for the inflationary mode, the {\it integrated} magnetic energy density (up to the tiny scale $\eta_*$) is converted into a scale invariant $\zeta_*$, and influences the CMB constraint on the magnetic field amplitude at large scale $B_\lambda$. The passive and compensated modes, on the other hand, depend on the magnetic field spectrum: therefore only the large scales contribute to the CMB constraint.
In Fig.~\ref{fig:zetapassive} we compare the CMB constraint on $B_\lambda$ obtained from the passive and the inflationary mode, for $n_B<-2$ (for the passive CMB spectrum we use approximation \eqref{Clpassivered}). The constraint from the passive mode does not become more stringent for higher values of the spectral index. The CMB spectrum from the passive mode in terms of $B_\lambda $ contains in fact only a factor $(\lambda/\eta_0)^{(n_B+3)/2}$, instead of the huge factor of Eq.~\eqref{laetastar} \footnote{Note that we could have chosen to express the CMB constraint in terms of the integrated magnetic field amplitude $\vev{B^2}\simeq B_\lambda^2(k_D\lambda)^{n_B+3}$. We have chosen $B_\lambda$ since it is the quantity customarily used in CMB analyses. In terms of $\sqrt{\vev{B^2}}$, the result in Fig.~\ref{fig:Blambda} would simply change by a factor $(k_D \lambda)^{(n_B+3)/2}$.}.
\begin{figure}[ht]
\centerline{\epsfig{figure=blambdapass.eps,height=5.3cm}}
\caption{ \label{fig:zetapassive} Region below the dashed line: constraint from the passive mode from the CMB spectrum \eqref{Clpassivered}. Region below the solid line: the same as Fig.~\ref{fig:Blambda}. }
\end{figure}
In a full MCMC study, one has to vary the parameters $\eta_*$, $B_{1{\rm Mpc}}$ and $n_B$. Note that the above equation~\eqref{zetaBlambda} can be used to express the initial conditions for the Boltzmann hierarchy given in Appendix \ref{app:initial} in terms of $B_{1{\rm Mpc}}$ and $n_B$. In the simple model discussed here, the additional slow roll parameter $\epsilon$ appearing in Eq.~\eqref{zetaBlambda} is actually not a free parameter but can be inferred from $B_{1{\rm Mpc}}$ and the amplitude of inflationary perturbations. However this may change in a more sophisticated model for the generation of magnetic fields during inflation.
As mentioned before, in addition to the power spectrum, the inflationary magnetic mode induces a distinct bispectrum since it is non-Gaussian~\cite{barnaby,Caprini:2009vk}. As for the CMB spectrum, there is a non-Gaussian contribution in the CMB temperature anisotropy arising from the inflationary magnetic mode (the one calculated in \cite{barnaby}) and one arising from the passive and compensated modes, which corresponds to the one evaluated in e.g. \cite{Caprini:2009vk}. If the generation of the magnetic field arises during inflation, the non-Gaussian contribution from the inflationary magnetic mode is the dominant one. Ref.~\cite{barnaby} estimates that a scale invariant magnetic field spectrum generates a bi-spectrum with $f_{\rm NL}^{\rm equiv.~local} \simeq 1280 \,\mathcal{P} \, N_{\rm CMB}^3(N_{\rm tot}-N_{\rm CMB})$, where $\mathcal{P}=H_*^2/(\pi m_P^2\epsilon) \simeq 2.1\times 10^{-9}$ is the amplitude of the fluctuations of the inflationary spectrum, $N_{\rm CMB}$ is the number of e-folds before the end of inflation when the observable scales leave the horizon, while $N_{\rm tot}$ is the total number of e-folds of inflation\footnote{Since the calculations of $\zeta$ in Refs.~\cite{barnaby} and \cite{maginf} differ significantly, we show in Appendix~\ref{app:curvature} that they are nevertheless equivalent.}. Because of the presence of $N_{\rm tot}$, in the perfectly scale invariant case the amount of non-Gaussianity cannot be determined precisely. On the other hand, for a magnetic field spectrum only close to scale invariance, the factor $(N_{\rm tot}-N_{\rm CMB})$ is absent, since it comes directly from the logarithmic divergence of the magnetic energy density power spectrum when $n_B=-3$. If the magnetic spectral index is close to scale invariance, we can assume for an order of magnitude estimate that the calculation of \cite{barnaby} applies also to this case, and that $f_{\rm NL}^{\rm equiv.~local}\sim 1280 \, \mathcal{P} \, N_{\rm CMB}^3$: one then obtains $f_{\rm NL}^{\rm equiv.~local} \simeq 0.4$ to $0.7$, well below the current Planck limit of roughly $f_{\rm NL} \le 10$~\cite{Ade:2013xxiv}. There is therefore no indication that the model analyzed in this paper is excluded by the new bounds on non-Gaussianity from Planck. Constraints on the magnetic field amplitude $B_{1{\rm Mpc}}$ as a function of the spectral index can be placed by imposing the Planck upper bound on $f_{\rm NL}$ on the non-Gaussianity due to the inflationary magnetic mode. As for the constraints from the power spectrum, we expect these would be stronger than the constraints derived in \cite{Caprini:2009vk} from the compensated mode.
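For concreteness, with $\mathcal{P}\simeq2.1\times10^{-9}$ and a few illustrative (assumed) values of $N_{\rm CMB}$, the estimate $f_{\rm NL}^{\rm equiv.~local}\sim1280\,\mathcal{P}\,N_{\rm CMB}^3$ evaluates to:
\begin{verbatim}
# f_NL ~ 1280 * P * N_CMB^3 for a nearly scale invariant field;
# the sampled e-fold numbers are illustrative choices.
P = 2.1e-9
for N in (55, 60, 65):
    print(N, 1280*P*N**3)        # ~ 0.45, 0.58, 0.74
\end{verbatim}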
\section{Conclusions}
\label{sec:con}
In this paper we have computed the CMB anisotropy spectrum from magnetic fields generated during inflation. We have paid special attention to a new mode which we call the inflationary magnetic mode. It is due to contributions to the comoving curvature perturbation $\zeta$ which come from the perturbations of the geometry by the magnetic field during inflation.
This mode is always scale invariant, and dominates the CMB signal with respect to the passive and compensated magnetic modes, which are sourced by the magnetic field after inflation. We have evaluated analytically the CMB spectra from the inflationary magnetic mode and inferred an analytical constraint on the magnetic field amplitude today, as a function of the magnetic field spectral index. The constraint is much stronger than what is usually found with CMB analyses for spectral indices far from scale invariance: through the inflationary magnetic mode, even a magnetic field with spectrum far from scale invariance can leave a detectable imprint in the CMB. The new mode should therefore be accounted for when constraining primordial magnetism with the CMB. This implies however that the magnetic field generation time must be inserted as a new parameter in CMB analyses, and the constraints on the magnetic field amplitude and spectral index should be diversified depending on the generation mechanism of the primordial magnetic field.
In this analysis we started from a given model of inflationary magnetogenesis, where the standard electromagnetic action is modified by inserting a specific coupling with the inflaton in the kinetic term, breaking conformal invariance. However, the constraint on the magnetic field amplitude that we finally derive from the CMB temperature spectrum does not depend strongly on the choice of the magnetogenesis model, apart from the numerical pre-factor. The strong dependence on the magnetic spectral index, which is the novelty of this CMB constraint with respect to those obtained using the passive and compensated modes, is a general feature of any magnetic mode in the curvature perturbation generated during inflation.
Magnetic fields also generate vector and tensor perturbations during inflation: a homogeneous (inflationary) magnetic vector mode subsequently decays and does not leave any signature in the CMB; however, a tensor mode might be present and should also be taken into account in a full MCMC study.
Finally, we want to point out that the inflationary magnetic mode obtained in Ref.~\cite{maginf} is actually more general than its derivation. The vacuum fluctuations of an arbitrary light field which is not conformally coupled, e.g. a minimally coupled light scalar field, will be amplified during inflation. Even if the field decays after inflation, e.g. into standard model particles, its contribution $\zeta_*$ to the curvature will remain by continuity and might result in observable temperature anisotropies which, generically, will be non-Gaussian. Therefore, limits on primordial non-Gaussianity also provide a limit on the number of light fields which are non-conformally coupled during inflation. This is an interesting new window into very high energy physics from cosmological observations.
\acknowledgments{
It is a pleasure to thank Fabio Finelli, Lukas Hollenstein, Daniela Paoletti, Levon Pogosian and Federico Urban for very useful discussions, and Richard Shaw for his help with the magnetic version of CAMB. CB is supported by the Herchel Smith Postdoctoral fund and
by King's College Cambridge.
RD is supported by the Swiss National Science Foundation and the (US) National Science Foundation under Grant No. NSF PHY11-25915. }
\section{Introduction}
These proceedings are based on our work~\cite{Delgado:2017cls}. The ATLAS and CMS experiments at CERN discovered a new scalar boson with the properties of the Standard Model (SM) one~\cite{Aad:2012tfa,Chatrchyan:2012xdj}, and subsequent searches point to an energy gap up to the scale of any new physics~\cite{Khachatryan:2014jba, Aaboud:2017eta, Sirunyan:2017acf}. The Higgs boson is a key component of the electroweak symmetry breaking sector (EWSBS) of the SM. Since the LHC is probing the EWSBS through direct searches, the new physics (if any) could lie in this sector. The existence of an energy gap between the electroweak scale and the new physics scale (if any) would naturally fit into a BSM model with a strongly interacting EWSBS. These models introduce a new energy scale $f\gg v = 246\,{\rm GeV}$ where some new strong interactions trigger the dynamical breaking of a global symmetry group $G$ to a certain subgroup $H$. As indicated by the Equivalence Theorem (ET), at high energies $\gg v$ the constituents of the EWSBS behave as scalar Goldstone bosons. At lower energies, 3 of these Goldstone bosons give rise to the longitudinal components of the gauge bosons. Hence, taking into account the SM suppression of longitudinal gauge boson production at the LHC, a resonance in the longitudinal gauge boson scattering processes would be a smoking gun for new physics at the LHC involving the EWSBS.
There are two approaches for studying the collider phenomenology of beyond-SM (BSM) physics. The first one, top-down, takes a particular model with a UV-completion scheme and studies it at the TeV scale. Such a model can be fully renormalizable. The disadvantage is that we have no clue about the actual UV completion of the underlying BSM theory (if any), and some BSM models, like the MSSM, have $\sim 100$ free parameters.
The second approach, bottom-up, involves developing an effective field theory (EFT) as general as possible, in particular without making assumptions about the particular UV-completion scheme. In this work, the second approach is used. Hence, we will assume the SM spontaneous symmetry breaking pattern $SU(2)_L\times SU(2)_R\to SU(2)_{L+R}$, which involves 3 Goldstone bosons and the Higgs boson, and is the minimal choice that generates the electroweak (EW) gauge boson masses (longitudinal modes of $W^\pm,Z$) while preserving the custodial symmetry $SU(2)_C=SU(2)_{L+R}$ (SM tree-level relation $m_W=\cos\theta_W m_Z$), by means of the Effective Electroweak Chiral Lagrangian (EChL). It has been developed since the eighties~\cite{Appelquist:1980vg, Longhitano:1980iz, Longhitano:1980tm, Chanowitz:1985hj, Cheyette:1987jf, Dobado:1989ax, Dobado:1989ue}, alongside the well established chiral perturbation theory (ChPT) of low energy QCD~\cite{Weinberg:1978kz, Gasser:1984gg, Gasser:1983yg}. It was used in the early nineties for LEP phenomenology \cite{Dobado:1990zh, Espriu:1991vm}, and for LHC prospects (mostly Higgs)~\cite{Dobado:1990jy, Dobado:1990am, Dobado:1995qy, Dobado:1999xb}. Although in principle it described just the interactions among the EW Goldstone bosons, in recent years it has incorporated the scalar field $H$, as a consequence of the discovery of a light Higgs-like particle~\cite{Feruglio:1992wf, Alonso:2012px, Buchalla:2013rka, Espriu:2012ih, Delgado:2013loa, Delgado:2013hxa, Brivio:2013pma, Espriu:2013fia, Espriu:2014jya, Delgado:2014jda, Buchalla:2015qju, Arnan:2015csa}.
If the electroweak sector happens to be strongly interacting, the perturbative analysis will break down at the TeV scale. As is well known from low energy QCD~\cite{Weinberg:1978kz, Gasser:1984gg, Gasser:1983yg}, dispersion relations, encoded in the so-called unitarization procedures, are then needed. A detailed study of the usage of dispersion relations, including coupled channels, can be found in~\cite{Delgado:2015kxa}. This work, based on~\cite{Delgado:2014jda, Delgado:2017cls}, is part of the effort of exploring the main implications of the EChL for LHC phenomenology. The absence of signals of a strongly interacting EWSBS sets experimental bounds on the values of the chiral parameters of the EChL~\cite{Falkowski:2013dza, Brivio:2013pma, Khachatryan:2014jba, Aad:2014zda, ATLAS:2014yka, Fabbrichesi:2015hsa, Buchalla:2015qju, Aaboud:2016uuk, deBlas:2018tjm}.
For studying its collider phenomenology, we introduce a unitarized EChL description of $WZ$ scattering into MadGraph~5 by means of an intermediate effective Proca Lagrangian. Other approaches found in the literature are form factors~\cite{Arnold:2008rz} or modified Feynman vertices~\cite{Kilian:2014zja}. The point is that the direct output of dispersion relations is an on-shell matrix element, whereas the input to Monte Carlo programs is a set of Feynman rules which, of course, must also deal with off-shell processes.
\section{EChL and Effective Proca Lagrangian}
In this work, we compute the cross section for the processes $pp\to W^+Z jj\to l^+l^+l^-\nu jj$, where the vector resonance is produced in the intermediate VBS subprocess $WZ\to WZ$, by means of the Inverse Amplitude Method (IAM) and an effective Proca Lagrangian. We use the non-linear EChL, $\mathcal{L} = \mathcal{L}_2 + \mathcal{L}_4$, with a derivative expansion,\par\noindent%
{\scriptsize
\begin{align}
\mathcal{L}_2 &= -\frac{1}{2 g^2} {\rm Tr}\Big(\hat{W}_{\mu\nu}\hat{W}^{\mu\nu}\Big) -\frac{1}{2 g'^{2}} %
{\rm Tr}\Big(\hat{B}_{\mu\nu}\hat{B}^{\mu\nu}\Big) %
+\frac{v^2}{4}\left[1+2a\frac{H}{v}+b\frac{H^2}{v^2}\right] {\rm Tr} \Big(D^\mu U^\dagger D_\mu U \Big) %
+\frac{1}{2}\partial^\mu H\,\partial_\mu H + \dots\,\label{EChL2}\\
\mathcal{L}_4 &= a_1 {\rm Tr}\Big( U \hat{B}_{\mu\nu} U^\dagger \hat{W}^{\mu\nu}\Big) %
+ ia_2{\rm Tr}\Big(U\hat{B}_{\mu\nu} U^\dagger [{\cal V}^\mu, {\cal V}^\nu ]\Big) %
- ia_3{\rm Tr}\Big(\hat{W}_{\mu\nu}[{\cal V}^\mu, {\cal V}^\nu]\Big) %
+a_4\Big[{\rm Tr}({\cal V}_\mu {\cal V}_\nu) \Big] \Big[{\rm Tr}({\cal V}^\mu {\cal V}^\nu)\Big] \nonumber\\
&+a_5 \Big[{\rm Tr}({\cal V}_\mu {\cal V}^\mu)\Big] \Big[{\rm Tr}({\cal V}_\nu {\cal V}^\nu)\Big] %
-c_{W}\frac{H}{v} {\rm Tr}\Big(\hat{W}_{\mu\nu} \hat{W}^{\mu\nu}\Big) %
-c_B\frac{H}{v}\, {\rm Tr} \Big(\hat{B}_{\mu\nu} \hat{B}^{\mu\nu} \Big) + \dots
\end{align}}%
Here, $U(w^\pm,z) = 1 + iw^a\tau^a/v +\mathcal{O}(w^2)$, $D_\mu U = \partial_\mu U + i\hat{W}_\mu U -iU\hat{B}_\mu$, $\hat{W}_{\mu\nu} = \partial_\mu \hat{W}_\nu - \partial_\nu \hat{W}_\mu +i[\hat{W}_\mu,\hat{W}_\nu ],\;\hat{B}_{\mu\nu} = \partial_\mu \hat{B}_\nu -\partial_\nu \hat{B}_\mu$, $\hat{W}_\mu = g \vec{W}_\mu \vec{\tau}/2 ,\;\hat{B}_\mu = g'\, B_\mu \tau^3/2$ and ${\cal V}_\mu = (D_\mu U) U^\dagger$.
Note that some higher order operators that appear at dimension 8 in linear representation~\cite{Contino:2013kra, Alloul:2013naa} can contribute to a lower order in the non-linear one. In particular, this is the case of the chiral parameters $a_4$ and $a_5$, whose contribution is crucial for the processes we are considering here.
We use the on-shell Vector Boson Scattering (VBS) matrix elements computed by means of the IAM~\cite{Delgado:2017cls} to adjust an effective Proca Lagrangian, so that it reproduces the behaviour of the computed matrix elements up to the first resonance, which is the energy scale where the underlying EChL breaks down. The Proca Lagrangian can be directly introduced into MadGraph~5~\cite{Alwall:2014hca,Frederix:2018nkq} by means of FeynRules~\cite{Alloul:2013bka}. The key element is that we let the effective Proca couplings be functions of the energy scale of the process, by means of \emph{ad hoc} Fortran functions inside our UFO model\footnote{Universal FeynRules Output, see Ref.~\cite{Degrande:2011ua}.}. This approach has some similarities with others found in the literature, like the form-factor one~\cite{Arnold:2008rz} or the effective approach of Kilian \emph{et al.} (appendix~D of Ref.~\cite{Kilian:2014zja}). However, our underlying physical model is quite different, especially from the form-factor approach, since we are, indeed, considering a BSM resonance coming from a strongly interacting EWSBS. This is why we introduce an effective Proca Lagrangian, which explicitly contains a vector resonance $V$ with a mass $M_V$, a width $\Gamma_V$ (that enters the $V$ propagator) and couplings $f_V$ and $g_V$,\par\noindent%
{\scriptsize
\begin{multline} %
\mathcal{L}_V =
\mathcal{L}_V^{\rm kin} - \frac{if_V}{v^2} \bigg[m_W^2 V^0_\nu (W^+_\mu W^{-\,\mu\nu} -W^-_\mu W^{+\,\mu\nu}) %
+ m_W m_Z V^+_\nu (W^-_\mu Z^{\mu\nu} -Z_\mu W^{-\,\mu\nu}) %
+ m_W m_Z V^-_\nu (Z_\mu W^{+\, \mu\nu}- W^+_\mu Z^{\mu\nu}) \bigg] \\
+\frac{2ig_V}{v^2}\bigg[m_W^2 V^{0\,\,\mu\nu} W_\mu^+ W_\nu^- %
+ m_W \, m_Z\, V^{+\,\, \mu\nu} W_\mu^-Z_\nu %
+ m_W \, m_Z\, V^{-\,\, \mu\nu} Z_\mu W_\nu^+ \bigg], %
\label{LVugauge}
\end{multline}}%
where $V^a_{\mu\nu}= \partial_\mu V^a_\nu - \partial_\nu V^a_\mu$ ($a=\pm,0$), $W^a_{\mu\nu}=\partial_\mu W^a_\nu - \partial_\nu W^a_\mu$ ($a=\pm$), and $Z_{\mu\nu}= \partial_\mu Z_\nu - \partial_\nu Z_\mu$. Our requirements are~\cite{Delgado:2017cls}:
\begin{itemize}
\item At low energies, below the resonance, the predictions from the effective Proca Lagrangian should mimic the unitarized scattering matrix element.
\item Above the resonance, the cross section should not grow faster than the Froissart bound. That is, $\sigma(s)\le\sigma_0\log^2\left(s/s_0\right)$.
\end{itemize}
Since we are studying deviations from the SM coming from the EWSBS, we are mostly focused on the longitudinal polarizations, so that we can set $f_V=0$ in Eq.~(\ref{LVugauge}). The function $g_V^2(z) = \left[\theta(M_V^2-s)(M_V^2/z) + \theta(s-M_V^2)(M_V^4/z^2)\right]\cdot g_V^2(M_V^2)$ is well suited for the coupling $g_V$, where $z=s,t,u$ when the resonance $V$ is propagating in the $s$, $t$ and $u$ channels, respectively.
Hence, for each benchmark point, we set $M_V$ and $\Gamma_V$ from the pole of the unitarized scattering amplitude. We extract $g_V(M_V^2)$ by requiring that, for $s=M_V^2$ (on the resonance peak), $\left\lvert a_{11}^{{\rm EChL}_{\rm tree}}+a_{11}^V\right\rvert = \left\lvert a_{11}^{\rm IAM}\right\rvert$, where $a_{11}$ stands for the isovector partial wave ($IJ=11$, see Ref.~\cite{Delgado:2015kxa}); ${\rm EChL}_{\rm tree}$, for the perturbative $\mathcal{L}_2$ EChL amplitude [Eq.~(\ref{EChL2})]; $V$, for the Proca Lagrangian [Eq.~(\ref{LVugauge})]; and ${\rm IAM}$, for the unitarized scattering amplitude. Then, we substitute $g_V$ by $g_V(s)$, $g_V(t)$ and $g_V(u)$ when the resonance $V$ appears in the $s$, $t$ and $u$ channels, respectively.
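As an illustration of how this energy dependence can be encoded, the following minimal Python sketch mimics the behaviour of the \emph{ad hoc} Fortran functions mentioned above (the function name and its arguments are ours, chosen for clarity; the actual implementation lives inside the UFO model):
\begin{verbatim}
def gV2(z, s, MV, gV2_peak):
    """Running effective coupling g_V^2(z).

    z        -- s, t or u, depending on the channel in which
                the resonance V propagates
    s        -- Mandelstam s of the VBS subprocess
    MV       -- mass of the vector resonance V
    gV2_peak -- g_V^2(M_V^2), fixed by matching |a11| to the
                IAM amplitude on the resonance peak
    """
    if s <= MV**2:
        # Below the resonance: mild M_V^2/z scaling, so the
        # Proca amplitude mimics the IAM one at low energies.
        return (MV**2 / z) * gV2_peak
    # Above the resonance: stronger M_V^4/z^2 suppression,
    # keeping the cross section below the Froissart bound.
    return (MV**4 / z**2) * gV2_peak
\end{verbatim}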
\section{Results}\label{secresults}
We have chosen 6 benchmark points (BPs), listed in table~\ref{BPtable} and displayed in Fig.~\ref{BPfig}. They are sets of the $a$, $a_4$ and $a_5$ chiral parameters chosen to dynamically generate resonances in the isovector $IJ=11$ channel, with masses around $1.5$, $2$ and $2.5\,{\rm TeV}$.
\begin{table}[t!h]
\begin{center}
\vspace{.2cm}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\rule{0pt}{1ex}
{\footnotesize {\bf BP}} & {\footnotesize {\bf $M_V ({\rm GeV})$}} & {\footnotesize {\bf $\Gamma_V ({\rm GeV)}$}} & {\footnotesize {\bf $g_V(M_V^2)$}} & {\footnotesize {\bf $a$}} & {\footnotesize {\bf $a_4 \cdot 10^{4}$}} & {\footnotesize {\bf $a_5\cdot 10^{4}$}}
\\[5pt] \hline
\rule{0pt}{1ex}
BP1 & $\quad 1476 \quad $ & $\quad 14 \quad $ & $ \quad 0.033 \quad $ & $ \quad 1 \quad $ & $ \quad 3.5 \quad $ & $ \quad -3 \quad $
\\[5pt] \hline
\rule{0pt}{1ex}
BP2 & $\quad 2039 \quad $ & $\quad 21 \quad $ & $ \quad 0.018 \quad $ & $ \quad 1 \quad $ & $ \quad 1 \quad $ & $ \quad -1 \quad $
\\[5pt] \hline
\rule{0pt}{1ex}
BP3 & $\quad 2472 \quad $ & $\quad 27 \quad $ & $ \quad 0.013 \quad $ & $ \quad 1 \quad $ & $ \quad 0.5 \quad $ & $ \quad -0.5 \quad $
\\[5pt] \hline
\rule{0pt}{1ex}
BP1' & $\quad 1479 \quad $ & $\quad 42 \quad $ & $ \quad 0.058 \quad $ & $ \quad 0.9 \quad $ & $ \quad 9.5 \quad $ & $ \quad -6.5 \quad $
\\[5pt] \hline
\rule{0pt}{1ex}
BP2' & $\quad 1980 \quad $ & $\quad 97 \quad $ & $ \quad 0.042 \quad $ & $ \quad 0.9 \quad $ & $ \quad 5.5 \quad $ & $ \quad -2.5\quad $
\\[5pt] \hline
\rule{0pt}{1ex}
BP3' & $\quad 2480 \quad $ & $\quad 183 \quad $ & $ \quad 0.033 \quad $ & $ \quad 0.9 \quad $ & $ \quad 4\quad $ & $ \quad -1 \quad $
\\[5pt] \hline
\end{tabular}
\caption{\small Selected benchmark points (BPs) of dynamically generated vector resonances. The mass, $M_V$, width, $\Gamma_V$, coupling to gauge bosons, $g_V(M_V^2)$, and relevant chiral parameters, $a$, $a_4$ and $a_5$, are given for each of them. $b$ is fixed to $b=a^2$. This table is generated using the FORTRAN code that implements the EChL+IAM framework, borrowed from the authors of Refs.~\cite{Espriu:2012ih,Espriu:2013fia,Espriu:2014jya}.}\label{BPtable}
\end{center}
\end{table}
\begin{figure}[th]
\null%
\hfill\includegraphics[width=.33\textwidth]{FIGS.DIR/contourM.pdf}%
\hfill\includegraphics[width=.33\textwidth]{FIGS.DIR/contourW.pdf}%
\hfill\includegraphics[width=.33\textwidth]{FIGS.DIR/contourgv.pdf}%
\hfill\null
\caption{\small Selected benchmark points, with $M_V$, $\Gamma_V$ and $g_V(M_V^2)$. From the IAM $IJ=11$ partial wave. Only the extremes $a=0.9$ and $a=1$ are used for the main analysis.}\label{BPfig}
\end{figure}
For each BP, we generate two runs: one with $W^+Zjj$ as final state, and another one including the leptonic decays $W^+\to l^+\nu$, $Z\to l^+l^-$. In all cases, the following cuts are set on the two final-state jets: $2<\lvert\eta_{j_1,j_2}\rvert<5$, $\eta_{j_1}\cdot\eta_{j_2}<0$, $p_T^{j_1,j_2}>20\,{\rm GeV}$ and $M_{jj}>500\,{\rm GeV}$. For the $W^+Zjj$ final-state run, an additional cut $\lvert\eta_{W,Z}\rvert < 2$ is used. For the leptonic decay run, we set the additional cuts $M_Z-10\,{\rm GeV} < M_{\ell^+_Z \ell^-_Z} < M_Z+10\,{\rm GeV}$, $M^T_{WZ}\equiv M^T_{\ell\ell\ell\nu}>500\,{\rm GeV}$, $\slashed{p}_T>75\,{\rm GeV}$ and $p_T^\ell>100\,{\rm GeV}$. On top of the BSM signals, we have computed two SM backgrounds: the pure SM-EW one (shown in Fig.~\ref{ProdSMfig}), $q_1q_2\to q_3q_4W^+Z$ scattering at order $\mathcal{O}(\alpha^2)$, and the mixed SM-QCDEW one, at order $\mathcal{O}(\alpha\alpha_S)$.
\begin{figure}[th]
\null%
\hfill\includegraphics[width=.49\textwidth]{FIGS.DIR/pp_WZjj_ptj1_sm.pdf}%
\hfill\includegraphics[width=.49\textwidth]{FIGS.DIR/pp_WZjj_invmass_sm.pdf}%
\hfill\null
\caption{\small Pure SM-EW background. Left, transverse momentum of the most energetic jet. Right, invariant mass of $W^+Z$. Cuts: $\lvert\eta_{j_1,j_2}\rvert<5$, $\eta_{j_1}\cdot\eta_{j_2}<0$, $\lvert\eta_{W,Z}\rvert<2$. Polarizations are separated. Note the suppression of the longitudinal polarization in the SM.}\label{ProdSMfig}
\end{figure}
\begin{figure}[th]
\null%
\hfill\includegraphics[width=.49\textwidth]{FIGS.DIR/t11_comparison_BP1pp.pdf}%
\hfill\includegraphics[width=.49\textwidth]{FIGS.DIR/t11_comparison_const_BP1pp.pdf}%
\hfill\null
\caption{\small Absolute value of the isovector-vector partial wave, $a_{11}$ ($IJ=11$), for BP1' (see table~\ref{BPtable}). ${\rm EChL}_{\rm tree}^{(2)}$ and ${\rm EChL}_{\rm loop}^{(2+4)}$, perturbative (non-unitarized) LO and NLO computations with the EChL; IAM, unitarized partial wave; ${\rm EChL}_{\rm tree}^{(2)}+\mathcal{L}_V$, perturbative LO computation in the EChL + Proca Lagrangian (constant $g_V$); IAM-MC, the MadGraph~5 model developed in this work.%
}\label{t11compar}
\end{figure}
\begin{figure}[th]
\null%
\hfill\includegraphics[width=.49\textwidth]{FIGS.DIR/MWZa9.pdf}%
\hfill\includegraphics[width=.49\textwidth]{FIGS.DIR/MWZa9lep.pdf}%
\hfill\null
\caption{\small BP1' (see table~\ref{BPtable}). $W^+Zjj$ in final state (left) vs. leptonic decay (right).}\label{BPa9}
\end{figure}
Finally, we make a prediction for the number of events expected at $14\,{\rm TeV}$ for different LHC luminosities (see table~\ref{tablasigmaslep} and Fig.~\ref{figevents}). The resulting cross sections are shown in Fig.~\ref{BPa9}, and the relevant partial wave $a_{11}$ is shown in Fig.~\ref{t11compar}. For the statistical significance, we use the standard expression $\sigma^{\rm stat}_\ell=S_\ell/\sqrt{B_\ell}$, with $S_\ell=N^{\rm IAM-MC} - N^{\rm SM}$, $B_\ell=N^{\rm SM}$, $N^i=N(pp\to l_1^+l_1^-l_2^+\slashed{p}_T jj)^i$, and the following ranges of $M_{lll\nu}^T$:
\begin{align}
&{\rm BP1:}~1325-1450~{\rm GeV}\,, && {\rm BP2:}~1875-2025~{\rm GeV}\,, && {\rm BP3:}~2300-2425~{\rm GeV}\,,\nonumber \\
&{\rm BP1':}~1250-1475~{\rm GeV}\,, && {\rm BP2':}~1675-2000~{\rm GeV}\,, &&{\rm BP3':}~2050-2475~{\rm GeV}\,.
\label{MTintervals}
\end{align}
The cases with $a=1$ have smaller significances, and only the lightest resonance, $M_V=1.5\,{\rm TeV}$ (BP1), could be seen at $\sim 3\sigma$, and only at the highest luminosity ($3000\,{\rm fb}^{-1}$). Note that cells without data mean a \emph{lack of statistics}. The cases with heavier $M_V\sim 2.5\,{\rm TeV}$ seem difficult to observe, due to the poor statistics of the leptonic channels: only BP3' obtains a significance $>2\sigma$ (for $3000\,{\rm fb}^{-1}$). Hence, semileptonic and fully hadronic channels look necessary to improve these significances. On the other hand, the largest significances are obtained for $a=0.9$ and the lightest resonances, corresponding to our BP1'. Significances of $\sim 2.8\sigma$, $5.1\sigma$ and $8.9\sigma$ are predicted for LHC luminosities $\mathcal{L}=300\,{\rm fb}^{-1}$, $1000\,{\rm fb}^{-1}$ and $3000\,{\rm fb}^{-1}$, respectively.
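As a quick cross-check of these numbers, consider BP1' at $\mathcal{L}=3000\,{\rm fb}^{-1}$ in table~\ref{tablasigmaslep}: with the rounded yields $N^{\rm IAM-MC}_\ell=53$ and $N^{\rm SM}_\ell=17$ one gets
\begin{equation*}
\sigma^{\rm stat}_\ell=\frac{53-17}{\sqrt{17}}\simeq 8.7\,,
\end{equation*}
compatible with the quoted $8.9$ once the rounding of the event yields is taken into account.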
\begin{table}[th]
\begin{center}
\begin{tabular}{cc|c|c|c|c|c|c|}
\hhline{~~|-|-|-|-|-|-|}
& & \cellcolor{gray! 50}BP1 & \cellcolor{gray! 50}BP2 &\cellcolor{gray! 50} BP3 & \cellcolor{gray! 50}BP1' & \cellcolor{gray! 50}BP2' & \cellcolor{gray! 50}BP3' \\
\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{sideways}$\mathcal{L}=300\,{\rm fb}^{-1}$\end{sideways}}}&\cellcolor{gray! 15} ${\rm N}^{\rm IAM-MC}_{\ell}$ & 2 & 0.5 & 0.1 & 5 & 2 & 0.7 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{} & \cellcolor{gray! 15}${\rm N}^{\rm SM}_{\ell}$ & 1 & 0.4 & 0.1 & 2 & 0.6 & 0.3 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{} &\cellcolor{gray! 15} $\sigma^{\rm stat}_{\ell}$ & 0.9 & - &- & 2.8 & 1.4 &- \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}\\[-4ex]\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{sideways}$\mathcal{L}=1000\,{\rm fb}^{-1}$\end{sideways}}} &\cellcolor{gray! 15}${\rm N}^{\rm IAM-MC}_{\ell}$ &7 & 2 & 0.4 & 18 & 5 & 2 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{}&\cellcolor{gray! 15} ${\rm N}^{\rm SM}_{\ell}$ & 4 & 1 & 0.3 & 6 & 2 & 1 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{}& \cellcolor{gray! 15}$\sigma^{\rm stat}_{\ell}$ & 1.6 & 0.3 & - & 5.1 & 2.5 & 1.4 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}\\[-4ex]\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{sideways}$\mathcal{L}=3000\,{\rm fb}^{-1}$\end{sideways}}} &\cellcolor{gray! 15} ${\rm N}^{\rm IAM-MC}_{\ell}$ & 22 & 5 & 1 & 53 & 16 & 7 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{}& \cellcolor{gray! 15}${\rm N}^{\rm SM}_{\ell}$ & 12 & 4 & 1 & 17 & 6 & 3 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}
\multicolumn{1}{c|}{}& \cellcolor{gray! 15}$\sigma^{\rm stat}_{\ell}$ & 2.7& 0.6 & 0.3 & 8.9 & 4.4 & 2.4 \\[0.5ex]
\hhline{~|-|-|-|-|-|-|-|}
\end{tabular}\caption{Predicted number of $pp\to l_1^+l_1^-l_2^+\nu jj$ events of the IAM-MC, ${\rm N}^{\rm IAM-MC}_l$, and of the SM background (EW+QCDEW), ${\rm N}^{\rm SM}_l$, at $14\,{\rm TeV}$, for different LHC luminosities: $\mathcal{L}=300\,{\rm fb}^{-1}$, $\mathcal{L}=1000\,{\rm fb}^{-1}$ and $\mathcal{L}=3000 ~{\rm fb}^{-1}$. We also present the corresponding statistical significances, $\sigma^{\rm stat}_\ell$.}\label{tablasigmaslep}
\end{center}
\end{table}
\begin{figure}[th]
\begin{center}
\includegraphics[width=.49\textwidth]{FIGS.DIR/PEvWZ.pdf}
\includegraphics[width=.48\textwidth]{FIGS.DIR/PsigmaWZ.pdf}
\caption{Predictions for the number of events, ${\rm N}^{\rm IAM-MC}_{WZ}$ (left panel), and the statistical significance, $\sigma^{\rm stat}_{WZ}$ (right panel), as a function of the parameter $a$ for $\mathcal{L}=3000~{\rm fb}^{-1}$. The marked points correspond to our selected BPs in Fig.~\ref{BPfig}. The two lines for each mass are computed by summing events within $\pm 0.5\,\Gamma_V $ and $\pm 2\,\Gamma_V$, respectively.}\label{figevents}
\end{center}
\end{figure}
\section{Conclusions}
In this work, we have developed a MadGraph~5 model for strongly interacting Vector Boson Scattering in the isovector channel ($IJ=11$), by means of the Inverse Amplitude Method (IAM) and an effective Proca Lagrangian. The process $pp\to W^+Zjj$ via VBS, with a fully leptonic decay $W^+\to l^+\nu$, $Z\to l^+l^-$, has been studied.
We selected 6 benchmark points, with $M_V=1.5$, $2$, $2.5\,{\rm TeV}$ and $a=0.9$, $1$, chosen to provide a first scan of the parameter space of the chiral parameters $a\in (0.9,1)$, $b=a^2$ and $a_4,\,a_5\in (10^{-4},10^{-3})$. For the sake of completeness, we have included in Fig.~\ref{BPfig} additional intermediate points to show the dependence of $M_V$, $\Gamma_V$ and $g_V(M_V^2)$ on the chiral parameters. For each benchmark point, a MadGraph~5 Monte Carlo model has been developed and run, both with and without the leptonic decay. For the sake of brevity, only BP1' is reproduced here; the full analysis can be found in Ref.~\cite{Delgado:2017cls}.
Finally, we have included a prediction of the number of events for several LHC luminosities at $14\,{\rm TeV}$ (see table~\ref{tablasigmaslep}). As discussed in section~\ref{secresults}, semileptonic and hadronic studies seem necessary in order to improve the sensitivity of the LHC Run-II to the BSM effects in this process. Besides the leptonic channels considered here, a discussion of the semileptonic and hadronic channels can be found in~\cite{Delgado:2017cls}.
\section{Acknowledgements}
We thank P.~Arnan for providing us with the FORTRAN code to localize the IAM resonances and for his help at the early stages of this work. A.D. thanks F.J. Llanes-Estrada for previous collaboration. R.L.D. and J.J.S.C. thank Corinne Goy for useful discussions. This work is supported by the European Union through the ITN ELUSIVES H2020-MSCA-ITN-2015//674896, the RISE INVISIBLESPLUS H2020-MSCA-RISE-2015//690575 and the STSM Grant from COST Action CA16108, by the Spanish MINECO through the projects FPA2013-46570-C2-1-P, FPA2014-53375-C2-1-P, FPA2016-75654-C2-1-P, FPA2016-76005-C2-1-P, FPA2016-78645-P(MINECO/ FEDER, EU), by the Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042) and by the Spanish MINECO's ``Centro de Excelencia Severo Ochoa'' Programme under grants SEV-2012-0249 and SEV-2016-0597 and the ``Mar\'ia de Maeztu'' Programme under grant MDM-2014-0369. R.L.D is supported by the MINECO project FIS2013-41716-P, FPA2016-75654-C2-1-P and the ``Ram\'on Areces'' Foundation. We also acknowledge 8000 hours of computer time granted at a small departmental cluster at the UCM.
\section{Introduction}
Because it is supersonic, magnetized and develops in a multiphase
medium, interstellar turbulence is expected to differ from turbulence
in laboratory flow experiments or in state-of-the-art numerical
simulations, \eg\ \cite{chanal2000} for recent experiments in gaseous
helium and \cite{mininni2006a} or \cite{alexakis2007} for MHD
simulations. Nonetheless, it may carry some universal properties of
turbulence, such as space-time intermittency \citep[for a review
see][]{anselmet2001}. Of particular interest to star formation is
the behavior of turbulence dissipation. In a series of papers, we
have shown that the \twCO\jone\ line--centroid-velocity increments
(CVI) in translucent molecular gas have non-Gaussian statistics more
pronounced at small scale. The extreme CVI (E-CVI) responsible for the
non-Gaussian tails of their probability density functions (\textit{pdf}) form
elongated coherent structures over 0.8~pc. These structures have been
tentatively identified with regions of intense
velocity-shear~\footnote{In the following, ``shear'' is used instead
of gradient to \answer{emphasize} that the observations provide
cross-derivatives of the line--centroid-velocities (CV) (\ie\ the
displacement, in the plane-of-the-sky (POS), is perpendicular to the
projection axis of the velocities).}
\answer{and enhanced local dissipation rate, based on} their thermal,
dynamical, and chemical properties. These pure velocity-structures do
not follow those of dense gas, they tend to be parallel to the
magnetic field orientation, they are associated with gas warmer
($\tkin>25$~K) than the bulk of the gas \citep[][ hereafter Paper~I
and Paper~II]{hilyblant2007ii,hilyblant2008cvi}, and they bear
chemical signatures ascribed to a warm chemistry not driven by UV
photons \citep{falgarone2006hcop, godard2009}. Last, in one such
E-CVI structure, Plateau de Bure Interferometer (PdBI) observations
disclose several sub-structures of intense velocity-shear at scales as
small as 6~\answer{milli-parsec (mpc)} \citep[][ hereafter
FPH09]{falgarone2009pdbi}. This suggests that turbulent molecular
clouds harbour coherent velocity-shear structures from 6~mpc to
800~mpc.
We have increased the dynamic range by a factor of 8 compared with
Paper~II, by mapping an area of the Polaris Flare four times larger
at twice the spatial resolution. The aim is to further explore
the range of scales over which the spatial coherence of these
intense-shear structures is found. These are the first large scale
observations performed at high-angular resolution and high spectral
resolution in translucent gas. The observations and the results are
described in Sections 2 and 3. We briefly discuss possible
interpretations and the nature of these structures in light of recent
numerical simulations (Section 4).
\section{Observations}
\begin{figure*}[t]
\centering
\includegraphics[height=0.4\hsize,angle=-90]{tdv.eps}\hskip0.05\hsize
\includegraphics[width=0.45\hsize,angle=-90]{pl-lv.eps}
\caption{\textit{Left:} Integrated intensity map (\kkms, \tant
scale) smoothed to 15\arcsec. The white dashed boxes show the
areas used to build the average \ensuremath{p-v}\ cuts (right panels). The blue
rectangle (dashed line) delineates the previous field of
Paper~I. \textit{Right:} \ensuremath{p-v}\ cuts in the 3 boxes shown in left
panel (\tant\ scale, distance measured in \arcsec\ with arbitrary
origin). The line CV are shown in red. }
\label{fig:tdv}
\end{figure*}
Observations of the \twCO\jtwo\ line were carried out at the IRAM-30m
telescope with the 1.3mm multibeam heterodyne receiver HERA
\citep{schuster2004} during August 2007 and January 2008, under good
weather conditions. The map covers 0.3~deg$^2$ and consists of 9
submaps of $10\arcmin\times10\arcmin$, each of which was observed in
two orthogonal scanning directions to minimize striping due to gain or
atmosphere variations. The final map encompasses the fields
successively observed by \cite{falgarone1998kp} and in Paper~I shown
as a dashed box in Fig.\ref{fig:tdv} and is $\sim 2\times 1$~pc at the
adopted distance of the source (150~pc). Data were acquired in the
powerful on-the-fly (OTF) frequency-switched mode ($4\arcsec\, s^{-1}$
scanning velocity, $1s$ time sampling, $4\arcsec$ spatial sampling in
both directions, 13.8~MHz frequency throw), using the VESPA
autocorrelator facility as backends. A total of 1.5\tdix{6}
\answer{raw} spectra was recorded in 80~hours of telescope time with a
spectral resolution of 0.05~\kms. Data were reduced with the new
\texttt{CLASS90} software optimized for OTF
\citep{IRAM_report_2005-1}. The instrumental response was canceled by
subtracting a linear baseline from each original spectrum; the spectra were
then convolved by a Gaussian kernel ($1/3\,HPBW$) and gridded on a
regular grid with $0.5 HPBW$ sampling. The final data cube was then
smoothed to 15\arcsec\ and 0.1~\kms\ resolutions to improve the
signal-to-noise ratio. The typical rms in each final pixel is
$1\sigma=0.5$~K in 0.1~\kms\ channels.
\section{Results}
\subsection{Space and velocity maps}
The \twCO\jtwo\ integrated emission is displayed in Fig.~\ref{fig:tdv}
(left panel) with three position-velocity (\ensuremath{p-v}) diagrams (right
panels) made along the NE-SW boxes shown. A sharp variation of
velocity, from $\sim-3$~\kms\ to $\sim-5$~\kms\ (from NE to SW) occurs
over a layer thinner than a few 100\arcsec\ in
projection. Fig.~\ref{fig:rgb}, which displays the integrated emission in
two adjacent velocity intervals, at high (HV) [-3.5, -0.5]~\kms\ and
low velocity (LV) [-6.5, -3.5]~\kms, stresses a remarkable
characteristic of that field: the edges of the LV and HV components
follow each other closely in projection over more than $\sim1$~pc. It
is then most unlikely that they be unrelated pieces of gas along the
line of sight: they have to be in contact.
The second remarkable characteristic of that field is the following:
while the emissions in the LV and HV components are extended, their
spatial overlap (the pink areas of Fig.\ref{fig:rgb}, also exemplified
in the \ensuremath{p-v}\ diagrams) is limited to narrow filamentary regions in
projection. It is most visible between $\delta=87\deg45'$ and
$88\deg$ (thus over $\sim 1$~pc) where it does not split into several
substructures. Now, if the LV and HV components were parsec-scale
volumes, their interface would appear thin over $\sim 1$~pc only if
viewed edge-on (within $\pm 5$\deg\ for a projected size less than one
tenth of its real size), a case that we rule out on statistical
grounds. We therefore infer that the \twCO\jtwo\ HV and LV components
are {\it layers} rather than {\it volumes} and that their interface is
1-dimensional rather than 2-dimensional. This ensures that under any
viewing angle the two extended velocity components present a narrow
interface.
The slope of the variations of the line CV drawn on the \ensuremath{p-v}\ diagrams
provides a measurement of the velocity shear between the two
components. On each cut shown, there is an average shear of $\approx
13$~\kmspc\ (a velocity variation of 2~\kms\ over $\approx
0.15$~\pc). Steeper slopes are also visible and locally provide higher
shears up to 30~\kmspc\ (1~\kms\ over 0.03~\pc) in the middle cut.
These values are more than one order of magnitude larger (within the
uncertainties due to projections) than the average value of
1~\kmspc\ estimated at the parsec scale in molecular clouds
\citep{goldsmith1985}. The velocity field therefore significantly
departs, at small scales, from predictions based on the generally
adopted scaling laws in molecular clouds. If velocity fluctuations
over a scale $l$ increase as $\delta v_l \propto l^{1/2}$,
velocity-shears should increase by no more than $33^{1/2}\approx 5.7$
between 1~\pc\ and 0.03~\pc.
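Explicitly, for the $\delta v_l \propto l^{1/2}$ scaling the
velocity-shear over a scale $l$ behaves as
\[
\frac{\delta v_l}{l} \propto l^{-1/2}\,, \qquad \mathrm{so~that} \qquad
\frac{\left.\delta v_{l}/l\right|_{0.03\,\mathrm{pc}}}
     {\left.\delta v_{l}/l\right|_{1\,\mathrm{pc}}}
= \left(\frac{1\,\mathrm{pc}}{0.03\,\mathrm{pc}}\right)^{1/2}
= 33^{1/2} \approx 5.7\,,
\]
to be compared with the factor $\sim 30$ measured between these two
scales.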
\begin{figure}
\centering
\includegraphics[width=0.7\hsize,angle=0]{rb.ps}
\caption{\twCO\jtwo\ integrated intensity in two adjacent velocity
ranges: [-6.5:-3.5]~\kms\ in blue and [-3.5:-0.5]~\kms\ in red.
At a distance of 150~pc, 20' correspond to 0.9~pc.}
\label{fig:rgb}
\end{figure}
A closer inspection of the \ensuremath{p-v}\ diagrams shows that \textit{(i)} the
sharpest variations of the line CV occur between the appearance of the
two line-wings (above $-2.0$~\kms\ for the HV wing and below
$-5.5$~\kms\ for the LV wing), \textit{(ii)} the separation between
the LV and HV wings steepens from top (0.1~pc) to bottom (0.03~pc),
and \textit{(iii)} the layer of largest velocity shear coincides with
the lane of enhanced \twCO\jtwo\ emission visible in
Fig.~\ref{fig:tdv} at the center of each cut.
\subsection{Distribution of E-CVI}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{fig4.eps}
\caption{\textit{Left:} \answer{Map of the CVI (top panel, color
scale in \kms) computed for a lag of 4 pixels or 60\arcsec, and
normalized \textit{pdf}\ (circles, bottom panel) compared with a
Gaussian distribution ($\sigma=0.19$~\kms, red)}. The red
crosses (top panel) indicate the position of the dense cores from
\cite{heithausen2002}. The rectangle delineates the PdBI field of
\cite{falgarone2009pdbi}. \textit{Middle:} E-CVI (blue contours)
overplotted on the integrated intensity of
Fig.~\ref{fig:tdv}. \textit{Right:} E-CVI (blue contours)
overplotted on the intensity integrated in the red wing interval
[-2:-0.5]~\kms\ \citep{hilyblant2007ii}.}
\label{fig:cvi}
\end{figure*}
Following \cite{lis1996,pety2003} and Paper~II, we have built the
\textit{pdf}\ \answer{(see Fig.~\ref{fig:cvi})} of \twCO\jtwo\ CVI over the
whole field. The statistics are significantly improved compared to
previous works. The probability density in the most extreme bins
reaches \dix{-5}. Fig.~\ref{fig:cvi} displays the locus of the E-CVI.
It is remarkable that the thin structure delineated by the
\twCO\jtwo\ E-CVI in the SE area (blue box of Fig.~\ref{fig:tdv}) is
so similar to that obtained with the same method applied to a much
smaller sample observed in a different transition,
\twCO\jone\ (Paper~II). The E-CVI structure is the high-angular
resolution view of the structure obtained with the same statistical
analysis performed on KOSMA maps of the field (HPBW=120\arcsec) and
shown in Fig.~12 of Paper~II.
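For reference, the CVI statistics are built, as in Paper~II, from the
line centroid velocities and their increments,
\[
C(\vec{r}) = \frac{\int T(\vec{r},v)\, v \,\mathrm{d}v}
                  {\int T(\vec{r},v)\,\mathrm{d}v}\,,
\qquad
\delta C_l(\vec{r}) = C(\vec{r}+\vec{l}) - C(\vec{r})\,,
\]
computed here for a lag $l=60\arcsec$ (4 pixels).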
The E-CVI structure does not follow the peaks of the \twCO\jtwo\ line
integrated emission. Instead, it coincides, over the $\sim 1$~pc
region discussed in Section 3.1, with the narrow interface of the HV
and LV components \ie\ the intense velocity-shear (Fig.~\ref{fig:cvi},
center), and follows in detail the thin elongated structure in the
extreme velocity range $[-2.0,-0.5]$~\kms, that of the red line\-wings
(Fig.~\ref{fig:cvi}, right). This association between CO line\-wings
and intense velocity-shears extends the findings of Paper~II to
higher-resolution and over a larger scale.
These properties of the locus of E-CVI unambiguously support, for the
first time, the proposition of Paper~II that the E-CVI trace intense
velocity-shears in turbulent gas and that the extreme variations of
the line CV are driven by the appearance/disappearance of linewings
over small scales. It is also the first observational proof of the
early conjecture of \cite{falgarone1990} that the broad CO linewings
trace the intermittency of turbulence in molecular clouds. These
results clarify, at least in the case of translucent clouds, the
controversy on the origin of small-scale CV variations expected to be
due primarily to density fluctuations, line-of-sight projections, and
radiative transfer \citep{esquivel2007,miville2003b,levrier2004}.
Last, this E-CVI structure is coherent over $\sim2$~pc while its
thickness is as small as 0.03~pc. Its aspect ratio is therefore $\sim
70$. Its length seems to be limited by the size of the map (see the
longer structure computed from the KOSMA data in Paper~II). We also
note that the E-CVI structure splits into multiple branches in several
areas, in particular around offsets $(-1000\arcsec, 800\arcsec)$ and
$(-700\arcsec, 500\arcsec)$.
\section{Discussion }
\subsection{What is the nature of the interface?}
The interface is primarily an intense velocity-shear. The
\ensuremath{p-v}\ diagrams show that this velocity shear corresponds to a
discontinuity in the CO flow: the HV (LV) component is not detected
above $\sim0.5$~K in the SW (NE) of the shear. The flows undetected
in the \twCO\jtwo\ line are either CO-poor and/or too dilute to excite
the transition. In this framework, we observe the outcome of a strain
developing in a gas undetected in the \twCO\jtwo\ line: the gas we
detect (denser and/or richer in CO) is generated in the 1-dimensional
intense-shear interface and is spread in the POS by motions whose
velocity cannot be measured. This scenario naturally produces the two
components of the large velocity-shear, with sharp edges closely
following each other over $\sim1$~pc and little overlap in projection
{\it under any viewing angle}.
The intense-shear structure may however belong to a shock of unknown
velocity in the POS. We have searched for SiO\jtwo\ line emission as a
chemical shock signature within this structure and found no emission
above a significantly low threshold of $3 \sigma = 5$~mK that corresponds,
in the optically thin case, to a tiny SiO column density of about
\dix{10}~cm$^{-2}$. Hence, there is no chemical signature of C-shocks
faster than 20~\kms\ detected at the scale of 0.03~pc
\citep{gusdorf2008}. But we cannot rule out a weak C-shock component
($v_S\leq 2$ \kms) in the POS that would produce the density
enhancement and/or the CO enrichment of the gas required to explain
the non-detection in the \twCO\jtwo\ line of the gas before it enters
the shear layer. This would be consistent with the sub- to
trans-Alfv\'enic nature of the turbulent motions in that field (see
Paper II). The solenoidal contribution to the interface (2~\kms)
would exceed the possible compressive one ($\leq 2$ \kms), in
agreement with \cite{federrath2009} who find that our observed
statistical properties of turbulence in the Polaris Flare are in very
good agreement with solenoidal forcing at large scale. \answer{This
result is reminiscent of the finding of \cite{mininni2006a} that the
stronger the shear at large-scale, the more intense the
intermittency of velocity increments at small-scale}.
\subsection{A plausible link with the dense cores}
The above findings are similar to those inferred from PdBI
observations (FPH09) of the small (1'~by~2') field shown in
Fig.\ref{fig:cvi} (left). Velocity-shears as intense as
500~\kmspc\ are detected there over distances of 6~mpc, at the edge of
CO structures of velocities differing by several \kms. The PdBI field
is close to two low-mass dense cores \citep{heithausen2002},
interestingly located at the tip of the E-CVI structure
(Fig.~\ref{fig:cvi}, left).
The viscous dissipation rate of turbulence being proportional to the
square of the rate-of-strain \citep{landau_fluid}, it is tempting to
interpret the large increase of the velocity-shear, from
30~\kmspc\ (Fig.~\ref{fig:tdv}) to 500~\kmspc\ in the PdBI field, as
due to the development of an instability in the large-scale shear.
The growth of the instability splits the shear into small-scale and
more intense shears, thus increasing the local dissipation rate of
turbulence by two-orders of magnitude.
Clustering of small-scale structures of high strain-rate magnitude
(and therefore large dissipation) into structures of inertial
extension have been found in numerical simulations of incompressible
HD \citep{moisy2004} and MHD turbulence \citep{mininni2006b}. One
then may speculate that these bursts of dissipation eventually lead to
the formation of low mass dense cores largely devoid of turbulent
energy, after an evolution of the gas that remains to be understood.
\section{Conclusions}
We have detected a 1-dimensional structure of intense velocity-shear
($\sim$ 15 to 30 \kmspc), coherent over $\sim$ 1~pc with a thickness
of only 0.03 to 0.15~pc. This remarkable structure follows the
distribution of extreme \twCO\jtwo\ line-wings and coincides partly
with the locus of E-CVIs in the field. These findings support the
previous claim we made that, in translucent molecular clouds, E-CVIs
are tracers of extreme velocity-shears in interstellar turbulence, as
do the broad CO linewings.
This shear structure is proposed to be the source of {\it layers} of
CO-rich dense gas in a CO-poor (and/or dilute) gas component
experiencing the strain and not seen in \twCO\jtwo. The shear is
likely to be the site of enhanced turbulent dissipation. We cannot
rule out an undetected shock component in the POS.
These results, in conjunction with the PdBI results of FPH09, stress
the coupling of small and large scales in interstellar turbulence,
over a dynamic range never reached before, \ie\ from 6~mpc to more
than 1~pc. They support a framework in which trans-Alfv\'enic (but
supersonic) turbulence dissipates primarily in intense-shear layers
connecting the large-scales to mpc scales (or below).
We speculate that turbulence dissipation has been proceeding for a
longer time at the southern tip of the E-CVI structure than in the
northern part, leading to the formation of the two dense cores.
\begin{acknowledgements}
The authors thank M.~Heyer for his clarifying and perceptive report
on the original version of this Letter. We are grateful to C.~Thum
and H.~Wiesemeyer from IRAM for their indefectible support to the
HERA facility. The authors acknowledge the hospitality of the Kavli
Institute for Theoretical Physics (Grant No. PHY05-51164).
\end{acknowledgements}
\bibliographystyle{aa}
\input{shear.bbl}
\end{document}
\section{Introduction}
Although CO is the most abundant interstellar molecule after H$_2$, its
corresponding ion, CO$^+$, is expected to have very low abundance
in molecular clouds. The reason is that CO$^+$
is quickly converted into HCO$^+$ by reactions with H$_2$.
Only in the hot layers of photon-dominated regions (PDRs) where
a significant fraction of hydrogen is still in atomic form, the
CO$^+$ abundance becomes significant. Chemical models
(\cite{ste95}) predict that for the PDRs associated with massive stars
(n $\sim$ 10$^6$ cm$^{-3}$, G$_\circ$ $\sim$ 2 10$^5$ in units of Habing
field) the CO$^+$/HCO$^+$
abundance ratio is $\sim$0.05 at a visual extinction lower than 1.5 mag,
but decreases by more than 2 orders of magnitude when the extinction
increases above 3-5 mag. Based on its chemical behavior,
they proposed the CO$^+$/HCO$^+$ ratio as a tracer of
the HI/H$_2$ transition layer in PDRs.
CO$^+$ was tentatively detected for the first time by \cite{eri81}
toward OMC-1. Later, \cite{lat93} detected CO$^+$ in the well-known
PDR M17SW and the planetary nebula NGC 7027. More recently, CO$^+$ has
also been detected in the Orion Bar (\cite{sto95}).
But so far, all the detections of CO$^+$ have
been made toward the interfaces between molecular clouds
and the HII regions around massive O stars. \cite{sto95} failed to
detect CO$^+$ toward the reflection nebula S140. They propose that
the large densities and intense UV fields associated with massive
O stars are required to form CO$^+$ column densities
$\ge$ 10$^{11}$ cm$^{-2}$.
We present the detection of CO$^+$ toward the reflection nebula
NGC 7023, which is illuminated by a Be star.
Although in this region the incident UV field is
$G_\circ$ $\sim$ 10$^3$ (in units of the Habing field)
and the density is n $\sim$ 10$^5$ cm$^{-3}$, we have observed a
CO$^+$ column density of $\sim$ 3 10$^{11}$ cm$^{-2}$.
Furthermore, the spatial-velocity
distribution of the CO$^+$ emission shows that CO$^+$ is only
located within the HI/H$_2$ transition layer of this PDR.
\section{Observations and Results}
In Plate L1, we show the integrated intensity map of
the HCO$^+$ 1$\rightarrow$0
(thin dark contours) and the HI column density map (grey scale)
of the reflection nebula NGC 7023 (\cite{fue93} and \cite{fue96}).
The illuminating star, HD 200775,
is located in a cavity of the parent cloud
delimited by dense walls (\cite{fue93} and references therein).
Dense PDRs are found on the surfaces of these walls.
In particular, an intense HI clump appears
$\approx 40''$ NW of the star position, at the edge of the bulk of
the molecular emission (see Plate L1).
Interferometric observations of the J= 1$\rightarrow$0 line of HCO$^+$
showed the existence of several high
density HCO$^+$ filaments within this clump (\cite{fue96}).
The two most intense filaments are also shown in Plate L1.
We have searched for CO$^+$ toward the peak position of these
filaments. The coordinates of this position are given in Table 1 and
hereafter, we will refer to it as the ``PDR peak".
Beyond the ``PDR peak", our HCO$^+$ single-dish data show the existence
of several clumps immersed in the molecular cloud.
We have also searched for CO$^+$
toward the molecular clump closest to the ``PDR peak" (Plate L1).
The coordinates of this position are also given in Table 1 and
hereafter, we will refer to it as the ``Molecular Peak''.
CO$^+$ has a $^2$$\Sigma$
ground electronic state in which each rotational level is split in
two fine structure levels with J=N$\pm$1/2. The N=1$\rightarrow$0
rotational line
is heavily obscured by the O$_2$ line at 118 GHz and cannot be observed
from ground-based telescopes. The most intense transitions of the
N=2$\rightarrow$1 rotational spectrum are N=2$\rightarrow$1
J=5/2$\rightarrow$3/2 at 236062.553 MHz and N=2$\rightarrow$1
J=3/2$\rightarrow$1/2 at 235789.641 MHz. In the optically thin limit,
the intensity ratio I(235.789)/I(236.062) is 0.55 (\cite{sas81}).
Both line frequencies were covered by the receiver band.
Unfortunate, the most intense line is blended with
the 5$_{-2}$$\rightarrow$4$_{-2}$ and 5$_{2}$$\rightarrow$4$_{2}$
$^{13}$CH$_3$OH E lines (see \cite{ble84}). In order to determine an
upper limit to the $^{13}$CH$_3$OH emission we have observed
the 5$_1$$\rightarrow$4$_1$ methanol line toward
the PDR peak. To determine accurate CO$^+$ column densities,
it is necessary to have accurate estimates of the hydrogen density. For this
aim, maps of about 20''$\times$20'' with a spacing
of 5'' were carried out around the studied positions
in the CS J=2$\rightarrow$1,
3$\rightarrow$2 and 5$\rightarrow$4 lines. Furthermore, the H$^{13}$CO$^+$
J=1$\rightarrow$0 (toward both positions) and 3$\rightarrow$2 (only toward
the molecular peak) have also been observed.
The observations were carried out in 1995 December and 1996 May
using the 30-m telescope. The observational procedure was position
switching with a fixed reference 30' East from the star.
Pointing was checked every two hours using strong continuum sources
(NGC 7027, K3-50A, NGC 7538), and the rms of pointing errors was less
than 2''. The forward and main beam efficiencies were 0.92 and 0.75
at 90 GHz, 0.90 and 0.52 at 145 GHz, and 0.86 and 0.37 at 236-260 GHz
respectively.
The temperature scale is main beam temperature. The HPBW of the
telescope was 27'' at 90 GHz, 16'' at 145 GHz and 10'' at 236 GHz.
Typical system temperatures (in T$_{MB}$) were 300 K at 90 GHz,
600 K at 145 GHz, 1300 K at 236 GHz and 3400 K at 260 GHz. All the lines
have been observed with a frequency resolution of 80 kHz ($\sim$ 0.1
km s$^{-1}$ at 236 GHz).
\placetable{tbl-1}
\subsection{PDR peak}
The CO$^+$ N=2$\rightarrow$1 5/2$\rightarrow$3/2 and 3/2$\rightarrow$1/2 lines
have been detected toward the PDR peak with signal-to-noise ratios
of 10 and 7, respectively (see Plate L1). The observational parameters are
shown in Table 1. The detection of both lines
make very unlikely a possible misidentification. We are not aware
of possible line contamination for the transition at 235789.64 MHz.
The only possible line contamination comes from
the 5$_{-2}$$\rightarrow$4$_{-2}$ and 5$_{2}$$\rightarrow$4$_{2}$
$^{13}$CH$_3$OH E lines whose rest frequency is less than 0.5 MHz from
the CO$^+$ line at 236062.55 MHz (see Blake et al. 1984). Since
the observed linewidths of the two lines of CO$^+$ are the same, it
seems that the line at 236062.55 MHz is very unlikely contaminated by
$^{13}$CH$_3$OH lines. To check for possible contamination,
we have estimated an upper limit to the
emission of the $^{13}$CH$_3$OH lines from
the observed J$_K$=5$_1$$\rightarrow$4$_1$ methanol line. The
excitation conditions and the line strength for this line
are very similar to those of the contaminating $^{13}$CH$_3$OH lines
(\cite{and90}).
Assuming a linewidth of 2 kms$^{-1}$,
we have obtained a 3$\sigma$ upper limit of 0.2 K kms$^{-1}$ to the
integrated intensity emission of the methanol line (see Table 1).
For an isotopic ratio, CH$_3$OH/$^{13}$CH$_3$OH $\sim$ 40,
this would imply an upper limit of 0.005 K kms$^{-1}$ to the integrated
intensity emission of each $^{13}$CH$_3$OH line. Since
there are two $^{13}$CH$_3$OH lines blended,
these lines could contribute to our CO$^+$ detection
at 236.062 GHz with, at most, an integrated intensity of 0.01 K km s$^{-1}$.
This is only 4\% of the observed integrated intensity emission
at 236.062 GHz, and it is
within the observational errors (see Table 1).
We, therefore, conclude
that the emission detected at 236.062 and 235.789 GHz is
due to CO$^+$.
A striking result of our data is that the CO$^+$ lines have
linewidths much larger
than those of CS and H$^{13}$CO$^+$. The interferometric
HCO$^+$ filaments detected toward the PDR peak
are characterized by having
different velocities.
The four well detected filaments are centered
at radial velocities of 1.9, 2.4, 2.8 and 4 km s$^{-1}$,
and the one tentatively detected is centered at 5.8 kms$^{-1}$.
One of these filaments, 2.4 kms$^{-1}$, is
very likely embedded in the molecular cloud, but most of the
others, 2.8, 4.0 and 5.8 km s$^{-1}$, seem to be immersed in the
atomic medium. The situation
is less clear for the filament at 1.9 kms$^{-1}$ that seems to be
part of a weak and extended molecular component (\cite{fue96}). Therefore,
a gradient in the chemical composition of the HCO$^+$ filaments
is expected depending upon the local visual extinction toward the exciting
star.
CS and H$^{13}$CO$^+$ present
narrow lines centered at 2.4 kms$^{-1}$, i.e., the velocity of the
filament immersed in the bulk of the molecular cloud.
Only HCO$^+$ and CO$^+$ present emission
at the velocities of the filaments immersed in the atomic medium.
From the comparison of the
spectra of the H$^{13}$CO$^+$, HCO$^+$ and CO$^+$ lines,
it is clear that there exists a gradient
in the CO$^+$/HCO$^+$ abundance
ratio as a function of velocity, i.e., as a function of
the visual extinction from the star
(see Plate L1). To determine this gradient,
we have estimated the CO$^+$/HCO$^+$ abundance
ratio in
three different velocity intervals, 0 - 1.6 km s$^{-1}$,
1.6 - 3.2 km s$^{-1}$, and 3.2 - 6 km s$^{-1}$.
CO$^+$ column densities have been estimated
using the LTE approximation.
Assuming T$_K$ = 40 K (see \cite{fue93}) we derived from CS data
a hydrogen density of $\sim$3.5 10$^5$ cm$^{-3}$ for the component at
2.4 km s$^{-1}$. Similar densities were obtained for
the other filaments
from the interferometric HCO$^+$ data (\cite{fue96}).
Using a LVG code and assuming T$_k$ = 40 K and
n = 3.5 10$^5$ cm$^{-3}$, we estimate T$_{rot}$ = 10 K for HCO$^+$.
Since HCO$^+$ and CO$^+$ have similar dipole moments and rotational
constants ($\mu$ = 2.77 D for CO$^+$ and 3.91 D for HCO$^+$),
we assume the same rotational temperature for CO$^+$.
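For completeness, the optically thin LTE estimate relates the column
density to the integrated line intensity through the standard expression
\[
N = \frac{8\pi k \nu^2}{h c^3 A_{ul}}\,\frac{Q(T_{rot})}{g_u}\,
e^{E_u/kT_{rot}} \int T_{MB}\,dv\,,
\]
where $A_{ul}$, $g_u$ and $E_u$ are the Einstein coefficient,
statistical weight and energy of the upper level, and $Q(T_{rot})$ is
the partition function evaluated at $T_{rot}$ = 10 K.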
In Table 2 we show the derived HCO$^+$, H$^{13}$CO$^+$ and CO$^+$
column densities. From these estimates, we have determined that
the CO$^+$/HCO$^+$ abundance
ratio is a factor of 10 larger for the filaments
immersed in the atomic
region than for the filaments embedded in the molecular cloud.
This gradient in the CO$^+$/HCO$^+$ ratio cannot be due
to an opacity effect.
The I(CO$^+$ 235.789)/I(CO$^+$ 236.062) ratio is consistent with
optically thin emission (within the observational errors) for all
the velocity intervals (see Table 2). Though consistent with
optically thin emission, our data suggest that the opacities of
the CO$^+$ lines could be larger for the velocities
3.2 - 6.0 km s$^{-1}$ than for 1.6 - 3.2 km s$^{-1}$. In this
case, the CO$^+$ column density would be slightly underestimated
for the velocity interval 3.2 - 6.0 km s$^{-1}$, and the
derived CO$^+$/HCO$^+$ ratio would be a lower limit to the actual
value of the CO$^+$/HCO$^+$ ratio for this interval.
Therefore, although we are
aware of the uncertainties involved in column density estimates,
we think that the observed gradient in the CO$^+$/HCO$^+$ ratio
(a factor of 10) is significant, and it is in agreement
with the expected behavior of the CO$^+$/HCO$^+$ ratio,
where CO$^+$ formation is restricted to a narrow range
of visual extinctions A$_v$ $<$ 2 mag. The visual extinction at the surface
of the filament at 2.4 km s$^{-1}$ must be $>$1 mag to be immersed
in a mainly molecular medium, while for the filaments immersed
in a mainly atomic medium, the visual extinction must be $<$1 mag.
Assuming a HCO$^+$ fractional
abundance of 4 10$^{-10}$ (\cite{fue96}),
the CO$^+$ fractional abundance is
$\sim$ 4 10$^{-11}$ in the filaments
immersed in the atomic medium.
CO$^+$ fractional abundances $\sim$ 10$^{-11}$ are also
derived from the CO$^+$ data reported by \cite{sto95}
and \cite{lat93}, toward M17SW and the Orion Optical Bar.
Although the physical conditions and incident UV field
are different (see Section 3), the CO$^+$ fractional abundance
in NGC 7023 is
similar to that found at the edges of the HII regions around
massive stars.
\placetable{tbl-2}
\subsection{Molecular peak}
We have not detected CO$^+$ toward the molecular peak.
Assuming a linewidth of 1 km s$^{-1}$ (a typical linewidth
for the molecular cloud),
we obtain an upper limit to the
integrated intensity of the CO$^+$ line at 236.062 GHz
of 0.03 K kms$^{-1}$.
Assuming a kinetic temperature of T$_K$ = 15 K (\cite{fue90}),
we estimate a density of 10$^5$ cm$^{-3}$ from our CS data.
This density is high enough to excite the CO$^+$ lines. In fact,
the excitation conditions required for the
H$^{13}$CO$^+$ J=3$\rightarrow$2 line are comparable
to those required for
the CO$^+$ N=2$\rightarrow$1 J=5/2$\rightarrow$3/2
and 3/2$\rightarrow$1/2 lines,
and the H$^{13}$CO$^+$ J=3$\rightarrow$2 line
has been detected with an intensity of 1.04 K.
Therefore, the lack of detection of CO$^+$ toward the
molecular peak is not due to the excitation conditions
in this region.
With the same assumptions as for the PDR peak,
the upper limit to the CO$^+$ column density
is 4.5 10$^{10}$ cm$^{-2}$.
Assuming n = 10$^5$ cm$^{-3}$ and T$_K$ = 15 K,
we estimate a H$^{13}$CO$^+$ column density of
8 10$^{11}$ cm$^{-2}$. This means a CO$^+$/HCO$^+$ ratio of $<$ 0.001.
Therefore, the
CO$^+$/H$^{13}$CO$^+$ ratio is {\it at least 100 times lower} in
the molecular peak than in the filaments immersed in the atomic medium.
Assuming a HCO$^+$ abundance of 4 10$^{-10}$, we obtain a fractional
abundance of CO$^+$ of $<$5 10$^{-13}$ in the molecular peak.
\section{Summary and Discussion}
We have detected, for the first time, CO$^+$ in a PDR associated with
a Be star. This region is very different from the massive star forming
regions where CO$^+$ had been detected thus far. First of all, since
the ionization potential of CO is larger than 13.6 eV, a Be star does
not produce a significant number of photons capable of ionizing CO.
Furthermore, the intensity of the UV field and the density
around this star, $G_\circ$$\sim$ 10$^3$ (in units of the Habing field)
and n $\sim$ 10$^5$ cm$^{-3}$,
are very different from those around massive O stars, where
$G_\circ$$\sim$ 10$^5$ and n$\ge$ 10$^6$ cm$^{-3}$.
Chemical models predict that CO$^+$ column densities
decrease sharply for UV fields $<$10$^5$, and
densities $<$10$^6$ cm$^{-3}$ (\cite{sto95}).
Even for the conditions prevailing in massive star forming regions,
chemical models fail to predict the
large CO$^+$ column densities observed toward them.
To solve this problem, some authors have suggested that
the direct photoionization
of CO might be a non-negligible formation mechanism of CO$^+$ in
these regions (\cite{jan95}, \cite{bla96}).
We have estimated a CO$^+$ column density of $\sim$ 3 10$^{11}$
cm$^{-2}$ toward the PDR peak in NGC 7023.
Our results show that large CO$^+$ column densities can be produced
even with UV fields of just a few 10$^3$ and densities of around
10$^5$ cm$^{-3}$. Since the peak CO$^+$ abundance in NGC 7023
($\sim$ 4 10$^{-11}$) is similar
to that found in massive star forming regions, our data suggest
that the direct photoionization of
CO is not a significant formation mechanism for CO$^+$.
\acknowledgments
We are grateful to the technical staff of Pico de Veleta for their
support during the observations. We are also grateful to Dr. R. Gaume
for his careful reading of the manuscript.
This work has been partially supported by the Spanish
DGICYT under grant number
PB93-0048.
\clearpage
\part*{Auxiliary material}
\begin{figure}[hb!]
\includegraphics[width=0.8\linewidth]{Ranking_TTZ_final.pdf}
\caption{\label{fig:ranking_ttZData} The fitted values of the nuisance
parameters for the most important sources of systematic uncertainty and their
impact on the measured signal strength $\mu$, for the \ttZ fit. The points, which are
drawn conforming to the scale of the bottom axis, show the deviation of each of
the fitted nuisance parameters, $\theta$, from $\theta_0$, which is the nominal
value of that nuisance parameter, in units of the pre-fit standard deviation
$\Delta\theta$. The error bars show the post-fit uncertainties,
$\sigma_\theta$, which are close to 1 if the data do not provide any further
constraint on that uncertainty. Conversely, a value of $\sigma_\theta$ much
smaller than 1 indicates a significant reduction with respect to the original
uncertainty. The nuisance parameters are sorted according to their post-fit
effect on $\mu$ (blue areas), conforming to the scale of the top axis, with
those with the largest impact at the top. }
\end{figure}
\begin{figure}
\includegraphics[width=0.8\linewidth]{Ranking_TTW_final.pdf}
\caption{\label{fig:ranking_ttWData}
The fitted values of the nuisance parameters for the most important sources of
systematic uncertainty and their impact on the measured signal strength $\mu$, for
the \ttW fit. The points, which are drawn conforming to the scale of the bottom
axis, show the deviation of each of the fitted nuisance parameters, $\theta$,
from $\theta_0$, which is the nominal value of that nuisance parameter, in
units of the pre-fit standard deviation $\Delta\theta$. For the WZ
normalisation factor, the fitted value of the normalisation parameter, which has
a pre-fit value of one, is shown together with its uncertainties. The error
bars show the post-fit uncertainties, $\sigma_\theta$, which are close to 1 if
the data do not provide any further constraint on that uncertainty. Conversely,
a value of $\sigma_\theta$ much smaller than 1 indicates a significant
reduction with respect to the original uncertainty. The nuisance parameters are
sorted according to their post-fit effect on $\mu$ (blue areas), conforming to
the scale of the top axis, with those with the largest impact at the top.}
\end{figure}
\begin{table}[htbp]
\centering \renewcommand{\arraystretch}{1.2}
\caption{Event yields after the fit for signal and backgrounds,
and the observed data in all control and signal regions used in the fit to
extract the \ttZ and \ttW cross sections. The quoted uncertainties in the
event yields represent statistical and systematic uncertainties.
The \tZ, \WtZ, \ttH, three- and four-top-quark processes are
denoted $t+X$. The $WZ$, $ZZ$, $H \to ZZ$ (ggF and VBF), $HW$, $HZ$ and VBS
processes are denoted `Bosons'.
\vspace{1ex}}
\resizebox{\columnwidth}{!}{
\begin{tabular}{%
c|
r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
|r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
|r
}
\toprule
Region &
\multicolumn{2}{c}{$t+X$} &
\multicolumn{2}{c}{Bosons} &
\multicolumn{2}{c}{Fake leptons} &
\multicolumn{2}{c|}{Total bkg.} &
\multicolumn{2}{c}{\ttW} &
\multicolumn{2}{c|}{\ttZ} &
Data \\
\midrule
\TLCR& $0.51$ & $0.13$& $29$ & $6$& $2.1$ & $1.8$& $31$ & $7$& $0.037$ & $0.023$& $0.88$ & $0.32$& 33\\
\FLCR& \multicolumn{2}{c}{$<0.001$} & $37$ & $7$& $1.8$ & $0.6$& $39$ & $7$& \multicolumn{2}{c}{$<0.001$} & $0.028$ & $0.012$& 39\\
\midrule
\SSLSR& $0.93$ & $0.09$& $0.13$ & $0.07$& $1.4$ & $1.2$& $2.5$ & $1.3$& $5.8$ & $3.0$& $0.76$ & $0.26$& 9\\
\TLSRC& $1.07$ & $0.25$& $0.6$ & $0.5$& \multicolumn{2}{c}{$<0.001$} & $1.6$ & $0.5$& $0.16$ & $0.09$& $6.0$ & $2.0$& 8\\
\TLSRA& $1.12$ & $0.24$& $2.7$ & $1.6$& $2.1$ & $1.7$& $5.8$ & $2.4$& $0.09$ & $0.05$& $4.7$ & $1.6$& 7\\
\TLSRB& $0.58$ & $0.19$& $0.25$ & $0.21$& \multicolumn{2}{c}{$<0.001$} & $0.82$ & $0.28$& $0.21$ & $0.11$& $2.1$ & $0.7$& 4\\
\TLSRD& $0.96$ & $0.11$& $0.15$ & $0.14$& $3.3$ & $2.2$& $4.5$ & $2.2$& $4.0$ & $2.1$& $1.6$ & $0.5$& 10\\
\FLSRD& $0.212$ & $0.033$& $0.08$ & $0.06$& $0.113$ & $0.022$& $0.40$ & $0.08$& \multicolumn{2}{c}{$<0.001$} & $0.72$ & $0.25$& 1\\
\FLSRE& $0.121$ & $0.022$& $0.07$ & $0.06$& $0.062$ & $0.012$& $0.25$ & $0.07$& \multicolumn{2}{c}{$<0.001$} & $0.69$ & $0.23$& 1\\
\FLSRB& $0.25$ & $0.04$& $0.0131$ & $0.0032$& $0.114$ & $0.019$& $0.37$ & $0.04$& \multicolumn{2}{c}{$<0.001$} & $0.82$ & $0.28$& 2\\
\FLSRC& $0.16$ & $0.05$& \multicolumn{2}{c}{$<0.001$} & $0.063$ & $0.013$& $0.23$ & $0.05$& \multicolumn{2}{c}{$<0.001$} & $0.70$ & $0.23$& 1\\
\bottomrule
\end{tabular}
}
\end{table}
\section{The ATLAS detector}
\label{s:atlas}
The ATLAS detector~\cite{PERF-2007-01} consists of four main subsystems: an
inner tracking system, electromagnetic (EM) and hadronic calorimeters, and a
muon spectrometer (MS). The inner detector (ID) consists of a high-granularity
silicon pixel detector, including the newly installed Insertable
B-Layer~\cite{ATL-INDET-PUB-2015-001}, which is the innermost layer of the
tracking system, and a silicon microstrip tracker, together providing precision
tracking in the pseudorapidity\footnote{ATLAS uses a right-handed coordinate
system with its origin at the nominal interaction point (IP) in the centre of
the detector and the $z$-axis along the beam pipe. The $x$-axis points from the
IP to the centre of the LHC ring, and the $y$-axis points upward. Cylindrical
coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the
azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of
the polar angle $\theta$ as $\eta=-\ln\tan(\theta/2)$.} range $|\eta|<2.5$ and
of a transition radiation tracker covering $|\eta|<2.0$. All the systems are immersed in a
\SI{2}{T} magnetic field provided by a superconducting solenoid. The EM
sampling calorimeter uses lead and liquid argon (LAr) and is divided into
barrel ($|\eta|<1.475$) and endcap ($1.375<|\eta|<3.2$) regions. Hadron
calorimetry is provided by a steel/scintillator-tile calorimeter, segmented
into three barrel structures, in the range $|\eta|<1.7$, and by two copper/LAr
hadronic endcap calorimeters that cover the region $1.5<|\eta|<3.2$. The solid
angle coverage is completed with forward copper/LAr and tungsten/LAr
calorimeter modules, optimised for EM and hadronic measurements respectively,
covering the region $3.1<|\eta|<4.9$. The muon spectrometer measures the
deflection of muon tracks in the range $|\eta|<2.7$ using multiple layers of
high-precision tracking chambers located in toroidal magnetic fields.
The field integral of the toroids ranges between $2.0$ and \SI{6.0}{Tm} for
most of the detector. The muon spectrometer is also instrumented with separate
trigger chambers covering $|\eta|<2.4$. A two-level trigger system, using
custom hardware followed by a software-based trigger level, is used to reduce the event
rate to an average of around \SI{1}{kHz} for offline storage.
\section{Conclusion}
\label{s:conclusion}
Measurements of the production cross sections of a top-quark pair in
association with a $Z$ or $W$ boson using \lumi of data collected by the ATLAS
detector in $\sqrt{s} = 13\,\TeV$ $pp$ collisions at the LHC are presented.
Final states with either two same-charge muons, or three or four leptons
are analysed. From a simultaneous fit to nine signal regions and two control regions, the
\ttZ and \ttW production cross sections are determined to be $\sttZ = 0.9 \pm
0.3$ pb and $\sttW = 1.5 \pm 0.8$ pb.
Both measurements are consistent with the NLO
QCD theoretical calculations, $\sttZ = 0.84 \pm 0.09\,\text{pb}$ and $\sttW =
0.60 \pm 0.08\,\text{pb}$.
\section{Introduction}
\label{s:intro}
At the Large Hadron Collider (LHC), top quarks are copiously produced in
quark--antiquark pairs (\ttbar). This process has been extensively studied in
proton--proton collisions at centre-of-mass energies of $7$ and $8\,\TeV$, and
recently at $13\,\TeV$~\cite{TOPQ-2015-09, CMS-TOP-15-003}. Measurements of the associated production of \ttbar
with a $Z$ boson (\ttZ) allow the extraction of information about the neutral-current coupling of the top quark. The production rate of a top-quark pair with a
massive vector boson could be altered in the presence of physics beyond the
Standard Model (SM), such as vector-like quarks~\cite{AguilarSaavedra:2009es,Aguilar-Saavedra:2013qpa}, strongly coupled Higgs bosons~\cite{Perelstein:2005ka}
or technicolour~\cite{Chivukula:1992ap, Chivukula:1994mn, Hagiwara:1995jx, Mahanta:1996ng, Mahanta:1996qe},
and therefore the measurements of \sttZ and \sttW are important
checks of the validity of the SM at this new energy regime.
The \ttZ and \ttW
processes have been established by ATLAS~\cite{TOPQ-2013-05} and
CMS~\cite{CMS-TOP-14-021} using the Run-1 dataset at $\sqrt{s} = 8\,\TeV$, with
measured cross sections compatible with the SM prediction and having uncertainties of
$\sim\!\!30\%$. At $\sqrt{s} = 13\,\TeV$, the SM cross sections of the \ttZ and
\ttW processes increase by factors of $3.5$ and $2.4$, respectively, compared
to $\sqrt{s} = 8\,\TeV$. The cross sections, computed at next-to-leading-order
(NLO) QCD precision, using \MGMCatNLO (referred to in the
following as \MGAMC), are $\sttZ = 0.84\,\text{pb}$ and $\sttW =
0.60\,\text{pb}$ with an uncertainty of $\sim\!\!12\%$~\cite{Frixione:2015zaa, Alwall:2014hca}, primarily due to higher-order
corrections, estimated by varying the renormalisation and
factorisation scales.
This paper presents measurements of the \ttZ and \ttW cross sections using
\lumi of proton--proton ($pp$) collision data at $\sqrt{s} = 13\,\TeV$
collected by the ATLAS detector in 2015. The final states of top-quark pairs
produced in association with a $Z$ or a $W$ boson comprise up to four isolated,
prompt leptons.\footnote{In this paper, lepton is used to denote electron or
muon, and prompt lepton is used to denote a lepton produced in a $Z$ or $W$
boson or $\tau$-lepton decay.} Decay modes with two same-sign (SS) charged
muons, or three or four leptons are considered in this analysis. The analysis
strategy follows the strategy adopted for the $8\,\TeV$
dataset~\cite{TOPQ-2013-05}, excluding the lower sensitivity SS dilepton
channels. Table~\ref{tab:intro-channels} lists the analysis channels and the
targeted decay modes of the \ttZ and \ttW processes. Each channel is divided
into multiple analysis regions in order to enhance the sensitivity to the
signal. Simultaneous fits are performed to the signal regions and selected
control regions in order to extract the cross sections for \ttZ and \ttW
production. Additional validation regions are defined to check that the
background estimate agrees with the data; they are not used in the fit.
\begin{table}[htbp]
\centering
\caption{\label{tab:intro-channels} List of \ttW and \ttZ decay modes and
analysis channels targeting them.\vspace{1ex}}
\begin{tabular}{cccc}
\toprule
Process & \ttbar decay & Boson decay & Channel\\
\midrule
\multirow{2}{*}{$\ttW$}
& $(\mu^{\pm}\nu b) (q\bar{q} b) $ & $\mu^{\pm}\nu$ & SS dimuon\\
& $ (\ell^{\pm}\nu b) (\ell^{\mp}\nu b)$ & $\ell^{\pm}\nu$ & Trilepton\\
\midrule
\multirow{2}{*}{\ttZ}
& $(\ell^{\pm}\nu b) (q\bar{q} b)$ & $ \ell^{+}\ell^{-}$ & Trilepton\\
& $(\ell^{\pm}\nu b) (\ell^{\mp} \nu b)$ & $ \ell^{+}\ell^{-}$ & Tetralepton\\
\bottomrule
\end{tabular}
\end{table}
\section{Object reconstruction}
\label{s:objetcs}
The final states of interest in this analysis contain electrons, muons, jets,
$b$-jets and missing transverse momentum.
Electron candidates~\cite{PERF-2013-03} are reconstructed from energy deposits
(clusters) in the EM calorimeter that are associated with reconstructed tracks
in the inner detector. The electron identification relies on a
likelihood-based selection~\cite{ATLAS-CONF-2014-032,
ATL-PHYS-PUB-2015-041}. Electrons are required to pass the `medium'
likelihood identification requirements described in Ref.~\cite{ATL-PHYS-PUB-2015-041}. These include requirements on the
shapes of the electromagnetic shower in the calorimeter as well as tracking and track-to-cluster matching quantities.
The electrons are also required to have transverse momentum $\pt > 7\,\GeV$ and
$|\eta_\text{cluster}| < 2.47$, where $\eta_\text{cluster}$ is the
pseudorapidity of the calorimeter energy deposit associated with the electron
candidate. Candidates in the EM calorimeter barrel/endcap transition region
$1.37 < |\eta_\text{cluster}| < 1.52$ are excluded.
Muon candidates are reconstructed from a fit to track segments in the various layers of
the muon spectro\-meter, matched with tracks identified in the inner detector.
Muons are required to have $\pt > 7\,\GeV$ and $\abseta < 2.4$ and to pass the
`medium' identification requirements defined in Ref.~\cite{PERF-2015-10}. The `medium' requirement includes selections on the numbers of hits
in the ID and MS as well as a compatibility requirement between momentum measurements in the ID and MS. It provides a high efficiency and purity of selected muons. Electron candidates sharing a track with a muon candidate are removed.
To reduce the non-prompt lepton background from hadron decays or jets
misidentified as leptons (labelled as ``fake leptons'' throughout this paper), electron and muon
candidates are required to be isolated. The sum of the track transverse
momenta in a surrounding cone of size $\min(10\,\GeV/\pT, r_{e,\mu})$, excluding the track of the candidate from the sum, is required to be less than 6\% of the candidate \pt, where
$r_e = 0.2$ and $r_{\mu} = 0.3$. In addition, the sum of the cluster
transverse energies in the calorimeter within a cone of size $\Delta R_{\eta} \equiv
\sqrt{(\Delta\eta)^2 + (\Delta\phi)^2} = 0.2$ of any electron candidate,
excluding energy deposits of the candidate itself, is required to be less than
6\% of the candidate \pt.
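In compact form (a restatement of the requirements above, not an additional
selection), the track-based isolation criterion reads
\[
\sum_{\Delta R \,<\, \min(10\,\GeV/\pT,\ r_{e,\mu})} \pT^{\rm track}
\;<\; 0.06\,\pT ,
\qquad r_e = 0.2,\quad r_{\mu} = 0.3 ,
\]
with the analogous calorimeter-based requirement applied to electrons within
$\Delta R_{\eta} = 0.2$.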
For both electrons and muons, the longitudinal impact parameter of the
associated track with respect to the primary vertex,\footnote{A primary vertex
candidate is defined as a vertex with at least five associated tracks,
consistent with the beam collision region. If more than one such vertex is
found, the vertex candidate with the largest sum of squared transverse momenta
of its associated tracks is taken as the primary vertex.} $z_{0}$, is required
to satisfy $|z_0 \sin\theta|<0.5$ mm. The significance of the transverse impact
parameter $d_0$ is required to satisfy $|d_0|/\sigma(d_0)<5$ for electrons and
$|d_0|/\sigma(d_0)<3$ for muons, where $\sigma(d_0)$ is the uncertainty in
$d_0$.
Jets are reconstructed using the anti-$k_t$ algorithm~\cite{Cacciari:2008gp,
Cacciari:2005hq} with radius parameter $R = 0.4$, starting from topological
clusters in the calorimeters~\cite{Aad:2016upy}. The effect of pile-up on jet
energies is accounted for by a jet-area-based correction~\cite{Cacciari:2008gn}
and the energy resolution of the jets is improved by using global sequential
corrections~\cite{ATLAS-CONF-2015-002}. Jets are calibrated to the hadronic
energy scale using $E$- and $\eta$-dependent calibration factors based on MC
simulations, with in-situ corrections based on Run-1 data~\cite{PERF-2011-03,
ATLAS-CONF-2015-037} and checked with early Run-2
data~\cite{ATL-PHYS-PUB-2015-015}. Jets are accepted if they fulfil the
requirements $\pT > 25\,\GeV$ and $|\eta| < 2.5$. To reduce the contribution
from jets associated with pile-up, jets with $\pT < 60\,\GeV$ and $|\eta| < 2.4$
are required to satisfy pile-up rejection criteria (JVT), based on a multivariate
combination of track-based variables~\cite{PERF-2014-03}.
Jets are $b$-tagged as likely to contain $b$-hadrons using the \tool{MV2c20}
algorithm, a multivariate discriminant making use of the long lifetime, large
decay multiplicity, hard fragmentation and high mass of $b$-hadrons
\cite{PERF-2012-04}. The average efficiency to correctly tag a $b$-jet is
approximately $77\%$, as determined in simulated \ttbar events, but it varies as
a function of \pT and $\eta$. In simulation, the tagging algorithm gives a
rejection factor of about $130$ against light-quark and gluon jets, and about
$4.5$ against jets containing charm quarks~\cite{ATL-PHYS-PUB-2015-022}. The
efficiency of $b$-tagging in simulation is corrected to that in data using a
\ttbar-based calibration using Run-1 data~\cite{ATLAS-CONF-2014-004} and
validated with Run-2 data~\cite{ATL-PHYS-PUB-2015-039}.
The missing transverse momentum $\mathbf{p}^\text{miss}_\text{T}$, with
magnitude \met, is a measure of the transverse momentum imbalance due to
particles escaping detection. It is computed~\cite{PERF-2011-07} as the
negative sum of the transverse momenta of all electrons, muons and jets and an
additional soft term. The soft term is constructed from all tracks that are associated
with the primary vertex but not with any physics object. In this way, the \met is adjusted for the best calibration of the jets
and the other identified physics objects above, while maintaining pile-up
independence in the soft term~\cite{ATL-PHYS-PUB-2015-027,
ATL-PHYS-PUB-2015-023}.
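Schematically, this construction corresponds to
\[
\mathbf{p}^\text{miss}_\text{T} = -\Big(
\sum_{\rm electrons} \mathbf{p}_{\rm T} +
\sum_{\rm muons} \mathbf{p}_{\rm T} +
\sum_{\rm jets} \mathbf{p}_{\rm T} +
\sum_{\rm soft\ tracks} \mathbf{p}_{\rm T}
\Big),
\qquad
\met = \big|\mathbf{p}^\text{miss}_\text{T}\big| .
\]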
To prevent double-counting of electron energy deposits as jets, the closest jet
within $\Delta R_y = 0.2$ of a reconstructed electron is removed, where $\Delta
R_y \equiv \sqrt{(\Delta y)^2 + (\Delta\phi)^2}$. If the nearest jet surviving
the above selection is within $\Delta R_y = 0.4$ of an electron, the electron
is discarded to ensure that selected electrons are sufficiently separated from
nearby jet activity. To reduce the background from muons originating from heavy-flavour particle
decays inside jets, muons are removed if they are separated from the nearest
jet by $\Delta R_y < 0.4$. However, if this jet has fewer than three
associated tracks, the muon is kept and the jet is removed instead; this avoids
an inefficiency for high-energy muons undergoing significant energy loss in the
calorimeter.
\section*{Acknowledgements}
\input{acknowledgements}
\printbibliography
\newpage
\input{atlas_authlist}
\end{document}
\section{Results}
\label{s:results}
In order to extract the \ttZ and \ttW cross sections, nine signal regions (\SSLSR,
\TLSRA, \TLSRB, \TLSRC, \TLSRD, \FLSRB, \FLSRC, \FLSRD, \FLSRE) and two control regions
(\TLCR, \FLCR) are simultaneously fitted.
The \SSLSR\ signal region is particularly sensitive to \ttW, the \TLSRD\ signal region
is sensitive to both, \ttW and \ttZ, while all other signal regions aim at the determination
of the \ttZ cross section.
The cross sections \sttZ and \sttW are determined using a binned maximum-likelihood fit
to the numbers of events in these regions.
The fit is based on the profile-likelihood
technique, where systematic uncertainties are allowed to vary as nuisance
parameters and take on their best-fit values. None of the uncertainties are
found to be significantly constrained or pulled from their initial values. The calculation of confidence intervals and
hypothesis testing is performed using a modified frequentist method as implemented in RooStats~\cite{RooFit,RooFitManual}.
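The likelihood has the standard binned form sketched below (schematic
notation, not the exact implementation: $n_i$ denotes the observed yield in
region $i$, $s_i$ and $b_i$ the expected signal and background yields, and
$G(\theta_j)$ the Gaussian constraint terms for the nuisance parameters):
\[
L(\mu_{t\bar{t}Z}, \mu_{t\bar{t}W}, \boldsymbol{\theta}) =
\prod_{i \,\in\, \rm regions} {\rm Pois}\Big( n_i \;\Big|\;
\mu_{t\bar{t}Z}\, s_i^{t\bar{t}Z}(\boldsymbol{\theta}) +
\mu_{t\bar{t}W}\, s_i^{t\bar{t}W}(\boldsymbol{\theta}) +
b_i(\boldsymbol{\theta}) \Big) \,
\prod_{j} G(\theta_j) .
\]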
A summary of the fit to all regions used to measure the \ttZ and \ttW
production cross sections is shown in Figure~\ref{fig:allyields}. The
normalisation corrections for the $WZ$ and $ZZ$ backgrounds with respect to the
Standard Model predictions are obtained from the fits as described in
Section~\ref{s:selection} and found to be compatible with unity:
$1.11 \pm 0.30$
for the $WZ$ background and
$0.94 \pm 0.17$
for the $ZZ$ background.
\vspace{1ex}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\textwidth]{Summary_ttWZ}
\caption{Expected yields after the fit compared to data for the
fit to extract \sttZ and \sttW in the signal regions and in the control regions
used to constrain the $WZ$ and $ZZ$ backgrounds. The `Other' background
summarises all other backgrounds described in Section~\ref{s:samples}.
\hatch.\label{fig:allyields}}
\end{figure}
The results of the fit are $\sigma_{\ttZ} = 0.92 \pm 0.29 \stat \pm 0.10 \syst$
pb and $\sigma_{\ttW} = 1.50 \pm 0.72 \stat \pm 0.33 \syst$ pb with a
correlation of $-0.13$ and are shown in Figure~\ref{fig:simfit}. The fit
yields significances of $3.9\sigma$ and $2.2\sigma$ over the background-only
hypothesis for the \ttZ and \ttW processes, respectively. The expected
significances are $3.4\sigma$ for \ttZ and $1.0\sigma$ for \ttW production.
The significance values are computed using the asymptotic approximation described in Ref.~\cite{cls_3}.
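In this approximation the significance is obtained from the
profile-likelihood-ratio test statistic for the background-only hypothesis
(a standard expression, quoted here as a sketch):
\[
Z = \sqrt{ -2 \ln
\frac{ L(\mu = 0,\, \hat{\hat{\boldsymbol{\theta}}}) }
     { L(\hat{\mu},\, \hat{\boldsymbol{\theta}}) } } ,
\]
where $\hat{\hat{\boldsymbol{\theta}}}$ maximises the likelihood for fixed
$\mu = 0$ and $(\hat{\mu}, \hat{\boldsymbol{\theta}})$ is the unconditional
maximum.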
In the two channels most sensitive to the \ttW signal, the observed relative
numbers of events with two positively or two negatively charged leptons
are compatible with expectation. In the \TLSRD\ channel, the observed distribution of
the number of events with a given number of electrons and muons matches expectation as well.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\textwidth]{ttZ_vs_ttW_2Dfit}
\caption{The result of the simultaneous fit to the \ttZ and \ttW cross sections
along with the 68\% and 95\% confidence level (CL) contours. The shaded areas
correspond to the theoretical uncertainties in the Standard Model predictions,
and include renormalisation and factorisation scale uncertainties as well as
PDF uncertainties including $\alphas$ variations.}
\label{fig:simfit}
\end{figure}
Table~\ref{tab:syst} shows the leading and total uncertainties in the
measured \ttZ and \ttW cross sections. In estimating the uncertainties for \ttZ (\ttW), the cross section for \ttW (\ttZ) is fixed to its
Standard Model value. For both processes, the precision of
the measurement is dominated by statistical uncertainties. For the \ttZ determination,
the different sources contribute with similar size to the total systematic uncertainty.
For the \ttW determination, the dominant systematic uncertainty source is
the limited amount of data available for the estimation of the fake-lepton background.
\begin{table}[htbp]
\centering \renewcommand{\arraystretch}{1.2}
\caption{List of dominant and total uncertainties in the measured cross
sections of the \ttZ and \ttW processes from the fit. All uncertainties are
symmetrised.\vspace{1ex}}
\label{tab:syst}
\begin{tabular}{lcc}
\toprule
Uncertainty & $\sigma_{\ttZ}$ & $\sigma_{\ttW}$ \\
\midrule
Luminosity & 2.6\% & 3.1\% \\
Reconstructed objects & 8.3\% & 9.3\% \\
Backgrounds from simulation & 5.3\% & 3.1\% \\
Fake leptons and charge misID & 3.0\% & 19\% \\
Signal modelling & 2.3\% & 4.2\% \\
\midrule
Total systematic & 11\% & 22\% \\
Statistical & 31\% & 48\% \\
\midrule
Total & 32\% & 53\% \\
\bottomrule
\end{tabular}
\end{table}
\section{Data and simulated event samples}
\label{s:samples}
The data were collected with the ATLAS detector during 2015 with a
bunch spacing of \SI{25}{ns} and a mean number of $14$ $pp$
interactions per bunch crossing (pile-up).
With strict data-quality requirements, the integrated luminosity considered
corresponds to \lumi with an uncertainty of $2.1\%$~\cite{Aaboud:2016hhf}.
Monte Carlo simulation samples (MC) are used to model the expected signal and
background distributions in the different control, validation and signal regions
described below. The
heavy-flavour decays involving $b$- and $c$-quarks,
particularly important to this measurement, are modelled
using the \EVTGEN~\cite{EvtGen} program, except for processes modelled using the
\SHERPA generator. In all samples the top-quark mass is set to $172.5\,\GeV$ and
the Higgs boson mass is set to $125\,\GeV$. The response of the detector to
stable\footnote{A particle is considered stable if $c\tau \ge 1$ cm.} particles
is emulated by a dedicated simulation~\cite{SOFT-2010-01} based either fully on
\GEANT~\cite{geant} or on a faster parameterisation~\cite{ATL-PHYS-PUB-2010-013}
for the calorimeter response and \GEANT for other detector systems. To account for
additional $pp$ interactions from the same and close-by bunch crossings, a set
of minimum-bias interactions generated using \PYTHIA v8.210~\cite{Sjostrand:2014zea},
referred to as \PYTHIA 8 in the following,
with the \tune{A2}~\cite{ATL-PHYS-PUB-2011-014} set of tuned MC parameters (\tune{A2} tune)
is superimposed on the
hard-scattering events. In order to reproduce the same pile-up levels present in
the data, the distribution of the number of additional $pp$ interactions in the
MC samples is reweighted to match the one in the data. All samples are processed
through the same reconstruction software as the data. Simulated events are
corrected so that the object identification, reconstruction and trigger
efficiencies, energy scales and energy resolutions match those determined from
data control samples.
The associated production of a top-quark pair with one or two vector bosons is
generated at leading order (LO) with \MGAMC\ interfaced to \PYTHIA 8, with up to
two (\ttW), one (\ttZ) or no ($\bgttWW$) extra partons included in the matrix
elements. The $\gamma^{*}$ contribution and the \Zgamstar interference are
included in the \ttZ samples.
The \tune{A14}~\cite{ATL-PHYS-PUB-2014-021} set of tuned MC parameters (\tune{A14} tune) is used together with the
\pdf{NNPDF2\!.\!3LO} parton distribution function (PDF) set~\cite{Ball:2012cx}.
The samples are normalised using cross sections computed at
NLO in QCD~\cite{ATL-PHYS-PUB-2016-005}.
The $t$-channel production of a single top quark in association with a $Z$ boson
(\tZ) is generated using \MGAMC\ interfaced with \PYTHIA v6.427~\cite{PythiaManual},
referred to as \PYTHIA 6 in the following,
with the \pdf{CTEQ6L1} PDF~\cite{cteq6} set and the \tune{Perugia2012}~\cite{Skands:2010ak}
set of tuned MC parameters at NLO in QCD. The \Zgamstar interference is
included, and the four-flavour scheme is used in the computation.
The \Wt-channel production of a single top quark together with a $Z$ boson
(\WtZ) is generated with \MGAMC\ and showered with \PYTHIA 8, using the
\pdf{NNPDF3\!.\!0NLO} PDF set~\cite{Ball:2015cx} and the \tune{A14} tune. The generation is performed at
NLO in QCD using the five-flavour scheme.
Diagrams containing a top-quark pair are removed to avoid
overlap with the \ttZ process.
Diboson processes with four charged leptons ($4\ell$), three charged leptons and
one neutrino ($\ell\ell\ell\nu$) or two charged leptons and two neutrinos
($\ell\ell\nu\nu$) are simulated using the \SHERPA 2.1
generator~\cite{Gleisberg:2008ta}.
The matrix elements include all diagrams with four electroweak vertices.
They are calculated for up to one ($4\ell,
\ell\ell\nu\nu$) or no additional partons ($\ell\ell\ell\nu$) at NLO and up to three
partons at LO using the \COMIX~\cite{Gleisberg:2008fv} and
\OPENLOOPS~\cite{Cascioli:2011va} matrix element generators and merged with the
\SHERPA parton shower using the \textsc{ME+PS@NLO}
prescription~\cite{Hoeche:2012yf}. The \pdf{CT10nlo} PDF set~\cite{Lai:2010vv} is used in conjunction
with a dedicated parton-shower tuning developed by the \SHERPA authors. The NLO
cross sections calculated by the generator are used to normalise diboson
processes. Alternative diboson samples are simulated using the \POWHEGBOX
v2~\cite{Powbox2} generator, interfaced to the \PYTHIA 8 parton shower model,
and for which the \pdf{CT10nlo} PDF set is used in the matrix element, while the
\pdf{CTEQ6L1} PDF set is used for the parton shower along with the
\tune{AZNLO}~\cite{AZNLO:2014} set of tuned MC parameters.
The production of three massive vector bosons with subsequent leptonic decays of
all three bosons is modelled at LO with the \SHERPA 2.1 generator and the
\pdf{CT10} PDF set~\cite{Lai:2010vv}. Up to two additional partons are included in the matrix
element at LO and the full NLO accuracy is used for the inclusive process.
Electroweak processes involving the vector-boson scattering (VBS) diagram and
producing two same-sign leptons, two neutrinos and two partons are modelled
using \SHERPA 2.1 at LO accuracy and the \pdf{CT10} PDF set. Processes of
orders four and six in the electroweak coupling constant are considered, and up
to one additional parton is included in the matrix element.
For the generation of \ttbar events and \Wt-channel single-top-quark events the
\POWHEGBOX v2 generator is used with the \pdf{CT10} PDF set. The parton shower
and the underlying event are simulated using \PYTHIA 6
with the \pdf{CTEQ6L1} PDF set and the corresponding
\tune{Perugia2012} tune. The \ttbar samples are normalised to their
next-to-next-to-leading-order (NNLO) cross-section predictions, including soft-gluon
resummation to next-to-next-to-leading-log order, as calculated with the
\tool{Top++2.0} program (see Ref.~\cite{ref:xs6} and references therein). For more
efficient sample generation, the \ttbar sample is produced by selecting only true
dilepton events in the final state. Moreover, an additional dilepton \ttbar
sample requiring a $b$-hadron not coming from top-quark decays is generated
after $b$-jet selection. Diagram removal is employed to remove
the overlap between \ttbar and \Wt~\cite{Re:2010bp}.
Samples of \ttbar events produced in association with a Higgs boson (\ttH)
are generated using NLO matrix elements in \MGAMC\ with
the \pdf{CT10NLO} PDF set and interfaced with \PYTHIA 8 for the modelling of the
parton shower. Higgs boson production via gluon--gluon fusion (ggF) and vector
boson fusion (VBF) is generated using the \POWHEGBOX v2 generator with
\pdf{CT10} PDF set. The parton shower and underlying event are simulated using
\PYTHIA 8 with the \pdf{CTEQ6L1} PDF set and \tune{AZNLO} tune. Higgs boson
production with a vector boson is generated at LO using \PYTHIA 8 with the
\pdf{CTEQ6L1} PDF. All Higgs boson samples are normalised using theoretical
calculations of Ref.~\cite{lhcxs}.
Events containing $Z$ or $W$ bosons with associated jets, referred to as
$Z$+jets and $W$+jets in the following, are simulated using the
\SHERPA 2.1 generator. Matrix elements are calculated for up to two partons at
NLO and four partons at LO. The \pdf{CT10} PDF set is used in conjunction with
a dedicated parton-shower tuning developed by the \SHERPA authors~\cite{Gleisberg:2008ta}. The
$Z/W$+jets samples are normalised to the NNLO cross
sections~\cite{Anastasiou:2003ds, Gavin:2010az, Gavin:2012sy, Li:2012wna}.
Alternative $Z/W$+jets samples
are simulated using \MGAMC\ at LO interfaced to the \PYTHIA 8 parton shower
model. The \tune{A14} tune is used together with the \pdf{NNPDF2\!.\!3LO} PDF set.
The SM production of three and four top quarks is generated at LO with
\MGAMC+\PYTHIA 8, using the \tune{A14} tune together with the
\pdf{NNPDF2\!.\!3LO} PDF set. The samples are normalised using cross
sections computed at NLO~\cite{Barger:2010uw,Bevilacqua:2012em}.
\section{Event selection and background estimation}
\label{s:selection}
Only events collected using single-electron or single-muon triggers are
accepted. The trigger thresholds, $\pT > 24\,\GeV$ for electrons and
$\pT > 20\,\GeV$ for muons, are set to be almost fully efficient for
reconstructed leptons with $\pT > 25\,\GeV$. Events are required
to have at least one reconstructed primary vertex. In all selections
considered, at least one reconstructed lepton with $\pT > 25\,\GeV$ is required
to match ($\Delta R_{\eta} < 0.15$) a lepton with the same flavour
reconstructed by the trigger algorithm. Three channels are defined based on
the number of reconstructed leptons, which are sorted according to their
transverse momentum in decreasing order.
Background events containing well-identified prompt leptons are modelled by simulation. The
normalisations for the $WZ$ and $ZZ$ processes are taken from data control regions and
included in the fit.
The yields in these data control regions are
extrapolated to the signal regions using simulation.
Systematic uncertainties in the extrapolation are taken into account in the
overall uncertainty in the background estimate.
Background sources involving one or more fake leptons are modelled
using data events from control regions. For the same-sign dimuon (\SSLSR) analysis and the
trilepton analysis the fake-lepton background is estimated using the matrix
method~\cite{TOPQ-2010-01}, where any combination of fake leptons among the
selected leptons is considered. However, compared to Ref.~\cite{TOPQ-2010-01}, the
real- and fake-lepton efficiencies used by the matrix method are estimated
in a different way in this measurement. The lepton efficiencies are measured
in control regions, where they are extracted as free parameters of a
likelihood fit that uses the matrix method itself as the model, assumes
Poisson statistics, and neglects events with two fake leptons. In this way
the parameters are by construction the actual parameters of the matrix model,
instead of relying on external lepton-efficiency measurements which are not
guaranteed to be fully consistent with the matrix model.
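For illustration, in the single-lepton case the matrix method inverts the
relation between the numbers of leptons passing a loose and a tight
selection, $N_{\rm L}$ and $N_{\rm T}$, and the numbers of real and fake
loose leptons, $N_{\rm r}$ and $N_{\rm f}$ (a textbook sketch; the analysis
uses its generalisation to all combinations of selected leptons):
\[
\begin{pmatrix} N_{\rm T} \\ N_{\rm L} - N_{\rm T} \end{pmatrix}
=
\begin{pmatrix}
\varepsilon_{\rm r} & \varepsilon_{\rm f} \\
1 - \varepsilon_{\rm r} & 1 - \varepsilon_{\rm f}
\end{pmatrix}
\begin{pmatrix} N_{\rm r} \\ N_{\rm f} \end{pmatrix}
\;\;\Rightarrow\;\;
N_{\rm T}^{\rm fake} = \varepsilon_{\rm f} N_{\rm f}
= \frac{\varepsilon_{\rm f}}{\varepsilon_{\rm r} - \varepsilon_{\rm f}}
\big( \varepsilon_{\rm r} N_{\rm L} - N_{\rm T} \big) ,
\]
where $\varepsilon_{\rm r}$ and $\varepsilon_{\rm f}$ are the real- and
fake-lepton efficiencies.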
The
control regions are defined in dilepton events, separately for $b$-tagged and
$b$-vetoed events to take into account the different fake-lepton efficiencies
depending on whether the source is a light-flavour jet or a heavy-flavour jet.
The real-lepton efficiencies are measured in inclusive opposite-sign events,
and fake-lepton efficiencies in events with same-sign leptons and $\met>40\,\GeV$
(for $b$-tagged events $\met>20\,\GeV$), after subtracting the estimated
contribution from events with misidentification of the charge of a lepton
(referred to as ``charge-flip'' in the following), and excluding the same-sign dimuon signal region. The charge-flip events are subtracted using simulation. The
extracted fake-lepton efficiencies are found to be compatible with fake-lepton
efficiencies from a fully data-driven procedure where the charge-flip events
are estimated from data. For the \FLC, the contribution from backgrounds
containing fake leptons is estimated from simulation and corrected with scale
factors determined in control regions.
The full selection requirements and the background evaluation strategies in the
different channels are described below.
\subsection{Same-sign dimuon analysis}
The same-sign dimuon signal region targets the \ttW process and has
the highest sensitivity among all same-sign dilepton
regions~\cite{TOPQ-2013-05}. The main reason for this is that electrons have a
much larger charge misidentification probability, inducing a significant
background from top-quark pairs. Events are required to have two muon
candidates with the same charge and $\pt>25\,\GeV$, $\met > 40\,\GeV$, the scalar
sum of the \pT of selected leptons and jets, \HT, above $240\,\GeV$, and at least
two $b$-tagged jets. Events containing additional leptons (with $\pT>7\,\GeV$)
are vetoed.
The dominant background in the \SSLSR\ region arises from events containing
fake leptons, where the main source is \ttbar events. Backgrounds from the
production of prompt leptons with correctly
identified charge come primarily from $WZ$ production, but the relative
contribution of this background is small compared to the fake-lepton
background. The charge-flip background is negligible in this signal region, as
the probability of misidentifying the charge of a muon in the relevant \pT
range is negligible.
For the validation of the fake-lepton background estimate a region is defined
based on the signal region selection but omitting the \met requirement,
reducing the \pT threshold of the subleading lepton to $20\,\GeV$ and requiring
at least one $b$-tagged jet.
The distributions of \met and subleading lepton \pT in this validation region ($2\mu$-SS-VR) are shown in
Figure~\ref{fig:ssmm_val}. The expected numbers of events in the \SSLSR\
signal region are shown in Table~\ref{tab:yields}. Nine events are
observed in data for this signal region.
\begin{figure}[htbp]
\centering
\includegraphics[width=\twofigwidth]{SS2mu1b_MET}
\includegraphics[width=\twofigwidth]{SS2mu1b_pT_2lep}
\caption{\label{fig:ssmm_val} (Left) The \met and (right) subleading lepton \pT
distributions shown for the $b$-tagged \SSLSR\ channel where the signal region
requirements on subleading lepton \pT, number of $b$-tags, and \met are
relaxed. \hatch. The background denoted `Other' contains other SM processes
producing two same-sign prompt leptons. \oflo.}
\end{figure}
\subsection{Trilepton analysis}
Four signal regions with exactly three leptons are considered. The first three
are sensitive to \ttZ; each of these requires an opposite-sign same-flavour
(OSSF) pair of leptons whose invariant mass is within $10\,\GeV$ of the $Z$ boson mass.
The signal regions are categorised by their jet and $b$-jet multiplicities and
have different signal-to-background ratios. In the \TLSRA\ region, at least
four jets are required, exactly one of which is $b$-tagged. In the \TLSRB\
region, exactly three jets with at least two $b$-tagged jets are required. In
the \TLSRC\ region, at least four jets are required, of which at least two are
$b$-tagged.
In the \TLSRD\ region at least two and at most four jets are required, of which
at least two are $b$-tagged, no OSSF lepton pair is allowed in the $Z$
boson mass window, and the sum of the lepton charges must be $\pm$1. This region
primarily targets the \ttW process but also has a sizeable \ttZ contribution.
The signal region definitions for the \TLC\ are summarised
in Table~\ref{tab:SRs3l}, while the expected numbers of events in the signal
regions are shown in Table~\ref{tab:yields}. The dominant backgrounds in the
\TLSRA, \TLSRB\ and \TLSRC\ signal regions arise from $Z$+jets production with
a fake lepton, diboson production and the production of a single top quark in
association with a $Z$ boson.
\begin{table}[htbp]
\centering
\caption{Summary of event selections in the trilepton signal regions.\vspace{1ex}}
\label{tab:SRs3l}
\begin{tabular}{l|c|c|c|c}
\toprule
Variable & \TLSRA & \TLSRB & \TLSRC & \TLSRD\\
\midrule
Leading lepton & \multicolumn{4}{c}{$\pT>25\,\GeV$} \\
Other leptons & \multicolumn{4}{c}{$\pT>20\,\GeV$} \\
Sum of lepton charges & \multicolumn{4}{c}{$\pm1$} \\
$Z$-like OSSF pair & \multicolumn{3}{c|}{$|m_{\ell\ell} - m_Z| < 10\,\GeV$} & $|m_{\ell\ell} - m_Z| > 10\,\GeV$\\
$n_{\text{jets}}$ & $\ge 4$ & $3$ & $\ge 4$ & $\ge2$ and $\le4$\\
$n_{b{\text{-jets}}}$ & $1$ & $\ge2$ & $\ge2$ & $\ge2$\\
\bottomrule
\end{tabular}
\end{table}
A control region is used to constrain the normalisation of the $WZ$ background
in data. Exactly three leptons are required, at least one pair of which must
be an OSSF pair with an invariant mass within $10\,\GeV$ of the $Z$ boson mass.
There must be exactly three jets,
none of which pass the $b$-tagging requirement. With these requirements, the
expected \ttZ signal contribution is roughly 1\% of the total number of events.
This region is referred to as \TLCR\ and it is included in the fit. Distributions
comparing data and SM prediction are shown in Figure~\ref{fig:3l_wzcr}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\twofigwidth]{CRWZnEl}
\includegraphics[width=\twofigwidth]{CRWZpT3lep}
\caption{\label{fig:3l_wzcr} Distributions of (left) the number of electrons
and (right) the third-lepton \pt in the \TLCR\ control region before the fit.
The background
denoted `Other' contains other SM processes producing three prompt leptons.
\hatch. \oflor.}
\end{figure}
Two background validation regions are defined for the \TLC. In the first
region, $3\ell$-Z-VR, the presence of two OSSF leptons with an invariant mass
within $10\,\GeV$ of the mass of the $Z$ boson is required. The region requires
the events to have at most three jets where exactly one is $b$-tagged, or
exactly two jets where both jets are $b$-tagged. The main backgrounds are $WZ$
production and $Z$+jets events with fake leptons. In the second region,
$3\ell$-noZ-VR, events with such a pair of leptons are vetoed. This region
requires the events to have at most three jets where exactly one is $b$-tagged,
and it is dominated by the fake-lepton background from top-quark pair production.
Neither validation region is used in the fit. The distributions of the number
of electrons in each of the two validation regions are shown in
Figure~\ref{fig:3l_val}, demonstrating that data and background modelling are
in good agreement within statistical uncertainties.
\begin{figure}[htbp]
\centering
\includegraphics[width=\twofigwidth]{VR3lZ_nEl}
\includegraphics[width=\twofigwidth]{VR3lnoZ_nEl}
\caption{\label{fig:3l_val} Distributions of the number of electrons in the
(left) $3\ell$-Z-VR and (right) $3\ell$-noZ-VR validation regions, shown before
the fit. The background denoted `Other' contains other SM processes producing
three prompt leptons. \hatch.}
\end{figure}
In total, 29 events are observed in the four signal regions. Distributions of
the number of jets, number of $b$-tagged jets, missing transverse momentum and
transverse momentum of the third lepton are shown in Figure~\ref{fig:3l_sr}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\twofigwidth]{SR3lZnJets}
\includegraphics[width=\twofigwidth]{SR3lZnBJets}
\includegraphics[width=\twofigwidth]{SR3lZMET}
\includegraphics[width=\twofigwidth]{SR3lZpT3lep}
\caption{\label{fig:3l_sr} Distributions of (top left) the number of jets, (top
right) the number of $b$-tagged jets, (bottom left) the missing transverse
momentum and (bottom right) the third-lepton \pt, for events contained in any
of the three signal regions \TLSRA, \TLSRB\ or \TLSRC. The distributions are
shown before the fit. The background denoted `Other' contains other SM
processes producing three prompt leptons. \hatch. The last bin in each of the
distributions shown in the bottom panels includes the overflow.}
\end{figure}
\subsection{Tetralepton analysis}
\label{s:4L}
The \FLC\ targets the \ttZ process for the case where both $W$ bosons resulting
from top-quark decays and the $Z$ boson decay leptonically. Events with two
pairs of opposite-sign leptons are selected, and at least one pair must be
of same flavour. The OSSF lepton pair with reconstructed invariant mass closest to
$m_Z$ is attributed to the \Zboson boson decay and denoted in the following by
$Z_1$. The two remaining leptons are used to define $Z_2$. Four signal
regions are defined according to the relative flavour of the two $Z_2$ leptons,
different flavour (DF) or same flavour (SF), and the number of $b$-tagged jets:
one, or at least two ($1b$, $2b$). The signal regions are thus \FLSRB, \FLSRC,
\FLSRD\ and \FLSRE.
To suppress events with fake leptons in the 1-$b$-tag multiplicity regions, additional requirements on the scalar sum of the transverse momenta of the
third and fourth leptons ($\pttf$) are imposed. In the \FLSRD\ and \FLSRB\
regions, events are required to satisfy $\pttf > \SI{25}{\gev}$ and $\pttf >
\SI{35}{\gev}$, respectively. In all regions, the invariant mass of any two
reconstructed OS leptons is required to be larger than \SI{10}{\gev}. The signal region definitions for the \FLC\ are summarised in Table~\ref{tab:SRs4l}.
\begin{table}[htbp]
\centering \renewcommand{\arraystretch}{1.2}
\caption{\label{tab:SRs4l} Definitions of the four signal regions in the \FLC.
All leptons are required to satisfy \mbox{$\pT > 7\,\GeV$} and at least one lepton with
$\pt > 25\,\GeV$ is required to be trigger matched. The invariant mass of any two
reconstructed OS leptons is required to be larger than \SI{10}{\gev}.\vspace{1ex}}
\begin{tabular}{lccr@{}cc@{}lc}
\toprule
Region & $Z_2$ leptons & \pttf && $|m_{Z_{2}} - m_Z| $ & \met && $n_{b{\text{-tags}}}$\\
\midrule
\FLSRB & $e^{\pm}\mu^{\mp}$ & $>\SI{35}{\gev}$ &&-&-&& 1\\
\FLSRC & $e^{\pm}\mu^{\mp}$ & -&&-&-&& $\ge2$\\
\FLSRD & $e^{\pm}e^{\mp},\mu^{\pm}\mu^{\mp}$ & $>\SI{25}{\gev}$ & \begin{tabular}{@{}r@{}} \ldelim\{{2}{2ex} \\ \\ \end{tabular} &
\begin{tabular}{c}$>\SI{10}{\gev}$\\ $<\SI{10}{\gev}$\end{tabular} & \begin{tabular}{c}$>\SI{40}{\gev}$\\ $>\SI{80}{\gev}$\end{tabular} &\begin{tabular}{@{}l@{}} \rdelim\}{2}{2ex} \\ \\ \end{tabular} & 1\\
\FLSRE & $e^{\pm}e^{\mp},\mu^{\pm}\mu^{\mp}$ &-&\begin{tabular}{@{}r@{}} \ldelim\{{2}{2ex} \\ \\ \end{tabular} &
\begin{tabular}{c}$>\SI{10}{\gev}$\\ $<\SI{10}{\gev}$\end{tabular} & \begin{tabular}{c}-\\ $>\SI{40}{\gev}$\end{tabular} &\begin{tabular}{@{}l@{}} \rdelim\}{2}{2ex} \\ \\ \end{tabular} & $\ge2$ \\
\bottomrule
\end{tabular}
\end{table}
A control region used to constrain the $ZZ$ normalisation, referred to as \FLCR,
is included in the fit and
is defined to have exactly four reconstructed leptons, a \SecondZ\ pair with
OSSF leptons, the value of both \FirstZM\ and \SecondZM\ within $10\,\GeV$ of the
mass of the $Z$ boson, and $\met <40\,\GeV$. The leading lepton \pT, the
invariant mass of the $Z_2$ lepton pair, the missing transverse momentum and the
jet multiplicity in this control region are shown in Figure~\ref{fig:zz_val},
and good agreement is seen between data and prediction.
\begin{figure}[htbp]
\centering
\includegraphics[width=\twofigwidth]{CRZZpT1lep}
\includegraphics[width=\twofigwidth]{CRZZMZ2}
\includegraphics[width=\twofigwidth]{CRZZMET}
\includegraphics[width=\twofigwidth]{CRZZnJets}
\caption{\label{fig:zz_val} (Top left) Leading lepton \pt, (top right)
$m_{Z_2}$, (bottom left) missing transverse momentum and (bottom right) jet
multiplicity distributions in the \FLCR\ control region. The distributions are shown
before the fit. \hatch. The last bin of the distribution shown in the top left
panel includes the overflow.}
\end{figure}
The contribution from backgrounds containing fake leptons is estimated from
simulation and corrected with scale factors determined in two control regions:
one region enriched in \ttbar events and thus in heavy-flavour jets, and one
region enriched in $Z$+jets events, and thus in light-flavour jets. The scale factors
are calibrated separately for electron and muon fake-lepton candidates. The
scale factors are applied to all MC simulation events with fewer than four
prompt leptons according to the number and the flavour of the fake leptons.
The \ttbar scale factors are applied to MC processes with real top quarks,
while for all other processes the $Z$+jets scale factors are applied. Different
generators are used when determining the scale factors and when applying them.
It is verified that the uncertainties in the scale factors include the differences between these generators.
The expected yields in the signal and control regions in the \FLC\ are shown
in Table~\ref{tab:yields}. Five events are observed in the four signal
regions. Figure~\ref{fig:SR4l} shows the data superimposed on the expected
distributions for all four signal regions combined.
Overall, the acceptance times efficiency is 6\textperthousand\ for the \ttZ
process and 2\textperthousand\ for the \ttW process.
\begin{figure}[htbp]
\centering
\includegraphics[width=\twofigwidth]{SR4lmll}
\includegraphics[width=\twofigwidth]{SR4lnBJets}
\caption{\label{fig:SR4l} Distributions (left) of the invariant mass of the
OSSF lepton pair closest to the $Z$ boson mass, $m_{Z_1}$, and (right) of the
number of $b$-tagged jets, for events in the tetralepton signal regions. The
distributions are shown before the fit. The background denoted `Other'
contains other SM processes producing four prompt leptons. \hatch. The first
and last bin of the distribution shown in the left panel include the underflow
and overflow, respectively.}
\end{figure}
\begin{table}[htbp]
\centering \renewcommand{\arraystretch}{1.2}
\caption{\label{tab:yields} Expected event yields for signal and backgrounds,
and the observed data in all control and signal regions used in the fit to
extract the \ttZ and \ttW cross sections. The quoted uncertainties in the expected
event yields represent systematic uncertainties including MC statistical
uncertainties. The \tZ, \WtZ, \ttH, three- and four-top-quark processes are
denoted $t+X$. The $WZ$, $ZZ$, $H \to ZZ$ (ggF and VBF), $HW$, $HZ$ and VBS
processes are denoted `Bosons'.
\vspace{1ex}}
\resizebox{\columnwidth}{!}{
\begin{tabular}{%
c|
r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
|r@{\,}@{$\pm$}@{\,}l
r@{\,}@{$\pm$}@{\,}l
|r
}
\toprule
Region &
\multicolumn{2}{c}{$t+X$} &
\multicolumn{2}{c}{Bosons} &
\multicolumn{2}{c}{Fake leptons} &
\multicolumn{2}{c|}{Total bkg.} &
\multicolumn{2}{c}{\ttW} &
\multicolumn{2}{c|}{\ttZ} &
Data \\
\midrule
\TLCR& $0.52$ & $0.13$& $26.9$ & $2.2$& $2.2$ & $1.8$& $29.5$ & $2.8$& $0.015$ & $0.004$& $0.80$ & $0.13$& 33\\
\FLCR& \multicolumn{2}{c}{$<0.001$} & $39.5$ & $2.6$& $1.8$ & $0.6$& $41.2$ & $2.7$& \multicolumn{2}{c}{$<0.001$} & $0.026$ & $0.007$& 39\\
\midrule
\SSLSR& $0.94$ & $0.08$& $0.12$ & $0.05$& $1.5$ & $1.3$& $2.5$ & $1.3$& $2.32$ & $0.33$& $0.70$ & $0.10$& 9\\
\TLSRC& $1.08$ & $0.25$& $0.5$ & $0.4$& \multicolumn{2}{c}{$<0.001$} & $1.6$ & $0.5$& $0.065$ & $0.013$& $5.5$ & $0.7$& 8\\
\TLSRA& $1.14$ & $0.24$& $3.3$ & $2.2$& $2.2$ & $1.7$& $6.7$ & $2.8$& $0.036$ & $0.011$& $4.3$ & $0.6$& 7\\
\TLSRB& $0.58$ & $0.19$& $0.22$ & $0.18$& \multicolumn{2}{c}{$<0.001$} & $0.80$ & $0.26$& $0.083$ & $0.014$& $1.93$ & $0.28$& 4\\
\TLSRD& $0.95$ & $0.11$& $0.14$ & $0.12$& $3.6$ & $2.2$& $4.7$ & $2.2$& $1.59$ & $0.28$& $1.45$ & $0.20$& 10\\
\FLSRD& $0.212$ & $0.032$& $0.09$ & $0.07$& $0.113$ & $0.022$& $0.42$ & $0.08$& \multicolumn{2}{c}{$<0.001$} & $0.66$ & $0.09$& 1\\
\FLSRE& $0.121$ & $0.021$& $0.07$ & $0.06$& $0.062$ & $0.012$& $0.25$ & $0.07$& \multicolumn{2}{c}{$<0.001$} & $0.63$ & $0.09$& 1\\
\FLSRB& $0.25$ & $0.04$& $0.0131$ & $0.0032$& $0.114$ & $0.019$& $0.37$ & $0.04$& \multicolumn{2}{c}{$<0.001$} & $0.75$ & $0.10$& 2\\
\FLSRC& $0.16$ & $0.05$& \multicolumn{2}{c}{$<0.001$} & $0.063$ & $0.013$& $0.23$ & $0.05$& \multicolumn{2}{c}{$<0.001$} & $0.64$ & $0.09$& 1\\
\bottomrule
\end{tabular}
}
\end{table}
\section{Systematic uncertainties}
\label{s:systematics}
The normalisation of signal and background in each channel can be affected by
several sources of systematic uncertainty. These are described in the
following subsections.
\subsection{Luminosity}
\label{sec:syst_lumi}
The uncertainty in the integrated luminosity in the 2015 dataset is 2.1\%. It is
derived, following a methodology similar to that detailed in
Ref.~\cite{DAPR-2011-01}, from a calibration of the luminosity
scale using $x$--$y$ beam-separation scans performed in August 2015. This
systematic uncertainty is applied to all processes modelled using \MC\
simulations.
\subsection{Uncertainties associated with reconstructed objects}
\label{sec:syst_objects}
Uncertainties associated with the lepton selection arise from imperfect
knowledge of the trigger, reconstruction, identification and isolation
efficiencies, and lepton momentum scale and resolution \cite{PERF-2013-03,
ATL-PHYS-PUB-2011-006, ATLAS-CONF-2014-032, ATL-PHYS-PUB-2015-041,
PERF-2015-10}. The uncertainty in the electron identification
efficiency is the largest systematic uncertainty in the \TLC\ and among the
most important ones in the \FLC.
Uncertainties associated with the jet selection arise from the jet energy scale
(JES), the JVT requirement and the jet energy resolution (JER).
Their estimations are based on Run-1 data and checked with early
Run-2 data. The JES and its uncertainty are derived by combining information from
test-beam data, collision data and simulation~\cite{PERF-2012-01}. JES
uncertainty components arising from the in-situ calibration and the jet flavour
composition are among the dominant uncertainties in the \SSLSR\ and \TL\
channels. The uncertainties in the JER and JVT have a significant effect at
low jet \pt. The JER uncertainty results in the second largest uncertainty in the
\TLC.
The efficiency of the flavour-tagging algorithm is measured for each jet
flavour using control samples in data and in simulation. From these
measurements, correction factors are defined to correct the tagging rates in
the simulation. In the case of $b$-jets, correction factors and their
uncertainties are estimated based on observed and simulated $b$-tagging rates
in \ttbar dilepton events~\cite{ATLAS-CONF-2014-004}. In the case of $c$-jets,
they are derived based on jets with identified $D^{*}$
mesons~\cite{ATLAS-CONF-2014-046}. In the case of light-flavour jets,
correction factors are derived using dijet events~\cite{ATLAS-CONF-2014-046}.
Sources of uncertainty affecting the $b$- and $c$-tagging efficiencies are
considered as a function of jet \pt, including bin-to-bin
correlations~\cite{ATLAS-CONF-2014-004}. An additional uncertainty is assigned
to account for the extrapolation of the $b$-tagging efficiency measurement from
the \pT region used to determine the scale factors to regions with higher \pT.
For the efficiency to tag light-flavour jets, the dependence of the uncertainty on the jet \pt and $\eta$ is considered. These systematic uncertainties are taken as
uncorrelated between $b$-jets, $c$-jets, and light-flavour jets.
The treatment of the uncertainties associated with reconstructed objects is common to all
three channels, and thus these are considered as correlated among different
regions.
\subsection{Uncertainties in signal modelling}
\label{sec:signal_modeling}
From the nominal \MGAMC+\PYTHIA~8 (\tune{A14} tune) configuration, two
parameters are varied to investigate uncertainties from the modelling of the
\ttZ and \ttW processes: the renormalisation ($\mu_{\rm R}$) and factorisation ($\mu_{\rm F}$) scales.
A simultaneous variation of $\mu_{\rm R} = \mu_{\rm F}$ by factors $2.0$ and $0.5$ is performed. In addition, the effects of a set of variations in the tune parameters
(\tune{A14} eigentune variations), sensitive to initial- and final-state
radiation, multiple parton interactions and colour reconnection, are evaluated.
Studies performed at particle level show that the largest impact comes from
variations in initial-state radiation~\cite{ATL-PHYS-PUB-2016-005}. The
systematic uncertainty due to the choice of generator for the \ttZ and \ttW signals
is estimated by comparing the nominal sample with one generated with \SHERPA
v2.2. The \SHERPA sample uses the LO matrix element with up to one (two) additional parton(s) included
in the matrix element calculation for \ttZ (\ttW) and merged with the \SHERPA
parton shower~\cite{Schumann:2007mg} using the \textsc{ME+PS@LO} prescription.
The \pdf{NNPDF3\!.\!0NLO} PDF set is used in conjunction
with a dedicated parton shower tune developed by the \SHERPA authors. Signal
modelling uncertainties are treated as correlated among channels.
\subsection{Uncertainties in background modelling}
\label{sec:bkg_modeling}
In the \TL\ and \SSLSR\ channels, the diboson background is dominated by $WZ$
production, while $ZZ$ production is dominant in the \FLC. While the inclusive
cross sections for these processes are known to better than 10\%, they
contribute to the background in these channels if additional $b$-jets and
other jets are produced and thus have a significantly larger uncertainty.
In the \TL\ and \SSLSR\ channels, the normalisation of the $WZ$ background is
treated as a free parameter in the fit used to extract the \ttZ and \ttW signals. The
uncertainty in the extrapolation of the $WZ$ background estimate from the
control region to signal regions with specific jet and $b$-tag multiplicities
is evaluated by comparing predictions obtained by varying the renormalisation,
factorisation and resummation scales used in MC generation. The
uncertainties vary across the different regions and an overall uncertainty of
$-50\%$ and $+100\%$ is used.
The normalisation of the $ZZ$ background is treated as a free parameter in the
fit used to extract the \ttZ and \ttW signals. In the \FLC, several uncertainties in
the $ZZ$ background estimate are considered. They arise from the extrapolation
from the \FLCR\ control region (corresponding to on-shell $ZZ$ production) to
the signal region (with off-shell $ZZ$ background) and from the extrapolation
from the control region without jets to the signal region with at least one
jet. They are found to be 30\% and 20\%, respectively. An additional
uncertainty of 10--30\% is assigned to the normalisation of the heavy-flavour
content of the $ZZ$ background, based on a data-to-simulation comparison of
events with one $Z$ boson and additional jets and cross-checked with a
comparison between different $ZZ$ simulations~\cite{TOPQ-2013-05}.
The uncertainty in the \ttH background is evaluated by varying the factorisation and renormalisation scales up and down by a factor of two with respect to the nominal value, $H_{\rm T}/2$, where $H_{\rm T}$ is defined as the scalar sum of the transverse masses $\sqrt{\pt^2+m^2}$ of all final state particles.
For the \tZ background, an overall normalisation uncertainty of 50\% is
assumed. An additional uncertainty affecting the distribution of this
background as a function of jet and $b$-jet multiplicity is evaluated by
varying the factorisation and renormalisation scales, as well as the amount of
radiation in the \tune{Perugia2012} parton shower tune.
An uncertainty of $+10\%$ and $-22\%$ is assigned to the \WtZ background cross
section. The uncertainty is asymmetric due to an alternative estimate of the
interference effect between this process and the \ttZ production. The shape
uncertainty is evaluated by varying the factorisation and renormalisation
scales up and down by a factor of two with respect to the nominal value $H_{\rm T}/2.$
For other prompt-lepton backgrounds, uncertainties of 20\% are assigned to the
normalisations of the $WH$ and $ZH$ processes, based on calculations from
Ref.~\cite{Heinemeyer:2013tqa}. An uncertainty of 50\% is considered for
triboson and same-sign $WW$ processes.
The fake-lepton background uncertainty is evaluated as follows. The uncertainty due to the matrix method is estimated by propagating the
statistical uncertainty on the measurement of the fake-lepton efficiencies.
Additionally, a 20\% uncertainty is added to the subtracted charge-flip yields
estimated as the difference between data-driven charge-flips and simulation,
and the \met requirement used to enhance the single-fake-lepton fraction is varied by
$20\,\GeV$. The main sources
of fake muons are
decays of light-flavour or heavy-flavour hadrons inside jets. For the \SSLSR\
region, the flavour composition of the jets faking leptons is assumed to be
unknown. To cover this uncertainty, the central values of the fake-lepton
efficiencies extracted from the $b$-veto and the $b$-tag control regions are
used, with the efficiency difference assigned as an extra uncertainty. For the
\FL\ channel, fake-lepton systematic uncertainties are covered by the
scale-factor uncertainties used to calibrate the simulated fake-lepton yield in
the control regions. Within a fake-lepton estimation method, all systematic
uncertainties are considered to be correlated among analysis channels and
regions. Thus \SSLSR\ and \TL\ fake-lepton systematic uncertainties that use the
matrix method are not correlated with the \FL\ systematic uncertainties. The
expected uncertainties in the fake-lepton backgrounds relative to the total
backgrounds vary in each channel and signal region: 50\% for the \SSLSR\
region, 25--50\% for the \TLC\ and 5--10\% for the \FLC.
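As an illustration of the matrix method and of the efficiency-uncertainty propagation described above, the following Python sketch solves the single-lepton loose/tight system and samples the efficiencies. All numbers and function names are invented for illustration; this is not the implementation or the values used in this analysis.
\begin{verbatim}
import numpy as np

def fake_yield_in_tight(n_loose, n_tight, eff_real, eff_fake):
    # Single-lepton matrix method: invert
    #   [ n_loose ]   [ 1         1        ] [ n_real ]
    #   [ n_tight ] = [ eff_real  eff_fake ] [ n_fake ]
    # and return the fake-lepton contribution to the tight selection.
    m = np.array([[1.0, 1.0], [eff_real, eff_fake]])
    n_real, n_fake = np.linalg.solve(m, [n_loose, n_tight])
    return eff_fake * n_fake

# Propagate the statistical uncertainty in the efficiencies by sampling
# (all values are invented, for illustration only):
rng = np.random.default_rng(0)
effs = rng.normal([0.90, 0.20], [0.02, 0.04], size=(10000, 2))
yields = [fake_yield_in_tight(5000.0, 3200.0, er, ef) for er, ef in effs]
print(np.mean(yields), np.std(yields))
\end{verbatim}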
|
1,108,101,564,143 | arxiv | \section{Introduction}
\vspace{-1mm}
Nowadays, most competitive natural language processing systems are based on supervised machine learning. Despite the great successes obtained by those techniques, they unfortunately still suffer from important limitations. One of them is their sensitivity to domain shift: for example, a state-of-the-art part-of-speech tagger trained on the Wall Street Journal section of the Penn treebank achieves an accuracy of $97 \%$ when tested on sentences from the Wall Street Journal, but only $90 \%$ when tested on textual data from the Web~\cite{petrov2012overview}. This drop in performance can also be observed for other tasks such as syntactic parsing or named entity recognition.
One of the explanations for this drop in performance is the large lexical difference that exists across domains. This results in many out-of-vocabulary (OOV) words in the test data, \emph{i.e.}, words of the test data that were not observed in the training set. For example, more than $25\%$ of the tokens of the test data from the Web corpus~\cite{petrov2012overview} are unobserved in the training data from the WSJ. By comparison, only $11.5\%$ of the tokens of the test data from the WSJ are unobserved in the training data from the WSJ. Part-of-speech taggers make most of their errors on those out-of-vocabulary words.
Labeling enough data to obtain a high accuracy for each new domain is not a viable solution. Indeed, it is expensive to label data for natural language processing, because it requires expert knowledge in linguistics. Thus, there is an important need for transfer learning, and more precisely for domain adaptation, in computational linguistics. A common solution consists in using large quantities of unlabeled data, from both source and target domains, in order to learn a good word representation. This representation is then used as features to train a supervised classifier that is more robust to domain shift. Depending on how much data from the source and the target domains are used, this method can be viewed as performing semi-supervised learning or domain adaptation. The goal is to reduce the impact of out-of-vocabulary words on performance. This scheme was first proposed to reduce data sparsity for named entity recognition~\cite{freitag2004trained,miller2004name}, before being applied to domain adaptation for part-of-speech tagging~\cite{huang2009distributional,huang2011language} or syntactic parsing~\cite{candito2011word,seddah2012alpage,hayashi2012naist,wu2012semi}.
Hidden Markov models have already been considered in previous work to learn word representations for domain adaptation~\cite{huang2009distributional,huang2011language} or semi-supervised learning~\cite{grave2013hidden}. Our contributions in this paper are mostly experimental: we compare different word representations that can be obtained from an HMM and study the effect of training the unsupervised HMM on source, target or both domains. While previous work mostly use Viterbi decoding to obtain word representations from an HMM, we empirically show that posterior distributions over latent classes give better results.
\section{Method}
\vspace{-1mm}
In this section, we describe the method we use to perform domain adaptation for word sequence labeling. We suppose that we have large quantities of unlabeled word sequences from both the source domain and the target domain. In addition, we suppose that we have labeled sentences only for the source domain. Our method is a two-step procedure: first, we train an unsupervised model on the unlabeled data; second, we use this model to obtain word representations used within a supervised classifier trained on labeled data. We now describe each step of the method in greater detail.
\subsection{First step: unsupervised learning of an HMM}
\vspace{-1mm}
We start by training a hidden Markov model using unlabeled data. The words forming a sequence are denoted by the $K$-tuple $\mathbf{w} = (w_1, ..., w_K) \in \{ 1, ..., V\}^K$, where $K$ is the length of the sequence and $V$ is the size of the vocabulary. Similarly, we note the corresponding latent classes by the $K$-tuple $\mathbf{c} = (c_1, ..., c_K) \in \{ 1, ..., N\}^K$, where $N$ is the number of latent classes. The joint probability distribution on words and classes thus factorizes as
\begin{equation*}
p(W = \mathbf{w},\ C = \mathbf{c}) = \prod_{k=1}^K p(C_k = c_k \ | \ C_{k-1} = c_{k-1}) \ p(W_k = w_k \ | \ C_k = c_k).
\end{equation*}
We follow the method described by~\cite{grave2013hidden}, which consists in using the online variant of the expectation-maximization algorithm proposed by~\cite{cappe2009online} in order to scale to datasets with millions of word sequences, and to use an approximate message passing algorithm~\cite{pal2006sparse} to perform inference. Its complexity is $O(N \log N)$, where $N$ is the number of latent classes, compared to the $O(N^2)$ complexity of the exact message passing algorithm.\footnote{We use the \emph{k-best} approximation described by \cite{grave2013hidden}, setting $k$ to $3 \log_2(N)$.} This allows us to train models with hundreds of latent classes on tens of millions of sequences on a single core in a day.
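For reference, the following sketch implements the exact $O(N^2)$ forward-backward recursion that produces the posterior distributions used in the next section; the approximate $O(N \log N)$ algorithm of~\cite{grave2013hidden} replaces the full matrix-vector products by sparse \emph{k-best} ones. This is a minimal Python illustration, not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def posteriors(words, T, E, pi):
    # Exact forward-backward posteriors p(C_k = c | W = w) for one sentence.
    #   T[c', c] = p(C_k = c | C_{k-1} = c')   (N x N)
    #   E[c, w]  = p(W_k = w | C_k = c)        (N x V)
    #   pi[c]    = p(C_1 = c)                  (N,)
    K, N = len(words), len(pi)
    alpha, beta = np.zeros((K, N)), np.zeros((K, N))
    alpha[0] = pi * E[:, words[0]]
    alpha[0] /= alpha[0].sum()                    # rescale for stability
    for k in range(1, K):
        alpha[k] = (alpha[k-1] @ T) * E[:, words[k]]
        alpha[k] /= alpha[k].sum()
    beta[-1] = 1.0
    for k in range(K - 2, -1, -1):
        beta[k] = T @ (E[:, words[k+1]] * beta[k+1])
        beta[k] /= beta[k].sum()
    post = alpha * beta                           # row k is u^(k), unnormalised
    return post / post.sum(axis=1, keepdims=True)
\end{verbatim}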
\subsection{Second step: supervised learning of a CRF}
\vspace{-1mm}
\label{wordrep}
We then use the learnt unsupervised hidden Markov model to obtain features that are used to train a conditional random field~\cite{lafferty2001conditional} using the labeled sequences. We consider three kinds of features that can be obtained using an HMM. The first one, referred to as \textsc{Viterbi}, consists in computing the most probable sequence of latent classes corresponding to the sentence $\mathbf{w}$ by using Viterbi decoding:
\begin{equation*}
\mathbf{\hat c} = \argmax_{\mathbf{c}} p(C = \mathbf{c} \ | \ W = \mathbf{w}) = \argmax_{\mathbf{c}} p(W = \mathbf{w},\ C = \mathbf{c}).
\end{equation*}
Then we use the latent class $\mathbf{\hat c}_k$ as a categorical feature for the $k$th word of the sequence. The second one, referred to as \textsc{Posterior-Token}, consists in computing for the $k$th word of the sentence~$\mathbf{w}$ its posterior distribution $\mathbf{u}^{(k)}$ over the $N$ latent classes:
\begin{equation*}
u_{c}^{(k)} = \mathbb{E} \left[ \mathbf{1}{\{C_k = c\}} \ | \ W = \mathbf{w} \right].
\end{equation*}
This distribution is then used as $N$ continuous features. Finally, the last one, referred to as \textsc{Posterior-Type}, consists in computing for each word type $\tilde w$ of the vocabulary its posterior distribution $\mathbf{v}^{(\tilde w)}$ over the $N$ latent classes, averaged over all the occurrences of this word type in the unlabeled data $\mathcal{U}$:
\begin{equation*}
\mathbf{v}^{(\tilde w)} = \frac{1}{Z_{\tilde w}} \sum_{i \in \mathcal{U} \ : \ w_i = \tilde w} \mathbf{u}^{(i)}
\end{equation*}
where $Z_{\tilde w}$ is the number of occurrences of the word type $\tilde w$.
Let us briefly discuss the differences between those three word representations. First, the \textsc{Viterbi} representation is categorical, while both \textsc{Posterior} representations are vectorial and continuous. They thus capture much finer information, but on the other hand, lead to a slower learning algorithm for the CRF, since they are not sparse. Another difference is the fact that both the \textsc{Viterbi} and \textsc{Posterior-Token} representations depend on the context, while the \textsc{Posterior-Type} representation is independent of the context, and thus all tokens with the same word type have the same representation. \textsc{Viterbi}, \textsc{Posterior-Token} and \textsc{Posterior-Type} representations have been previously used for weakly supervised learning (see \cite{huang2009distributional,huang2011language,grave2013hidden}), but have never been compared.
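As a minimal sketch of how the two posterior representations are assembled (in Python, reusing the \texttt{posteriors} function above; all names are ours, for illustration), the \textsc{Viterbi} features would instead be obtained by standard max-product decoding:
\begin{verbatim}
from collections import defaultdict
import numpy as np

def posterior_features(corpus, T, E, pi):
    # corpus: list of sentences, each a list of word-type indices.
    # Returns POSTERIOR-TOKEN features (one (K, N) array per sentence)
    # and POSTERIOR-TYPE features (one length-N vector per word type).
    token_feats = [posteriors(sent, T, E, pi) for sent in corpus]
    sums = defaultdict(lambda: np.zeros(len(pi)))
    counts = defaultdict(int)
    for sent, post in zip(corpus, token_feats):
        for w, u in zip(sent, post):
            sums[w] += u                  # accumulate u^(k) per word type
            counts[w] += 1
    type_feats = {w: sums[w] / counts[w] for w in sums}   # v^(w)
    return token_feats, type_feats
\end{verbatim}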
\section{Experiments}
\vspace{-1mm}
In this section, we describe the experiments we carried out on part-of-speech tagging. We use the universal part-of-speech tagset, introduced by~\cite{petrov2012universal}, which comprises twelve coarse tags. As a baseline, we use a CRF trained on the source domain, with no word representation and no features other than the word type. In the following, we consider HMMs with $128$ latent classes.
\subsection{Datasets}
\vspace{-1mm}
In all our experiments, the source domain corresponds to news articles. We use the first $2,000$ sentences of the Wall Street Journal section of the Penn treebank as labeled data for training the CRF. The first ten years of the New York Times corpus (1987-1997) are used as unlabeled data to train the hidden Markov model. This corresponds to approximately $15$ million sentences and $300$ million tokens.
The first target domain we consider corresponds to textual data from abstracts of biomedical articles. We use the first $5,000$ sentences of the Genia treebank~\cite{tateisi2005syntax} to evaluate our method. In this test set, approximately $40\%$ of the tokens were not observed in the training set. For the unlabeled data used to train the HMM, we have collected approximately $8$ million sentences using the PubMed online library,\footnote{http://www.ncbi.nlm.nih.gov/pubmed} by performing a search with the keyword \emph{cancer}.
The second target domain we consider corresponds to textual data from Twitter. We use the $547$ sentences of the \textsc{Daily547} test set of~\cite{owoputi2013improved} to evaluate our method. In this test set, approximately $35\%$ of the tokens were not observed in the training set. For the unlabeled data used to train the HMM, we have collected approximately 2 million Twitter messages in English.
\begin{table}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \hspace{1mm} & \multicolumn{3}{c}{\textbf{Biomedical}}
& \hspace{2mm} & \multicolumn{3}{c}{\textbf{Twitter}} \\
\midrule
&& \textsc{Source} & \textsc{Both} & \textsc{Target}
&& \textsc{Source} & \textsc{Both} & \textsc{Target} \\
\midrule
\textsc{Viterbi} && 87.8 & 90.2 & 89.9 && 64.5 & 66.5 & 67.4 \\
\textsc{Posterior-Token} && 90.4 & 91.5 & 91.4 && 67.6 & 68.5 & 70.4 \\
\textsc{Posterior-Type} && 90.4 & 89.9 & 92.2 && 68.5 & 69.4 & 71.5 \\
\textsc{Posterior-Both} && 91.7 & 92.0 & \textbf{92.9} && 69.4 & 70.3 & \textbf{72.4} \\
\midrule
\textsc{Baseline} && & 84.6 & && & 62.4 \\
\bottomrule
\end{tabular}
\caption{Domain adaptation results for part-of-speech tagging. Unsupervised models were trained using sentences only from the news domain (\textsc{Source}), from both domains (\textsc{Both}) and only from the target domain (\textsc{Target}). The labeled data used to train the CRF \emph{only} come from the \emph{source} domain.}
\label{results-wordrep-bio}
\end{table}
\subsection{Comparison of word representations}
\vspace{-1mm}
In this section, we perform experiments to compare the different word representations we have introduced in section~\ref{wordrep}. In particular, we compare the three representations, \textsc{Viterbi}, \textsc{Posterior-Token} and \textsc{Posterior-Type}. We also consider a fourth representation, called \textsc{Posterior-Both}, which is just the concatenation of the two posterior representations introduced in section~\ref{wordrep}.
We also compare unsupervised HMMs trained using sentences only from the source domain, only from the target domain, and from both domains, in order to determine the importance of the domain from which the unlabeled data come. Indeed, as far as we know, previous work has always considered using unlabeled data coming from both domains, and never investigated changing the weight of a sentence based on the domain from which it comes. We recall that all the labeled sentences used to train the CRF come from the \emph{source} domain. We report results for adaptation to both domains in table~\ref{results-wordrep-bio}.
First, we observe that features obtained by Viterbi decoding are outperformed by posterior based representations. This is not really surprising, since the latter capture more information. We also note that using both posterior representations gives better results than using only one of them. Finally, the best results are attained for word representations trained only on unlabeled sentences from the target domain.
\subsection{Using labeled data from the target domain}
\vspace{-1mm}
In this section, we assume that we also have some labeled sentences from the target domain. We compare the performance obtained by training a CRF using only this small amount of labeled data with training a CRF on labeled sentences from both the source and target domains. We report results for adaptation to the biomedical domain and Twitter in figure~\ref{results-target}.
We note that adding sentences from the target domain to the training set improves the results differently for the two domains: we observe a $0.5$ point improvement for the biomedical domain and a $9.5$ point improvement for Twitter. Moreover, we observe that when enough sentences from the target domain are available, using data from the source domain for training is not useful (biomedical domain), or even gives worse results (Twitter). We have not investigated weighting sentences differently depending on whether they come from the source or the target domain.
\begin{figure*}[t]
\centering
\includegraphics[height=6.2cm]{res.pdf}
\caption{Domain adaptation for part-of-speech tagging. In this experiment, we consider adding labeled sentences from the target domain to the training set. \textsc{Baseline} is a CRF without word representation, trained on the \emph{labeled} sentences from both domains. \textsc{Target only} is the accuracy obtained by training a CRF with word representation using only the \emph{labeled} sentences from the target domain. The scale of the $x$-axis is logarithmic.}
\label{results-target}
\end{figure*}
\section{Discussion}
\vspace{-2mm}
In this paper, we presented a very simple yet efficient method for domain adaptation for sequence tagging, and conducted experiments on part-of-speech tagging in two domains. We demonstrated that this method gives very good results when no labeled data from the target domain are available. When a large enough quantity of labeled data from the target domain is used, it is not necessarily useful to use data from the source domain. Whether it is still possible to leverage data from the source domain is an interesting question for future work.
\clearpage
|
1,108,101,564,144 | arxiv | \section{Introduction}
In astronomy data analysis (e.g. cosmological analysis), we often need to combine different data sets for joint analysis. In certain cases, the desirable quantity to extract from the joint data is the ratio of two data sets. We list several examples as follows. (1) The lensing ratio \citep{Jain2003, Bernstein2004, ZhangJun2005}. This is the ratio of galaxy-galaxy lensing of different sources (e.g. cosmic shear at various redshifts and CMB lensing) but identical lenses. The ratio provides a clean measure on the geometry of the universe. It has been measured for various data combinations \citep{Taylor2007, Das2009, Kitching2015, Baxter2016, Miyatake2017, DESY1_2018, Prat2018, Prat2019, DESY3_2021}. (2) The decay rate $DR$ of gravitational potential, which is the ratio between galaxy-ISW cross-correlation and galaxy-lensing cross correlation \citep{Zhang2006ISW}. It has recently been measured by \citet{Dong2022} combining DESI galaxy-Planck ISW/lensing cross-correlations. The $DR$ measurement, in combination with BAO or SNe Ia data, improved constraints of dark energy by $\sim 20$-$50\%$ \citep{Dong2022}. (3) Interloper rate due to line confusion in spectroscopic redshift surveys. This particular error in redshift measurement can be approximated as the ratio between the cross-correlation of two target galaxy samples and the auto-correlation (e.g. \citet{Gong2021, Addison2019, Farrow2021}). (4) $E_G$ as a probe of gravity at cosmological scales \citep{Zhang2007E_G}. It is essentially the ratio between cross-correlations of galaxy-velocity and galaxy-lensing. It has been measured by \citet{Reyes2010, Leonard2015, Pullen2016, Blake2016, Alam2017, delaTorre2017, Amon2018, Singh2019, Skara2020S, ZhangYucheng2021}, using various data of redshift space distortion (RSD) and weak lensing.
The ratio is therefore straightforward to measure by simply taking the ratio of the two corresponding measurements. However, there are a number of reasons to improve this naive estimator. (1) The naive estimator is not only sub-optimal in terms of statistical errors, but also biased. Suppose that we have two measurements, $D_1=\bar{D}_1(1+n_1)$ and $D_2=\bar{D}_2(1+n_2)$. Here $\bar{D}_{1,2}$ are the true values, and the theory predicts $\bar{D}_2/\bar{D}_1=R$. $n_{1,2}$ are the fractional errors, and for brevity we assume only statistical errors here ($\langle n_i\rangle=0$). The naive estimator $\hat{R}=D_2/D_1$ is biased since
\begin{equation}
\begin{aligned}
\langle \hat{R}\rangle & =\left\langle \frac{D_2}{D_1}\right\rangle=R\left(1+\langle n_1^2\rangle-\langle n_1n_2 \rangle+\cdots\right)\neq R\ .
\end{aligned}
\end{equation}
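A toy Monte Carlo (with invented numbers) confirms this leading-order bias of $R\langle n_1^2\rangle$ for uncorrelated errors:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
R_true, sigma = 2.0, 0.1                  # invented toy values
n1 = rng.normal(0.0, sigma, 10**6)
n2 = rng.normal(0.0, sigma, 10**6)
D1 = 1.0 * (1.0 + n1)
D2 = R_true * (1.0 + n2)
print(np.mean(D2 / D1))   # ~ R_true * (1 + sigma**2) = 2.02, biased high
\end{verbatim}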
(2) Furthermore, in some applications the physically meaningful $R$ is not directly the ratio of two data sets, but rather the ratio of some underlying models. For example, one data set is the galaxy-tangential shear cross-correlation, and the other is the galaxy-CMB lensing convergence cross-correlation. The first is related to the galaxy-lensing power spectrum through the Bessel function $J_2(x=\ell \theta)$. The second is related to $J_0(x)$ instead. So although the underlying galaxy-lensing power spectra follow the proportionality relation, the data sets do not. Another example is $E_G$, as measured by combining a 3D galaxy-velocity power spectrum inferred from RSD and a 2D galaxy-lensing angular power spectrum. In both cases, we can not simply take the ratio of two data sets to obtain the true ratio. (3) A further issue is that there are multiple $R$ of interest (namely ${\bf R}=(R_1, R_2, \cdots)$), but the corresponding data sets to measure them are correlated. This is the case for the interloper rate, and also the more general case of photo-z outliers.
We present a likelihood-based optimal estimator of the ratio, free of the above problems. Under the usual assumption of Gaussian errors in the data, we derive the exact analytical expression for the posterior PDF $P(R)$ (or the joint PDF $P({\bf R})=P(R_1,R_2,\cdots)$). Since the expression is exact, it enables unbiased $R$ estimation with minimum statistical uncertainty. The method has been applied in a companion paper \citep{Dong2022} to measure the decay rate of gravitational potential at cosmological scales. Here we present a thorough description of the method, and further demonstrate its applicability with the lensing ratio measurement as an example.
The paper is organized as follows. In \S \ref{sec: methodology}, we introduce the method. In \S \ref{sec: application}, we show the application of this method by measuring lensing ratios, including the measurements of the lensing ratios (\S \ref{subsec: measurements of lensing ratios}) and the consistency checks (\S \ref{subsec: consistency checks}). We conclude in \S \ref{sec: conclusions}. Appendix \ref{sec: data pre-processing} describes the data pre-processing. Appendix \ref{sec: statistics} shows the details on the lensing ratio statistics, ratio modeling and implications of the measured ratio.
\section{Methodology}
\label{sec: methodology}
The problem to be solved can be formulated as follows. We have two data sets, $\mathbf{d}_1$ and $\mathbf{d}_2$. The theory expectation of $\mathbf{d}_1$ is fixed by the theory parameter vector $\lambda$: $\mathbf{d}_1 = \mathbf{A}_1\lambda + \mathbf{n}_1$. Here the mapping matrix $\mathbf{A}_1$ is of dimension $N_\mathbf{d} \times N_\lambda$, and $\mathbf{n}_1$ is the corresponding noise, which we assume to be Gaussian with zero mean. $\mathbf{d}_2$ is fixed by the same set of theory parameters $\lambda$ and an extra parameter $R$. The dependence on $R$ is through the second mapping matrix $\mathbf{A}_2(R)$: $\mathbf{d}_2 = \mathbf{A}_2(R)\lambda + \mathbf{n}_2$. The simplest case is ${\bf A}_2=R{\bf A}_1$. This is the case for the shear ratio, where $d_i=w_i(\theta)$, $\lambda=\langle w_1(\theta)\rangle$, and $w_i(\theta)$ are the two galaxy-shear cross-correlations. But in general $\mathbf{A}_2$ and $\mathbf{A}_1$ are independent, and ${\bf A}_2(R)$ can be an arbitrary function of $R$ or even ${\bf R}=(R_1,R_2,\cdots)$. For brevity, we work with the case of a single $R$. The extension to the more general case of ${\bf R}$ is straightforward.
We can study this problem based on the Bayesian analysis,
\begin{equation}\label{eq: P_R}
P(R|\mathbf{d}_1, \mathbf{d}_2) \propto \int P(\mathbf{d}_1, \mathbf{d}_2 |R, \lambda) P(\lambda)d\lambda \ .
\end{equation}
We take a flat prior ($P(\lambda) \propto$ const.) in order not to introduce extra model dependence. What we find is that, for Gaussian-distributed data, the marginalization over $\lambda$ can be done analytically, so we can obtain an analytical expression for $P(R)$. The expression depends on whether ${\bf d}_{1,2}$ are correlated. We first derive the result for the simpler case of uncorrelated ${\bf d}_{1,2}$, and then proceed to the general case of correlated ${\bf d}_{1,2}$.
\subsection{Uncorrelated \texorpdfstring{$\mathbf{d}_{1,2}$}{}}
When measurement errors in $\mathbf{d}_{1}, \mathbf{d}_2$ are uncorrelated, the joint likelihood on the right-hand-side of Eq.~\ref{eq: P_R} can be separated into the product of two individual likelihood functions,
\begin{equation}
\label{eqn:P12}
P(\mathbf{d}_1, \mathbf{d}_2| R, \lambda) = P(\mathbf{d}_1 | R,\lambda) P(\mathbf{d}_2 | R,\lambda) \ .
\end{equation}
For Gaussian distributed $\mathbf{d}_{1,2}$,
\begin{equation}
P(\mathbf{d}_{i} | R,\lambda) = \frac{1}{\sqrt{(2\pi)^N {\rm det} \mathbf{C}_i}} {\rm exp}\bigg[-\frac{1}{2} \mathbf{\Delta}_i^T \mathbf{C}_i^{-1} \mathbf{\Delta}_i \bigg] \ . \nonumber
\end{equation}
Here $\mathbf{\Delta}_i\equiv \mathbf{d}_i- \mathbf{A}_i\lambda$, and $\mathbf{C}_i$ is the covariance matrix of ${\bf d}_i$, ${\bf C}_i\equiv \langle \mathbf{n}_i \mathbf{n}_i^T \rangle$. All vectors are column vectors by default, like $\lambda^T = (\lambda_1, \lambda_2, ..., \lambda_N)$. $N$ is the size of $\mathbf{d}_{1,2}$. Plugging the above expression into Eqs.~\ref{eq: P_R} \&~\ref{eqn:P12},
\begin{equation}
\label{eqn:PE}
P(R|\mathbf{d}_1, \mathbf{d}_2)\propto \int \exp(E)d\lambda\ .
\end{equation}
Here we have ignored proportionality prefactors that do not depend on $\lambda$.
The quantity $E$ in the exponential is
\begin{equation}
\label{eq:expan exp}
\begin{aligned}
E= -\frac{1}{2}& (\lambda^T \mathbf{A}_1^T \mathbf{C}_1^{-1} \mathbf{A}_1 \lambda - \lambda^T\mathbf{A}_1^T \mathbf{C}_1^{-1}\mathbf{d}_1 - \mathbf{d}_1^T\mathbf{C}_1^{-1}\mathbf{A}_1\lambda + \mathbf{d}_1^T \mathbf{C}_1^{-1}\mathbf{d}_1 \\
&+\lambda^T \mathbf{A}_2^T \mathbf{C}_2^{-1} \mathbf{A}_2 \lambda - \lambda^T\mathbf{A}_2^T \mathbf{C}_2^{-1}\mathbf{d}_2 - \mathbf{d}_2^T\mathbf{C}_2^{-1}\mathbf{A}_2\lambda + \mathbf{d}_2^T \mathbf{C}_2^{-1}\mathbf{d}_2) \ .
\end{aligned}
\end{equation}
If we let $\mathbf{Q} \equiv \mathbf{Q}_1 + \mathbf{Q}_2$ with ${\bf Q}_i\equiv \mathbf{A}_i^T \mathbf{C}_i^{-1} \mathbf{A}_i$, and $\mathbf{T}\equiv \mathbf{T}_1 + \mathbf{T}_2$ with ${\bf T}_i\equiv \mathbf{A}_i^T \mathbf{C}_i^{-1}\mathbf{d}_i$, and group terms in powers of $\lambda$, we can rewrite Eq.~\ref{eq:expan exp} as
\begin{equation}
\label{eq:A5}
E= -\frac{1}{2} (\lambda^T \mathbf{Q}\lambda -\lambda^T \mathbf{T} - \mathbf{T}^T \lambda) \ .
\end{equation}
Here we ignore the two terms $\mathbf{d}_1^T \mathbf{C}_1^{-1}\mathbf{d}_1$ and $\mathbf{d}_2^T \mathbf{C}_2^{-1}\mathbf{d}_2$, which do not depend on $\lambda$ and therefore do not affect the shape of $P(R)$. Since ${\bf C}_i^T={\bf C}_i$, we have $\mathbf{Q}_i^T = \mathbf{Q}_i$ and ${\bf Q}^T={\bf Q}$, so ${\bf Q}$ is a real symmetric (hence Hermitian) matrix with real eigenvalues. Assuming that it is invertible, we can rewrite $E$ as
\begin{equation}
E= -\frac{1}{2} \Big[ (\lambda - \mathbf{Q}^{-1} \mathbf{T})^T \mathbf{Q} (\lambda - \mathbf{Q}^{-1} \mathbf{T}) - \mathbf{T}^T \mathbf{Q}^{-1}\mathbf{T} \Big] \ .
\end{equation}
Plugging this into Eq.~\ref{eqn:PE} and integrating away $\lambda$, we obtain the first major result of this paper,
\begin{equation}
\label{eq: P_R uncorr}
P(R|\mathbf{d}_1, \mathbf{d}_2) \propto [{\rm det}\,\mathbf{Q}]^{-1/2} {\rm exp} \bigg[ \frac{1}{2} \mathbf{T}^T \mathbf{Q}^{-1}\mathbf{T} \bigg] \ .
\end{equation}
Here we have used the relation
\begin{equation}
G(\mathbf{Q}) \equiv \int {\rm exp} \bigg[ -\frac{1}{2} \mathbf{z}^T\mathbf{Q}\mathbf{z} \bigg] \prod \limits_{i=0}^N dz_i = (2\pi)^{N/2} ({\rm det}\mathbf{Q})^{-1/2} \ .
\end{equation}
Notice that in Eq. \ref{eq: P_R uncorr}, both ${\bf Q}$ and ${\bf T}$ depend on $R$, through ${\bf A}_2(R)$. There is no analytical expression for the bestfit $R$, so we have to numerically evaluate $P(R)$. For the numerical evaluation, we should instead evaluate
\begin{equation}
\ln P(R|{\bf d}_1,{\bf d}_2)= -\frac{1}{2}\ln {\rm det}\,\mathbf{Q}(R)+\frac{1}{2} \mathbf{T}^T(R) \mathbf{Q}^{-1}(R)\mathbf{T}(R)+{\rm const.}\ .
\end{equation}
To avoid numerical errors associated with too large/small exponential terms, a safer way is to evaluate the r.h.s. as a function of $R$, find the maximum, and then subtract this maximum before evaluating $P(R|{\bf d}_1,{\bf d}_2)$.
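As a minimal numerical sketch of this procedure (assuming the simplest mapping ${\bf A}_1={\bf I}$, ${\bf A}_2=R\,{\bf I}$, as used in \S~\ref{sec: application}; the variable names are ours), $\ln P(R)$ can be evaluated on a grid of $R$ as follows:
\begin{verbatim}
import numpy as np

def logP_R(R_grid, d1, d2, C1inv, C2inv):
    # ln P(R | d1, d2) for A1 = I, A2 = R I, up to an additive constant.
    Q1, T1 = C1inv, C1inv @ d1            # A1^T C1^-1 A1 and A1^T C1^-1 d1
    logp = np.empty_like(R_grid)
    for i, R in enumerate(R_grid):
        Q = Q1 + R**2 * C2inv             # adds A2^T C2^-1 A2
        T = T1 + R * (C2inv @ d2)         # adds A2^T C2^-1 d2
        sign, logdet = np.linalg.slogdet(Q)
        logp[i] = -0.5 * logdet + 0.5 * T @ np.linalg.solve(Q, T)
    return logp - logp.max()              # subtract the maximum, as advocated

# R = np.linspace(0.5, 4.0, 3501)
# P = np.exp(logP_R(R, d1, d2, C1inv, C2inv)); P /= np.trapz(P, R)
\end{verbatim}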
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{figures/P_R_deltaR0.001_a4_b10_planck.pdf}
\caption{Measured posterior PDF $P(R)$ on the lensing ratios, using our ratio measurement method. $P(R)$ is nearly Gaussian. We also show the bestfit value and the associated $1\sigma$ errors. }
\label{fig:P_R}
\end{figure*}
\subsection{Correlated \texorpdfstring{$\mathbf{d}_{1,2}$}{}}
The above result can be extended straightforwardly to the case of correlated $\mathbf{d}_{1,2}$. Now we define the data vector $\mathbf{d}^T = (\mathbf{d}_1, \mathbf{d}_2)$. The probability distribution of $\mathbf{d}$ is
\begin{equation}
P(\mathbf{d} | R, \lambda) = \frac{1}{\sqrt{(2\pi)^N {\rm det}\mathbf{C}}} {\rm exp} \left[ -\frac{1}{2} \mathbf{\Delta}^T \mathbf{C}^{-1} \mathbf{\Delta} \right] \ .
\end{equation}
Here $\mathbf{\Delta}\equiv {\bf d}-({\bf A}_1\mathbf{\lambda}, {\bf A}_2\lambda)$.
Because the errors in $\mathbf{d}_{1,2}$ are not independent, the covariance matrix ${\bf C}$ includes the off-diagonal blocks. We denote
\begin{equation}
\mathbf{C} = \left(
\begin{array}{cc}
\mathbf{C}_{11} & \mathbf{C}_{12} \\
\mathbf{C}_{21} & \mathbf{C}_{22}
\end{array}
\right) \ ,
\mathbf{C}^{-1} = \left(
\begin{array}{cc}
\mathbf{B}_{11} & \mathbf{B}_{12} \\
\mathbf{B}_{21} & \mathbf{B}_{22}
\end{array}
\right) \ ,
\end{equation}
where $\mathbf{C}_{ij} = \langle \mathbf{n}_i \mathbf{n}_j^T \rangle$, and $\mathbf{C}_{12} = \mathbf{C}_{21}^T$. The blocks of the inverse of $\mathbf{C}$ are
\begin{equation}
\begin{aligned}
&\mathbf{B}_{11} = (\mathbf{C}_{11} - \mathbf{C}_{12} \mathbf{C}_{22}^{-1} \mathbf{C}_{21} )^{-1} \ , \\
&\mathbf{B}_{12} = -(\mathbf{C}_{11} - \mathbf{C}_{12}\mathbf{C}_{22}^{-1}\mathbf{C}_{21})^{-1} \mathbf{C}_{12} \mathbf{C}_{22}^{-1} \ , \\
&\mathbf{B}_{21} = -\mathbf{C}_{22}^{-1}\mathbf{C}_{21} (\mathbf{C}_{11} - \mathbf{C}_{12}\mathbf{C}_{22}^{-1}\mathbf{C}_{21})^{-1} \ , \\
&\mathbf{B}_{22} = \mathbf{C}_{22}^{-1} + \mathbf{C}_{22}^{-1}\mathbf{C}_{21}(\mathbf{C}_{11} - \mathbf{C}_{12} \mathbf{C}_{22}^{-1}\mathbf{C}_{21})^{-1} \mathbf{C}_{12}\mathbf{C}_{22}^{-1} \ .
\end{aligned}
\end{equation}
The expansion of the exponential now has 16 terms, twice as many as in Eq.~\ref{eq:expan exp}. We define
\begin{align}
&\mathbf{Q}' \equiv \mathbf{A}_1^T \mathbf{B}_{11} \mathbf{A}_1 + \mathbf{A}_1^T \mathbf{B}_{12} \mathbf{A}_2 + \mathbf{A}_2^T \mathbf{B}_{21} \mathbf{A}_1 + \mathbf{A}_2^T \mathbf{B}_{22} \mathbf{A}_2 \ , \\
&\mathbf{T}' \equiv \mathbf{A}_1^T \mathbf{B}_{11} \mathbf{d}_1 + \mathbf{A}_1^T \mathbf{B}_{12} \mathbf{d}_2 + \mathbf{A}_2^T \mathbf{B}_{21} \mathbf{d}_1 + \mathbf{A}_2^T \mathbf{B}_{22} \mathbf{d}_2 \ .
\end{align}
We find that $E$ in Eq. \ref{eqn:PE} is now
\begin{equation}
E= -\frac{1}{2} (\lambda^T \mathbf{Q}'\lambda -\lambda^T \mathbf{T}' - \mathbf{T'}^T \lambda) \ .
\end{equation}
Therefore the final expression of $P(R|{\bf d}_1,{\bf d}_2)$ is identical to Eq. \ref{eq: P_R uncorr}, but replacing ${\bf Q}$ and ${\bf T}$ with ${\bf Q}'$ and ${\bf T}'$.
\begin{equation}
\label{eq:P_Rcorr}
P(R|\mathbf{d}) \propto \left[{\rm det}\,\mathbf{Q}'\right]^{-1/2} {\rm exp} \bigg[ \frac{1}{2} \mathbf{T'}^T \mathbf{Q'}^{-1}\mathbf{T}' \bigg] \ .
\end{equation}
Eq. \ref{eq: P_R uncorr} for uncorrelated ${\bf d}_{1,2}$ and Eq. \ref{eq:P_Rcorr} for correlated ${\bf d}_{1,2}$ are the major results of this paper.
They provide the analytical expressions of $P(R)$.
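For completeness, a sketch of the correlated case (with ${\bf A}_1$ and ${\bf A}_2(R)$ supplied by the user; the names are ours): the blocks ${\bf B}_{ij}$ are read off from the numerical inverse of the full covariance matrix rather than assembled block by block, and $\ln P(R)$ then follows from Eq.~\ref{eq:P_Rcorr} with ${\bf Q}'$ and ${\bf T}'$ in place of ${\bf Q}$ and ${\bf T}$:
\begin{verbatim}
import numpy as np

def QT_correlated(R, d1, d2, A1, A2_of_R, Cinv):
    # Q'(R) and T'(R) for correlated data sets; Cinv is the inverse of
    # the full joint covariance, so the blocks B_ij are read off directly.
    n = len(d1)
    B11, B12 = Cinv[:n, :n], Cinv[:n, n:]
    B21, B22 = Cinv[n:, :n], Cinv[n:, n:]
    A2 = A2_of_R(R)
    Q = (A1.T @ B11 @ A1 + A1.T @ B12 @ A2
         + A2.T @ B21 @ A1 + A2.T @ B22 @ A2)
    T = (A1.T @ B11 @ d1 + A1.T @ B12 @ d2
         + A2.T @ B21 @ d1 + A2.T @ B22 @ d2)
    return Q, T   # then ln P(R) = -0.5*logdet(Q) + 0.5 * T.Q^-1.T + const.
\end{verbatim}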
\section{Application: measuring the lensing ratio}
\label{sec: application}
We have applied our method to measure the ratio between cross-correlations of galaxy-ISW and galaxy-CMB lensing \citep{Dong2022}, which is a measure of the gravitational potential decay rate and therefore a measure of dark energy. Here we take the lensing ratio measurement as another example to demonstrate the applicability of our method.
The two data sets that we adopt are $w^{g\gamma_t}(\theta,z_L,z_\gamma)$ and $w^{g\kappa_{\rm CMB}}(\theta,z_L)$. $w^{g\gamma_t}(\theta,z_L,z_\gamma)$ is the galaxy-tangential shear cross-correlation function between galaxies at lens redshift bin denoted by the mean redshift $z_L$ and shear at source redshift bin denoted by the mean redshift $z_\gamma$. The corresponding galaxy-convergence cross-correlation function is denoted as $w^{g\kappa_\gamma}(\theta,z_L,z_\gamma)$. $w^{g\kappa_{\rm CMB}}(\theta,z_L)$ is the galaxy-CMB lensing convergence cross-correlation function. It is expected that for narrow lens redshift bins \citep{Jain2003, Bernstein2004, ZhangJun2005},
\begin{equation}
w^{g\kappa_{\rm CMB}}(\theta,z_L)=R(z_L,z_\gamma) w^{g\kappa_\gamma}(\theta,z_L,z_\gamma)\ .
\end{equation}
Notice that the lensing ratio $R(z_L,z_\gamma)$ depends on $z_L$ and $z_\gamma$ only through the geometry of the universe, but not through structure growth. Measurement errors in $w^{g\gamma_t}(\theta,z_L,z_\gamma)$ and $w^{g\kappa_{\rm CMB}}(\theta,z_L)$ are dominated by shear measurement errors and CMB lensing map noise. We therefore treat the two data sets as independent and apply Eq. \ref{eq: P_R uncorr} to measure the ratio $R$. Notice that we restrict the measurement to the CMB lensing/cosmic shear ratio and will not measure the shear ratio between shear at different source redshifts.
In order not to divert readers into weak lensing details, we present the measurement of $w^{g\gamma_t}(\theta,z_L,z_\gamma)$ and $w^{g\kappa_{\rm CMB}}(\theta,z_L)$ with DESI imaging surveys DR8, DECaLS shear catalog and \emph{Planck} CMB lensing maps in the appendix \ref{sec: data pre-processing}.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{figures/R_theta_cut_deltaR0.001.pdf}
\caption{Consistency test 1. We find no statistically significant dependence of $R$ on the chosen scale cut $\theta_{\rm min}$ ($\theta_{\rm max}$).}
\label{fig:R_theta_cut}
\end{figure*}
Since $w^{g\gamma_t}$ and $w^{g\kappa_{\rm CMB}}$ involve different Bessel functions in the power spectrum-correlation function conversion ($J_2(\ell\theta)$ versus $J_0(\ell\theta)$), they are not proportional to each other. One way to deal with this in our method is to choose the theory $\lambda$ as the power spectrum, and the mapping matrix ${\bf A}_{1,2}$ in Eq. \ref{eq: P_R uncorr} will then involve the oscillating functions $J_{0,2}$. This is numerically challenging, even with the help of FFTlog \citep{Hamilton2000}. For the purpose of demonstrating the usage of our method, we adopt a more convenient choice, that is to rescale the correlation functions $w\rightarrow \tilde{w}\equiv w/w_{\rm tem}$. $w_{\rm tem}$ is the corresponding template correlation function based on the theoretical prediction of a fiducial cosmology (Eq.~\ref{eq:w_tem^ggammat} \& Eq.~\ref{eq:w_tem^gkcmb}), which absorbs the $J_{0,2}$ dependences. Therefore, the rescaled correlation functions directly follow the proportionality relation ($\tilde{w}^{g\kappa_{\rm CMB}}=R\tilde{w}^{g\gamma_t}$). We therefore use $\tilde{w}^{g\kappa_{\rm CMB}}$ and $\tilde{w}^{g\gamma_t}$ as the data sets to demonstrate our ratio measurement method.
\subsection{Measurements of lensing ratios}
\label{subsec: measurements of lensing ratios}
Following \S \ref{sec: methodology}, we choose the data sets ${\bf d}_{1,2}$, the theory vector $\lambda$, and the mapping matrices ${\bf A}_{1,2}$ as
\begin{equation}
\mathbf{d}_1 =\tilde{w}^{g\gamma_t} \ , \quad \mathbf{d}_2 = \tilde{w}^{g\kappa_{\rm CMB}}\ , \ \lambda=\langle {\bf d}_1\rangle\ ,\ {\bf A}_1={\bf I}\ ,\ {\bf A}_2=R{\bf I}\ .
\end{equation}
Since the theory vector $\lambda$ is just the expectation value of ${\bf d}_1$, the theory makes no assumptions about cosmology and the measured $R$ will be model independent. We have three lens bins ($0.1<z_L<0.3$, $0.3<z_L<0.5$ and $0.5<z_L<0.7$) and three source shear bins ($0.4<z_\gamma<0.6$, $0.6<z_\gamma<0.8$ and $0.8<z_\gamma<1.0$). We denote the lens bins with the Latin letter $i=1,2,3$ and the source bins with the Greek letter $\alpha=1,2,3$. Since we only measure the ratio between CMB lensing and cosmic shear, we have six ratios $R_i^\alpha$: $R_1^1$, $R_1^2$, $R_1^3$, $R_2^2$, $R_2^3$ and $R_3^3$. We use the data in the range $10 < \theta < 300$ arcmin to measure the ratios.
Fig.~\ref{fig:P_R} shows the posterior $P(R)$, which is normalized such that $\int P(R)dR=1$. $P(R)$ is nearly Gaussian, resulting in $R_1^1 = 1.93^{+0.36}_{-0.36}, R_1^2 = 1.79^{+0.34} _{-0.33}, R_1^3 = 1.57^{+0.30}_{-0.30}, R_2^2 = 2.30^{+0.28}_{-0.28}, R_2^3 = 1.79^{+0.22}_{-0.22}, R_3^3 = 3.16^{+0.38}_{-0.37}$. The S/N of each ratio is between $5.3$ and $8.4$. The error budget is dominated by errors in the galaxy-CMB lensing correlation measurement. Therefore errors in $R_1^1$, $R_1^2$ and $R_1^3$ are tightly correlated, since they share the same galaxy-CMB lensing measurement. So are errors in $R_2^2$ and $R_2^3$.
\subsection{Consistency checks}
\label{subsec: consistency checks}
To demonstrate the validity of our method and measurement, we perform several consistency checks.
We first check whether the constrained $R$ depends on the chosen $\theta$ range ($\theta_{\rm min} < \theta < \theta_{\rm max}$). By theoretical design, the ratio $R$ is scale-independent. Therefore the measured $R$ should be independent of scale cuts. Nonetheless, the correlation function measurements themselves may suffer from potential systematics in the galaxy clustering, shear and CMB lensing. The left panel of Fig.~\ref{fig:R_theta_cut} shows the consistency tests by varying $\theta_{\rm min} = 7, 10, 20, 30$ arcmin while fixing $\theta_{\rm max} = 300$ arcmin. The right panel shows the tests by varying $\theta_{\rm max} = 60, 100, 200, 300$ arcmin while fixing $\theta_{\rm min} = 7$ arcmin. The constrained $R$ are fully consistent with each other. The left panel shows a larger scatter in $R$, since the small-scale cut affects the overall S/N more significantly.
The second check is to compare against the direct model fitting result introduced in \S \ref{subsec: sn}. This method adopts the \emph{Planck}-cosmology prediction for $w$ up to a scale-independent amplitude $b$, and fits $b$ for both $w^{g\gamma_t}$ and $w^{g\kappa_{\rm CMB}}$. The ratio is then $R^{\rm fit} = b_{\rm fit}^{g\kappa_{\rm CMB}}/b_{\rm fit}^{g\gamma_t}$. As shown in the appendix, this one-parameter bias model describes the measured correlation functions well, and confirms the robustness of the cross-correlation measurement. Therefore the ratio obtained in this way provides a robust check of $R$ measured by our method. Fig.~\ref{fig:R_fit} shows the two results (bestfit values and the associated errors) are fully consistent with each other. Our method, without the assumption of scale-independent bias, can be directly applied to smaller angular scales, where scale dependence of bias may be non-negligible.
The third test is on the potential cosmological dependence that we introduce by the scaling $w\rightarrow \tilde{w}/w_{\rm tem}$, which is a convenient but not exact implementation of our method. We measure the lensing ratio adopting different cosmologies (the first year \citep{wmap1}, five-year \citep{wmap5}, nine-year \emph{WMAP} cosmology \citep{wmap9} and \emph{Planck} 2018 cosmology in Table.~\ref{tab:parameter}) and corresponding template of scaling. Fig.~\ref{fig:R_wmap} shows that, despite various differences in cosmological parameters, the differences in the measured $R$ are all smaller than $0.5\sigma$ and therefore are totally negligible.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/R_fit_a4_b10_planck.pdf}
\caption{Consistency test 2. We compare the measured $R$ by our method with $R^{\rm fit}$ from model fitting. }
\label{fig:R_fit}
\end{figure}
\section{Conclusions} \label{sec: conclusions}
We develop an unbiased method for measuring the ratio of two data sets. The solution is based on a Bayesian analysis, including all the data points and their uncertainties. The posterior distribution of the ratio $P(R)$ has an analytical expression. This method enables fast and unbiased $R$ measurement, with minimal statistical errors. Furthermore, it relies on the usual assumption of Gaussian errors in the data, but no underlying model other than the proportionality relation between the two data sets.
We measure the lensing ratio as an application. We take the lenses as DESI imaging survey galaxies, and sources as DECaLS cosmic shear and \emph{Planck} CMB lensing. We measure the ratio between CMB lensing and cosmic shear at multiple lens-source redshift pairs, with S/N ranging from 5 to 8. We verify that the measured $R$ is insensitive to the scale cuts and the adopted cosmology. Together with another example of measuring the decay rate of cosmological gravitational potential \citep{Dong2022}, we demonstrate the applicability of our method to measure the ratios.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/R_deltaR0.001_a4_b10_WMAP.pdf}
\caption{Consistency test 3 shows that the measured $R$ has negligible dependence on the adopted theoretical template, which depends on cosmology. }
\label{fig:R_wmap}
\end{figure}
\begin{table}
\centering
\begin{tabular}{cccccccc}
\hline
Parameter & $\Omega_{\rm c}$ & $\Omega_{\rm b}$ & $n_s$ & $H_0$ & $\sigma_8$ \\
\hline
WMAP1 & 0.224 & 0.0463 & 0.99 & 72 & 0.9 \\
WMAP5 & 0.206 & 0.0432 & 0.961 & 72.4 & 0.787 \\
WMAP9 & 0.235 & 0.0464 & 0.9710 & 69.7 & 0.820 \\
\emph{Planck}18 & 0.265 & 0.04887 & 0.9649 & 67.36 & 0.8111 \\
\hline
\end{tabular}
\caption{The first year \citep{wmap1}, five-year \citep{wmap5}, nine-year \emph{WMAP} cosmology \citep{wmap9} and \emph{Planck} 2018 cosmology. }
\label{tab:parameter}
\end{table}
\section*{Acknowledgements}
This work is supported by the National Science Foundation of China (11621303), the National Key R\&D Program of China (2020YFC2201602, 2018YFA0404504, 2018YFA0404601, 2020YFC2201600) and CMS-CSST-2021-A02. JY acknowledges the support from China Postdoctoral Science Foundation (2021T140451). FYD is supported by a KIAS Individual Grant PG079001 at Korea Institute for Advanced Study. HYS acknowledges the support from CMS-CSST-2021-A01, NSFC of China under grant 11973070, and Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7013. This work made use of the Gravity Supercomputer at the Department of Astronomy, Shanghai Jiao Tong University. The results in this paper have been derived using the following packages: Numpy \citep{numpy}, \texttt{HEALPix} \citep{healpix}, IPython \citep{ipython}, CCL \citep{pyccl}, TreeCorr \citep{treecorr}.
\section*{Data Availability}
No new data were generated or analyzed in support of this research. The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
|
1,108,101,564,145 | arxiv | \section{Introduction}
In classical linearized elasticity theory, a special role is played by
infinitesimal longitudinal homogeneous plane waves.
For such waves the amplitude vector
is parallel to the propagation direction $\mathbf{n}$ (say) so that all
the particles oscillate along that direction $\mathbf{n}$ and
the motion is one-dimensional.
Such waves may propagate in every direction in an isotropic
compressible elastic material, but this is no longer the case for
elastic anisotropic materials such as crystals.
Possible directions of propagation of longitudinal homogeneous
plane waves, called ``specific directions'' by Borgnis \cite{Borg55},
may be as few as three in an elastic anisotropic crystal.
By assuming certain restrictions on the elastic constants,
Hadamard \cite{Hada03} created a special model anisotropic elastic
material in which longitudinal plane waves may propagate in every
direction.
Here, we consider {\it inhomogeneous} plane waves.
These waves are attenuated in a direction different from the
propagation direction.
They may be described in terms of bivectors \cite{BoHa93} --
complex vectors -- the amplitude bivector, $\mathbf{A}$ (say),
and the slowness bivector, $\mathbf{S}$ (say), which may be written
\cite{Haye84} $\mathbf{S} = N\mathbf{C}$, where the
``directional bivector'' $\mathbf{C}$ is written
$\mathbf{C} = m \mathbf{\hat{m}} + i \mathbf{\hat{n}}$
($\mathbf{\hat{m} \cdot \hat{n}}=0$, $m \ge 1$,
$|\mathbf{\hat{m}}| = |\mathbf{\hat{n}}| = 1$) and $N$ is called
the ``complex scalar slowness''.
Once the directional bivector $\mathbf{C}$ is prescribed,
the slowness $\mathbf{S}$ and amplitude $\mathbf{A}$ are determined
from the equations of motion.
To prescribe $\mathbf{C}$ is equivalent to prescribing an ellipse
with major semi-axis $m \mathbf{\hat{m}}$ and minor semi-axis
$\mathbf{\hat{n}}$;
this so-called ``directional ellipse'' for inhomogeneous plane waves
is the analogue to the direction of propagation $\mathbf{n}$ for
homogeneous plane waves \cite{Haye84}.
The inhomogeneous plane wave is said to be ``longitudinal'' if
$\mathbf{A}$ and $\mathbf{S}$ (and therefore also $\mathbf{C}$) are
``parallel'': $\mathbf{A \wedge S} = \mathbf{0}$.
What this means is that \cite{BoHa93} the ellipses of $\mathbf{A}$
and of $\mathbf{S}$ (and $\mathbf{C}$) are all ``parallel'', being
similar -- same aspect ratio -- and being similarly situated
-- parallel major axes and parallel minor axes.
In particular we consider ``circularly polarized longitudinal
inhomogeneous plane waves'' (CPLIPW).
For such waves both $\mathbf{C}$ and $\mathbf{A}$ are isotropic,
that is $\mathbf{C \cdot C} = 0$, $\mathbf{A \cdot A} = 0$;
the ellipses corresponding to $\mathbf{C}$ and $\mathbf{A}$ are
coplanar circles, the normal to the plane being $\mathbf{a}
= \mathbf{\hat{m} \wedge \hat{n}}$.
Here we seek to determine restrictions on the elastic constants such
that for any choice of $\mathbf{a}$, the normal to the plane of
$\mathbf{C}$ and $\mathbf{A}$ (which is the plane of the motion),
a CPLIPW may propagate.
We call the corresponding materials ``special''.
Starting with the constitutive equation for a general anisotropic
elastic crystal (which involves twenty-one independent elastic
constants), we obtain necessary and sufficient conditions on the
constants in order that CPLIPWs may propagate in every plane.
It turns out that nine linear relations among the elastic constants
must be satisfied so that the special model material has at most
twelve independent elastic constants.
For such materials we determine the general structure of the
(symmetric) acoustical tensor.
The complex slowness $N$ of the CPLIPW is determined
for all choices of isotropic $\mathbf{C}$.
Because the wave is circularly polarized, one eigenvalue of the
acoustical tensor is double \cite{Haye84}.
The remaining simple eigenvalue corresponds to a ``transverse''
inhomogeneous plane wave that is transverse in the sense that
its amplitude bivector $\mathbf{B}$ (say) is ``orthogonal'' to
$\mathbf{C}$: $\mathbf{B \cdot C} = 0$, which means that the
orthogonal projection of the ellipse of $\mathbf{B}$ upon the plane
of $\mathbf{C}$ is a circle.
The equation giving the complex slowness $N$ for the CPLIPW is of
precisely the same structure as the equation giving the complex
slowness of the transverse wave.
Both are of the form $\mathbf{C \cdot \Theta C} = N^{-2}$,
$\mathbf{C \cdot C} = 0$,
where $\mathbf{\Theta}$ is a real symmetric tensor.
An ellipsoid $E$ (say) may be associated with $\mathbf{\Theta}$
\cite{BoHa93}.
It is seen that the slowness bivectors are obtained by first
determining the central ellipsoidal section $\mathcal{E}$ (say)
of the ellipsoid $E$ by the central plane with normal $\mathbf{a}$.
Then $\mathbf{\hat{m}}$ and $\mathbf{\hat{n}}$ are chosen to lie
along the principal axes of $\mathcal{E}$.
This fixes $\mathbf{C}$, and then $N$, and therefore $\mathbf{S}$
($= N\mathbf{C}$) is determined.
For choices of $\mathbf{a}$ along the normals to the planes of
central circular sections of the ellipsoid $E$, there is no
propagating wave.
Finally, we briefly consider the possibility of having CPLIPWs
propagating in crystals of various classes: triclinic,
monoclinic, etc.
It is seen that CPLIPWs may not propagate in trigonal, tetragonal,
and cubic crystals, nor in isotropic materials.
It is seen that they may propagate in triclinic, monoclinic,
orthorhombic, and
hexagonal crystals, provided the linear relations
among the elastic constants evoked above
(or their specialization to those classes of symmetry) are satisfied.
As kindly pointed out by a referee, we must ensure that the crystals
are purely elastic so that mechanical fields are not coupled to
electrical fields.
Hence in what follows, we restrict our attention to the following
crystal classes:
$\bar{1}$, 2/m, mmm, $\bar{3}$m, 4/m, 4/mmm, 6/m, 6/mmm, 432, m3, m3m.
\section{Basic equations}
The constitutive equations relating stress components $\sigma_{ij}$
with strain components $e_{ij}$ for a homogeneous anisotropic
elastic crystal are given by Hooke's law:
\begin{equation}
\sigma_{ij} = d_{ijkl} e_{kl}.
\end{equation}
Here the $d_{ijkl}$, the elastic constants, or stiffnesses, have the
symmetries,
\begin{equation} \label{symmetries}
d_{ijkl} = d_{jikl} = d_{klij},
\end{equation}
so that there are at most 21 independent stiffnesses.
Also,
\begin{equation}
2e_{ij} := \partial u_i / \partial x_j + \partial u_j / \partial x_i,
\end{equation}
where $u_i$ are the displacement components: $u_i := x_i - X_i$.
Here $\mathbf{x}$ is the current position of a particle initially at
$\mathbf{X}$.
The equations of motion, in the absence of body forces, are given by
\begin{equation} \label{motion}
d_{ijkl} \partial^2 u_k / \partial x_l \partial x_j
= \rho \partial^2 u_i / \partial t^2,
\end{equation}
where $\rho$ is the mass density of the crystal.
We consider displacements of the form
\begin{equation} \label{u}
\mathbf{u} =
\{\mathbf{A} \text{e}^{i \omega (\mathbf{S \cdot x} - t)} \}^+
= \text{e}^{- \omega \mathbf{S^- \cdot x}}
\{\mathbf{A}^+ \cos \omega (\mathbf{S}^+ \mathbf{\cdot x} - t)
- \mathbf{A}^- \sin \omega (\mathbf{S}^+ \mathbf{\cdot x} - t)\},
\end{equation}
where
$\mathbf{S} = \mathbf{S}^+ + i \mathbf{S}^-$ is the slowness bivector
(complex vector) \cite{Haye84, BoHa93} and
$\mathbf{A} = \mathbf{A}^+ + i \mathbf{A}^-$ is the amplitude bivector
(complex vector).
When $\mathbf{A}$ is parallel to $\mathbf{S}$, that is when
$\mathbf{A} = \alpha \mathbf{S}$, where $\alpha$ is some complex
number, the inhomogeneous wave is said to be ``longitudinal''
\cite{DeHa02, DeHa04}.
When $\mathbf{A}$ is isotropic, that is when $\mathbf{A \cdot A} = 0$,
the wave is circularly polarized, the ellipse associated with the
bivector $\mathbf{A}$ being a circle \cite{BoHa93}.
Circularly polarized longitudinal inhomogeneous waves are those waves
for which $\mathbf{A}$ and $\mathbf{S}$ are both isotropic and
parallel.
For such waves,
\begin{equation}
\mathbf{S \cdot S} = 0,
\end{equation}
or equivalently,
\begin{equation}
\mathbf{S}^+ \cdot \mathbf{S}^+
= \mathbf{S}^- \cdot \mathbf{S}^-, \quad
\mathbf{S}^+ \cdot \mathbf{S}^- = 0.
\end{equation}
Thus, the planes of constant phase are orthogonal to the planes of
constant amplitude; the waves propagate in the direction of
$\mathbf{S}^+$ whilst the amplitude decays in the direction of
$\mathbf{S}^-$.
The particle paths are circles in the plane of $\mathbf{A}^+$
and $\mathbf{A}^-$, or equivalently, because
$\mathbf{A} = \alpha \mathbf{S}$,
in the plane of $\mathbf{S}^+$ and $\mathbf{S}^-$.
The sense of description of the circle is from $\mathbf{S}^+$
towards $\mathbf{S}^-$, retrograde, similar to the sense of
Rayleigh waves propagating close to the free surface of a
semi-infinite isotropic elastic material.
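A quick numerical illustration of this circular polarization (toy values; the constant decay factor $\text{e}^{-\omega \mathbf{S}^- \cdot \mathbf{x}}$ at fixed $\mathbf{x}$ is omitted): evaluating the displacement \eqref{u} at a fixed point over one period, with isotropic amplitude, the particle traces a circle in the plane of $\mathbf{A}^+$ and $\mathbf{A}^-$.
\begin{verbatim}
import numpy as np

# Toy check: displacement at a fixed x over one period, with isotropic
# amplitude A = A_plus + i A_minus (|A+| = |A-|, A+ . A- = 0).
A_plus = np.array([1.0, 0.0, 0.0])
A_minus = np.array([0.0, 1.0, 0.0])
phase = np.linspace(0.0, 2.0 * np.pi, 200)   # omega*(S+ . x - t)
u = np.outer(np.cos(phase), A_plus) - np.outer(np.sin(phase), A_minus)
print(np.allclose(np.linalg.norm(u, axis=1), 1.0))   # True: a circle
\end{verbatim}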
Inserting \eqref{u} into \eqref{motion} gives the \textit{propagation
condition},
\begin{equation} \label{propagationCondition}
Q_{ik}(\mathbf{S}) A_k = \rho A_i, \quad
Q_{ik}(\mathbf{S}) = d_{ijkl} S_j S_l,
\end{equation}
where $Q_{ik}(\mathbf{S})$ is the \textit{acoustical tensor}
corresponding to the slowness bivector $\mathbf{S}$.
A systematic procedure for obtaining all solutions $\mathbf{A}$,
$\mathbf{S}$ of \eqref{propagationCondition} has been introduced
by Hayes \cite{Haye84} and is called the ``directional ellipse method''
or ``DE-method''.
It consists in writing $\mathbf{S}$ as
\begin{equation} \label{DE}
\mathbf{S} = N \mathbf{C},
\end{equation}
where $N$ is a complex number and $\mathbf{C}$ is any bivector of the
form
$\mathbf{C} = m \mathbf{\hat{m}} + i \mathbf{\hat{n}}$,
with $\mathbf{\hat{m}}$, $\mathbf{\hat{n}}$, two unit orthogonal
vectors and $m \ge 1$.
We call $N$ the \textit{complex scalar slowness} and $\mathbf{C}$
the \textit{directional bivector}.
Inserting \eqref{DE} into \eqref{propagationCondition} yields the
eigenvalue problem,
\begin{equation} \label{QA}
Q_{ik}(\mathbf{C})A_k = \rho N^{-2} A_i,
\end{equation}
for the complex symmetric tensor $Q_{ik}(\mathbf{C})$.
All solutions of the propagation condition may then be obtained
by prescribing $\mathbf{C}$ arbitrarily and solving \eqref{QA} for
$N^{-2}$ and $\mathbf{A}$, which gives the slowness bivector
through \eqref{DE}, and the amplitude bivector $\mathbf{A}$
(up to a complex scalar factor).
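Numerically, the DE-method thus reduces to an ordinary complex eigenproblem; schematically (a sketch, with \texttt{Q\_of\_C} the $3\times 3$ complex symmetric matrix of the eigenproblem \eqref{QA} evaluated at the prescribed $\mathbf{C}$, and \texttt{rho} the density, both assumed given):
\begin{verbatim}
import numpy as np

# Given Q_of_C (3x3, complex symmetric) and the density rho:
vals, vecs = np.linalg.eig(Q_of_C)   # eigenvalues are rho * N**-2
N = np.sqrt(rho / vals)              # complex scalar slownesses
# columns of vecs are the amplitude bivectors A, up to a complex factor
\end{verbatim}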
When, for some eigenvalue $N^{-2}$, the corresponding eigenbivector
$\mathbf{A}$ is parallel to the directional bivector $\mathbf{C}$,
the corresponding inhomogeneous plane wave is said to be longitudinal.
Thus, for longitudinal inhomogeneous plane waves the
propagation condition becomes
\begin{equation} \label{QC}
Q_{ik}(\mathbf{C})C_k = \rho N^{-2} C_i,
\end{equation}
for some $N^{-2}$.
Hence, these waves are only possible
for directional bivectors $\mathbf{C}$ such that
$\mathbf{Q}(\mathbf{C})\mathbf{C}$
is parallel to $\mathbf{C}$, or equivalently,
\begin{equation}
\mathbf{C} \times \mathbf{Q}(\mathbf{C})\mathbf{C} = \mathbf{0},
\end{equation}
or
\begin{equation} \label{conditions}
\dfrac{Q_{1k}(\mathbf{C})C_k}{C_1} =
\dfrac{Q_{2k}(\mathbf{C})C_k}{C_2} =
\dfrac{Q_{3k}(\mathbf{C})C_k}{C_3}.
\end{equation}
Here we seek particular classes of materials which are such that
longitudinal inhomogeneous plane waves are possible
for any choice of $\mathbf{C}$ satisfying $\mathbf{C \cdot C} = 0$
(so that $\mathbf{C}$ is of the form
$\mathbf{C} = \mathbf{\hat{m}} + i\mathbf{\hat{n}}$).
That is, we wish to determine under which conditions on the
stiffnesses $d_{ijkl}$ are CPLIPWs possible for all choices of
the plane of $\mathbf{C}$,
or equivalently of the normal
$\mathbf{a} = \mathbf{\hat{m} \times \hat{n}}$.
\section{Crystals for which longitudinal circularly polarized waves
are possible in all planes}
\subsection{Necessary and sufficient conditions}
Here we seek under which conditions on the stiffnesses $d_{ijkl}$,
we have:
$\mathbf{C} \times \mathbf{Q}(\mathbf{C})\mathbf{C} = \mathbf{0}$
for all $\mathbf{C} = \mathbf{\hat{m}} + i \mathbf{\hat{n}}$, or
equivalently, for all $\mathbf{C}$ satisfying
$\mathbf{C \cdot C} = 0$.
For convenience we adopt the Voigt \cite{Voig10} contracted notation
for the elastic stiffnesses,
\begin{equation}
d_{12} = d_{1122}, \quad d_{33} = d_{3333}, \quad
d_{45} = d_{2313}, \quad d_{66} = d_{1212}, \quad \text{etc.}
\end{equation}
With these notations,
\begin{align} \label{Q_ii}
& Q_{11}(\mathbf{C}) =
d_{11}C_1^2 + d_{66}C_2^2 + d_{55}C_3^2
+ 2d_{16}C_1C_2 + 2d_{15}C_1C_3 + 2d_{56}C_2C_3,
\notag \\&
Q_{22}(\mathbf{C}) =
d_{66}C_1^2 + d_{22}C_2^2 + d_{44}C_3^2
+ 2d_{26}C_1C_2 + 2d_{64}C_1C_3 + 2d_{24}C_2C_3,
\notag \\
& Q_{33}(\mathbf{C}) =
d_{55}C_1^2 + d_{44}C_2^2 + d_{33}C_3^2
+ 2d_{45}C_1C_2 + 2d_{35}C_1C_3 + 2d_{34}C_2C_3,
\end{align}
and
\begin{align} \label{Q_ij}
& Q_{12}(\mathbf{C}) =
d_{16}C_1^2 + d_{26}C_2^2 + d_{45}C_3^2
\notag \\
& \phantom{0123456789}
+ (d_{12}+d_{66})C_1C_2 + (d_{14}+d_{56})C_1C_3
+ (d_{25}+d_{46})C_2C_3,
\notag \\
& Q_{23}(\mathbf{C}) =
d_{56}C_1^2 + d_{24}C_2^2 + d_{34}C_3^2
\notag \\
& \phantom{0123456789}
+ (d_{25}+d_{46})C_1C_2
+ (d_{36}+d_{45})C_1C_3
+ (d_{23}+d_{44})C_2C_3,
\notag \\
& Q_{31}(\mathbf{C}) =
d_{15}C_1^2 + d_{46}C_2^2 + d_{35}C_3^2
\notag \\
& \phantom{0123456789}
+ (d_{14}+d_{56})C_1C_2
+ (d_{13}+d_{55})C_1C_3
+ (d_{36}+d_{45})C_2C_3.
\end{align}
Consider first $\mathbf{C} = (0,1,i)$. It is isotropic.
For this $\mathbf{C}$, conditions \eqref{conditions} become
$Q_{1k}C_k = 0$ and $iQ_{2k}C_k = Q_{3k}C_k$,
which read, explicitly,
\begin{align}
& (d_{36} + 2d_{45} - d_{26}) - i(d_{25} + 2d_{46} - d_{35})=0,
\notag \\
& 4(d_{34} - d_{24}) + i(d_{22} + d_{33} - 2d_{23} - 4d_{44}) = 0.
\end{align}
Hence,
\begin{equation} \label{cond1}
d_{26} = d_{36} + 2d_{45}, \quad
d_{35} = d_{25} + 2d_{46}, \quad
d_{24} = d_{34}, \quad
4d_{44} = d_{22} + d_{33} - 2d_{23}.
\end{equation}
We then consider in turn $\mathbf{C} = (i,0,1)$ and
$\mathbf{C} = (1,i,0)$.
These choices yield conditions of the type \eqref{cond1}, which may be
read off from \eqref{cond1} on cycling the indices
$1 \rightarrow 2 \rightarrow 3 \rightarrow 1$,
$4 \rightarrow 5 \rightarrow 6 \rightarrow 4$.
The complete set of conditions obtained in this way is
\begin{align} \label{cond2}
& d_{16} = d_{26} = d_{36} + 2d_{45}, \quad
d_{35} = d_{15} = d_{25} + 2d_{46}, \quad
d_{24} = d_{34} = d_{14} + 2d_{56},
\nonumber \\
& 4d_{44} = d_{22} + d_{33} - 2d_{23}, \;
4d_{55} = d_{33} + d_{11} - 2d_{13}, \;
4d_{66} = d_{11} + d_{22} - 2d_{12}.
\end{align}
We refer to materials whose stiffnesses satisfy
these conditions as ``special''.
It follows that ``special'' materials have at most twelve
independent elastic stiffnesses.
For instance, they are
\begin{equation} \label{constitutive}
\begin{array}{c c c c c c}
d_{11} & d_{12} & d_{13} & d_{14} & d_{15} & d_{16} \\
& d_{22} & d_{23} & d_{24} & d_{25} & \bullet \\
& & d_{33} & \bullet & \bullet & d_{36} \\
& & & \bullet & \bullet & \bullet \\
& & & & \bullet & \bullet \\
& & & & & \bullet
\end{array}
\end{equation}
and the remaining elastic constants (denoted by ``$\bullet$''
above) are determined from those 12 as
\begin{align}
& d_{34} = d_{24},
&& d_{44} = {\textstyle\frac{1}{4}} (d_{22} + d_{33} - 2d_{23}),
&& d_{45} = {\textstyle\frac{1}{2}} (d_{26} - d_{36}),
\notag \\
& d_{35} = d_{15},
&& d_{55} = {\textstyle\frac{1}{4}} (d_{11} + d_{33} - 2d_{13}),
&& d_{56} = {\textstyle\frac{1}{2}} (d_{34} - d_{14})
\notag \\
& d_{26} = d_{16},
&& d_{66} = {\textstyle\frac{1}{4}} (d_{11} + d_{22} - 2d_{12}),
&& d_{46} = {\textstyle\frac{1}{2}} (d_{15} - d_{25}).
\end{align}
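For numerical experiments it is convenient to fill in the dependent
constants programmatically. A minimal sketch follows, with hypothetical
values for the twelve independent constants (the dict keys name the
Voigt indices):
\begin{verbatim}
import numpy as np

d = {'11': 10., '12': 3., '13': 2.5, '14': 0.4, '15': 0.6, '16': 0.7,
     '22': 9., '23': 2., '24': 0.5, '25': 0.3, '33': 8., '36': 0.2}
# Dependent constants, from the relations above:
d['34'], d['35'], d['26'] = d['24'], d['15'], d['16']
d['44'] = (d['22'] + d['33'] - 2*d['23']) / 4
d['55'] = (d['11'] + d['33'] - 2*d['13']) / 4
d['66'] = (d['11'] + d['22'] - 2*d['12']) / 4
d['45'] = (d['26'] - d['36']) / 2
d['56'] = (d['24'] - d['14']) / 2
d['46'] = (d['15'] - d['25']) / 2
D = np.zeros((6, 6))                  # symmetric 6x6 Voigt matrix
for key, v in d.items():
    i, j = int(key[0]) - 1, int(key[1]) - 1
    D[i, j] = D[j, i] = v
\end{verbatim}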
We note that out of the nine conditions \eqref{cond2},
six are ``structurally invariant'' \cite{Ting00} for some
rotations of the coordinate system.
In particular, if the two conditions
\begin{equation}
d_{16} - d_{26} = d_{11} + d_{22} - 2d_{12} -4d_{66} = 0,
\end{equation}
are satisfied in the coordinate system linked
to the crystallographic axes ($O x_1 x_2 x_3$), then they are
also satisfied by the stiffnesses $d^*_{ij}$ obtained from the
$d_{ij}$ after any rotation of the coordinate system about the
$x_3$ axis.
These invariants are Type 1B in Ting's classification \cite{Ting00}.
Similarly, the conditions
\begin{equation}
d_{15} - d_{35} = d_{11} + d_{33} - 2d_{13} -4d_{55} = 0,
\end{equation}
are invariants under rotation of the coordinate system about the
$x_2$ axis, and
\begin{equation}
d_{24} - d_{34} = d_{22} + d_{33} - 2d_{23} -4d_{44} = 0,
\end{equation}
are invariants after rotation of the coordinate system about the
$x_1$ axis.
When the relations \eqref{cond2} hold, it may be checked
by direct calculation, using \eqref{Q_ii}, \eqref{Q_ij}, and taking
into account the relation $C_1^2+C_2^2+C_3^2=0$, that
\begin{equation}
Q_{ik}(\mathbf{C})C_k = \rho N_L^{-2}(\mathbf{C}) C_i,
\end{equation}
for all isotropic bivectors $\mathbf{C}$,
with $N_L^{-2}(\mathbf{C})$ given by
\begin{equation} \label{N_L}
\rho N_L^{-2}(\mathbf{C})
= {\textstyle\frac{1}{2}} (d_{11}C_1^2 + d_{22}C_2^2 + d_{33}C_3^2)
+2(d_{16}C_1C_2 + d_{35}C_1C_3 + d_{24}C_2C_3).
\end{equation}
It follows that the relations \eqref{cond2} between the elastic
stiffnesses are the necessary and sufficient conditions for
CPLIPWs to
propagate for all choices of isotropic directional bivectors, or,
equivalently, for all choices of the polarization plane.
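This characterization is easy to probe numerically. The following
sketch (hypothetical stiffness values, unit density $\rho = 1$)
assembles a ``special'' Voigt matrix exactly as in the previous sketch,
expands it to $d_{ijkl}$, and checks the eigenvector property for one
isotropic bivector:
\begin{verbatim}
import numpy as np

d = {'11': 10., '12': 3., '13': 2.5, '14': 0.4, '15': 0.6, '16': 0.7,
     '22': 9., '23': 2., '24': 0.5, '25': 0.3, '33': 8., '36': 0.2}
d['34'], d['35'], d['26'] = d['24'], d['15'], d['16']
d['44'] = (d['22'] + d['33'] - 2*d['23']) / 4
d['55'] = (d['11'] + d['33'] - 2*d['13']) / 4
d['66'] = (d['11'] + d['22'] - 2*d['12']) / 4
d['45'] = (d['26'] - d['36']) / 2
d['56'] = (d['24'] - d['14']) / 2
d['46'] = (d['15'] - d['25']) / 2
D = np.zeros((6, 6))
for key, v in d.items():
    i, j = int(key[0]) - 1, int(key[1]) - 1
    D[i, j] = D[j, i] = v

V = {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (1, 2), 4: (0, 2), 5: (0, 1)}
d4 = np.zeros((3, 3, 3, 3))           # expand the Voigt matrix to d_ijkl
for a in range(6):
    for b in range(6):
        (i, j), (k, l) = V[a], V[b]
        for p, q in ((i, j), (j, i)):
            for r, s in ((k, l), (l, k)):
                d4[p, q, r, s] = D[a, b]

m = np.array([1.0, 0.0, 0.0])                 # any orthonormal pair
n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
C = m + 1j*n                                  # isotropic: C.C = 0
Q = np.einsum('ijkl,j,l->ik', d4, C, C)       # acoustical tensor Q(C)
NL = (D[0,0]*C[0]**2 + D[1,1]*C[1]**2 + D[2,2]*C[2]**2) / 2 \
     + 2*(D[0,5]*C[0]*C[1] + D[2,4]*C[0]*C[2] + D[1,3]*C[1]*C[2])
assert np.allclose(Q @ C, NL*C)               # C is an eigenvector
\end{verbatim}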
The expression \eqref{N_L} may also be written
\begin{equation}
\rho N_L^{-2}(\mathbf{C})
= \mathbf{C} \cdot \mathbf{\Psi}_L \mathbf{C},
\end{equation}
where $\mathbf{\Psi}_L$ is given by
\begin{equation} \label{Psi_L}
\mathbf{\Psi}_L = \begin{bmatrix}
{\textstyle\frac{1}{2}} d_{11} & d_{16} & d_{35} \\
d_{16} & {\textstyle\frac{1}{2}} d_{22} & d_{24} \\
d_{35} & d_{24} & {\textstyle\frac{1}{2}} d_{33}
\end{bmatrix}.
\end{equation}
Associated with $\mathbf{\Psi}_L$ is the
``$\mathbf{\Psi}_L$-ellipsoid'':
$\mathbf{x} \cdot (\mathbf{\Psi}_L + p\mathbf{1})\mathbf{x} = 1$,
where $p$ is chosen such that $ (\mathbf{\Psi}_L + p\mathbf{1})$ is
positive definite.
If $\mathbf{C}$ is chosen to lie on either plane of central circular
section of the $\mathbf{\Psi}_L$-ellipsoid, then
$\mathbf{C} \cdot \mathbf{\Psi}_L \mathbf{C} = 0$ and thus
$N_L^{-2}(\mathbf{C}) = 0$: the corresponding waves do not propagate.
In the next section, we derive the general structure of the acoustical
tensor for ``special'' materials.
\subsection{Acoustical tensor}
From the previous section, we know that the acoustical tensor
$Q_{ik}({\bf C})$, with ${\bf C}\cdot {\bf C}=0$, of
``special'' materials admits
$\rho N_L^{-2}(\mathbf{C})$ given by \eqref{N_L} as an eigenvalue.
The corresponding eigenvector $\mathbf{C}$ is isotropic, and so
\cite{Haye84} this eigenvalue is a double eigenvalue.
It follows that the other (simple) eigenvalue
$\rho N_*^{-2}(\mathbf{C})$, say, is given by
$\rho N_*^{-2}(\mathbf{C})
= \text{tr }\mathbf{Q}(\mathbf{C}) - 2 \rho N_L^{-2}(\mathbf{C})$,
that is,
\begin{multline} \label{N_*}
\rho N_*^{-2}(\mathbf{C}) =
(d_{55}+d_{66})C_1^2 + (d_{44}+d_{66})C_2^2 + (d_{44}+d_{55})C_3^2
\\ +2d_{45}C_1C_2 + 2d_{46}C_1C_3 + 2d_{56}C_2C_3.
\end{multline}
The expression \eqref{N_*} may also be written
\begin{equation} \label{N_*Psi_T}
\rho N_*^{-2}(\mathbf{C})
= \mathbf{C} \cdot \mathbf{\Psi}_T \mathbf{C},
\end{equation}
where $\mathbf{\Psi}_T$ is given by
\begin{equation} \label{Psi_T}
\mathbf{\Psi}_T = \begin{bmatrix}
d_{55}+d_{66} & d_{45} & d_{46} \\
d_{45} & d_{44}+d_{66} & d_{56} \\
d_{46} & d_{56} & d_{44} + d_{55}
\end{bmatrix}.
\end{equation}
Associated with $\mathbf{\Psi}_T$ is the
``$\mathbf{\Psi}_T$-ellipsoid'':
$\mathbf{x} \cdot (\mathbf{\Psi}_T + p\mathbf{1})\mathbf{x} = 1$,
where $p$ is chosen such that $ (\mathbf{\Psi}_T + p\mathbf{1})$ is
positive definite.
If $\mathbf{C}$ is chosen to lie on either plane of central circular
section of the $\mathbf{\Psi}_T$-ellipsoid, then
$\mathbf{C} \cdot \mathbf{\Psi}_T \mathbf{C} = 0$ and so
$N_*^{-2}(\mathbf{C}) = 0$: there is no corresponding
transverse circularly polarized propagating wave.
Now we compute the components of the matrix
$\mathbf{\Gamma}(\mathbf{C}):=
\mathbf{Q}(\mathbf{C})-\rho N_*^{-2}(\mathbf{C})\mathbf{1}$.
We find, using the conditions \eqref{cond2} and
$C_1^2+C_2^2+C_3^2=0$, that
\begin{align}
& \Gamma_{11}(\mathbf{C}) =
(\mu + \nu - \lambda)C_1^2 + 2\gamma C_1 C_2 + 2\beta C_1 C_3,
\notag \\
& \Gamma_{22}(\mathbf{C}) =
(\nu + \lambda - \mu)C_2^2 + 2\alpha C_2 C_3 + 2\gamma C_1 C_2,
\notag \\
& \Gamma_{33}(\mathbf{C}) =
(\lambda + \mu - \nu)C_3^2 + 2\beta C_1 C_3 + 2\alpha C_2 C_3,
\end{align}
and that
\begin{align}
& \Gamma_{12}(\mathbf{C}) =
- \gamma C_3^2 + \nu C_1 C_2 + \alpha C_1 C_3 + \beta C_2 C_3,
\notag \\
& \Gamma_{23}(\mathbf{C}) =
-\alpha C_1^2 + \lambda C_2 C_3 + \beta C_1 C_2 + \gamma C_1 C_3,
\notag \\
& \Gamma_{31}(\mathbf{C}) =
- \beta C_2^2 + \mu C_1 C_3 + \gamma C_2 C_3 + \alpha C_1 C_2,
\end{align}
where
\begin{align}
& \lambda:= {\textstyle\frac{1}{2}} (d_{22}+d_{33}-2d_{44}),
&& \mu:= {\textstyle\frac{1}{2}} (d_{11}+d_{33}-2d_{55}),
&& \nu:= {\textstyle\frac{1}{2}} (d_{11}+d_{22}-2d_{66}),
\notag \\
& \alpha:= d_{14}+d_{56},
&& \beta:= d_{25}+d_{46},
&& \gamma:= d_{36}+d_{45}.
\end{align}
Let $\mathbf{M}$ be the real symmetric matrix defined as
\begin{equation}
\mathbf{M} := \begin{bmatrix}
\lambda & -\gamma & -\beta \\
-\gamma & \mu & -\alpha \\
-\beta & -\alpha & \nu
\end{bmatrix}.
\end{equation}
It can be checked that
\begin{equation}
\mathbf{\Gamma}(\mathbf{C}) =
(\lambda + \mu + \nu) \mathbf{C} \otimes \mathbf{C}
- \mathbf{M C} \otimes \mathbf{C}
- \mathbf{C} \otimes \mathbf{M C}.
\end{equation}
Alternatively, introducing
$\mathbf{\hat{M}} :=
{\textstyle\frac{1}{2}} (\lambda + \mu + \nu)\mathbf{1} - \mathbf{M}$,
the matrix $\mathbf{\Gamma}(\mathbf{C})$ may be written as
\begin{equation}
\mathbf{\Gamma}(\mathbf{C}) =
\mathbf{\hat{M} C} \otimes \mathbf{C}
+ \mathbf{C} \otimes \mathbf{\hat{M} C},
\end{equation}
where, explicitly,
{\small
\begin{equation} \label{Mhat}
\mathbf{\hat{M}} = \begin{bmatrix}
{\textstyle\frac{1}{2}} (d_{11}+d_{44}-d_{55}-d_{66}) & d_{36}+d_{45} & d_{25}+d_{46}
\\
d_{36}+d_{45} & {\textstyle\frac{1}{2}} (d_{22}+d_{55}-d_{44}-d_{66}) & d_{14}+d_{56}
\\
d_{25}+d_{46} & d_{14}+d_{56} & {\textstyle\frac{1}{2}} (d_{33}+d_{66}-d_{44}-d_{55})
\end{bmatrix}.
\end{equation}
}
Hence, noting that $\mathbf{C \cdot C} =0$ and recalling the
definition of $\mathbf{\Gamma}(\mathbf{C})$,
we find that the acoustical tensor may be put in the form,
\begin{equation} \label{decompQ}
\mathbf{Q}(\mathbf{C}) =
\rho N_*^{-2}(\mathbf{C}) \mathbf{1} + \mathbf{\hat{M} C}
\otimes \mathbf{C}
+ \mathbf{C} \otimes \mathbf{\hat{M} C}.
\end{equation}
This decomposition of the acoustical tensor
shows directly that the isotropic bivector $\mathbf{C}$ is an
eigenvector of $\mathbf{Q}(\mathbf{C})$, whose eigenvalue
$\rho N_L^{-2}(\mathbf{C})$ given by \eqref{N_L}, can equivalently
be written
\begin{equation} \label{N_L/N_*}
\rho N_L^{-2}(\mathbf{C})
= \rho N_*^{-2}(\mathbf{C}) + \mathbf{C \cdot \hat{M}C}.
\end{equation}
Also, the eigenvector corresponding to the eigenvalue
$\rho N_*^{-2}(\mathbf{C})$ is $\mathbf{C \times \hat{M}C}$.
We call the corresponding wave the ``transverse'' inhomogeneous plane
wave, because its amplitude bivector ${\bf A}$ is
orthogonal to ${\bf C}$: ${\bf C}\cdot {\bf A}=0$.
In general it is elliptically polarized because
$(\mathbf{C \times \hat{M}C}) \cdot (\mathbf{C \times \hat{M}C})
= - (\mathbf{C \cdot \hat{M}C})^2 \ne 0$.
Of course $\mathbf{C} \cdot
[\mathbf{C} \times \hat{\mathbf{M}}\mathbf{C}] = 0$,
which means \cite{Haye84} that the orthogonal projection,
upon the plane of $\mathbf{C}$, of the ellipse associated with
the amplitude bivector $\mathbf{C \times \hat{M}C}$, is a circle.
Let $\mathbf{\tilde{C}}$ be a choice of $\mathbf{C}$ for which
$\mathbf{\tilde{C} \cdot \hat{M}\tilde{C}}
= \mathbf{\tilde{C} \cdot \tilde{C}} = 0$, so that
$\mathbf{\tilde{C}}$ is parallel to
$\mathbf{\tilde{C} \times \hat{M}\tilde{C}}$ and there is
a triple eigenvalue $\rho N_L^{-2}(\mathbf{\tilde{C}})
= \rho N_*^{-2}(\mathbf{\tilde{C}})$.
This special case occurs when the plane of $\mathbf{\tilde{C}}$ is
one of the two planes of central circular section of the
``$\mathbf{\hat{M}}$-ellipsoid'':
$\mathbf{x} \cdot (\mathbf{\hat{M}} + p\mathbf{1})\mathbf{x} = 1$,
where $p$ is chosen such that $ (\mathbf{\hat{M}} + p\mathbf{1})$ is
positive definite.
\subsubsection*{Remark: A relationship between
the scalar slownesses of certain waves}
The form \eqref{N_L/N_*} of the relation prompts the following
remark.
Let the matrix $\mathbf{\hat{M}}$ have eigenvalues
$p_1$, $p_2$, $p_3$ and corresponding unit orthogonal eigenvectors
$\mathbf{a}_1$, $\mathbf{a}_2$, $\mathbf{a}_3$ so that
\begin{equation}
\mathbf{\hat{M}} =
p_1 \mathbf{a}_1 \otimes \mathbf{a}_1 +
p_2 \mathbf{a}_2 \otimes \mathbf{a}_2 +
p_3 \mathbf{a}_3 \otimes \mathbf{a}_3,
\quad
\mathbf{a}_i \cdot \mathbf{a}_j = \delta_{ij}.
\end{equation}
Then consider the following isotropic bivectors $\mathbf{C}_1$,
$\mathbf{C}_2$, $\mathbf{C}_3$,
\begin{equation} \label{Ci}
\mathbf{C}_1 := \mathbf{a}_2 + i \mathbf{a}_3, \quad
\mathbf{C}_2 := \mathbf{a}_3 + i \mathbf{a}_1, \quad
\mathbf{C}_3 := \mathbf{a}_1 + i \mathbf{a}_2.
\end{equation}
Clearly,
\begin{equation}
\mathbf{C}_1 \mathbf{\cdot \hat{M} C}_1 = p_2 - p_3, \quad
\mathbf{C}_2 \mathbf{\cdot \hat{M} C}_2 = p_3 - p_1, \quad
\mathbf{C}_3 \mathbf{\cdot \hat{M} C}_3 = p_1 - p_2,
\end{equation}
so that
\begin{equation} \label{CiSum}
\sum_{i=1}^3 \mathbf{C}_i \mathbf{\cdot \hat{M} C}_i = 0.
\end{equation}
Then, from \eqref{N_L/N_*} and \eqref{CiSum}, we have the relation
\begin{equation}
\sum_{i=1}^3 N_L^{-2} (\mathbf{C}_i) =
\sum_{i=1}^3 N_*^{-2} (\mathbf{C}_i),
\end{equation}
for the $\mathbf{C}_i$ given by \eqref{Ci}.
\subsubsection*{Example}
To illustrate the results of this section, we work out a
simple example.
Let $\mathbf{C} = \mathbf{i} + i \mathbf{j}$.
Then the corresponding acoustical tensor
$\mathbf{Q}(\mathbf{i} + i \mathbf{j}) $ is
{\small
\begin{equation}
\begin{bmatrix}
d_{11}-d_{66}+2id_{16} & i(d_{66}+d_{12})
& d_{15}-d_{46} +i(d_{56}+d_{14}) \\
i(d_{66}+d_{12}) & d_{66}-d_{22}+2id_{26}
& -(d_{56}+d_{14}) +i(d_{15}-d_{46}) \\
d_{15}-d_{46} +i(d_{56}+d_{14}) & -(d_{56}+d_{14}) +i(d_{15}-d_{46})
& d_{55}-d_{44} + 2id_{45}
\end{bmatrix}.
\end{equation}
}
Computing $\mathbf{Q}(\mathbf{C})\mathbf{C}$ and using the
conditions \eqref{cond2}, we find that $\mathbf{C}$ is
indeed an eigenvector of the acoustical tensor, with eigenvalue
given by \eqref{N_L},
\begin{equation}
\mathbf{Q}(\mathbf{C})\mathbf{C}
= \rho N_L^{-2} \mathbf{C},
\quad
\rho N_L^{-2} = {\textstyle\frac{1}{2}} (d_{11} - d_{22}) + 2id_{16}.
\end{equation}
Further, computing $\mathbf{C}\times \mathbf{\hat{M}C}$ and then
$\mathbf{Q}(\mathbf{C})(\mathbf{C}\times \mathbf{\hat{M}C})$ and using
the conditions \eqref{cond2}, we find that
$\mathbf{C}\times \mathbf{\hat{M}C}$ is
also an eigenvector of the acoustical tensor, with eigenvalue
now given by \eqref{N_*},
\begin{equation}
\mathbf{Q}(\mathbf{C})(\mathbf{C}\times \mathbf{\hat{M}C})
= \rho N_*^{-2} (\mathbf{C}\times \mathbf{\hat{M}C}),
\quad
\rho N_*^{-2} = d_{55} - d_{44} + 2id_{45}.
\end{equation}
In this simple example, we prescribed the normal to the plane of the
isotropic slowness bivector (or equivalently, to the plane of the
amplitude bivector) to be $\mathbf{k}$.
Then, we chose $\mathbf{C}$ to be $\mathbf{i} + i \mathbf{j}$,
leading to a complex eigenvalue.
In the next section, we show that it is always possible to choose
$\mathbf{C}$ such that the corresponding eigenvalue is a real positive
number.
\subsubsection*{Remark: General form of the acoustical tensor for the
``special'' materials}
Using the relations \eqref{cond2} in the expressions
$Q_{ij}(\mathbf{C})$ given by \eqref{Q_ii} and \eqref{Q_ij},
it may be seen that without any restrictions on $\mathbf{C}$, the
acoustical tensor $\mathbf{Q}(\mathbf{C})$ for the ``special''
materials may be written as
\begin{equation} \label{newQ}
\mathbf{Q}(\mathbf{C}) = \rho N_*^{-2}(\mathbf{C}) \mathbf{1}
+ \mathbf{\hat{M}C} \otimes \mathbf{C}
+ \mathbf{C}\otimes \mathbf{\hat{M}C}
+ (\mathbf{C \cdot C}) \mathbf{\Delta}.
\end{equation}
Here the expression for $\rho N_*^{-2}(\mathbf{C})$ is given by
\eqref{N_*} and $ \mathbf{\Delta}$ is defined by
\begin{equation} \label{newDelta}
\mathbf{\Delta} = \begin{bmatrix}
-d_{44} & d_{45} & d_{46} \\
d_{45} & -d_{55} & d_{56} \\
d_{46} & d_{56} & -d_{66}
\end{bmatrix}.
\end{equation}
Comparing \eqref{Psi_T} and \eqref{newDelta}, we note that
$\mathbf{\Psi}_T
= \mathbf{\Delta} - (\text{tr }\mathbf{\Delta}) \mathbf{1}$,
so that by \eqref{N_*Psi_T},
\begin{equation}
\rho N_*^{-2}(\mathbf{C}) = \mathbf{C \cdot \Delta C}
- (\text{tr }\mathbf{\Delta})\mathbf{C \cdot C}.
\end{equation}
It follows that
\begin{equation} \label{QQ}
\mathbf{Q}(\mathbf{C}) =
[\mathbf{C \cdot \Delta C}
- (\text{tr }\mathbf{\Delta})\mathbf{C \cdot C}] \mathbf{1}
+ \mathbf{\hat{M}C} \otimes \mathbf{C}
+ \mathbf{C}\otimes \mathbf{\hat{M}C}
+ (\mathbf{C \cdot C}) \mathbf{\Delta},
\end{equation}
which is an expression for the acoustical tensor
$\mathbf{Q}(\mathbf{C})$ for ``special'' materials, given in terms of
only two matrices, $\mathbf{\Delta}$ and $\mathbf{\hat{M}}$.
We note that for ``special'' materials the stiffness tensor $d_{ijkl}$ may be written
\begin{multline}
d_{ijkl} = \Delta_{jl}\delta_{ik} + \Delta_{ik}\delta_{jl}
+ \Delta_{il}\delta_{jk} + \Delta_{jk}\delta_{il}
- \Delta_{ij}\delta_{kl} - \Delta_{kl}\delta_{ij} \\
- (\text{tr }\mathbf{\Delta})
(\delta_{ik}\delta_{jl} + \delta_{jk}\delta_{il}
- \delta_{ij}\delta_{kl})
+ \hat{M}_{ij}\delta_{kl} + \hat{M}_{kl}\delta_{ij}.
\end{multline}
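This tensorial expression may be checked numerically against the Voigt
construction. A minimal sketch, rebuilding the hypothetical ``special''
stiffnesses of the earlier sketches:
\begin{verbatim}
import numpy as np

d = {'11': 10., '12': 3., '13': 2.5, '14': 0.4, '15': 0.6, '16': 0.7,
     '22': 9., '23': 2., '24': 0.5, '25': 0.3, '33': 8., '36': 0.2}
d['34'], d['35'], d['26'] = d['24'], d['15'], d['16']
d['44'], d['55'], d['66'] = [(d[a] + d[b] - 2*d[c]) / 4 for a, b, c in
                             (('22','33','23'), ('11','33','13'),
                              ('11','22','12'))]
d['45'], d['56'], d['46'] = [(d[a] - d[b]) / 2 for a, b in
                             (('26','36'), ('24','14'), ('15','25'))]
D = np.zeros((6, 6))
for key, v in d.items():
    i, j = int(key[0]) - 1, int(key[1]) - 1
    D[i, j] = D[j, i] = v
V = {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (1, 2), 4: (0, 2), 5: (0, 1)}
d4 = np.zeros((3, 3, 3, 3))
for a in range(6):
    for b in range(6):
        (i, j), (k, l) = V[a], V[b]
        for p, q in ((i, j), (j, i)):
            for r, s in ((k, l), (l, k)):
                d4[p, q, r, s] = D[a, b]

Delta = np.array([[-D[3,3],  D[3,4],  D[3,5]],      # eq. (newDelta)
                  [ D[3,4], -D[4,4],  D[4,5]],
                  [ D[3,5],  D[4,5], -D[5,5]]])
Mhat = np.array(                                    # eq. (Mhat)
  [[(D[0,0]+D[3,3]-D[4,4]-D[5,5])/2, D[2,5]+D[3,4], D[1,4]+D[3,5]],
   [D[2,5]+D[3,4], (D[1,1]+D[4,4]-D[3,3]-D[5,5])/2, D[0,3]+D[4,5]],
   [D[1,4]+D[3,5], D[0,3]+D[4,5], (D[2,2]+D[5,5]-D[3,3]-D[4,4])/2]])
E, t = np.eye(3), np.trace(Delta)
rec = (np.einsum('jl,ik->ijkl', Delta, E) + np.einsum('ik,jl->ijkl', Delta, E)
     + np.einsum('il,jk->ijkl', Delta, E) + np.einsum('jk,il->ijkl', Delta, E)
     - np.einsum('ij,kl->ijkl', Delta, E) - np.einsum('kl,ij->ijkl', Delta, E)
     - t*(np.einsum('ik,jl->ijkl', E, E) + np.einsum('jk,il->ijkl', E, E)
          - np.einsum('ij,kl->ijkl', E, E))
     + np.einsum('ij,kl->ijkl', Mhat, E) + np.einsum('kl,ij->ijkl', Mhat, E))
assert np.allclose(rec, d4)
\end{verbatim}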
\subsubsection*{Remark: Homogeneous plane waves in ``special''
materials}
For \textit{homogeneous} plane waves propagating in the direction
$\mathbf{n}$ in the ``special'' materials, let
$\mathbf{u} = \mathbf{A} \exp i\omega(S\mathbf{n \cdot x}-t)$.
The corresponding propagation condition is then
\begin{equation}
\mathbf{Q}(\mathbf{n}) \mathbf{A} = \rho S^2 \mathbf{A},
\end{equation}
where $ \mathbf{Q}(\mathbf{n})$ is given by \eqref{QQ} with $n_1$,
$n_2$, $n_3$ replacing $C_1$, $C_2$, $C_3$, respectively.
Thus
\begin{equation}
\mathbf{Q}(\mathbf{n}) =
[\mathbf{n \cdot \Delta n}
- (\text{tr }\mathbf{\Delta})] \mathbf{1}
+ \mathbf{\hat{M}n} \otimes \mathbf{n}
+ \mathbf{n}\otimes \mathbf{\hat{M}n}
+ \mathbf{\Delta}.
\end{equation}
We note a special solution.
Let the Hamiltonian decomposition of $\mathbf{\Delta}$ be given by
\cite{BoHa93}
\begin{equation}
\mathbf{\Delta} = \mu \mathbf{1}
+ \kappa (\mathbf{h^+} \otimes \mathbf{h^-}
+ \mathbf{h^-} \otimes \mathbf{h^+}),
\end{equation}
where $\mu$, $\kappa$ are constants and $\mathbf{h^\pm}$ are constant
unit vectors.
Choose $\mathbf{n} = \mathbf{n^*}$ such that
\begin{equation}
(\mathbf{n^*} \times \mathbf{\hat{M} n^*}) \times
(\mathbf{h^+} \times \mathbf{h^-}) = \mathbf{0}.
\end{equation}
(It is always possible to do this.)
A solution for a homogeneous plane wave propagating along
$\mathbf{n^*}$ is
\begin{equation}
\mathbf{u} =
(\mathbf{h^+} \times \mathbf{h^-})
\exp i\omega(N^{-1} \mathbf{n^* \cdot x}-t),
\quad
\text{where} \quad
\rho N^{-2} = \mu
+ \mathbf{n^* \cdot \Delta n^*}
- (\text{tr }\mathbf{\Delta}).
\end{equation}
\section{Description of the waves}
Before proceeding, we note that the two equations \eqref{N_L} and
\eqref{N_*}, giving the complex scalar slowness of the longitudinal
circularly polarized wave and of the transverse elliptically polarized
wave, respectively, have the same form,
\begin{equation} \label{CpsiC}
\mathbf{C \cdot \Psi C} = \rho N^{-2}, \quad
\mathbf{C \cdot C} =0,
\end{equation}
where $\mathbf{\Psi}$ is a real symmetric
tensor given by $\mathbf{\Psi} = \mathbf{\Psi}_L$
(see \eqref{Psi_L}) for the longitudinal wave and by
$\mathbf{\Psi} = \mathbf{\Psi}_T$
(see \eqref{Psi_T}) for the transverse wave.
Accordingly, we determine the details of the slowness of the
corresponding wave solutions.
The results may be adapted either to the longitudinal wave
or to the transverse wave by replacing $\mathbf{\Psi}$ with
either $\mathbf{\Psi}_L$ or $\mathbf{\Psi}_T$.
\subsection{Construction of the slowness bivector}
Let $\Psi_1$, $\Psi_2$, $\Psi_3$ be the eigenvalues of
$\mathbf{\Psi}$ and $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$
the corresponding orthogonal unit eigenvectors.
We assume that the eigenvalues
are ordered as $\Psi_1>\Psi_2>\Psi_3$.
We note that the pair \eqref{CpsiC} is equivalent to the pair
\begin{equation} \label{CpsiCp}
\mathbf{C} \cdot (\mathbf{\Psi} + p \mathbf{1}) \mathbf{C}
= \rho N^{-2}, \quad
\mathbf{C \cdot C} =0,
\end{equation}
where $p$ is an arbitrary constant.
By choosing $p$ suitably large and positive, so that $\Psi_3 + p>0$,
we may define the positive definite matrix
$(\mathbf{\Psi} + p\mathbf{1})$ and associate the
ellipsoid
\begin{equation}
\mathbf{x} \cdot (\mathbf{\Psi} + p\mathbf{1}) \mathbf{x}
= 1,
\end{equation}
with $\mathbf{\Psi}$.
We call this the ``$\mathbf{\Psi}$-ellipsoid''.
The planes of central circular section of the
$\mathbf{\Psi}$-ellipsoid have unit normals $\mathbf{h^\pm}$ given by
\cite{BoHa93}
\begin{equation} \label{optic}
(\Psi_1 - \Psi_3)^{\textstyle\frac{1}{2}} \mathbf{h^\pm}
= (\Psi_1 - \Psi_2)^{\textstyle\frac{1}{2}} \mathbf{e}_1
\pm (\Psi_2 - \Psi_3)^{\textstyle\frac{1}{2}} \mathbf{e}_3.
\end{equation}
We call these normals $\mathbf{h^\pm}$, the ``optic axes'' of the
$\mathbf{\Psi}$-ellipsoid and note that they are independent of $p$.
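For a given $\mathbf{\Psi}$, the optic axes are readily computed from
the eigendecomposition. The following sketch (made-up numerical values
for $\mathbf{\Psi}$) also verifies that the central sections normal to
$\mathbf{h^\pm}$ are indeed circular:
\begin{verbatim}
import numpy as np

Psi = np.array([[5.0, 1.2, 0.3],          # any real symmetric Psi with
                [1.2, 4.0, 0.8],          # distinct eigenvalues
                [0.3, 0.8, 2.5]])
w, E = np.linalg.eigh(Psi)                # ascending eigenvalues
P1, P2, P3 = w[2], w[1], w[0]             # Psi_1 > Psi_2 > Psi_3
e1, e2, e3 = E[:, 2], E[:, 1], E[:, 0]
hp = (np.sqrt(P1-P2)*e1 + np.sqrt(P2-P3)*e3) / np.sqrt(P1-P3)
hm = (np.sqrt(P1-P2)*e1 - np.sqrt(P2-P3)*e3) / np.sqrt(P1-P3)
for h in (hp, hm):        # the sections normal to h+- are circular
    u = np.cross(h, e2); u /= np.linalg.norm(u)
    v = np.cross(h, u)
    assert np.isclose(u @ Psi @ u, v @ Psi @ v)
    assert np.isclose(u @ Psi @ v, 0.0, atol=1e-10)
\end{verbatim}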
To describe the slownesses of the waves corresponding to \eqref{CpsiC},
we recall that
\begin{equation} \label{S,C}
\mathbf{S} = N \mathbf{C}, \quad
\mathbf{C} = \mathbf{\hat{m}} + i \mathbf{\hat{n}}, \quad
\mathbf{\hat{m} \cdot \hat{m}} = \mathbf{\hat{n} \cdot \hat{n}} =1,
\quad
\mathbf{\hat{m} \cdot \hat{n}} =0.
\end{equation}
Here $\mathbf{C}$ is such that $\mathbf{C \cdot C} = 0$ and thus
$\mathbf{S \cdot S} = 0$.
Also ($\mathbf{\hat{m}}, \mathbf{\hat{n}}$) is \textit{any} pair of
orthogonal unit vectors in the plane of $\mathbf{S}$, or equivalently
in the plane with unit normal
$\mathbf{a} := \mathbf{\hat{m}} \times \mathbf{\hat{n}}$.
In \eqref{S,C}, $N$ is a scalar to be determined from the equations
\eqref{CpsiC}.
Because we are at liberty to choose for
($\mathbf{\hat{m}}, \mathbf{\hat{n}}$)
any orthogonal unit pair in the plane with normal $\mathbf{a}$,
we choose ($\mathbf{\hat{m}}, \mathbf{\hat{n}}$) along the principal
axes of the elliptical section of the $\mathbf{\Psi}$-ellipsoid by the
central plane $\mathbf{a \cdot x}=0$.
Specifically, we take $\mathbf{\hat{m}}$ as the unit vector along the
minor axis of this ellipse and $\mathbf{\hat{n}}$ along the major
axis.
In that event,
\begin{equation} \label{mPsin}
\mathbf{\hat{m} \cdot \Psi \hat{n}} =0,
\end{equation}
and using \eqref{S,C} and \eqref{mPsin}, the pair \eqref{CpsiC} give
\begin{equation} \label{Nmn}
\rho N^{-2} =
\mathbf{\hat{m} \cdot \Psi \hat{m}}
- \mathbf{\hat{n} \cdot \Psi \hat{n}} > 0,
\end{equation}
in general, so that $N^{-1}$ is purely real: $N^{-1} = v$, say.
Also,
\begin{equation} \label{S+-}
\mathbf{S}^+ = v^{-1} \mathbf{\hat{m}}, \quad
\mathbf{S}^- = v^{-1} \mathbf{\hat{n}},
\end{equation}
so that the planes of constant phase (amplitude) are:
$\mathbf{\hat{m} \cdot x}=$ constant
($\mathbf{\hat{n} \cdot x}=$ constant).
Of course, if
$ \mathbf{\hat{m} \cdot \Psi \hat{m}}
= \mathbf{\hat{n} \cdot \Psi \hat{n}}$,
so that the radii to the $\mathbf{\Psi}$-ellipsoid along the
orthogonal unit vectors are equal, then the plane
$\mathbf{a \cdot x}=0$ is a plane of central circular section of the
$\mathbf{\Psi}$-ellipsoid, and from \eqref{Nmn} there is no
propagating solution: $\rho N^{-2} = 0$.
Thus, in general, the slowness bivectors corresponding to
\eqref{CpsiC} are obtained by first
determining the central elliptical section of the
$\mathbf{\Psi}$-ellipsoid by the plane $\mathbf{a \cdot x}=0$.
Then $\mathbf{\hat{m}}$ and $\mathbf{\hat{n}}$ are chosen along the
principal axes of the ellipse, and $\mathbf{S}^+$ and $\mathbf{S}^-$
are given by \eqref{Nmn} and \eqref{S+-}.
To complete the picture we recall the results \cite{BoHa93}
for the determination of the principal axes of the central elliptical
section of the $\mathbf{\Psi}$-ellipsoid by the plane
$\mathbf{a\cdot x}=0$.
In the determination there are three cases to be considered:
Case (i), normal $\mathbf{a}$ not coplanar with the optic axes;
Case (ii), normal $\mathbf{a}$ coplanar with the optic axes but not
parallel to either optic axis;
Case (iii), normal $\mathbf{a}$ parallel to an optic axis.
We can dispose of Case (iii) immediately because we have just seen
that there is no propagating circularly polarized solution when
$\mathbf{a \cdot x}=0$ is a plane of central circular section of the
$\mathbf{\Psi}$-ellipsoid.
In Case (i), $\mathbf{a \cdot h^+ \times h^-} \ne 0$,
it has been shown \cite{BoHa93} that $\mathbf{\hat{m}}$,
$\mathbf{\hat{n}}$ are in the direction of $\mathbf{r^\pm}$ given by
\begin{equation} \label{mn1}
\mathbf{r^\pm} =
[\mathbf{h^+} - (\cos \phi^+)\mathbf{a}]/(\sin \phi^+)
\pm
[\mathbf{h^-} - (\cos \phi^-)\mathbf{a}]/(\sin \phi^-),
\end{equation}
where $\phi^\pm$ is the angle between $\mathbf{a}$ and the optic axis
$\mathbf{h^\pm}$.
Essentially, $\mathbf{\hat{m}}$ and $\mathbf{\hat{n}}$ are along the
internal and external bisectors of the angle between the orthogonal
projections of $\mathbf{h^+}$ and $\mathbf{h^-}$ onto the plane
$\mathbf{a \cdot x}=0$ (this is the Fresnel construction of optics).
In Case (ii), $\mathbf{a \cdot h^+ \times h^-} = 0$,
$\mathbf{a \times h^\pm} \ne \mathbf{0}$,
it has been shown \cite{BoHa93} that
\begin{equation} \label{mn2}
\mathbf{\hat{m}} =
[\mathbf{h^+} - (\cos \phi^+)\mathbf{a}]/(\sin \phi^+),
\quad
\mathbf{\hat{n}} = (\mathbf{h^+} \times \mathbf{a})/(\sin \phi^+).
\end{equation}
Essentially, $\mathbf{\hat{m}}$ is along the orthogonal projection of
$\mathbf{h^+}$ (or $\mathbf{h^-}$) onto the plane
$\mathbf{a \cdot x}=0$, and $\mathbf{\hat{n}}$ is orthogonal to that
plane.
This means that as the direction of the unit normal $\mathbf{a}$ is
varied, the corresponding $\mathbf{\hat{m}}$ and $\mathbf{\hat{n}}$
are given by \eqref{mn1} and \eqref{mn2}.
Also, as shown by Boulanger and Hayes \cite{BoHa93}, in Cases
(i) and (ii),
$\mathbf{\hat{m}\cdot\Psi\hat{m}}-\mathbf{\hat{n}\cdot\Psi\hat{n}}
= (\Psi_1-\Psi_3)\sin \phi^+ \sin\phi^-$,
so that from \eqref{Nmn},
\begin{equation} \label{Nsin}
\rho N^{-2}
= \rho v^2 = (\Psi_1-\Psi_3)\sin \phi^+ \sin\phi^-,
\end{equation}
where $\phi^\pm$ are the angles that the normal $\mathbf{a}$ to the
plane of $\mathbf{S}$ makes with the optic axes.
We summarize the situation.
To determine all $\mathbf{S}^+$ and $\mathbf{S}^-$ corresponding to
\eqref{CpsiC}, the ``optic axes'' $\mathbf{h^\pm}$ are first
determined.
Then the plane of $\mathbf{C} = \mathbf{\hat{m}} + i\mathbf{\hat{n}}$
is chosen; it has unit normal $\mathbf{a}$.
If $\mathbf{a}$ is not coplanar with the two optic axes, then the
corresponding $\mathbf{\hat{m}}$ and $\mathbf{\hat{n}}$ are in the
directions of $\mathbf{r^\pm}$ given by \eqref{mn1} and $N$ is
given by \eqref{Nmn}.
If $\mathbf{a}$ is coplanar with the optic axes $\mathbf{h^\pm}$ but
not along either of them, then the corresponding
$\mathbf{\hat{m}}$ and $\mathbf{\hat{n}}$ are given by \eqref{mn2}
and $N$ is given by \eqref{Nsin}.
If $\mathbf{a}$ is along $\mathbf{h^+}$ or $\mathbf{h^-}$,
there is no propagating wave.
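The summary is effectively an algorithm. A minimal sketch of it follows
(unit density; the numerical $\mathbf{\Psi}$ and normal $\mathbf{a}$ are
made up), returning $\mathbf{\hat{m}}$, $\mathbf{\hat{n}}$ and
$\rho N^{-2}$, and cross-checking \eqref{Nmn} against \eqref{Nsin}:
\begin{verbatim}
import numpy as np

def slowness_data(Psi, a, tol=1e-9):
    # Given real symmetric Psi and a unit normal a to the plane of S,
    # return (m_hat, n_hat, rho N^-2), or None if a lies along an
    # optic axis (Case (iii): no propagating wave).
    w, E = np.linalg.eigh(Psi)
    P1, P2, P3 = w[2], w[1], w[0]
    e1, e3 = E[:, 2], E[:, 0]
    hp = (np.sqrt(P1-P2)*e1 + np.sqrt(P2-P3)*e3) / np.sqrt(P1-P3)
    hm = (np.sqrt(P1-P2)*e1 - np.sqrt(P2-P3)*e3) / np.sqrt(P1-P3)
    if min(np.linalg.norm(np.cross(a, h)) for h in (hp, hm)) < tol:
        return None
    cp, cm = a @ hp, a @ hm
    sp, sm = np.sqrt(1 - cp**2), np.sqrt(1 - cm**2)  # sin(phi+-)
    u, v = (hp - cp*a)/sp, (hm - cm*a)/sm   # unit projections of h+-
    if abs(a @ np.cross(hp, hm)) > tol:     # Case (i): eq. (mn1)
        m, n = u + v, u - v
        m, n = m/np.linalg.norm(m), n/np.linalg.norm(n)
        if m @ Psi @ m < n @ Psi @ n:       # m_hat along the minor axis
            m, n = n, m
    else:                                   # Case (ii): eq. (mn2)
        m, n = u, np.cross(hp, a)/sp
    rhoN2 = m @ Psi @ m - n @ Psi @ n       # eq. (Nmn)
    assert np.isclose(rhoN2, (P1-P3)*sp*sm) # eq. (Nsin)
    return m, n, rhoN2

Psi = np.array([[5.0, 1.2, 0.3], [1.2, 4.0, 0.8], [0.3, 0.8, 2.5]])
a = np.array([0.2, -0.5, 1.0]); a = a / np.linalg.norm(a)
m_hat, n_hat, rhoN2 = slowness_data(Psi, a)  # Case (i) for this a
\end{verbatim}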
\subsection{Example: Crystal with a plane of symmetry}
To illustrate the method described above, we take a
``special'' material with a
symmetry plane at $x_3=0$, say.
Then the tensor $\mathbf{\Psi}_L$ defined in \eqref{Psi_L}
for the longitudinal wave is given by
\begin{equation}
\mathbf{\Psi}_L =
\begin{bmatrix}
{\textstyle\frac{1}{2}} d_{11} & d_{16} & 0 \\
d_{16} & {\textstyle\frac{1}{2}} d_{22} & 0 \\
0 & 0 & {\textstyle\frac{1}{2}} d_{33}
\end{bmatrix}.
\end{equation}
Its eigenvalues are
\begin{equation}
\Psi_{1,3} =
{\textstyle\frac{1}{4}} \left[d_{11}+d_{22}
\pm \sqrt{(d_{11}-d_{22})^2 + 16d_{16}^2}\right],
\quad
\Psi_2 = {\textstyle\frac{1}{2}} d_{33}.
\end{equation}
Here we assume that the stiffnesses of the material
are such that these eigenvalues are ordered
$\Psi_1 > \Psi_2 > \Psi_3$.
The corresponding unit eigenvectors are
\begin{equation} \label{e1e2e3}
\mathbf{e}_1 =
\frac{1}{\delta} \begin{bmatrix}
{\textstyle\frac{1}{2}} d_{22} - \Psi_1 \\
-d_{16} \\
0
\end{bmatrix},
\quad
\mathbf{e}_2 =
\begin{bmatrix}
0 \\
0 \\
1
\end{bmatrix},
\quad
\mathbf{e}_3 =
\frac{1}{\delta} \begin{bmatrix}
-d_{16} \\
{\textstyle\frac{1}{2}} d_{11} - \Psi_3 \\
0
\end{bmatrix},
\end{equation}
where $\delta$ is the positive quantity given by
\begin{equation}
\delta^2 = \textstyle{\frac{1}{8}}(d_{11}-d_{22})^2
+ 2d_{16}^2
+ {\textstyle\frac{1}{4}}(d_{11}-d_{22})\sqrt{(d_{11}-d_{22})^2 + 16d_{16}^2}.
\end{equation}
With this choice, the optic axes defined by \eqref{optic} lie
in the symmetry plane $x_3=0$.
For simplicity, we now focus on waves polarized in the
symmetry plane, that is, we choose the normal $\mathbf{a}$
to the plane of $\mathbf{S}$ to be along $\mathbf{e}_2$.
Then $\mathbf{a}$ is obviously not coplanar with the optic
axes (Case (i) of the previous subsection).
Specifically, the angles between $\mathbf{a}$ and the
optic axes are $\phi^+ = \phi^- = \pi/2$.
It follows from \eqref{mn1} that $\mathbf{\hat{m}}$ and
$\mathbf{\hat{n}}$ are in the directions of
$\mathbf{h^+} \pm \mathbf{h^-}$, i.e.
\begin{equation} \label{e1e3}
\mathbf{\hat{m}} = \mathbf{e}_1, \quad
\mathbf{\hat{n}}= \mathbf{e}_3.
\end{equation}
We conclude that the following CPLIPW may propagate in a
monoclinic crystal with symmetry plane at $x_3=0$
and with stiffnesses satisfying \eqref{cond2},
\begin{equation}
\mathbf{u} =
\text{e}^{- k \mathbf{e}_3 \mathbf{\cdot x}}
\{\mathbf{e}_1 \cos k (\mathbf{e}_1 \mathbf{\cdot x} - vt)
- \mathbf{e}_3 \sin k (\mathbf{e}_1 \mathbf{\cdot x} - vt)\},
\end{equation}
where the orthogonal unit vectors $\mathbf{e}_1$, $\mathbf{e}_3$
are defined in \eqref{e1e2e3},
$k$ is an arbitrary real wave number,
and the real speed $v$ is given by
\begin{equation}
\rho v^2 = \Psi_1 - \Psi_3
= {\textstyle\frac{1}{2}} \sqrt{(d_{11} - d_{22})^2 + 16 d_{16}^2}.
\end{equation}
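A quick numerical check of this example, with hypothetical stiffnesses
chosen so that $\Psi_1 > \Psi_2 > \Psi_3$:
\begin{verbatim}
import numpy as np

d11, d22, d33, d16 = 10.0, 7.0, 8.5, 0.9
PsiL = np.array([[d11/2, d16,   0.0  ],
                 [d16,   d22/2, 0.0  ],
                 [0.0,   0.0,   d33/2]])
w = np.sort(np.linalg.eigvalsh(PsiL))      # ascending eigenvalues
r = np.sqrt((d11 - d22)**2 + 16*d16**2)
assert np.allclose([w[2], w[1], w[0]],
                   [(d11+d22+r)/4, d33/2, (d11+d22-r)/4])
rho_v2 = w[2] - w[0]                       # = Psi_1 - Psi_3 = r/2
assert np.isclose(rho_v2, r/2)
\end{verbatim}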
\section{Crystals with symmetries}
In this section we investigate how the conditions \eqref{cond2}
for an anisotropic crystal to admit a CPLIPW for all
choices of the polarization plane are affected when the crystal
presents certain symmetries.
\subsection{Monoclinic crystals}
Here we consider crystals with a plane of symmetry, at $x_3=0$ say.
For that class of materials,
\begin{equation} \label{mono}
d_{14} = d_{15} = d_{24} = d_{25} = d_{34} = d_{35}
= d_{46} = d_{56} = 0.
\end{equation}
It follows that four out of the nine equations \eqref{cond2} reduce
to trivial identities, automatically satisfied.
The nine conditions \eqref{cond2} reduce to a set of
\textit{five} equations,
\begin{align}
& d_{16} = d_{26} = d_{36} + 2d_{45},
\nonumber \\
& 4d_{44} = d_{22} + d_{33} - 2d_{23}, \;
4d_{55} = d_{33} + d_{11} - 2d_{13}, \;
4d_{66} = d_{11} + d_{22} - 2d_{12}.
\end{align}
\subsection{Orthorhombic crystals}
Orthorhombic crystals possess three symmetry planes,
at $x_1=0$, $x_2=0$, $x_3=0$.
In addition to \eqref{mono}, the relations
\begin{equation} \label{ortho}
d_{16} = d_{26} = d_{36} = d_{45} = 0,
\end{equation}
also hold.
The set of nine conditions \eqref{cond2} now reduces to a set of
\textit{three} equations,
\begin{equation}
4d_{44} = d_{22} + d_{33} - 2d_{23}, \;
4d_{55} = d_{33} + d_{11} - 2d_{13}, \;
4d_{66} = d_{11} + d_{22} - 2d_{12}.
\end{equation}
\subsection{Trigonal, tetragonal, and cubic crystals}
For \textit{trigonal crystals}, $d_{24} = -d_{14} = -d_{56} \ne 0$,
and one of the conditions \eqref{cond2}, namely:
$d_{24} = d_{14} + 2d_{56}$, cannot be satisfied.
For \textit{tetragonal crystals},
$d_{11} = d_{22}$, $d_{13} = d_{23}$, $d_{44} = d_{55}$, and
the condition: $4d_{66} = d_{11} + d_{22} - 2d_{12}$
reduces to: $d_{66} = (d_{11} - d_{12})/2$,
which would mean that the crystal is in fact hexagonal (transversely
isotropic).
For \textit{cubic crystals},
$d_{11} = d_{22} = d_{33}$, $d_{12} = d_{23} = d_{13}$,
$d_{44} = d_{55} = d_{66}$, and
the condition: $4d_{66} = d_{11} + d_{22} - 2d_{12}$
reduces to: $d_{66} = (d_{11} - d_{12})/2$,
which would mean that the material is in fact isotropic.
We conclude that there are no trigonal, no tetragonal,
and no cubic crystals in which CPLIPWs
may propagate for all orientations of the slowness plane.
\subsection{Hexagonal crystals}
For hexagonal crystals, the following relations hold for the
stiffnesses,
\begin{equation}
d_{11} = d_{22}, \quad
d_{13} = d_{23}, \quad
d_{44} = d_{55}, \quad
d_{66} = (d_{11} - d_{12})/2,
\end{equation}
in addition to \eqref{mono} and \eqref{ortho}.
The nine equations \eqref{cond2} reduce to a \textit{single} equation,
\begin{equation} \label{hexa}
d_{44} = \textstyle{\frac{1}{4}}(d_{11} + d_{33} - 2 d_{13}).
\end{equation}
\subsection{Isotropic materials}
For isotropic materials, the following relations hold for the
stiffnesses,
\begin{equation}
d_{11} = d_{22} = d_{33}, \quad
d_{12} = d_{23} = d_{13}, \quad
d_{44} = d_{55} = d_{66} = (d_{11} - d_{12})/2,
\end{equation}
in addition to \eqref{mono} and \eqref{ortho}.
Then, the nine equations \eqref{cond2} are all identically satisfied.
However, the propagation condition \eqref{N_L}, giving the
complex scalar slowness $N_L$, now simplifies to
\begin{equation}
\rho N_L^{-2}(\mathbf{C}) = {\textstyle\frac{1}{2}} d_{11}(C_1^2 + C_2^2 + C_3^2)
= 0,
\end{equation}
for isotropic slownesses.
It follows that CPLIPWs may not propagate in an isotropic
material, for any choice of isotropic slowness.
On the other hand, this analysis shows that ``longitudinal''
\textit{static} exponential solutions with an isotropic slowness
bivector always exist for linear isotropic elastic materials.
This result may be checked directly in the following manner.
Recall the classical equations
of equilibrium of an isotropic material,
\begin{equation}
(\lambda + \mu) u_{j,ij} + \mu u_{i,jj} = 0,
\end{equation}
where $\lambda$ and $\mu$ are the Lam\'e constants.
Then it is easy to check that the field
\begin{equation}
u_i = S_i e^{i \omega S_j x_j},
\quad
S_k S_k =0,
\end{equation}
is indeed an exact solution.
We sum up the situation.
There are no isotropic, cubic, trigonal, or tetragonal elastic
crystals such that CPLIPWs may propagate in every plane.
However, the propagation of CPLIPWs is
theoretically possible for all planes of some
triclinic, monoclinic, or orthorhombic elastic crystals
provided some relations
among the elastic stiffnesses are satisfied.
We note that there is unfortunately insufficient data for such
crystals available at present to enable us to present an explicit
example, but we recall that the values of the elastic stiffnesses
change with pressure, temperature, prestress, etc., and that they may
consequently be adjusted to produce an adequate crystal.
\section{Introduction}
\smallskip
The main result of this paper shows that the
spectral action of Bianchi IX gravitational instantons
is arithmetic, in the sense that its asymptotic expansion
can be expressed in terms of rational combinations of (vector-valued)
modular forms, which in turn can be explicitly related to classical
modular forms of weight $14$ and $18$.
\smallskip
The rationality of the spectral action for a general
triaxial Bianchi type-IX metric with an $SU(2)$-symmetry, obtained in
\cite{FanFatMar1}, suggested the existence of a rich arithmetic
structure in the Seeley-de Witt coefficients associated with the
square of the Dirac operator of these cosmological models.
Here the rationality assertion means
that each coefficient is the time integral of an
expression presented by a several variable polynomial with
{\it rational} coefficients evaluated on the expansion factors
and their derivatives, up to a certain order with respect to time.
An earlier rationality result of a similar nature was obtained in
\cite{FatGhoKha} for the Robertson-Walker metrics, proving
a conjecture formulated in \cite{ChaConRW}.
\smallskip
The present article is intended to obtain a deeper understanding of the
arithmetic properties of the spectral action, and to shed light on the role
of the rational coefficients appearing in the expansion of the spectral action
for Bianchi IX metrics.
\smallskip
By imposing the condition of self-duality of the Weyl
tensor and by employing a time-dependent conformal
factor to obtain an Einstein metric from the Bianchi IX
models, an especially interesting family of metrics called
Bianchi IX gravitational instantons have
been explored and well studied in the literature, see for example
\cite{Tod, Oku, Hit, Man2, BabKor} and references therein.
Interestingly, as explained in great detail in the latter,
the differential equations for finding
these metrics reduce to well-understood equations such
as the Halphen system and the Painlev\'e VI equation
with particular rational parameters. In \cite{BabKor},
following the work carried out in \cite{Tod, Hit}, these equations
are solved by using the $\tau$-function of the Schlesinger system
formulated in terms of theta functions \cite{KitKor}, and an
explicit parametrization of the Bianchi IX gravitational instantons is given
in terms of theta functions with characteristics.
Considered along with our rationality result about
the spectral action \cite{FanFatMar1}, this parametrization of
the gravitational instantons will be the main ingredient in our construction
of the modular expression for the terms appearing in the expansion of the
spectral action in the energy scale. We will also describe an explicit
connection between the modular functions that arise in
the spectral action and well-known classical modular forms.
\smallskip
In the next section, we introduce and clarify our notation, and
we briefly review all the necessary background material on
the spectral action that we need to use throughout the paper.
While most of the material we need to recall is standard, we prefer to
keep the paper as readable and self-contained as possible.
The reader already familiar with these notions and notations
can skip this section. We start by an explanation of the spectral
action functional \cite{ChaConSAP, ConAction}, which is based on the
Dirac operator and provides a modified Euclidean gravity model.
Implications of this action as a source of new early universe models and
inflationary mechanisms in cosmology have been studied in recent years
\cite{BallMar, KolMar, Mar, MarPieEarly, MarPieTeh2012, MarPieTehCosmic,
NelOchSal, NelSak1, NelSak2, EstMar}. We recall briefly some basic
facts about the Dirac operator, the heat kernel method and pseudo-differential
calculus, and how they can be employed to
compute the terms in the asymptotic expansion of the spectral action
in the energy scale. The terms that appear in the expansion include the
Einstein-Hilbert action and other modified gravity terms such as the Weyl
curvature and Gauss-Bonnet terms. Indeed the latter expressions
appear as the first few terms in the expansion. As a general problem it is
highly desirable to achieve an understanding of the full expansion.
In \cite{FanFatMar1}, we devised an efficient method
for computing the terms appearing in the expansion by using the
Wodzicki noncommutative residue \cite{Wod1, Wod2},
a powerful tool that, in addition to being very important for convenient
calculations, yields an elegant proof of the rationality result presented
in \cite{FanFatMar1}. After briefly recalling, for comparison purposes,
the traditional method of computing these coefficients,
we end Section \ref{PreliminariesSec} by describing
the final formulation of our noncommutative residue method, which prepares
the ground for deriving a variant of the rationality result for the specific family
of metrics that serve as a general form for the Bianchi IX gravitational
instantons.
\smallskip
Section \ref{RationalitySec} is devoted to the explicit computation
of the Dirac operator $\tilde D$ of a general time-dependent
conformal perturbation of the triaxial Bianchi IX metric. We also
prove a rationality statement for its spectral action. The general
form of the Bianchi IX gravitational instantons, which we
mentioned earlier, is
\begin{equation} \label{ConformalBianchiIXMetricEQ}
d\tilde s^2 = F \, ds^2 = F \left ( w_1 w_2 w_3 \, d\mu^2 +
\frac{w_2 w_3}{w_1} \sigma_1^2 +
\frac{w_3 w_1}{w_2} \sigma_2^2+
\frac{w_1 w_2}{w_3} \sigma_3^2 \right ),
\end{equation}
where the conformal factor $F$ is a function of the cosmic time
$\mu$. Here, the metric $ds^2$ is the Bianchi IX model with
general cosmic expansion factors $w_1$, $w_2$ and $w_3$,
and $SU(2)$-invariant 1-forms $\sigma_1$, $\sigma_2$ and
$\sigma_3$. We found it convenient to recall from \cite{FanFatMar1}
the expression for the Dirac operator $D$ of the Bianchi IX
model $ds^2$ and to use it for the presentation of the Dirac operator
$\tilde D$ of the conformally equivalent metric $d \tilde s^2 =$
$F \, ds^2$. We then obtain a rationality statement for the
general terms $\tilde a_{2n}$ appearing in the small time
asymptotic expansion of the heat kernel,
\begin{equation} \label{SmallTimeExpEq}
\text{Trace}\left ( \exp (-t \tilde{D}^2) \right )
\sim
t^{-2} \sum_{n=0}^\infty \tilde{a}_{2n} t^n \qquad (t \to 0^+).
\end{equation}
The section ends by a presentation of explicit expressions for
$\tilde a_0$ and $\tilde a_2$, while
the lengthy expression for $\tilde a_4$ is recorded in Appendix \ref{fulla_4appendix}.
As general notation, a tilde is used on top of any
symbol that represents an object associated with the conformally
perturbed metric $d \tilde s^2 =$ $F \, ds^2$. We will follow
this notational convention throughout the paper.
\smallskip
In Section \ref{InstantonsSec} we recall briefly another result
that we need to use essentially in the rest of the paper:
the derivation of explicit formulas for the Bianchi IX
gravitational instantons obtained in \cite{Tod, Hit, BabKor}.
One starts by imposing the self-duality condition on the Weyl
tensor of the Bianchi IX model and employing a conformal factor
to obtain an Einstein metric. These conditions
reduce to the Halphen system and the Painlev\'e VI equation
with particular rational parameters, which can
then be solved in terms of elliptic modular functions and theta
functions. We have written explicitly, in separate subsections, the
parametrization of the solutions in
terms of theta functions with characteristics given in \cite{BabKor}:
one subsection for a two-parametric family with non-vanishing
cosmological constants and the other for a one-parametric family
whose cosmological constants vanish.
\smallskip
One of our main goals is to understand the modular properties of
the Seeley-de Witt coefficients $\tilde a_{2n}$
appearing in \eqref{SmallTimeExpEq} when the solutions of the
gravitational instantons in terms of the theta functions are
substituted in the metric \eqref{ConformalBianchiIXMetricEQ}.
That is, by extending the real time $\mu$ to the right half-plane
$\Re (\mu) > 0$ in the complex plane, to the extent that the
argument of the theta functions that belongs to the upper
half-plane $\mathbb{H}$ can be replaced by $i \mu$, we explore the
changes in the $\tilde a_{2n}$ under modular transformations
on $\mathbb{H}$. Therefore, in Section \ref{ArithmeticsofInstantonsSec}
we deal with the modular properties of the theta functions and their
derivatives that appear in the terms $\tilde a_0,$ $\tilde a_2$ and
$\tilde a_4$. Using these properties in the explicit expressions
\eqref{a_0Eq}, \eqref{a_2Eq} and the one recorded in Appendix
\ref{fulla_4appendix}, we show by direct calculations in
Section \ref{a_0a_2a_4Sec} that, under the
linear fractional transformations $T_1(i \mu) = i \mu +1$ and $S(i \mu) = i/ \mu$,
the terms $\tilde a_0,$ $\tilde a_2$ and $\tilde a_4$ satisfy interesting
modular properties. What is especially interesting here is that
this behavior reveals modular transformation properties that are encoded in
the parameters of the metric and that are of the same nature as those of
vector-valued modular forms considered in the Eichler--Zagier theory
of Jacobi forms \cite{EicZag}.
\smallskip
It is then natural to expect that these surprising modular transformation
properties obtained by direct calculations for the coefficients $\tilde a_0,$
$\tilde a_2$ and $\tilde a_4$, will continue to hold for all $\tilde a_{2n}$.
Clearly it is impossible to resort to direct calculations to investigate
similar properties for the general terms. However, the properties seen
in the first few terms, which are reflected in the parameters of the
metric, indicate that there is a deeper relation between the Dirac
operators associated with the metrics whose parameters transform
to each other. Indeed, in Section \ref{a_2nSec} we study the
corresponding Dirac operators and we prove that they are all isospectral.
By taking advantage of this fact and of uniqueness of the
coefficients in the asymptotic expansion, we prove that indeed
all of the Seeley-de Witt coefficients $\tilde a_{2n}$ of the Bianchi
IX gravitational instantons satisfy modular transformation properties.
\smallskip
In Section \ref{ModularFormsSec}, we first prove a periodicity
property for the terms $\tilde a_{2n}$ with respect to the parameters
of the two parametric family of Bianchi IX gravitational instantons.
Combining this with their modular transformation properties, we show
that, for rational parameters, each term $\tilde a_{2n}$ defines a
vector-valued modular function of weight $2$ with values in a
finite-dimensional representation of the modular group
$PSL_2(\mathbb{Z})$. Recall that the modular group is generated
by the matrices corresponding to the linear fractional transformations
$T_1(i \mu) = i \mu +1$ and $S(i \mu) = i/ \mu$
acting on the upper half-plane, $i \mu \in \mathbb{H}$.
We then observe that, by running a summation over a finite
orbit of the modular transformations on the rational parameters,
each $\tilde a_{2n}$ gives rise to a modular function of weight $2$ with
respect to $PSL_2(\mathbb{Z})$.
This type of modular functions are sometimes called quasi-modular,
weakly modular or meromorphic modular forms to indicate that they
are allowed to possess poles, which is indeed the case for any modular
function of weight $2$.
\smallskip
In the second part of Section \ref{ModularFormsSec}, we find an intimate
connection between the modular functions that arise from the Seeley-de Witt
coefficients $\tilde a_{2n}$ and well-known modular forms.
We consider two pairs of
rational parameters that belong to two different general families.
In the first case, we prove that such modular
functions have only simple poles at infinity, hence multiplication by
the cusp form of weight $12$, $\Delta(q) =$
$q \prod_{n=1}^\infty (1-q^n)^{24}$, $q=\exp(2 \pi i z)$, $z \in \mathbb{H}$, lands them in the
1-dimensional linear space of modular forms of weight $14$,
generated by a single Eisenstein series. In the second case, we show that
the only poles of the resulting modular functions are of order 4 and located at the
point $\rho = e^{2 \pi i/3}$. Moreover, by showing that they vanish at infinity,
we prove that multiplication by $G_4^4$, where
$G_4(z)=\sum^*_{m, n \in \mathbb{Z}} 1/(mz+n)^4$, $z \in \mathbb{H}$, is the Eisenstein series of weight 4,
sends each resulting modular function in this case to the 1-dimensional linear space of cusp
forms of weight $18$. This illuminates the intimate connection between
the spectral action for Bianchi IX metrics and well-known modular forms.
\smallskip
Our main results and conclusions are summarized in the last Section.
The appendices contain proofs of some statements and lengthy expressions
that are provided for the sake of completeness.
\smallskip
\section{Dirac operator, spectral action and pseudo-differential calculus}
\label{PreliminariesSec}
\smallskip
This section is devoted to a short explanation about the
spectral action functional \cite{ChaConSAP, ConAction}, a brief summary of
the Dirac operator \cite{LawMic, FriBook},
the heat kernel method that employs pseudo-differential calculus
for computing Seeley-de Witt coefficients \cite{GilBook1}, and an efficient method
that we devised in \cite{FanFatMar1} for expressing these coefficients as
noncommutative residues of Laplacians. This method is remarkably
efficient from a computational point of view and provided
an elegant proof of the rationality result in \cite{FanFatMar1}, a variant
of which has more direct relevance to the subject of the present paper, as
explained in Section \ref{RationalitySec}.
\smallskip
A noncommutative geometric space is described
by a spectral triple, which consists of an involutive algebra
$\mathcal{A}$ represented by bounded operators on a Hilbert
space $\mathcal{H}$, and an unbounded self-adjoint operator
$D$ on $\mathcal{H}$ \cite{ConBook}. The metric information is encoded
in the operator $D$, which is assumed to
satisfy the main regularity properties of the Dirac operator.
The spectral action \cite{ChaConSAP, ConAction} for
a spectral triple $(\mathcal{A}, \mathcal{H}, D)$ is an action functional
that depends on the spectrum of the Dirac operator $D$. It employs a
cutoff function $f$ defined on the real line to consider
\[
\text{Trace} \big ( f (D/ \Lambda)\big ),
\]
where $\Lambda$ is the
energy scale. Its asymptotic expansion in the energy scale is usually,
depending on the nature of the arising poles,
of the form \cite{ConMarBook}
\begin{equation} \label{SpectActionExpEq}
\text{Trace} \big ( f (D/ \Lambda)\big )
\sim
\sum_{\beta \in \Pi}
f_\beta \Lambda^\beta \int\!\!\!\!\!\!- \, |D|^{-\beta}+ f(0) \zeta_D(0) + \cdots.
\end{equation}
The summation in the latter runs over the points $\beta$ where
the poles of the spectral zeta function of the Dirac operator
$D$, $\zeta_D(s)$, and the poles of other associated zeta functions are
located. The set of such points is called the {\it dimension spectrum}
of the spectral triple \cite{ConMosLocal}. For classical manifolds, the
poles of the spectral zeta functions are located at certain points
on the real line. However, in general, the dimension spectrum of a
noncommutative geometric space can contain points in the complex
plane that do not belong to the real line. See for instance \cite{LapFra}
for examples of geometric zeta functions whose poles are not necessarily
located on the real line.
\smallskip
The main commutative example of a spectral triple
is given by a spin$^c$ manifold $M$ \cite{ConReconstruct}, where
the algebra of smooth functions $\mathcal{A}=C^\infty(M)$
acts on the $L^2$-spinors of $M$, and $D$ is
the Dirac operator. In this case, the coefficients in
the expansion of the spectral action are determined
by the Seeley-de Witt coefficients associated with $D^2$, which
are local invariants of the geometry \cite{GilBook1}. That is,
up to considerations that merely rely on momenta of the cutoff
function $f$, the terms of the expansion \eqref{SpectActionExpEq} are
determined by the coefficients appearing in a small time asymptotic
expansion of the form
\begin{equation} \label{HeatExpEq}
\text{Trace}\left ( \exp ( -t D^2 )\right )
\sim
t^{- \text{dim} (M)/2} \sum_{n=0}^\infty a_{2n}(D^2)\, t^n \qquad ({t \to 0^+}).
\end{equation}
Chapter 1 of the book \cite{ConMarBook} contains a detailed mathematical discussion
of the asymptotic expansions related to the spectral action.
\smallskip
\subsection{Dirac operator and its pseudo-differential symbol}
\label{DiracOpSubSec}
Given a spin bundle $S$ and a spin connection $\nabla^S$ on a Riemannian
manifold $M$ of dimension $m$, the Dirac operator $D$ acting on smooth sections
of $S$ is defined by composing the three maps
\[
C^\infty(S) \xrightarrow{\nabla^S} C^\infty(T^*M \otimes S)
\xrightarrow{\#} C^\infty(TM \otimes S)
\xrightarrow{c} C^\infty(S),
\]
where the second arrow is essentially the musical isomorphism $\#$
identifying the cotangent and tangent bundles, and the
third arrow is obtained by considering the Clifford action of
the tangent bundle $TM$ on $S$. The spin connection $\nabla^S$
is a connection on the $Spin(m)$-principal bundle associated with $S$, which is obtained by
lifting the Levi-Civita connection $\nabla$ seen as a connection on the
$SO(m)$-principal bundle of $TM$. Thus, finding an explicit local formula
for the Levi-Civita connection is the first step in calculating the Dirac
operator.
\smallskip
The Levi-Civita connection $\nabla: C^\infty(TM) \to C^\infty(T^*M \otimes TM)$
is the unique connection on the tangent bundle that is compatible with the
metric and torsion-free. In fact, these conditions characterize this connection
uniquely and it can be computed by the method that will be explained shortly.
Let us first explain that compatibility with the metric $g$ means that
\[
g(\nabla_X Y, Z) + g(Y, \nabla_X Z) = X\cdot g(Y, Z), \qquad X, Y, Z \in C^\infty(TM).
\]
Moreover, torsion-freeness refers to vanishing of the torsion tensor
\[
T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y], \qquad X, Y \in C^\infty(TM).
\]
\smallskip
Since it is more convenient to work with coframes rather than with frames,
for explicit calculations it is useful to take advantage of the identification of
the tangent and cotangent bundles via the metric and write the
Levi-Civita connection as a map from $C^\infty(T^*M)$ to $C^\infty(T^*M \otimes T^*M)$.
Then, if $\{\theta^a\}$ is a local orthonormal coframe, i.e. an orthonormal basis for the
local sections of $T^*M$ in a local chart $U$ where it is trivialized, one can
write
\[
\nabla \theta^a =
\sum_b \omega^a_b \otimes \theta^b,
\]
where $\omega^a_b$ are local differential 1-forms. In this basis $\nabla$ can be
expressed as
\[
\nabla = d + \omega,
\]
where $d$ is the de Rham differential and $\omega = (\omega^a_b)$ is a matrix
of 1-forms. In this picture, the compatibility with the metric and the torsion-freeness
respectively read as
\[
\omega^a_b =
- \omega^b_a, \qquad d \theta^a =
\sum_b \omega^a_b \wedge \theta^b.
\]
The latter conditions yield a unique solution for the 1-forms $\omega^a_b$
and thereby one achieves an explicit calculation of $\nabla$.
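For instance (a standard textbook illustration, independent of the
models considered below), for the round 2-sphere
$ds^2 = d\theta^2 + \sin^2 \theta \, d\phi^2$ with orthonormal coframe
$\theta^1 = d\theta$, $\theta^2 = \sin \theta \, d\phi$, one has
\[
d\theta^1 = 0, \qquad d\theta^2 = \cos \theta \, d\theta \wedge d\phi,
\]
and the unique antisymmetric solution of
$d \theta^a = \sum_b \omega^a_b \wedge \theta^b$ is
\[
\omega^1_2 = - \omega^2_1 = \cos \theta \, d\phi,
\]
since $\omega^1_2 \wedge \theta^2 =
\cos \theta \sin \theta \, d\phi \wedge d\phi = 0 = d\theta^1$ and
$\omega^2_1 \wedge \theta^1 = - \cos \theta \, d\phi \wedge d\theta
= d\theta^2$.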
\smallskip
Having computed $\nabla$, one can lift it to the spin connection as follows.
The spin group $Spin(m)$ is a double cover of the special orthogonal group
$SO(m)$ and there is
an explicit isomorphism $\mu:\mathfrak{so}(m)\to \mathfrak{spin}(m)$
identifying their Lie algebras, which is given by (see for example Lemma 4.8 in
\cite{RoeBook}),
\[
\mu(A)= \frac{1}{4}\sum_{a,b} A^{a b} e_a e_b,
\qquad A = (A^{ab})\in \mathfrak{so}(m).
\]
In the latter $\{ e_a \}$ is the standard basis of $\mathbb{R}^m$ viewed inside
the corresponding Clifford algebra, in which the
linear span of the elements $\{ e_a e_b; a < b \}$ is the Lie algebra
$\mathfrak{spin}(m)$ of $Spin(m)$. Combining this with uniqueness
of the spin representation for the Clifford algebra, one can choose
$k \times k$ matrices, $k = \textnormal{rk}(S)$, which satisfy the
relations $(\gamma^a)^2 = - I$ and
$\gamma^a \gamma^b + \gamma^b \gamma^a = 0$ for $a \neq b$,
and calculate the matrix of 1-forms $\omega^S$ representing the
spin connection
\[
\nabla^S = d + \omega^S.
\]
That is, one can write
\[
\omega^S = \frac{1}{4}\sum_{a,b} \omega^{a}_{b} \gamma^a \gamma^b.
\]
\smallskip
Now we write the Dirac operator explicitly: it follows from its definition
and the above explicit formulas for the ingredients of the definition that
\begin{eqnarray} \label{GeneralDiracOpFormula}
D &=& \sum_a \theta^a \nabla^S_{\theta_a} \nonumber \\
&=& \sum_{a, \nu} \gamma^a dx^{\nu}(\theta_a) \frac{\partial}{\partial x^{\nu}} +
\frac{1}{4} \sum_{a, b, c} \gamma^c \omega^b_{ac} \gamma^a \gamma^b,
\end{eqnarray}
where $\{ \theta_a \}$ is a pre-dual of the coframe $\{ \theta^a \}$ and the
$\omega^b_{ac}$ are defined by
\[
\omega^b_a = \sum_c \omega^b_{ac} \theta^c.
\]
\smallskip
It is clear that $D$ is a differential operator of order 1. Like any differential operator, or
more generally, like any pseudo-differential operator, using the Fourier
transform and the Fourier inversion formula, $D$ can be expressed
by its pseudo-differential symbol denoted by $\sigma(D)$. The symbol $\sigma(D)$
is defined locally from $U \times \mathbb{R}^m$ to $M_k(\mathbb{C})$ and allows
one to write the action of $D$ on a local section $s$ as
\begin{eqnarray} \label{pseudodifferentialOp}
D s (x)
&=&
(2 \pi)^{-m/2} \int e^{i x \cdot \xi} \, \sigma(D)(x, \xi) \, \hat s (\xi) \, d\xi \nonumber \\
&=& (2 \pi)^{-m} \int \int e^{i (x-y) \cdot \xi} \, \sigma(D)(x, \xi) \, s (y) \, dy\, d\xi,
\end{eqnarray}
where $\hat s$ is the Fourier transform of $s$, taken component-wise. Note that
the endomorphisms of the bundle are locally identified with $M_k(\mathbb{C})$.
The
expression given by \eqref{GeneralDiracOpFormula} makes it clear that
\begin{equation} \label{SymbolofDiracEq}
\sigma(D)(x, \xi)=
\sum_{a, \nu} \gamma^a dx^{\nu}(\theta_a) (i \xi_{\nu+1} )+
\frac{1}{4} \sum_{a, b, c} \gamma^c \omega^b_{ac} \gamma^a \gamma^b,
\end{equation}
for $x = (x^0, x^1, \dots, x^{m-1}) \in U$ and
$\xi = (\xi_1, \xi_2, \dots, \xi_m) \in \mathbb{R}^m$.
For our purpose of studying the Seeley-de Witt coefficients appearing in the asymptotic
expansion \eqref{HeatExpEq}, we need to have the pseudo-differential symbol of $D^2$. This
can be achieved by finding an explicit formula for $D^2$ from \eqref{GeneralDiracOpFormula},
or more easily by using the composition rule for pseudo-differential symbols.
\smallskip
In fact,
pseudo-differential operators are closed under composition and there is an
explicit and handy formula, which describes the symbol of the product
of two such operators modulo infinitely smoothing operators. That is, if
the operators $P_1$ and $P_2$ are associated with the symbols $\sigma(P_1)$ and
$\sigma(P_2)$,
\begin{eqnarray*}
P_j s (x)
&=&
(2 \pi)^{-m/2} \int e^{i x \cdot \xi} \, \sigma(P_j)(x, \xi) \, \hat s (\xi) \, d\xi \\
&=& (2 \pi)^{-m} \int \int e^{i (x-y) \cdot \xi} \, \sigma(P_j)(x, \xi) \, s (y) \, dy\, d\xi,
\end{eqnarray*}
then, the symbol of $P_1 P_2$ has the following asymptotic expansion:
\begin{equation} \label{SymbolCompositionRule}
\sigma(P_1 P_2) \sim
\sum_{\alpha \in \mathbb{Z}_{\geq 0}^m} \frac{(-i)^{|\alpha|} }{\alpha !}
\partial_\xi^\alpha \sigma(P_1)(x, \xi) \, \partial_x^\alpha \sigma(P_2)(x, \xi).
\end{equation}
One can find precise technical discussions in Chapter 1 of the book \cite{GilBook1}
about the pseudo-differential calculus that we use.
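On simple examples the rule can be verified symbolically. A minimal
sketch follows (the two first-order operators are hypothetical; for
differential operators the sum in \eqref{SymbolCompositionRule} is
finite, so a small truncation order is exact):
\begin{verbatim}
import sympy as sp

x, xi = sp.symbols('x xi', real=True)
a = sp.Function('a')(x)
b = sp.Function('b')(x)

def compose(s1, s2, order=3):
    # Right-hand side of the composition rule, truncated at 'order';
    # exact here, since the xi-derivatives eventually vanish.
    return sp.expand(sum((-sp.I)**k / sp.factorial(k)
                         * sp.diff(s1, xi, k) * sp.diff(s2, x, k)
                         for k in range(order)))

s1 = a * (sp.I*xi)                   # symbol of a(x) d/dx
s2 = b * (sp.I*xi)                   # symbol of b(x) d/dx
# Direct composition: a d/dx (b d/dx) = a b' d/dx + a b d^2/dx^2
direct = a*sp.diff(b, x)*(sp.I*xi) + a*b*(sp.I*xi)**2
assert sp.simplify(compose(s1, s2) - direct) == 0
\end{verbatim}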
In the case of differential operators, since the symbols are polynomials in $\xi$
with coefficients that are matrix-valued functions defined on a local chart $U$, the
formula \eqref{SymbolCompositionRule} gives a precise formula for the symbol of the
composition. Returning to the Dirac operator $D$, one can use its symbol given
by \eqref{SymbolofDiracEq} and the composition formula \eqref{SymbolCompositionRule}
to calculate the symbol of $D^2$:
\begin{eqnarray*} \label{specialcomposition}
\sigma(D^2)(x, \xi) =
\sum_{\alpha \in \mathbb{Z}_{\geq 0}^m} \frac{(-i)^{|\alpha|} }{\alpha !}
\partial_\xi^\alpha \sigma(D)(x, \xi) \, \partial_x^\alpha \sigma(D)(x, \xi).
\end{eqnarray*}
The latter is in general a polynomial of order 2 in $\xi$ whose coefficients
are matrix-valued functions defined on the local chart $U$. In Section \ref{RationalitySec},
explicit formulas are presented for the Dirac operators of the cosmological models that we
are interested in.
\smallskip
\subsection{Heat expansion using pseudo-differential calculus}
\label{HeatExpSubSec}
Let $D$ be the Dirac operator on a compact spin manifold of dimension $m$,
as we described. For any $t >0$, the operator $\exp (- t D^2) $ is an infinitely smoothing
operator and thus can be represented by a smooth kernel. In particular
it is a trace-class operator and as $t \to 0^+$, the trace of $\exp (- t D^2) $
goes to infinity. However, it is quite remarkable that there is an
asymptotic expansion with geometric coefficients that describe the rate
of this divergence. That is, as $t \to 0^+$,
\begin{equation} \label{AsymptoticExpansion}
\text{Trace}\big( \exp(-t D^2) \big )
\sim
t^{-m/2} \sum_{n=0}^\infty a_{2n}(D^2)\, t^n,
\end{equation}
where the coefficients $a_{2n}(D^2)$ are local invariants of the metric that
encode geometric information obtained from the curvature tensor. In fact,
typically they are integrals of certain
expressions obtained from the
Riemann curvature tensor and its covariant derivatives and contractions.
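As the simplest sanity check (our own illustration, not needed for the general theory), take $D^2 = -d^2/d\theta^2$ on the circle of circumference $2\pi$, so that $m = 1$, the eigenvalues are $n^2$ with $n \in \mathbb{Z}$, and the leading term of \eqref{AsymptoticExpansion} predicts $\textnormal{Trace}(\exp(-tD^2)) \sim \sqrt{\pi/t}$. This is immediate to test numerically:
\begin{verbatim}
import numpy as np

# Heat trace on the circle: eigenvalues of D^2 are n^2, n in Z, and the
# leading heat coefficient predicts Trace ~ sqrt(pi/t) as t -> 0+.
n = np.arange(-2000, 2001)
for t in (1.0, 0.1, 0.01):
    print(t, np.exp(-t * n**2).sum(), np.sqrt(np.pi / t))
\end{verbatim}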
\smallskip
It is evident that the left hand side of \eqref{AsymptoticExpansion} depends
only on the eigenvalues of $D^2$. Since, except in rare cases, in general the
eigenvalues of the Dirac operator are not known, it is significant that there
are methods in the literature that allow one to express the coefficients $a_{2n}(D^2)$
appearing on the right hand side of \eqref{AsymptoticExpansion} as integrals
of local expressions obtained from the metric. One of these methods, which is
quite effective, starts with the Cauchy integral formula and employs parametric
pseudo-differential calculus to approximate the kernel $\exp(- t D^2)$ and
thereby accomplishes a recursive procedure for finding formulas for the
coefficients $a_{2n}(D^2)$.
\smallskip
Let us review this method briefly from Chapter 1 of the book \cite{GilBook1}. The Dirac operator $D$
is a self-adjoint unbounded operator, with respect to which the Hilbert space of
$L^2$-spinors admits a spectral decomposition. Invoking the Cauchy integral formula one can write
\begin{equation} \label{CauchyIntegral}
\exp(-t D^2) = \frac{1}{2 \pi i} \int_\gamma e^{-t \lambda} (D^2 - \lambda )^{-1} \, d\lambda,
\end{equation}
where the integration is over a contour $\gamma$ in the complex plane that goes
around the non-negative real numbers clockwise. Since $D^2 - \lambda$ is an
elliptic differential operator, it admits a parametrix which is the same as $(D^2 - \lambda)^{-1}$ modulo an infinitely smoothing operator. Thus, the approximation of $(D^2 - \lambda)^{-1}$ amounts to finding or approximating the parametrix $R_\lambda$ of $D^2-\lambda$.
This is achieved by exploiting the calculus of pseudo-differential symbols given by
the composition rule \eqref{SymbolCompositionRule}. In order to compute the symbol of $R_\lambda$,
since $D^2 - \lambda$ is of order 2, the leading symbol of $R_\lambda$ has to be of order $-2$ and one can write
\[
\sigma(R_\lambda) \sim \sum_{j=0}^\infty r_j (x, \xi, \lambda),
\]
where each $r_j (x, \xi, \lambda)$ is a parametric pseudo-differential
symbol of order $-2 - j$. There is an important nuance here:
$\lambda$ should be treated as a term of order 2. Also, for the parametric pseudo-differential
symbols depending on the complex parameter $\lambda$, being homogeneous of order
$-2-j$ means that for any $t>0$,
\[
r_j(x, t \xi, t^2 \lambda) = t^{-2-j} r_j(x, \xi, \lambda).
\]
\smallskip
As we discussed before, the square of the Dirac operator $D^2$ is a
differential operator of order 2 and therefore for the symbol of $D^2 - \lambda$
we have
\[
\sigma(D^2 - \lambda) = \big ( p_2(x, \xi) - \lambda \big ) + p_1(x, \xi) + p_0(x, \xi),
\]
where each $p_k(x, \xi)$ is a polynomial in $\xi$ whose coefficients are
matrix-valued functions defined on the local chart.
By passing to the symbols and using the composition rule \eqref{SymbolCompositionRule},
the solution of the equation
\begin{equation} \label{ParaSymbolicEqEq}
\sigma \left ( R_\lambda (D^2 - \lambda) \right ) \sim I
\end{equation}
yields the following recursive formulas for the terms $r_j$ in the expansion
of the symbol of the parametrix $R_\lambda$. In fact, by a comparison of
homogeneous terms on the two sides of \eqref{ParaSymbolicEqEq}, one finds that
\begin{equation} \label{r_0formula}
r_0(x, \xi, \lambda) = (p_2(x, \xi) - \lambda)^{-1},
\end{equation}
and for any $n \geq 1$,
\begin{eqnarray} \label{r_nformula}
r_n (x, \xi, \lambda)=
- \left ( \sum \frac{(-i)^{|\alpha|}}{\alpha!} \partial_\xi^\alpha r_j(x, \xi, \lambda) \, \partial_x^\alpha p_k(x, \xi) \right ) r_0(x, \xi, \lambda),
\end{eqnarray}
where the summation is over all multi-indices $\alpha \in \mathbb{Z}_{\geq 0}^m$
and integers $0 \leq j < n$ and $0 \leq k \leq 2$
such that $|\alpha|+j +2 -k =n$.
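Since the recursion given by \eqref{r_0formula} and \eqref{r_nformula} is completely mechanical, it can be automated. The following Python sketch (our own illustration; it uses a scalar model symbol $p_2 = \xi^2$, $p_1 = b(x)\xi$, $p_0 = c(x)$ in one variable, rather than the matrix-valued symbols needed for Dirac operators) computes the first terms:
\begin{verbatim}
import sympy as sp

x, xi, lam = sp.symbols('x xi lamda')
b, c = sp.Function('b')(x), sp.Function('c')(x)
p = {2: xi**2, 1: b*xi, 0: c}      # model symbol of D^2, scalar case

r = {0: 1 / (p[2] - lam)}          # r_0 = (p_2 - lambda)^{-1}
for n in range(1, 3):
    term = 0
    for j in range(n):
        for k in range(3):
            a = n - j - 2 + k      # |alpha| from |alpha| + j + 2 - k = n
            if a >= 0:
                term += ((-sp.I)**a / sp.factorial(a)
                         * sp.diff(r[j], xi, a) * sp.diff(p[k], x, a))
    r[n] = sp.simplify(-term * r[0])

print(r[1])                        # -> -b(x)*xi/(xi**2 - lamda)**2
\end{verbatim}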
\smallskip
Using this recursive procedure, one can choose a large enough $N$
so that the operator corresponding to the symbol $r_0+\cdots +r_N$ gives
a desired approximation of the parametrix $R_\lambda$ of $D^2- \lambda$.
By substituting the approximation in the Cauchy integral formula \eqref{CauchyIntegral}, one
obtains an approximation of the kernel of $\exp(-tD^2)$. By integrating
the approximation of the kernel over the diagonal of $M \times M$ against
the volume form, one can derive the small time asymptotic expansion \eqref{AsymptoticExpansion}.
\smallskip
It is remarkable that this method shows instructively that each coefficient
in the expansion is given by the integral,
\begin{equation} \label{a_2nformula}
a_{2n}(D^2) = \int_M a_{2n}(x, D^2) \, dvol_g(x),
\end{equation}
where $a_{2n}(x, D^2)$ is an invariantly defined function on the manifold
defined in the local chart by
\begin{equation} \label{DensitiesFormula}
a_{2n}(x, D^2)=
\frac{(2 \pi)^{-m}}{2 \pi i} \int_{\mathbb{R}^m} \int_{\gamma} e^{-\lambda} \, \textnormal{tr}
\big ( r_{2n}(x, \xi, \lambda) \big ) \, d \lambda \, d^m \xi.
\end{equation}
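In the simplest scalar model with $m = 1$ and $p_2 = \xi^2$, the formula \eqref{DensitiesFormula} can be evaluated in closed form: the contour integral picks up the residue of $e^{-\lambda} r_0$ at $\lambda = \xi^2$, and one recovers the familiar density $(4\pi)^{-1/2}$. A sketch of this computation (our own illustration; the sign accounts for the clockwise orientation of $\gamma$):
\begin{verbatim}
import sympy as sp

xi, lam = sp.symbols('xi lamda')
r0 = 1 / (xi**2 - lam)

# (1/2 pi i) times the contour integral of e^{-lambda} r_0 equals minus
# the residue at lambda = xi^2, because gamma is traversed clockwise.
inner = -sp.residue(sp.exp(-lam) * r0, lam, xi**2)   # -> exp(-xi**2)
a0 = sp.integrate(inner, (xi, -sp.oo, sp.oo)) / (2*sp.pi)
print(sp.simplify(a0))                               # -> 1/(2*sqrt(pi))
\end{verbatim}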
\smallskip
The analysis involved for deriving the above expansion and formula
for the coefficient $a_{2n}(D^2)$ is quite intricate and as we mentioned
earlier, it is presented
in detail in Chapter 1 of the book \cite{GilBook1}. It should be stressed that the integrals
involved in the expression
for $a_{2n}(D^2)$ are possible to work out since one can show by
induction from the formulas \eqref{r_0formula} and \eqref{r_nformula} that
\[
\textnormal{tr} \big ( r_{n}(x, \xi, \lambda) \big ) =
\sum_{ \substack{n=2j - |\alpha|-2 \\ |\alpha| \leq 3n}} r_{n, j , \alpha}(x) \, \xi^\alpha \, \textnormal{tr}\left (r_0(x, \xi, \lambda)^j \right ).
\]
Therefore, using the method reviewed in this subsection, one can
calculate the $a_{2n}(D^2)$ explicitly in concrete examples. However, it should
be noted that this method involves heavy calculations that are cumbersome
even with computer assistance. In Subsection \ref{WodzickiRes},
we review an efficient method that we devised in \cite{FanFatMar1} for computing
the Seeley-de Witt coefficients $a_{2n}(D^2)$ by making use of
Wodzicki's noncommutative residue \cite{Wod1, Wod2}, which in general is the unique trace
functional on the algebra of classical pseudo-differential operators acting on the
smooth sections of a vector bundle on $M$.
This method is significantly more convenient from a computational point
of view and yields elegant proofs for rationality statements of the type
discussed in Section \ref{RationalitySec}.
\smallskip
\subsection{Calculation of heat coefficients using the noncommutative residue}
\label{WodzickiRes}
The symbol of a classical pseudo-differential operator of order
$d$ acting on the smooth sections of a vector bundle on an
$m$-dimensional
manifold $M$ admits by definition an expansion of the following form
as $\xi \to \infty$:
\begin{equation} \label{classicalsymbol}
\sigma (x, \xi)
\sim
\sum_{j=0}^\infty \sigma_{d-j} (x, \xi),
\end{equation}
where each
$\sigma_{d-j} : U \times \left ( \mathbb{R}^m \setminus \{ 0\}\right )
\to M_r(\mathbb{C})$ is positively homogeneous of order
$d-j$ in $\xi$. That is, identifying the endomorphisms of the vector bundle
on a local chart $U$ on $M$ with $M_r(\mathbb{C})$, each $\sigma_{d-j} :
U \times \left ( \mathbb{R}^m \setminus \{ 0\}\right ) \to M_r(\mathbb{C})$
is a smooth map such that
\[
\sigma_{d-j}(x, t\xi) = t^{d-j} \sigma_{d-j}(x, \xi),
\qquad (x, \xi) \in U \times \left ( \mathbb{R}^m
\setminus \{ 0\}\right ), \qquad
t > 0.
\]
\smallskip
The noncommutative residue \cite{Wod1, Wod2} of the pseudo-differential operator
$P_\sigma$ associated with a classical symbol $\sigma$ of the
above type is defined by
\begin{equation} \label{WodResidueDefEq}
\textrm{Res}(P_\sigma) =
\int_{S^*M} \textrm{tr} \big (\sigma_{-m}(x, \xi) \big ) \, d^{m-1}\xi \, d^mx.
\end{equation}
Some explanations are in order for the latter. First, note that
$m$ is the dimension of the manifold, which shows that the
noncommutative residue vanishes on the classical operators
of order less than $-m = - \textnormal{dim}(M)$. In particular,
it vanishes on infinitely smoothing operators, which allows one
to view the noncommutative residue as a linear functional defined
on the space of classical symbols with the following rule
for composing two classical symbols $\sigma_1$ and $\sigma_2$, inherited
from \eqref{SymbolCompositionRule}:
\[ \sigma_1 \circ \sigma_2
\sim
\sum_{\alpha \in \mathbb{Z}_{\geq 0}^m} \frac{(-i)^{|\alpha|} }{\alpha !}
\partial_\xi^\alpha \sigma_1(x, \xi) \, \partial_x^\alpha \sigma_2(x, \xi).
\]
Second, $S^*M = \{ (x, \xi) \in T^*M; ||\xi||_g=1 \}$ is the
cosphere bundle of $M$ and the formula \eqref{WodResidueDefEq} is the
integral over $M$ of a $1$-density associated with the classical symbol $\sigma$,
which is called the {\it Wodzicki residue density.} In each
cotangent fibre $\mathbb{R}^m \cong T_x^* M$, to which $\xi$ belongs,
consider the volume form on the unit sphere $|\xi|=1$,
\[
\sigma_\xi = \sum_{j=1}^m (-1)^{j-1} \xi_j \, d\xi_1
\wedge \cdots \wedge \widehat{d \xi_j} \wedge \cdots \wedge d \xi_m.
\]
Then, the mentioned 1-density associated with the classical
symbol $\sigma$ with the expansion \eqref{classicalsymbol} is
defined by
\[
\textnormal{wres}_x P_\sigma
=
\left ( \int_{|\xi |=1}
\textrm{tr} \left (\sigma_{-m}(x, \xi) \right ) |\sigma_\xi |
\right ) |dx^0 \wedge dx^1 \wedge \cdots \wedge dx^{m-1}|.
\]
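As a simple illustration of \eqref{WodResidueDefEq} (a toy example of our own, not taken from \cite{Wod1, Wod2}), let $\Delta$ be the flat Laplacian acting on functions on the torus $\mathbb{T}^2 = \left ( \mathbb{R}/\mathbb{Z} \right )^2$, so that $m = 2$. Away from $\xi = 0$, the symbol of its parametrix is $|\xi|^{-2}$, which is already homogeneous of order $-2$; hence $\sigma_{-2}(x, \xi) = |\xi|^{-2}$ and
\[
\textrm{Res}(\Delta^{-1}) = \int_{S^*\mathbb{T}^2} |\xi|^{-2} \Big |_{|\xi|=1} \, d\xi \, d^2 x
= \textnormal{vol}(\mathbb{T}^2) \cdot \textnormal{vol}(S^1) = 2 \pi.
\]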
\smallskip
Extensive discussions and alternative formulations of the
noncommutative residue, which was first discovered for 1-dimensional
symbols on the circle \cite{Adl, Man1},
are given in \cite{Wod1, Wod2, Kassel} and
Section 7.3 of the book \cite{GraVarFig}. The spectral formulation
of this residue plays an important role in noncommutative geometry
since it is used in the local index formula for spectral triples, developed in
\cite{ConMosLocal}. Also, formulas similar to the one given
by \eqref{WodResidueDefEq} are used in \cite{FatWon, FatKha} to
define noncommutative residues for the restriction of
Connes' pseudo-differential calculus \cite{ConCstarDiffGeo} to
noncommutative tori, which are handy computational tools, see
for example \cite{Fat} for an application.
\smallskip
Now let $D$ be the Dirac operator on a 4-dimensional
manifold. We are interested in
computing and understanding the nature of the Seeley-de Witt
coefficients $a_{2n}(D^2)$ appearing in an asymptotic
expansion of the form \eqref{AsymptoticExpansion} with $m=4$,
when $D$ is of this type, namely
the Dirac operator of a geometry describing a cosmological model.
Using an alternative spectral formulation of the noncommutative
residue and the K\"unneth formula, we showed in \cite{FanFatMar1} that
for any integer $n\geq 1$,
\[
a_{2n}(D^2) = \frac{1}{32\, \pi^{n+3}} \textnormal{Res}(\Delta^{-1}),
\]
where $\Delta^{-1}$ denotes the parametrix of the elliptic
differential operator
\[
\Delta = D^2 \otimes 1 + 1 \otimes \Delta_{\mathbb{T}^{2n-2}},
\]
in which $\Delta_{\mathbb{T}^{2n-2}}$ is the flat Laplacian on
the $(2n-2)$-dimensional torus $\mathbb{T}^{2n-2} =
\left ( \mathbb{R}/\mathbb{Z} \right )^{2n-2}$.
Evidently $\Delta$
acts on the smooth sections of the tensor product of the spin bundle of $M$ and
the 1-dimensional trivial bundle on the torus. Using the symbol of $\Delta$, which
is intimately related to the symbol of $D^2$, we showed in Corollary 4.1 of
\cite{FanFatMar1} that
\begin{equation} \label{HeatCoefsResEq}
a_{2n}(D^2)
=
\frac{1}{32 \pi^{n+3}} \int_{S^*(M \times \mathbb{T}^{2n-2})}
\textnormal{tr} \left ( \sigma_{-2n-2}(\Delta^{-1}) \right ) \, d^{2n+1} \xi' \, d^{4}x,
\end{equation}
where in the local chart $U$, $ \sigma_{-2n-2}(\Delta^{-1}): (U \times \mathbb{T}^{2n-2})
\times \mathbb{R}^{2n+2} \to M_4(\mathbb{C})$ is the homogeneous
component of order $-2n-2$ in the asymptotic expansion of the symbol of the
parametrix $\Delta^{-1}$
of $\Delta= D^2 \otimes 1 + 1 \otimes \Delta_{\mathbb{T}^{2n-2}}. $ In the proof of the
corollary, we explained in detail the calculation of a recursive formula for
$\sigma_{-2n-2}(\Delta^{-1})$, and stressed that it has no dependence on coordinates
of the torus, which is indicated in the formula \eqref{HeatCoefsResEq}.
\smallskip
As we mentioned
in the beginning of this section, this method is used crucially
for proving the rationality statements for Bianchi IX metrics, which are
elaborated on in Section \ref{RationalitySec}.
\smallskip
\section{Rationality of spectral action for Bianchi IX metrics}
\label{RationalitySec}
\smallskip
The goal of this section is to present a variant of the rationality
result proved in \cite{FanFatMar1}. That is, we consider a
time dependent conformal perturbation of the triaxial Bianchi IX
metric treated in \cite{FanFatMar1}, and show that the terms appearing
in the expansion of its spectral action in the energy scale are
expressed by several variable polynomials with {\it rational}
coefficients, evaluated on the expansion factors, the conformal
factor and their time derivatives up to a certain order. The reason
for considering this family of metrics is that they serve as a
general form for the Bianchi IX gravitational instantons \cite{Tod, Hit, BabKor}.
Indeed, combined with the parametrization of the latter in terms of theta
functions with characteristics \cite{BabKor}, the rationality result stimulates the construction
of modular functions from the spectral action, which is carried out in the
sequel.
\smallskip
For convenience, we briefly recall the formalism and explicit calculation of the
Dirac operator of the Bianchi IX metrics from \cite{FanFatMar1}, which can then be used
for the presentation of the Dirac operator of the conformally perturbed metric, given by
\eqref{ConformalBianchiIXMetricEq1}.
\smallskip
\subsection{Triaxial Bianchi IX metrics}
Euclidean Bianchi IX metrics are of the form
\begin{equation} \label{BianchimetricEq}
ds^2 = w_1 w_2 w_3 \, d\mu^2 +
\frac{w_2 w_3}{w_1} \sigma_1^2 +
\frac{w_3 w_1}{w_2} \sigma_2^2+
\frac{w_1 w_2}{w_3} \sigma_3^2,
\end{equation}
where the cosmic expansion factors $w_i$ are functions
of the cosmic time $\mu$. The $\sigma_i$ are left-invariant
1-forms on $SU(2)$-orbits satisfying
\[
d \sigma_1 = \sigma_2 \wedge \sigma_3, \qquad
d \sigma_2 = \sigma_3 \wedge \sigma_1, \qquad
d \sigma_3 = \sigma_1 \wedge \sigma_2.
\]
\smallskip
In order to write this metric explicitly in a local chart, in \cite{FanFatMar1},
we parametrized the 3-dimensional sphere $\mathbb{S}^3$ by
\[
( \eta, \phi, \psi)
\mapsto
\left ( \cos(\eta/2) e^{i (\phi+\psi)/2}, \sin(\eta/2) e^{i (\phi-\psi)/2} \right ),
\]
with the parameter ranges $0 \leq \eta \leq \pi, 0 \leq \phi < 2 \pi, 0 \leq \psi < 4 \pi$.
We then wrote the metric \eqref{BianchimetricEq} in the local coordinates
$x = (\mu, \eta, \phi, \psi)$, the expression of
which was found to be
\begin{eqnarray}
ds^2 &=& w_1 w_2 w_3 \, d\mu \,d\mu+\frac{w_1 w_2 \cos (\eta )}{w_3}d\phi \,d\psi
+\frac{w_1 w_2 \cos (\eta )}{w_3} d\psi \,d\phi \nonumber \\
& +&\left(\frac{w_2 w_3 \sin ^2(\eta )
\cos ^2(\psi )}{w_1}+w_1 \left(\frac{w_3 \sin ^2(\eta ) \sin ^2(\psi )}{w_2}+\frac{w_2
\cos ^2(\eta )}{w_3}\right)\right) d\phi \,d\phi \nonumber \\
& +& \frac{\left(w_1^2-w_2^2\right) w_3
\sin (\eta ) \sin (\psi ) \cos (\psi )}{w_1 w_2} d\eta \,d\phi \nonumber \\
&+& \frac{\left(w_1^2-w_2^2\right) w_3 \sin (\eta ) \sin (\psi ) \cos (\psi )}{w_1
w_2}d\phi \,d\eta \nonumber \\
& +&\left(\frac{w_2 w_3 \sin ^2(\psi )}{w_1} +\frac{w_1 w_3 \cos
^2(\psi )}{w_2}\right)d\eta \,d\eta +\frac{w_1 w_2}{w_3}d\psi \,d\psi. \nonumber
\end{eqnarray}
\smallskip
Going through the definitions and the construction reviewed in Subsection \ref{DiracOpSubSec},
the Dirac operator of this metric was then explicitly calculated:
\begin{eqnarray} \label{DiracBianchiEq}
D&=&-\sqrt{\frac{w_{1}}{w_{2}w_{3}}}\cot\eta\cos\psi\cdot\gamma^{1}\frac{\partial}{\partial\psi}+\sqrt{\frac{w_{1}}{w_{2}w_{3}}}\csc\eta\cos\psi\cdot\gamma^{1}\frac{\partial}{\partial\phi} \nonumber \\
&&-\sqrt{\frac{w_{1}}{w_{2}w_{3}}}\sin\psi\cdot\gamma^{1}\frac{\partial}{\partial\eta}
-\sqrt{\frac{w_{2}}{w_{1}w_{3}}}\cot\eta\sin\psi\cdot\gamma^{2}\frac{\partial}{\partial\psi}\nonumber \\
&&+\sqrt{\frac{w_{2}}{w_{1}w_{3}}}\csc\eta\sin\psi\cdot\gamma^{2}\frac{\partial}{\partial\phi}+\sqrt{\frac{w_{2}}{w_{1}w_{3}}}\cos\psi\cdot\gamma^{2}\frac{\partial}{\partial\eta}\nonumber \\
&&+\frac{1}{\sqrt{w_{1}w_{2}w_{3}}}\gamma^{0}\frac{\partial}{\partial\mu}+\sqrt{\frac{w_{3}}{w_{1}w_{2}}}\gamma^{3}\frac{\partial}{\partial\psi}+\frac{1}{4\sqrt{w_{1}w_{2}w_{3}}}\left(\frac{w_{1}^{'}}{w_{1}}+\frac{w_{2}^{'}}{w_{2}}+\frac{w_{3}^{'}}{w_{3}}\right)\gamma^{0} \nonumber \\
&&-\frac{\sqrt{w_{1}w_{2}w_{3}}}{4}\left(\frac{1}{w_{1}^{2}}+\frac{1}{w_{2}^{2}}+\frac{1}{w_{3}^{2}}\right)\gamma^{1}\gamma^{2}\gamma^{3}.
\end{eqnarray}
The pseudo-differential symbol associated with the latter has the following expression:
\begin{eqnarray*}
\sigma(D)(x, \xi)
&=&-\frac{i \gamma^1
\sqrt{w_1} \left(\csc (\eta )
\cos (\psi ) \left(\xi _4 \cos (\eta
)-\xi _3\right)+\xi _2 \sin (\psi
)\right)}{\sqrt{w_2}
\sqrt{w_3}} \\
&&+\frac{i
\gamma^2 \sqrt{w_2} \left(\sin
(\psi ) \left(\xi _3 \csc (\eta )-\xi _4
\cot (\eta )\right) +\xi _2 \cos (\psi
)\right)}{\sqrt{w_1}
\sqrt{w_3}} \\
&&+\frac{i
\gamma^0 \xi _1}{\sqrt{w_1}
\sqrt{w_2}
\sqrt{w_3}}+\frac{i
\gamma^3 \xi _4
\sqrt{w_3}}{\sqrt{w_1}
\sqrt{w_2}} +
\frac{1}{4\sqrt{w_{1}w_{2}w_{3}}}\left(\frac{w_{1}^{'}}{w_{1}}
+\frac{w_{2}^{'}}{w_{2}}+\frac{w_{3}^{'}}{w_{3}}\right)\gamma^{0} \\
&&-\frac{\sqrt{w_{1}w_{2}w_{3}}}{4}\left(\frac{1}{w_{1}^{2}}+\frac{1}{w_{2}^{2}}
+\frac{1}{w_{3}^{2}}\right)
\gamma^{1}\gamma^{2} \gamma^3.
\end{eqnarray*}
We also note that in our calculations we use the following gamma matrices
$\gamma^0,$ $\gamma^1$, $\gamma^2$ and $\gamma^3$, which are
respectively written as
{\small
\[
\left(
\begin{array}{cccc}
0 & 0 & i & 0 \\
0 & 0 & 0 & i \\
i & 0 & 0 & 0 \\
0 & i & 0 & 0
\end{array}
\right),
\left(
\begin{array}{cccc}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & -1 & 0 & 0 \\
-1 & 0 & 0 & 0
\end{array}
\right),
\left(
\begin{array}{cccc}
0 & 0 & 0 & -i \\
0 & 0 & i & 0 \\
0 & i & 0 & 0 \\
-i & 0 & 0 & 0
\end{array}
\right),
\left(
\begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{array}
\right).
\]
}
\smallskip
Using the methods reviewed in Subsection \ref{HeatExpSubSec}
and Subsection \ref{WodzickiRes}, the terms
$a_0,$ $a_2$ and $a_4$ in the expansion of the spectral action
for the metric \eqref{BianchimetricEq} were computed. Moreover,
the main result of \cite{FanFatMar1} is that a general term $a_{2n}$
in the expansion, modulo an integration with respect to $\mu$, is
of the form
\[
a_{2n}
=
(w_1w_2w_3)^{1-3n}Q_{2n}
\left(
w_1, w_2, w_3, w_1', w_2', w_3', \dots, w_1^{(2n)}, w_2^{(2n)}, w_3^{(2n)}
\right ),
\]
where $Q_{2n}$ is a polynomial of several variables with
rational coefficients.
The rationality result was proved by exploiting the
$SU(2)$ invariance of the 1-forms $\sigma_i$ appearing in \eqref{BianchimetricEq} and by
making use of the method reviewed in Subsection \ref{WodzickiRes}.
\smallskip
\subsection{Time dependent conformal perturbations
of Bianchi IX metrics}
By making a suitable choice of a conformal factor, an especially
interesting family of Bianchi IX metrics, called Bianchi IX
gravitational instantons, has been explicitly
expressed in \cite{BabKor}; these
are Einstein metrics with self-dual Weyl tensors.
It is remarkable that starting from writing a gravitational
instanton with $SU(2)$ symmetry in the general form,
\begin{equation} \label{ConformalBianchiIXMetricEq1}
d\tilde s^2= F ds^2 = F \left ( w_1 w_2 w_3 \, d\mu^2 +
\frac{w_2 w_3}{w_1} \sigma_1^2 +
\frac{w_3 w_1}{w_2} \sigma_2^2+
\frac{w_1 w_2}{w_3} \sigma_3^2 \right ),
\end{equation}
where, like the $w_i$, $F$ is also a function of the cosmic time
$\mu$,
the solutions of the equations for the self-duality of the
Weyl tensor and proportionality of the Ricci tensor to the
metric are classified completely in terms of solutions
to Painlev\'e VI integrable systems \cite{Hit, Oku, Tod}. In turn, the latter
can be solved \cite{BabKor} by using
the $\tau$-function of the Schlesinger system
formulated in terms of theta functions \cite{KitKor}.
We will review the explicit parametrization of the Bianchi IX
gravitational instantons in Section \ref{InstantonsSec}. In this subsection
we present a rationality statement for the spectral action
of the metric \eqref{ConformalBianchiIXMetricEq1}. This result indicates the existence of an
arithmetic structure in the spectral action of these metrics, which,
combined with the parametrization in terms of theta functions with characteristics \cite{BabKor},
leads to a construction of modular functions, which is one of our main
objectives in this paper.
\smallskip
By an explicit calculation following the notions and the construction described in
Subsection \ref{DiracOpSubSec}, we find that the Dirac operator
$\tilde D$ of the metric \eqref{ConformalBianchiIXMetricEq1}
is given by
\begin{equation} \label{DiracConfBianchiIXEq}
\tilde{D}=\frac{1}{\sqrt{F}}D+\frac{3F^{'}}{4F^{\frac{3}{2}}\sqrt{w_{1}w_{2}w_{3}}}\gamma^{0},
\end{equation}
where $D$ is the Dirac operator given by \eqref{DiracBianchiEq},
the Dirac operator of the metric \eqref{BianchimetricEq}.
\smallskip
We also find that
\begin{eqnarray*}
\tilde{D}^{2}&=&
\frac{1}{F}D^{2}
+\frac{F^{'}}{2F^{2}w_{1}w_{2}w_{3}}\left(w_{1}\gamma^{0}\gamma^{1}\sin\psi-w_{2}\gamma^{0}\gamma^{2}\cos\psi\right)\frac{\partial}{\partial \eta} \\
&&-\frac{F^{'}\csc\eta}{2F^{2}w_{1}w_{2}w_{3}}\left(w_{1}\gamma^{0}\gamma^{1}\cos\psi-w_{2}\gamma^{0}\gamma^{2}\sin\psi\right)\frac{\partial}{\partial \phi}\\
&&+\frac{F^{'}\cot\eta}{2F^{2}w_{1}w_{2}w_{3}}\left(w_{1}\gamma^{0}\gamma^{1}\cos\psi+w_{2}\gamma^{0}\gamma^{2}\sin\psi-w_{3}\gamma^{0}\gamma^{3}\tan\eta\right)
\frac{\partial}{\partial \psi} \\
&&-\frac{F^{'}}{F^{2}w_{1}w_{2}w_{3}}\frac{\partial}{\partial \mu}+\frac{9F^{'2}}{16F^{3}w_{1}w_{2}w_{3}}+\frac{F^{'}w_{1}^{'}}{8F^{2}w_{1}^{2}w_{2}w_{3}}+\frac{F^{'}w_{2}^{'}}{8F^{2}w_{1}w_{2}^{2}w_{3}} \\
&&+\frac{F^{'}w_{3}^{'}}{8F^{2}w_{1}w_{2}w_{3}^{2}}+\frac{1}{8F^{2}w_{1}^{2}w_{2}^{2}w_{3}^{2}}(\frac{1}{w_{1}^{2}}+\frac{1}{w_{2}^{2}}+\frac{1}{w_{3}^{2}})F^{'}\gamma^{0} \gamma^{1}\gamma^{2}\gamma^{3}.
\end{eqnarray*}
Therefore, we have the pseudo-differential symbol of $\tilde D^2$,
since that of $D^2$ was calculated in \cite{FanFatMar1}.
\smallskip
Now, by following a quite similar approach to the one taken in
\cite{FanFatMar1} for the rationality result, we present
a variant of that result for the metric
\eqref{ConformalBianchiIXMetricEq1}. That is, considering the
asymptotic expansion
\begin{equation} \label{ExpAsympConformalEq}
\text{Trace}\left ( \exp (-t \tilde{D}^2) \right )
\sim
t^{-2} \sum_{n=0}^\infty \tilde{a}_{2n} t^n, \qquad t \to 0^+,
\end{equation}
each $\tilde a_{2n}$ is of the general form written in the following
statement.
\smallskip
\begin{theorem} \label{ConformalRationlaityThm}
The term $\tilde{a}_{2n}$ in the above asymptotic
expansion, modulo an integration with respect to $\mu$,
is of the form
\[
\tilde{a}_{2n} = \frac{\tilde{Q}_{2n}
\left (w_1, w_2, w_3, F, w_1', w_2', w_3', F', \dots,
w_1^{(2n)}, w_2^{(2n)}, w_3^{(2n)}, F^{(2n)} \right )}{F^{2n} (w_1 w_2 w_3)^{3n-1}},
\]
where $\tilde{Q}_{2n}$ is a polynomial of several variables with
rational coefficients.
\begin{proof}
We provide an outline. One can exploit the $SU(2)$-invariance of the
1-forms $\sigma_i$ appearing in the metric \eqref{ConformalBianchiIXMetricEq1}
to show that functions of the type \eqref{DensitiesFormula}, whose integrals give
the coefficients $\tilde a_{2n}$, have no spatial dependence when the metric
is given by \eqref{ConformalBianchiIXMetricEq1}. Then, one employs the formula
\eqref{HeatCoefsResEq} along with the pseudo-differential symbol of $\tilde D^2$,
and continues with similar arguments to those of
Theorem 5.1 in \cite{FanFatMar1}.
\end{proof}
\end{theorem}
\smallskip
Let us end this section by recording explicit expressions for the
first few coefficients appearing in \eqref{ExpAsympConformalEq},
which were computed in two different ways to confirm their validity.
In fact we first computed them by the method reviewed in Subsection
\ref{HeatExpSubSec} leading to the formulas \eqref{a_2nformula}
and \eqref{DensitiesFormula}, and then confirmed that the expressions
match precisely with the outcome of our calculations based on the formula
\eqref{HeatCoefsResEq} which used the noncommutative residue.
\smallskip
The first coefficient, which is the volume term, is given by
\begin{equation} \label{a_0Eq}
\tilde{a}_{0}=4F^{2}w_{1}w_{2}w_{3}.
\end{equation}
The next term, which is the Einstein-Hilbert action term, has the following
rather short expression, which indicates the occurrence of remarkable
simplifications in the final formula:
\begin{eqnarray} \label{a_2Eq}
\tilde{a}_{2} &=& -\frac{F}{3}\Big (w_{1}^{2}+w_{2}^{2}+w_{3}^{2} \Big)+\frac{F}{6}\Big (\frac{w_{1}^{2}w_{2}^{2}-w_{3}^{'2}}{w_{3}^{2}}+\frac{w_{1}^{2}w_{3}^{2}-w_{2}^{'2}}{w_{2}^{2}}+\frac{w_{2}^{2}w_{3}^{2}-w_{1}^{'2}}{w_{1}^{2}} \Big ) \nonumber \\
&& -\frac{F}{3}\Big (\frac{w_{1}^{'}w_{2}^{'}}{w_{1}w_{2}}+\frac{w_{1}^{'}w_{3}^{'}}{w_{1}w_{3}}+\frac{w_{2}^{'}w_{3}^{'}}{w_{2}w_{3}}\Big )+\frac{F}{3}\Big (\frac{w_{1}^{''}}{w_{1}}+\frac{w_{2}^{''}}{w_{2}}+\frac{w_{3}^{''}}{w_{3}}\Big )-\frac{F^{'2}}{2F}+F^{''}.
\end{eqnarray}
The term $\tilde{a}_{4}$, which is the Gauss-Bonnet term, also enjoys
remarkable simplifications in its final formula; however, since it has a
lengthier expression, we present it in Appendix \ref{fulla_4appendix}.
\smallskip
\section{Bianchi IX gravitational instantons}
\label{InstantonsSec}
\smallskip
There is an especially interesting family of metrics called Bianchi IX
gravitational instantons, which have been explored in the literature by
imposing the self-duality
condition on triaxial Bianchi IX metrics and by employing a time-dependent conformal
factor $F(\mu)$ to obtain an Einstein metric, see \cite{Tod, Hit, BabKor} and references therein. Let us provide an outline of some of the main ideas and steps for
deriving these metrics from the literature. In particular, we will then present the explicit
parametrization of the solutions from \cite{BabKor} in terms of theta functions with characteristics.
\smallskip
The sought-after gravitational instantons can be written in the
general form
\begin{equation} \label{ConformalBianchiIXMetricEq2}
d\tilde s^2= F ds^2 = F \left ( w_1 w_2 w_3 \, d\mu^2 +
\frac{w_2 w_3}{w_1} \sigma_1^2 +
\frac{w_3 w_1}{w_2} \sigma_2^2+
\frac{w_1 w_2}{w_3} \sigma_3^2 \right ).
\end{equation}
Then, the differential equations derived from imposing the self-duality of the Weyl tensor
and the condition of being an Einstein metric are solved \cite{Tod, Hit, BabKor} by turning
them into well-studied systems of differential equations as follows.
One can start by considering a basis of anti-self-dual 2-forms
\[
\varphi^j = w_k w_l \,d\mu \wedge \sigma_j - w_j\, \sigma_k \wedge \sigma_l,
\]
where $(j, k, l)$ runs over the cyclic permutations of $(1, 2, 3)$.
Consider the connection 1-forms $\alpha^j_k$ appearing in
$
d \varphi^j = \sum_k \alpha^j_k \wedge \varphi^k.
$
This leads to writing
$
\alpha^j_k = \frac{A_k}{w_k} \sigma_k,
$
where the functions $A_k$ satisfy the system of equations
\begin{equation} \label{wjAjDiffEq}
\frac{d w_j}{d \mu} = - w_k w_l + w_j (A_k + A_l).
\end{equation}
It can then be seen that the self-duality condition on the Weyl tensor
yields the classical Halphen system
\begin{equation} \label{HalphenEq}
\frac{d A_j}{d \mu}= - A_k A_l +A_j(A_k + A_l).
\end{equation}
Remarkably, the latter has well-known solutions in terms of the
theta functions that will be defined shortly by \eqref{varthetasEq}.
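For illustration only, the Halphen system \eqref{HalphenEq} can be integrated numerically in a few lines; in the following Python sketch (our own, with arbitrarily chosen initial data) the right hand sides are written out for the three cyclic permutations:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def halphen(mu, A):
    # dA_j/dmu = -A_k A_l + A_j (A_k + A_l), (j,k,l) cyclic
    A1, A2, A3 = A
    return [-A2*A3 + A1*(A2 + A3),
            -A3*A1 + A2*(A3 + A1),
            -A1*A2 + A3*(A1 + A2)]

sol = solve_ivp(halphen, (0.0, 1.0), [-1.0, -2.0, -3.0], rtol=1e-10)
print(sol.y[:, -1])
\end{verbatim}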
\smallskip
One can define a new variable
$
x= \frac{A_1-A_2}{A_3-A_2},
$
which has an evident dependence on $\mu$. Then the Halphen system \eqref{HalphenEq} reduces
to the following differential equation, which is satisfied by the inverse
of the elliptic modular function; see \cite{Tod} and references therein:
\[
\frac{ d^3 x / d \mu^3 }{d x / d \mu}
=
\frac{3}{2} \left( \frac{ d^2 x / d \mu^2 }{d x / d \mu} \right)^2
- \frac{1}{2} \left( \frac{dx}{d \mu} \right)^2 \left ( \frac{1}{x^2} + \frac{1}{x(1-x)} + \frac{1}{(1-x)^2} \right ).
\]
Therefore, one can solve the equation \eqref{HalphenEq} and substitute the solution in
\eqref{wjAjDiffEq}. The latter can then be solved more conveniently by setting
\begin{eqnarray*}
w_1 &=& \Omega_1 \frac{dx}{d \mu} \left ( x (1-x) \right )^{-1/2}, \\
w_2 &=& \Omega_2 \frac{dx}{d \mu} \left ( x^2 (1-x) \right )^{-1/2}, \\
w_3 &=& \Omega_3 \frac{dx}{d \mu} \left ( x (1-x)^2 \right )^{-1/2},
\end{eqnarray*}
and by viewing $x$ as the independent variable, which yields:
\begin{eqnarray*}
\frac{d \Omega_1}{dx} = - \frac{\Omega_2 \Omega_3}{x(1-x)},
\qquad
\frac{d \Omega_2}{dx} = - \frac{\Omega_3 \Omega_1}{x},
\qquad
\frac{d \Omega_3}{dx} = - \frac{\Omega_1 \Omega_2}{1-x}.
\end{eqnarray*}
It is well known that these equations reduce to the
Painlev\'e VI equation with particular parameters; imposing, in addition,
the condition that the metric be Einstein in its conformal class
by means of a time-dependent conformal factor singles out
the following rational parameters \cite{Hit, Oku, Tod}:
\[
(\alpha ,\beta ,\gamma , \delta)
=
(\frac{1}{8}, -\frac{1}{8}, \frac{1}{8}, \frac{3}{8}).
\]
\smallskip
It should be noted that in general a Painlev\'e VI equation with
parameters $(\alpha ,\beta ,\gamma , \delta)$ is of the form
\begin{eqnarray*}
\frac{d^2X}{dt^2}&=&\frac{1}{2}\left(
\frac{1}{X}+\frac{1}{X-1}+\frac{1}{X-t}\right)
\left(\frac{dX}{dt}\right)^2
-\left( \frac{1}{t}+\frac{1}{t-1}+\frac{1}{X-t}\right)\frac{dX}{dt} \\&&
+\frac{X(X-1)(X-t)}{t^2(t-1)^2}
\left(\alpha +
\beta\frac{t}{X^2}+\gamma\frac{t-1}{(X-1)^2}+
\delta\frac{t(t-1)}{(X-t)^2}\right).
\end{eqnarray*}
\smallskip
Going through the process outlined above,
solving the involved equations in terms of
elliptic theta functions \cite{Tod, Hit, BabKor}, and using
the formula for the $\tau$-function of the Schlesinger system \cite{KitKor},
together with an elegant additional calculation of the square roots of certain expressions in \cite{BabKor},
cf. \cite{Hit}, a
parametrization of the Bianchi IX
gravitational instantons can be given as follows.
\smallskip
The final solutions in \cite{BabKor} are written in terms of theta functions with characteristics,
which for $p, q, z \in \mathbb{C}$ and $i \mu \in \mathbb{H},$
are given by
\begin{equation} \label{ThetawithCharEq}
\vartheta [p,q](z, i\mu )
=
\sum_{m\in {\mathbb Z}} \exp \left( -\pi (m+p)^2\mu + 2\pi i (m+p)(z+q)\right).
\end{equation}
Considering Jacobi's theta function defined by
\[
\Theta( z | \tau) = \sum_{m \in \mathbb{Z}} e^{\pi i m^2 \tau} e^{2 \pi i m z},
\qquad z \in \mathbb{C}, \qquad \tau \in \mathbb{H},
\]
and by using the notation which sets $z=0$ in \eqref{ThetawithCharEq},
\begin{equation} \label{varthetapqEq}
\vartheta [p,q]( i \mu) = \vartheta [p,q] (0,i\mu ),
\end{equation}
we also need to introduce the following functions:
\begin{eqnarray} \label{varthetasEq}
\vartheta_{2}(i\mu)&=&\vartheta[\frac{1}{2},0](i\mu)=\sum_{m\in\mathbb{Z}}\exp\{-\pi(m+\frac{1}{2})^{2}\mu\}=e^{-\frac{1}{4}\pi\mu}\Theta \big (\frac{i\mu}{2}|i\mu \big), \nonumber \\
\vartheta_{3}(i\mu)&=&\vartheta[0,0](i\mu)=\sum_{m\in\mathbb{Z}}\exp\{-\pi m^{2}\mu\}=\Theta(0|i\mu), \nonumber \\
\vartheta_{4}(i\mu)&=&\vartheta[0,\frac{1}{2}](i\mu)=\sum_{m\in\mathbb{Z}}\exp\{-\pi m^{2}\mu+\pi im\}=\Theta \big (\frac{1}{2}|i\mu \big ).
\end{eqnarray}
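These series converge very rapidly, so the functions \eqref{varthetasEq} are straightforward to evaluate numerically. The following Python sketch (our own sanity check) verifies Jacobi's identity $\vartheta_3^4 = \vartheta_2^4 + \vartheta_4^4$ and the transformation $\vartheta_3(\frac{i}{\mu}) = \mu^{1/2}\, \vartheta_3(i\mu)$, which will be proved below:
\begin{verbatim}
import numpy as np

def theta(p, q, mu, N=60):
    # theta[p,q](i mu) as a truncated sum; for mu > 0 the terms decay
    # like exp(-pi m^2 mu), so small N already gives machine precision.
    m = np.arange(-N, N + 1)
    return np.sum(np.exp(-np.pi*(m+p)**2*mu + 2j*np.pi*(m+p)*q))

mu = 0.7
t2, t3, t4 = theta(.5, 0, mu), theta(0, 0, mu), theta(0, .5, mu)
print(abs(t3**4 - t2**4 - t4**4))                # Jacobi identity, ~0
print(abs(theta(0, 0, 1/mu) - np.sqrt(mu)*t3))   # S-transformation, ~0
\end{verbatim}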
\smallskip
We are now ready to write the explicit formulas for the Bianchi IX gravitational instantons
presented in \cite{BabKor}, in the following two subsections. The first family is two-parametric,
which consists of the case of non-vanishing cosmological constants, and the second is a
one-parametric family whose cosmological constants vanish. By studying the asymptotic
behavior of these solutions as $\mu \to \infty$, it is shown in \cite{ManMar} that
they approximate Eguchi-Hanson type gravitational instantons with $ w_1 \neq w_2 = w_3 $
\cite{EguHan} for large $\mu$.
\smallskip
\subsection{The two-parametric family with non-vanishing cosmological constants}
The two-parametric family of solutions with
parameters $p, q \in \mathbb{C}$ is given by
substituting the following functions in the metric
\eqref{ConformalBianchiIXMetricEq2}:
\begin{eqnarray} \label{two-parametric}
w_{1}[p,q](i\mu)&=&-\frac{i}{2}\vartheta_{3}(i\mu)\vartheta_{4}(i\mu)\frac{\partial_{q}\vartheta[p,q+\frac{1}{2}](i\mu)}{e^{\pi ip}\vartheta[p,q](i\mu)}, \nonumber \\
w_{2}[p,q](i\mu)&=&\frac{i}{2}\vartheta_{2}(i\mu)\vartheta_{4}(i\mu)\frac{\partial_{q}\vartheta[p+\frac{1}{2},q+\frac{1}{2}](i\mu)}{e^{\pi ip}\vartheta[p,q](i\mu)}, \nonumber \\
w_{3}[p,q](i\mu)&=&-\frac{1}{2}\vartheta_{2}(i\mu)\vartheta_{3}(i\mu)\frac{\partial_{q}\vartheta[p+\frac{1}{2},q](i\mu)}{\vartheta[p,q](i\mu)}, \nonumber \\
F[p,q](i\mu)&=&\frac{2}{\pi\Lambda} \frac{1}{(\partial_{q}\ln\vartheta[p,q](i\mu))^{2}}=\frac{2}{\pi\Lambda} \left(\frac{\vartheta[p,q](i\mu)}{\partial_{q}\vartheta[p,q](i\mu)}\right)^{2}.
\end{eqnarray}
\smallskip
\subsection{The one-parametric family with vanishing cosmological constants}
The one-parametric family of metrics with the parameter $q_0 \in \mathbb{R}$ is given by
the following solutions that need to be substituted in the metric \eqref{ConformalBianchiIXMetricEq2}:
\begin{eqnarray} \label{one-parametric}
w_1[q_0](i \mu) &=& \frac{1}{\mu+q_0} +2 \frac{d}{d\mu} \log \vartheta_2 (i \mu), \nonumber \\
w_2[q_0](i \mu) &=& \frac{1}{\mu+q_0} +2 \frac{d}{d\mu} \log \vartheta_3 (i \mu), \nonumber \\
w_3[q_0](i \mu) &=&\frac{1}{\mu+q_0} +2 \frac{d}{d\mu} \log \vartheta_4 (i \mu), \nonumber \\
F[q_0](i \mu)&=& C (\mu + q_0)^2,
\end{eqnarray}
where $C$ is an arbitrary positive constant.
\smallskip
\section{Arithmetics of Bianchi IX gravitational instantons}
\label{ArithmeticsofInstantonsSec}
\smallskip
This section is devoted to the investigation of modular properties
of the functions appearing in the formulas \eqref{two-parametric}
and \eqref{one-parametric}, as well as those of their derivatives. When the functions
$w_1$, $w_2$, $w_3$ and $F$ are substituted from the latter identities in the
metric \eqref{ConformalBianchiIXMetricEq2}, Theorem \ref{ConformalRationlaityThm} implies that the
Seeley-de Witt coefficients
$\tilde a_{2n}$ are rational functions of $\vartheta_2, \vartheta_3, \vartheta_4$, $\vartheta[p,q]$,
$\partial_q \vartheta[p,q]$, $e^{i\pi p}$ and their derivatives with rational coefficients.
Therefore, finding out modular properties of these theta functions and consequently that of the functions
$w_1$, $w_2$, $w_3$, $F$ and their derivatives, will help us to
investigate modular transformation laws of the $\tilde a_{2n}$, under
modular transformations on $i \mu$ belonging to the upper half-plane $\mathbb{H}$.
\smallskip
We begin by studying the two-parametric case \eqref{two-parametric}. First, let us note that for the derivatives
of the function $\vartheta[p, q](i \mu)$ given by \eqref{varthetapqEq}, we have
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta[p,q](i\mu) &=& \sum_{m\in\mathbb{Z}}(-\pi)^{n}(m+p)^{2n}
e^{ -\pi(m+p)^{2}\mu+2\pi i(m+p)q}, \\
\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu)&=&2i(-1)^{n}\pi^{n+1}\sum_{m\in\mathbb{Z}}(m+p)^{2n+1} e^{-\pi(m+p)^{2}\mu+2\pi i(m+p)q }.
\end{eqnarray*}
We also need to prove the following lemma, which will be of crucial use
for proving the transformation properties investigated in this section.
\smallskip
\begin{lemma} \label{PoissonsumLem}
We have
\begin{eqnarray*}
&&\sum_{m\in\mathbb{Z}}(m+p)^{n}e^{-\frac{\pi}{\mu}(m+p)^{2}+2\pi i(m+p)q} \\
&& \quad = e^{2\pi ipq}\sum_{j=0}^{[n/2]}
\Big ( \frac{(-i)^{n-2j}\mu^{n+\frac{1}{2}-j}\, n!}{(2\pi)^{j}(n-2j)!\cdot(2j)!!}
\sum_{m\in\mathbb{Z}}(m-q)^{n-2j}e^{-\pi\mu(m-q)^{2}+2\pi ip(m-q)} \Big ).
\end{eqnarray*}
\begin{proof}
It follows from the following identity and the Poisson summation
formula:
\begin{eqnarray*}
&&\int d\xi\cdot e^{2\pi i\cdot\xi x}(\xi+p)^{n}e^{-\frac{\pi}{\mu}(\xi+p)^{2}+2\pi i(\xi+p)q} \\
&&\quad =e^{2\pi ipq}e^{-\pi\mu(-x-q)^{2}+2\pi ip(-x-q)}\int d\xi\cdot(\xi+i\mu(x+q))^{n}e^{-\frac{\pi}{\mu}\xi{}^{2}} \\
&& =e^{2\pi ipq}e^{-\pi\mu(-x-q)^{2}+2\pi ip(-x-q)}\sum_{j=0}^{[n/2]}\frac{i^{n-2j}\mu^{n+\frac{1}{2}-j}}{(2\pi)^{j}}\frac{n!}{(n-2j)!\cdot(2j)!!}(x+q)^{n-2j}.
\end{eqnarray*}
\end{proof}
\end{lemma}
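As a quick numerical sanity test of the lemma (not a substitute for the proof), one can compare both sides for $n = 1$, where the only term is $j = 0$ with coefficient $(-i)\mu^{3/2}$:
\begin{verbatim}
import numpy as np

# Check the lemma for n = 1 with arbitrarily chosen p, q, mu.
p, q, mu, N = 0.3, 0.2, 0.8, 80
m = np.arange(-N, N + 1)
lhs = np.sum((m+p) * np.exp(-np.pi/mu*(m+p)**2 + 2j*np.pi*(m+p)*q))
rhs = np.exp(2j*np.pi*p*q) * (-1j) * mu**1.5 * np.sum(
    (m-q) * np.exp(-np.pi*mu*(m-q)**2 + 2j*np.pi*p*(m-q)))
print(abs(lhs - rhs))   # ~ 0 up to rounding
\end{verbatim}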
\smallskip
For convenience, we need to use the constants
\[
C(j|n)=\frac{(-i){}^{n}n!}{2^{j}(n-2j)!\cdot(2j)!!},
\]
which arise naturally in exploring the following
transformation properties.
\smallskip
The following lemma shows that the function $\vartheta[p, q](i \mu)$
and its derivatives with respect to $\mu$ possess periodicity and
quasi-periodicity properties with respect to the variables $p$ and $q$.
\smallskip
\begin{lemma} \label{transformationsvartheta}
The functions
$\vartheta[p,q]$ are holomorphic in the half-plane $\Re (\mu)>0$
and satisfy the following properties:
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta[p,q+1](i\mu) &=& e^{2\pi ip}\partial_{\mu}^{n}\vartheta[p,q](i\mu), \\
\partial_{\mu}^{n}\partial_{q}\vartheta[p,q+1](i\mu)&=&e^{2\pi ip}\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu), \\
\partial_{\mu}^{n}\vartheta[p+1,q](i\mu) &=&\partial_{\mu}^{n}\vartheta[p,q](i\mu), \\
\partial_{\mu}^{n}\partial_{q}\vartheta[p+1,q](i\mu) &=& \partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu).
\end{eqnarray*}
\begin{proof}
One can start from the definition given by \eqref{varthetapqEq} and \eqref{ThetawithCharEq} to write
the following for proving the first identity.
\begin{eqnarray*}
&&\partial_{\mu}^{n}\vartheta[p,q+1](i\mu) \\
&& \quad =
e^{2\pi ip}(-1)^{n}\pi^{n}\sum_{m\in\mathbb{Z}}(m+p)^{2n}\exp\{-\pi(m+p)^{2}\mu+2\pi i(m+p)q\} \\
&& \quad =e^{2\pi ip}\partial_{\mu}^{n}\vartheta[p,q](i\mu).
\end{eqnarray*}
The second identity can be seen to hold by writing
\begin{eqnarray*}
&&\partial_{\mu}^{n}\partial_{q}\vartheta[p,q+1](i\mu)\\
&&\quad =e^{2\pi ip}2i(-1)^{n}\pi^{n+1}\sum_{m\in\mathbb{Z}}(m+p)^{2n+1}\cdot\exp\{-\pi(m+p)^{2}\mu+2\pi i(m+p)q\} \\
&& \quad =e^{2\pi ip}\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu).
\end{eqnarray*}
This proves the quasi-periodicity property of the theta function and its derivative with the quasi-period
1 in the $q$-variable.
\smallskip
The periodicity properties with period 1 in the $p$-variable can be similarly investigated by writing
\begin{eqnarray*}
&&\partial_{\mu}^{n}\vartheta[p+1,q](i\mu) \\
&& \quad =(-1)^{n}\pi^{n}\sum_{m\in\mathbb{Z}}(m+1+p)^{2n}\exp\{-\pi(m+1+p)^{2}\mu+2\pi i(m+1+p)q\} \\
&& \quad =\partial_{\mu}^{n}\vartheta[p,q](i\mu),
\end{eqnarray*}
and
\begin{eqnarray*}
&&\partial_{\mu}^{n}\partial_{q}\vartheta[p+1,q](i\mu) \\
&& =2i(-1)^{n}\pi^{n+1}\sum_{m\in\mathbb{Z}}(m+1+p)^{2n+1}\cdot\exp\{-\pi(m+1+p)^{2}\mu+2\pi i(m+1+p)q\} \\
&&\quad =\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu).
\end{eqnarray*}
\end{proof}
\end{lemma}
\smallskip
We now focus on modular transformations. For convenience, we use the following
notation for the linear fractional transformations corresponding to the generators
of the modular group acting on the upper half-plane $\mathbb{H}$ in the
complex plane:
\begin{equation} \label{T_1andSEq}
T_1(\tau) = \tau + 1, \qquad S(\tau) = \frac{-1}{\tau}, \qquad \tau \in \mathbb{H}.
\end{equation}
\smallskip
In the following lemma, we present the transformation properties of the function
$\vartheta[p, q](i \mu)$ and its derivatives under the modular transformations $T_1$
and $S$ given by \eqref{T_1andSEq}, as well as $T_1$ applied twice, on $i \mu \in \mathbb{H}$.
\smallskip
\begin{lemma} \label{transformationsvarthetapq}
Let $\mu$ be a complex number in the right half-plane $\Re (\mu) >0$. We have
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta[p,q](i\mu+1)&=& e^{-\pi ip(p+1)}\partial_{\mu}^{n}\vartheta[p,q+p+\frac{1}{2}](i\mu),\\
\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu+1) &=& e^{-\pi ip(p+1)}\partial_{\mu}^{n}\partial_{q}\vartheta[p,q+p+\frac{1}{2}](i\mu),\\
\partial_{\mu}^{n}\vartheta[p,q](i\mu+2)&=& e^{-2\pi ip^{2}}\partial_{\mu}^{n}\vartheta[p,q+2p](i\mu),\\
\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu+2) &=& e^{-2\pi ip^{2}}\partial_{\mu}^{n}\partial_{q}\vartheta[p,q+2p](i\mu),\\
\partial_{\mu}^{n}\vartheta[p,q](\frac{i}{\mu}) &=& e^{2\pi ipq}\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta[-q,p](i\mu),\\
\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](\frac{i}{\mu}) &=& e^{2\pi ipq}\sum_{j=0}^{n}C(j|2n+1)\mu^{2n+\frac{3}{2}-j}\partial_{\mu}^{n-j}\partial_{p}\vartheta[-q,p](i\mu).
\end{eqnarray*}
\begin{proof}
Considering the formulas \eqref{varthetapqEq} and \eqref{ThetawithCharEq} we can write
\begin{eqnarray*}
&&\partial_{\mu}^{n}\vartheta[p,q](i\mu+1) \\
&&\quad =(-1)^{n}\pi^{n}\sum_{m\in\mathbb{Z}}(m+p)^{2n}e^{-\pi(m+p)^{2}\mu+2\pi i(m+p)(q+p+\frac{1}{2})-\pi ip^2 - \pi i p} \\
&& \quad =e^{-\pi ip(p+1)}\partial_{\mu}^{n}\vartheta[p,q+p+\frac{1}{2}](i\mu),
\end{eqnarray*}
and
\begin{eqnarray*}
&&\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](i\mu+1) \\
&& \quad =2i(-1)^{n}\pi^{n+1}\sum_{m\in\mathbb{Z}}(m+p)^{2n+1}\cdot e^{-\pi(m+p)^{2}\mu+2\pi i(m+p)(q+p+\frac{1}{2})-\pi ip^2 - \pi i p} \\
&& \quad =e^{-\pi ip(p+1)}\partial_{\mu}^{n}\partial_{q}\vartheta[p,q+p+\frac{1}{2}](i\mu).
\end{eqnarray*}
This establishes the first two identities for the transformation properties of $\vartheta[p, q]$ and
its derivatives with respect to the modular action $T_1$ given by \eqref{T_1andSEq}; the third and the fourth identities follow immediately from the first and the second by applying them twice.
\smallskip
In order to investigate the transformation properties with respect to the action of $S$ given in \eqref{T_1andSEq},
we need to use Lemma \ref{PoissonsumLem}, which allows us to write
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta[p,q](\frac{i}{\mu})&=&(-1)^{n}\pi^{n}\sum_{m\in\mathbb{Z}}(m+p)^{2n}\exp\{-\frac{\pi}{\mu}(m+p)^{2}+2\pi i(m+p)q\} \\
&=& e^{2\pi ipq}\sum_{j=0}^{n}\mu^{2n+\frac{1}{2}-j}\frac{(-1)^{n}(2n)!}{2^{j}(2n-2j)!\cdot(2j)!!}(-1)^{n-j}\pi^{n-j} \times \\
&& \qquad \qquad \qquad\qquad \sum_{m\in\mathbb{Z}}(m-q)^{2n-2j}e^{-\pi\mu(m-q)^{2}+2\pi ip(m-q)} \\
&=&e^{2\pi ipq}\sum_{j=0}^{n}\mu^{2n+\frac{1}{2}-j}\frac{(-i){}^{2n}(2n)!}{2^{j}(2n-2j)!\cdot(2j)!!}\partial_{\mu}^{n-j}\vartheta[-q,p](i\mu) \\
&=&e^{2\pi ipq}\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta[-q,p](i\mu).
\end{eqnarray*}
Also we have:
\begin{eqnarray*}
\partial_{\mu}^{n}\partial_{q}\vartheta[p,q](\frac{i}{\mu})&=&2i(-1)^{n}\pi^{n+1}\sum_{m\in\mathbb{Z}}(m+p)^{2n+1}\cdot e^{ -\frac{\pi}{\mu}(m+p)^{2}+2\pi i(m+p)q } \\
&=&e^{2\pi ipq}\sum_{j=0}^{n}\mu^{2n+\frac{3}{2}-j}\frac{-i(-1)^{n}(2n+1)!}{2^{j}(2n+1-2j)!\cdot(2j)!!}2i(-1)^{n-j}\pi^{n-j+1} \times \\
&& \qquad \qquad \qquad \qquad \quad \sum_{m\in\mathbb{Z}}(m-q)^{2(n-j)+1}e^{-\pi\mu(m-q)^{2}+2\pi ip(m-q)} \\
&=& e^{2\pi ipq}\sum_{j=0}^{n}\mu^{2n+\frac{3}{2}-j}\frac{(-i)^{2n+1}(2n+1)!}{2^{j}(2n+1-2j)!\cdot(2j)!!}\partial_{\mu}^{n-j}\partial_{p}\vartheta[-q,p](i\mu) \\
&=&e^{2\pi ipq}\sum_{j=0}^{n}C(j|2n+1)\mu^{2n+\frac{3}{2}-j}\partial_{\mu}^{n-j}\partial_{p}\vartheta[-q,p](i\mu).
\end{eqnarray*}
\end{proof}
\end{lemma}
\smallskip
Now we investigate the transformation properties of
the functions $\vartheta_2, \vartheta_3, \vartheta_4,$
given by \eqref{varthetasEq} and their derivatives, under
the same modular actions as the ones considered above, namely
$T_1$ and $S$ given by \eqref{T_1andSEq}
transforming $i \mu$ in the upper half-plane.
\smallskip
\begin{lemma}
\label{transformationsvartheta234}
Let $\Re(\mu) >0$.
The functions $\vartheta_2, \vartheta_3, \vartheta_4$ and their derivatives
satisfy the following modular transformation properties:
\begin{align*}
\partial_{\mu}^{n}\vartheta_{2}(i\mu+1)&= e^{\frac{\pi i}{4}}\partial_{\mu}^{n}\vartheta_{2}(i\mu), &
\partial_{\mu}^{n}\vartheta_{2}(\frac{i}{\mu}) &= \sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta_{4}(i\mu), \\
\partial_{\mu}^{n}\vartheta_{3}(i\mu+1) & = \partial_{\mu}^{n}\vartheta_{4}(i\mu), &
\partial_{\mu}^{n}\vartheta_{3}(\frac{i}{\mu}) &= \sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta_{3}(i\mu),& \\
\partial_{\mu}^{n}\vartheta_{4}(i\mu+1) &= \partial_{\mu}^{n}\vartheta_{3}(i\mu), &
\partial_{\mu}^{n}\vartheta_{4}(\frac{i}{\mu}) &= \sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta_{2}(i\mu).
\end{align*}
\begin{proof}
The first identity follows from considering the formula \eqref{varthetasEq} and writing
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta_{2}(i\mu+1)&=&\partial_{\mu}^{n}\vartheta[\frac{1}{2},0](i\mu+1)
=e^{-\frac{3\pi i}{4}}\partial_{\mu}^{n}\vartheta[\frac{1}{2},1](i\mu)\\
&=& e^{\frac{\pi i}{4}}\partial_{\mu}^{n}\vartheta[\frac{1}{2},0](i\mu)
=e^{\frac{\pi i}{4}}\partial_{\mu}^{n}\vartheta_{2}(i\mu).
\end{eqnarray*}
\smallskip
For the next identity, which involves the action $S(i \mu) = i/\mu$, one needs to use
Lemma \ref{PoissonsumLem}:
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta_{2}(\frac{i}{\mu}) &=& \partial_{\mu}^{n}\vartheta[\frac{1}{2},0](\frac{i}{\mu})
=\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta[0,\frac{1}{2}](i\mu) \\
&=&\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta_{4}(i\mu).
\end{eqnarray*}
\smallskip
The analogous identities for the functions $\vartheta_3$, $\vartheta_4$, and their derivatives
can be proved in a similar manner. In the case of $\vartheta_3$ we have
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta_{3}(i\mu+1)&=&\partial_{\mu}^{n}\vartheta[0,0](i\mu+1)
=\partial_{\mu}^{n}\vartheta[0,\frac{1}{2}](i\mu)
=\partial_{\mu}^{n}\vartheta_{4}(i\mu),
\end{eqnarray*}
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta_{3}(\frac{i}{\mu}) &=&
\partial_{\mu}^{n}\vartheta[0,0](\frac{i}{\mu})
=\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta[0,0](i\mu) \\
&=&\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta_{3}(i\mu),
\end{eqnarray*}
and for $\vartheta_4$ we have
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta_{4}(i\mu+1)=\partial_{\mu}^{n}\vartheta[0,1](i\mu)=\partial_{\mu}^{n}\vartheta_{3}(i\mu),
\end{eqnarray*}
\begin{eqnarray*}
\partial_{\mu}^{n}\vartheta_{4}(\frac{i}{\mu}) &=& \partial_{\mu}^{n}\vartheta[0,\frac{1}{2}](\frac{i}{\mu})
=\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta[\frac{1}{2},0](i\mu) \\
&=&\sum_{j=0}^{n}C(j|2n)\mu^{2n+\frac{1}{2}-j}\partial_{\mu}^{n-j}\vartheta_{2}(i\mu).
\end{eqnarray*}
\end{proof}
\end{lemma}
\smallskip
\subsection{The case of the two-parametric family of metrics}
\label{ModularPropsTwoParametricSubSec}
Using the above lemmas, we proceed to work out the modular
transformation rules for the functions $w_j[p, q]$ given by
\eqref{two-parametric} and their
derivatives. First, let us deal with the transformation $T_1$ given
by \eqref{T_1andSEq}.
\smallskip
\begin{lemma}
\label{transformationsw_j1}
The functions $w_j[p, q]$ and their derivatives of an arbitrary order $n \geq 1$ with
respect to $\mu$ satisfy the
following identities:
\begin{align*}
w_{1}[p,q](i\mu+1) &= w_{1}[p,q+p+\frac{1}{2}](i\mu), &
w_{1}^{(n)}[p,q](i\mu+1)&= w_{1}^{(n)}[p,q+p+\frac{1}{2}](i\mu), \\
w_{2}[p,q](i\mu+1) &= w_{3}[p,q+p+\frac{1}{2}](i\mu), & w_{2}^{(n)}[p,q](i\mu+1) &=w_{3}^{(n)}[p,q+p+\frac{1}{2}](i\mu), \\
w_{3}[p,q](i\mu+1) &= w_{2}[p,q+p+\frac{1}{2}](i\mu), &
w_{3}^{(n)}[p,q](i\mu+1)&=w_{2}^{(n)}[p,q+p+\frac{1}{2}](i\mu).
\end{align*}
\begin{proof}
Using Lemma \ref{transformationsvarthetapq} and Lemma \ref{transformationsvartheta234}, we have
\begin{eqnarray*}
w_{1}[p,q](i\mu+1)&=&-\frac{i}{2}\vartheta_{3}(i\mu+1)\vartheta_{4}(i\mu+1)\frac{\partial_{q}\vartheta[p,q+\frac{1}{2}](i\mu+1)}{e^{\pi ip}\vartheta[p,q](i\mu+1)} \\
&=&-\frac{i}{2}\vartheta_{3}(i\mu)\vartheta_{4}(i\mu)\frac{e^{-\pi i p(p+1)}\partial_{q}\vartheta[p,q+\frac{1}{2}+p+\frac{1}{2}](i\mu)}{e^{-\pi i p(p+1)}e^{\pi ip}\vartheta[p,q+p+\frac{1}{2}](i\mu)}\\
&=&w_{1}[p,q+p+\frac{1}{2}](i\mu).
\end{eqnarray*}
\smallskip
Similarly for $w_2$ and $w_3$ we can write:
\begin{eqnarray*}
&&w_{2}[p,q](i\mu+1) \\
&& \quad =\frac{i}{2}\vartheta_{2}(i\mu+1)\vartheta_{4}(i\mu+1)\frac{e^{-\pi i(p+\frac{1}{2})(p+\frac{3}{2})}\partial_{q}\vartheta[p+\frac{1}{2},q+\frac{1}{2}+p+1](i\mu)}{e^{-\pi ip(p+1)}e^{\pi ip}\vartheta[p,q+p+\frac{1}{2}](i\mu)} \\
&&\quad =-\frac{1}{2}\vartheta_{2}(i\mu)\vartheta_{3}(i\mu)\frac{\partial_{q}\vartheta[p+\frac{1}{2},q+p+\frac{1}{2}](i\mu)}{\vartheta[p,q+p+\frac{1}{2}](i\mu)} \\
&& \quad =w_{3}[p,q+p+\frac{1}{2}](i\mu),
\end{eqnarray*}
\begin{eqnarray*}
&& w_{3}[p,q](i\mu+1) \\
&& \qquad =-\frac{1}{2}\vartheta_{2}(i\mu+1)\vartheta_{3}(i\mu+1)\frac{e^{-\pi i(p+\frac{1}{2})(p+\frac{3}{2})}\partial_{q}\vartheta[p+\frac{1}{2},q+p+\frac{1}{2}+\frac{1}{2}](i\mu)}{e^{-\pi ip(p+1)}\vartheta[p,q+p+\frac{1}{2}](i\mu)} \\
&&\qquad = \frac{i}{2}\vartheta_{2}(i\mu)\vartheta_{4}(i\mu)\frac{\partial_{q}\vartheta[p+\frac{1}{2},q+p+\frac{1}{2}+\frac{1}{2}](i\mu)}{\vartheta[p,q+p+\frac{1}{2}](i\mu)}
\\
&&\qquad =w_{2}[p,q+p+\frac{1}{2}](i\mu).
\end{eqnarray*}
\smallskip
The identities for the arbitrary derivatives follow easily from differentiating the above
equalities with respect to $\mu$.
\end{proof}
\end{lemma}
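The $n = 0$ identities in the lemma can also be tested numerically with truncated theta sums. In the following Python sketch (our own sanity check, with arbitrarily chosen values of $p$, $q$ and $\mu$; the helper functions simply evaluate \eqref{two-parametric}), note that $i\mu + 1 = i(\mu - i)$, so the left hand side amounts to evaluating $w_1$ at the shifted parameter $\mu - i$:
\begin{verbatim}
import numpy as np

def theta(p, q, mu, N=60):
    m = np.arange(-N, N + 1)
    return np.sum(np.exp(-np.pi*(m+p)**2*mu + 2j*np.pi*(m+p)*q))

def dtheta_q(p, q, mu, N=60):
    m = np.arange(-N, N + 1)
    return np.sum(2j*np.pi*(m+p)
                  * np.exp(-np.pi*(m+p)**2*mu + 2j*np.pi*(m+p)*q))

def w1(p, q, mu):
    # w_1[p,q](i mu) from the two-parametric family
    t3, t4 = theta(0, 0, mu), theta(0, .5, mu)
    return (-0.5j * t3 * t4 * dtheta_q(p, q + .5, mu)
            / (np.exp(1j*np.pi*p) * theta(p, q, mu)))

p, q, mu = 0.3, 0.2, 0.9
print(abs(w1(p, q, mu - 1j) - w1(p, q + p + .5, mu)))   # ~ 0
\end{verbatim}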
\smallskip
The following lemmas show that the functions $w_j$ and their derivatives satisfy
some transformation laws with respect to the modular action $S$ given by
\eqref{T_1andSEq} as well. Let us start with the properties of $w_1$.
\smallskip
\begin{lemma}
\label{transformationsw_12}
Assume that the variable $\mu$ belongs to the right half-plane $\Re(\mu) >0$. The function $w_1[p, q]$ and its derivatives up to order 4 with respect to $\mu$
satisfy the following identities:
\begin{eqnarray*}
w_{1}[p,q](\frac{i}{\mu}) &=& \mu^{2}w_{3}[-q,p](i\mu), \\
w_{1}^{'}[p,q](\frac{i}{\mu})&=& -\mu^{4}w_{3}^{'}[-q,p](i\mu)-2\mu^{3}w_{3}[-q,p](i\mu), \\
w_{1}^{''}[p,q](\frac{i}{\mu})&=&\mu^{6}w_{3}^{''}[-q,p](i\mu)+6\mu^{5}w_{3}^{'}[-q,p](i\mu)+6\mu^{4}w_{3}[-q,p](i\mu), \\
w_{1}^{(3)}[p,q](\frac{i}{\mu}) &=& -\mu^{8}w_{3}^{(3)}[-q,p](i\mu)-12\mu^{7}w_{3}^{''}[-q,p](i\mu)-36\mu^{6}w_{3}^{'}[-q,p](i\mu) \\
&&-24\mu^{5}w_{3}[-q,p](i\mu), \\
w_{1}^{(4)}[p,q](\frac{i}{\mu}) &=& \mu^{10}w_{3}^{(4)}[-q,p](i\mu)+20\mu^{9}w_{3}^{(3)}[-q,p](i\mu)+120\mu^{8}w_{3}^{''}[-q,p](i\mu) \\
&&+240\mu^{7}w_{3}^{'}[-q,p](i\mu)+120\mu^{6}w_{3}[-q,p](i\mu).
\end{eqnarray*}
\begin{proof}
Using lemmas \ref{transformationsvarthetapq} and \ref{transformationsvartheta234} we
have
\begin{eqnarray*}
w_{1}[p,q](\frac{i}{\mu}) &=&-\frac{i}{2}\vartheta_{3}(\frac{i}{\mu})\vartheta_{4}(\frac{i}{\mu})\frac{\partial_{q}\vartheta[p,q+\frac{1}{2}](\frac{i}{\mu})}{e^{\pi ip}\vartheta[p,q](\frac{i}{\mu})} \\
&=&\mu^{2} (-\frac{1}{2}\vartheta_{3}(i\mu)\vartheta_{2}(i\mu)\frac{\partial_{q}\vartheta[-q+\frac{1}{2},p](i\mu)}{\vartheta[-q,p](i\mu)}) \\
&=&\mu^{2}w_{3}[-q,p](i\mu).
\end{eqnarray*}
By taking a derivative with respect to $\mu$, it follows from the latter that
\begin{eqnarray*}
w_{1}^{'}[p,q](\frac{i}{\mu})&=&\frac{d\mu}{d\frac{1}{\mu}}\partial_{\mu}w_{1}[p,q](\frac{i}{\mu})=-\mu^{2}\partial_{\mu}(\mu^{2}w_{3}[-q,p](i\mu)) \\
&=& -\mu^{2}(2\mu w_{3}[-q,p](i\mu)+\mu^{2}w_{3}^{'}[-q,p](i\mu)) \\
&=&-\mu^{4}w_{3}^{'}[-q,p](i\mu)-2\mu^{3}w_{3}[-q,p](i\mu).
\end{eqnarray*}
\smallskip
One can then continue by taking another derivative to obtain
\begin{eqnarray*}
w_{1}^{''}[p,q](\frac{i}{\mu})&=&\frac{d\mu}{d\frac{1}{\mu}}\partial_{\mu}w_{1}^{'}[p,q](\frac{i}{\mu}) \\
&=&-\mu^{2}\partial_{\mu}(-\mu^{4}w_{3}^{'}[-q,p](i\mu)-2\mu^{3}w_{3}[-q,p](i\mu)) \\
&=&\mu^{6}w_{3}^{''}[-q,p](i\mu)+6\mu^{5}w_{3}^{'}[-q,p](i\mu)+6\mu^{4}w_{3}[-q,p](i\mu).
\end{eqnarray*}
For the next higher derivatives we find that:
\begin{eqnarray*}
w_{1}^{(3)}[p,q](\frac{i}{\mu}) &=& \frac{d\mu}{d\frac{1}{\mu}}\partial_{\mu}w_{1}^{''}[p,q](\frac{i}{\mu}) \\
&=&-\mu^{2}\partial_{\mu}(\mu^{6}w_{3}^{''}[-q,p](i\mu)+6\mu^{5}w_{3}^{'}[-q,p](i\mu)+6\mu^{4}w_{3}[-q,p](i\mu))\\
&=&-\mu^{8}w_{3}^{(3)}[-q,p](i\mu)-12\mu^{7}w_{3}^{''}[-q,p](i\mu)-36\mu^{6}w_{3}^{'}[-q,p](i\mu) \\
&&-24\mu^{5}w_{3}[-q,p](i\mu),
\end{eqnarray*}
\begin{eqnarray*}
&& w_{1}^{(4)}[p,q](\frac{i}{\mu}) \\
&& \qquad= -\mu^{2}\partial_{\mu}(-\mu^{8}w_{3}^{(3)}[-q,p](i\mu)-12\mu^{7}w_{3}^{''}[-q,p](i\mu)-36\mu^{6}w_{3}^{'}[-q,p](i\mu) \\
&&\qquad \quad -24\mu^{5}w_{3}[-q,p](i\mu)) \\
&&\qquad =\mu^{10}w_{3}^{(4)}[-q,p](i\mu)+20\mu^{9}w_{3}^{(3)}[-q,p](i\mu)+120\mu^{8}w_{3}^{''}[-q,p](i\mu) \\
&&\qquad \quad +240\mu^{7}w_{3}^{'}[-q,p](i\mu)+120\mu^{6}w_{3}[-q,p](i\mu).
\end{eqnarray*}
\end{proof}
\end{lemma}
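The coefficients appearing in the higher-derivative formulas come entirely from iterating the substitution $\frac{d}{d(1/\mu)} = -\mu^2 \frac{d}{d\mu}$, and can therefore be generated mechanically. The following Python sketch (our own illustration) reproduces them:
\begin{verbatim}
import sympy as sp

mu = sp.symbols('mu')
w3 = sp.Function('w3')          # stands for w_3[-q, p](i mu)

expr = mu**2 * w3(mu)           # w_1[p,q](i/mu) = mu^2 w_3[-q,p](i mu)
for n in range(1, 5):
    # each derivative with respect to 1/mu contributes a factor -mu^2 d/dmu
    expr = sp.expand(-mu**2 * sp.diff(expr, mu))
    print(n, expr)
# n = 1 prints -mu**4*Derivative(w3(mu), mu) - 2*mu**3*w3(mu), matching
# the lemma, and similarly for the higher derivatives.
\end{verbatim}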
\smallskip
We record in the next two lemmas the transformation rules for the functions
$w_2[p, q]$, $w_3[p, q]$ and their derivatives up to order 4 with respect
to the modular action $S(i \mu) = \frac{i}{\mu}$. Their proofs, which are
similar to the proof of Lemma \ref{transformationsw_12}, are presented
in Appendix \ref{transformationsw_j2appendix}. First, we treat $w_2[p, q]$ and its
derivatives with respect to $\mu$, noting
that the action of $S$ on $i \mu$ yields expressions in terms of
$w_2[-q, p]$ and its derivatives.
\smallskip
\begin{lemma}
\label{transformationsw_22}
The function $w_2[p, q]$ and its derivatives up to order 4 with respect to $\mu$, $\Re(\mu) >0$,
satisfy the following identities:
\begin{eqnarray*}
w_{2}[p,q](\frac{i}{\mu}) &=& \mu^{2}w_{2}[-q,p](i\mu), \\
w_{2}^{'}[p,q](\frac{i}{\mu}) &=& -\mu^{4}w_{2}^{'}[-q,p](i\mu)-2\mu^{3}w_{2}[-q,p](i\mu),\\
w_{2}^{''}[p,q](\frac{i}{\mu}) &=& \mu^{6}w_{2}^{''}[-q,p](i\mu)+6\mu^{5}w_{2}^{'}[-q,p](i\mu)+6\mu^{4}w_{2}[-q,p](i\mu), \\
w_{2}^{(3)}[p,q](\frac{i}{\mu}) &=& -\mu^{8}w_{2}^{(3)}[-q,p](i\mu)-12\mu^{7}w_{2}^{''}[-q,p](i\mu)-36\mu^{6}w_{2}^{'}[-q,p](i\mu) \\
&&-24\mu^{5}w_{2}[-q,p](i\mu), \\
w_{2}^{(4)}[p,q](\frac{i}{\mu}) &=& \mu^{10}w_{2}^{(4)}[-q,p](i\mu)+20\mu^{9}w_{2}^{(3)}[-q,p](i\mu)+120\mu^{8}w_{2}^{''}[-q,p](i\mu) \\
&&+240\mu^{7}w_{2}^{'}[-q,p](i\mu) +120\mu^{6}w_{2}[-q,p](i\mu).
\end{eqnarray*}
\begin{proof}
It is given in Appendix \ref{transformationsw_j2appendix}.
\end{proof}
\end{lemma}
\smallskip
Now we present the transformation laws for $w_{3}[p,q]$ and its
derivatives under the modular transformation $S(i \mu) = \frac{i}{\mu}$.
We note that $w_{3}[p,q]$ behaves similarly to $w_1[p, q]$ under the action of $S$.
In fact, the statement of the following lemma can be obtained from that of
Lemma \ref{transformationsw_12} by swapping the indices 1 and 3 and reversing the overall signs.
\smallskip
\begin{lemma}
\label{transformationsw_32} Assuming $\Re(\mu) >0$,
the function $w_3[p, q]$ and its derivatives up to order 4 with respect to $\mu$
satisfy the following identities:
\begin{eqnarray*}
w_{3}[p,q](\frac{i}{\mu}) &=& -\mu^{2}w_{1}[-q,p](i\mu), \\
w_{3}^{'}[p,q](\frac{i}{\mu}) &=& \mu^{4}w_{1}^{'}[-q,p](i\mu)+2\mu^{3}w_{1}[-q,p](i\mu), \\
w_{3}^{''}[p,q](\frac{i}{\mu}) &=& -\mu^{6}w_{1}^{''}[-q,p](i\mu)-6\mu^{5}w_{1}^{'}[-q,p](i\mu)-6\mu^{4}w_{1}[-q,p](i\mu), \\
w_{3}^{(3)}[p,q](\frac{i}{\mu}) &=& \mu^{8}w_{1}^{(3)}[-q,p](i\mu)+12\mu^{7}w_{1}^{''}[-q,p](i\mu)+36\mu^{6}w_{1}^{'}[-q,p](i\mu) \\
&&+24\mu^{5}w_{1}[-q,p](i\mu), \\
w_{3}^{(4)}[p,q](\frac{i}{\mu})&=& -\mu^{10}w_{1}^{(4)}[-q,p](i\mu)-20\mu^{9}w_{1}^{(3)}[-q,p](i\mu)-120\mu^{8}w_{1}^{''}[-q,p](i\mu) \\
&&-240\mu^{7}w_{1}^{'}[-q,p](i\mu)-120\mu^{6}w_{1}[-q,p](i\mu).
\end{eqnarray*}
\begin{proof}
See Appendix \ref{transformationsw_j2appendix}.
\end{proof}
\end{lemma}
\smallskip
We also need to know how the function $F[p, q]$ given in \eqref{two-parametric}
and its derivatives transform under the modular transformations on $i \mu$
in the upper half-plane. First, we present their properties with respect to the
action of $T_1$ given by \eqref{T_1andSEq}.
\smallskip
\begin{lemma}
\label{transformationsF}
For $\mu$ in the right half-plane $\Re(\mu) >0 $, the function
$F[p, q]$ and its derivatives of an arbitrary order $n \geq 1$
satisfy the following properties:
\begin{eqnarray*}
F[p,q](i\mu+1) &=& F[p,q+p+\frac{1}{2}](i\mu), \\
F^{(n)}[p,q](i\mu+1) &=&F^{(n)}[p,q+p+\frac{1}{2}](i\mu).
\end{eqnarray*}
\begin{proof}
Using Lemma \ref{transformationsvarthetapq}, we have
\begin{eqnarray*}
F[p,q](i\mu+1) &=& \frac{2}{\pi\Lambda}\cdot\left(\frac{\vartheta[p,q](i\mu+1)}{\partial_{q}\vartheta[p,q](i\mu+1)}\right)^{2}
= \frac{2}{\pi\Lambda}\cdot\left(\frac{\vartheta[p,q+p+\frac{1}{2}](i\mu)}{\partial_{q}\vartheta[p,q+p+\frac{1}{2}](i\mu)}\right)^{2} \\
&=& F[p,q+p+\frac{1}{2}](i\mu).
\end{eqnarray*}
The identity for the derivatives of $F[p, q]$ follows easily from differentiating the latter with respect to $\mu$.
\end{proof}
\end{lemma}
\smallskip
The following lemma gives the transformation properties of $F[p, q]$ and its derivatives with
respect to the action $S$ given by \eqref{T_1andSEq} on the upper half-plane.
\smallskip
\begin{lemma} \label{transformationsFSlemma}
If $\Re(\mu) > 0$, then
\begin{eqnarray*}
F[p,q](\frac{i}{\mu}) &=& -\mu^{-2} F[-q,p](i\mu), \\
F^{'}[p,q](\frac{i}{\mu}) &=& F^{'}[-q,p](i\mu)-2\mu^{-1}F[-q,p](i\mu), \\
F^{''}[p,q](\frac{i}{\mu}) &=& -\mu^{2}F^{''}[-q,p](i\mu)+2\mu F^{'}[-q,p](i\mu)-2F[-q,p](i\mu), \\
F^{(3)}[p,q](\frac{i}{\mu}) &=& \mu^{4}F^{(3)}[-q,p](i\mu), \\
F^{(4)}[p,q](\frac{i}{\mu}) &=& -\mu^{6}F^{(4)}[-q,p](i\mu)-4\mu^{5}F^{(3)}[-q,p](i\mu).
\end{eqnarray*}
\begin{proof}
We can use Lemma \ref{transformationsvarthetapq} to write
\begin{eqnarray*}
F[p,q](\frac{i}{\mu}) &=& \frac{2}{\pi\Lambda}\cdot\left(\frac{\vartheta[p,q](\frac{i}{\mu})}{\partial_{q}\vartheta[p,q](\frac{i}{\mu})}\right)^{2}=-\mu^{-2}\frac{2}{\pi\Lambda}\cdot\left(\frac{\vartheta[-q,p](i\mu)}{\partial_{q}\vartheta[-q,p](i\mu)}\right)^{2} \\
&=& -\mu^{-2}\cdot F[-q,p](i\mu).
\end{eqnarray*}
Taking the derivative of the latter, we have:
\begin{eqnarray*}
F^{'}[p,q](\frac{i}{\mu}) &=& \frac{d\mu}{d\frac{1}{\mu}}\partial_{\mu}F[p,q](\frac{i}{\mu})=-\mu^{2}\partial_{\mu}(-\mu^{-2}\cdot F[-q,p](i\mu)) \\
&=&-\mu^{2}(-\mu^{-2}\cdot F^{'}[-q,p](i\mu)+2\mu^{-3}\cdot F[-q,p](i\mu)) \\
&=&F^{'}[-q,p](i\mu)-2\mu^{-1}F[-q,p](i\mu).
\end{eqnarray*}
\smallskip
We then continue by taking another derivative to obtain
\begin{eqnarray*}
F^{''}[p,q](\frac{i}{\mu}) &=& \frac{d\mu}{d\frac{1}{\mu}}\partial_{\mu}F^{'}[p,q](\frac{i}{\mu})\\
&=&-\mu^{2}(F^{''}[-q,p](i\mu)-2\mu^{-1}F^{'}[-q,p](i\mu)+2\mu^{-2}F[-q,p](i\mu))\\
&=&-\mu^{2}F^{''}[-q,p](i\mu)+2\mu F^{'}[-q,p](i\mu)-2F[-q,p](i\mu).
\end{eqnarray*}
Continuing this process, we find the properties of the higher derivatives of $F[p, q]$:
\begin{eqnarray*}
F^{(3)}[p,q](\frac{i}{\mu}) = \frac{d\mu}{d\frac{1}{\mu}}\partial_{\mu}F^{''}[p,q](\frac{i}{\mu})=\mu^{4}F^{(3)}[-q,p](i\mu),
\end{eqnarray*}
\begin{eqnarray*}
F^{(4)}[p,q](\frac{i}{\mu}) &=& \frac{d\mu}{d\frac{1}{\mu}}\partial_{\mu}F^{(3)}[p,q](\frac{i}{\mu})\\
&=&-\mu^{6}F^{(4)}[-q,p](i\mu)-4\mu^{5}F^{(3)}[-q,p](i\mu).
\end{eqnarray*}
\end{proof}
\end{lemma}
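\smallskip
The chain-rule computations in the proof above can be reproduced mechanically. Encoding the first identity as $f(x)=-x^{2}g(1/x)$, with $f(x)=F[p,q](ix)$ and $g(\mu)=F[-q,p](i\mu)$, the following SymPy sketch verifies the four derivative identities:
\begin{verbatim}
# SymPy sketch reproducing the chain-rule computations in the proof above.
import sympy as sp

x, m = sp.symbols('x mu', positive=True)
g = sp.Function('g')
f = -x**2 * g(1/x)   # encodes F[p,q](i/mu) = -mu**(-2) * F[-q,p](i*mu)

rhs = [
    g(m).diff(m) - 2*g(m)/m,
    -m**2*g(m).diff(m, 2) + 2*m*g(m).diff(m) - 2*g(m),
    m**4*g(m).diff(m, 3),
    -m**6*g(m).diff(m, 4) - 4*m**5*g(m).diff(m, 3),
]
for n, r in enumerate(rhs, start=1):
    lhs = sp.diff(f, x, n).subs(x, 1/m).doit()   # F^{(n)}[p,q](i/mu)
    assert sp.simplify(lhs - r) == 0
print("S-transformation identities for F[p,q] verified")
\end{verbatim}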
\smallskip
\subsection{The case of the one-parametric family of metrics}
We now turn to the one-parametric case \eqref{one-parametric},
for which, similarly to the two-parametric case \eqref{two-parametric}
analysed in Subsection \ref{ModularPropsTwoParametricSubSec}, the necessary modular
properties can be investigated using the lemmas proved at the beginning of this section.
Since the proofs are based on direct calculations that make use of Lemmas
\ref{transformationsvarthetapq} and \ref{transformationsvartheta234},
we just present the statements.
\smallskip
First, let us deal with the functions $w_j[q_0]$ given in \eqref{one-parametric}. With respect to the
modular transformation $T_1$ given by \eqref{T_1andSEq} acting
on $i \mu$ in the upper half-plane, these functions and their derivatives
satisfy the following properties.
\smallskip
\begin{lemma}\label{transfsw_jT_2}
For $\Re(\mu)>0$, the functions $w_j[q_0]$ and their derivatives of arbitrary order $n \geq 0$ with respect to $\mu$
satisfy the following properties:
\begin{eqnarray*}
w_{1}^{(n)}[q_0](i\mu+1)&=&w_{1}^{(n)}[q_0-i](i\mu), \\
w_{2}^{(n)}[q_0](i\mu+1)&=&w_{3}^{(n)}[q_0-i](i\mu), \\
w_{3}^{(n)}[q_0](i\mu+1)&=&w_{2}^{(n)}[q_0-i](i\mu).
\end{eqnarray*}
\end{lemma}
\smallskip
The next three lemmas present the transformation properties of the functions $w_j[q_0]$
and their derivatives up to order 4 with
respect to the modular action of $S$ given by \eqref{T_1andSEq} on $i \mu$.
\smallskip
\begin{lemma}
\label{w_1q_0SLem}
The function $w_1[q_0]$ and its derivatives up to order 4 with respect to $\mu$ satisfy
the following identities provided that $\Re(\mu) >0$:
\begin{eqnarray*}
w_{1}[q_{0}](\frac{i}{\mu}) &=& -\mu^{2}w_{3}[\frac{1}{q_{0}}](i\mu), \\
w_{1}^{'}[q_{0}](\frac{i}{\mu}) &=& \mu^{4}w_{3}^{'}[\frac{1}{q_{0}}](i\mu)+2\mu^{3}w_{3}[\frac{1}{q_{0}}](i\mu), \\
w_{1}^{''}[q_{0}](\frac{i}{\mu}) &=&
-\mu^{6}w_{3}^{''}[\frac{1}{q_{0}}](i\mu)-6\mu^{5}w_{3}^{'}[\frac{1}{q_{0}}](i\mu)-6\mu^{4}w_{3}[\frac{1}{q_{0}}](i\mu), \\
w_{1}^{(3)}[q_{0}](\frac{i}{\mu})
&=&\mu^{8}w_{3}^{(3)}[\frac{1}{q_{0}}](i\mu)+12\mu^{7}w_{3}^{''}[\frac{1}{q_{0}}](i\mu)+36\mu^{6}w_{3}^{'}[\frac{1}{q_{0}}](i\mu) \\
&& +24\mu^{5}w_{3}[\frac{1}{q_{0}}](i\mu), \\
w_{1}^{(4)}[q_{0}](\frac{i}{\mu})&=&-\mu^{10}w_{3}^{(4)}[\frac{1}{q_{0}}](i\mu)-20\mu^{9}w_{3}^{(3)}[\frac{1}{q_{0}}](i\mu)-120\mu^{8}w_{3}^{''}[\frac{1}{q_{0}}](i\mu) \\
&&-240\mu^{7}w_{3}^{'}[\frac{1}{q_{0}}](i\mu)-120\mu^{6}w_{3}[\frac{1}{q_{0}}](i\mu).
\end{eqnarray*}
\end{lemma}
\smallskip
While in the latter lemma for $w_1[q_0]$ the right hand sides of the identities
involve $w_3[\frac{1}{q_0}]$, the following lemma shows that similar properties hold
for $w_2[q_0]$; however, the index 2 does not change, in the sense that the properties
are expressed in terms of $w_2[\frac{1}{q_0}]$.
\smallskip
\begin{lemma} \label{w_2q_0SLem}
When $\Re(\mu) >0$, the function $w_2[q_0]$ and its derivatives up to order 4 with respect to $\mu$ satisfy
the following properties:
\begin{eqnarray*}
w_{2}[q_{0}](\frac{i}{\mu})&=&-\mu^{2}w_{2}[\frac{1}{q_{0}}](i\mu),\\
w_{2}^{'}[q_{0}](\frac{i}{\mu}) &=&
\mu^{4}w_{2}^{'}[\frac{1}{q_{0}}](i\mu)+2\mu^{3}w_{2}[\frac{1}{q_{0}}](i\mu),\\
w_{2}^{''}[q_{0}](\frac{i}{\mu})&=&
-\mu^{6}w_{2}^{''}[\frac{1}{q_{0}}](i\mu)-6\mu^{5}w_{2}^{'}[\frac{1}{q_{0}}](i\mu)-6\mu^{4}w_{2}[\frac{1}{q_{0}}](i\mu),\\
w_{2}^{(3)}[q_{0}](\frac{i}{\mu})&=&
\mu^{8}w_{2}^{(3)}[\frac{1}{q_{0}}](i\mu)+12\mu^{7}w_{2}^{''}[\frac{1}{q_{0}}](i\mu)+36\mu^{6}w_{2}^{'}[\frac{1}{q_{0}}](i\mu)\\
&&+24\mu^{5}w_{2}[\frac{1}{q_{0}}](i\mu),\\
w_{2}^{(4)}[q_{0}](\frac{i}{\mu})&=&
-\mu^{10}w_{2}^{(4)}[\frac{1}{q_{0}}](i\mu)-20\mu^{9}w_{2}^{(3)}[\frac{1}{q_{0}}](i\mu)-120\mu^{8}w_{2}^{''}[\frac{1}{q_{0}}](i\mu) \\
&&-240\mu^{7}w_{2}^{'}[\frac{1}{q_{0}}](i\mu)-120\mu^{6}w_{2}[\frac{1}{q_{0}}](i\mu).
\end{eqnarray*}
\end{lemma}
\smallskip
The function $w_3[q_0]$ and its derivatives behave similarly to those of $w_1[q_0]$:
the statement given in the following lemma can be obtained by swapping the indices 1 and 3
in Lemma \ref{w_1q_0SLem}.
\smallskip
\begin{lemma} \label{w_3q_0SLem}
The function $w_3[q_0]$ and its derivatives up to order 4 with respect to $\mu$ satisfy
the following identities provided that $\Re(\mu) >0$:
\begin{eqnarray*}
w_{3}[q_{0}](\frac{i}{\mu})&=&-\mu^{2}w_{1}[\frac{1}{q_{0}}](i\mu), \\
w_{3}^{'}[q_{0}](\frac{i}{\mu})&=&
\mu^{4}w_{1}^{'}[\frac{1}{q_{0}}](i\mu)+2\mu^{3}w_{1}[\frac{1}{q_{0}}](i\mu), \\
w_{3}^{''}[q_{0}](\frac{i}{\mu})&=&
-\mu^{6}w_{1}^{''}[\frac{1}{q_{0}}](i\mu)-6\mu^{5}w_{1}^{'}[\frac{1}{q_{0}}](i\mu)-6\mu^{4}w_{1}[\frac{1}{q_{0}}](i\mu), \\
w_{3}^{(3)}[q_{0}](\frac{i}{\mu})&=&
\mu^{8}w_{1}^{(3)}[\frac{1}{q_{0}}](i\mu)+12\mu^{7}w_{1}^{''}[\frac{1}{q_{0}}](i\mu)+36\mu^{6}w_{1}^{'}[\frac{1}{q_{0}}](i\mu) \\
&&+24\mu^{5}w_{1}[\frac{1}{q_{0}}](i\mu), \\
w_{3}^{(4)}[q_{0}](\frac{i}{\mu})&=&
-\mu^{10}w_{1}^{(4)}[\frac{1}{q_{0}}](i\mu)-20\mu^{9}w_{1}^{(3)}[\frac{1}{q_{0}}](i\mu)-120\mu^{8}w_{1}^{''}[\frac{1}{q_{0}}](i\mu) \\
&&-240\mu^{7}w_{1}^{'}[\frac{1}{q_{0}}](i\mu)-120\mu^{6}w_{1}[\frac{1}{q_{0}}](i\mu).
\end{eqnarray*}
\end{lemma}
\smallskip
Finally we present the modular transformation properties of the function $F[q_0]$ given in
\eqref{one-parametric} and its derivatives.
\smallskip
\begin{lemma}
\label{transformationsFlemma}
Provided that the complex number $\mu$ belongs to the
right half-plane $\Re(\mu) > 0$, the function $F[q_0]$ and its derivatives
with respect to $\mu$ satisfy the following identities:
\begin{align*}
&F[q_{0}](i\mu+1)= F[q_{0}-i](i\mu), &
F^{'}[q_{0}](i\mu+1)= F^{'}[q_{0}-i](i\mu), \\
& F^{''}[q_{0}](i\mu+1)=F^{''}[q_{0}-i](i\mu), &
F^{(n)}[q_{0}](i\mu)=0, \qquad n \geq 3, \\
& F[q_{0}](\frac{i}{\mu})=q_{0}^{2}\mu^{-2}F[\frac{1}{q_{0}}](i\mu), &
F^{'}[q_{0}](\frac{i}{\mu}) =q_{0}\mu^{-1}F^{'}[\frac{1}{q_{0}}](i\mu).
\end{align*}
\end{lemma}
\smallskip
\section{Modular properties of $\tilde a_0,$ $\tilde a_2$ and $\tilde a_4$ by direct calculations}
\label{a_0a_2a_4Sec}
\smallskip
Our aim in this section is to investigate modular properties
of the Seeley-de Witt coefficients $\tilde a_0$, $\tilde a_2$, and $\tilde a_4$,
which are respectively given by \eqref{a_0Eq}, \eqref{a_2Eq}, and in
Appendix \ref{fulla_4appendix},
in the case of the gravitational instantons parametrized by
\eqref{two-parametric} and \eqref{one-parametric}. The
terms $\tilde a_{2n}$ appear in general in the asymptotic expansion
\eqref{ExpAsympConformalEq} of the heat kernel of $\tilde D^2$, where
$\tilde D$ with the expression \eqref{DiracConfBianchiIXEq} is the Dirac operator of a
time dependent conformal perturbation of a triaxial Bianchi IX metric, given
by \eqref{ConformalBianchiIXMetricEq1}. The latter serves as a general
form of Bianchi IX gravitational instantons, as we explained and
provided references about it in Section \ref{InstantonsSec}.
Since the expressions in \eqref{two-parametric} are parametrized
by a pair of parameters $[p, q]$, we denote the corresponding Seeley-de
Witt coefficients by $\tilde a_{2n}[p, q]$. Similarly, the terms $\tilde a_{2n}$
associated with the one-parametric case \eqref{one-parametric} with the parameter
$q_0$ will be denoted by $\tilde a_{2n}[q_0]$.
\smallskip
We begin by substituting the solutions \eqref{two-parametric} and
\eqref{one-parametric} for $w_1$, $w_2$, $w_3$ and $F$, in the explicit expressions for
$\tilde a_0$, $\tilde a_2$ and $\tilde a_4$. It is evident from the definitions
provided by \eqref{ThetawithCharEq}, \eqref{varthetapqEq} and \eqref{varthetasEq}
that we are allowed to take $\mu$ in the right half-plane
of the complex numbers, so that $i \mu$ belongs to $\mathbb{H}$, the upper half-plane.
Therefore, in this section, using the modular identities explored in Section \ref{ArithmeticsofInstantonsSec},
we investigate transformations of the terms $\tilde a_0[p, q](i \mu)$, $\tilde a_2[p, q](i \mu)$,
$\tilde a_4[p, q](i \mu)$, $\tilde a_0[q_0](i \mu)$, $\tilde a_2[q_0](i \mu)$ and $\tilde a_4[q_0](i \mu)$
under the modular actions $S$ and $T_1$ given by \eqref{T_1andSEq} on $i \mu \in \mathbb{H}$. Let us start by studying the transformations associated with $T_1$.
\smallskip
\begin{theorem} \label{modualra024T_1thm}
Let $\mu$ be a complex number in the right half-plane $\Re(\mu)>0$. We have
\[
\tilde{a}_{0}[p,q](i\mu+1)= \tilde{a}_{0}[p,q+p+\frac{1}{2}](i\mu),
\]
\[
\tilde a_{2}[p,q](i\mu+1)= \tilde a_{2}[p,q+p+\frac{1}{2}](i\mu),
\]
\[
\tilde a_{4}[p,q](i\mu+1)=\tilde a_{4}[p,q+p+\frac{1}{2}](i\mu).
\]
\begin{proof}
According to Lemmas \ref{transformationsw_j1} and \ref{transformationsF}, as far as the functions $w_1(i\mu)$, $w_2(i\mu)$, $w_3(i\mu)$ and $F(i\mu)$ are concerned, the transformation $i\mu\mapsto i\mu+1$ is equivalent to the transformation $(p,q)\mapsto(p,q+p+\frac{1}{2})$ followed by the exchange of $w_2$ and $w_3$.
\smallskip
Also, from the explicit expressions of $\tilde{a}_0[p,q](i\mu)$ and $\tilde{a}_2[p,q](i\mu)$ presented in \eqref{a_0Eq} and \eqref{a_2Eq}, as well as the expression of $\tilde{a}_4[p,q](i\mu)$ given in Appendix \ref{fulla_4appendix}, we know that $\tilde{a}_0[p,q](i\mu)$, $\tilde{a}_2[p,q](i\mu)$, and $\tilde{a}_4[p,q](i\mu)$ are invariant under exchanging $w_2$ and $w_3$, which confirms our claim.
\smallskip
\end{proof}
\end{theorem}
\smallskip
Moreover, the terms $\tilde a_0[p, q]$, $\tilde a_2[p, q]$ and $\tilde a_4[p, q]$
satisfy modular transformation properties with respect to the action of $S$ given in
\eqref{T_1andSEq} on $i \mu \in \mathbb{H}$.
\smallskip
\begin{theorem} \label{modualra024Sthm}
Assuming that $\Re(\mu) >0$, we have
\[
\tilde{a}_{0}[p,q](\frac{i}{\mu})=-\mu^{2} \tilde{a}_{0}[-q,p](i\mu),
\]
\[
\tilde a_{2}[p,q](\frac{i}{\mu})=-\mu^{2} \tilde a_{2}[-q,p](i\mu),
\]
\[
\tilde a_{4}[p,q](\frac{i}{\mu})=-\mu^{2} \tilde a_{4}[-q,p](i\mu).
\]
\begin{proof}
The identities can be seen to hold by applying directly lemmas \ref{transformationsw_12},
\ref{transformationsw_22}, \ref{transformationsw_32} and \ref{transformationsFSlemma}
to the explicit expressions for $\tilde a_0$, $\tilde a_2$ and $\tilde a_4$ given
respectively by \eqref{a_0Eq}, \eqref{a_2Eq} and in Appendix \ref{fulla_4appendix}. In fact, for the first term we can write
\begin{eqnarray*}
\tilde a_{0}[p,q](\frac{i}{\mu}) &=& -4\mu^{2}F^{2}[-q,p](i\mu)w_{1}[-q,p](i\mu)w_{2}[-q,p](i\mu)w_{3}[-q,p](i\mu) \\
&=&-\mu^{2} \tilde a_{0}[-q,p](i\mu).
\end{eqnarray*}
\smallskip
For the next term, we first consider its expression, replace $i \mu$ by $S(i \mu)=i/\mu$, and
use the mentioned lemmas to write:
\begin{eqnarray*}
&&\tilde a_{2}[p,q](\frac{i}{\mu}) \\
&& \quad =\mu^{2}\frac{F[-q,p](i\mu)}{3}\Big (w_{1}^{2}[-q,p](i\mu)+w_{2}^{2}[-q,p](i\mu)+w_{3}^{2}[-q,p](i\mu) \Big ) \\
&& \qquad -\frac{\mu^{2}F[-q,p](i\mu)}{6}\Big (\frac{w_{1}^{2}[-q,p](i\mu)w_{2}^{2}[-q,p](i\mu)}{w_{3}^{2}[-q,p](i\mu)}+\frac{w_{1}^{2}[-q,p](i\mu)w_{3}^{2}[-q,p](i\mu)}{w_{2}^{2}[-q,p](i\mu)} \\
&& \qquad +\frac{w_{2}^{2}[-q,p](i\mu)w_{3}^{2}[-q,p](i\mu)}{w_{1}^{2}[-q,p](i\mu)} \Big)
+\frac{\mu^{2}F[-q,p](i\mu)}{6}\Big (\frac{w_{3}^{'2}[-q,p](i\mu)}{w_{3}^{2}[-q,p](i\mu)}
\end{eqnarray*}
\begin{eqnarray*}
&& \qquad+\frac{w_{2}^{'2}[-q,p](i\mu)}{w_{2}^{2}[-q,p](i\mu)} +\frac{w_{1}^{'2}[-q,p](i\mu)}{w_{1}^{2}[-q,p](i\mu)}\Big )+\frac{\mu F[-q,p](i\mu)}{3}\Big (\frac{2w_{3}^{'}[-q,p](i\mu)}{w_{3}[-q,p](i\mu)} \\
&& \qquad +\frac{2w_{2}^{'}[-q,p](i\mu)}{w_{2}[-q,p](i\mu)}+\frac{2w_{1}^{'}[-q,p](i\mu)}{w_{1}[-q,p](i\mu)} \Big )+2F[-q,p](i\mu)
\end{eqnarray*}
\begin{eqnarray*}
&& \qquad +\frac{\mu^{2}F[-q,p](i\mu)}{3}\Big (\frac{w_{3}^{'}[-q,p](i\mu)w_{2}^{'}[-q,p](i\mu)}{w_{3}[-q,p](i\mu)w_{2}[-q,p](i\mu)}+\frac{w_{3}^{'}[-q,p](i\mu)w_{1}^{'}[-q,p](i\mu)}{w_{3}[-q,p](i\mu)w_{1}[-q,p](i\mu)}
\end{eqnarray*}
\begin{eqnarray*}
&& \qquad +\frac{w_{1}^{'}[-q,p](i\mu)w_{2}^{'}[-q,p](i\mu)}{w_{1}[-q,p](i\mu)w_{2}[-q,p](i\mu)}\Big ) +\frac{\mu F[-q,p](i\mu)}{3}\Big (\frac{4w_{1}^{'}[-q,p](i\mu)}{w_{1}[-q,p](i\mu)} \\
&& \qquad +\frac{4w_{2}^{'}[-q,p](i\mu)}{w_{2}[-q,p](i\mu)} +\frac{4w_{3}^{'}[-q,p](i\mu)}{w_{3}[-q,p](i\mu)}\Big )+4F[-q,p](i\mu) \\
&& \qquad -\frac{\mu^{2}F[-q,p](i\mu)}{3}\Big (\frac{w_{3}^{''}[-q,p](i\mu)}{w_{3}[-q,p](i\mu)}+\frac{w_{2}^{''}[-q,p](i\mu)}{w_{2}[-q,p](i\mu)}+\frac{w_{1}^{''}[-q,p](i\mu)}{w_{1}[-q,p](i\mu)}\Big )
\end{eqnarray*}
\begin{eqnarray*}
&& \qquad -\mu F[-q,p](i\mu) \Big (\frac{2w_{3}^{'}[-q,p](i\mu)}{w_{3}[-q,p](i\mu)}+\frac{2w_{2}^{'}[-q,p](i\mu)}{w_{2}[-q,p](i\mu)}+\frac{2w_{1}^{'}[-q,p](i\mu)}{w_{1}[-q,p](i\mu)}\Big ) \\
&& \qquad -6F[-q,p](i\mu)+\mu^{2}\frac{F^{'2}[-q,p](i\mu)}{2F[-q,p](i\mu)}-\mu^{2}F^{''}[-q,p](i\mu) \\
&& \quad =-\mu^{2}\tilde a_{2}[-q,p](i\mu).
\end{eqnarray*}
\end{proof}
\end{theorem}
\smallskip
Now we turn our focus to the modular properties of the terms $\tilde a_0[q_0]$, $\tilde a_2[q_0]$ and $\tilde a_4[q_0]$ associated with the one-parametric case \eqref{one-parametric}.
\begin{theorem} \label{modualra024Soneparathm}
If $\Re(\mu) >0 $, we have
\begin{align*}
\tilde a_{0}[q_{0}](i\mu+1)&= \tilde a_{0}[q_{0}-i](i\mu), & \tilde a_{0}[q_{0}](\frac{i}{\mu})&=-q_{0}^{4}\mu^{2} \tilde a_{0}[\frac{1}{q_{0}}](i\mu), \\
\tilde a_{2}[q_{0}](i\mu+1)&= \tilde a_{2}[q_{0}-i](i\mu), & \tilde a_{2}[q_{0}](\frac{i}{\mu})&=q_{0}^{2}\mu^{2} \tilde a_{2}[\frac{1}{q_{0}}](i\mu), \\
\tilde a_{4}[q_{0}](i\mu+1)&= \tilde a_{4}[q_{0}-i](i\mu), &
\tilde a_{4}[q_{0}](\frac{i}{\mu})&=-\mu^{2}\tilde a_{4}[\frac{1}{q_{0}}](i\mu).
\end{align*}
\begin{proof}
The identities follow directly from applying lemmas \ref{transfsw_jT_2},
\ref{w_1q_0SLem}, \ref{w_2q_0SLem}, \ref{w_3q_0SLem} and \ref{transformationsFlemma} to the
explicit expressions for $\tilde a_0[q_0]$, $\tilde a_2[q_0]$ and $\tilde a_4[q_0]$.
\end{proof}
\end{theorem}
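\smallskip
As a small consistency check on the $S$-identities in the latter theorem, note that $S(S(i\mu))=i\mu$ and that $q_0\mapsto\frac{1}{q_0}$ is an involution, so composing each identity with itself must return the original coefficient; equivalently, each automorphy factor must multiply to 1 against itself under the substitution $(q_0,\mu)\mapsto(\frac{1}{q_0},\frac{1}{\mu})$. A short SymPy sketch confirms this:
\begin{verbatim}
# Consistency of the S-identities: composing each one with itself
# (substituting q0 -> 1/q0 and mu -> 1/mu) must give back the identity.
import sympy as sp

q0, mu = sp.symbols('q_0 mu', positive=True)
factors = [-q0**4*mu**2, q0**2*mu**2, -mu**2]   # for a~_0, a~_2, a~_4
for fac in factors:
    comp = fac * fac.subs({q0: 1/q0, mu: 1/mu}, simultaneous=True)
    assert sp.simplify(comp) == 1
print("S composed with itself acts trivially on a~_0, a~_2, a~_4")
\end{verbatim}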
\smallskip
The direct calculations carried out in this section show that the first three Seeley-de Witt
coefficients behave similarly to vector-valued modular forms \cite{EicZag}, since the modular
transformations act accordingly on the parameters of the metrics. This is strong evidence
that there should be an interesting connection between the Dirac operators of the metrics
that are related to each other by the modular transformations. Indeed, by taking a close
look at the spectral properties of these Dirac operators, we prove in Section \ref{a_2nSec}
that the modular transformation properties proved for the first three terms hold for
all of the terms $\tilde a_{2n}$.
\smallskip
\section{Modular properties of $\tilde a_{2n}$ using identities for Dirac operators}
\label{a_2nSec}
\smallskip
Motivated by the modular transformation properties presented in theorems \ref{modualra024T_1thm},
\ref{modualra024Sthm} and \ref{modualra024Soneparathm}
for the terms $\tilde a_0$, $\tilde a_2$ and
$\tilde a_4$, the main goal in this section is to prove that all of the Seeley-de Witt coefficients $\tilde a_{2n}$ satisfy the same
properties. Clearly, this cannot be achieved by resorting to direct calculations.
However, the direct calculations performed in Section \ref{a_0a_2a_4Sec} for the first three terms
reveal that the effect of the modular transformations on the Seeley-de Witt coefficients is
reflected in a certain way on the parameters of the metric, and by studying the corresponding
Dirac operators carefully in this section we can prove that the transformation properties hold for the general terms.
\smallskip
Note that the gravitational instantons parametrized by \eqref{two-parametric} and \eqref{one-parametric}
are given in terms of theta functions and their derivatives. Thus, in the two-parametric case
\eqref{two-parametric} with the parameters $(p, q)$,
as well as in the one-parametric case \eqref{one-parametric} with the parameter $q_0$,
we are allowed to consider an extension of the definitions to $p,q, q_0\in\mathbb{C}$ with $i\mu\in\mathbb{H}$, excluding possible zeros of the theta functions and their derivatives that appear in the denominators. Thus, with $(p,q)$ and $q_{0}$ fixed, we may consider the operators $\tilde{D}^{2}[p,q]$
and $\tilde{D}^{2}[q_{0}]$ acting on a spin bundle over
the base manifold $M_{0}=I_{(a,b)}\times\mathbb{S}^{3}$, where $I_{(a,b)}$
is an arbitrary horizontal path in the upper-half complex plane $\mathbb{H}$. We may also consider the operators
$\tilde{D}^{2}[p,q+p+\frac{1}{2}]$ and $\tilde{D}^{2}[q_{0}-i]$ acting on
$M_{1}=I_{(a+i,b+i)}\times\mathbb{S}^{3}$, and the operators $\tilde{D}^{2}[-q,p]$
and $\tilde{D}^{2}[\frac{1}{q_{0}}]$ acting on $M_{2}=I_{(\frac{1}{b},\frac{1}{a})}\times\mathbb{S}^{3}$.
\smallskip
In the following theorem, we prove the isospectrality of the Dirac operators corresponding to the
metrics whose parameters are related to each other via the effect of the modular
transformations on the Seeley-de Witt coefficients $\tilde a_0$, $\tilde a_2$ and $\tilde a_4$,
presented in theorems \ref{modualra024T_1thm} and \ref{modualra024Sthm}.
\smallskip
\begin{theorem} \label{IsoSpecDiracs2paraThm}
For the two-parameter family of metrics parametrized by $(p,q)$,
the operators $\tilde{D}^2[p,q]$, $\tilde{D}^2[p,q+p+\frac{1}{2}]$ and
$\tilde{D}^2[-q,p]$ are isospectral on spin bundles over $M_0$, $M_1$ and $M_2$
respectively. More precisely, under the isospectral transformations $(p,q)\mapsto(p,q+p+\frac{1}{2})$ and $(p,q)\mapsto(-q,p)$, an eigenspinor $u_n(\mu)$ corresponding to a particular eigenvalue $\lambda_n^2[p,q]$ of $\tilde{D}^2$ transforms as $u_n(\mu)\mapsto u_n(\mu-i)$ and $u_n(\mu)\mapsto -\gamma^0u_n(\frac{1}{\mu})$ respectively, where the spatial dependence of $u_n$ is unaffected and suppressed from the notation. The transformations of the eigenspinors define a bijection between the spaces of eigenspinors of the corresponding operators.
\end{theorem}
\begin{proof}
For the two-parametric family of metrics, suppose $u_{n}(\mu,\eta,\phi,\psi)$
is a section of the spin bundle such that $\tilde{D}[p,q]u_{n}(\mu,\eta,\phi,\psi)=\lambda_{n}u_{n}(\mu,\eta,\phi,\psi)$. For simplicity, let us suppress writing the dependence of $u_{n}$ on $\eta$, $\phi$, and $\psi$
and assume that it is understood. Using the explicit expression \eqref{DiracConfBianchiIXEq} for the Dirac operator we
have:
{\small
\begin{eqnarray*}
&& \left(-\tilde{D}[-q,p]\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\right)\mid_{\mu=\mu_{0}} =
\end{eqnarray*}
\[
-(F[-q,p](i\mu_{0})w_{1}[-q,p](i\mu_{0})w_{2}[-q,p](i\mu_{0}) w_{3}[-q,p](i\mu_{0}))^{-\frac{1}{2}}
\gamma^{0}\partial_{\mu}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
+ \sin\psi\cdot\left(\frac{F[-q,p](i\mu_{0})w_{2}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})}{w_{1}[-q,p](i\mu_{0})}\right)^{-\frac{1}{2}}
\gamma^{1}\partial_{\eta}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\cos\psi\cdot\left(\frac{F[-q,p](i\mu_{0})w_{1}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})}{w_{2}[-q,p](i\mu_{0})}\right)^{-\frac{1}{2}}
\gamma^{2}\partial_{\eta}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\cos\psi\csc\eta\cdot\left(\frac{F[-q,p](i\mu_{0})w_{2}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})}{w_{1}[-q,p](i\mu_{0})}\right)^{-\frac{1}{2}}
\gamma^{1}\partial_{\phi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\sin\psi\csc\eta\cdot\left(\frac{F[-q,p](i\mu_{0})w_{1}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})}{w_{2}[-q,p](i\mu_{0})}\right)^{-\frac{1}{2}}
\gamma^{2}\partial_{\phi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
+\cos\psi\cot\eta\cdot\left(\frac{F[-q,p](i\mu_{0})w_{2}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})}{w_{1}[-q,p](i\mu_{0})}\right)^{-\frac{1}{2}}\gamma^{1}\partial_{\psi}
\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
+\sin\psi\cot\eta\cdot\left(\frac{F[-q,p](i\mu_{0})w_{1}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})}{w_{2}[-q,p](i\mu_{0})}\right)^{-\frac{1}{2}}
\gamma^{2}\partial_{\psi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\left(\frac{F[-q,p](i\mu_{0})w_{1}[-q,p](i\mu_{0})w_{2}[-q,p](i\mu_{0})}{w_{3}[-q,p](i\mu_{0})}\right)^{-\frac{1}{2}}\gamma^{3}\partial_{\psi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\frac{1}{4}\frac{\frac{w_{1}^{'}[-q,p](i\mu_{0})}{w_{1}[-q,p](i\mu_{0})}+\frac{w_{2}^{'}[-q,p](i\mu_{0})}{w_{2}[-q,p](i\mu_{0})}+\frac{w_{3}^{'}[-q,p](i\mu_{0})}{w_{3}[-q,p](i\mu_{0})}+3\frac{F^{'}[-q,p](i\mu_{0})}{F[-q,p](i\mu_{0})}}{\left(F[-q,p](i\mu_{0})w_{1}[-q,p](i\mu_{0})w_{2}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})\right)^{\frac{1}{2}}}
\gamma^{0}\left(-\gamma^{0}u_{n}(\frac{1}{\mu_{0}})\right)
\]
\[ +\frac{1}{4}\left(\frac{w_{1}[-q,p](i\mu_{0})w_{2}[-q,p](i\mu_{0})w_{3}[-q,p](i\mu_{0})}{F[-q,p](i\mu_{0})}\right)^{\frac{1}{2}} \times
\]
\[
\quad \left(\frac{1}{w_{1}^{2}[-q,p](i\mu_{0})}+\frac{1}{w_{2}^{2}[-q,p](i\mu_{0})}+\frac{1}{w_{3}^{2}[-q,p](i\mu_{0})}\right) \gamma^{1}\gamma^{2}\gamma^{3}\left(-\gamma^{0}u_{n}(\frac{1}{\mu_{0}})\right).
\]
}
\smallskip
Now, in the latter, we can use lemmas \ref{transformationsw_12}, \ref{transformationsw_22}, \ref{transformationsw_32} and
\ref{transformationsFSlemma}, which show how the modular transformation $S(i \mu) = i/ \mu$ affects
the functions $w_j[p, q]$ and $F[p, q]$. This yields
{\small
\begin{eqnarray*}
\left(-\tilde{D}[-q,p]\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\right)\mid_{\mu=\mu_{0}} =
\end{eqnarray*}
\[
-(F[p,q](\frac{i}{\mu_{0}})w_{1}[p,q](\frac{i}{\mu_{0}})w_{2}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}}))^{-\frac{1}{2}} \left(-\partial_{\frac{1}{\mu}}\gamma^{0}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}\right)
\]
\[
+\sin\psi\cdot\left(\frac{F[p,q](\frac{i}{\mu_{0}})w_{2}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})}{w_{1}[p,q](\frac{i}{\mu_{0}})}\right)^{-\frac{1}{2}}\gamma^{0}\gamma^{1}\gamma^{0}\partial_{\eta}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\cos\psi\cdot\left(\frac{F[p,q](\frac{i}{\mu_{0}})w_{1}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})}{w_{2}[p,q](\frac{i}{\mu_{0}})}\right)^{-\frac{1}{2}}\gamma^{0}\gamma^{2}\gamma^{0}\partial_{\eta}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\cos\psi\csc\eta\cdot\left(\frac{F[p,q](\frac{i}{\mu_{0}})w_{2}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})}{w_{1}[p,q](\frac{i}{\mu_{0}})}\right)^{-\frac{1}{2}} \gamma^{0}\gamma^{1}\gamma^{0}\partial_{\phi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\sin\psi\csc\eta\cdot\left(\frac{F[p,q](\frac{i}{\mu_{0}})w_{1}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})}{w_{2}[p,q](\frac{i}{\mu_{0}})}\right)^{-\frac{1}{2}}\gamma^{0}\gamma^{2}\gamma^{0}\partial_{\phi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
+\cos\psi\cot\eta\cdot\left(\frac{F[p,q](\frac{i}{\mu_{0}})w_{2}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})}{w_{1}[p,q](\frac{i}{\mu_{0}})}\right)^{-\frac{1}{2}}\gamma^{0}\gamma^{1}\gamma^{0}\partial_{\psi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
+\sin\psi\cot\eta\cdot\left(\frac{F[p,q](\frac{i}{\mu_{0}})w_{1}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})}{w_{2}[p,q](\frac{i}{\mu_{0}})}\right)^{-\frac{1}{2}}\gamma^{0}\gamma^{2}\gamma^{0}\partial_{\psi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\left(\frac{F[p,q](\frac{i}{\mu_{0}})w_{1}[p,q](\frac{i}{\mu_{0}})w_{2}[p,q](\frac{i}{\mu_{0}})}{w_{3}[p,q](\frac{i}{\mu_{0}})}\right)^{-\frac{1}{2}}\gamma^{0}\gamma^{3}\gamma^{0}\partial_{\psi}\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)\mid_{\mu=\mu_{0}}
\]
\[
-\frac{1}{4}\frac{-\frac{w_{3}^{'}[p,q](\frac{i}{\mu_{0}})}{w_{3}[p,q](\frac{i}{\mu_{0}})}-\frac{2}{\mu_{0}}-\frac{w_{2}^{'}[p,q](\frac{i}{\mu_{0}})}{w_{2}[p,q](\frac{i}{\mu_{0}})}-\frac{2}{\mu_{0}}-\frac{w_{1}^{'}[p,q](\frac{i}{\mu_{0}})}{w_{1}[p,q](\frac{i}{\mu_{0}})}-\frac{2}{\mu_{0}}-3\frac{F^{'}[p,q](\frac{i}{\mu_{0}})}{F[p,q](\frac{i}{\mu_{0}})}+\frac{6}{\mu_{0}}}{\left(F[p,q](\frac{i}{\mu_{0}})w_{1}[p,q](\frac{i}{\mu_{0}})w_{2}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})\right)^{\frac{1}{2}}}\times
\]
\[
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\qquad \qquad\qquad\qquad\qquad \qquad \gamma^{0}\left(-\gamma^{0}u_{n}(\frac{1}{\mu_{0}})\right)+
\]
\[
\left(\frac{w_{1}[p,q](\frac{i}{\mu_{0}})w_{2}[p,q](\frac{i}{\mu_{0}})w_{3}[p,q](\frac{i}{\mu_{0}})}{F[p,q](\frac{i}{\mu_{0}})}\right)^{\frac{1}{2}}\cdot\left(\frac{1}{w_{1}^{2}[p,q](\frac{i}{\mu_{0}})}+\frac{1}{w_{2}^{2}[p,q](\frac{i}{\mu_{0}})}+\frac{1}{w_{3}^{2}[p,q](\frac{i}{\mu_{0}})}\right)
\]
\[
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \frac{1}{4} \gamma^{1}\gamma^{2}\gamma^{3}\left(-\gamma^{0}u_{n}(\frac{1}{\mu_{0}})\right)
\]
\begin{eqnarray*}
&=&-\gamma^{0}\left(\tilde{D}[p,q]u_{n}(\mu)\right)\mid_{\mu=\frac{1}{\mu_{0}}} \\
&=& \lambda_{n}\left(-\gamma^{0}u_{n}(\frac{1}{\mu}) \right )\mid_{\mu=\mu_{0}}.
\end{eqnarray*}
}
\smallskip
Thus we showed that, for any eigenspinor $u_{n}(\mu)$ of $\tilde{D}[p,q]$ with
eigenvalue $\lambda_{n}$, the spinor $-\gamma^{0}u_{n}(\frac{1}{\mu})$
is an eigenspinor of $-\tilde{D}[-q,p]$ with the same eigenvalue
$\lambda_{n}$. From the above equations, one can also verify immediately
that for any eigenspinor $u_{n}(\mu)$ of $-\tilde{D}[-q,p]$ with
eigenvalue $\lambda_{n}$, the spinor $\gamma^{0}u_{n}(\frac{1}{\mu})$
is an eigenspinor of $\tilde{D}[p,q]$ with the same eigenvalue $\lambda_{n}$.
Consequently, we see that there is a bijection $\lambda_{n}\mapsto-\lambda_{n}$
between the eigenvalues of $\tilde{D}[p,q]$ and those of $\tilde{D}[-q,p]$.
Since the eigenspinors of $\tilde{D}^{2}$ are exactly the eigenspinors
of $\tilde{D}$ with the corresponding eigenvalues squared, it follows
that $\tilde{D}^{2}[p,q]$ and $\tilde{D}^{2}[-q,p]$ are isospectral.
\smallskip
In a similar manner, we also have that
{\small
\[
\left(\tilde{D}[p,q+p+\frac{1}{2}]u_{n}(\mu-i)\right)\mid_{\mu=\mu_{0}}=
\]
\[
(F[p,q+p+\frac{1}{2}](i\mu_{0})w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})
w_{3}[p,q+p+\frac{1}{2}](i\mu_{0}))^{-\frac{1}{2}}\times
\]
\[\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\gamma^{0}\partial_{\mu}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
-\sin\psi\cdot\left(\frac{F[p,q+p+\frac{1}{2}](i\mu_{0})w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{-\frac{1}{2}}\]
\[ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\gamma^{1}\partial_{\eta}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
+\cos\psi\cdot\left(\frac{F[p,q+p+\frac{1}{2}](i\mu_{0})w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{-\frac{1}{2}} \times \]
\[
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\gamma^{2}\partial_{\eta}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
+\cos\psi\csc\eta\cdot\left(\frac{F[p,q+p+\frac{1}{2}](i\mu_{0})w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{-\frac{1}{2}} \times
\]
\[
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \gamma^{1}\partial_{\phi}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
+\sin\psi\csc\eta\cdot\left(\frac{F[p,q+p+\frac{1}{2}](i\mu_{0})w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{-\frac{1}{2}} \times
\]
\[
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \gamma^{2}\partial_{\phi}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
-\cos\psi\cot\eta\cdot\left(\frac{F[p,q+p+\frac{1}{2}](i\mu_{0})w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{-\frac{1}{2}}\times
\]
\[
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\gamma^{1}\partial_{\psi}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
-\sin\psi\cot\eta\cdot\left(\frac{F[p,q+p+\frac{1}{2}](i\mu_{0})w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{-\frac{1}{2}} \times
\]
\[\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\gamma^{2}\partial_{\psi}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
\quad +\left(\frac{F[p,q+p+\frac{1}{2}](i\mu_{0})w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{-\frac{1}{2}} \times
\]
\[\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\gamma^{3}\partial_{\psi}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}
\]
\[
+\frac{1}{4}\frac{\frac{w_{1}^{'}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})}+\frac{w_{2}^{'}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})}+\frac{w_{3}^{'}[p,q+p+\frac{1}{2}](i\mu_{0})}{w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}+3\frac{F^{'}[p,q+p+\frac{1}{2}](i\mu_{0})}{F[p,q+p+\frac{1}{2}](i\mu_{0})}}{\left(F[p,q+p+\frac{1}{2}](i\mu_{0})w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})\right)^{\frac{1}{2}}} \times
\]
\[\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \gamma^{0}u_{n}(\mu_{0}-i)
\]
\[
-\frac{1}{4}\left(\frac{w_{1}[p,q+p+\frac{1}{2}](i\mu_{0})w_{2}[p,q+p+\frac{1}{2}](i\mu_{0})w_{3}[p,q+p+\frac{1}{2}](i\mu_{0})}{F[p,q+p+\frac{1}{2}](i\mu_{0})}\right)^{\frac{1}{2}}\times
\]
\[
\left(\frac{1}{w_{1}^{2}[p,q+p+\frac{1}{2}](i\mu_{0})}+\frac{1}{w_{2}^{2}[p,q+p+\frac{1}{2}](i\mu_{0})}+\frac{1}{w_{3}^{2}[p,q+p+\frac{1}{2}](i\mu_{0})}\right)\gamma^{1}\gamma^{2}\gamma^{3}u_{n}(\mu_{0}-i).
\]
}
\smallskip
Therefore, using lemmas \ref{transformationsw_j1} and \ref{transformationsF} in the latter, which explain how the
parameters of the functions $w_j$ and $F$ are affected by the modular transformation $T_1(i \mu)=i \mu+1$,
we can write:
{\small
\[
\left(\tilde{D}[p,q+p+\frac{1}{2}]u_{n}(\mu-i)\right)\mid_{\mu=\mu_{0}}=
\]
\[
(F[p,q](i\mu_{0}+1)w_{1}[p,q](i\mu_{0}+1)w_{2}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1))^{-\frac{1}{2}}\cdot\gamma^{0}\partial_{\mu}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[
-\sin\psi\cdot\left(\frac{F[p,q](i\mu_{0}+1)w_{2}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)}{w_{1}[p,q](i\mu_{0}+1)}\right)^{-\frac{1}{2}}\gamma^{1}\partial_{\eta}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[+\cos\psi\cdot\left(\frac{F[p,q](i\mu_{0}+1)w_{1}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)}{w_{2}[p,q](i\mu_{0}+1)}\right)^{-\frac{1}{2}}\gamma^{2}\partial_{\eta}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[
+\cos\psi\csc\eta\cdot\left(\frac{F[p,q](i\mu_{0}+1)w_{2}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)}{w_{1}[p,q](i\mu_{0}+1)}\right)^{-\frac{1}{2}}\gamma^{1}\partial_{\phi}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[
+\sin\psi\csc\eta\cdot\left(\frac{F[p,q](i\mu_{0}+1)w_{1}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)}{w_{2}[p,q](i\mu_{0}+1)}\right)^{-\frac{1}{2}}\gamma^{2}\partial_{\phi}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[
-\cos\psi\cot\eta\cdot\left(\frac{F[p,q](i\mu_{0}+1)w_{2}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)}{w_{1}[p,q](i\mu_{0}+1)}\right)^{-\frac{1}{2}}\gamma^{1}\partial_{\psi}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[
-\sin\psi\cot\eta\cdot\left(\frac{F[p,q](i\mu_{0}+1)w_{1}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)}{w_{2}[p,q](i\mu_{0}+1)}\right)^{-\frac{1}{2}}\gamma^{2}\partial_{\psi}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[
+\left(\frac{F[p,q](i\mu_{0}+1)w_{1}[p,q](i\mu_{0}+1)w_{2}[p,q](i\mu_{0}+1)}{w_{3}[p,q](i\mu_{0}+1)}\right)^{-\frac{1}{2}}\gamma^{3}\partial_{\psi}u_{n}(\mu)\mid_{\mu=\mu_{0}-i}
\]
\[
+\frac{1}{4}\frac{\frac{w_{1}^{'}[p,q](i\mu_{0}+1)}{w_{1}[p,q](i\mu_{0}+1)}+\frac{w_{2}^{'}[p,q](i\mu_{0}+1)}{w_{2}[p,q](i\mu_{0}+1)}+\frac{w_{3}^{'}[p,q](i\mu_{0}+1)}{w_{3}[p,q](i\mu_{0}+1)}+3\frac{F^{'}[p,q](i\mu_{0}+1)}{F[p,q](i\mu_{0}+1)}}{\left(F[p,q](i\mu_{0}+1)w_{1}[p,q](i\mu_{0}+1)w_{2}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)\right)^{\frac{1}{2}}}\cdot\gamma^{0}u_{n}(\mu_{0}-i)
\]
\[
-\frac{1}{4}\left(\frac{w_{1}[p,q](i\mu_{0}+1)w_{2}[p,q](i\mu_{0}+1)w_{3}[p,q](i\mu_{0}+1)}{F[p,q](i\mu_{0}+1)}\right)^{\frac{1}{2}} \times \]
\[
\qquad \qquad \left(\frac{1}{w_{1}^{2}[p,q](i\mu_{0}+1)}+\frac{1}{w_{2}^{2}[p,q](i\mu_{0}+1)}+\frac{1}{w_{3}^{2}[p,q](i\mu_{0}+1)}\right)
\gamma^{1}\gamma^{2}\gamma^{3}u_{n}(\mu_{0}-i)
\]
\begin{eqnarray*}
&=&\left(\tilde{D}[p,q]u_{n}(\mu)\right)\mid_{\mu=\mu_{0}-i} \\
&=&\lambda_{n}u_{n}(\mu-i)\mid_{\mu=\mu_{0}}.
\end{eqnarray*}
}
\smallskip
Thus the following identity is proved
\[
\tilde{D}[p,q+p+\frac{1}{2}]u_{n}(\mu-i)=\lambda_{n}u_{n}(\mu-i),
\]
which implies that the operators $\tilde{D}^{2}[p,q+p+\frac{1}{2}]$
and $\tilde{D}^{2}[p,q]$ are isospectral.
\smallskip
Finally, it is easy to see that the transformations $u_n(\mu)\mapsto u_n(\mu+i)$ and $u_n(\mu)\mapsto \gamma^0u_n(\frac{1}{\mu})$ are two-sided inverses of the two eigenspinor transformations defined in the theorem, so the eigenspinor transformations are indeed bijections between the spaces of eigenspinors.
\end{proof}
\smallskip
Since the kernel $K_t[p, q]$ of the operator $\exp \left ( -t \tilde D^2[p, q] \right )$ can be written
as a sum that involves its eigenvalues and the corresponding eigenspinors,
it follows immediately from Theorem \ref{IsoSpecDiracs2paraThm} that
the heat kernels corresponding to the parameters $(p, q)$ indeed satisfy
modular transformation properties.
\smallskip
\begin{corollary} \label{modularheatkernelCor}
With the spatial dependence suppressed, we have
the following modular transformation properties for the heat kernel
$ K_t[p,q](i\mu_1,i\mu_2) $ of the operator $\tilde D^2[p, q]$:
\begin{eqnarray*}
K_t[p,q](i\mu_1+1,i\mu_2+1) &=& K_t[p,q+p+\frac{1}{2}](i\mu_1,i\mu_2), \\
K_t[p,q](-\frac{1}{i\mu_1},-\frac{1}{i\mu_2}) &=& (i\mu_2)^2K_t[-q,p](i\mu_1,i\mu_2).
\end{eqnarray*}
\end{corollary}
\begin{proof}
Recall that the defining equation for the heat kernel is
\begin{eqnarray*}
\int_{M} K_t[p,q](i\mu_1,i\mu_2)u_n(\mu_2)dvol(\mu_2) = e^{-t\lambda_n^2}u_n(\mu_1),
\end{eqnarray*}
for any eigenspinor $u_n(\mu)$ of $\tilde{D}^2[p,q]$ acting on $M$. As a result, for $(\mu_1,\mu_2)\in M_1 \times M_1$, we have $(\mu_1-i,\mu_2-i)\in M_0 \times M_0$, so
\begin{eqnarray*}
e^{-t\lambda_n^2}u_n(\mu_1-i) &=& \int_{M_0} K_t[p,q](i\mu_1+1,i\mu_2+1)u_n(\mu_2-i) \, dvol(\mu_2-i)\\
&=& \int_{M_1} K_t[p,q](i\mu_1+1,i\mu_2+1)u_n(\mu_2-i) \,dvol(\mu_2),
\end{eqnarray*}
where $u_n(\mu-i)$ is the general form of an eigenspinor of $\tilde{D}^2[p,q+p+\frac{1}{2}]$ acting on $M_1$ according to Theorem \ref{IsoSpecDiracs2paraThm}.
\smallskip
Similarly, for $(\mu_1,\mu_2)\in M_2 \times M_2$, we have $(\frac{1}{\mu_1},\frac{1}{\mu_2})\in M_0 \times M_0$, so
\begin{eqnarray*}
e^{-t\lambda_n^2}\left(-\gamma^{0}u_{n}(\frac{1}{\mu_1})\right) &=&
-\gamma^{0}e^{-t\lambda_n^2}u_{n}(\frac{1}{\mu_1}) \\
&=& \int_{M_0} K_t[p,q](-\frac{1}{i\mu_1},-\frac{1}{i\mu_2})\left(-\gamma^{0}u_{n}(\frac{1}{\mu_2})\right)dvol(\frac{1}{\mu_2})
\end{eqnarray*}
\begin{eqnarray*}
\qquad \qquad &=& \int_{M_2} \big(-\frac{1}{\mu_2^2 }K_t[p,q](-\frac{1}{i\mu_1},-\frac{1}{i\mu_2})\big)\left(-\gamma^{0}u_{n}(\frac{1}{\mu_2})\right)dvol(\mu_2),
\end{eqnarray*}
where $\left(-\gamma^{0}u_{n}(\frac{1}{\mu})\right)$ is the general form of an eigenspinor of $\tilde{D}^2[-q,p]$ acting on $M_2$ according to Theorem \ref{IsoSpecDiracs2paraThm}.
\smallskip
Thus we see that $K_t[p,q](i\mu_1+1,i\mu_2+1)$ and $-\frac{1}{\mu_2^2 }K_t[p,q](-\frac{1}{i\mu_1},-\frac{1}{i\mu_2})$ satisfy the defining equations of $K_t[p,q+p+\frac{1}{2}](i\mu_1,i\mu_2)$ and $K_t[-q,p](i\mu_1,i\mu_2)$ respectively. Now, the uniqueness of the heat kernel implies that
\begin{eqnarray*}
K_t[p,q](i\mu_1+1,i\mu_2+1) &=& K_t[p,q+p+\frac{1}{2}](i\mu_1,i\mu_2),\\
K_t[p,q](-\frac{1}{i\mu_1},-\frac{1}{i\mu_2})& = &(i\mu_2)^2K_t[-q,p](i\mu_1,i\mu_2).
\end{eqnarray*}
\end{proof}
\smallskip
Having established the modular transformation properties for the heat kernel $K_t[p, q]$,
we can now show that all of the Seeley-de Witt coefficients $\tilde a_{2n}[p, q]$ inherit
the same properties from the kernel.
\begin{corollary} \label{alltermstwoparamodularCor}
For any non-negative integer $n$ we have
\begin{eqnarray*}
\tilde{a}_{2n}[p,q](i\mu+1)&=&\tilde{a}_{2n}[p,q+p+\frac{1}{2}](i\mu), \\
\tilde{a}_{2n}[p,q](\frac{i}{\mu})&=&(i\mu)^{2}\tilde{a}_{2n}[-q,p](i\mu).
\end{eqnarray*}
In addition, at any point $i\mu=P \in\mathbb{H}$, let $v_P(f)$ be the order of the zero in $Q$ of the function $f(Q)$, where $Q=e^{-\pi\mu}$. Then we have
\begin{eqnarray*}
v_P\big(\tilde{a}_{2n}[p,q](Q)\big)=v_P\big(K_t[p,q](Q)\big),
\end{eqnarray*}
where
\[
K_t[p,q](Q)=\int_{\mathbb{S}^3}\mathrm{Trace}\big\{K_t[p,q](i\mu,i\mu)\big\}dvol^3.
\]
So in particular, all of the above Seeley-de Witt coefficients have the same zeros with the same orders in $\mathbb{H}$.
\end{corollary}
\begin{proof}
The Seeley-de Witt coefficients $\tilde{a}_{2n}[p,q](i\mu)$ can be uniquely defined by the asymptotic expansion
\begin{eqnarray*}
K_t[p,q](Q) \sim t^{-2}\sum_{n=0}^\infty \tilde{a}_{2n}[p,q](Q)t^n,
\end{eqnarray*}
or equivalently,
\begin{eqnarray*}
\int_{\mathbb{S}^3}\mathrm{Trace}\big\{K_t[p,q](i\mu,i\mu)\big\}dvol^3 \sim t^{-2}\sum_{n =0}^\infty \tilde{a}_{2n}[p,q](i\mu)t^n.
\end{eqnarray*}
\smallskip
So, using Corollary \ref{modularheatkernelCor}, we have:
\[
\int_{\mathbb{S}^3}\mathrm{Trace}\big\{K_t[p,q](i\mu+1,i\mu+1)\big\}dvol^3
=
\int_{\mathbb{S}^3}\mathrm{Trace}\big\{K_t[p,q+p+\frac{1}{2}](i\mu,i\mu)\big\}dvol^3.
\]
Since the left and the right hand side of the latter have the following small time asymptotic expansions
respectively,
\[
t^{-2}\sum_{n = 0}^\infty \tilde{a}_{2n}[p,q](i\mu+1)t^n,
\qquad
t^{-2}\sum_{n =0}^\infty \tilde{a}_{2n}[p,q+p+\frac{1}{2}](i\mu)t^n,
\]
it follows from the uniqueness of the asymptotic expansion that
\[
\tilde{a}_{2n}[p,q](i\mu+1)
=
\tilde{a}_{2n}[p,q+p+\frac{1}{2}](i\mu), \qquad n \in \mathbb{Z}_{\geq 0}.
\]
Also, in a similar manner, we can write
\begin{eqnarray*}
&& \int_{\mathbb{S}^3}\mathrm{Trace}\big\{K_t[p,q](-\frac{1}{i\mu},-\frac{1}{i\mu})\big\}dvol^3
\sim t^{-2}\sum_{n=0}^\infty \tilde{a}_{2n}[p,q](-\frac{1}{i\mu})t^n \\
&& \qquad \sim (i\mu)^{2}\int_{\mathbb{S}^3}\mathrm{Trace}\big\{K_t[-q,p](i\mu,i\mu)\big\}dvol^3 \\
&& \qquad \sim t^{-2}\sum_{n=0}^\infty(i\mu)^{2}\tilde{a}_{2n}[-q,p](i\mu)t^n,
\end{eqnarray*}
which implies that
\[
\tilde{a}_{2n}[p,q](\frac{i}{\mu})
=
(i\mu)^{2}\tilde{a}_{2n}[-q,p](i\mu), \qquad n \in \mathbb{Z}_{\geq 0}.
\]
\smallskip
If $v$ is the order of zero in $Q$ of the function $K_t[p,q](Q)$ at $Q_0$, then
\begin{eqnarray*}
\lim_{Q\rightarrow Q_0}(Q-Q_0)^{-v}K_t[p,q](Q) = C_t[p,q],
\end{eqnarray*}
for $t\in(0,1)$ and some finite $C_t[p,q]\neq 0$.
On the other hand, the asymptotic expansion means that for any integer $k$ there is some $N(k)$ such that
\begin{eqnarray*}
\Big|K_t[p,q](Q)-t^{-2}\sum_{n=0}^{N(k)}\tilde{a}_{2n}[p,q](Q)t^n\Big|_{\infty,k}<C_kt^k, \qquad 0<t<1.
\end{eqnarray*}
See sections 1.1 and 1.7 of the book \cite{GilBook1} for the latter inequality and the definition
of the norm $|\cdot|_{\infty, k}$.
So it follows that
\begin{eqnarray*}
\tilde{a}_{2n}[p,q](Q) = \frac{1}{n!}\lim_{t\rightarrow 0^+}\frac{d^n}{dt^n}\big(t^2K_t[p,q](Q)\big),
\end{eqnarray*}
where the convergence is uniform. Now suppose that
\begin{eqnarray*}
\lim_{Q\rightarrow Q_0}(Q-Q_0)^{-v_n}\tilde{a}_{2n}[p,q](Q) = C_{n}[p,q],
\end{eqnarray*}
for some finite $C_{n}[p,q]\neq 0$. Then we can switch the order of the two limits below and obtain
\begin{eqnarray*}
C_{n}[p,q]&=&\lim_{Q\rightarrow Q_0}(Q-Q_0)^{-v_n}\tilde{a}_{2n}[p,q](Q) \\
&=&\frac{1}{n!}\lim_{Q\rightarrow Q_0}\lim_{t\rightarrow 0^+}\frac{d^n}{dt^n}\left (t^2(Q-Q_0)^{-v_n}K_t[p,q](Q)\right )\\
&=&\frac{1}{n!}\lim_{t\rightarrow 0^+}\frac{d^n}{dt^n}\left (t^2C_t[p,q]\lim_{Q\rightarrow Q_0}(Q-Q_0)^{v-v_n}\right).
\end{eqnarray*}
As a result, for $C_{n}[p,q]$ to be finite and nonzero, we need to have $v_n=v$, which proves that the Seeley-de Witt coefficients have the same zeros with the same orders.
\end{proof}
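\smallskip
The extraction formula used in the proof can be illustrated on a toy model: for a model kernel trace with an exact finite expansion, $K_{t}=t^{-2}(a_{0}+a_{2}t+a_{4}t^{2})$, the formula recovers the coefficients exactly. The following SymPy sketch (purely illustrative) demonstrates this:
\begin{verbatim}
# Toy illustration of  a_{2n} = (1/n!) lim_{t->0+} d^n/dt^n ( t^2 K_t ).
import sympy as sp

t = sp.symbols('t', positive=True)
a0, a2, a4 = sp.symbols('a_0 a_2 a_4')
K = t**(-2) * (a0 + a2*t + a4*t**2)   # model kernel trace

recovered = [sp.limit(sp.diff(t**2*K, t, n), t, 0, '+')/sp.factorial(n)
             for n in range(3)]
assert recovered == [a0, a2, a4]
print(recovered)
\end{verbatim}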
\smallskip
As an important remark, it should be mentioned that the proof of the latter corollary
covers the case of poles of meromorphic functions,
considered as zeros of negative order.
\smallskip
We now turn our focus to the one-parametric case. We proved in Theorem \ref{modualra024Soneparathm} that for the one-parametric Bianchi
IX gravitational instantons \eqref{one-parametric} as well, the terms $\tilde a_0[q_0]$,
$\tilde a_2[q_0]$ and $\tilde a_4[q_0]$ satisfy modular transformation properties. In
order to show that all of the terms $\tilde a_{2n}[q_0]$ satisfy the properties mentioned
in this theorem, similarly to the two-parametric case treated so far in this section,
we can use the isospectrality of the involved Dirac operators.
\smallskip
\begin{theorem} \label{IsoSpecOneParaThm}
For the one-parameter family
of metrics characterized by $q_{0}$, the operators $\tilde{D}^2[q_{0}]$,
$\tilde{D}^2[q_{0}-i]$, $-q_{0}^{-2}\tilde{D}^2[\frac{1}{q_{0}}]$
are isospectral.
\end{theorem}
\begin{proof}
It is given in Appendix \ref{IsoSpecOneParaThmPfappendix}.
\end{proof}
An immediate consequence of the latter theorem is the following statement for all of the
Seeley-de Witt coefficients $\tilde a_{2n}[q_0]$.
\smallskip
\begin{theorem}
For any non-negative integer $n$, we have:
\begin{eqnarray*}
\tilde{a}_{2n}[q_{0}](i\mu+1)&=&\tilde{a}_{2n}[q_{0}-i](i\mu), \\
\tilde{a}_{2n}[q_{0}](\frac{i}{\mu})&=&(-1)^{n+1}q_{0}^{4-2n}\mu^{2}\tilde{a}_{2n}[\frac{1}{q_{0}}](i\mu).
\end{eqnarray*}
\begin{proof}
Since the operators
$\tilde{D}^{2}[q_{0}],$ $\tilde{D}^{2}[q_{0}-i]$ and $-q_{0}^{-2}\tilde{D}^{2}[\frac{1}{q_{0}}]$
are isospectral, they have
the same small time heat kernel expansions, namely that, for all non-negative integers $n$ we have
\[
\tilde{a}_{2n}[q_{0}]=(-1)^{n}q_{0}^{4-2n}\tilde{a}_{2n}[\frac{1}{q_{0}}]=\tilde{a}_{2n}[q_{0}-i].
\]
Therefore, for arbitrary real numbers $a$ and $b$ we have
\begin{eqnarray*}
\int_{a}^{b}d\mu\cdot\tilde{a}_{2n}[q_{0}](i\mu)
&=& \int_{\frac{1}{b}}^{\frac{1}{a}}d\frac{1}{\mu}\cdot(-1)^{n}q_{0}^{4-2n}\tilde{a}_{2n}[\frac{1}{q_{0}}](\frac{i}{\mu}) \\
&=&\int_{a}^{b}d\mu\cdot\left((-1)^{n+1}\mu^{-2}q_{0}^{4-2n}\tilde{a}_{2n}[\frac{1}{q_{0}}](\frac{i}{\mu})\right) \\
&=&\int_{a+i}^{b+i}d(\mu+i)\cdot\tilde{a}_{2n}[q_{0}-i](i(\mu+i)) \\
&=&\int_{a}^{b}d\mu\cdot\tilde{a}_{2n}[q_{0}-i](i\mu-1).
\end{eqnarray*}
This shows that
\[
\tilde{a}_{2n}[q_{0}](i\mu)=(-1)^{n+1}\mu^{-2}q_{0}^{4-2n}\tilde{a}_{2n}[\frac{1}{q_{0}}](\frac{i}{\mu})=\tilde{a}_{2n}[q_{0}-i](i\mu-1),
\]
which is equivalent to the statement of this theorem.
\end{proof}
\end{theorem}
\smallskip
\section{Modular forms arising from $\tilde a_{2n}[p, q]$}
\label{ModularFormsSec}
\smallskip
In this section, we use the modular transformation properties of the
Seeley-de Witt coefficients studied in the previous sections to show that when
the parameters of the metric are rational in the two-parametric case, they give rise to vector-valued
modular forms. Then we show that by a summation over a finite orbit of
the parameters, they give rise to ordinary modular functions. At the end
we investigate their connection with well-known modular forms. Indeed,
we show that, in examples of two general cases, one with poles at infinity and the other
with no poles at infinity, the modular functions corresponding to the
Seeley-de Witt coefficients land in a direct way in the modular forms
of weight 14 or in the cusp forms of weight 18.
\smallskip
\subsection{Vector-valued and ordinary modular functions from $\tilde a_{2n}[p, q]$}
The following lemma shows the periodicity of all Seeley-de Witt coefficients
$\tilde a_{2n}[p, q]$ in both of the parameters of the metric with period 1.
This is a crucial step for showing that each $\tilde a_{2n}$ defines
a vector-valued modular form with respect to a finite-dimensional representation
of the modular group $PSL_2(\mathbb{Z})$. More importantly, it also allows one
to construct ordinary modular functions from each $\tilde a_{2n}[p, q]$, which
can then be related to well-known modular forms.
\smallskip
\begin{lemma} \label{periodicityinbothparametersLemma}
For any non-negative integer $n$ and any parameters $(p, q)$
of the Bianchi IX gravitational instantons we have:
\[
\tilde a_{2n}[p+1, q] = \tilde a_{2n}[p, q+1] = \tilde a_{2n}[p, q].
\]
\end{lemma}
\begin{proof}
Recalling the ingredients of the explicit formulas for the metric from Subsection 4.1 and using Lemma 5.2, we know that all the involved functions are invariant under $p\mapsto p+1$, so the periodicity in $p$ follows trivially. In addition, we have
\begin{eqnarray*}
w_{1}[p,q+1](i\mu)&=&-\frac{i}{2}\vartheta_{3}(i\mu)\vartheta_{4}(i\mu)\frac{e^{2\pi i p}\partial_{q}\vartheta[p,q+\frac{1}{2}](i\mu)}{e^{2\pi i p}e^{\pi ip}\vartheta[p,q](i\mu)}= w_{1}[p,q](i\mu), \\
w_{2}[p,q+1](i\mu)&=&\frac{i}{2}\vartheta_{2}(i\mu)\vartheta_{4}(i\mu)\frac{-e^{2\pi i p}\partial_{q}\vartheta[p+\frac{1}{2},q+\frac{1}{2}](i\mu)}{e^{2\pi i p}e^{\pi ip}\vartheta[p,q](i\mu)}=-w_{2}[p,q](i\mu),\\
w_{3}[p,q+1](i\mu)&=&-\frac{1}{2}\vartheta_{2}(i\mu)\vartheta_{3}(i\mu)\frac{-e^{2\pi i p}\partial_{q}\vartheta[p+\frac{1}{2},q](i\mu)}{e^{2\pi i p}\vartheta[p,q](i\mu)}=-w_{3}[p,q](i\mu),\\
F[p,q](i\mu)&=&\frac{2}{\pi\Lambda} \left(\frac{e^{2\pi i p}\vartheta[p,q](i\mu)}{e^{2\pi i p}\partial_{q}\vartheta[p,q](i\mu)}\right)^{2}=F[p,q](i\mu).
\end{eqnarray*}
Consequently, we see from the equations above that the metric
\begin{equation*}
d\tilde s^2= F \left ( w_1 w_2 w_3 \, d\mu^2 +
\frac{w_2 w_3}{w_1} \sigma_1^2 +
\frac{w_3 w_1}{w_2} \sigma_2^2+
\frac{w_1 w_2}{w_3} \sigma_3^2 \right )
\end{equation*}
is invariant under $q\mapsto q+1$, so the periodicity in $q$ also follows.
\end{proof}
\smallskip
Now, relying on Lemma \ref{periodicityinbothparametersLemma}, we can
associate the following maps with the generators of $PSL_2(\mathbb{Z})$ acting
on the ordered pair $(p,q)\in S=[0,1)^2 = (\mathbb{R}/\mathbb{Z})^2$:
\begin{eqnarray*}
\tilde{S}(p,q)&=&(-q,p), \\
\tilde{T}_1(p,q)&=&(p,q+p+\frac{1}{2}),
\end{eqnarray*}
in both of which the parameters are considered modulo 1.
For rational $p,q$, with $N$ a common multiple of $2$ and the denominators of $p$ and $q$, it follows immediately from the definitions above that the orbit $\mathcal{O}_{(p,q)}$ of $(p,q)$ under the action of $PSL_2(\mathbb{Z})$ consists of pairs $(p',q')$ with $p',q'\in\mathcal{N}=\{0, \frac{1}{N},\ldots, \frac{N-1}{N} \}$. Namely, we have
\[
\mathcal{O}_{(p,q)} \subset \mathcal{N}^2 \subset [0,1)^2,
\]
and thus $\mathcal{O}_{(p,q)}$ is finite, with any element of $PSL_2(\mathbb{Z})$ acting as a permutation on $\mathcal{O}_{(p,q)}$.
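\smallskip
In practice the orbit is straightforward to generate; the following short Python sketch closes $(p,q)$ under the two maps above using exact rational arithmetic. Inverses need not be added separately, since both maps have finite order on the finite set $\mathcal{N}^2$.
\begin{verbatim}
# Computing the orbit O_{(p,q)} under S~(p,q) = (-q,p) and
# T~_1(p,q) = (p, q+p+1/2), with both entries taken mod 1.
from fractions import Fraction

def S(pt):
    p, q = pt
    return ((-q) % 1, p % 1)

def T1(pt):
    p, q = pt
    return (p % 1, (q + p + Fraction(1, 2)) % 1)

def orbit(p, q):
    seen, todo = set(), [(Fraction(p) % 1, Fraction(q) % 1)]
    while todo:
        pt = todo.pop()
        if pt not in seen:
            seen.add(pt)
            todo += [S(pt), T1(pt)]
    return seen

O = orbit(Fraction(1, 3), 0)
print(len(O), sorted(O))   # finite orbit inside ((1/N)Z/Z)^2, here N = 6
\end{verbatim}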
\begin{theorem} \label{vectorvaluedmodularformThm}
Starting from a pair of rational numbers $(p, q)$, for any non-negative integer $n$, the collection of terms $\tilde a_{2n}[p', q'](i\mu)$ with $(p', q') \in \mathcal{O}_{(p, q)}$ forms a vector-valued modular function of weight 2 for the modular group $PSL_2(\mathbb{Z})$.
\begin{proof}
Since the orbit $\mathcal{O}_{(p,q)}$ is finite in this case, we can arrange the functions $\tilde{a}_{2n}[p',q'](i\mu)$ with $(p',q')\in\mathcal{O}_{(p,q)}$ into a finite column vector $\tilde{A}_{2n}\Big(i\mu;\mathcal{O}_{(p,q)}\Big)$ of some dimension $d$. Since any $M\in PSL_2 (\mathbb{Z})$ acts as a permutation on $\mathcal{O}_{(p,q)}$, we may denote by $\rho: S_d\to GL(d,\mathbb{C})$ the natural permutation representation of $S_d$, which acts on $\tilde{A}_{2n}\Big(i\mu;\mathcal{O}_{(p,q)}\Big)$ by permuting its components in the corresponding way.
\smallskip
From Corollary \ref{alltermstwoparamodularCor} we know that
\begin{align*}
\tilde{A}_{2n}\Big(M(i\mu);\mathcal{O}_{(p,q)}\Big) &= \big(c\cdot i\mu+d\big)^{2}\Big(\tilde{a}_{2n} \big[M(p',q')\big] (i\mu)\Big)_{(p',q')\in\mathcal{O}_{(p,q)}}
\\
&= \big(c\cdot i\mu+d\big)^{2}\rho(M)\,\tilde{A}_{2n}\Big(i\mu;\mathcal{O}_{(p,q)}\Big),
\end{align*}
for any $M\in PSL_2(\mathbb{Z})$. So by definition, $\tilde{A}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big)$ is a vector-valued modular
function of weight 2.
\end{proof}
\end{theorem}
\smallskip
Since the orbit is finite for a rational choice of parameters, by a summation over the
orbit we obtain ordinary modular functions as follows.
\smallskip
\begin{corollary} \label{modularfunctionbysumCor}
For any pair of rational numbers $(p, q)$ and any non-negative integer $n$, the sum
\[
\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big) = \sum_{(p', q')\in \mathcal{O}_{(p,q)}} \tilde{a}_{2n}[p', q'](i\mu)
\]
defines a modular function of weight 2 for the modular group $PSL_2(\mathbb{Z})$.
\begin{proof}
Summing up all the components of the column vector $\tilde{A}_{2n}\left (i\mu;\mathcal{O}_{(p,q)}\right )$ in Theorem \ref{vectorvaluedmodularformThm}, we find that the sum $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big)$ satisfies
\begin{eqnarray*}
\tilde{a}_{2n}\big(M(i\mu);\mathcal{O}_{(p,q)}\big) &=& \sum_{(p',q')\in \mathcal{O}_{(p,q)}} \big(c\cdot i\mu+d\big)^{2}\tilde{a}_{2n} \big[M(p',q')\big] (i\mu)
\\
&=&\sum_{(p',q')\in \mathcal{O}_{(p,q)}} \big(c\cdot i\mu+d\big)^{2}\tilde{a}_{2n} [(p',q')] (i\mu) \\
&=& \big(c\cdot i\mu+d\big)^{2}\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big),
\end{eqnarray*}
for any $M\in PSL_2(\mathbb{Z})$ which acts on the variable $i\mu$ as a M\"obius transformation, and on $(p,q)$ as defined previously. Namely, $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big)$ is a modular function of weight 2 with respect to $PSL_2(\mathbb{Z})$.
\end{proof}
\end{corollary}
\smallskip
\subsection{Connection between $\tilde a_{2n}[p, q]$ and well-known modular forms} Here
we investigate the connection between modular functions of the type constructed in
Corollary \ref{modularfunctionbysumCor} and well-known modular forms. It was seen in this corollary that when the parameters
$(p, q)$ of the metric are rational, each $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big)$ has the standard modular transformation properties of a modular function of weight 2. However, since there
are no non-trivial holomorphic modular forms of weight 2, it is necessary for $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big)$ to have poles in the variable $i\mu$. For a detailed discussion of
holomorphic modular forms, Eisenstein series and some related fundamental results used in this
subsection, one can for example refer to \cite{SerBook}.
\smallskip
By using Corollary \ref{alltermstwoparamodularCor}, in order to find the locations and multiplicities of the zeros and poles of $\tilde{a}_{2n}[p,q](i\mu)$, it is enough for us to investigate those of $\tilde{a}_0[p,q](i\mu)$, and the result applies to all $\tilde{a}_{2n}[p,q](i\mu)$. Recall that
\begin{eqnarray*}
\tilde{a}_0[p,q]&=&4F^{2}w_{1}w_{2}w_{3} \\
&=&-\frac{2}{\pi^2\Lambda^2}\cdot\frac{\vartheta_2^2\vartheta_3^2\vartheta_4^2\vartheta[p,q]\partial_q\vartheta[p,q+\frac{1}{2}]\partial_q\vartheta[p+\frac{1}{2},q+\frac{1}{2}]\partial_q\vartheta[p+\frac{1}{2},q]}{(\partial_q\vartheta[p,q])^4}.
\end{eqnarray*}
\smallskip
Since all the theta functions and theta derivatives are holomorphic for $i\mu\in\mathbb{H}$, the singularities of
\[
\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big) = \sum_{(p', q')\in \mathcal{O}_{(p,q)}} \tilde{a}_{2n}[p', q'](i\mu)
\]
may appear only at the zeros of the function $\partial_q\vartheta[p,q](i\mu)$. In addition, because of the modular properties, it would be enough for us to look for poles in the fundamental domain $\mathbb{H}/PSL_2(\mathbb{Z})$ and at infinity.
\smallskip
Consequently, in principle, we only need to know the locations and multiplicities of the zeros of $\partial_q\vartheta[p,q](i\mu)$ in order to figure out the space of modular forms that can be constructed from $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(p,q)}\big)$ by removing the poles. So we proceed by proving the following lemmas concerning the zeros of $\vartheta[p,q](i\mu)$ and $\partial_q\vartheta[p,q](i\mu)$ when the parameters $p$ and $q$ are both real. In fact, according to Lemma \ref{periodicityinbothparametersLemma}, $p$ and $q$ are defined modulo 1, so in what follows we restrict to $(p,q)\in S=[0,1)^2$.
\smallskip
\begin{lemma} \label{orderofzeroatinfinityLemma1}
\label{ZerosOfThetaAtInf}
Let $v_\infty(g)$ be the order of the zero of a function $g(i\mu)$ at infinity, and denote $v_\infty(\partial_q\vartheta[p,q])$ by $v_\infty[p,q]$ for simplicity. We have
\begin{eqnarray*}
v_\infty(\vartheta_2) = \frac{1}{8}, \qquad v_\infty(\vartheta_3) = v_\infty(\vartheta_4) = 0, \qquad v_\infty(\vartheta[p,q]) = \frac{\langle p\rangle^2}{2},
\end{eqnarray*}
\[
v_\infty(\partial_q\vartheta[p,q])=
\begin{cases}
+\infty & \text{if } p=0, q \in \{ 0, \frac{1}{2} \}, \\
+\infty & \text{if } p=\frac{1}{2}, q=0, \\
\frac{1}{2} & \text{if } p=0, q \not\in \{ 0, \frac{1}{2} \}, \\
\frac{\langle p\rangle^2}{2} & \text{otherwise.}
\end{cases}
\]
where $\langle p\rangle$ is defined as the number $\langle p\rangle\equiv p\;\mathrm{mod}\; 1$ such that $\langle p\rangle\in[-\frac{1}{2},\frac{1}{2})$.
\begin{proof}
These results can be obtained directly by keeping only the leading order terms in the defining formula \eqref{varthetapqEq} and its $q$-derivative.
\end{proof}
\end{lemma}
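\smallskip
The stated values can be spot-checked numerically. The following mpmath sketch assumes the standard theta series for $\vartheta[p,q]$, as in the earlier sketch, and that orders at infinity are measured in the variable $e^{2\pi i\tau}$ (the convention under which the values above match the leading terms of the series); the order then appears as the large-$\mu$ slope $-\log|f(i\mu)|/(2\pi\mu)$.
\begin{verbatim}
# Numerical slope check of orders at infinity; see the assumptions in the
# text.  v_inf(f) is approximated by -log|f(i*mu)| / (2*pi*mu), mu large.
from mpmath import mp, exp, log, pi, fabs

mp.dps = 40

def theta(p, q, tau, N=20):
    return sum(exp(pi*1j*(n+p)**2*tau + 2*pi*1j*(n+p)*q)
               for n in range(-N, N+1))

def dq_theta(p, q, tau, N=20):
    return sum(2*pi*1j*(n+p)*exp(pi*1j*(n+p)**2*tau + 2*pi*1j*(n+p)*q)
               for n in range(-N, N+1))

def v_inf(f, mu=60):
    return -log(fabs(f(1j*mu)))/(2*pi*mu)

print(v_inf(lambda tau: dq_theta(0.3, 0.7, tau)))  # ~ <0.3>^2/2 = 0.045
print(v_inf(lambda tau: dq_theta(0,   0.3, tau)))  # ~ 1/2 (third case)
print(v_inf(lambda tau: theta(0.25, 0.1, tau)))    # ~ <1/4>^2/2 = 0.03125
\end{verbatim}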
\smallskip
The latter can be used to obtain information about the order of zero of $\tilde a_0[p, q]$ at infinity as follows.
\smallskip
\begin{corollary} \label{orderofzeroatinfinityLemma2}
For $(p,q)\not\in \big \{ (0,0),(0,\frac{1}{2}),(\frac{1}{2},0),(\frac{1}{2},\frac{1}{2}) \big \}$ we have
\begin{eqnarray*}
v_\infty(\tilde{a}_0[p,q])=\frac{1}{4}+\langle p+\frac{1}{2}\rangle^2-\langle p\rangle^2=|p-\frac{1}{2}|\ge 0,
\end{eqnarray*}
if $p\neq0$, and
\begin{eqnarray*}
v_\infty(\tilde{a}_0[p,q])=\frac{1}{4}+2\times\frac{1}{8}+\frac{1}{2}-4\times\frac{1}{2}=-1,
\end{eqnarray*}
if $p=0$.
\begin{proof}
Both of the above statements can be proved by the substitution of the identities given in Lemma \ref{orderofzeroatinfinityLemma1} into the formula
\begin{eqnarray*}
v_\infty(\tilde{a}_0[p,q])&=&2v_\infty(\vartheta_2)+2v_\infty(\vartheta_3)+2v_\infty(\vartheta_4)+v_\infty(\vartheta[p,q])+v_\infty[p+\frac{1}{2},q]\\
&&+v_\infty[p+\frac{1}{2},q+\frac{1}{2}]+v_\infty[p,q+\frac{1}{2}]-4v_\infty[p,q].
\end{eqnarray*}
\end{proof}
\end{corollary}
\smallskip
Now we can prove the following statements, which will play a crucial role
in studying the poles of the modular functions that are analysed in detail
in the sequel and related to well-known modular forms.
\smallskip
\begin{lemma}
\label{ZerosOfTheta}
Let $n$ be the number of points in the orbit $\mathcal{O}_{(p,q)}$ where $\mathcal{O}_{(p,q)}\neq\{(\frac{1}{2},\frac{1}{2})\}$ and $\mathcal{O}_{(p,q)}\neq\{(0,0),(\frac{1}{2},0),(0,\frac{1}{2})\}$. Then $n$ is necessarily even, and with $v_\tau[p,q]$ denoting the order of zero of the function $\partial_q\vartheta[p,q](i\mu)$ at $i\mu=\tau$, we have the equation
\begin{eqnarray*}
\sum_{(p',q')\in\mathcal{O}_{(p,q)}}\Big(\frac{1}{2}v_i[p',q'] + \frac{1}{3}v_\rho[p',q'] + \sideset{}{^*}\sum_{P\in\mathbb{H}/PSL_2(\mathbb{Z})} v_P[p',q']\Big) = \frac{n}{12}-\frac{n_0}{2},
\end{eqnarray*}
where $\rho=e^{\frac{2\pi i}{3}}$, $n_0$ is the number of points $(p', q') \in \mathcal{O}_{(p, q)}$ such that $p'=0$, and the starred summation excludes the points $i$ and $\rho$.
\begin{proof}
First notice that if $(p,q)$ and $(-p,-q)$ correspond to the same point in $S=[0,1)^2$ then we have $p\equiv-p\;\mathrm{mod}\;1$ and $q\equiv -q\;\mathrm{mod}\;1$, i.e., $p\equiv 0\;\mathrm{mod}\;\frac{1}{2}$ and $q\equiv 0\;\mathrm{mod}\;\frac{1}{2}$. So in $S$, the only possibility is that
\begin{eqnarray*}
(p,q) \in \Big \{ (0,0),(0,\frac{1}{2}),(\frac{1}{2},0), (\frac{1}{2},\frac{1}{2}) \Big \}.
\end{eqnarray*}
Thus, if $\mathcal{O}_{(p,q)}\neq\{(\frac{1}{2},\frac{1}{2})\}$ and $\mathcal{O}_{(p,q)}\neq\{(0,0),(\frac{1}{2},0),(0,\frac{1}{2})\}$, then $(p,q)$ and $(-p,-q)$ correspond to different points in $S$.
\smallskip
Also if $(p',q')\in\mathcal{O}_{(p,q)}$, then $(-q',p')\in\mathcal{O}_{(p,q)}$ and thus $(-p',-q')\in\mathcal{O}_{(p,q)}$.
So, when $\mathcal{O}_{(p,q)}\neq\{(\frac{1}{2},\frac{1}{2})\}$ and $\mathcal{O}_{(p,q)}\neq\{(0,0),(\frac{1}{2},0),(0,\frac{1}{2})\}$, for any $(p',q')\in\mathcal{O}_{(p,q)}$, it has a partner $(-p',-q')\neq(p',q')$ in $S$ that is also in the orbit. It is evident that only identical points in $S$ can have the same partner, so it follows that in this case $n$ is necessarily even.
\smallskip
Recall from Lemma \ref{transformationsvarthetapq} that
\begin{eqnarray*}
\partial_q^n\vartheta[p,q](i\mu+1) &=& e^{-\pi i p(p+1)}\partial_q^n\vartheta[p,q+p+\frac{1}{2}](i\mu), \\
\vartheta[p,q](-\frac{1}{i\mu}) &=& e^{2\pi ipq}\mu^{\frac{1}{2}}\vartheta[-q,p](i\mu), \\
\partial_q\vartheta[p,q](-\frac{1}{i\mu}) &=& -ie^{2\pi ipq}\mu^{\frac{3}{2}}\partial_q\vartheta[-q,p](i\mu).
\end{eqnarray*}
So we have
\begin{eqnarray*}
\frac{\partial_q\vartheta[p,q](i\mu+1)}{\vartheta[p,q](i\mu+1)} &=& \frac{\partial_q\vartheta[p,q](i\mu)}{\vartheta[p,q](i\mu)}, \\
\frac{\partial_q\vartheta[p,q](-\frac{1}{i\mu})}{\vartheta[p,q](-\frac{1}{i\mu})} &=& -i\mu\frac{\partial_q\vartheta[p,q](i\mu)}{\vartheta[p,q](i\mu)}.
\end{eqnarray*}
Consequently, with $n$ an even number, if we define the function
\begin{eqnarray*}
f(i\mu)=\prod_{(p',q')\in\mathcal{O}_{(p,q)}}\frac{\partial_q\vartheta[p',q'](i\mu)}{\vartheta[p',q'](i\mu)},
\end{eqnarray*}
then we have
\begin{eqnarray*}
f(i\mu+1)&=&f(i\mu),\\
f(-\frac{1}{i\mu})&=&(-1)^n(i\mu)^nf(i\mu)=(i\mu)^nf(i\mu),
\end{eqnarray*}
which shows that $f(i\mu)$ is a modular function of weight $n$. Therefore using the valence formula
we can write
\begin{eqnarray*}
v_\infty(f) + \frac{1}{2}v_i(f) + \frac{1}{3}v_\rho(f) + \sideset{}{^*}\sum_{P\in\mathbb{H}/PSL_2(\mathbb{Z})}v_P(f)
=
\frac{n}{12}.
\end{eqnarray*}
Recall that the zeros of $\vartheta[p,q](i\mu)$ satisfy the equation
\begin{eqnarray*}
\left (p-\frac{1}{2}+m \right )i\mu + q-\frac{1}{2}+k=0
\end{eqnarray*}
for some $m, k\in\mathbb{Z}$. So, for any real $(p,q)\neq(\frac{1}{2},\frac{1}{2})$, $\vartheta[p,q](i\mu)$ does not have any zeros in $\mathbb{H}$. Also, notice that the theta functions and the theta derivatives are all holomorphic in $i\mu\in\mathbb{H}$. So it follows that the function $f(i\mu)$ has no pole away from infinity, and for $P\in\mathbb{H}$ we have exactly $v_P(f)=\sum_{(p',q')\in\mathcal{O}_{(p,q)}}v_P[p',q']$, with $v_P[p',q']\ge 0$ at any $P$ in the fundamental domain. At infinity, we know from Lemma \ref{orderofzeroatinfinityLemma1} that
\begin{eqnarray*}
v_\infty(f)&=&\sum_{(p',q')\in\mathcal{O}_{(p,q)}}\Big(v_\infty[p',q']-v_\infty(\vartheta[p',q'])\Big)\\
&=&\sum_{\substack{(p',q')\in\mathcal{O}_{(p,q)} \\ p'\neq 0 }}\Big(v_\infty[p',q']-v_\infty(\vartheta[p',q'])\Big) \\
&&+\sum_{(0,q')\in\mathcal{O}_{(p,q)}}\Big(v_\infty[0,q']-v_\infty(\vartheta[0,q'])\Big)\\
&=&\sum_{\substack{(p',q')\in\mathcal{O}_{(p,q)}\\ p'\neq 0}}\Big(\frac{\langle p'\rangle^2}{2}-\frac{\langle p'\rangle^2}{2}\Big)+\sum_{(0,q')\in\mathcal{O}_{(p, q)}}\Big(\frac{1}{2}-\frac{\langle 0\rangle^2}{2}\Big)\\
&=&\frac{n_0}{2},
\end{eqnarray*}
where $n_0$ is the number of points in the orbit whose first coordinates are zero, $p'=0$. Finally it follows from the valence formula for $f$ that
\begin{eqnarray*}
\sum_{(p',q')\in\mathcal{O}_{(p,q)}}\Big(\frac{1}{2}v_i[p',q'] + \frac{1}{3}v_\rho[p',q'] + \sideset{}{^*}\sum_{P\in\mathbb{H}/PSL_2(\mathbb{Z})}v_P[p',q']\Big) = \frac{n}{12}-\frac{n_0}{2}.
\end{eqnarray*}
\end{proof}
\end{lemma}
\smallskip
\begin{corollary} \label{nandn0Cor}
Let $n$ and $n_0$ be defined as in Lemma \ref{ZerosOfTheta}. Then, we have $n\ge 6n_0$, and the equality holds if and only if $\partial_q\vartheta[p,q](i\mu)$ has no zeros in the upper-half complex plane for any $(p,q)$ in the orbit.
\begin{proof}
This follows directly from the non-negativity of the orders $v_P[p',q']$: the left-hand side of the equation in Lemma \ref{ZerosOfTheta} is non-negative, so $\frac{n}{12}-\frac{n_0}{2}\ge 0$, and it vanishes precisely when all the $v_P[p',q']$ vanish.
\end{proof}
\end{corollary}
\smallskip
When $n$ is small, one can solve for all the $v_P[p',q']$ by simple arithmetic arguments. For example, when $n=8$ and $n_0=0$, one has
\begin{eqnarray*}
\sum_{(p',q')\in\mathcal{O}_{(p,q)}}\Big(\frac{1}{2}v_i[p',q'] + \frac{1}{3}v_\rho[p',q'] + \sideset{}{^*}\sum_{P\in\mathbb{H}/PSL_2(\mathbb{Z})}v_P[p',q']\Big) = \frac{2}{3}.
\end{eqnarray*}
So, since $\tilde{a}_0[p',q']=\tilde{a}_0[-p',-q']$, we have $v_P[p',q']=v_P[-p',-q']$, where $(p',q')$ and $(-p',-q')$ correspond to different points in $S$. Therefore, the only non-negative solution is the following:
$v_\rho[p_0,q_0]=v_\rho[-p_0,-q_0]=1$ for some $(p_0,q_0)\in \mathcal{O}_{(p,q)}$, and
$v_P[p',q']=0$ for any other $P$ in the fundamental domain and $(p',q')\in\mathcal{O}_{(p,q)}$.
\smallskip
We now work out different examples explicitly. For instance, we look at the following orbit generated by $(p,q)=(0,\frac{1}{3})$:
\begin{eqnarray*}
\mathcal{O}_{(0,\frac{1}{3})}&=&\Big\{(\frac{1}{2},\frac{2}{3}),(\frac{1}{2},\frac{1}{3}),(\frac{5}{6},\frac{2}{3}),(\frac{5}{6},\frac{1}{3}),(\frac{5}{6},0),(\frac{1}{6},\frac{2}{3}),(\frac{1}{6},\frac{1}{3}),(\frac{1}{6},0),(\frac{2}{3},\frac{5}{6}), \\
&& (\frac{2}{3},\frac{2}{3}),
(\frac{2}{3},\frac{1}{2}),(\frac{2}{3},\frac{1}{3}),(\frac{2}{3},\frac{1}{6}),(\frac{2}{3},0),(\frac{1}{3},\frac{5}{6}),(\frac{1}{3},\frac{2}{3}),(\frac{1}{3},\frac{1}{2}),
(\frac{1}{3},\frac{1}{3}),\\
&&(\frac{1}{3},\frac{1}{6}),(\frac{1}{3},0),
(0,\frac{1}{6}),(0,\frac{2}{3}),(0,\frac{5}{6}),(0,\frac{1}{3})\Big\}.
\end{eqnarray*}
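As a check, this orbit can be regenerated programmatically. The Python sketch
below assumes, as the transformation rules of Lemma
\ref{transformationsvarthetapq} suggest, that $\mathcal{O}_{(p,q)}$ is the
closure of $(p,q)$ under $(p,q)\mapsto(-q,p)$ and
$(p,q)\mapsto(p,q+p+\frac{1}{2})$ modulo 1; it reproduces the $24$ points
above together with $n_0=4$, in agreement with $n=24=6n_0$ used below.
\begin{verbatim}
from fractions import Fraction as F

def orbit(p, q):
    # closure under S: (p,q) -> (-q,p) and T: (p,q) -> (p, q+p+1/2), mod 1
    start = (p % 1, q % 1)
    seen, todo = {start}, [start]
    while todo:
        a, b = todo.pop()
        for nxt in (((-b) % 1, a), (a, (b + a + F(1, 2)) % 1)):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

O = orbit(F(0), F(1, 3))
n = len(O)                            # 24
n0 = sum(1 for a, _ in O if a == 0)   # 4
print(n, n0, n == 6 * n0)             # 24 4 True
\end{verbatim}
\smallskip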
For this case we have the following statement.
\smallskip
\begin{theorem} \label{poleatinfinityThmExplicit}
For any non-negative integer $n$, $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ is in the one-dimensional space spanned by
\[
\frac{G_{14}(i\mu)}{\Delta(i\mu)},
\] where $\Delta$ is the modular discriminant (a cusp form of weight 12), and $G_{14}$ is the Eisenstein series of weight $14$.
\begin{proof}
From the orbit above one observes that $n=24=6n_0$ in this case, so Corollary \ref{nandn0Cor} shows that $\partial_q\vartheta[p,q](i\mu)$ has no zeros in the upper-half complex plane for any $(p,q)$ in the orbit. Therefore, $\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ and thus $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ are holomorphic on $\mathbb{H}$ as we have argued previously.
\smallskip
In addition, by either Corollary \ref{orderofzeroatinfinityLemma2} or direct expansion in $Q=e^{-2\pi\mu}$ we know that $\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ has a simple pole at infinity, $i \mu = \infty$, $Q=0$. Since the modular discriminant $\Delta$ has a zero of order $1$ at $i\mu=\infty$, it follows that $\Delta(i\mu)\cdot\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ is a modular function holomorphic on $\mathbb{H}$ as well as at $i\mu=\infty$. Namely, $\Delta(i\mu)\cdot\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ is a modular form of weight $12+2=14$. Since the space of modular forms of weight 14 is a one-dimensional space generated by the Eisenstein series $G_{14}$, the desired result follows.
\smallskip
\end{proof}
\end{theorem}
Indeed, by writing their $Q$-expansions explicitly, we confirm that for $\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$, $\tilde{a}_{2}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ and $\tilde{a}_{4}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big)$ we have:
\begin{eqnarray*}
\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big) &=& \frac{-\frac{4}{3}Q^{-1}+262512Q+\frac{171950080}{3}Q^2+3457199880Q^3+\cdots}{\pi^3\Lambda^2}\\
&=& -\frac{6081075}{\pi^{17}\Lambda^2}\cdot\frac{G_{14}(i\mu)}{\Delta(i\mu)},
\end{eqnarray*}
\begin{eqnarray*}
\tilde{a}_{2}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big) &=& \frac{\frac{4}{3}Q^{-1}-262512Q-\frac{171950080}{3}Q^2-3457199880Q^3+\cdots}{\pi\Lambda}\\
&=& \frac{6081075}{\pi^{15}\Lambda}\cdot\frac{G_{14}(i\mu)}{\Delta(i\mu)},
\end{eqnarray*}
\begin{eqnarray*}
\tilde{a}_{4}\big(i\mu;\mathcal{O}_{(0,\frac{1}{3})}\big) &=& \frac{-\frac{4}{15}Q^{-1}+\frac{87504}{5}Q+\frac{34390016}{9}Q^2+230479992Q^3+\cdots}{\pi^{-1}\Lambda^0}\\
&=& -\frac{405405}{\pi^{13}}\cdot\frac{G_{14}(i\mu)}{\Delta(i\mu)}.
\end{eqnarray*}
\smallskip
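These identifications can be verified with exact arithmetic on truncated
$Q$-series. The Python sketch below assumes the standard normalizations
$E_{14}=1-24\sum_{n\ge1}\sigma_{13}(n)Q^n$ (so that $G_{14}\propto E_{14}$)
and $\Delta=Q\prod_{n\ge1}(1-Q^n)^{24}$, and reproduces the bracketed
$Q$-expansion of $\tilde{a}_{0}$ above up to the overall constant
$-\frac{4}{3}$.
\begin{verbatim}
N = 5  # truncation order in Q

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

d = [1] + [0] * N          # Delta/Q = prod_{n>=1} (1 - Q^n)^24
for n in range(1, N + 1):
    fac = [0] * (N + 1)
    fac[0], fac[n] = 1, -1
    for _ in range(24):
        d = mul(d, fac)

sigma13 = lambda n: sum(k**13 for k in range(1, n + 1) if n % k == 0)
E14 = [1] + [-24 * sigma13(n) for n in range(1, N + 1)]

inv = [1] + [0] * N        # series inverse of Delta/Q
for m in range(1, N + 1):
    inv[m] = -sum(d[j] * inv[m - j] for j in range(1, m + 1))

r = mul(E14, inv)          # coefficients of Q*(E14/Delta)
print([-4 * c / 3 for c in r[:4]])
# [-1.333..., 0.0, 262512.0, 57316693.33...], i.e.
# -4/3*Q^{-1} + 262512*Q + (171950080/3)*Q^2 + ...
\end{verbatim}
\smallskip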
Using this approach and taking advantage of the lemmas proved here, one can prove similar results
for many other orbits $\mathcal{O}_{(p,q)}$.
As another example, we can also look at the following orbit generated by $(p,q)=(\frac{1}{6},\frac{5}{6})$ containing 8 points:
\begin{eqnarray*}
\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}=\Big\{(\frac{1}{2},\frac{1}{6}),(\frac{5}{6},\frac{1}{2}),(\frac{5}{6},\frac{5}{6}),(\frac{5}{6},\frac{1}{6}),(\frac{1}{2},\frac{5}{6}),(\frac{1}{6},\frac{5}{6}),(\frac{1}{6},\frac{1}{2}),(\frac{1}{6},\frac{1}{6})\Big\}.
\end{eqnarray*}
In this case, the statement is as follows, in which $G_6$ denotes the Eisenstein series of weight 6.
\smallskip
\begin{theorem}
For any non-negative integer $n$, $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)$ is in the one-dimensional space spanned by
\[
\frac{\Delta(i\mu)G_{6}(i\mu)}{G_4(i\mu)^4}.
\]
\begin{proof}
Since $\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}$ has 8 points, we know from our analysis following Corollary
\ref{nandn0Cor} that
$v_\rho[p_0,q_0]=v_\rho[-p_0,-q_0]=1$ for some $(p_0,q_0)\in \mathcal{O}_{(\frac{1}{6},\frac{5}{6})}$,
and $v_P[p,q]=0$ for any other $P$ in the fundamental domain and $(p,q)\in \mathcal{O}_{(\frac{1}{6},\frac{5}{6})}$.
\smallskip
One observes from the explicit orbit that $(\pm p_0,\pm q_0)$ cannot be identified with any of $(\pm p_0+\frac{1}{2},\pm q_0),(\pm p_0+\frac{1}{2},\pm q_0+\frac{1}{2})$, or $(\pm p_0,\pm q_0+\frac{1}{2})$ in $S$, so it follows that the simple zero of $\partial_q\vartheta[\pm p_0,\pm q_0](i\mu)$ at $i\mu=\rho$ is not canceled by possible zeros of $\partial_q\vartheta[\pm p_0+\frac{1}{2},\pm q_0],\partial_q\vartheta[\pm p_0+\frac{1}{2},\pm q_0+\frac{1}{2}]$, or $\partial_q\vartheta[\pm p_0,\pm q_0+\frac{1}{2}]$ in the numerator of $\tilde{a}_{0}[\pm p_0,\pm q_0](i\mu)$.\\
As a result, a factor of $(\partial_q\vartheta[\pm p_0,\pm q_0])^4$ in the denominator of $\tilde{a}_{0}[\pm p_0,\pm q_0](i\mu)$ implies that $\tilde{a}_{0}[\pm p_0,\pm q_0](i\mu)$ have poles of order $4$ at $\rho$, and this is their only singularity, also when the point $i\mu=\infty$ is taken into account. Moreover, similar to our argument in
Theorem \ref{poleatinfinityThmExplicit}, this implies that $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)$ is meromorphic on $\mathbb{H}\cup \{ \infty \}$ with only a pole of order $4$ at $i\mu=\rho$.
\smallskip
Recall that the Eisenstein series $G_4(i\mu)$ is a modular form of weight $4$ with a simple zero at $i\mu=\rho$, so the function
\begin{eqnarray*}
\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)\cdot G_4(i\mu)^4
\end{eqnarray*}
is modular of weight $2+4\times4=18$, and holomorphic on $\mathbb{H}\cup \{\infty \}$, so it is in the space of modular forms of weight $18$.
\smallskip
In addition, we know from the explicit $Q$-expansion of $\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)$ that the function $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)$ has a simple zero at $Q=0$, so the function $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)\cdot G_4(i\mu)^4$ has a zero of order $1+4\times 0=1$ at $Q=0$. Namely, $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)\cdot G_4(i\mu)^4$ is a cusp form of weight $18$. Since the space of cusp forms of weight $18$ is generated by $\Delta\cdot G_6$, we see that $\tilde{a}_{2n}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big)$ is contained in the one-dimensional space generated by
\[
\frac{\Delta(i\mu)G_{6}(i\mu)}{G_4(i\mu)^4}.
\]
\end{proof}
\end{theorem}
\smallskip
Indeed, explicit $Q$-expansions in the latter case also confirm that:
\begin{eqnarray*}
\tilde{a}_{0}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big) &=& \frac{-294912Q + 438829056Q^2-315542863872Q^3+\cdots}{\pi^3\Lambda^2}\\
&=&-\frac{114688\pi^7}{3375\Lambda^2}\cdot\frac{\Delta(i\mu)G_{6}(i\mu)}{G_4(i\mu)^4},
\end{eqnarray*}
\begin{eqnarray*}
\tilde{a}_{2}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big) &=& \frac{294912Q - 438829056Q^2+315542863872Q^3+\cdots}{\pi\Lambda}\\
&=&\frac{114688\pi^9}{3375\Lambda}\cdot\frac{\Delta(i\mu)G_{6}(i\mu)}{G_4(i\mu)^4},
\end{eqnarray*}
\begin{eqnarray*}
\tilde{a}_{4}\big(i\mu;\mathcal{O}_{(\frac{1}{6},\frac{5}{6})}\big) &=& \frac{-270336Q + 402259968Q^2-289247625216 Q^3+\cdots}{5\pi^{-1}\Lambda^0}\\
&=&-\frac{315392\pi^{11}}{50625}\cdot\frac{\Delta(i\mu)G_{6}(i\mu)}{G_4(i\mu)^4}.
\end{eqnarray*}
\smallskip
\section{Conclusions}
\label{ConclusionsSec}
\smallskip
The results obtained in this paper present a novel
occurrence in quantum cosmology of modular functions and the
vector-valued modular forms considered in the Eichler-Zagier theory
of Jacobi forms \cite{EicZag}. This was indeed suggested to us by
a combination of two different sources: the existence of an explicit
parametrization of Bianchi IX gravitational instantons in terms of theta
functions with characteristics \cite{BabKor} (see also \cite{Tod, Hit})
and our rationality result about the Seeley-de Witt coefficients
in the asymptotic expansion of the spectral action
for triaxial Bianchi IX metrics \cite{FanFatMar1}. These two
results combined reveal that each Seeley-de Witt coefficient in the
expansion is a rational function, with rational coefficients, in the theta
functions $\vartheta_2, \vartheta_3, \vartheta_4$, $\vartheta[p,q]$,
$\partial_q \vartheta[p,q]$, $e^{i\pi p}$ and their derivatives (the latter
theta functions are written explicitly in Section \ref{InstantonsSec}).
\smallskip
Bianchi IX gravitational instantons are especially interesting since
they admit an explicit parametrization in terms of elliptic modular functions and theta
functions with characteristics \cite{BabKor}, see also \cite{Tod, Oku, Hit}
and references therein. These are obtained by imposing the
self-duality condition on the Weyl tensor of Bianchi IX metrics,
and reducing the corresponding partial differential equations
to well known ordinary differential equations, namely the Halphen system and the
Painlev\'e VI equation. This result is followed by still another crucial step aimed at
making the result an Einstein metric. That is, a correct choice of a time-dependent
conformal factor is essential for making the Ricci tensor proportional to the metric.
This fact is relevant to our present work in the following interesting ways.
On the one hand, we have explained in this paper that a similar rationality result holds for a
general time-dependent conformal perturbation of the triaxial Bianchi IX metric
treated in \cite{FanFatMar1}. The result is proved by employing our method
based on Wodzicki's noncommutative residue \cite{Wod1, Wod2} and the K\"unneth formula.
On the other hand, it is necessary to involve the correct conformal factor in our calculations, in
order to obtain the modular transformation properties that we discussed. These properties
add to the many interesting and special features of the Bianchi IX gravitational instantons.
\smallskip
Modular forms appear in a variety of areas in mathematics and physics.
Since modular forms of a certain weight form a finite dimensional linear space and can be
computed with algorithmic methods, they have a wide range of applications. Thus,
it is of great importance in general to find an explicit way of relating any modular function
or modular form that arises from a mathematical structure or from physical problems to
well-known modular forms, whose Fourier expansion, for example, is known.
We have accomplished this task for the modular functions arising from the spectral action
for Bianchi IX metrics, in this paper, by exploring their intimate connection with modular
forms of weight $14$ and cusp forms of weight $18$, both of which form $1$-dimensional
linear spaces. That is, we have shown that when the two parameters of a gravitational
instanton are rational and belong to one of two general families,
there is a finite orbit of the parameters such that summation
over the orbit leads to the following. In the first case, after multiplication by the cusp form
$\Delta$ of weight 12, each modular function arising from
the Seeley-de Witt coefficient $\tilde a_{2n}$ lands in the space
of modular forms of weight $14$. This indicates that each modular function arising
in this case has only one simple pole, which is located at infinity. In the second
case, after multiplication by $G_4^4$, where $G_4$ is the Eisenstein series of
weight 4, the modular functions arising from the Seeley-de Witt coefficients
land in the 1-dimensional space of cusp forms of weight 18.
\smallskip
In order to illustrate how the present work fits in the general panorama
of other rich arithmetic and number theoretic structures in theoretical
physics,
let us mention the following examples.
A first example is the setting in which Feynman integrals are interpreted
as periods,
see \cite{MarBook} for an overview. In this case, the relevant amplitude
forms
and domains of integration are algebraic over the rationals or integers
and this fact
has direct implications on the class of numbers that arise as periods.
In particular, an interesting connection to modular forms also arises in this
setting \cite{BrSch}.
A second example in which rational coefficients play an important role is
in the
zero temperature KMS states of quantum statistical mechanical systems. For
example,
in the construction in \cite{ConMarLattices}, which is explained also in
Chapter 3 of
\cite{ConMarBook}, an arithmetic algebra of observables over the rationals is
constructed, whose link to modular functions allows one to obtain KMS states
that take their values in the modular field.
There are many occurrences of modular forms in
physics, especially in the context of String Theory. The literature
on the subject is extensive and we cannot mention all the relevant
results here, so we only point the reader to a couple of significant
recent examples, such as \cite{ChDuHa, DaMuZa}. The setting
we considered here is very different, as modular forms arise in the
gravity action functional (the spectral action) of a specific class of
gravitational instantons, rather than in settings such as superstring
amplitudes, or counting functions for BPS states, or mirror symmetry.
There are many other examples in the literature of arithmetic structures
arising in
physics, see for example the contributions collected in the volume
\cite{KirWill}. As it is noticeable from the present work as well, it is
in general a challenging
and promising task to further explore the hidden arithmetic structures in
different areas
of physics, including gravity and quantum cosmology.
\smallskip
\section{Introduction}
The advent of structured metamaterials has allowed the design of new
materials, with an unprecedented amount of control over their intrinsic
properties. These metamaterials are typically composite systems that consist of
two or more ordinary materials that are periodically structured or arranged in
such a manner that the resulting properties differ from those of the constituent
materials. These systems have been widely explored both theoretically and
experimentally, with a plethora of new applications under development
\cite{veselagoUPS67, garlandAIP78, smithPRL00, smithSCIENCE04,husuNL12,
laroucheOC10}. The variety of available fabrication techniques such as
electron-beam lithography \cite{akahaneNATURE03, grigorenkoNATURE05,
balciJSTQE17}, ion milling \cite{gordonPRL04, seniutinasAPA16}, and even
conventional 3D printing \cite{wegenerLAMOM08, shenJO17, mikheevaAOTF18}, allow
for extremely precise designs of structured systems featuring arrays of
inclusions (or holes) with specific shapes. These methods allow the
fabrication of new devices with highly tunable optoelectronic properties
\cite{pendryPRL00, smithPRL00}. A wide variety of applications using
metamaterials have now been developed. Materials can be designed to have a
negative index of refraction \cite{shalaevOL05}; this has been implemented using
periodic noble metal inclusions within a dielectric matrix
\cite{kildishevJOSAB06}. Flat lens-like devices can be fabricated using
metamaterials that can manipulate the propagation of light with sub-wavelength
focusing capabilities \cite{pendryPRL00}; this type of device has been
implemented for cloaking \cite{pendrySCIENCE06, leonhardtSCIENCE06, haoAPL10}
and shielding applications \cite{fengPRL08}. The fabrication of these materials
is not restricted to specific ranges of the electromagnetic spectrum,
permitting, for example,
the development of new devices designed to work in the terahertz regime
\cite{alekseyevAO12, bornEL15, suzukiOE18}.
Metamaterials display a wide variety of optical phenomena \cite{chenNM10}; of
particular interest to us are their nonlinear optical properties.
The nonlinear response is strongly sensitive to
the underlying atomic structure; for second-harmonic generation (SHG), the material
must have a non-centrosymmetric crystalline structure in order to have a strong
dipolar nonlinear response. Structured metamaterials, that can be designed with
almost limitless configurations, make for a promising alternative for nonlinear
optical applications. There have been numerous theoretical \cite{laroucheOC10,
obrienNM15, larouchePRA18} and experimental \cite{shadrivovJOSAB06, fengPRL08,
husuNL12} studies concerning the development of nonlinear devices using
metamaterials. Some examples of nonlinear metamaterials have been fabricated
using split-ring resonators \cite{zharovPRL03, kleinSCIENCE06} and nano-rod
inclusions \cite{marinoLPR18}, producing SHG-active, magnetic, and left-handed
materials. Other inclusions can be intrinsically noncentrosymmetric
\cite{canfieldNL07}, thus creating a strong SHG response. Tailored metamaterials
allow for the possibility to tune the nonlinear optical response
\cite{chenNANOPHOTONICS12, timbrellSR18, barSI18, galantySA18} as a function of
the geometrical configuration. These systems can be varied geometrically,
changing their degree of non-centrosymmetry, thus allowing for the
second-harmonic (SH) signal to be enhanced.
The required physical parameters (namely, the electric permittivity
and magnetic
permeability) that are used for calculating the linear optical
response can be obtained
via a homogenization procedure \cite{smithPRL00, simovskiJO10,
aluPRB11}. The formalism presented in Refs. \onlinecite{mochanPRB85a} and
\onlinecite{mochanPRB85b} is used in this work to describe the macroscopic
linear response of inhomogeneous systems in terms of an average of
certain specific
microscopic response functions of
the system. These quantities can then be used, and the formalism extended,
to calculate the linear and
non-linear optical responses of metamaterials of arbitrary composition
\cite{cortesPSSB10, mochanOE10, perezNJP13, mochanLAOPC14, mendozaPRB16}. In
this work, we explore the nonlinear SH response of a periodic nanostructured
metamaterial composed of an array of holes of non-centrosymmetric
geometry within a matrix made of a centrosymmetric material, for
which we chose silver. In this case, the SH generation from a
homogeneous matrix would be strongly suppressed, but the
noncentrosymmetric geometry of the holes allows a strong signal whose
resonances may be tuned and enhanced through variations of the geometrical
parameters\cite{butetJOSAB30, butetNANO9}. We systematically study the evolution
of the nonlinear susceptibility tensor due to variations in the shape and
position of the holes. Lastly, we elucidate the
origin of the produced SH response by calculating and analyzing the charge
density and polarization field at the metallic surface.
The paper is organized as follows. In Sec. \ref{sec:theory} we present the
theoretical approach used to calculate the dielectric response of the
metamaterial that is then used to obtain the nonlinear SH polarization. In Sec.
\ref{sec:results} we present results for a nanostructured metamaterial
consisting of empty holes within a silver matrix. We explore a variety of
geometric configurations to fine-tune the SH response. Finally, in Sec.
\ref{sec:conc} we present our conclusions.
\section{Theory}\label{sec:theory}
The quadratic polarization forced at the second-harmonic (SH) frequency
$2\omega$ by an inhomogeneous fundamental field $\bm E_\omega$ at frequency
$\omega$ within an isotropic centrosymmetric material system made of
polarizable
entities within the non-retarded regime may be written as \cite{jacksonbook}
\begin{equation}\label{Pf}
\bm P^{f}({2\omega})=n\bm p({2\omega})-\frac{1}{2}\nabla\cdot n\bm
Q({2\omega})
\end{equation}
where $n$ is the number density of polarizable entities, $\bm p({2\omega})$ is
their electric dipole moment, given within the {\em dipolium}
model\cite{mendozaPRB96} by
\begin{equation}\label{p}
\bm
p({2\omega})=-\frac{n}{2e}\alpha(\omega)\alpha(2\omega)\nabla E^2(\omega),
\end{equation}
$\bm Q({2\omega})$ is their electric quadrupole moment, given by
\begin{equation}\label{Q}
\bm Q({2\omega})=\frac{1}{2e}n\alpha^2(\omega)\bm E(\omega)\bm E(\omega),
\end{equation}
and $\alpha(\nu\omega)$ are the linear polarizabilities of each entity at
the fundamental ($\nu=1$) and at the SH ($\nu=2$), related to the dielectric
function $\epsilon(\nu\omega)$ through
\begin{equation}\label{eps}
\epsilon(\nu\omega)=1+4\pi n \alpha(\nu\omega).
\end{equation}
We allow the density $n$, the polarizability $\alpha$, the dielectric response
$\epsilon$ and the field to depend on position. The total polarization induced
at the SH is then
\begin{align}\label{P}
\bm P({2\omega})=&n\alpha(2\omega) \bm E({2\omega}) +\bm
P^f({2\omega})\nonumber\\
=&n\alpha(2\omega) \bm E({2\omega}) - \frac{n}{2e}
\alpha(\omega)\alpha(2\omega) \nabla E^2(\omega)
+\frac{1}{2e}\nabla\cdot n\alpha^2(\omega)\bm E(\omega)\bm E(\omega),
\end{align}
where we added to Eq. (\ref{Pf}) the polarization linearly induced by the
self-consistent electric field $\bm E({2\omega})$ produced by the total
SH polarization $\bm P({2\omega})$.
We want to apply the equations above to obtain the nonlinear susceptibility of a
binary metamaterial consisting of a host made up of some material $A$ in which
inclusions made of a material $B$ are embedded, forming a periodic
lattice. In our actual calculations we will replace material B by vacuum.
We
denote by $\epsilon_\gamma$, $\alpha_\gamma$ and $n_\gamma$ the dielectric
function, polarizability and number density corresponding to material
$\gamma=A,$ $B$. We may describe the geometry of the metamaterial through a
periodic {\em characteristic function} $B(\bm r)=B(\bm r+\bm R)$ which takes the
values 1 or 0, according to whether the position $\bm r$ lies within the region
occupied by material $B$ or $A$, respectively, and where $\bm R$ is a lattice
vector. Thus, we may write the dielectric function as
\begin{equation}\label{epsVsB}
\epsilon(\bm r)=\frac{\epsilon_A}{u}(u-B(\bm r)),
\end{equation}
where we introduced the spectral variable
\begin{equation}\label{u}
u=\frac{1}{1-\epsilon_B/\epsilon_A},
\end{equation}
which takes complex values in general and accounts for the composition of the
materials and for their frequency dependent response.
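For instance, a minimal Python fragment evaluating $u$ for a metallic host
described by a Drude model and vacuum inclusions reads as follows; the plasma
and damping energies below are rough, assumed values for Ag, not fitted data.
\begin{verbatim}
import numpy as np

def u_spectral(eps_A, eps_B=1.0):
    # spectral variable u = 1/(1 - eps_B/eps_A) defined above
    return 1.0 / (1.0 - eps_B / eps_A)

hw = np.linspace(1.0, 4.0, 4)   # photon energies in eV
wp, gamma = 9.0, 0.02           # assumed Drude parameters for Ag, eV
eps_Ag = 1 - wp**2 / (hw * (hw + 1j * gamma))
print(u_spectral(eps_Ag))       # complex u encodes composition and frequency
\end{verbatim}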
In the long wavelength approximation, assuming that the unit cell of the
metamaterial is small compared to the wavelength of light in vacuum and the
wave- or decay-length within each of its components, we may take the electric
field within a single cell as longitudinal $\bm E=\bm E^L$ and we may identify
the longitudinal part $\bm D^L$ of the displacement field $\bm D$ as an {\em
external} field, which therefore has no fluctuations originated in the spatial
texture of the metamaterial, and is thus a macroscopic field $\bm D^L=\bm
D^L_{M}$. Thus, if we excite the system with a longitudinal external field we
may write
\begin{equation}\label{EvsD}
\bm E=(\hat{\bm{\epsilon}}^{LL})^{-1} \bm D^L,
\end{equation}
and
\begin{equation}\label{EMvsDM}
\bm E_{M}=(\hat{\bm{\epsilon}}^{LL}_{M})^{-1} \bm D^L_{M},
\end{equation}
where $\hat{\bm{\epsilon}}^{LL}=\hat {\mathcal P}^L\hat \epsilon\hat {\mathcal
P}^L$ is the longitudinal projection of the dielectric function $\epsilon$
interpreted as a linear operator,
\begin{equation}\label{epsM}
(\hat{\bm{\epsilon}}^{LL}_{M})^{-1}=\left\langle(\hat{\bm{\epsilon}}^{LL})^{-1}\right\rangle,
\end{equation}
is the inverse of the macroscopic longitudinal dielectric operator,
given\cite{mochanPRB85a, mochanPRB85b} by the spatial average,
$\langle\ldots\rangle$, of the {\em microscopic} inverse longitudinal dielectric
operator, and $\hat{\mathcal P}^L$ is the longitudinal projector operator, which
may be represented in reciprocal space by the matrix
\begin{equation}\label{PL}
\mathcal P_{\bm G\bm G'}=\hat{\bm G}\hat{\bm G}\delta_{\bm G\bm G'},
\end{equation}
with $\bm G$ and $\bm G'$ reciprocal vectors of the metamaterial, where
$\delta_{\bm G\bm G'}$ is Kronecker's delta,
\begin{equation}\label{hatG}
\hat{\bm G}=\frac{\bm k+\bm G}{||\bm k+\bm G||}
\end{equation}
a unit vector in the direction of the wavevector $\bm k+\bm G$, and $\bm k$ the
conserved Bloch's vector of the linear field which we interpret as the
relatively small wavevector of the macroscopic field.
From Eq. (\ref{epsVsB}) we may write
\begin{equation}\label{epsLL-1}
(\hat{\bm{\epsilon}}^{LL})^{-1}=\frac{u}{\epsilon_A}(u\hat{\mathcal
P}^L-\hat B^{LL})^{-1},
\end{equation}
in which we may interpret the inverse of the operator within parentheses in
terms of a Green's function,
\begin{equation}\label{Green}
\hat{\mathcal G}(u)=(u-\hat{\mathcal H})^{-1},
\end{equation}
the resolvent of a Hermitian operator $\hat{\mathcal H}$ with matrix elements
\begin{equation}\label{HGG}
\mathcal H_{\bm G\bm G'}=\hat{\bm G}\cdot B(\bm G-\bm G')\hat{\bm G}'
\end{equation}
in reciprocal space, where $B(\bm G-\bm G')$ is the Fourier coefficient of the
periodic characteristic function $B(\bm r)$ with wavevector $(\bm G-\bm G')$.
Notice that $B^{LL}_{\bm G\bm G'}=\hat{\bm G}\mathcal H_{\bm G\bm G'}\hat{\bm
G}'$, $(\bm{\epsilon}^{LL})^{-1}_{\bm G\bm G'}=(u/\epsilon_A)\hat{\bm
G}\hat{\mathcal G}(u)\hat{\bm G'}$, and that
$(\bm{\epsilon}^{LL}_{M})^{-1}=(u/\epsilon_A)\hat{\bm k}\langle\hat{\mathcal
G}(u)\rangle\hat{\bm k}$.
To obtain the macroscopic dielectric response and the microscopic electric field
we proceed as follows. We define a normalized macroscopic state $|0\rangle$ that
represents a longitudinal field propagating with the given small wavevector $\bm
k$ and we act repeatedly on this state with the operator $\hat{\mathcal H}$ to
generate an orthonormal basis set $\{|n\rangle\}$ through
Haydock's\cite{haydock} recursion
\begin{equation}\label{iter}
\hat{\mathcal H}|n\rangle=b_{n+1}|n+1\rangle+a_{n}|n\rangle+b_{n}|n-1\rangle.
\end{equation}
In this basis, $\hat{\mathcal H}$ may be represented by a tridiagonal matrix
with elements
\begin{equation}\label{Hnn}
(\mathcal H_{nn'})=\left(
\begin{array}{ccccc}
a_0 &b_1 & 0& 0 &\cdots\\
b_1 &a_1 &b_2 & 0 &\cdots\\
0 &b_2 &a_2 &b_3 &\cdots\\
0 &0 &b_3 &a_3 &\cdots\\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{array}
\right)
\end{equation}
given by Haydock's coefficients $a_n$ and $b_n$. Thus, the macroscopic inverse
longitudinal response may be obtained as a continued fraction
\cite{mochanOE10,perezNJP13}
\begin{equation}\label{epsMH}
\begin{split}
(\hat{\bm{\epsilon}}^{LL}_{M})^{-1}
&= \hat{\bm k}\hat{\bm k}\frac{u}{\epsilon_A}\langle0|(u-\hat{\mathcal H})^{-1}|0\rangle\\
&= \hat{\bm k}\hat{\bm k}\frac{u}{\epsilon_A}
\frac{1}{u-a_0-
\frac{b_1^2}{u-a_1-
\frac{b_2^2}{u-a_2-
\frac{b_3^2}{\ddots}}}}
\end{split}
\end{equation}
and the microscopic electric field (\ref{EvsD}) may be represented in reciprocal
space by
\begin{equation}\label{field}
E_G=\sum \zeta_n \langle\bm G|n\rangle
\end{equation}
with coefficients $\zeta_n$ obtained by solving the tridiagonal system
\begin{equation}\label{zeta_n}
\sum_{n'} (u\delta_{nn'}-\mathcal H_{nn'}) \zeta_{n'}=\delta_{n0} D^L,
\end{equation}
where we write the fields in real space as
\begin{equation}\label{Dvsk}
\bm D^L(\bm r)=\hat{\bm k} D^L e^{i\bm k\cdot\bm r}
\end{equation}
and
\begin{equation}\label{EvsG}
\bm E(\bm r)=\sum_{\bm G} \hat{\bm G} E_G e^{i(\bm k +\bm G)\cdot\bm r}.
\end{equation}
Notice that the results of the calculation above depend on the direction
$\hat{\bm k}$ chosen as the propagation direction of the external field. As we
may identify
\begin{equation}\label{epsvsk}
(\bm{\hat \epsilon}_{M}^{LL})^{-1}=\frac{\hat{\bm k}\hat{\bm
k}}{\hat{\bm k}.\bm{\hat \epsilon}_{M}^{LL}\cdot\hat{\bm k}},
\end{equation}
all the components of the macroscopic dielectric tensor may be efficiently
obtained from Eq. (\ref{epsMH}) by repeating the calculation of its longitudinal
projection for different propagation directions $\hat{\bm k}$, such as along all
independent combinations $\hat{\bm e}_i+\hat{\bm e}_j$ of pairs of cartesian
directions $\hat{\bm e}_i$ and $\hat{\bm e}_j$ ($i,j=x,$ $y$ or $z$).
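As an illustration of this reconstruction, the short Python fragment below
(a two-dimensional special case, with a made-up symmetric tensor) recovers
the in-plane components of $\bm{\hat\epsilon}_{M}$ from the three scalar
longitudinal responses $\hat{\bm k}\cdot\bm{\hat\epsilon}_{M}\cdot\hat{\bm k}$
computed along $\hat x$, $\hat y$ and $(\hat x+\hat y)/\sqrt2$.
\begin{verbatim}
import numpy as np

def tensor_from_projections(e_x, e_y, e_diag):
    # for a symmetric tensor, k.eps.k along (x+y)/sqrt(2) equals
    # (eps_xx + eps_yy)/2 + eps_xy, so eps_xy follows immediately
    eps_xy = e_diag - 0.5 * (e_x + e_y)
    return np.array([[e_x, eps_xy], [eps_xy, e_y]])

eps = np.array([[2.0 + 0.1j, 0.3], [0.3, 1.5]])   # made-up tensor
proj = lambda k: k @ eps @ k
khat = np.array([1.0, 1.0]) / np.sqrt(2)
print(tensor_from_projections(proj(np.array([1.0, 0.0])),
                              proj(np.array([0.0, 1.0])),
                              proj(khat)))          # recovers eps
\end{verbatim}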
Once we obtain the microscopic field from Eqs. (\ref{field}), (\ref{zeta_n}) and
(\ref{EvsG}), we may substitute it in Eqs. (\ref{Pf})-(\ref{Q}) to obtain the
forced SH polarization, which we may then substitute in Eq. (\ref{P}) to obtain
the self-consistent quadratic polarization in the SH. However, in order to solve
Eq. (\ref{P}) we need the self-consistent SH field, which in the long wavelength
approximation is simply given by the depolarization field
\begin{equation}\label{depol}
\bm E({2\omega})=-4\pi\bm P^L({2\omega})
\end{equation}
produced only by the longitudinal part of the SH polarization. Thus we write Eq.
(\ref{P}) as
\begin{equation}\label{PvsPL}
\bm P({2\omega})=-4\pi n\alpha(2\omega) \bm P^L({2\omega}) +\bm
P^f({2\omega}).
\end{equation}
By taking its longitudinal projection, we obtain a
closed equation for $\bm P^L({2\omega})$ which we solve formally as
\begin{equation}\label{P2L}
\bm P^L({2\omega})=(\hat{\bm\epsilon}^{LL}(2\omega))^{-1}\bm
P^{fL}({2\omega})
\end{equation}
using Eq. (\ref{eps}). Plugging this result back into Eq. (\ref{PvsPL}), we
finally obtain the SH polarization $\bm P(2\omega)$.
In order to perform the operation indicated in Eq. (\ref{P2L}) we perform a
Haydock recursion as in Eq. (\ref{iter}) but using $\bm
P^{fL}{(2\omega)}$ to construct a new initial normalized state
$|\tilde 0\rangle$, with components $\langle \bm G|\tilde 0\rangle$ in
reciprocal space given by
\begin{equation}\label{PfvsG}
\bm P^{fL}_{\bm G}({2\omega})=\hat{\bm G} \langle
\bm G|\tilde 0\rangle f,
\end{equation}
where $f$ is a normalization constant. From this state, we build a new Haydock
orthonormal
basis $|\tilde n\rangle$ using the same procedure as in Eq. (\ref{iter}). Thus,
we write the self-consistent longitudinal SH polarization as
\begin{equation}\label{Plvsr}
\bm P^L({2\omega};\bm r)=\sum_{\bm G} P^L_{\bm G}(2\omega) \hat{\bm G}
e^{i(\bm k +\bm G)\cdot\bm r},
\end{equation}
with
\begin{equation}\label{PLvsn}
P^L_{\bm G}{(2\omega)}=\frac{u_2}{\epsilon_{A2}}\sum_{\tilde n} \xi_{\tilde n} \langle\bm G|\tilde n\rangle
\end{equation}
and with coefficients $\xi_{\tilde n}$ obtained by solving the tridiagonal system
\begin{equation}\label{xin}
\sum_{\tilde n'} (u_{2}\delta_{\tilde n\tilde n'}-\mathcal H_{\tilde n\tilde
n'}) \xi_{\tilde n'}=\delta_{\tilde n\tilde 0} f,
\end{equation}
where $u_{2}$ and $\epsilon_{A2}$ are the spectral variable (\ref{u}) and the
dielectric response $\epsilon_A$ but evaluated at the SH frequency $2\omega$.
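Schematically, and reusing the tridiagonal solver sketched after
Eq. (\ref{EvsG}), the screening step amounts to the following Python
fragment, where \texttt{a2}, \texttt{b2} are the Haydock coefficients
generated from the state $|\tilde 0\rangle$ (the names are ours, not part of
any package).
\begin{verbatim}
def screened_PL(u2, eps_A2, a2, b2, f):
    # xi solves the tridiagonal system at 2*omega; the prefactor
    # u2/eps_A2 converts the solution into the coefficients P^L_G
    xi = zeta(u2, a2, b2, DL=f)
    return (u2 / eps_A2) * xi
\end{verbatim}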
Substitution of $\xi_{\tilde n}$ from Eq. (\ref{xin}) into Eqs. (\ref{PLvsn})
and (\ref{Plvsr}) yields the SH longitudinal polarization, which may then be
substituted into Eq. (\ref{PvsPL}) to obtain the total SH polarization in the
longwavelength limit when the system is excited by a longitudinal external field
along $\hat{\bm k}$. Averaging the result, or equivalently, taking the $\bm G=0$
contribution in reciprocal space, we obtain the macroscopic SH polarization $\bm
P_{M}(2\omega)$ which we write as
\begin{equation}\label{P2M}
\bm P_{M}({2\omega})=\frac{1}{4\pi}(\bm\epsilon_{M}(2\omega)-\bm 1)\bm
E_{M}({2\omega})+\bm P_{M}^f({2\omega}),
\end{equation}
where the first term is the contribution of the linear response at
$2\omega$ to the SH
macroscopic field, and the second term
\begin{equation}\label{P2Mf}
\bm P_{M}^f({2\omega})=\bm \chi^{(2)}_{M}:\bm E_{M}({\omega})\bm E_{M}({\omega})
\end{equation}
is the sought after contribution to the SH macroscopic polarization
forced by the
fundamental macroscopic
electric field, and $\bm \chi^{(2)}_{M}$ is the corresponding SH quadratic
macroscopic susceptibility, given by a third rank tensor. Within our
longwavelength longitudinal calculation the macroscopic field $\bm
E_{M}({2\omega})$ is simply given by the longitudinal depolarization field
\begin{equation}\label{E2MvsP2LM}
\bm E_{M}({2\omega})=\bm E^L_{M}({2\omega})=-4\pi \bm P^L_{M}({2\omega}),
\end{equation}
so that, taking the longitudinal projection of Eq. (\ref{P2M}) we
obtain
\begin{equation}\label{P2ML}
\bm P_{M}^{fL}({2\omega})=\hat{\bm k}\hat{\bm k}\cdot \bm P_{M}^f({2\omega}) =\bm\epsilon_{M}^{LL}(2\omega) \bm P_{M}^{L}({2\omega}).
\end{equation}
Substituting $\bm P_{M}^{fL}({2\omega})$ from Eq. (\ref{P2ML}) into
(\ref{E2MvsP2LM}) and then into (\ref{P2M}) we obtain the macroscopic forced
quadratic SH polarization $\bm P_{M}^f(2\omega)$ produced by a longitudinal
external $\bm D^L$ field pointing along $\hat{\bm k}$. As in the linear case, we
finally repeat the calculation above, for several independent directions of
propagation $\hat{\bm k}$ so that the corresponding Eqs. (\ref{P2Mf}) become a
system of linear equations in the unknown cartesian components
$\chi^{(2)}_{M\,ijk}$ ($i,j,k=x,$ $y,$ or $z$) which we solve to obtain the
third rank second order susceptibility tensor $\bm\chi^{(2)}_{M}$ of the
metamaterial.
In summary, to obtain the quadratic response we first obtain the nonretarded
microscopic field and the macroscopic dielectric tensor using a Haydock's
recursion starting from a macroscopic external longitudinal field, then we use
the dipolium model to obtain the microscopic {\em source} of the SH
polarization,
we screen it using Haydock's scheme again to obtain the {\em full} microscopic
polarization, which we average to obtain the full macroscopic SH polarization.
As this {\em includes} a contribution from the {\em macroscopic SH
depolarization field}, we
subtract it before identifying the quadratic susceptibility tensor projected
onto the longitudinal direction. We repeat the calculation along different
independent directions so that we can extract all the components of the
quadratic susceptibility.
In the process above we assumed that the unit cell of the metamaterial is small
with respect to the wavelength at frequency $\omega$, and thus we introduced a
long-wavelength approximation and assumed the external field and the electric
field to be longitudinal. After obtaining all the components of the macroscopic
response, we should not concern ourselves anymore with the texture of the
metamaterial; the unit cell disappears from any further use we give to the
macroscopic susceptibility. Thus, we can solve any macroscopic SH related
electromagnetic problem using the susceptibility obtained above without using
again the long wavelength approximation. Once we have the full
macroscopic susceptibility tensor we may use it to calculate the
response to transverse as well as longitudinal fields.
Thus, we may use our susceptibility
above to study the generation of electromagnetic waves at the SH from a
propagating fundamental wave, in which case the macroscopic fields
{\em can no
longer} be assumed to be longitudinal.
\section{Results}\label{sec:results}
We present results for a simple geometry in which we can control the degree of
centrosymmetry. To that end, we incorporated the scheme described in the
previous section into the package {\em Photonic} \cite{photonic}, which is a
modular, object oriented system based on the Perl programming language, its
Perl Data Language (PDL) \cite{glazebrook97pdl} extension for efficient
numerical calculations, and the Moose \cite{moose} object system. The package
implements Haydock's recursive procedure to calculate optical properties of
structured metamaterials in the nonretarded as well as in the
retarded regime.
Our system consists of a square array of pairs of holes in the
shape of prisms with a rectangular cross section within a metallic host (Fig.
\ref{fig-1}).
\begin{figure}
\includegraphics[width=0.6\linewidth]{fig-1}
\caption{\label{fig-1}Unit cell of a metamaterial made up of a
horizontal and a
vertical rectangular hole within a conducting matrix. We indicate
the lattice parameter $a$ of the square array, the length
$L_\beta$ and width $W_\beta$ of each rectangle
($\beta=h, v$) and the offset $O$ of the center of the vertical
rectangle with
respect to that of the horizontal one. We indicate the
directions $x$, $y$ of the crystalline axes. The shaded regions
correspond to masks of width $\Delta m$ used to single out the
surface, edge and corner contributions to the SH response. }
\end{figure}
Each rectangle is aligned with one of the crystalline axes $x$, $y$ of the
metamaterial and is characterized by its length $L_h$ or $L_v$ and its width
$W_h$ or $W_v$, where $h$ denotes horizontal (along $x$) and $v$ vertical (along
$y$) alignment. The center of the vertical rectangle is shifted horizontally
with respect to the center of the horizontal rectangle by an offset $O$. Thus,
when $O=0$ our system is centrosymmetric and as $O$ increases it becomes
noncentrosymmetric in varying degrees.
In order to simplify our analysis, we have chosen a system that has mirror
symmetry $y\leftrightarrow -y$.
Thus, the only in-plane non-null components of the SH susceptibility
are\cite{popovbook} $\chi_{xxx}$, $\chi_{xyy}$, and $\chi_{yxy}=\chi_{yyx}$. We
omit the subindex $M$ and the superindex $(2)$ that indicate these are
components of the quadratic macroscopic susceptibility in order to simplify the
notation, as we expect it yields no confusion.
In Fig. \ref{fig-2} we show the spectra
of the magnitude of these non-null components for an Ag host\cite{yangPRB15} and
for different values of the offset $O$. The parameters we used were
$W_h=W_v=a/6$, $L_h=L_v=a/2$, $O=0\ldots a/3$.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{fig-2}
\caption{\label{fig-2} Normalized absolute value of the non-null
components of the SH susceptibility $nea\chi_{ijk}$, with
$ijk=xxx$ (upper left), $xyy$ (upper right), and $yyx=yxy$ (lower
left), for a square
lattice of rectangular holes, as in Fig. \ref{fig-1} within an Ag
matrix,
with geometrical parameters $L_h=L_v=a/2$, $W_h=W_v=a/6$,
for different values of the offset
$O=0\ldots a/3$. The lower right panel displays the geometry
corresponding to the largest offset. Notice that for these cases
the holes overlap.
}
\end{figure}
Notice that when $O=0$ the system is centrosymmetric and there is no SH signal.
As $O$ increases towards $\pm a/3$ the system becomes noncentrosymmetric. Two
resonances become clearly visible and they grow in size as $O$ increases and the
system moves farther away from the centrosymmetric case. The lower energy
resonance of $\chi_{yyx}$ is at a different frequency than those of $\chi_{xxx}$
and $\chi_{xyy}$ and is red shifted as the offset increases. If $O$ increases
beyond $a/3$ (not shown) the two rectangles would cease to overlap and the
quadratic susceptibility would rapidly decay, until $O=a/2$ for which the system
becomes exactly centrosymmetric again and the quadratic susceptibility becomes
exactly null.
According to Fig. \ref{fig-2}, the order of magnitude of the SH
susceptibility is around
$10^2/nea$. For typical noncentrosymmetric materials, such as quartz, the
corresponding order of magnitude is about $1/nea_B$, where $a_B$ is the Bohr
radius\cite{boydbook}. Thus, a centrosymmetric material with a
noncentrosymmetric geometry can achieve susceptibilities of the order of $10^2
a_B/a$ times that of noncentrosymmetric materials. Thus, quadratic
metamaterials made of centrosymmetric materials may be competitive as long as
the lattice parameter is not too large.
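As a rough numerical illustration of this comparison (the lattice parameters
below are arbitrary choices, not values taken from our simulations), the
relative size $10^2 a_B/a$ can be tabulated as follows.
\begin{verbatim}
a_B = 5.29e-11                      # Bohr radius in m
for a in (10e-9, 50e-9, 100e-9):    # hypothetical lattice parameters
    print(f"a = {a*1e9:5.1f} nm -> chi ratio ~ {1e2 * a_B / a:.2f}")
# a =  10.0 nm -> chi ratio ~ 0.53
# a =  50.0 nm -> chi ratio ~ 0.11
# a = 100.0 nm -> chi ratio ~ 0.05
\end{verbatim}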
In order to understand the origin of the structure of the spectra discussed
above, in Fig. \ref{fig-3} we plot the non-null components $\epsilon_{M}^{xx}$
and $\epsilon_{M}^{yy}$ of the macroscopic linear dielectric tensor
$\bm\epsilon_{M}$ of a metamaterial made up of a square lattice of
{\em single}
rectangular holes with a horizontal orientation.
Notice that there is a very weak resonance
close to 3.4\,eV corresponding to polarization along the length of
the rectangle
($x$ direction) and a strong resonance corresponding to polarization along the
width of the rectangle ($y$ direction) at a slightly smaller frequency.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig-3}
\caption{\label{fig-3} Non-null components of the macroscopic
dielectric response, $\epsilon_{M}^{xx}$ and $\epsilon_{M}^{yy}$, of a
metamaterial made up of a square array of horizontally oriented
single rectangular holes with the same dimensions as in
Fig. \ref{fig-2} within an Ag matrix.}
\end{figure}
Although there is a strong linear resonance in the $y$ direction, this system is
centrosymmetrical and would yield no SH signal. When we combine horizontal and
vertical rectangles (Fig. \ref{fig-4}) with a null offset $O=0$ to
make a centrosymmetric array of
crosses, both resonances appear for both polarizations, although they now
interact, partially exchange their strengths and repel so that both become
clearly visible close to 3.4\,eV and 3.2\,eV.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{fig-4}
\caption{\label{fig-4} Non-null components $\epsilon_{M}^{xx}$ and
$\epsilon_{M}^{yy}$ of the macroscopic
dielectric tensor $\bm\epsilon_{M}$ of a
metamaterial made up of a square lattice of pairs of horizontally
and vertically oriented
single rectangular holes within an Ag matrix as in
Fig. \ref{fig-1} with the same parameters as in Fig. \ref{fig-2}
for different
values of the offset $O=0\ldots a/3$.}
\end{figure}
As the offset $O$ increases, there are only small changes to
the spectra corresponding to $\epsilon_{M}^{xx}$, consisting of
changes to the weights of the peaks. However, a new strong mode develops
in the spectra of $\epsilon_{M}^{yy}$. This mode is due to the strong coupling
of a quadrupolar oscillation in the vertical rectangle to the vertical dipolar
oscillation of the horizontal rectangle. This quadrupole may be visualized as a
horizontal polarization in the upper part of the vertical rectangle and a
horizontal polarization in the opposite direction in the lower part of the
rectangle, as illustrated by Fig. \ref{fig-5}. The coupling is symmetry allowed
as for a finite offset $O\ne0$ the system loses the $x\leftrightarrow -x$
symmetry.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{fig-5}
\caption{\label{fig-5}Magnitude (color coded) and direction
(arrows) of the microscopic linear electric field (left) and
induced charge density $\rho$ (right) for a metamaterial made of a square
lattice of rectangular holes within an Ag
matrix with the same parameters as in Fig. \ref{fig-2} with an
offset $O=a/3$, excited by a
macroscopic field along the $y$ (vertical) direction
for $\hbar\omega\approx 3\,eV$ and $\hbar\omega\approx 3.4\,eV$
corresponding to the two peaks
in $\epsilon_{M}^{yy}$ shown in Fig. \ref{fig-4}. The field
and the charge distribution correspond to a vertical polarization
for the horizontal rectangle, a vertical polarization for the
vertical rectangle and a nondiagonal quadrupole with opposite
horizontal polarizations above and below the symmetry plane.
}
\end{figure}
We expect the resonant structure of the quadratic susceptibility to have peaks
corresponding to the resonances of the linear response at the
fundamental and at
the SH frequency. Thus, we expect peaks at the fundamental and at the
subharmonics of those of the linear response. As there is no
structure in the linear response within the region from 1.4\,eV to
1.9\,eV shown in Fig.
\ref{fig-2}, in our system we can only expect structure at the
subharmonics, due
to a resonant excitation of the polarization at the SH frequency. For a
macroscopic field oriented along the cartesian directions $x$ or $y$ the SH
harmonic polarization can only point along the $x$ direction, due to the
$y\leftrightarrow -y$ mirror symmetry of our system. Thus, the subharmonics of
the resonances of $\epsilon_{M}^{xx}$ (Fig. \ref{fig-4}) appear in the
susceptibility components $\chi_{xxx}$ and $\chi_{xyy}$ (Fig. \ref{fig-2}). On
the other hand, a macroscopic field that points along an intermediate direction
between $x$ and $y$ may excite a quadratic polarization along $y$. Thus, the
subharmonics of the resonances of $\epsilon_{M}^{yy}$ (Fig. \ref{fig-4}) appear
in the susceptibility components $\chi_{yxy}=\chi_{yyx}$ (Fig. \ref{fig-2}).
To gain further insight into the nature of the resonances, in Fig. \ref{fig-6}
we show the polarization maps evaluated at the maxima of the SH spectra
corresponding to different directions of the macroscopic linear field, and for
the offset $O=a/3$ that yields the largest signals.
\begin{figure}
\includegraphics[width=0.85\linewidth]{fig-6}
\caption{\label{fig-6}Magnitude and direction of the quadratic
polarization induced in the same system as in Fig. \ref{fig-2}
for the largest
offset $O=a/3$ at the resonant energies
$\hbar\omega=1.62\,eV$ and the fundamental macroscopic field $\bm
E_{M}$ along the direction $\hat x$ (upper left),
$\hbar\omega=1.62\,eV$ and $\bm E_{M}$ along $y$ (upper right) and
for $\hbar\omega=1.5\,eV$ and $\hbar\omega=1.72\,eV$ with
$\bm E_{M}$ along $\hat x+\hat y$ (bottom).
}
\end{figure}
We notice that when the fundamental macroscopic field points along the
$x$ or along the $y$
direction, the magnitude of the SH polarization is symmetric with respect to the
mirror plane, while the $y$ component of the polarization points towards
opposite directions on either side of the mirror plane, yielding a macroscopic
SH polarization along $x$. In these cases, the polarization has maxima near the
four concave vertices of the vertical hole and near the convex
vertex where the horizontal and vertical rectangles meet. On the other
hand, when the fundamental
macroscopic field points along the direction of $\hat x+\hat y$, the resulting
quadratic polarization has no symmetry at all, and it yields a macroscopic SH
polarization that has a $y$ component.
Finally, in Fig. \ref{fig-7} we illustrate the contributions of the surface
region to the total quadratic susceptibility by adding only the contributions
within bands of varying widths $\Delta m$ around the surface.
\begin{figure}
\includegraphics[width=0.5\linewidth]{fig-7}
\caption{\label{fig-7}Contributions to the quadratic susceptibility
$\chi_{xyy}$ of the same system as in Fig. \ref{fig-6} from the
region within a distance $\Delta m$ from the surface, as defined in
Fig. \ref{fig-1} for various values of $\Delta m=a/120, a/60,
a/12$, and the full susceptibility.
}
\end{figure}
We notice that although there is a very strong surface polarization, its
contribution to the macroscopic quadratic susceptibility is relatively
small, as it is confined to a very narrow region and it is partially
cancelled by the polarization at other parts of the surface, so that
for the geometry studied here, most of the SH signal comes from the
bulk of the host.
\section{Conclusions}\label{sec:conc}
We have developed a formalism for the calculation of the second order
susceptibility of structured binary metamaterials formed by a lattice of
particles embedded within a host, for the case where both components
consists of centrosymmetric materials but where the geometry is not
centrosymmetric. Although SH generation is strongly suppressed within a
homogeneous centrosymmetric material, the noncentrosymmetric surface
is capable of sustaining a surface nonlinear polarization and of
inducing a strongly varying linear field, which in turn induces a multipolar
nonlinear polarization within the metamaterial components.
We implemented our formalism using the Haydock recursive scheme
within the {\em Photonic} modular package and
applied it to the calculation of
the second-order nonlinear susceptibility of a structured
metamaterial composed of a homogeneous Ag host with a
lattice of pairs of rectangular holes. By
modifying the geometry of the holes, we modify the degree of
non-centrosymmetry of the material, allowing us to fine-tune both the
peak position and intensity of the SH response. The SH
signal is very sensitive
to changes in the geometrical parameters of the structure.
After establishing the inclusion shape that most enhances this signal,
we analyzed the polarization field and showed that the SH response is
largest at resonance close to the concave and convex corners but it
extends well into the host material. The order of magnitude of the
susceptibility obtained in this calculation is comparable to that of
typical non-centrosymmetric materials.
Although this study was carried out for one particular combination of materials,
the employed procedure is equally valid for calculating the nonlinear properties
for any metamaterial composed of arbitrary materials and inclusions. Only
\emph{a priori} knowledge of the dielectric function of each constituent
material is required. This approach affords the opportunity to quickly and
efficiently study a limitless range of possible metamaterial designs,
with manifold optical applications in mind. Our hope is that this
methodology will prove to be an important tool for future metamaterial design
and fabrication.
\acknowledgments
This work was supported by DGAPA-UNAM under grants IN113016 and
IN111119 (WLM) and
by CONACyT under scholarship 589138 (URM). We acknowledge useful
talks with Raksha Singla and Sean M. Anderson.
\section{\large Introduction}
Multiple testing problems of the sequential or, more generally, multistage nature occur frequently in statistics. For example, in sequential fault detection and diagnosis \citep{Nikiforov95,Lai00}, after detecting that a change in the system has occurred at some time point, the task is to isolate this changepoint to one of $k$ time intervals or diagnose it as one of $k$ change types. Another area rich with examples is sequential clinical trials with multiple endpoints \citep[e.g.,][]{Tang93,Tang99} in which patients are accrued, treated, and evaluated sequentially with regard to~$k$ different features, none of which may have priority. Some recent new areas of applications are quantitative finance, in empirical tests of the profitability of trading strategies~\citep{Romano05b}, and genomics \citep{Ge03}.
Based on data coming from a parametric family $F_\theta$, $\theta\in\Theta$, of distributions, we will be concerned with testing a set of hypotheses $H_1,\ldots,H_k\subseteq\Theta$. A hypothesis $H_i$ is \emph{true} if the true~$\theta$ lies in $H_i$. If $I\subseteq\{1,\ldots,k\}$ is the set of indices of the true hypotheses, then the \emph{family-wise error rate~(FWE)} is defined as the probability
\begin{equation}
\label{eq:FWE}P(\mbox{some $H_i$, $i\in I$, is rejected}).
\end{equation} Other and somewhat less stringent notions of error rate have been proposed \citep[cf.][]{Hochberg87} but we focus here on FWE, the bounding of which is sometimes called \emph{strong error control}, because of its prominence in clinical research \citep{Lehmacher91} and other applications.
A set of hypotheses $H_1,\ldots,H_k$ is called \emph{closed} if the set $\{H_1,\ldots,H_k\}$ is closed under intersection. \citet{Marcus76} introduced a method of testing a closed set of hypotheses $H_1,\ldots, H_k$ that controls the FWE by requiring that there be an $\alpha$-level test of every intersection hypothesis $\cap_{i\in J} H_i$, $J\subseteq \{1,\ldots,k\}$. Let~$\bm{H}$ be the set of all such intersections, and note that closedness of $\{H_1,\ldots,H_k\}$ is equivalent to it being equal to~$\bm{H}$. Beginning with the \emph{global hypothesis} $\cap_{i=1}^k H_i$, Marcus et al.'s procedure tests the non-empty elements of $\bm{H}$ in order of decreasing \emph{dimension} (defined as the maximum number of~$H_i$ being intersected) by their corresponding $\alpha$-level tests, and proceeding by using the rule that $H\in\bm{H}$ is tested if and only if all elements of $\bm{H}$ contained in $H$ are tested and rejected. Multistage extension of closed tests is relatively straightforward because of the nature of the hypotheses and the test statistic. It is essentially a repeated closed testing procedure, performing closed testing of the hypotheses not yet rejected at every stage; see \citet{Tang93} and \citet{Tang99}.
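To fix ideas, the closure principle admits the following brute-force Python
sketch (exponential in $k$; the stepwise procedure of \citet{Marcus76} is a
shortcut that never tests an intersection unless all intersections contained
in it have already been tested and rejected). The function \texttt{pval} is a
hypothetical user-supplied routine returning a valid $p$-value for each
intersection hypothesis.
\begin{verbatim}
from itertools import combinations

def closed_test(pval, k, alpha=0.05):
    # reject H_i iff every intersection hypothesis H_J with i in J
    # is rejected by its own alpha-level test (closure principle)
    reject = []
    for i in range(k):
        others = set(range(k)) - {i}
        supersets = (frozenset(c) | {i}
                     for r in range(k) for c in combinations(others, r))
        reject.append(all(pval(J) <= alpha for J in supersets))
    return reject

p = [0.001, 0.01, 0.04, 0.30]
bonf = lambda J: min(1.0, len(J) * min(p[i] for i in J))
print(closed_test(bonf, k=4))   # [True, True, False, False]
\end{verbatim}
Taking for the intersection $p$-values the Bonferroni combination
$|J|\min_{i\in J}\widehat p_i$, as in the example above, recovers Holm's
procedure discussed below.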
When the set of hypotheses is not closed, Holm's~\citeyearpar{Holm79} step-down procedure is commonly used for fixed sample size problems. Beginning with a brief review of Holm's~\citeyearpar{Holm79} procedure, Section~\ref{sec:SD} then proceeds to provide a multistage extension, which aims to capture the generality of Holm's procedure for controlling the FWE and to be able to take advantage of the closed testing structure when it exists. A simulation study of the proposed procedure's power and expected sample size is given in Section~\ref{sec:power}, and further applications and relation to the existing literature are discussed in Section~\ref{sec:disc}.
\section{\large A Multistage Step-Down Procedure}\label{sec:SD}
\citet{Holm79} proposed the following general step-down method of testing $H_1,\ldots,H_k$ that does not assume closedness. Although Holm's procedure has been criticized for lack of power in some settings, it does control the FWE without making any assumptions about the structure of the hypotheses or the correlations between the individual test statistics, requiring only that for each hypothesis $H_i$ there is a computable $p$-value $\what{p}_i$ such that
\begin{equation}
P(\widehat{p}_i\le \alpha |H_i)\le \alpha\label{eq:unif}
\end{equation}
for all $0<\alpha<1$. The $\alpha$-level Holm's procedure proceeds as follows. Compute and order the $p$-values $\widehat{p}_{i(1)}\le\ldots\le\widehat{p}_{i(k)}$. For $j=1,\ldots, k$, if
\begin{equation}
\widehat{p}_{i(j)}\ge\alpha/(k-j+1),\label{eq:Holmacc}
\end{equation}
then accept $H_{i(j)},\ldots,H_{i(k)}$; otherwise, reject $H_{i(j)}$ and move on to step $j+1$ (if $j<k$). A simple proof that the FWE of Holm's procedure is bounded by $\alpha$ is given in \citet{Lehmann05}.
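In code, Holm's procedure is only a few lines; the following R sketch takes a vector of $p$-values and returns the vector of rejection indicators.
\begin{verbatim}
holm <- function(p, alpha = 0.05) {
  k <- length(p)
  o <- order(p)                    # ranks the p-values in increasing order
  reject <- logical(k)
  for (j in seq_len(k)) {
    if (p[o[j]] >= alpha / (k - j + 1)) break  # accept H_{i(j)},...,H_{i(k)}
    reject[o[j]] <- TRUE                       # reject H_{i(j)} and continue
  }
  reject
}
\end{verbatim}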
As noted above, when closedness exists it is unnecessary to use the step-down correction~(\ref{eq:Holmacc}), or any Bonferroni-type correction for that matter. However, it is illuminating to now consider how the step-down procedure is related to closed testing procedures. When closedness exists, Marcus et al.'s~\citeyearpar{Marcus76} procedure can be viewed as a special case of Holm's step-down procedure in the following sense. Assume that the $p$-values ``respect'' the closedness in the sense that
\begin{equation}
\label{eq:Holmmono}H_i\subseteq H_j\quad \Rightarrow\quad \what{p}_i\le\what{p}_j.
\end{equation} In this case Holm's procedure will test the elements of $\bm{H}=\{H_1,\ldots,H_k\}$ in order of decreasing dimension (provided we agree to use dimension to break any ``ties''). Assuming that closedness and~(\ref{eq:Holmmono}) hold, we make two slight modifications of Holm's procedure to utilize these properties. First, upon rejection of~$H_{i(j)}$, we accept any of the remaining hypotheses whose complements are implied by~$H_{i(j)}$, i.e., accept any~$H_{i(j+1)},\ldots, H_{i(k)}$ containing $H_{i(j)}^c$, the complement of $H_{i(j)}$. This does not change the FWE~$\le\alpha$ bound since~(\ref{eq:Holmmono}) guarantees that the intersection of all true hypotheses, denoted by $G$, is the first true hypothesis tested; if $G$ has instead already been accepted, then all true hypotheses are subsequently accepted according to this rule. Next, note that the Bonferroni-type correction in~(\ref{eq:Holmacc}) is now unnecessary, i.e., the right-hand side of~(\ref{eq:Holmacc}) may be replaced by $\alpha$ while maintaining FWE $\le\alpha$, since, letting~$j_G$ denote the rank of the $p$-value associated with~$G$,
\begin{eqnarray*}
\mbox{FWE}&=&P(\mbox{$G$ rejected})\\
&\le&P(\what{p}_{i(j_G)}< \alpha)\\
&\le& \alpha.
\end{eqnarray*}
We now introduce a multistage generalization of Holm's~\citeyearpar{Holm79} step-down procedure. As with the original version of Holm's procedure, we make no assumptions about the structure of the set of hypotheses $H_1,\ldots, H_k$ to be tested, and the only assumptions are about the family of available tests of the individual hypotheses through their significance levels, following the approach of~\citet{Romano05}. That this procedure satisfies FWE~$\le\alpha$ is proved in Theorem~1.
For each hypothesis $H_i$, assume there is a sequential test statistic $T_{i,n}$, a (non-random) critical value function $C_n(\rho)$ that is non-increasing in $\rho\in(0,1)$, and a set $N$ of possible sample sizes such that
\begin{equation}
\label{eq:pval} \sup_{\theta\in H_i} P_\theta \left(\sup_{n\in N} [T_{i,n}-C_n(\rho)]\ge 0\right)\le\rho
\end{equation} for all $0<\rho<1$. This requirement is the multistage analog of~(\ref{eq:unif}). The existence of such a family of sequential test statistics may at first seem restrictive, but in many settings there are natural choices for the $T_{i,n}$; see the examples in Sections~\ref{sec:power} and \ref{sec:disc}. The use of the set $N$ of possible sample sizes allows for fully sequential (e.g., $N=\{1,2,3,\ldots\}$) or group sequential (e.g., $N=\{m, 2m, 3m, 4m, 5m\}$ for some $m$) sampling, in both the truncated and non-truncated settings.
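For concreteness, a constant (in $n$) boundary satisfying~(\ref{eq:pval}) can be computed by Monte Carlo. The following R sketch does this for a normalized Gaussian random walk monitored at the group-sequential times used in Section~\ref{sec:power}; for a one-sided test of a normal mean the supremum in~(\ref{eq:pval}) is attained on the boundary of the null, so simulating there suffices. The constants here are illustrative only.
\begin{verbatim}
# Monte Carlo choice of a constant boundary C(rho) such that
# P( max_{n in N} S_n/sqrt(n) >= C(rho) ) <= rho under the null
Nset <- c(26, 29, 35)
rho  <- 0.05 / 3              # e.g., alpha/|I_1| with alpha = .05, k = 3
B    <- 1e5
maxstat <- replicate(B, {
  s <- cumsum(rnorm(max(Nset)))          # standard normal increments
  max(s[Nset] / sqrt(Nset))
})
C <- quantile(maxstat, 1 - rho)          # non-increasing in rho, as required
\end{verbatim}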
The $\alpha$-level multistage step-down procedure with no more than $k$ stages is defined as follows. Let $I_1=\{1,\ldots,k\}$, $n_0=0$, and let $|\cdot |$ denote set cardinality. For~$j=1,\ldots, k$:
\begin{enumerate}
\item Sample up to
\begin{equation}
\label{eq:seqss}n_j=\inf\left\{n\in N: n> n_{j-1}\quad\mbox{and}\quad\max_{i\in I_j}T_{i,n}\ge C_{n}(\alpha/|I_j|)\right\}.
\end{equation}
\item Order the test statistics $$T_{i(j,1),n_j}\ge T_{i(j,2),n_j}\ge \ldots\ge T_{i(j, |I_j|),n_j}.$$
\item Reject $H_{i(j,1)},\ldots,H_{i(j,m_j)}$, where
\begin{equation}
m_j=\max\left\{m\ge 1: \min_{1\le \ell\le m}\left[T_{i(j,\ell),n_j}-C_{n_j}\left(\frac{\alpha}{|I_j|-\ell+1}\right)\right]\ge 0\right\}.\label{eq:numrej}
\end{equation}
\item Stop if $j=k$, if $n_j=\sup N$, or if all remaining hypotheses contain the complement of some rejected hypothesis. Otherwise, let $I_{j+1}$ be the indices of the remaining hypotheses and continue on to stage~$j+1$.
\end{enumerate}
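A minimal R sketch of the rejection rule in Steps 2 and 3 is given below; the vector of test statistics at the stopping time $n_j$ and the critical value function are assumed to be supplied, and an empty return value corresponds to no rejections at this stage.
\begin{verbatim}
# Steps 2-3: indices rejected at one stage, given statistics Tstat at
# sample size n, a critical value function Cn(n, rho), and alpha
stage_rejections <- function(Tstat, n, Cn, alpha) {
  k <- length(Tstat)
  o <- order(Tstat, decreasing = TRUE)
  m <- 0
  for (l in seq_len(k)) {
    if (Tstat[o[l]] - Cn(n, alpha / (k - l + 1)) < 0) break
    m <- l                        # all of the first m exceed their cutoffs
  }
  o[seq_len(m)]                   # the m_j rejected indices; integer(0) if none
}
\end{verbatim}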
\noindent\textbf{Theorem 1.} The multistage step-down procedure satisfies FWE $\le\alpha$.
\medskip
\noindent\textbf{Proof.} Let $I$ be the indices of the true hypotheses. If an error occurs, then for some~$I_j\supseteq I$ and index $\ell$ such that $|I_j|-\ell+1\ge|I|$,
$$\max_{i\in I} T_{i,n_j}\ge C_{n_j}\left(\frac{\alpha}{|I_j|-\ell+1}\right)\ge C_{n_j}(\alpha/|I|),$$ which implies that $$ \max_{i\in I} \sup_{n\in N} \left[T_{i,n}-C_{n}(\alpha/|I|)\right]\ge 0.$$ Then, using the Bonferroni inequality and~(\ref{eq:pval}),
\begin{eqnarray*}
\mbox{FWE}&\le&P\left(\max_{i\in I} \sup_{n\in N} \left[T_{i,n}-C_{n}(\alpha/|I|)\right]\ge 0\right)\\
&\le&\sum_{i\in I}P\left(\sup_{n\in N} \left[T_{i,n}-C_{n}(\alpha/|I|)\right]\ge 0\right)\\
&\le&\sum_{i\in I}\alpha/|I|=\alpha.
\end{eqnarray*}\qed
We point out additionally that the procedure may be stopped at Step~4 at any point as long as the remaining hypotheses are accepted, since this action can only serve to decrease the FWE. This feature may be of use in clinical trial applications; see the last paragraph of Section~\ref{sec:clinical}.
Theorem~1 holds regardless of the structure of the $H_i$ or the joint distribution of the test statistics~$T_{i,n}$. When the $H_i$ are closed, the above multistage step-down procedure can be modified slightly to take advantage of this additional structure, analogous to the discussion in the second paragraph of this section for the fixed-sample case. To this end, assume that the set $H_1,\ldots,H_k$ is closed, and that the test statistics respect the closedness in the sense that
\begin{equation}
\label{eq:multmono}H_i\subseteq H_j\quad\Rightarrow\quad T_{i,n}\ge T_{j,n}\quad\mbox{for all $n\in N$.}
\end{equation} As in the fixed-sample case, this guarantees that the hypotheses are analyzed by decreasing dimension. First, we modify the multistage step-down procedure by changing Step~3 to:
\bigskip
3.$'$ For $\ell=1,\ldots,m_j$: Reject $H_{i(j,\ell)}$ and accept any remaining hypotheses containing~$H_{i(j,\ell)}^c$.
\bigskip
\noindent Next, we replace the fractions of $\alpha$ in~(\ref{eq:seqss})-(\ref{eq:numrej}) by $\alpha$. These modifications do not cause violation of FWE~$\le\alpha$ by the same proof given in the second paragraph of this section, proving the following.
\bigskip
\noindent\textbf{Theorem 2.} If the set of hypotheses $H_1,\ldots,H_k$ is closed and the test statistics $T_{i,n}$ satisfy~(\ref{eq:multmono}), then the multistage step-down procedure with Step~3 replaced by~3$'$ and $\alpha$ as the argument of $C_n$ in~(\ref{eq:seqss})-(\ref{eq:numrej}) satisfies FWE~$\le\alpha$.
\section{\large Power and Expected Sample Size}\label{sec:power}
The Holm step-down procedure's attractive quality is its generality, i.e., no assumptions about the structure of the hypotheses $H_1,\ldots,H_k$ or the individual test statistics, other than~(\ref{eq:unif}), are necessary. This generality is provided by the Bonferroni-type adjustment~(\ref{eq:Holmacc}) which also can cause Holm's procedure to be conservative, in terms of FWE and power, relative to procedures that take into account correlations between the individual test statistics. This conservativeness is shared by the multistage step-down procedure because of its use of an analogous step-down rule~(\ref{eq:numrej}). However, as pointed out above, the utility of either the multistage or fixed sample step-down procedure lies in cases where such correlations are difficult to model.
\begin{table}[htdp]
\caption{\;A 3-Endpoint Trial}
\begin{center}
\begin{tabular}{l|cccccc}
Procedure&$(\mu_1,\mu_2,p)$&$EM$&$P(\mbox{rej.~$H_1$})$&$P(\mbox{rej.~$H_2$})$&$P(\mbox{rej.~$H_3$})$&FWE\\
\hline
H&&105&1.7\%&1.7\%&0.8\%&4.0\%\\
Mult&$(0,0,.5)$&104.7&1.6\%&1.6\%&1.8\%&4.9\%\\
MultH&&104.6&1.5\%&1.5\%&2.0\%&4.8\%\\\hline
H&&105&2.3\%&2.3\%&76.0\%&4.4\%\\
Mult&$(0,0,.75)$&98.4&1.7\%&1.7\%&79.9\%&3.2\%\\
MultH&&98.3&2.1\%&2.1\%&80.7\%&4.2\%\\\hline
H&&105&2.5\%&95.7\%&2.1\%&4.4\%\\
Mult&$(0,.65,.5)$&96.9&1.6\%&94.5\%&1.9\%&3.4\%\\
MultH&&96.9&2.2\%&94.6\%&2.9\%&4.9\%\\\hline
H&&105&4.3\%&82.9\%&83.9\%&4.3\%\\
Mult&$(0,.5,.75)$&92.8&1.5\%&76.3\%&80.3\%&1.5\%\\
MultH&&92.3&3.2\%&79.9\%&85.2\%&3.2\%\\\hline
H&&105&83.4\%&83.4\%&3.8\%&3.8\%\\
Mult&$(.5,.5,.5)$&93.5&76.3\%&76.3\%&1.8\%&1.8\%\\
MultH&&93.1&79.6\%&79.6\%&2.7\%&2.7\%\\\hline
H&&105&70.9\%&70.9\%&86.8\%& NA\\
Mult&$(.4,.4,.75)$&89.9&55.3\%&55.3\%&80.2\%& NA\\
MultH&&89.3&64.7\%&64.7\%&85.4\%& NA\\\hline
H&&105&88.6\%&88.6\%&90.0\%&NA\\
Mult&$(.5,.5,.75)$&87.2&76.4\%&76.4\%&80.0\%& NA\\
MultH&&86.1&84.4\%&84.4\%&87.0\%& NA\\\hline
\end{tabular}
\end{center}
\label{table:3end}
\end{table}%
Consider a multistage step-down procedure with maximum sample size $n=\max N$. Relative to the fixed-sample Holm step-down procedure of size $n$, the multistage procedure will have a reduction in expected sample size, provided the set $N$ is chosen reasonably. But, by the Neyman-Pearson lemma, the power of the multistage procedure for rejecting a given false hypothesis cannot exceed the power of the fixed-sample Holm procedure. However, as shown by the following simulation study, this loss in power is usually slight while the savings in expected sample size tends to be substantial.
Consider a three-endpoint clinical trial where two of the endpoints concern continuous data and the third concerns probability of a certain binary outcome. For example, let the data be $\bm{X}_{i}=(X_{i1}, X_{i2}, X_{i3})$, $i=1,2,\ldots$, where for $j=1,2$, the $X_{ji}$ are i.i.d.~normal random variables with unknown mean $\mu_j$ and variance~1, and the $X_{3i}$ are independent Bernoulli random variables where $p=P(X_{3i}=1)$ is unknown. Suppose the $\bm{X}_i$ represent clinical treatment outcomes for three endpoints of interest, and it is desired to test efficacy of the treatment in the form of the three one-sided null hypotheses $$H_1:\mu_1\le 0,\quad H_2:\mu_2\le 0,\quad H_3:p\ge 1/2.$$ In cases such as this, the correlation between the components of $\bm{X}_{i}$ is likely to be unknown or difficult to model. In the following simulation study we compare the performance of the multistage step-down procedure with two other procedures in the three cases of independent, positively correlated, and negatively correlated components of $\bm{X}_i$. Table~1 contains the results for the independent case; the other two are discussed below. Whatever the correlation between the components, the three procedures evaluated are equally applicable since they do not depend on the correlation structure of the individual hypotheses.

For Holm's step-down procedure, we use standard $\alpha=.05$-level likelihood ratio tests of $H_1, H_2, H_3$ of size $n=35$ to have power around 90\% at $(\mu_1,\mu_2,p)=(.5,.5,.75)$. For the multistage step-down procedure, we use the same test statistics in one-sided group sequential tests with $N=\{26,29,35\}$ and use a normal approximation for $\sum_i X_{3i}$ to compute $C_n(\rho)$ to satisfy~(\ref{eq:pval}). Table~1 contains the expected sample size, probability of rejecting each $H_i$, and FWE (when $\cup_{i=1}^3 H_i$ is true) for the Holm procedure (denoted by H) and the multistage step-down procedure (denoted by MultH). To see the effects of the step-down rule~(\ref{eq:seqss})-(\ref{eq:numrej}), we also include the multistage test (denoted by Mult) identical to MultH but with $\alpha$ divided by~$k=3$ in place of the larger fractions of $\alpha$ in~(\ref{eq:seqss})-(\ref{eq:numrej}); see the first paragraph of Section~\ref{sec:k}. Each entry in Table~1 is computed from 50,000 simulation runs. The last six FWE entries are marked NA (not applicable) because none of the null hypotheses are true for those parameter values.

The multistage procedures Mult and MultH show substantial savings in expected sample size over the Holm procedure's fixed sample size of 105 at significant deviations from the ``worst-case'' null $(\mu_1,\mu_2,p)=(0,0,.5)$. The expected sample sizes of Mult and MultH are nearly identical, the former being somewhat larger due to its larger critical values in~(\ref{eq:seqss})-(\ref{eq:numrej}). For the same reason MultH has slightly higher power than Mult; in particular, see the last six rows of Table~1. Although the power of MultH is lower than H due to its multiple looks, this difference is slight, usually within a few percentage points. This relative relationship does not change when the components of $\bm{X}_i$ are correlated; it simply tends to decrease slightly in magnitude when positively correlated, and increase when negatively correlated.
For example, when $(\mu_1,\mu_2,p)=(0,.5,.75)$ and the two normal components have a correlation coefficient of $.75$, $P(\mbox{reject $H_1$})$ increases to 4.5\%, 2.0\%, and 3.6\% for H, Mult, and MultH, respectively, while the power $P(\mbox{reject $H_2$})$ decreases to 80.7\%, 73.4\%, and 77.0\%, respectively. Here Mult and MultH have expected sample size of 94.7 and 94.2.
\section{\large Applications and Discussion}\label{sec:disc}
\subsection{Sequential $k$-Hypothesis Testing}\label{sec:k}
A straightforward application of~(\ref{eq:pval}) to sequential testing of $k$ null hypotheses is to use Bonferroni's inequality so that the $k$ hypotheses can be treated separately by setting~$\rho=\alpha/k$ in~(\ref{eq:pval}). This approach to sequential multiple testing has been taken by many authors. \citet{Paulson64} noticed that further sample size savings might be possible by eliminating (rejecting) some hypotheses during the course of the experiment, similar to the test Mult in the preceding section. We have refined the multistage rejection of hypotheses by using Holm's step-down procedure to sharpen the Bonferroni bounds. In fact, when the set of hypotheses is closed, a slight modification of our multistage test can dispense with Bonferroni bounds, as shown in Theorem~2.
Sequential multiple hypothesis testing dates back to~\citet{Sobel49} in deciding which of three simple hypotheses $H_1: \theta=-d$, $H_2:\theta=0$, or $H_3:\theta=d$ is true about a normal mean $\theta$, for a fixed value $d>0$. This is basically a classification problem. The Sobel-Wald test combines two sequential probability ratio tests~(SPRTs) for different pairs of the three hypotheses, the comparison of $H_1$ versus $H_3$ being superfluous. \citet{Armitage50} generalized the Sobel-Wald problem to $k$~hypotheses, corresponding to an error matrix $\alpha_{ij}=P_i(\what{i}=j)$ for $i\ne j$, where $P_i$ denotes the probability measure under $H_i$ and $\what{i}$ is the hypothesis chosen by the test. Armitage's test combines the corresponding ${k\choose 2}$ SPRTs for the $k$ hypotheses, which leads to stopping boundaries with slope $\pm 1/2$ in the $(n,S_n/d)$-plane, where $S_n=\sum_{i=1}^n X_i$ is the sum of the first $n$ i.i.d. normal observations. \citet{Simons67} considered a generalization of a special case of Armitage's test for $k=3$ in which the stopping boundaries' slopes can be chosen arbitrarily. Whereas the Sobel-Wald, Armitage, and Simons tests stop and decide on a hypothesis $H_j$ when all the component SPRTs simultaneously prefer $H_j$ to all other alternatives, \citet{Lorden72,Lorden76} introduced various multistage tests that decide on $H_j$ only when all other hypotheses can be rejected, based on generalized likelihood ratios. Lorden's work was extended to composite hypotheses by \citet{Pavlov88}. \citet{Eisenberg91} gives a detailed summary of these problems.
\citet{Paulson63} introduced another generalization of the Sobel-Wald test, considering $k\ge 2$ intervals $(-\infty,\theta_1)$, $(\theta_1,\theta_2),\ldots$, $(\theta_{k-1},\infty)$ and testing sequentially to which interval $\theta$ belongs. In its symmetric form, Paulson's test stops the first time the interval $(u_n,v_n)$ is contained in one of the intervals $(-\infty,\theta_1+\delta)$, $(\theta_{k-1}-\delta,\infty)$, or $(\theta_i-\delta,\theta_{i+1}+\delta)$, $i=1,\ldots,k-2$, where $\delta>0$ is a chosen parameter and
\begin{eqnarray}
u_n&=&\max_{1\le m\le n}\wt{u}_m=\max_{1\le m\le n} (S_m/m-\delta/2-A/m),\label{eq:un}\\
v_n&=&\min_{1\le m\le n}\wt{v}_m=\min_{1\le m\le n} (S_m/m+\delta/2+A/m),
\end{eqnarray} in which $A>0$ is a critical value that can be chosen to give desired coverage probability. Paulson's procedure can be viewed as a special case of the multistage step-down procedure, as follows. Defining $H_0^{(i)}: \theta=\theta_i-\delta$ and $H_1^{(i)}: \theta=\theta_i$ ($1\le i\le k-1$), the one-sided SPRT of $H_0^{(i)}$ versus~$H_1^{(i)}$ stops sampling and rejects $H_0^{(i)}$ if
\begin{equation}
S_n-n[(\theta_i-\delta)+\theta_i]/2\ge A,\label{eq:SPRTrr}
\end{equation} for some critical value $A>0$. Dividing both sides of~(\ref{eq:SPRTrr}) by $n$ and rearranging terms gives
$$A/n\le S_n/n-(\theta_i-\delta/2)=S_n/n-\delta/2-(\theta_i-\delta),$$ which by~(\ref{eq:un}) is equivalent to $\wt{u}_n\ge\theta_i-\delta$. Similarly, the one-sided SPRT of $\wt{H}_0^{(i)}: \theta=\theta_i+\delta$ versus $H_1^{(i)}$ stops sampling and rejects $\wt{H}_0^{(i)}$ if $\wt{v}_n\le\theta_i+\delta$. Hence running the multistage step-down procedure until a coherent classification is made is precisely Paulson's procedure. In the small-probability event that no coherent classification is made -- say, if at some stage a hypothesis is rejected containing the complement of a previously rejected hypothesis -- classification based on $S_n/n$ or randomization can be used, as \citet{Paulson63} suggests.
Although we have used $H_0^{(i)}$ and $\wt{H}_0^{(i)}$ to denote null hypotheses for combining SPRTs above, there is no natural notion of a ``null'' hypothesis in the actual classification problem. We could have populated our list of hypotheses to test in any way that suited our needs. Thus it is also perhaps interesting to consider what closed testing has to say about the Sobel-Wald-Paulson problem. In particular, for the case $k=3$ in the Paulson problem, let $H_1:\theta<\theta_1$, $H_2:\theta>\theta_1$, $H_3:\theta<\theta_2$, and $H_4:\theta>\theta_2$. The set $\{H_1,H_2,H_3,H_4\}$ is not closed, but adding $H_5=H_2\cap H_3=(\theta_1,\theta_2)$ to the list of hypotheses completes its closure $\bm{H}=\{H_1,H_2,H_3,H_4,H_5\}$. Since all intersections of dimension~3 or higher are empty, the closed testing principle suggests beginning by testing the hypotheses of dimension~2, namely $H_1=H_1\cap H_3=(-\infty,\theta_1)$, $H_4=H_2\cap H_4=(\theta_2,\infty)$, and $H_5=(\theta_1,\theta_2)$, which is of course the original Paulson problem with $k=3$. Hence it seems that the closed testing principle does not give any new insight into the Paulson problem.
A closely related problem to sequential $k$-hypothesis testing is selecting the one of $k$ normal populations with the largest mean. \citet{Bechhofer54} considered this problem when the variance of the observations is known, and proposed a fixed sample procedure that compares the sample means of the individual populations. For unknown variance, \citet{Bechhofer54b} proposed a two-stage procedure, and \citet{Robbins68} proposed a sequential procedure with improved efficiency, also based on sample means. The special case of when the mean is an integer was considered by \citet{Robbins70}, and later generalized by \citet{McCabe73}. \citet{Robbins70} introduced the notion of ``distinguishability'' of a family of populations, and \citet{Khan73} studied the asymptotic efficiency of stopping rules that distinguish within such families. \citet{Mukhopadhyay83} proposed likelihood-based methods for the largest mean problem, and used Khan's results to show asymptotic efficiency. Likelihood-based methods were shown to be useful in a number of related selection problems as well; see \citet{Mukhopadhyay94}. Further references are given in Section~2 of \citet{Chan05}.
\subsection{Multiple Endpoint Clinical Trials}\label{sec:clinical}
The multistage step-down procedure provides a general method of testing multiple endpoints in clinical trials. The adaptive rejection times~(\ref{eq:seqss}) and rejection rule~(\ref{eq:numrej}) have the effect of adaptively ``dropping'' (i.e., rejecting) hypotheses when enough information has accumulated to do so, to focus on the statistically most interesting endpoints. As discussed above, when closedness exists, closed testing methods should be used since they are in general more powerful than step-down methods (simulations verifying this were conducted by \citet{Lehmacher91} and \citet{Tang97}) because they forgo the need for Bonferroni-type corrections, such as in~(\ref{eq:Holmacc}), (\ref{eq:seqss}), and (\ref{eq:numrej}). The multistage step-down method could be useful in cases where closedness does not exist. For example, in clinical trials for AIDS treatments, it is common \citep[e.g.,][]{Fischl87} to have multiple endpoints of both the continuous and categorical types, like CD4 (T-cell) level, which is commonly modeled as a normal random variable, and the binary indicator of opportunistic infectious disease like a cold, modeled as a Bernoulli random variable. Moreover, if one or some subset of the endpoints is of primary interest, the multistage step-down procedure can be used as a pilot or screening phase with the option of immediately stopping and proceeding to secondary testing when one of the primary hypotheses is rejected. That this does not increase the FWE is pointed out following the proof of Theorem~1. The multistage step-down procedure provides a general framework that can be applied to multiple testing in these clinical trials.
\section*{\large Acknowledgments}
Bartroff's work was supported by the Borchard Foundation and grant DMS-0907241 from the National
Science Foundation. Lai's work was supported by grant DMS-0805879 from the National
Science Foundation.
\section{Background}
In many settings, variables of interest may be too expensive or too impractical to measure precisely on a large cohort. Generalized raking is an important technique for using whole population or full cohort information in the analysis of a subsample with complete data,\citep{deville1992calibration, sarndal2007calibration, breslow2009using} closely related to the augmented inverse probability weighted (AIPW) estimators of Robins and co-workers.\citep{robins1994estimation, firth1998robust, lumley2011connections} Raking estimators use auxiliary data measured on the full cohort to adjust the weights of the Horvitz-Thompson estimator in a manner that leverages the information in the auxiliary data and improves efficiency. The technique is also, and perhaps more commonly, known as ``calibration of weights'', but we will avoid that term here because of the potential confusion with other uses of the word ``calibration''. An obvious competitor to raking is multiple imputation of the non-sampled data.\citep{rubin1996multiple} While multiple imputation was initially used for relatively small amounts of data missing by happenstance, it has more recently been proposed and used for large amounts of data missing by design, such as when certain variables are only measured on a subsample taken from a cohort.\citep{marti2011multiple, keogh2013using, jung2016fitting, seaman2012combining, morris2014tuning}
In this paper we take a different approach. We use multiple imputation to construct new raking estimators that are more efficient than the simple adjustment of the sampling weights \cite{breslow2009using} and compare these estimators to direct use of multiple imputation in a setting where the imputation model may be only mildly misspecified. Our work has connections to the previous literature, where multiple imputation and empirical likelihood are used in the missing data paradigm to construct multiply robust estimators that are consistent if any of a set of imputation models or a set of sampling models are correctly specified.\cite{han2016combining} We differ from this work in assuming known subsampling probabilities, which allows for a complex sampling design from the full cohort, and in evaluating robustness and efficiency under contiguous (local) misspecification following the ``nearly-true models'' paradigm.\cite{lumley2017robustness} Known sampling weights commonly arise in settings, such as retrospective cohort studies using electronic health records (EHR) data, where a validation subset is often constructed to estimate the error structure in variables derived using automated algorithms rather than directly observed. Lumley (2017) \cite{lumley2017robustness} considered the robustness and efficiency trade-off of design-based estimators versus maximum likelihood estimators in the setting of nearly-true models. We build on this work by comparing multiple imputation with the standard raking estimator, and examine to what extent raking that makes use of multiple imputation to construct the auxiliary variable may affect the bias-efficiency trade-off for this setting.
We first introduce the raking framework in Section 2. In Section 3, we describe the proposed raking estimator, which makes use of multiple imputation to construct the potentially optimal raking variable. In Section 4, we compare design-based estimators with standard multiple imputation estimators in two examples using simulation: a classic case-control study and a two-phase study where the linear regression model is of interest and an error-prone surrogate is observed on the full cohort in place of the target variable. For this example, we additionally study the relative performance of regression calibration, a popular method to address covariate measurement error.\citep{carroll2006} In Section 5, we consider the relative performance of multiple imputation versus raking estimators in the National Wilms Tumor Study. We conclude with a discussion of the robustness-efficiency trade-off in the studied settings.
\section{Introduction to raking framework}
Assume a full cohort of size $N$ and a probability subsample of size $n$ with known sampling probability $\pi_i$ for the $i$-th individual. Further, assume we observe an outcome variable $Y$, predictors $Z$, and auxiliary variables $A$ on the whole cohort, and observe predictors $X$ only on the sample. Our goal is to fit a model $P_\theta$ for the distribution of $Y$ given $Z$ and $X$ (but not $A$). Define the indicator variable for being sampled as $R_i$. We assume an asymptotic setting in which as $n\to\infty$, a law of large numbers and central limit theorem exist. In some places we will make the stronger asymptotic assumption that the sequence of cohorts are iid samples from some probability distribution and that the subsamples satisfy $\inf_i \pi_i>0$.\cite{breslow2009using,lumley2011connections,lumley2017robustness}
With full cohort data with complete observations we would solve an estimating equation
\begin{equation}
\sum_{i=1}^N U(Y_i,X_i,Z_i;\theta)=0,
\label{eq-census}
\end{equation}
where $U = U(Y,X,Z;\theta)$ is an estimate of the efficient score or influence function, chosen to give at least locally efficient estimation of $\theta$ with complete data. We write $\tilde\theta_N$ for the resulting estimator with complete data from the full cohort, and assume it converges in probability to some limit $\theta^*$. If the cohort is truly a realization of the model $P_\theta$ we write $\theta_0$ for the true value of $\theta$. We assume $\tilde\theta_N$ would be a locally efficient estimator in the model $P_\theta$ at $\theta_0$, given complete data.
The Horvitz-Thompson-type estimator $\hat\theta_{HT}$ of $\theta$ solves
\begin{equation}
\sum_{i=1}^N \frac{R_i}{\pi_i}U(Y_i,X_i,Z_i;\theta)=0.
\label{eq-ht}
\end{equation}
Under regularity conditions, for example the existence of a central limit theorem and sufficient smoothness for $U$, it is also consistent for $\theta^*$, and thus for $\theta_0$ if $P_\theta$ is correctly specified.
A generalized raking estimator using an auxiliary variable $H=H(Y, Z,A;\eta)$, which may depend on some parameter $\eta$, solves a weighted estimating equation
\begin{equation}
\sum_{i=1}^N \frac{g_iR_i}{\pi_i} U(Y_i,X_i,Z_i;\theta)=0,
\label{eq-aipw}
\end{equation}
where the weight adjustments $g_i$ are chosen to satisfy the calibration constraints
\begin{align}
\sum_{i=1}^N \frac{R_ig_i}{\pi_i} H(Y_i, Z_i,A_i;\eta) = \sum_{i=1}^N H(Y_i, Z_i,A_i;\eta) \label{cal-adj}
\end{align}
while minimizing a distance function $\sum_{i=1}^n d(g_i/\pi_i, 1/\pi_i)$. Lagrange multipliers can be used to construct an iteratively weighted least squares algorithm for computing $g_i$.\cite{deville1992calibration}
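In R, these weight adjustments need not be computed by hand: for two-phase designs, the \texttt{calibrate} function of the \texttt{survey} package implements this iteratively reweighted algorithm. A minimal sketch is below, assuming phase two is a simple random subsample (a stratified design would pass \texttt{strata}); here \texttt{cohort}, \texttt{in.phase2}, \texttt{h1}, \texttt{h2} and the model formula are placeholders.
\begin{verbatim}
library(survey)
des <- twophase(id = list(~1, ~1), subset = ~in.phase2, data = cohort)
cal <- calibrate(des, phase = 2, formula = ~h1 + h2, calfun = "raking")
fit <- svyglm(y ~ x + z, design = cal)  # weighted estimating equation fit
\end{verbatim}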
In standard multiple imputation, we use a model for the distribution of $X$ given $Z$, $Y$ and $A$. For this, we generate $M$ samples from the predictive distribution to produce $M$ imputations $X_i^{(1)},\ldots, X_i^{(M)}$, giving rise to $M$ complete imputed datasets that represent samples from the unknown conditional distribution of the complete data given the observed data. It is now straightforward to solve equation (\ref{eq-census}) for each of the $M$ imputed datasets, giving $M$ values of $\tilde{\theta}_{N,(m)}$ with estimated variances $\tilde{\sigma}_{N,(m)}^2$, $1 \leq m \leq M$. The imputation estimator $\hat\theta_{\mathrm{MI}}$ of $\theta$ is the average of the $\tilde{\theta}_{N,(m)}$, and the variance can be estimated from the variance of the $\tilde{\theta}_{N,(m)}$ and the average of $\tilde{\sigma}_{N,(m)}^2$.\citep{rubin1996multiple}
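The combination step uses Rubin's rules; a sketch, with \texttt{theta\_m} and \texttt{v\_m} holding the $M$ point estimates and their estimated variances:
\begin{verbatim}
theta_bar <- mean(theta_m)        # MI point estimate
W <- mean(v_m)                    # within-imputation variance
B <- var(theta_m)                 # between-imputation variance
V <- W + (1 + 1 / M) * B          # Rubin's total variance estimate
\end{verbatim}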
\section{Imputation for calibration} \label{impute-cal}
\subsection{Estimation}
The optimal function $H_i$ is $E[U_i|Y_i, Z_i, A_i]$, and using this optimal $H_i$ would give the optimal design-consistent estimator of $\theta$.\citep{robins1994estimation} However, the optimal $H_i$ is typically not available explicitly. In practice, one may estimate the optimal function $H_i$ with a single regression imputation $\hat X_i$ of $X_i$, where we first solve
$$\sum_{i=1}^N U(Y_i,\hat X_i,Z_i;\theta)=0,$$
with respect to $\theta$ and then compute $U(Y_i,\hat X_i,Z_i;\theta)$ at the solution.\cite{breslow2009using,rivera2016using} We denote this calibration estimator of $\theta$, based on a single regression imputation, by $\hat\theta_{\mathrm{cal,1}}$.
In this study, we propose a raking estimator using multiple imputation. Specifically, we first solve the sets of equations
$$\sum_{i=1}^N U(Y_i,\hat{X}_i^{(m)},Z_i;\theta)=0,$$
where $\hat{X}_1^{(m)}, \ldots, \hat{X}_N^{(m)}$ are imputed values of $X_i$ for each $m$-th imputation procedure to get multiple estimates $\hat\theta^{(m)}$, $1 \leq m \leq M$. Define $H_i$, for each $1 \leq i \leq N$, as the average of the $M$ resulting $U(Y_i,\hat{X}_i^{(m)},Z_i;\hat\theta^{(m)})$:
\begin{align}
H_i = \frac{1}{M} \sum_{m=1}^M U(Y_i,\hat{X}_i^{(m)},Z_i;\hat\theta^{(m)}). \label{multical-adj}
\end{align}
Finally, we solve \eqref{eq-aipw} with the weight adjustments under the calibration constraint \eqref{cal-adj}, and write the final estimator $\hat{\theta}=\hat\theta_{\mathrm{cal,M}}$ of $\theta$.
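For a logistic outcome model, the construction of $H_i$ in \eqref{multical-adj} can be sketched as follows; \texttt{impute\_once} is a placeholder for one draw from the chosen imputation scheme, and \texttt{dfbeta} serves as a standard empirical approximation to the per-observation influence-function contributions.
\begin{verbatim}
H <- 0
for (m in 1:M) {
  dat$xstar <- impute_once(dat)     # m-th imputed covariate (placeholder)
  fit <- glm(y ~ xstar + z, family = binomial, data = dat)
  H <- H + dfbeta(fit) / M          # running average of influence matrices
}
# the columns of H are then used as the calibration (raking) variables
\end{verbatim}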
\subsection{Efficiency and robustness} \label{efficient-robust}
When all three of the sampling probability, the imputation model, and the regression model are correctly specified, the standard calibration estimator $\hat\theta_{\mathrm{cal,1}}$ gives a way to compute the efficient design-consistent estimator. If we are willing to only assume the regression model and imputation model are correct, there appears to be no motivation for requiring a design-consistent estimator. In this case, the standard multiple imputation estimator $\hat\theta_{\textrm{MI}}$ will also be consistent and typically more efficient than a design-based approach.
If the regression model and the imputation model are correctly specified with all the available variables, it is clear that the empirical average \eqref{multical-adj} over multiple imputations in $H_i$ will converge to the optimal value $E[U_i|Y_i,Z_i, A_i]$ as $M$ and $N$ increase, so that the proposed raking estimator using multiple imputation provides the optimal calibration estimator. However, it is unreasonable in practice to assume that both the regression and imputation models are exactly correct. Recently, in the special case where the full cohort is an iid sample and the subsampling is independent, so-called Poisson sampling, it has been shown that the inverse probability weighting adjusted by multiple imputation attains the semi-parametric efficiency bound for a model that assumes only $E[U_i]=0$ and $E[R_i|Z_i,Y_i,A_i]=\pi_i$,\cite{han2016combining} where the proposed estimator $\hat\theta_{\mathrm{cal,M}}$ also solves a weighted estimating equation \eqref{eq-aipw} subject to the calibration constraints \eqref{cal-adj} computed by multiple imputation.
In this paper, we argue one step further that the interesting questions of robustness and efficiency arise when the imputation model and potentially also the regression model are slightly misspecified. Under what conditions are $\|\hat\theta_{\mathrm{cal,M}}-\theta^*\|_2^2$ and $\|\hat\theta_{\mathrm{MI}}-\theta^*\|_2^2$ comparable, and do these correspond to plausible misspecifications of the regression model, the imputation model, or both? These questions were considered in a more abstract context by Lumley (2017)\cite{lumley2017robustness}, where the model is only nearly-true such that
$$\sqrt{n}(\hat\theta_{\mathrm{cal,M}}-\theta^*){\rightsquigarrow} N(0,\sigma^2+\omega^2)$$
and
$$\sqrt{n}(\hat\theta_{\mathrm{MI}}-\theta^*){\rightsquigarrow} N(\kappa\rho\omega,\sigma^2).$$
In the above equations, $\kappa$ is the limit of the Kullback--Leibler divergence between the true model $P_n$ and the outcome model $Q_n$, defined as the sequence of misspecified distributions chosen to be contiguous to the true model. We assume $\kappa$ is bounded. $\rho$ is the asymptotic correlation between the log-likelihood ratio of the two distributions, $P_n$ and $Q_n$, and the difference in influence functions for $\hat\theta_{\mathrm{cal,M}}$ and $\hat\theta_{\mathrm{MI}}$ under $P_n$ and $Q_n$, respectively. That is, the ``nearly-true'' models are defined by a sequence of outcome models such that one may not reliably reject misspecification, even using the most powerful test against the true data-generating distribution. In simple but common cases, including the case-control design study and the linear regression analysis in the two-phase study, the model misspecification may neutralize the advantage of the standard multiple imputation. \cite{lumley2017robustness} Indeed the mean-squared error of $\hat\theta_{\mathrm{MI}}$ will be asymptotically larger than that for $\hat\theta_{\mathrm{cal,M}}$ whenever $|\kappa \rho|>1$.\cite{lumley2017robustness} We study the relative numerical performance of these two estimators and other standard competitors under the nearly-true model setting in the next section.
\section{Simulations} \label{sec-sim}
In this section we are interested in three questions: how much precision is gained by multiple versus single imputation in raking, whether imputation models can maintain an efficiency advantage while being more robust, and how these affect the efficiency-robustness trade-off between weighted and imputation estimators. Source code in R for these simulations is available at \url{https://github.com/kyungheehan/calib-mi}.
\subsection{Case-control study}\label{sim1}
We first demonstrate numerical performance of multiple imputation for the case-control study where calibration is not available but the maximum likelihood estimator can be easily computed. Let $X$ be a standard normal random variable and $Y$ be a binary response taking values in $\{0,1\}$ such that for a given $X=x$ the associated logistic model is given by
\begin{align}
\textrm{logit}\,\mathbb{P}(Y=1 | X=x) = \alpha_0 + \beta_0 x + \delta_0(x-\xi) \mathbb{I}(x > \xi) \label{true1}
\end{align}
for some fixed $\delta_0$ and $\xi$, and $\textrm{logit}(p) = \log \big( \frac{p}{1-p} \big)$ for $0 < p < 1$. In accordance with the usual case-control study design, we assume $Y$ is known for everyone, but $X$ is available with sampling probability of 1 when $Y=1$ and a lower sampling probability when $Y=0$. To be specific, we first generate a full cohort $\mathcal{X}_N = \{ (Y_i, X_i) : 1 \leq i \leq N \}$ following the true model \eqref{true1} and denote the index set of all the $n$-case subjects in $\mathcal{X}_N$ by $S_1 \subset \{ 1, \ldots, N \}$, $n < N$. Thus, $Y_i=1$ if $i \in S_1$, otherwise $Y_i = 0$. Then a balanced case-control design is employed, which consists of observing $(Y_i, X_i)$ for all the subjects in $S_1$ and a randomly chosen $n$-subsample $S_0$ from $\{1, \ldots, N \} \setminus S_1$. For cohort members $\{1, \ldots, N \} \setminus (S_0 \cup S_1)$, only $Y_i$ is observed. Define $\mathcal{X}^\ast_n = \{ (Y_i, X_i) : i \in S_0 \cup S_1 \}$.
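A short R sketch of this data-generating design, with $\delta_0=2$ as an illustrative value and the remaining constants as in the simulations reported below:
\begin{verbatim}
N <- 1e4; a0 <- -5; b0 <- 1; d0 <- 2; xi <- 1.8
x   <- rnorm(N)
eta <- a0 + b0 * x + d0 * pmax(x - xi, 0)   # (x - xi) * I(x > xi)
y   <- rbinom(N, 1, plogis(eta))
S1  <- which(y == 1)                        # all cases are sampled
S0  <- sample(which(y == 0), length(S1))    # balanced control sample
\end{verbatim}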
We examine the sensitivity of the multiple imputation approach in the setting of nearly-true models.\citep{lumley2017robustness} For a practical definition of a nearly-true model, we consider a working model that may not be reliably rejected, even when using the oracle test statistic of the likelihood ratio with the true model \eqref{true1} used to generate the data as the null. In other words, instead of fitting the true model \eqref{true1}, we employ a simpler outcome model
\begin{align}
\textrm{logit} \, \mathbb{P}(Y=1 | X=x) = \alpha + \beta x.
\label{nearly-true1}
\end{align}
We note that when $\delta_0=0$ the working model \eqref{nearly-true1} is correctly specified, but misspecified when $\delta_0\neq 0$. It is worth mentioning that the single-knot linear spline logistic model \eqref{true1} with $\alpha_0 = -5$, $\beta_0 = 1$ and $\xi \approx 1.8$ is the least favorable misspecification of \eqref{nearly-true1}, in that it maximizes the correlation between the most powerful test for rejecting the model misspecification and the bias of the misspecified maximum likelihood estimator.\citep{lumley2017robustness} In this case, the maximum likelihood estimator of \eqref{nearly-true1} is the unweighted logistic regression \citep{prentice1979logistic} based on the complete-case analysis with $\mathcal{X}_n^\ast$ only.
Four different methods are compared in our example for estimating the nearly-true slope $\beta$ in \eqref{nearly-true1}; (i) the maximum likelihood estimation (MLE), (ii) a design-based inverse probability weighting (IPW) approach, (iii) a multiple imputation with a parametric imputation model (MI-P) and (iv) a multiple imputation with non-parametric imputation based on bootstrap resampling (MI-B). Formally, the parametric MI (MI-P) imputes covariates $X_i$, $i \not\in S_0\cup S_1$, from a parametric model such that $X|Y=y$ is assumed to be distributed as $N(\mu + \eta y, \sigma^2)$, where $\mu = \mathbb{E}(X | Y=0)$, $\eta = \mathbb{E}(X | Y=1) - \mu$, and $\sigma^2 = \mathbb{V}\text{ar}(X)$. Here, the parameters $\mu$, $\eta$ and $\sigma^2$ are estimated from $\mathcal{X}_n^\ast$. On the other hand, the bootstrap method (MI-B) resamples covariates $X_i$, $i \not\in S_0\cup S_1$, from the empirical distribution of $X$ given $Y=0$. We note that MLE only utilizes the sub-cohort information $\mathcal{X}_n^\ast$ but the other estimators additionally use response observations $\{Y_i : i \not\in S_0 \cup S_1\}$ so that efficiency gains can be expected for estimating the nearly-true slope $\beta$, depending on the level of model misspecification.
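For instance, one MI-P draw can be sketched as below, where \texttt{x2} and \texttt{y2} denote the phase-two data and \texttt{n.mis} the number of unsampled subjects, all of whom are controls so the imputation mean is $\mu$; for brevity the sketch ignores parameter uncertainty, whereas a proper MI would also draw $(\mu,\eta,\sigma^2)$ from their posterior.
\begin{verbatim}
mu  <- mean(x2[y2 == 0])
eta <- mean(x2[y2 == 1]) - mu
s2  <- var(x2)                       # sigma^2 = Var(X), as parameterized above
x.mis <- rnorm(n.mis, mu, sqrt(s2))  # unsampled subjects all have y = 0
\end{verbatim}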
Using Monte Carlo iterations, we summarized the empirical performance of the four different estimators based on fitting the nearly-true model \eqref{nearly-true1} with the mean squared error (MSE) of the target parameter $\beta$,
\begin{align}
\textrm{MSE}(\hat{\beta}) = \frac{1}{K} \sum_{k=1}^K \big( \hat{\beta}^{[k]} - \beta \big)^2 \label{mse},
\end{align}
where $\hat{\beta}^{[k]}$ is the estimate of $\beta$ from the $k$-th Monte Carlo replication, $1 \leq k \leq K$. Similarly the empirical bias-variance decomposition,
\begin{align}
\textrm{Bias}(\hat{\beta}) = \textrm{E}[\hat{\beta}] - \beta \quad \textrm{and}\quad \textrm{Var}(\hat{\beta}) = \frac{1}{K} \sum_{k=1}^K \Big( \hat{\beta}^{[k]} - \textrm{E}[\hat{\beta}] \Big)^2, \label{bias-var}
\end{align}
was also reported to compare precision and efficiency, where $\textrm{E}[\hat{\beta}] = K^{-1} \sum_{k=1}^K \hat{\beta}^{[k]}$. For all simulations, we fixed $\beta=1$, $\alpha_0=-5$, $\xi=1.8$, $N=10^4$, and the number of cases was around $n=110$ on average. We used $M=100$ multiple imputations and $K=1000$ Monte Carlo simulations. Results are provided in Table \ref{table1}.
Table \ref{table1} demonstrates two principles. First, the parametric MI (MI-P) estimator closely matches the maximum likelihood estimator, while the resampling (MI-B) estimator closely matches the design-based estimator. Second, and more importantly, the design-based estimator is less efficient than the maximum likelihood estimator when the model is correctly specified, but has lower mean squared error when $\delta_0$ is greater than about $1.6$. In this case, even the most powerful one-sided test of the null $\delta_0=0$ against the alternative model \eqref{true1} would have power less than approximately $0.5$, so that any model diagnostic used in a practical setting would have lower power. Figure 1 shows the relative efficiency of the methods as a function of the level of misspecification. In summary, we conclude that the efficiency gain of the model-based analysis is not robust even to mild forms of misspecification that would not be detectable in practical settings.
\subsection{Linear regression with continuous surrogate}\label{sim2}
We now evaluate the performance of the multiple imputation raking estimator in a two-phase sampling design. Let $Y$ be a continuous response associated with covariates $X=x$ and $Z=z$ such that
\begin{align}
\mathbb{E}(Y | X=x, Z=z)= \alpha_0 + \beta_0 x + \delta_0 x \cdot \mathbb{I}(|z| > \zeta_0), \label{true2}
\end{align}
for some fixed $\delta_0$ and $\zeta_0 = F_Z^{-1}(0.95)$, where $\mathbb{V}ar(Y|X,Z)=1$, $X$ is a standard normal random variable, $Z$ is a continuous surrogate of $X$ and $F_Z^{-1}$ is the inverse cumulative distribution function for $Z$. Similarly to the simulation study in Section \ref{sim1}, instead of the true model \eqref{true2}, which generally will not be known in a real data setting, we are interested in the typical linear regression analysis with an outcome model
\begin{align}
\mathbb{E}(Y | X=x) = \alpha + \beta x. \label{nearly-true2}
\end{align}
Two different scenarios of the surrogate variable $Z$ are considered such that (a) $Z = X + \varepsilon$ for $\varepsilon \sim N(0,1)$ and (b) $Z= \eta X$ for $\eta \sim \Gamma(4,4)$, which represent additive and multiplicative error, respectively. In the first phase of sampling, we assume that outcomes $Y$ and auxiliary variables $Z$ are known for everyone, whereas covariate measurements of $X$ are available only at the second stage. The sampling for the second phase will be stratified on $Z$. Specifically, we will observe $X_i$ for all individuals if $|Z_i| > \zeta_0$; otherwise $5\%$ of subjects in the intermediate stratum $|Z_i| \leq \zeta_0$ are randomly sampled, where $1 \leq i \leq N$. We write $S_2 \subset \{1, \ldots, N \}$ for the index set of subjects collected in the second phase, so that $\mathcal{X}_I = \{ (Y_i, Z_i) : 1 \leq i \leq N \}$ and $\mathcal{X}_{II} = \{ (Y_i, X_i, Z_i) : i \in S_2 \}$ denote the first and second stage samples, respectively.
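The additive-error scenario (a) and the stratified second phase can be generated as in the following sketch, with $\delta_0=0.2$ as an illustrative value; with $Z\sim N(0,2)$ the cutoff is $\zeta_0=F_Z^{-1}(0.95)\approx 2.33$, matching the value quoted below.
\begin{verbatim}
N <- 5000; a0 <- 0; b0 <- 1; d0 <- 0.2
x <- rnorm(N)
z <- x + rnorm(N)                        # additive-error surrogate
zeta <- qnorm(0.95, sd = sqrt(2))        # F_Z^{-1}(0.95) for scenario (a)
y <- a0 + b0 * x + d0 * x * (abs(z) > zeta) + rnorm(N)   # Var(Y|X,Z) = 1
in2 <- abs(z) > zeta | runif(N) < 0.05   # phase-two sampling indicator
\end{verbatim}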
We compare five different methods of estimating the nearly-true parameter $\beta$: (i) maximum likelihood estimation (MLE), (ii) a standard generalized raking estimation using the auxiliary variable, (iii) regression calibration (RC), a single imputation method that imputes the missing covariate $X$ with an estimate of $\mathbb{E}[X|Z]$,\citep{carroll2006} (iv) multiple imputation without raking (MI), and (v) the proposed approach combining raking and the multiple imputation (MIR). We note that when $Y$ is Gaussian, the semi-parametric efficient maximum likelihood estimator of $\beta$ is available in the \texttt{missreg3} package in R,\citep{wild2013missreg3} using the stratification information.\cite{scott2006calculating} We employ this for the MLE (i).
For the standard raking method (ii), we construct a design-based efficient estimator \citep{breslow2009using} as below:
\begin{itemize}
\item[R1.] Find a single imputation model $X = a + b Y + c Z + \epsilon$, where $\epsilon \sim N(0,\tau^2)$ based on the second phase sample $\mathcal{X}_{II}$.
\item[R2.] Fit the nearly-true model \eqref{nearly-true2} using $(Y_i, \hat{X}_i)$ for $1 \leq i \leq N$, where $\hat{X}_i$ are fully imputed from (R1).
\item[R3.] Calibrate sampling weights for raking using the influence function induced from the nearly-true fits in (R2).
\item[R4.] Fit the design-based estimator of the nearly-true model \eqref{nearly-true2} with the second phase sample $\mathcal{X}_{II}$ and calibrated sampling weights from (R3).
\end{itemize}
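A compact R sketch of (R1)-(R3), with \texttt{phase2} and \texttt{cohort} as placeholder data frames; step (R4) then proceeds as in Section 2, for example via \texttt{survey::calibrate}.
\begin{verbatim}
imp  <- lm(x ~ y + z, data = phase2)            # R1: imputation model
cohort$xhat <- predict(imp, newdata = cohort)   # impute X for everyone
fit0 <- lm(y ~ xhat, data = cohort)             # R2: nearly-true fit
H    <- dfbeta(fit0)                            # R3: raking variables
\end{verbatim}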
For the conventional regression calibration approach (iii), we simply fit a linear model regressing $X_i$ on $Z_i$ for $i \in S_2$ and then impute missing observations $\hat{X}_i$ in the first phase so that the nearly-true model \eqref{nearly-true2} is evaluated using $\{ (Y_i, \hat{X}_i) : i \not\in S_2\}$ and $\{ (Y_i, X_i) : i \in S_2 \}$.
We consider two resampling techniques for the multiple imputation method (iv): the wild bootstrap \citep{cao1991rate, mammen1993bootstrap,hardle1993comparing} and a Bayesian approach with a non-informative prior. Note that the wild bootstrap gives consistent estimates in settings where Efron's conventional bootstrap fails, such as under heteroscedasticity and in high-dimensional settings. We refer to Appendix \ref{App-cal} for implementation details of multiple imputation with the wild bootstrap and a parametric Bayesian resampling. We now illustrate the proposed method that calibrates sampling weights using multiple imputation.
\begin{itemize}
\item[M1.] Resample $\hat{X}_i^\ast$ independently for all $1 \leq i \leq N$ by using either the wild bootstrap or the parametric Bayesian resampling.
\item[M2.] Fit the nearly-true model \eqref{nearly-true2} based on a resample $\{ (Y_i, \hat{X}_i^\ast) : 1 \leq i \leq N\}$.
\item[M3.] Repeat (M1) and (M2) in multiple times, and take the average of influence functions, induced by the nearly-true models fitted in (M2).
\item[M4.] Calibrate sampling weights using the average influence function as auxiliary information.
\item[M5.] Fit the design-based estimator of the nearly-true model \eqref{nearly-true2} with the second phase sample $\mathcal{X}_{II}$ and calibrated sampling weights obtained from (M4).
\end{itemize}
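One wild-bootstrap draw in (M1) can be sketched as follows; since unsampled units have no residuals of their own, residuals from the phase-two imputation fit are resampled and multiplied by Rademacher signs (one common choice of multiplier; Mammen's two-point distribution is another).
\begin{verbatim}
imp   <- lm(x ~ y + z, data = phase2)
xhat  <- predict(imp, newdata = cohort)
e     <- sample(residuals(imp), nrow(cohort), replace = TRUE)
v     <- sample(c(-1, 1), nrow(cohort), replace = TRUE)  # Rademacher signs
xstar <- xhat + v * e                                    # one imputation draw
\end{verbatim}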
Setting $N=5000$, we ran $M=100$ multiple imputations over $1000$ Monte Carlo replications. For all simulations, $\beta=1$, $\alpha_0=0$, $\zeta_0\approx2.3$ when $Z$ is a surrogate of $X$ with an additive measurement error and $\zeta_0\approx1.8$ with a multiplicative error, and the phase-two sample had $|S_2|=750$ on average. We considered several values of $\delta_0$, and the level of misspecification is described by the empirical power to reject the misspecified model with the level-$0.05$ likelihood ratio test of the null \eqref{nearly-true2} against the alternative \eqref{true2}.
The numerical results with additive measurement errors are summarized in Table \ref{table2} and Figure \ref{figure2}. In this scenario, regression calibration (RC) performed the best for $\delta_0$ less than approximately 0.15, since RC correctly assumes a linear model for imputing $X$ from $Z$. The two standard multiple imputation estimators had estimation bias due to a misspecified imputation model and had a larger MSE than the RC method. However, we note once again that the model diagnostic for linearity, i.e. $\delta_0=0$, had at most $20\%$ power for the levels of misspecification studied, which means one may not reliably reject the misspecified model even when $\delta_0=0.3$, so that imputation under a correctly specified model is also unlikely in practice. Indeed the standard and proposed MIR raking estimators achieved lower MSE when $\delta_0 \geq 0.15$. Thus, raking successfully leveraged the information from the cohort not in the phase two sample while maintaining its robustness, as seen in previous literature.\citep{deville1992calibration, sarndal2007calibration, breslow2009using} In this simulation we further found that the efficiency of the standard raking estimator can be improved by using multiple imputation to estimate the optimal raking variable, with efficiency gains of about $10\%$ in this example.

Table \ref{table3} and Figure \ref{figure3} summarize the results for the multiplicative error scenario. In this case, even for $\delta_0=0$, the RC and multiple imputation estimators have appreciable bias and worse relative performance compared to the two raking estimators, because of the misspecified imputation model. The two raking estimators outperformed all other estimators for all levels of misspecification. In this scenario, the MIR had smaller gains over the standard raking estimator.
\section{Data Example: The National Wilms Tumor Study} \label{sec-data}
We apply our proposed approach to the data from the National Wilms Tumor Study (NWTS). In this example, we assume a key covariate of interest is only available in a phase 2 subsample, and compare the proposed MIR method with other standard estimators for this setting. In the data example with NWTS, we are interested in the logistic model for the binary relapse response with predictors histology (unfavorable (UH) versus favorable (FH)), the stage of disease (III/IV versus I/II), age at diagnosis (years) and the diameter of tumor (cm) as
\begin{eqnarray}
\begin{split}
\qquad
&\textrm{logit} \, \mathbb{P}(\textrm{Relapse} \, | \, \textrm{Histology}, \textrm{Stage}, \textrm{Age}, \textrm{Diameter})\\
&\quad = \alpha + \beta_1 (\textrm{Age}) + \beta_2 (\textrm{Diameter}) + \beta_3 (\textrm{Histology}) + \beta_4 (\textrm{Stage}) + \beta_{3,4} (\textrm{Histology}\ast\textrm{Stage}),
\end{split} \label{wilms-model}
\end{eqnarray}
where $\beta_{3,4}$ denotes the interaction coefficient between histology and stage.\cite{lumley2011complex} We consider \eqref{wilms-model} to be a nearly-true model of the relapse probability associated with the covariates, as it is difficult to specify the true model in this real data setting.
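In R, model \eqref{wilms-model} corresponds to the formula below, where the product term expands to the two main effects plus their interaction; \texttt{nwts} is a placeholder for the cohort data frame.
\begin{verbatim}
fit <- glm(relapse ~ age + diameter + histology * stage,
           family = binomial, data = nwts)
\end{verbatim}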
Histology was evaluated from both a central laboratory and a local laboratory, where the latter is subject to misclassification due to the difficulty of diagnosing this rare disease. For the first phase data, we suppose that the $N=3915$ observations of outcomes and covariates are available for the full cohort, except that the histology is obtained only from the local laboratory. Central histology is then obtained on a phase 2 subset. By considering the outcome-dependent sampling strategies,\cite{breslow1999design,lumley2011complex} we sampled individuals for the second phase by stratifying on relapse, local histology and disease stage levels. Specifically, all the subjects who either relapsed or had unfavorable local histology were selected, while only a random subset in the remaining strata (non-relapsed and favorable histology strata for each stage level) were selected so that there was a 1:1 case-control sample for each stage level.\cite{lumley2011complex}
Similarly to the previous numerical studies, we compared four estimators, where the ``true parameters'' in \eqref{wilms-model} are given by estimates from the full cohort analysis: (i) the maximum likelihood estimates (MLE) of the regression coefficients in \eqref{wilms-model} based on the complete-case analysis of the second phase sample; and (ii) the standard raking estimator, which calibrates sampling weights by using the local histology information in the first phase sample, where the raking variable was generated from the influence functions. Here we imputed the (unobserved) central histology by using a logistic model regressing the second phase histology observations on age, tumor diameter and the three-way interaction among relapse, stage and local histology, together with their nested interaction terms. The reason for introducing interactions in the imputation model is that subjects at an advanced disease stage or with unfavorable histology mostly relapsed in the observed data. We also consider (iii) multiple imputation (MI) using the conventional bootstrap procedure with the second phase sample, and (iv) the proposed combination of raking and multiple imputation (MIR) from the previous section.
The relative performance of the methods was assessed by obtaining estimates for 1000 two-phase samples, with 100 multiple imputations applied to each two-phase sample. Table \ref{table4} summarizes the results. Similarly to the numerical illustration in the previous section, we found that the proposed method (MIR) had the best performance in terms of achieving the lowest MSE for the target parameter available only on the subset. While raking does not provide the lowest MSE for every parameter, in this example MIR had the lowest squared error summed over the model parameters.
\section{Discussion}
There are many settings in which variables of interest are not directly observed, either because they are too expensive or difficult to measure directly or because they come from a convenient data source, such as EHR, not originally collected to support the research question. In any practical setting, the chosen statistical model to handle the mismeasured or missing data will be at best a close approximation to the targeted true underlying relationship. A general discussion of the difficulty of testing for model misspecification demonstrates that the data at hand cannot be used to reliably test whether or not the basic assumptions in the regression analysis hold without good knowledge of the potential structure.\cite{freedman2009} Here, we have considered the robustness-efficiency trade-off of several estimators in the setting of mild model misspecification, where idealized tests with the correct alternative have low power. When the misspecification is along the least-favorable direction contiguous to the true model, the bias will be in proportion to the efficiency gain from a parametric model.\cite{lumley2017robustness} We studied the relative performance of design-based estimators for a nearly-true regression model in two cases, logistic regression in a case-control study and linear regression in a two-phase design, where the misspecification was approximately in the least favorable direction. In both cases, the misspecification took the form of a mild departure from linearity, and as expected, the raking estimators demonstrated better robustness compared to the parametric MLE and standard multiple imputation models.
Our approach to local robustness is related to that of Watson and Holmes (2016),\cite{watson2016} who consider making a statistical decision robust to model misspecification around the neighborhood of a given model in the sense of Kullback--Leibler divergence. Our approach is simpler than theirs for two reasons: we consider only asymptotic local minimax behavior, and we work in a two-phase sampling setting where the sampling probabilities are under the investigator's control and so can be assumed known. In this setting, the optimal raking estimator is consistent and efficient in the sampling model and so is locally asymptotically minimax. In more general settings of non-response and measurement error, it is substantially harder to find estimators that are local minimax, even asymptotically, and more theoretical work is needed.
Another contribution of our study is that we demonstrated a practical approach for the efficient design-based estimator under contiguous misspecification. Without an explicit form of an efficient influence function, the characterization of the efficient estimator may not always lead to readily attainable computation of the efficient estimator in the standard raking method. We examined the use of multiple imputation to estimate the raking variable that confers the optimal efficiency.\cite{han2016combining} Our proposed raking estimator is easy to calculate and provides better efficiency than any raking estimator based on a single imputation auxiliary variable. In the two cases studied, the improvement in efficiency was evident, though at times small. On the other hand, the degree of improvement of the MI-raking estimator over the standard raking approach is expected to increase with the degree of non-linearity of the score for the target variable. In additional simulations, not shown, we did indeed see larger efficiency gains for MI-raking over single-imputation raking with large measurement error in $Z$.
In many settings, there is a preference for simpler models when there is a lack of evidence to support a more complicated approach, because of the clarity of interpretation that simpler models provide.\cite{box2005statistics,stone1985additive} In such settings, design-based estimators are easy to implement in standard software and provide a desired robustness. More theoretical work is also needed to find a more practical representation of the least-favorable contiguous model for the general setting in order to better understand how much of a practical concern this type of misspecification may be. The bias--efficiency trade-off we describe is also important in the design of two-phase samples. The optimal design for the raking estimator will be different from the optimal design for the efficient likelihood estimator, and the optimal design when the outcome model is ``nearly-true'' may be different again.
\section*{Acknowledgments}
This work was supported in part by the Patient Centered Outcomes Research Institute (PCORI) Award R-1609-36207 and U.S. National Institutes of Health (NIH) grant R01-AI131771. The statements in this manuscript are solely the responsibility of the authors and do not necessarily represent the views of PCORI or NIH.
\section*{Data availability}
Source code in R for these simulations and the National Wilms Tumor Study data are available at \url{https://github.com/kyungheehan/calib-mi}.
\nocite{*}
\bibliographystyle{unsrtnat}
\section{Introduction}
As e-commerce has emerged as an indispensable part of the retail sector, timely and efficient last-mile delivery solutions have materialized as its catalyst. A steady increase in the amount of e-commerce activities has been noted in numerous reports. An analysis by ACI Worldwide states that the transaction volumes in most retail sectors saw a 74 percent increase in March 2020, compared to the same period in 2019, due in part to the COVID-19 pandemic \cite{ACI}. The increase in e-commerce activities also increased the load on the delivery sector. The need for viable solutions to this problem became more apparent throughout the pandemic with skyrocketing demands and prolonged delivery times. To maintain efficient operations, delivery services involving autonomous devices have emerged as a promising solution. In this view, conventional delivery services, involving cars and trucks, will be supplemented by emerging aerial delivery fleets. The autonomy of these devices will range from a fully human operated level 0 to a fully autonomous level 5.
Future delivery networks are expected to function on the basis of combined operations between transportation networks and information and communication technology (ICT) networks. This will be supported by autonomous aerial delivery fleets operating over designated airspace. We envision the joint use of cargo drones alongside hybrid or electric vertical take-off and landing (VTOL) aircraft, which can be used both for the delivery of parcels and the transportation of cargo drones.
Drone-supported aerial delivery networks have long been studied by retail sector giants, including Amazon and Alibaba \cite{Nesrine}. However, the extensive use of such networks in metropolitan areas has not yet been considered. In trials by Amazon Prime Air and Tesco (in the UK), the target areas for drone-supported aerial deliveries are rural areas, where the population and settlements are sparse. Yet, the impact of such deliveries will be effective only by servicing the densely populated metropolitan areas.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.88\linewidth]{figures/SEKIL-1.eps}
\caption{An overview of the 3D connected smart delivery network. Real-time action is possible within the vehicles of the autonomous delivery fleet through the connected multi-industry artificial intelligence engine. The warehouse will become a fully automated architecture through the use of the autonomous delivery fleet.} \label{fig-ML}
\end{figure}
A technological limitation of the current wireless network architecture is that it fails to provide sufficient support for aerial vehicle assisted delivery solutions. Although high data rates are achievable for users on the ground and in highrises through meticulous planning of extant 5G networks, coverage up to the maximum permitted flight height of drone delivery nodes does not support high data rate and low-latency solutions. This is mainly because of the non-isotropic radiation patterns of terrestrial base station antennas: the areas above the antennas do not receive coverage. It is expected that this lack of coverage will hinder the fully autonomous operations of drone delivery nodes. Furthermore, next generation delivery networks are expected to be fully managed by artificial intelligence (AI) involving innovative distributed machine learning (ML) algorithms, as depicted in Fig. \ref{fig-ML}. Yet the computational loads of these algorithms may be too intensive for drone nodes, where energy efficiency will remain a strict design goal.
To address these challenges, this paper examines offloading possibilities and delivery route planning for 3D highways within a vertical heterogeneous network (VHetNet) paradigm. We present a realistic vision of a next-generation delivery network with a focus on issues pertaining to connectivity as well as computation and caching. The main components of providing a fully connected, high rate, and low-latency 3D network are described. The solution we envision makes use of high altitude platform station (HAPS) systems as an essential component. This architecture is also in line with the emerging literature on 6G networks. In our view, the main catalyst will be HAPS constellations, which offer an excellent synergy between evolving terrestrial networks and emerging low Earth orbit (LEO) satellite constellations. By using HAPS for connectivity, caching, and computational offloading, we predict that next-generation delivery networks will soon become a reality, even in densely populated metropolitan areas.
The rest of this paper is organized as follows. The evolution of wireless architecture to support fully autonomous air fleets is described in Section II. In addition, the main architectural components are described from the perspectives of connectivity, caching, and computation. Section III describes the main features of HAPS systems. In Section IV, we present our vision of AI-powered and connected delivery networks with rapid response rates. Open issues are highlighted in Section V. Section VI concludes the paper.
\section{Towards 6G: The Interaction Between the ICT Network and the Delivery Network}
We envision that delivery networks in the near future will be semi-autonomous and that the goal of achieving full autonomy will be realized in the next two decades. A significant change in delivery networks has been the introduction of drone-based deliveries, trials of which have long been under consideration by the leading retail sector players. However, as noted above, such trials mainly concentrate on rural areas with sparse populations and housing. To be economically viable, next-generation last-mile delivery services need to address metropolitan areas. Yet, the current ICT networks’ capabilities fail to address the operational needs of such delivery networks in densely populated urban areas. Ambient interference, shadowing, and a lack of global navigation satellite system (GNSS) signaling in urban corridors have been noted as the main obstacles encountered in the trials of Unmanned Aircraft System Traffic Management (UTM), supported by the collaboration of NASA and FAA \cite{UTM}.
The VHetNet paradigm, currently being studied by researchers, has the potential to address the needs of next generation delivery networks. A VHetNet is composed of three layers: a terrestrial network, a space network (satellites), and an aerial network \cite{RAPOR1}. The terrestrial network is the main functional block of the VHetNet, which mainly connects users and devices to the core network. As the lowest layer, the terrestrial network includes various network generations, including 4G and 5G cellular networks, in combination with unlicensed band systems, such as WiFi. Satellite networks are composed of three satellite layers: LEO, medium Earth orbit (MEO), and geosynchronous Earth orbit (GEO) satellite systems and their corresponding ground stations.
There are several commercially operated systems, including GEO and MEO constellations, which mostly provide communication and surveillance/monitoring services. Forthcoming constellation deployment plans, including those of OneWeb, Amazon's Project Kuiper, and SpaceX, will introduce densely populated LEO constellations. Hence, the interaction between these LEO constellations and terrestrial networks is expected to increase in the very near future.
The aerial ICT network's architecture will consist of unmanned aerial vehicles (UAVs) in addition to airships, balloons, and HAPS systems. The drones are envisioned to be at a height of up to a few hundred meters. As for HAPS systems, the International Telecommunications Union (ITU) defines their operating altitude to be between 20 km and 50 km. However, most commercial HAPS trials target 18 km to 21 km, including Airbus Zephyr, Google Loon, and Stratobus of ThalesGroup \cite{Survey}. The aerial network, which is connected to the terrestrial network, improves the flexibility of the network design in terms of both capacity and coverage. With an aerial ICT network, coverage of highly populated metropolitan areas will then be possible while supporting high data rates.
The aerial network in the VHetNet architecture needs to be carefully designed. The two interacting sub-layers in the aerial network will introduce agility to the network functionalities. The first sub-layer includes the ultra-mobile UAV nodes, which can work as a base station, a relay node or a user equipment. The second complementary sub-layer is composed of HAPS systems, the quasi-stationary network elements. This HAPS sub-layer will provide important functions in terms of coverage, computation, and caching, and it has the potential to solve important problems in next-generation delivery networks, as detailed below.
In a VHetNet, the network elements mainly target three complementary objectives, addressing everything needed to make the 3D connected smart delivery network a reality:
\begin{enumerate}
\item \textbf{Increased overall throughput:} The individual data rates and/or the total number of fleet elements that can be served can be increased by deploying new aerial base stations.
\item \textbf{Improved coverage:} The outage probability can be reduced by the use of mobile base stations; hence, coverage can always be provided for the fully connected 3D networks that support autonomous delivery fleet elements.
\item \textbf{Near-user computation:} The delay due to computation through the core network can be significantly reduced by performing the computation and caching functionalities near the fleet elements.
\end{enumerate}
The management of this highly complex multi-connectivity network with multi-layer computation offloading is supported through the use of ML algorithms. Additionally, in a highly dynamic environment, classical radio resource management approaches may fail to address the tight quality of service (QoS) requirements of the end-users. Data-driven ML algorithms will serve as a solution to such problems, especially in a distributed sense, to address the quick decision-making needs of next generation delivery networks.
The three objectives listed above will be enabled by the VHetNet, and the HAPS components will serve as an indispensable element.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.88\linewidth]{figures/SEKIL-2.eps}
\caption{A depiction of the antenna gain patterns of a HAPS node that is not beam-aligned. The almost uniform gain observed due to the geometry provides a connectivity advantage with respect to terrestrial base station towers and high speed LEO satellites.} \label{fig-HAPS1}
\end{figure}
\section{The Key Enabler: High Altitude Platform Station (HAPS)}
Current ICT networks aim to provide coverage at the ground level and inside buildings \cite{Mozaffari}. Such networks cannot address the challenging requirements of next generation delivery networks. One of the significant benefits of VHetNets is the quasi-stationary HAPS sub-layer of the aerial network, which may serve as an essential component for aerial network planning and management and can thus address the needs of next-generation delivery networks. An extensive description of the use of HAPS in VHetNets is given in \cite{Survey}.
Drones can travel up to a height of 121 meters (400 feet), and there may not be coverage at this level. The reason is basic geometry: the antenna patterns are sectorized and directional, which means they do not transmit signals skyward. Although under ideal conditions 3D spherical coverage can be modeled, in practice no antenna can satisfy this ideal design. 3GPP study items TR 22.926 \textit{Guidelines for Extra-territorial 5G Systems} and TR 22.839 \textit{Study on vehicle-mounted relays} investigate the use of drone-mounted base stations for coverage. Although these drone-mounted base stations may provide coverage for a specific height, for a specific sphere, continuous coverage at all heights up to 121 meters is not feasible for metropolitan areas. The problem is mainly due to the beam patterns of base station antennas, which produce a large number of sidelobes whose gains vary with height. This set-up also introduces another problem in the case of densely deployed devices with ultra-high mobility, such as swarms of drones, where the handoff of each node may introduce a substantial delay to the system. Considering LEO satellites, although they can provide service at the operating altitudes of drones, the high speed of the satellites (approximately 7 km/s) introduces a high load from a mobility management perspective \cite{MobilitySat}. This leaves HAPS, with their quasi-stationary nature, as an indispensable enabler to address the requirements of next generation delivery networks, as opposed to the patchy coverage currently provided by terrestrial communication networks \cite{Grace}.
The signal transmission advantage provided by the use of a HAPS as a mega-tower in the sky is depicted in Fig.~\ref{fig-HAPS1}. This figure highlights the antenna gain cross-sections at varying heights within the operating range of a cargo drone. It uses the sectoral antenna pattern as defined in Recommendation ITU-R F.1336-5, \textit{recommends} 3.2.1, and considers the directional antenna gain for peak side-lobe patterns in the frequency range from 6 GHz to 70 GHz. Symmetric elevation and azimuth beamwidths are considered. Even when the beam points almost 10 km away from the depicted region, an almost uniform gain is observed from the HAPS antenna. Furthermore, as the likelihood of the presence of a line-of-sight path is high, the performance-degrading impact of small-scale fading is relatively low compared to that of terrestrial networks. A Ricean fading model has often been considered for HAPS connections \cite{Survey}. The geometric advantages can compensate for the high path loss of the ground-HAPS connections, as the HAPS nodes are located 20 km above ground level. This gives the HAPS nodes a wide line-of-sight area that is rather advantageous for sensing the ground. Additionally, due to the large form factor of HAPS systems, which can be equipped with payloads of high computational capability, ML can also play a role in these set-ups, targeting the corresponding optimization problems in terms of radio resource management and beamforming.
In addition to coverage, the size of the HAPS provides the opportunity to allocate computational and caching resources as the payload. These resources will enable quick responses to the connected devices by enabling computing in close proximity to the fleet elements (as opposed to the cloud data center), which can reduce the overall end-to-end delay and provide a unified and seamless computation resource for fog computing. From a caching perspective, instantaneous traffic data, maps, and navigation-related information can be quickly accessed from the data storage elements in the HAPS node. Due to the position of a HAPS node in the sky, an unobstructed line-of-sight channel is likely to be encountered. This scenario can provide the full benefit of massive multi-input multi-output (MaMIMO) architectures.
Due to these facts, the use of HAPS to support aerial delivery networks is a natural fit.
\section{AI-Powered and Connected Aerial Delivery Networks}
The delivery operation for each delivery fleet member, including cargo drones, is configured independently from the rest of the delivery requirements. To address the ever-increasing demands of consumers, the scalability of the delivery network needs to be considered jointly with the corresponding constraints of the fleet elements, with a focus on densely populated urban areas. A prominent solution is aerial delivery networks. A vision of a next-generation delivery network is depicted in Fig. \ref{fig-ML}. The autonomous delivery fleet can be composed of terrestrial vehicles, marine vehicles, and aerial vehicles of varying levels of automation, i.e., each of these nodes can be either solely human operated or autonomous from level 1 to level 5 \cite{autonomous}. The level of autonomy can change according to the operating conditions, and a fully autonomous fleet without any human operator is envisioned as the final goal.
The autonomous delivery fleet operates between the warehouse and the customers \cite{MacKenzie}. The management of this fleet requires ubiquitous connectivity at all times. However, ubiquitous connectivity alone does not guarantee optimal operations. Advances in AI need to be exploited for real-time operational planning, which may include instantaneous route changes due to impulsive effects, such as weather conditions. To enable such low-latency transport network responses in a distributed sense within the autonomous delivery fleet, caching and computation services need to be accessible by each transportation element, in line with the benefits of fog computing.
\subsection{The Expected Role of Artificial Intelligence (AI)}
The retail sector currently makes active use of AI. For instance, recommendation engines on the consumer side, stocking on the warehouse side, and route planning on the delivery side are well-studied problems, and customized solutions are already available as commercial products. However, the integration of a real-time autonomous fleet and full 3D connectivity, along with caching and offloading functionalities, can enable a multi-sector AI implementation that is also able to provide low-latency responses to changing conditions and requirements. A real-time AI across these domains has not yet been implemented.
Coherent operations between a warehouse, the aerial delivery network, the ground delivery network, and the consumer can be enabled through the use of multi-faceted ML techniques. Given the inherent complexity of the corresponding optimization problem, ML-based solutions can be a practically feasible alternative to be deployed across different sectors. A distributed learning architecture is expected to pave the way for this multi-industry perspective. In this optimization problem, the operating states can be jointly processed with the demand forecast, even instantaneously, based on consumer trends. Despite the apparent benefits, the joint consideration of demand and route planning has not yet been implemented due to the associated complexity and technical limitations. However, 3D connectivity offers the potential to enable each of the delivery nodes to behave in an autonomous manner for supply chain management and delivery via computation and caching services. We envision that ML approaches will serve to connect the supply management in the warehouse and the delivery network with the instantaneous demands of the consumers.
Since a single model is not expected to solve the problem accurately on its own, ensemble learning techniques can be of benefit \cite{MLSurvey}. Distributed learning approaches, including the parameter server paradigm, federated learning, and fully distributed sufficient factor broadcasting, can serve as powerful tools to enable this vision. Such a network has the potential to increase customer satisfaction by addressing customer needs as fast as possible, along with an OPEX reduction in the delivery architecture. Further savings from the fuel consumption perspective are also possible \cite{Hanzo}.
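As a minimal illustration of the aggregation step in such schemes (not a system design), the following Python sketch performs one round of federated averaging, where a central node, e.g., a HAPS, combines model updates from fleet elements weighted by their local sample counts; the layer shapes and counts below are made up.
\begin{verbatim}
import numpy as np

def federated_averaging(client_weights, client_sizes):
    # Layer-wise average of client model parameters, weighted by
    # the number of local training samples of each client.
    total = float(sum(client_sizes))
    return [
        sum(w[i] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Toy usage: three fleet elements, two parameter "layers" each.
clients = [[np.full(4, k), np.full(2, k)] for k in (1.0, 2.0, 3.0)]
model = federated_averaging(clients, client_sizes=[100, 200, 100])
\end{verbatim}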
\subsection{Operating Aerial Networking Components in Urban Areas}
Autonomous delivery fleets will be composed of various devices. These include cargo trucks, vans, motors, cargo planes, autonomous or human operated VTOL aircraft, and autonomous drones for parcel delivery. The ranges and energy efficiencies of these devices are determined by the properties of each individual fleet element. For example, autonomous ships mainly operate over designated seaways. Trucks mainly operate on highways. The optimization of route planning for heterogeneous fleet components is currently under investigation, and promising results have already been recorded \cite{Optimizasyon}.

Most delivery network elements have predetermined operational characteristics, whose boundaries are still under development for the aerial delivery network, including VTOL aircraft and drones. The aerial delivery network needs to operate coherently with the ground network, which is also composed of conventional delivery trucks with drivers and autonomous vehicles of varying sizes.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.88\linewidth]{figures/SEKIL-3.eps}
\caption{A hierarchical aerial platooning is depicted, where coverage, computation and caching services are provided by a HAPS, located above approximately 20 km from the ground level.} \label{Platoon}
\end{figure}
\subsubsection{Aerial Platooning}
Platooning in vehicular networks aims to control multiple vehicles on the basis of a leading vehicle and the use of cruise control. The following vehicles adjust their speeds and paths on the basis of the leading vehicle. The fuel savings and increased traffic efficiency make platooning an attractive paradigm in today’s intelligent transportation systems. Next generation delivery networks can also benefit from the advantages provided by platooning in 3D aerial settings. The leading aerial vehicles can be VTOL aircraft or higher capacity drones. From the perspective of battery power and fuel, the path and task planning can be instantaneously executed in the HAPS node, and the corresponding flight commands can be transferred back to the platoon’s leading aerial vehicle. Also, instantaneous traffic data can be cached at the HAPS node, which can enable the generation of accurate path and task commands. The envisioned operation is depicted in Fig. \ref{Platoon}. As we can see, on the basis of the available fuel and battery resources of the cargo drones, co-transportation on higher capacity aerial vehicles, such as VTOL aircraft, can also be planned instantaneously, depending on the final destination of the cargo drones. For task and route planning, the potential of reinforcement learning techniques is visible even in terrestrial vehicular platoons \cite{RL-platoon}.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.88\linewidth]{figures/SEKIL-4.jpg}
\caption{The 3D aerial highway from the warehouse to a metropolitan area is depicted. The community pick-up/drop-off stations enable a relatively simpler path planning for the final part of the route. The connectivity of the aerial devices is maintained by the HAPS.} \label{community}
\end{figure}
\subsubsection{Community Pick-up/Drop-off Stations}
A challenge for drone delivery nodes will be reaching consumers at their homes. Unlike traditional delivery services, which can deliver a package to a consumer with the ring of a doorbell, drone delivery nodes may face challenging operational environments. For these cases, we envision community pick-up/drop-off points where packages can be picked up, and notifications for the arrival of packages can be sent in advance, in accordance with user preferences, as shown in Fig. \ref{community}, for example 15 minutes before the expected delivery time. These community places can also be used to transfer other packages that may be picked up by the autonomous drones or droids.
\subsubsection{3D Aerial Highways}
It is expected that the number of aerial delivery vehicles will increase. For the sustainable management of these vehicles, along with corresponding regulations, proper route planning is a must. To help in planning, 3D highways are being considered by NASA and the FAA under the UTM activities \cite{UTM}. The guidelines and regulations aim to provide a monitored speed region for metropolitan deliveries. This solution evokes the famous cartoon series \emph{The Jetsons}.
Mimicking the highway/street hierarchy of terrestrial roads, multiple regulated speed limits within these drone highways can make the operation of multiple fleet elements possible, which can provide scalable retail solutions for next generation consumer networks. The quasi-structured mobility restrictions will facilitate the operations of the drone fleet, especially in densely populated areas. The ubiquitous access to the optimization engine supported by AI will enable near real-time planning over these 3D highways, enabling the instantaneous reactions of the drones while operating. The availability of the navigation and the route planning data will also provide an increased level of safety and reliability. The advantageous channel characteristics of the 3D coverage provided by a HAPS are clearly shown in Fig. \ref{fig-HAPS2}. In line with the models in the literature, varying line-of-sight power (K parameter) values are considered for a Ricean channel, and the average outage probability values are shown for an aerial highway of the dimensions noted in Fig. \ref{community}. The corresponding simulation parameters are given in Table \ref{sim}. The terrestrial base stations simply cannot provide coverage at this height, as opposed to the acceptably low outage probabilities that a HAPS can provide in a 3D aerial highway.
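To make the computation behind such outage values concrete, the following Python sketch evaluates a single grid point through a free-space link budget with Ricean fading, using the carrier, bandwidth, temperature, and normalized rate from Table~\ref{sim}; the transmit power, antenna gain, and slant distance are our own illustrative assumptions and are not taken from the actual simulation setup.
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2

c, f, B, R = 3e8, 10e9, 10e6, 1.0  # 10 GHz, 10 MHz, 1 b/s/Hz
d = 20e3                           # slant distance to HAPS (assumed)
Pt_dBm, G_dBi = 40.0, 30.0         # assumed power / antenna gain

fspl_dB = 20 * np.log10(4 * np.pi * d * f / c)              # ~138.5 dB
noise_dBm = 10 * np.log10(1.380649e-23 * 297.15 * B * 1e3)  # kTB, 24 C
snr = 10 ** ((Pt_dBm + G_dBi - fspl_dB - noise_dBm) / 10)

# Ricean factor K: 2(K+1)|h|^2 follows a noncentral chi^2(df=2, nc=2K).
for K in (1, 5, 10):
    x = (2 ** R - 1) / snr  # fading gain threshold for outage
    print(K, ncx2.cdf(2 * (K + 1) * x, df=2, nc=2 * K))
\end{verbatim}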
\begin{table}
\caption{Simulation parameters.}
\label{sim}
\begin{tabular}{|p{3cm}|p{5cm}| }
\cline{1-2}
\textbf{Parameter} & \textbf{Value} \\ \cline{1-2}
Carrier frequency & 10 GHz \\ \cline{1-2}
Bandwidth & 10 MHz\\ \cline{1-2}
Temperature & 24 $^\circ$C \\ \cline{1-2}
Normalized Rate & 1 b/s/Hz \\ \cline{1-2}
Channel & Ricean with varying K values\\ \cline{1-2}
HAPS Antenna Pattern & ITU-R F.1336-5, \textit{recommends} 3.2.1\\ \cline{1-2}
HAPS Beam & Pointing towards 10 km in the $x$ direction and 500 m in the $y$ direction\\ \cline{1-2}
Dimensions of the 3D Aerial Highway & $\Delta x = 10$ m, $\Delta y = 10$ m, $\Delta z = 10$ m, $X_{\textrm{m}} = 100$ m, $Y_{\textrm{m}} = 10$ m, $Z_{\textrm{m}}= 100$ m \\ \cline{1-2}
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/Outage.eps}
\caption{The volumetric average outage probability of the 3D aerial highway. As the line-of-sight component (K parameter) increases, the performance improves, as expected from the HAPS-to-ground channel models.}\label{fig-HAPS2}
\end{figure}
\section{Open Issues}
The relevant open research problems are discussed below.
\paragraph*{Seamless integration of the delivery network and the VHetNet} The integrated operation of next generation delivery networks and the next generation ICT network, the VHetNet, needs to be seamless from the end-user perspective. This level of seamless integration requires more care than simply assigning a network slice to the delivery services. The related services can be offered via a multi-industry standardization activity among retailers and communications service providers. Furthermore, the security, privacy, and safety of the flights also need to be monitored by the same entity.
\paragraph*{Joint connectivity between the fleet, edge, and core elements} Functionalities of the aerial fleet elements need to be identified for network agility aspects. The waveform designs and the use of non-contiguous bandwidth are interesting challenges that have not been encountered before. The clustered aerial platooning architecture forces the use of device-to-device links, whereas the edge connection at the HAPS and core-based cloud computing functionalities also need to be supported. Due to the geometric advantages provided by the HAPS nodes, extremely narrow pencil beams can be used for high data rate connections to the fleet elements. Inter-HAPS handoff strategies also need to be investigated for a worldwide deployment.
\paragraph*{Cognitive radio resource management} Enhanced cognition capabilities that are dependent not only on the spectrum usage status, but that also include the energy storage aspects of the individual components of the fleet elements are needed. This will eliminate a high control plane load on the serving HAPS node. As a single HAPS node will provide service to tens of kilometers, even high-speed aerial nodes can be served with a single HAPS cell without the need for a sophisticated mobility management framework, so that these high-speed nodes can remain operational in these selected frequency bands after performing spectrum sensing. The use of higher frequency bands, including the terahertz bands, can be a remedy to alleviate potential packet collisions in the cognitive interfaces.
\paragraph*{Computation algorithms at the edge} The computation algorithms that address instantaneous customer demand along with the instantaneous changes in the fleet management environment need special attention, since they not only address e-commerce activities but also perform fleet management via path planning and task scheduling while addressing the connectivity in the VHetNet. Customized algorithms for this multi-industry operation are needed to enable a scalable extension of the targeted services in metropolitan areas.
\paragraph*{Energy management} There are two perspectives to energy management in next generation delivery networks. From the communication perspective, the HAPS nodes are always considered a green solution, for they mainly extract energy from solar panels \cite{Survey}. They also use hybrid energy sources, including wind and solar energy, along with possible RF energy harvesting approaches. Maintaining a quasi-stationary position against strong winds and varying weather conditions requires supplementary energy in addition to the energy needed for the payload. Hence, improved efficiency levels will be needed in terms of both harvesting and storage. Even nuclear energy may be an option for a sustainable operation.
Considering the delivery network perspective, the energy management of the drones will be continuously monitored. In case of emergency power needs, simultaneous wireless information and power transfer (SWIPT) based energy transfer from the HAPS node to the fleet elements may be an effective approach. The potential savings in the delivery network from OPEX and CAPEX perspectives need to be quantified in a realistic manner.
\section{Conclusions}
A new wireless network architecture is needed to enable the functionality of a fully autonomous parcel delivery network with aerial fleet elements, including cargo VTOL aircraft and cargo drones. In this paper, we presented our vision of a network that not only assists with sensing capabilities while providing reliable connectivity, but also serves as a computational and caching platform, powered by the HAPS systems of the VHetNets.
\section*{Acknowledgment}
The authors would like to thank Prof. Abbas Yongacoglu for the valuable discussions. This work was supported by Huawei Canada Co., Ltd.
\bibliographystyle{IEEEtran}
\section{Technical Proofs}
\label{apx:technical-proofs}
\begin{proof}(Proposition~\ref{prop:unbiased})
Since $\{u^\star,v\} \in E'$, we have in the unweighted case that $\effresG{G_\star}{u^\star}{v}$ is the number of spanning
trees of $G_\star$ that contain $\{u^\star,v\}$ divided by the number of all spanning trees of
$G_\star$ (follows from Kirchhoff's theorem, see~\cite[Ch.~II]{DBLP:books/daglib/0009415}).
In the weighted case, replace ``number'' by ``total weight'', respectively
(where the weight of a UST is the product of all edge weights).
We focus on the unweighted case in the following for ease of exposition; the proof for the weighted case works
in the same way.
Clearly, $R[v] / \tau$, as used by Algorithm~\ref{alg:approx-diag-omega}, is an estimator for
$\effresG{G_\star}{u^\star}{v}$. It remains to show that it is unbiased, i.\,e.,\xspace ${\ensuremath{\mathbb{E}}}[R[v]/\tau] = \effresG{G_\star}{u^\star}{v}$.
To this end, let $T_i$ be the UST sampled in iteration $i$ and $X_{i,v}$ the random indicator variable with
$X_{i,v} = 1$ if $\{u^\star,v\} \in T_i$ and $0$ otherwise. Then:
\begin{align*}
{\ensuremath{\mathbb{E}}}[R[v]/\tau] &= \frac{1}{\tau} {\ensuremath{\mathbb{E}}}[R[v]]
= \frac{1}{\tau} \sum_{i=1}^\tau {\ensuremath{\mathbb{E}}}[X_{i,v}]
= \frac{1}{\tau} \sum_{i=1}^\tau \mathbb{P}[\{u^\star,v\} \in T_i] \\
& = \frac{1}{\tau} \sum_{i=1}^\tau \effresG{G_\star}{u^\star}{v}
= \effresG{G_\star}{u^\star}{v},
\end{align*}
which follows from the definition of expectation and the above correspondence between
(the relative fre\-quency of an edge in) USTs and effective resistances.
\end{proof}
\begin{proof}(Proposition~\ref{prop:ust-wilson-time})
By plugging the augmented graph $G_\star$ (with constant diameter)
into the proof of Lemma~10 of Ref.~\cite{angrimanPGM20}, we obtain for the running time $W(n)$
on a graph with $n$ vertices: $W(n) = \ensuremath{\mathcal{O}}(\operatorname{vol}(G_\star)) = \ensuremath{\mathcal{O}}(\alpha \operatorname{vol}(G) + n)$ expected time per call
in Line~\ref{line:ust-sampling}.\hfill
\end{proof}
\begin{proof}(Theorem~\ref{thm:time-approx})
For the linear system in Line~\ref{line:linear-system}, we employ the
SDD solver by Cohen et al.\xspace~\cite{CohenKyng14}; it takes $\tilde{\ensuremath{\mathcal{O}}}(m \log^{1/2} n \log(1/\eta))$
time to achieve a relative error bound of $\Vert \myvec{\tilde{x}} - \myvec{x} \Vert_{\mat{L'}} \leq \eta \, \Vert \myvec{x} \Vert_{\mat{L'}}$, where $\mat{L'} := \alpha \mat{L} + \mat{I}$.
We can express the equivalence of this matrix-based norm with the maximum norm by
adapting Lemma~12 of Ref.~\cite{angrimanPGM20} with the norm for $\mat{L'}$ (instead of $\mat{L}$):
$\sqrt{\mu_1} \cdot \Vert \myvec{x} \Vert_\infty \leq \Vert \myvec{x} \Vert_{\mat{L'}} \leq \sqrt{\alpha (c+2) \operatorname{vol}(G)} \Vert \myvec{x} \Vert_\infty$,
where $\mu_1$ is the smallest eigenvalue of $\mat{L'}$. In fact, $\mu_1 = \alpha \lambda_1 + 1 = 1$,
where $\lambda_1 = 0$ is the smallest eigenvalue of $\mat{L}$, so that we can simplify:
\begin{equation}
\Vert \myvec{x} \Vert_\infty \leq \Vert \myvec{x} \Vert_{\mat{L'}} \leq \sqrt{\alpha (c+2) \operatorname{vol}(G)} \Vert \myvec{x} \Vert_\infty.
\end{equation}
Let us set $c := \frac{n}{\alpha \cdot \operatorname{vol}(G)}$; by our assumption in the theorem,
$c$ is a constant. Hence, if we set $\eta := \kappa \epsilon / 6 \sqrt{\alpha (c+2) \operatorname{vol}(G)}$,
the SDD solver's accuracy can be bounded by:
\begin{align*}
\Vert \myvec{\tilde{x}} - \myvec{x} \Vert_\infty & \leq \Vert \myvec{\tilde{x}} - \myvec{x} \Vert_{\mat{L'}} \leq \eta \cdot \Vert \myvec{x} \Vert_\mat{L'} \\
& \leq \eta \sqrt{\alpha (c+2) \operatorname{vol}(G)} \Vert \myvec{x} \Vert_\infty \\
& = \frac{\kappa \epsilon}{6} \Vert \myvec{x} \Vert_\infty \leq \frac{\kappa \epsilon}{3}.
\end{align*}
The last inequality follows from the fact that the values in $\myvec{x}$ are bounded by the effective
resistance, which in turn is bounded by the graph distance and thus $2$ (via the edges to/from $u$).
If each entry has accuracy of $\kappa \epsilon / 3$ (or better),
then Eq.~(\ref{eq:forest-dist-pair}) is solved with accuracy $\kappa \epsilon$ (or better).
The resulting running time for the SDD solver is thus $\tilde{\ensuremath{\mathcal{O}}}(m \log^{1/2} n \log(1 / \eta))
= \tilde{\ensuremath{\mathcal{O}}}(m \log^{1/2} n \log(\sqrt{\alpha \operatorname{vol}(G)} / \epsilon))$.
According to Proposition~\ref{prop:ust-wilson-time} and with $n \leq c \cdot \alpha \cdot \operatorname{vol}(G)$,
sampling one UST takes $\ensuremath{\mathcal{O}}(\alpha \operatorname{vol}(G))$ expected time. It remains to identify
a suitable sample size $\tau$ for the approximation to hold. To this end, let $\epsilon' := (1-\kappa)\epsilon$
denote the tolerable absolute error for the UST-based approximation part.
Plugging $\tau := \lceil \log(2m/\delta) / 2(\epsilon')^2\rceil$ into the proof of Theorem~3 of Ref.~\cite{angrimanPGM20}
(and thus essentially Hoeffding's bound) with the fact that the eccentricity of $u$
is $1$, we obtain the desired result.\hfill
\end{proof}
\begin{proof}(Lemma~\ref{lemma:a-vc})
The proof in Li et al.\xspace~\cite[Lemma~4.1]{DBLP:conf/www/0002PSYZ19} exploits (among others) that the diagonal is constant.
If we replace $3$ by $4$, this argument and all others (such as positive definiteness)
still hold and the result becomes $(n-k)/4$ instead of $(n-k)/3$.\hfill
\end{proof}
\begin{proof}(Theorem~\ref{thm:GFC-NP-hard})
Let $G$ be 3-regular and let $S \subset V$, $|S| = k$.
We prove that $f(S) \geq \frac{4}{3n+k} + (\frac{1}{4} + \frac{1}{4(3n+k)}) (n-k) =: t(n,k)$, where equality
holds if and only if $S$ is a vertex cover of $G$.
Let $\mat{A}$ be the $(n-k) \times (n-k)$ submatrix of $\msub{\mat{L}_\star}{S}$ that corresponds
to all vertices except the universal vertex, i.\,e.,\xspace $\mat{A} := \msub{\mat{L}}{S} + \mat{I}$.
Note that $\mat{A}$ is symmetric.
%
Since $G$ is 3-regular, all diagonal entries
of $\mat{A}$ are 4. All non-diagonal entries
have value $-1$ and there can be
at most three such entries per row / column of $\mat{A}$.
In particular, the row and column sums of $\mat{A}$
are all $\geq 1$.
An elementary calculation (i.\,e.,\xspace expanding the $ij$-th
element of the matrix product $\mat{A} \mat{A}^{-1}$ and
summing over $j$) shows:
\begin{equation}
\label{eq:rowcol-relation}
\left(\sum_{\ell} \mat{A}_{i \ell}\right) \left(\sum_{\ell} \mat{A}^{-1}_{\ell i}\right) = 1,
\end{equation}
hence the row sums and column sums of $\mat{A}^{-1}$
are all $\leq 1$.
Let us now decompose $\msub{\mat{L}_\star}{S}$ into blocks as follows:
\[
\msub{\mat{L}_\star}{S} = \left(\begin{array}{c|cccc}
n & -1 & \ldots & -1 \\ \hline
-1 & & & \\
\ldots & & \mat{A} & \\
-1 & & & \\
\end{array}\right).
\]
By blockwise inversion we obtain:
\[
(\msub{\mat{L}_\star}{S})^{-1} = \left(\begin{array}{c|cccc}
\frac{1}{n - \myvec{1}^T \mat{A}^{-1} \myvec{1}} & & \ldots & \\ \hline
& & & \\
\ldots & & (\mat{A} - \frac 1n \mat{J})^{-1} & \\
& & & \\
\end{array}\right),
\]
where $\mat{J}$ is the $(n - k) \times (n - k)$ matrix of all ones.
To compute $(\mat{A} - \frac{1}{n} \mat{J})^{-1}$, we notice that
$-\frac{1}{n}\mat{J}$ can be written as the rank-one update $(-\frac{1}{n})\, \myvec{1} \myvec{1}^T$
and apply the Sherman-Morrison formula. This yields
\begin{equation}
\label{eq:ablock-inverse}
(\mat{A} - \frac{1}{n} \mat{J})^{-1} = \mat{A}^{-1} + \frac{1}{n - \myvec{1}^T \mat{A}^{-1} \myvec{1}} \mat{A}^{-1} \mat{J} \mat{A}^{-1}.
\end{equation}
We note that $\myvec{1}^T \mat{A}^{-1} \myvec{1}$ is equal to
the sum of all entries of $\mat{A}^{-1}$ and this is bounded by
the sum of all column sums of $\mat{A}^{-1}$, i.\,e.,\xspace
$\myvec{1}^T \mat{A}^{-1} \myvec{1} \leq n - k < n$
and the denominator of Eq.~(\ref{eq:ablock-inverse}) is well-defined.
%
Also, we have $\trace{\mat{A}^{-1} \mat{J} \mat{A}^{-1}} = \sum_{v \in V \setminus S} (\sum_j \mat{A}^{-1}_{vj}) (\sum_i \mat{A}^{-1}_{iv})$
and thus $\trace{(\mat{A} - \frac 1n \mat{J})^{-1}}$ only depends on $\trace{\mat{A}^{-1}}$
and row/column sums of $\mat{A}^{-1}$.
Now consider the case that $S$ is a vertex cover.
In this case, $\mat{A}$ has no nonzero off-diagonal entries
(and all row (or column) sums of $\mat{A}$ are 4).
For the entry $(\msub{\mat{L}_\star}{S})^{-1}[1][1]$, we then obtain using Lemma~\ref{lemma:a-vc}:
$1/(n-(n-k)/4) = 4/(3n+k)$. The inverse $(\mat{A} - \frac 1n \mat{J})^{-1}$, in turn,
resolves to $\frac{1}{4} (\mat{I} + \frac{1}{3n+k} \mat{J})$, so that we obtain
$\trace{(\msub{\mat{L}_\star}{S})^{-1}} = t(n,k)$.
On the other hand, assume that $S$ is not a vertex cover.
In this case, $\mat{A}$ is entry-wise smaller than or equal to
the vertex cover case.
Furthermore, at least one element is now strictly smaller,
i.\,e.,\xspace there exist rows/columns of $\mat{A}$
whose sums are smaller than 4. Due to Eq.~(\ref{eq:rowcol-relation}),
this implies that some row/column sums of $\mat{A}^{-1}$
are strictly larger than in the vertex cover case
(namely, the rows/columns of $\mat{A}$ that sum to less than 4)
and all others are equal to the vertex cover case
(i.\,e.,\xspace the rows/columns of $\mat{A}$ that still sum to 4).
Furthermore, by applying Lemma~\ref{lemma:a-vc},
we notice that $\trace{\mat{A}^{-1}}$ is now larger
compared to the vertex cover case.
Since $\trace{(\mat{A} - \frac{1}{n} \mat{J})^{-1}}$ only depends
on $\trace{\mat{A}^{-1}}$ and the row/column sums of $\mat{A}^{-1}$,
the final trace can only be strictly larger than in the vertex cover case.
\hfill
\end{proof}
\section{Algorithmic Details}
\label{sec:app-algorithmic-details}
\begin{algorithm}[H]
\begin{algorithmic}[1]
\begin{small}
\Function{SamplingUST}{$G$, $u^\star$}
\State \textbf{Input:} graph $G=(V,E)$, universal vertex $u^\star \in V$
\State \textbf{Output:} $R :=$ estimated effective resistance values
\State $R[v] \gets 0 ~\forall v \in V$ \label{line:start-init}
\State $T \gets \{u^\star\}$
\State Let $v_1, \ldots, v_n$ be a reordering of $V$ according to ascending degree
\For{$i \gets 1$ to $n$}
\State $P \gets $ random walk on $G$ from $v_i$ to $T$
\State $LE(P) \gets$ loop erasure of $P$ in order of appearance
\State $T \gets T \cup LE(P)$
\If{ last vertex of $LE(P) = u^\star$}
\State $w \gets$ last visited vertex before $u^\star$
\State $R[w] \gets R[w] + 1$
\label{line:effres-update}
\EndIf
\EndFor \label{line:end-sampling}
\State \textbf{return} $R$
\EndFunction
\end{small}
\end{algorithmic}
\caption{Sampling algorithm for USTs (based on Wilson's algorithm)}
\label{alg:sampling-ust}
\end{algorithm}
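For illustration, the following is a minimal Python sketch of the unweighted case of Algorithm~\ref{alg:sampling-ust} (an adjacency-list representation \texttt{adj} over vertex indices $0,\dots,n-1$ is assumed; in the weighted case, the successor in the random walk would be sampled proportionally to the edge weights). Instead of the per-walk update in Line~\ref{line:effres-update}, it equivalently counts, for each sampled tree, the tree edges incident to \texttt{u\_star} via the final parent pointers.
\begin{verbatim}
import random

def sample_effective_resistances(adj, u_star, tau):
    # Estimate R[v] / tau for every neighbor v of u_star by
    # sampling tau USTs with Wilson's algorithm (unweighted case).
    n = len(adj)
    counts = [0] * n
    order = sorted(range(n), key=lambda v: len(adj[v]))
    for _ in range(tau):
        in_tree = [False] * n
        parent = [-1] * n
        in_tree[u_star] = True
        for root in order:
            v = root
            while not in_tree[v]:  # random walk; overwriting the
                parent[v] = random.choice(adj[v])  # pointer on re-
                v = parent[v]      # visits performs the loop erasure
            v = root
            while not in_tree[v]:  # attach the loop-erased path
                in_tree[v] = True
                v = parent[v]
        for w in adj[u_star]:      # tree edges incident to u_star
            if parent[w] == u_star:
                counts[w] += 1
    return {w: counts[w] / tau for w in adj[u_star]}
\end{verbatim}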
\section{Additional Experimental Results}
\label{apx:additional-exp}
\paragraph{Average Accuracy.} To confirm that
\tool{UST}\xspace performs well on average and not only when considering
the maximal error over many instances, we additionally report the
\emph{average} (over all instances from Table~\ref{tab:time-kt}) of the absolute error
in Figure~\ref{fig:avg-abs-err}.
\begin{figure}[tb]
\centering
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/distr-scal-small-diameter.pdf}
\end{subfigure}\hfill
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/distr-scal-high-diameter.pdf}
\end{subfigure}
\caption{Geometric mean of the speedup of \tool{UST}\xspace with $\epsilon =
\numprint{0.1}$ on multiple compute nodes over a single compute node ($1
\times 24$ cores). Data points are aggregated over the instances in
\Cref{tab:large}.}
\label{fig:distr-scalability}
\end{figure}
\paragraph{Parallel Scalability.} In Figure~\ref{fig:par-scalability} we report the parallel scalability of \tool{UST}\xspace
on multiple cores. We hypothesize that the moderate speedup is mainly due to memory latencies:
while sampling a UST, our algorithm performs several random accesses to the graph data structure
(i.\,e.,\xspace an adjacency array), which are prone to cache misses.
Furthermore, Table~\ref{tab:large} reports detailed statistics about the instances used for
experiments in distributed memory along with running times of \tool{UST}\xspace on $16\times 24$ cores
with $\epsilon = 0.1$ and $\epsilon = 0.3$.
\paragraph{Vertex Classification.} Figure~\ref{fig:vertex-class-lcc} shows the accuracy in semi-supervised vertex
classification in connected graphs when using different strategies to create the training set.
Compared to disconnected graphs, the competitors perform better in this setting.
However, as described in Section~\ref{sec:ex-group}, choosing the training set by group forest
closeness maximization yields nearly the same accuracy as the best competitors in
our datasets.
\begin{figure}[tb]
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics{./plots/legend-quality}
\end{subfigure}
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/avg-abs-small-diam-unweighted}
\end{subfigure}\hfill
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/avg-abs-high-diam-unweighted}
\end{subfigure}
\caption{Arithmetic mean of the absolute errors $|\max_v \mat{\Omega}[v, v] -
\widetilde{\mat{\Omega}}[v, v]|$ over the instances in Table~\ref{tab:time-kt}.}
\label{fig:avg-abs-err}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/par-scal-small-diameter}
\end{subfigure}\hfill
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/par-scal-high-diameter}
\end{subfigure}
\caption{Geometric mean of the speedup of \tool{UST}\xspace with $\epsilon =
\numprint{0.05}$ on multiple cores over a sequential run (shared memory).
Data points are aggregated over the instances in Table~\ref{tab:time-kt}.}
\label{fig:par-scalability}
\end{figure}
\begin{figure}[p]
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics{plots/legend-node-class}
\end{subfigure}
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{plots/node-class-cora_lcc}
\end{subfigure}\hfill
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{plots/node-class-citeseer_lcc}
\end{subfigure}
\caption{Accuracy in semi-supervised vertex classification on the largest
connected component of the datasets when using different strategies to create
the training set. Cora-lcc: $|V| = \numprint{2485}, |E| = \numprint{5069}$,
Citeseer-lcc: $|V| = \numprint{2110}, |E| = \numprint{3668}$.}
\label{fig:vertex-class-lcc}
\end{figure}
\begin{table}[p]
\centering\footnotesize
\begin{tabular}{c}
Complex networks
\end{tabular}
\input{tables/insts_large_small_diam}\medskip
\begin{tabular}{c}
Road networks
\end{tabular}
\input{tables/insts_large_high_diam}
\caption{Large networks used for scalability experiments
in distributed memory and running time of \tool{UST}\xspace on $16\times 24$ cores.}
\label{tab:large}
\end{table}
\begin{table}[p]
\centering\footnotesize
\input{tables/group-forest-time}
\caption{Running time of our greedy algorithm for group forest closeness maximization.}
\label{tab:time-group}
\end{table}
\begin{table}[p]
\centering\footnotesize
\begin{tabular}{c}
Complex networks
\end{tabular}
\input{tables/ust_time_small_diam}\medskip
\begin{tabular}{c}
Road networks
\end{tabular}
\input{tables/ust_time_high_diam}
\caption{Running time in seconds of \tool{UST}\xspace on the networks in
\Cref{tab:time-kt}.}
\end{table}
\section{Introduction}
\label{sec:intro}
Massive graph data sets with millions of edges (or more) have become abundant.
Today, applications come from many different scientific and
commercial fields~\cite{newman2018networks,barabasi2016network}.
Network analysis algorithms shall uncover non-trivial relationships
between vertices or groups of vertices in these data.
One popular concept used in network analysis is \emph{centrality}.
Centrality measures assign to each vertex (or edge) a score
based on its structural importance; this allows to rank the vertices and to identify the important \changed{ones~\cite{DBLP:journals/im/BoldiV14,DBLP:conf/kdd/WhiteS03}.}
\changed{Measures that capture not only local graph properties are often more meaningful, yet relatively
expensive to compute~\cite{Grinten2020ScalingUN}.
Also,} different applications may re\-quire different centrality measures, none is universal.
\changed{Algebraic measures such as random-walk betweenness, electrical closeness
(see Refs.\ in~\cite{angrimanPGM20,Grinten2020ScalingUN}), and}
\emph{forest closeness centrality}~\cite{Zhang19} are gaining \changed{increasing} attention.
\changed{Forest closeness} is based on forest distance, which was
introduced by Chebotarev and Shamis~\cite{Chebotarev00} to account not only for shortest
paths.\footnote{Instead, all paths are taken into account, but shorter ones are more important.
This notion of distance/proximity has many applications in graph/data mining and beyond~\cite{Chebotarev00}.}
Moreover, it applies to disconnected graphs as well.
\changed{ In sociology, forest distances are shown to better capture more than one sensitive relationship index,
such as social proximity and group cohesion~\cite{chebotarev06matrixforest}.}
Consequently, forest closeness centrality has two main advantages \changed{over many other
centrality} measures~\cite{Zhang19}: (i) by taking not only shortest paths into account,
it has a high discriminative power and (ii) unlike \changed{related algebraic measures such as the above},
it can handle disconnected graphs out of the box.
Recently, Jin et al.\xspace~\cite{Zhang19} provided an approximation algorithm for forest closeness centrality
with nearly-linear time complexity. Their algorithm uses the Johnson-Lindenstrauss transform (JLT)
and fast linear solvers; it
can handle much larger inputs than what was doable before,
but is still time-consuming. For example, graphs with $\approx$1M vertices and $\approx$2-3M edges
require more than $2.3$ or $4.7$ \emph{hours} for a reasonably accurate ranking in their study~\cite{Zhang19}.
Obviously, this hardly scales to massive graphs with $> 50$M edges; corresponding applications
would benefit significantly from faster approximation methods.
To this end, we devise new approximation algorithms for two problems:
first, for the individual forest closeness centrality value of each node --
by adapting uniform spanning tree techniques from recent related work on electrical closeness centrality~\cite{angrimanPGM20,barthelm2019estimating}.
In a next step, we consider \emph{group} forest closeness centrality, where one
seeks a set of vertices that is central jointly. To the best of our knowledge,
we are the first to address the group case for this centrality measure. We prove that group
forest closeness is $\mathcal{NP}$-hard and adapt the greedy algorithm by Li et al.\xspace~\cite{DBLP:conf/www/0002PSYZ19}
to this problem.
Our experiments on common benchmark graphs show that our algorithm for ranking individual
vertices is always substantially faster than Jin et al.\xspace's~\cite{Zhang19} -- for sufficiently large
networks by one (better accuracy) to two (similar accuracy) orders of magnitude in a sequential setting.
Our new algorithm can now rank all vertices in networks of up to $334$M edges with reasonable accuracy
in less than 20 minutes if executed in an MPI-parallel setting.
Also, experiments on semi-supervised vertex classification
demonstrate that our new group forest closeness measure
improves upon existing measures in the case of disconnected graphs.
\section{Definitions and Notation}
\label{sec:prelim}
As input we consider finite and simple undirected graphs $G = (V, E, \myvec{w})$ with $n$ vertices,
$m$ edges, and edge weights $\myvec{w} \in \mathbb{R}_{\geq 0}^{m}$.
By $\mat{L}$ we denote the Laplacian matrix of $G$, defined as $\mat{L}
= \mat{diag}(\deg_G(1), \ldots, \deg_G(n)) - \mat{A}_G$, where $\mat{A}_G$ denotes
the (weighted) adjacency matrix of $G$ and $\deg_G(v)$ the (weighted) degree of vertex $v$.
\paragraph{Closeness centrality.}
Let $d(u, v)$ denote the graph distance in $G$.
The \emph{farness} of a vertex $u$ is defined as
$f^d(u) := \sum_{v \neq u} d(u, v)$,
i.\,e.,\xspace up to a scaling factor of $\frac 1 n$, the farness of $u$ quantifies the average
distance of $u$ to all other vertices.
Given this definition, the \emph{closeness centrality}
of $u$ is defined as $C^d(u) := \frac n{f^d(u)}$.
Closeness is a widely used centrality measure;
the higher the numerical value of $C^d(u)$ is, the more central is $u$ within the graph.
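For example, in an unweighted path on three vertices, the middle vertex has farness $2$ and thus closeness $3/2$, whereas each endpoint has farness $3$ and closeness $1$.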
It is often criticized for mapping the vertex scores into a rather narrow interval~\cite{newman2018networks}.
\paragraph{Forest Distance / Closeness.}
Forest distance generalizes the common graph distance and takes not only shortest paths
into account~\cite{Chebotarev00}. It is expressed in terms of the (parametric) forest matrix of a graph $G$ defined as
$\mat{\Omega} := \mat{\Omega}_{\alpha} := (\alpha\mat{L} + \mat{I})^{-1}$,
where $\mat{I}$ is the identity matrix and $\alpha > 0$ controls the importance
of short vs long paths between vertices
(some papers prefer the expression $(\mat{L} + \alpha \mat{I})^{-1}$,
which is equivalent to $\mat{\Omega}_{\alpha}$ up to scaling;
non-parametric variants of forest closeness fix
$\alpha$ to $1$~\cite{chebotarev2006proximity}):
\begin{definition}[Forest distance~\cite{Chebotarev00}]
\label{def:forest-dist-pair}
The forest distance $\fdistp{u}{v}$ for a vertex pair $(u,v)$ is defined as:
\begin{equation}
\label{eq:forest-dist-pair}
\begin{split}
\fdistp{u}{v} & := \fdistpalpha{u}{v} := (\uvec{u} - \uvec{v})^T \mat{\Omega}_{\alpha} (\uvec{u} - \uvec{v})
\\ & = \ment{\mat{\Omega}_{\alpha}}{u}{u} + \ment{\mat{\Omega}_{\alpha}}{v}{v} -2\ment{\mat{\Omega}_{\alpha}}{u}{v}.
\end{split}
\end{equation}
\end{definition}
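As a small illustration of Definition~\ref{def:forest-dist-pair}, consider the unweighted graph on two vertices $u,v$ joined by a single edge, and let $\alpha = 1$. Then
\[
\alpha\mat{L} + \mat{I} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},
\qquad
\mat{\Omega}_{\alpha} = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},
\]
so that $\fdistp{u}{v} = \frac{2}{3} + \frac{2}{3} - \frac{2}{3} = \frac{2}{3}$. If the edge is removed, then $\mat{\Omega}_{\alpha} = \mat{I}$ and $\fdistp{u}{v} = 2$; in particular, forest distances between disconnected vertices remain finite.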
\changed{Chebotarev and Shamis~\cite{Chebotarev00} show} that forest distance is a metric and list other desirable properties.
The name \emph{forest} distance stems from the fact that an entry $\ment{\mat{\Omega}}{u}{v}$ equals the fraction
of spanning rooted forests in $G$ in which $u$ and $v$ belong to the same tree, see~\cite{Zhang19}.
Forest distance closeness centrality, or forest closeness for short, then uses forest distances
instead of the usual graph distance in the sum over all other vertices:
\begin{definition}[Forest closeness~\cite{Chebotarev00}]
\label{forest-dist-vertex}
The \emph{forest farness} $\fdistu{u}$ of a vertex $u$
is defined as $\fdistu{u} := \sum_{v \in V \setminus \{u\}}\fdistp{u}{v}$.
Likewise, the \emph{forest distance closeness centrality} of $u$ is defined as:
$\fdistinv{u} := \frac{n}{\fdistu{u}}$.
\end{definition}
To simplify notation and when clear from the context, we often omit $\alpha$ in the following.
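For small graphs, both definitions can be evaluated directly by matrix inversion. The following sketch (the function names are ours and illustrative only) mirrors Definitions~\ref{def:forest-dist-pair} and~\ref{forest-dist-vertex} and can serve as a ground-truth oracle:
\begin{verbatim}
import numpy as np

def forest_matrix(L, alpha=1.0):
    # Omega_alpha = (alpha * L + I)^{-1}; cubic time, small graphs only.
    return np.linalg.inv(alpha * L + np.eye(L.shape[0]))

def forest_closeness(L, alpha=1.0):
    O = forest_matrix(L, alpha)
    d = np.diag(O)
    # pairwise forest distances: O[u,u] + O[v,v] - 2 * O[u,v]
    dist = d[:, None] + d[None, :] - 2.0 * O
    farness = dist.sum(axis=1)   # diagonal terms are zero
    return L.shape[0] / farness
\end{verbatim}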
\paragraph{Effective Resistance and Electrical Closeness.}
As already realized by Chebotarev and Shamis~\cite{Chebotarev00}, there is a close
connection between forest distance and effective resistance, a.\,k.\,a. resistance distance
(more details on this connection in Section~\ref{sub:connection-forest-resistance}).
Effective resistance is a pairwise metric on the vertex set of a graph
and also plays a central role in several centrality
measures~\cite{teixeira2013spanning, DBLP:conf/stacs/BrandesF05}.
The notion of effective resistance comes from viewing $G$
as an electrical circuit in which each edge $e$ is a resistor
with resistance $1/\vent{w}{e}$.
Following fundamental electrical laws, the effective resistance $\effres{u}{v}$
between two vertices $u$ and $v$ (that may or may not share an edge)
is the potential difference between $u$ and $v$ when a unit
of current is injected into $G$ at $u$ and extracted at $v$.
\changed{Effective resistance is also proportional to hitting times of
random walks~\cite{DBLP:books/daglib/0009415} and thus has connections to Markov chains.}
Computing the effective resistance $\effres{u}{v}$
of a vertex pair $(u,v) \in V \times V $ can be done by means of the Laplacian
pseudoinverse $\mat{L_{G}}^\dagger$ as
\changed{
\begin{equation}
\label{eq:eff-res}
\effres{u}{v} = \ment{\mat{L_{G}}^\dagger}{u}{u} + \ment{\mat{L_{G}}^\dagger}{v}{v} - 2 \ment{\mat{L_{G}}^\dagger}{u}{v}
\end{equation}
}(or by solving a Laplacian linear system).
Given the definition of $\effres{u}{v}$, one obtains the well-known
definition of \emph{electrical closeness} by replacing
the forest distance $\fdistp{u}{v}$ by $\effres{u}{v}$
in Definition~\ref{forest-dist-vertex}.
Electrical closeness (aka \emph{current-flow closeness} or \emph{information centrality})
has been widely studied (see e.\,g.,\xspace~\cite{DBLP:conf/stacs/BrandesF05,DBLP:conf/www/0002PSYZ19,DBLP:conf/siamcsc/BergaminiWLM16,Grinten2020ScalingUN}),
but only in the context of connected graphs.
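As an illustration of Eq.~(\ref{eq:eff-res}) only (cubic time, hence far from the scalable methods discussed below):
\begin{verbatim}
import numpy as np

def effective_resistance(L, u, v):
    # r(u, v) via the Moore-Penrose pseudoinverse of the Laplacian.
    P = np.linalg.pinv(L)
    return P[u, u] + P[v, v] - 2.0 * P[u, v]
\end{verbatim}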
\section{Related Work}
\label{sc:rel-work}
The most relevant algorithmic work regarding forest closeness
was proposed by Jin et al.\xspace~\cite{Zhang19}, who presented an $\epsilon$-approximation algorithm for
forest distance and forest closeness for all graph nodes.
The authors exploit the Johnson-Lindenstrauss lemma~\cite{johnson1984extensions}, thus use random projections
and rely on fast Laplacian solvers~\cite{CohenKyng14}
to avoid matrix inversions.
The algorithm has a running time of $\ensuremath{\mathcal{O}}(m\epsilon^{-2}\log^{2.5}{n}\log(1/\epsilon)\poly{\log\log n})$
and provides a $(1\pm \epsilon)$-approximation guarantee with high probability (assuming an exact Laplacian solver).
In practice, as mentioned above, their approach takes $> 2$ hours
on graphs with $\approx$1M vertices and $\approx$2-3M edges for a reasonably accurate ranking.
\changed{Our aim is a better algorithmic solution for forest centrality by
leveraging our recent results on the approximation of
the diagonal entries of \changed{$\mat{L_{G}}^\dagger$}~\cite{angrimanPGM20}.
The latter exploits the connection to effective resistances
and electrical closeness and is stated here for completeness:}
\begin{proposition}[\cite{angrimanPGM20}]
\label{effres:time-complexity}
Let $G = (V,E)$ be an undirected and weighted graph with diameter $\diam{G}$
and volume $\operatorname{vol}(G)$.
There is an algorithm that computes with probability $1-\delta$ an approximation of $\diag{\mat{L_{G}}^\dagger}$
with absolute error $\pm \epsilon$ in expected time
$\ensuremath{\mathcal{O}}(\operatorname{vol}(G) \cdot \operatorname{ecc}^3(u) \cdot \epsilon^{-2} \cdot \log(\operatorname{vol}(G)/\delta))$
\changed{, where $\operatorname{ecc}(u)$ is the eccentricity of a selected node $u$}.
\end{proposition}
That algorithm exploits three major insights:
(i) to compute the electrical closeness of a node $u$, one only needs $\ment{\mat{L_{G}}^\dagger}{u}{u}$
and the trace of $\mat{L_{G}}^\dagger$;
(ii) after obtaining the $u$-th column of $\mat{L_{G}}^\dagger$ (by solving one Laplacian linear system)
and all effective resistances $\effres{u}{v}$ between $u$ and all $v$,
the remaining elements of $\operatorname{diag}(\mat{L_{G}}^\dagger)$ can be calculated via Eq.~(\ref{eq:eff-res}),
(iii) effective resistances can be approximated by sampling uniform spanning trees (USTs), e.\,g.,\xspace with Wilson's algorithm~\cite{Wilson:1996:GRS:237814.237880},
by exploiting Kirchhoff's theorem.
For our purposes, it can be stated as follows: the effective resistance of an edge $\{u,v\} \in E$ equals the probability that
$\{u,v\}$ is contained in a spanning tree drawn uniformly at random from all spanning trees of $G$ (comp.~\cite{DBLP:books/daglib/0009415}).
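To make insight (iii) concrete, the following compact sketch shows Wilson's loop-erased random-walk sampler for connected unweighted graphs (our simplification; the implementation used in Section~\ref{sec:algorithm} additionally handles edge weights and is engineered for performance):
\begin{verbatim}
import random

def wilson_ust(adj, root):
    # adj: dict vertex -> list of neighbors (connected graph assumed);
    # returns the (child, parent) edges of a uniform spanning tree.
    in_tree, parent = {root}, {}
    for s in adj:
        u = s
        while u not in in_tree:                # walk until the tree is hit;
            parent[u] = random.choice(adj[u])  # overwriting erases loops
            u = parent[u]
        u = s
        while u not in in_tree:                # commit the loop-erased path
            in_tree.add(u)
            u = parent[u]
    return list(parent.items())
\end{verbatim}
The relative frequency of an edge over many sampled trees then estimates its effective resistance.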
The algorithm proposed in this paper for approximating individual centrality scores is
based on the above insights, transfers them to a different
graph and provides a new analysis with an improved running time for the case at hand.
Barthelm\'e et al.\xspace~\cite{barthelm2019estimating} proposed an algorithm that uses techniques similar to
the ones in Ref.~\cite{angrimanPGM20} to estimate inverse traces
that arise in regularized optimization problems.
Their algorithm is based on uniform spanning forests,
also sampled with Wilson's algorithm.
Finally, for the group centrality case, the most relevant algorithm is Li et al.\xspace's~\cite{DBLP:conf/www/0002PSYZ19};
it employs JLT and fast Laplacian solvers to approximate group electrical closeness
centrality in nearly-linear time.
\section{Forest Closeness of Individual Vertices}
\label{sec:algorithm}
By definition, forest closeness for a vertex $u$ can be computed from
all forest distances $\fdistp{u}{v}$, $v \in V \setminus \{u\}$,
e.\,g.,\xspace by matrix inversion. Yet, inversion takes cubic time in practice and is thus impractical for large graphs.
Hence, we exploit a relation between forest distance and effective resistance
to approximate the forest farness more efficiently than existing approximation algorithms.
By adapting our algorithm for electrical closeness~\cite{angrimanPGM20}, we obtain
an algorithm with a (probabilistic) \changed{additive} approximation guarantee of $\pm \epsilon$;
it runs in nearly-linear (in $m$) expected time.
\subsection{From Forest Farness to Electrical Farness (And Back Again).}
\label{sub:connection-forest-resistance}
As mentioned, we exploit a result that relates
forest distances to effective resistances.
This requires the creation of an \emph{augmented} graph $G_\star := G_{\star, \alpha} := (V',E')$
from the original graph $G = (V,E)$.
To this end, a new \emph{universal vertex} $u^\star$ is added to $G$,
such that $V' = V \cup \{u^\star\}$ and $E' = E \cup \{\{u^\star, v\} \mid v \in V\}$.
In particular, $u^\star$ is connected
to all other vertices of $G_\star$ with edges of
weight one.
Furthermore, the weights of all edges in $E'$ that belong to $E$ are
\changed{multiplied} by $\alpha$.
\begin{proposition}[comp.\ Ref.~\cite{Chebotarev00}]
\label{forest-resistance}
For a weigh\-ted graph $G = (V,E)$ and any vertex pair $ (v_1,v_2) \in V \times V$,
the forest distance $\fdistp{v_1}{v_2}$ in $G$ equals
the effective resistance $\effres{v_1}{v_2}$ \changed{in the augmented graph $G_\star$.}
\end{proposition}
The full proof of Proposition~\ref{forest-resistance} can be found
in Ref.~\cite{Chebotarev00}. Nevertheless, we provide here an explanation of
why the above proposition holds.
Recall that the effective resistance between any two vertices
of $G$ is computed by means of $\mat{L_{G}}^\dagger$,
while the forest distances of the same pair are computed by means of
the forest matrix of $G$, $\mat{\Omega} = (\alpha\mat{L}+\mat{I})^{-1}$.
When calculating the effective resistance in $G_\star$, we use its Laplacian matrix
$\mat{L}_\star$, which consists of a block matrix corresponding to $(\alpha\mat{L} + \mat{I})$ and
an additional row and column that corresponds to the universal
vertex $u^\star$.
It turns out that the Moore-Penrose pseudoinverse of $\mat{L}_\star$ is the block matrix
that consists of
$\mat{\Omega}$ with an additional row and column corresponding to $u^\star$~\cite{Chebotarev00}.
Thus, \changed{$\ment{\mat{\Omega}}{u^\star}{u^\star} + \ment{\mat{\Omega}}{v}{v} - 2\ment{\mat{\Omega}}{u^\star}{v} $ equals
$\ment{\mat{L}_\star^\dagger}{u^\star}{u^\star} + \ment{\mat{L}_\star^\dagger}{v}{v} - 2\ment{\mat{L}_\star^\dagger}{u^\star}{v} $},
which corresponds to the pairwise effective resistance \changed{$\effres{u^\star}{v}$} in $G_\star$.
\begin{corollary}
\label{fcl-elcl}
Forest closeness in graph $G$ equals electrical closeness in the augmented graph $G_\star$.
\end{corollary}
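The construction and the corollary are easy to verify numerically; the sketch below (ours) assembles $\mat{L}_\star$ in matrix form and checks one vertex pair against the forest distance:
\begin{verbatim}
import numpy as np

def L_star(L, alpha=1.0):
    # Laplacian of G_star: top-left block alpha*L + I; the universal
    # vertex is attached to all n vertices with unit-weight edges.
    n = L.shape[0]
    top = np.hstack([alpha * L + np.eye(n), -np.ones((n, 1))])
    bot = np.hstack([-np.ones((1, n)), np.array([[float(n)]])])
    return np.vstack([top, bot])

rng = np.random.default_rng(0)
A = np.triu(rng.integers(0, 2, (6, 6)).astype(float), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
Omega = np.linalg.inv(L + np.eye(6))               # alpha = 1
P = np.linalg.pinv(L_star(L))
fd = Omega[0, 0] + Omega[1, 1] - 2 * Omega[0, 1]   # forest distance
er = P[0, 0] + P[1, 1] - 2 * P[0, 1]               # effective resistance
assert np.isclose(fd, er)
\end{verbatim}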
\subsection{Forest Farness Approximation Algorithm.}
\label{sub:new-forest-algo}
As mentioned, our new algorithm for forest closeness exploits previous algorithmic results for
approximating $\operatorname{diag}(\mat{L_{G}}^\dagger)$ and electrical
closeness. To do so, we rewrite forest farness $\fdistu{v}$ following Ref.~\cite{Merris98}:
\begin{small}
\begin{align}
\label{eq-fdistu-tr}
\begin{split}
\fdistu{v} & = n \cdot \ment{\mat{\Omega}}{v}{v} + \trace{\mat{\Omega}} - 2 \sum_{w \in V} \ment{\mat{\Omega}}{v}{w} \\
& = n \cdot \ment{\mat{\Omega}}{v}{v} + \trace{\mat{\Omega}} - 2,
\end{split}
\end{align}
\end{small}
where the last equation holds since $\mat{\Omega}$ is doubly stochastic
($\ment{\mat{\Omega}}{v}{v} = 1 - \sum_{w \neq v}\ment{\mat{\Omega}}{v}{w}$)~\cite{Merris98}.
From Eq.~(\ref{eq-fdistu-tr}) it is clear that we only require the diagonal elements
of $\mat{\Omega}$ to compute $\fdistu{v}$ for any $v \in V$.
We approximate the diagonal elements of $\mat{\Omega}$ with Algorithm~\ref{alg:approx-diag-omega},
whose main idea is to sample uniform spanning trees (USTs) to approximate $\diag{\mat{L}_\star^{\dagger}}$:
\begin{enumerate}
\item We build the augmented graph $G_\star$ (Line~\ref{line:build-augmented-graph}) and let the universal vertex
$u^\star$ of $G_\star = (V',E')$ be the so-called \emph{pivot vertex}
(Line~\ref{line:set-pivot}) -- due to its optimal eccentricity of $1$.
Later, we compute the column of $\mat{\Omega}$ that corresponds
to $u^\star$, $\ment{\mat{\Omega}}{:}{u^\star}$, by solving the Laplacian linear system
$\mat{L}_\star \myvec{x} = \uvec{u^\star} - \frac{1}{n+1}\cdot \mathbf{1}$ (Line~\ref{line:solve-system}).
The solver's accuracy is controlled by $\eta$, which is set in Line~\ref{line:set-eta}
($\kappa$ is used to trade the accuracy of the solver with the accuracy of the following sampling step).
\item We sample $\tau$ USTs in $G_\star$ with Wilson's algorithm~\cite{Wilson:1996:GRS:237814.237880}
(also see
\changed{\arxivOrCamera{\Cref{alg:sampling-ust} in \Cref{sec:app-algorithmic-details}}{Algorithm 3 in the full version~\cite{full-version}}}),
where the sample size $\tau$ is yet to be determined.
With this sample we approximate the effective resistance $\effresG{G_\star}{u^\star}{v}$ for all $v \in V$
(Lines~\ref{line:ust-sampling-loop}-\ref{line:ust-sampling}). More precisely, if an edge $\{u^\star,v\}$ appears
in the sampled tree, we increase $R[v]$ by $1$ (unweighted case) or by the weight of the current
tree (weighted case) -- and later ``return'' $R[v] / \tau$ (unweighted case) or the relative total weight of
all sampled trees (weighted case) that contain edge $\{u^\star,v\}$ in Line~\ref{line:diag-remain}.
\item We compute the remaining $\ment{\mat{\Omega}}{v}{v}$ for $v \in V $ in Lines~\ref{line:fill-loop} and~\ref{line:diag-remain}
following Eqs.~(\ref{eq:forest-dist-pair}) and~(\ref{eq:eff-res}):
\begin{align*}
\label{eq:diag-comp}
\ment{\mat{\Omega}}{v}{v} & = \fdistp{u^\star}{v} - \ment{\mat{\Omega}}{u^\star}{u^\star} + 2 \ment{\mat{\Omega}}{v}{u^\star} \\
& = \effresG{G_\star}{u^\star}{v}- \ment{\mat{\Omega}}{u^\star}{u^\star} + 2 \ment{\mat{\Omega}}{v}{u^\star},
\end{align*}
where $\effresG{G_\star}{u^\star}{v}$ is then approximated by $R[v] / \tau$
(the weighted case is handled as described above).
\end{enumerate}
\begin{algorithm}[bt]
\begin{algorithmic}[1]
\begin{small}
\Function{ApproxDiagForestMatrix}{$G$, $\alpha$, $\epsilon$, $\delta$}
\State \textbf{Input:} Undir.\ graph $G = (V, E)$, control parameter $\alpha$,
error bound $0 < \epsilon < 1$, probability $0 < \delta < 1$
\State \textbf{Output:} $\diag{\widetilde{\mat{\Omega}}}$, i.\,e.,\xspace an $(\epsilon, \delta)$-approximation of $\diag{\mat{\Omega}}$
\State Create augmented graph $G_\star = (V',E')$ as described in Proposition~\ref{forest-resistance}; compute $\operatorname{vol}(G)$ and $c$ \label{line:build-augmented-graph}
\Comment{$\ensuremath{\mathcal{O}}(m+n)$}
\State $u^\star \gets $ universal vertex of $G_\star$ \label{line:set-pivot}
\State Pick constant $\kappa \in (0, 1)$ arbitrarily \label{line:pick-kappa}
\State $\eta \gets \frac{\kappa \epsilon}{6 \sqrt{\alpha (c+2) \operatorname{vol}(G)}}$ \label{line:set-eta}
\State $\tau \gets \lceil \log(2m/\delta) / (2(1-\kappa)^2\epsilon^2)\rceil$
\For{$i \gets 1$ to $\tau$}\label{line:ust-sampling-loop}
\Comment {\small $\tau$ times}
\State $R \gets$ \textsc{SamplingUST}($G_\star$, $u^\star$)
\label{line:wilson}
\Comment{\small $\ensuremath{\mathcal{O}}(\alpha \operatorname{vol}(G) + n)$}\label{line:ust-sampling}
\EndFor
\State Solve \changed{ $\mat{L}_\star \myvec{x} = \uvec{u^\star} - \frac{1}{n+1}\cdot \mathbf{1}$ }for $\myvec{x}$
\Comment {\small accuracy: $\eta$, $\tilde{\ensuremath{\mathcal{O}}}(m \log^{1/2} n \log(1/\eta))$} \label{line:linear-system} \label{line:solve-system}
\For{$v \in V$} \label{line:fill-loop}
\Comment {\small All iterations: $\ensuremath{\mathcal{O}}(n)$}
\State \changed{ $\ment{\widetilde{\mat{\Omega}}}{v}{v} \gets R[v] / \tau - \vent{x}{u^\star} + 2 \vent{x}{v}$} \label{line:diag-remain}
\Comment{unweighted case, for weighted see text}
\EndFor \label{line:end-fill}
\State \textbf{return} $\operatorname{diag}(\widetilde{\mat{\Omega}})$
\EndFunction
\end{small}
\end{algorithmic}
\caption{Approximation algorithm for $\diag{\mat{\Omega}}$}
\label{alg:approx-diag-omega}
\end{algorithm}
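For readers who prefer executable pseudocode, the following condensed Python rendering of Algorithm~\ref{alg:approx-diag-omega} covers the unweighted case with $\alpha = 1$ (a sketch with dense linear algebra and helper names of our own choosing; the actual implementation uses a sparse Laplacian solver and the optimizations discussed below):
\begin{verbatim}
import math, random
import numpy as np

def wilson_parents(adj, root):
    # parent pointers of a uniform spanning tree rooted at `root`
    in_tree, parent = {root}, {}
    for s in adj:
        u = s
        while u not in in_tree:
            parent[u] = random.choice(adj[u]); u = parent[u]
        u = s
        while u not in in_tree:
            in_tree.add(u); u = parent[u]
    return parent

def approx_diag_forest_matrix(adj, eps=0.3, delta=0.1, kappa=0.5):
    n = len(adj); m = sum(len(nb) for nb in adj.values()) // 2
    star = n                                   # universal vertex u*
    adj_s = {u: nb + [star] for u, nb in adj.items()}
    adj_s[star] = list(range(n))
    tau = math.ceil(math.log(2 * m / delta)
                    / (2 * (1 - kappa) ** 2 * eps ** 2))
    R = np.zeros(n)
    for _ in range(tau):                       # UST sampling in G_star
        parent = wilson_parents(adj_s, star)
        for v in range(n):
            R[v] += parent[v] == star          # is {u*, v} a tree edge?
    R /= tau                                   # estimates r(u*, v)
    # column u* of the pseudoinverse of L_star (dense, for brevity)
    B = np.zeros((n + 1, n + 1))
    for u, nb in adj_s.items():
        B[u, nb] = 1.0
    Ls = np.diag(B.sum(axis=1)) - B
    x = np.linalg.pinv(Ls) @ (np.eye(n + 1)[star]
                              - np.ones(n + 1) / (n + 1))
    # Omega[v, v] ~= R[v] - x[u*] + 2 * x[v]
    return R - x[star] + 2 * x[:n]
\end{verbatim}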
By using $G_\star$ and thus a universal vertex $u^\star$ as pivot, there are several noteworthy changes
compared to the algorithm in Ref.~\cite{angrimanPGM20}. First, the graph $G_\star$ has constant
diameter and the vertex $u^\star$ has constant eccentricity $1$. This will be important for our refined
running time analysis. Second, the approximation of the effective resistances can be simplified:
while Ref.~\cite{angrimanPGM20} requires an aggregation along shortest paths, we notice that
here $u^\star$ and all other vertices are connected by paths of one edge only; thus,
the relative frequency of an edge $\{u^\star,v\}$ in the UST sample for $G_\star$ is sufficient here
for our approximation:
\begin{proposition}
\label{prop:unbiased}
Let $u^\star$ be the universal vertex in $G_\star$.
Then, for any edge $\{u^\star,v\} \in E'$ holds: its relative frequency (or weight) in the UST sample
is an unbiased estimator for $\effresG{G_\star}{u^\star}{v}$.
\end{proposition}
The proof of Proposition~\ref{prop:unbiased} relies
on Kirchhoff's theorem (see~\cite[Ch.~II]{DBLP:books/daglib/0009415})
and can be found in \changed{\arxivOrCamera{\Cref{apx:technical-proofs}}{the full version
of this paper~\cite{full-version}}}.
As we will see in our main algorithmic result (Theorem~\ref{thm:time-approx}),
Algorithm~\ref{alg:approx-diag-omega} is not only an unbiased estimator,
but even provides a probabilistic approximation guarantee. To bound its running
time, we analyze Wilson's algorithm for generating a UST first.
\begin{proposition}
\label{prop:ust-wilson-time}
For an undirected graph $G$ with constant diameter, each call to Wilson's algorithm
on $G_\star$ (in
Line~\ref{line:ust-sampling})
takes $\ensuremath{\mathcal{O}}(\alpha \operatorname{vol}(G) + n)$ expected time,
where $\operatorname{vol}(G) = \sum_{v \in V} \deg(v)$ is the (possibly weighted) volume of $G$.
\end{proposition}
The proof of Proposition~\ref{prop:ust-wilson-time} can be found
in \changed{\arxivOrCamera{\Cref{apx:technical-proofs}}{the full version~\cite{full-version}}}.
Note that in the case of unweighted graphs with $\alpha = 1$ and $m = \Omega(n)$ (which is not uncommon in
our context, see for example Ref.~\cite{Zhang19}), we obtain
a time complexity of $\ensuremath{\mathcal{O}}(m)$ (the volume is $2m$ by the handshake lemma).
Taking all the above into account, we arrive at our main algorithmic result on running time
and approximation bounds of Algorithm~\ref{alg:approx-diag-omega}.
The result and its proof are adaptations of Theorem~3 in Ref.~\cite{angrimanPGM20}.
When considering forest (as opposed to electrical) closeness centrality, we exploit the constant diameter of $G_\star$
and improve the time by a factor of $(\operatorname{ecc}(u))^3$, \changed{ where $u$ is a selected pivot node}.
This expression is $\ensuremath{\mathcal{O}}(\log^3 n)$ for the small-world graphs in the focus of Ref.~\cite{angrimanPGM20}
(but can be larger for general graphs). In the following,
$\tilde{\ensuremath{\mathcal{O}}}(\cdot)$ hides polyloglog factors from the linear solver~\cite{CohenKyng14}.
\begin{theorem}
\label{thm:time-approx}
Let $\frac{n}{\alpha \cdot \operatorname{vol}(G)}$ be bounded from above by a constant\footnote{The condition ensures that the algorithm is not affected by unduly heavy additional edges
to $u^\star$. If the condition is met, the graph edges still play a reasonable role in the distances
and in the UST computations.} and
let $0 < \epsilon, \delta < 1$.
Then, with probability $1-\delta$, Algorithm~\ref{alg:approx-diag-omega}
computes an approximation of $\diag{\mat{\Omega}}$ with absolute error $\pm \epsilon$ in (expected) time
$\tilde{\ensuremath{\mathcal{O}}}(m \log^{1/2} n \log(\sqrt{\alpha \operatorname{vol}(G)} / \epsilon)) +
\ensuremath{\mathcal{O}}(\log(n / \delta) / \epsilon^2 \cdot \alpha \operatorname{vol}(G))$.
\end{theorem}
Theorem~\ref{thm:time-approx} is proved in \changed{\arxivOrCamera{\Cref{apx:technical-proofs}}{
the full version~\cite{full-version}}}.
Let us simplify the result for a common case:
\begin{corollary}
If $G$ is unweighted, $\alpha$ a constant and $\delta := 1/n$ to get high probability,
the (expected) running time of Algorithm~\ref{alg:approx-diag-omega}
becomes $\tilde{\ensuremath{\mathcal{O}}}(m(\log^{1/2}n \log(n/\epsilon) + \epsilon^{-2} \log n))$.
Assuming $\epsilon$ is small enough so that $\log n \leq 1/\epsilon$, we can further simplify this to $\tilde{\ensuremath{\mathcal{O}}}(m \epsilon^{-2} \log^{3/2}n)$.
\end{corollary}
This is nearly-linear in $m$, which is also true for the JLT-based approximation
(with high probability) of Jin et al.\xspace~\cite{Zhang19}.
They state a running time of $\tilde{\ensuremath{\mathcal{O}}}(m \epsilon^{-2} \log^{5/2} n \log(1/\epsilon))$
for unweighted $G$ and fixed $\alpha = 1$. While we save at least a factor of $\log n$,
they achieve a relative approximation guarantee, which is difficult to compare to ours.
\section{Group Forest Closeness Centrality}
\changed{
Since their introduction by Everett and Borgatti~\cite{everett99gc},
group centrality measures have been used in various
applications (see~\cite{Grinten2020ScalingUN}).
These measures indicate the importance of whole vertex
sets -- together as a group. They usually favor sets that
\enquote{cover} the graph well. Intuitively, a
group variant of forest closeness should reward vertex sets
that are ``forest-close'' to the remainder of the graph.
More formally,} to extend the concept of forest closeness
to groups of vertices, it is enough
to define the forest farness $\fdistu{S}$ of a set $S$ of vertices;
the forest closeness of $S$ is then
given by $\fdistinv{S} := \frac 1{\fdistu{S}}$.
Recall (from Proposition~\ref{forest-resistance})
that the forest farness of a single vertex $v$
of $G$ is identical to the electrical farness of $v$ in
the augmented graph $G_\star$.
We use this fact to generalize the forest farness
of a set $S$ of vertices of $G$. In particular, we
define $\fdistu{S} := \trace{(\msub{\mat{L}_\star}{S})^{-1}}$,
where $\mat{L}_\star$ is the Laplacian matrix of the augmented graph $G_\star$
and by $\msub{\mat{L}_\star}{S}$ we denote the matrix that is obtained from
$\mat{L}_\star$ by removing all rows and columns with indices in $S$.
This definition is based on a corresponding
definition of electrical farness by Li et al.\xspace~\cite{DBLP:conf/www/0002PSYZ19}.
For $|S| = 1$, it coincides with the definition of
electrical closeness from Section~\ref{sec:prelim}~\cite{Izmailian_2013};
thus, our definition of group forest closeness
is compatible with the definition of the
forest closeness of individual vertices
(i.\,e.,\xspace Definition~\ref{forest-dist-vertex}).
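In matrix terms the definition is a one-liner; the following sketch (ours) evaluates it by direct inversion and can serve as a reference for small graphs:
\begin{verbatim}
import numpy as np

def group_forest_farness(Ls, S):
    # f(S) = trace((L_star with rows/columns in S removed)^{-1});
    # Ls is the Laplacian of the augmented graph G_star, S subset of V.
    keep = [i for i in range(Ls.shape[0]) if i not in set(S)]
    return np.trace(np.linalg.inv(Ls[np.ix_(keep, keep)]))

def group_forest_closeness(Ls, S):
    return 1.0 / group_forest_farness(Ls, S)
\end{verbatim}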
Given our definition, it is natural to ask for
a set $S$ of $k$ vertices that maximizes
$\fdistinv{S}$ over all possible size-$k$ sets $S$;
indeed, this optimization problem has also
been considered for many other group centrality measures~\cite{Grinten2020ScalingUN}.
The following theorem
settles the complexity of the problem:
\begin{theorem}
\label{thm:GFC-NP-hard}
Maximizing \textsc{GroupForestCloseness} subject to a cardinality constraint is $NP$-hard.
\end{theorem}
Like Li et al.\xspace's~\cite{DBLP:conf/www/0002PSYZ19} hardness proof for group electrical closeness,
our reduction is from the vertex cover problem on 3-regular graphs.
Let $G = (V, E)$ be a 3-regular graph with $n$ vertices.
Our proof shows that there is a vertex cover of size $k$ in $G$ if and only if
the maximum group forest
closeness over all sets of size $k$ in $G$ exceeds a certain threshold.
We make use of the following property
that is adapted from a similar result by Li et al.\xspace:
\begin{lemma}
\label{lemma:a-vc}
Let $G$ be a connected and unweighted 3-regular graph and let $S \subset V$, $|S| = k \geq 1$.
Then $\trace{(\msub{\mat{L}}{S}+\mat{I})^{-1}} \geq (n - k) / 4$
and equality holds if and only if $S$ is a vertex cover of $G$.
\end{lemma}
Our proof of Theorem~\ref{thm:GFC-NP-hard}
exploits the fact that we can decompose
$\msub{\mat{L}_\star}{S}$ into a block that corresponds to the
universal vertex of $G_\star$ and into a block
that equals $\msub{\mat{L}}{S} + \mat{I}$.
This allows us to
apply the block-wise inversion and the Sherman-Morrison formula
to partially invert $\msub{\mat{L}_\star}{S}$.
In turn, we can apply
Lemma~\ref{lemma:a-vc} to bound
$\trace{(\msub{\mat{L}_\star}{S})^{-1}}$. The proof of
Lemma~\ref{lemma:a-vc} and the full proof
of Theorem~\ref{thm:GFC-NP-hard} can be
found in \changed{\arxivOrCamera{\Cref{apx:technical-proofs}}{the full version~\cite{full-version}}}.
Since an efficient algorithm for maximizing group forest closeness is unlikely to exist
(due to Theorem~\ref{thm:GFC-NP-hard}),
it is desirable to construct an inexact algorithm
for this problem.
The next two results enable the construction of
such an algorithm;
they follow immediately
from respective results on group electrical closeness
on $G_\star$
(see Ref.~\cite[Theorem 5.4 and Theorem 6.1]{DBLP:conf/www/0002PSYZ19}).
\begin{lemma}
$\fdistu{.}$ is a non-increasing
and supermodular set function.
\end{lemma}
For the following corollary, we consider a greedy algorithm
that constructs a set $S$ of size $k$.
This set is initially empty; while $|S|$ is smaller than $k$,
the algorithm adds the vertex $v$ to $S$ that
maximizes the marginal gain: $v = \operatorname{argmax}_{x \in V \setminus S} \left(\fdistu{S} - \fdistu{S \cup \{x\}}\right)$.
\begin{corollary}
The greedy algorithm computes a set $S$ such that:
\[ \fdistu{\{v_0\}} - \fdistu{S} \geq \left(1 - \frac k{e(k - 1)}\right) \left(\fdistu{\{v_0\}} - \fdistu{\widetilde{S}}\right), \]
where $v_0$ is the vertex with highest (individual) forest closeness
and $\widetilde{S}$ is the set of size $k$ that maximizes group forest closeness.
\end{corollary}
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\State \textbf{Input:} Undir.\ graph $G = (V, E)$, group size $k$
\State \textbf{Output:} Group $S \subseteq V$ of $k$ vertices
\State $\mat{P} \gets \Call{pseudoInverse}{\mat{L}_\star}$
\State $v \gets \operatorname{argmin}_{v \in V} \; n \cdot \mat{P}[v, v] + \trace{\mat{P}}$
\State $\mat{M} \gets \Call{inverse}{\msub{\mat{L}_\star}{\{v\}}}$
\Comment{Invariant: $\mat{M} \gets \msub{\mat{L}_\star}{S}^{-1}$ throughout the algorithm}
\State $S \gets \{v\}$
\While{$|S| < k$}
\State $v \gets \operatorname{argmax}_{v \in V \setminus S} \frac{(\mat{M} e_v)^T (\mat{M} e_v)}{e_v^T \mat{M} e_v}$
\State $\mat{M} \gets \msub{\mat{M}
- \frac{\mat{M} e_v e_v^T \mat{M}}{e_v^T \mat{M} e_v}}{\{v\}}$
\label{line:inv-update}
\State $S \gets S \cup \{v\}$
\EndWhile
\end{algorithmic}
\caption{Greedy algorithm for group forest closeness maximization adapted from Li et al.\xspace}
\label{algo:li-greedy}
\end{algorithm}
Note that a na\"ive implementation of the greedy algorithm
would invert $\msub{\mat{L}_\star}{(S \cup \{v\})}$ for
each $v$, i.\,e.,\xspace it would require $k \cdot n$
matrix inversions in total.
By using the ideas of Li et al.\xspace for group electrical
closeness~\cite{DBLP:conf/www/0002PSYZ19}
(depicted in Algorithm~\ref{algo:li-greedy}
for the case of group forest closeness),
these inversions can be avoided, such
that only a single matrix inversion is required in total.
This makes use of the fact that whenever a vertex $u$
is added to the set $S$, we can decompose
$\msub{\mat{L}_\star}{S}$ into a block that consists
of $\msub{\mat{L}_\star}{(S \cup \{u\})}$ and
a single row/column that corresponds to $u$.
It is now possible to apply block-wise matrix inversion
to this decomposition to avoid the need to
recompute $(\msub{\mat{L}_\star}{(S \cup \{u\})})^{-1}$
from scratch (in line~\ref{line:inv-update} of the pseudocode).
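A compact sketch of this greedy loop (ours, dense linear algebra; we assume the universal vertex $u^\star$ is the last row/column of $\mat{L}_\star$ and exclude it from selection) shows how the rank-one update replaces repeated inversions:
\begin{verbatim}
import numpy as np

def greedy_group_forest(Ls, k):
    n = Ls.shape[0]                  # includes the universal vertex
    star = n - 1                     # assumed position of u*
    P = np.linalg.pinv(Ls)
    v = int(np.argmin(np.diag(P)[:star]))     # best single vertex
    idx = [i for i in range(n) if i != v]     # labels of remaining rows
    M = np.linalg.inv(Ls[np.ix_(idx, idx)])   # M = inverse of submatrix
    S = [v]
    while len(S) < k:
        # marginal gains (M e_v)^T (M e_v) / e_v^T M e_v, all v at once
        gains = (M * M).sum(axis=0) / np.diag(M)
        gains[np.array(idx) == star] = -np.inf   # never pick u*
        j = int(np.argmax(gains))
        M = M - np.outer(M[:, j], M[j, :]) / M[j, j]  # Sherman-Morrison
        M = np.delete(np.delete(M, j, axis=0), j, axis=1)
        S.append(idx.pop(j))
    return S
\end{verbatim}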
We remark that the
greedy algorithm can be further accelerated
by utilizing the Johnson-Lindenstrauss
lemma~\cite{DBLP:conf/www/0002PSYZ19};
however, since this necessarily results in lower accuracy, we do not consider this extension in our experiments.
Furthermore, we note that by applying a standard reduction
by Gremban~\cite{GrembanPHD}, it would also be possible
to apply our UST-based algorithm (i.\,e.,\xspace Algorithm~\ref{alg:approx-diag-omega})
to the case of group forest closeness.
However, if the aforementioned
block-wise matrix inversion
is not applied, this would require us to
sample USTs for each of the $k \cdot n$ vertex
evaluations.
On the other hand, in order to apply block-wise inversion,
the entire inverse of $\msub{\mat{L}_\star}{S}$ must be available (and not only the diagonal).
Computing this inverse via UST sampling is prohibitively expensive so far.
Hence, in our experiments, we prefer the algorithmic approach
by Li et al.\xspace (adapted for group forest closeness).
\section{Experiments}
\label{sec:experiments}
We study the empirical performance of our algorithms
on real-world graphs
and their impact on graph mining tasks.
\paragraph{Settings.}
Unless stated otherwise, all algorithms are implemented
in C++, using the NetworKit~\citep{DBLP:journals/netsci/StaudtSM16}
graph APIs. All experiments are conducted on
Intel Xeon Gold 6126 machines with $2 \times 12$ cores and 192 GiB of RAM each.
Unless stated otherwise, all experiments run on a
single core.
To ensure reproducibility, all experiments are managed by the
\textsc{SimExPal}~\citep{angriman2019guidelines} software.
For the evaluation, we use a large collection of undirected graphs
of different sizes, coming from a diverse set of domains.
All graphs have been downloaded from the public repositories
KONECT~\cite{kunegis13}, OpenStreetMap\footnote{\url{https://www.openstreetmap.org}}
and NetworkRepository~\cite{nr}.
We denote our proposed algorithm for forest closeness by \tool{UST}\xspace and set
$\alpha = 1$ (as done in Ref.~\cite{Zhang19})
in all experiments.
\paragraph{Competitors.}
\changed{For the forest closeness of individual vertices,
the main competitor is the JLT-based algorithm by Jin et al.\xspace~\cite{Zhang19},
which uses the Laplacian solver from Ref.~\cite{kyng16}.
We compare against two implementations of this algorithm;
one provided by the authors written in Julia v1.0.2
and our own implementation based on \textsc{Eigen}'s CG algorithm.\footnote{
\url{http://eigen.tuxfamily.org}.}
We denote them by \tool{JLT-Julia}\xspace and \tool{JLT-CPP}\xspace, respectively.}
Like in Ref.~\cite{Zhang19}, we compute the number
of linear systems for \tool{JLT-Julia}\xspace and \tool{JLT-CPP}\xspace as
$\left\lceil \frac{\log n}{\epsilon^2} \right\rceil$
\changed{(which gives an $\epsilon \cdot c$ approximation for a fixed constant $c > 1$)}.
\subsection{Performance of \tool{UST}\xspace.}
\label{sec:ex-indiviual}
We measure now the performance
of \tool{UST}\xspace compared to the state-of-the-art competitors.
Each method is executed with multiple settings
of its respective quality parameter.
\begin{figure}[tb]
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics{./plots/legend-quality.pdf}
\end{subfigure}
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/max-abs-small-diam-unweighted.pdf}
\end{subfigure}\hfill
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{./plots/max-abs-high-diam-unweighted.pdf}
\end{subfigure}
\caption{$\max_v |\mat{\Omega}[v, v] - \widetilde{\mat{\Omega}}[v, v]|$ over the instances
in Table~\ref{tab:time-kt}.}
\label{fig:quality-single-vertex}
\end{figure}
\paragraph{Accuracy and Running Time.}
We report the maximum absolute error
of the estimated diagonal values
(i.\,e.,\xspace $\max_v |\mat{\Omega}[v, v] - \widetilde{\mat{\Omega}}[v, v]|$)
over all vertices and instances from Table~\ref{tab:time-kt}.\footnote{Note that the top vertices in the forest closeness ranking
are the ones with the \emph{lowest} $\mat{\Omega}[v, v]$ (see Eq.~\eqref{eq-fdistu-tr});
hence, we also evaluate the ranking accuracy in a following
experiment.}
As ground truth, we take $\mat{\Omega}[v, v]$ values
that are computed using Eigen's CG solver with a tolerance of $10^{-9}\xspace$;
exact inversion of $(\mat{L} + \mat{I})$ would
be infeasible for many of the input graphs.
A preliminary comparison against the values of $\mat{\Omega}[v, v]$ computed with the
NumPy \texttt{pinv} function demonstrated that CG provides a sufficiently
accurate ground truth.
Figure~\ref{fig:quality-single-vertex} shows that \tool{UST}\xspace achieves the best
results in terms of quality and running time for both complex and road
networks. More precisely, for complex networks and $\epsilon = 0.4$, \tool{UST}\xspace
yields a maximum absolute error of $\numprint{0.14}\xspace$, which is less than the
most accurate result of both competitors ($\numprint{0.15}\xspace$ achieved by
\tool{JLT-Julia}\xspace with $\epsilon = 0.1$), while being $\numprint{397.5}\times\xspace$ faster.
Also, the running time of \tool{UST}\xspace does not increase sub\-stantially for
lower values of $\epsilon$, and its quality does not deteriorate quickly for
higher values of $\epsilon$. A similar pattern is observed for road networks as
well.
\setlength{\tabcolsep}{2pt}
\begin{table}[tb]
\centering
\footnotesize
\begin{tabular}{c}
Complex networks
\end{tabular}
\input{tables/corr_small_diam}
\smallskip
\begin{tabular}{c}
Road networks
\end{tabular}
\input{tables/corr_high_diam_short}
\caption{Running time and KT ranking scores of \tool{UST}\xspace and JLT-based algorithms.
In the JLT column we report, for each instance, the competitor with highest KT
score. For equal KT scores (up to the second decimal place) we choose the
fastest competitor.}
\label{tab:time-kt}
\end{table}
\paragraph{Vertex Ranking.}
Moreover, we measure the accuracy in terms of vertex rankings,
which is often more relevant than individual scores~\citep{newman2018networks,okamoto2008ranking}.
In Table~\ref{tab:time-kt} we report
the Kendall's rank correlation coefficient (KT) of the vertex ranking w.\,r.\,t.\xspace\
the ground truth along with running times for complex and road networks.
For each instance, we pick the best \changed{run, i.\,e.,\xspace the UST and JLT columns display the run
with highest respective KT value.}
If the values are the same up to the second decimal place, we pick the fastest one.
\tool{UST}\xspace has consistently the best vertex ranking scores; at the same time, it is faster than the competitors.
In particular, \tool{UST}\xspace is on average $\numprint{7.6}\times\xspace$ faster than the JLT-based approaches
on complex networks and $\numprint{1.9}\times\xspace$ faster on road networks.
\paragraph{Parallel Scalability.}
\tool{UST}\xspace is well-suited for parallel implementations since
each UST can be sampled independently in parallel.
Hence, we provide parallel implementations of \tool{UST}\xspace
based on OpenMP (for multi-core parallelism) and MPI
(to scale to multiple compute nodes).
The OpenMP implementation on 24 cores exhibits
a speedup of $\numprint{8.7}\times\xspace$ on complex networks and
$\numprint{9.2}\times\xspace$ on road networks --
more detailed results can be found in
\changed{\arxivOrCamera{\Cref{fig:par-scalability}, \Cref{apx:additional-exp}}{Figure 5 in the full
version~\cite{full-version}}}.
The results for MPI are depicted in \changed{\arxivOrCamera{\Cref{fig:distr-scalability},
\Cref{apx:additional-exp}}{Figure 3 in
the full version~\cite{full-version}}}.
In this setting, \tool{UST}\xspace obtains a speedup
of $\numprint{12.2}\times\xspace$ on complex and $\numprint{11.5}\times\xspace$ on road networks
on up to 16 compute nodes -- for this experiment we set $\epsilon = 0.1$ and
we use the instances in
\changed{\arxivOrCamera{\Cref{tab:large}, \Cref{apx:additional-exp}}{Table 2 in
the full version~\cite{full-version}}}.
More sophisticated load balancing
techniques are likely to increase the speedups
in the MPI setting; they are left for future work.
Still, the MPI-based algorithm can rank complex networks with up to $334$M edges in
less than $20$ minutes. Road networks with $31$M edges take less than $25$ minutes.
\subsection{Semi-Supervised Vertex Classification.}
\label{sec:ex-group}
To demonstrate the relevance of \changed{group forest closeness} in
graph mining applications, we
apply it to semi-supervised vertex classification~\cite{chapelleSZ09}. Given
a graph $G$ with labelled vertices, the goal is to predict the labels of all
vertices of $G$ by training a classifier using a small set of labelled vertices
as training set. The choice of the vertices for the training set can influence
the accuracy of the classifier, especially when the number of labelled vertices
is small compared to $|V|$~\citep{oleks2018, Avrachenkov2013}.
A key aspect in semi-supervised learning problems is the so-called
\emph{cluster assumption}, i.\,e.,\xspace vertices that are close or that belong to the
same cluster typically have the same label~\cite{zhouBLWS03,chapelleWS02}.
Several models label vertices by propagating information through the
graph via diffusion~\cite{chapelleSZ09}.
\changed{We expect group forest closeness to cover the graph
more thoroughly than individual forest closeness.
Hence, we conjecture that} choosing vertices
with high group centrality improves diffusion and thus the accuracy of
propagation-based models.
We test this hypothesis by
comparing the classification accuracy of the label
propagation model~\cite{chapelleSZ09, zhouBLWS03}
where the training set is chosen using different strategies.\footnote{While this
model is less powerful than state-of-the-art predictors,
our strategy to select the training set could also
be applied to more sophisticated models like graph neural networks.}
The main idea of label propagation is to start from a small number of labelled
vertices and each vertex iteratively propagates its label to its neighbors
until convergence.
In our experiments, we use the Normalized Laplacian variant of label
propagation~\cite{zhouBLWS03}. We set the return probability hyper-parameter to
$0.85$, and we evaluate its accuracy on two well-known disconnected graph
datasets: Cora ($|V| = \numprint{2708}, |E| = \numprint{5278}$) and
Citeseer ($|V| = \numprint{3264}, |E| = \numprint{4536}$)~\cite{senNBGGE08}.
Since this variant of label propagation cannot handle graphs with isolated
vertices (i.\,e.,\xspace zero-degree vertices), we remove all isolated vertices from these datasets.
For a fixed size $k$ of the training set, we select its vertices as the group of
vertices computed by our greedy algorithm for group forest closeness
maximization and as the top-$k$ vertices with highest estimated forest
closeness. We \changed{include several well-known (individual) vertex selection
strategies for comparison}:
average over 10 random trials, the top-$k$ vertices with highest degree, the
top-$k$ vertices with highest betweenness centrality and the top-$k$ vertices
with highest Personalized PageRank.
Figure~\ref{fig:vertex-class} shows that on
graphs with disconnected components and for a moderate number of
labelled vertices, selecting the training
set by group forest closeness maximization consistently yields
higher accuracy than strategies based on existing
centrality measures (including top-$k$ forest closeness).
As expected, the accuracy of existing measures
improves if one considers connected graphs
(\changed{\arxivOrCamera{\Cref{fig:vertex-class-lcc}, \Cref{apx:additional-exp}}{Figure 6 in the full version~\cite{full-version}}});
yet, group forest closeness is nearly as accurate as the best competitors
on these graphs.
\changed{The running time of our greedy algorithm for group forest closeness maximization
is reported in \arxivOrCamera{\Cref{tab:time-group}, \Cref{apx:additional-exp}}{the full version~\cite{full-version}}}.
\begin{figure}
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics{plots/legend-node-class.pdf}
\end{subfigure}
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{plots/node-class-cora.pdf}
\end{subfigure}\hfill
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics{plots/node-class-citeseer.pdf}
\end{subfigure}
\caption{Accuracy in semi-supervised vertex classification
when using different strategies to create the training set.}
\label{fig:vertex-class}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we proposed a new algorithm to approximate
forest closeness faster and more accurately than previously possible.
We also generalized the definition of forest closeness
to group forest closeness and demonstrated that
for semi-supervised vertex classification in disconnected
graphs, group forest closeness outperforms existing
approaches.
In future work, we want to consider extensions of our approaches
to directed graphs. Another challenging
extension would involve generalizing an approach
based on USTs to group forest closeness
to improve upon the performance of our greedy algorithm.
\ifblind
\else
\section{Acknowledgments}
\begin{acks}
TODO: here only people, grants on the first page with thanks
\end{acks}
\fi
\bibliographystyle{abbrv}
\section{Introduction}
Deep neural networks have recently enjoyed some success at modeling natural language \cite{mikolov2010recurrent, zaremba2014recurrent, kim2015character}. Typically, recurrent and convolutional language models are trained to maximize the likelihood of observing a word or character given the previous observations in the sequence $P(w_1 \ldots w_n) = p(w_1) \prod_{i=2}^{n} P(w_i|w_1 \ldots w_{i-1})$. These models are commonly trained using a technique called \textit{teacher forcing} \cite{williams1989learning} where the inputs to the network are fixed and the model is trained to predict only the next item in the sequence given all previous observations. This corresponds to maximum-likelihood training of these models. However this one-step ahead prediction during training makes the model prone to \textit{exposure bias} \cite{ranzato2015sequence, bengio2015scheduled}. Exposure bias occurs when a model is only trained conditioned on ground-truth contexts and is not exposed to its own errors \citep{Wiseman16beam}. An important consequence to exposure bias is that generated sequences can degenerate as small errors accumulate.
Many important problems in NLP such as machine translation and abstractive summarization are trained via a maximum-likelihood training objective \cite{bahdanau2014neural, rush2015neural}, but require the generation of extended sequences and are evaluated based on sequence-level metrics such as BLEU \cite{papineni2002bleu} and ROUGE \cite{lin2004rouge}.
One possible direction towards incorporating a sequence-level training objective is to use Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}. While GANs have yielded impressive results for modeling images \cite{radford2015unsupervised, dumoulin2016adversarially}, advances in their use for natural language generation has lagged behind.
Some progress has been made recently in incorporating a GAN objective in sequence modeling problems including natural language generation.
\citet{lamb2016professor} use an adversarial criterion to match the hidden state dynamics of a teacher-forced recurrent neural network (RNN) and one that samples from its own output distribution across multiple time steps. Unlike the approach in \citet{lamb2016professor}, sequence GANs \cite{yu2016seqgan} and maximum-likelihood augmented GANs \cite{che2017maximum} use an adversarial loss at the outputs of an RNN. Using a GAN at the outputs of an RNN, however, isn't trivial, since sampling from these outputs to feed to the discriminator is a non-differentiable operation. As a result, gradients cannot propagate to the generator from the discriminator. \citet{yu2016seqgan} use policy gradient to estimate the generator's gradient and \cite{che2017maximum} present an importance-sampling-based technique. Other alternatives include REINFORCE \cite{williams1992simple}, the use of a Gumbel softmax \cite{jang2016categorical} and the straight-through estimator \cite{bengio2013estimating}, among others.
In this work, we address the discrete output space problem by simply forcing the discriminator to operate on continuous valued output distributions. The discriminator sees a sequence of probabilities over every token in the vocabulary from the generator and a sequence of 1-hot vectors from the true data distribution as in Fig. \ref{cnn_arch}. This technique is identical to that proposed by \citet{gulrajani2017improved}, which is parallel work to this. In this paper we provide a more complete empirical investigation of this approach to applying GANs to discrete output spaces. We present results using recurrent as well as convolutional architectures on three language modeling datasets of different sizes at the word and character-level. We also present quantitative results on generating sentences that adhere to a simple context-free grammar (CFG), and a richer probabilistic context-free grammar (PCFG). We compare our method to previous works that use a GAN objective to generate natural language, on a Chinese poetry generation dataset. In addition, we present a conditional GAN \cite{mirza2014conditional} that generates sentences conditioned on sentiment and questions.
\section{Generative Adversarial Networks}
GANs \cite{goodfellow2014generative} are a general framework used in training generative models by formulating the learning process as a two player minimax game as formulated in the equation below. A generator network G tries to generate samples that are as close as possible to the true data distribution $P(x)$ of interest from a fixed noise distribution $P(z)$. We will refer to the samples produced by the generator as $G(z)$. A discriminator network is then trained to distinguish between $G(z)$ and samples from the true data distribution $P(x)$ while the generator network is trained using gradient signals sent by the discriminator by minimizing $\log(1 - D(G(z)))$.
\citet{goodfellow2014generative} have shown that, with respect to an optimal discriminator, the minimax formulation can be shown to minimize the Jensen Shannon Divergence (JSD) between the generator's output distribution and the true data distribution.
\begin{align*}
\displaystyle \min_{G} \displaystyle \max_{D} V(D, G) = \mathop{\mathbb{E}}_{x \sim P(x)} [\log D(x)] \\ + \mathop{\mathbb{E}}_{z \sim P(z)} [\log(1 - D(G(z)))]
\end{align*}
However, in practice, the generator is trained to maximize $\log(D(G(z)))$ instead, since it provides stronger gradients in the early stages of learning \cite{goodfellow2014generative}.
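In code, both objectives take only a few lines; the following PyTorch sketch (ours; $G$, $D$, the optimizers and \texttt{z\_dim} are placeholders, and $D$ is assumed to output one logit per sample) performs one update of the game with the practical $-\log(D(G(z)))$ generator loss:
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_step(G, D, x_real, opt_g, opt_d, z_dim):
    b = x_real.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # discriminator: maximize log D(x) + log(1 - D(G(z)))
    x_fake = G(torch.randn(b, z_dim)).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(x_real), ones)
              + F.binary_cross_entropy_with_logits(D(x_fake), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: maximize log D(G(z)) (the non-saturating loss)
    g_loss = F.binary_cross_entropy_with_logits(
        D(G(torch.randn(b, z_dim))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
\end{verbatim}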
GANs have been reported to be notoriously hard to train in practice \cite{arjovsky2017towards} and several techniques have been proposed to alleviate some of the complexities involved in getting them to work including modified objective functions and regularization \cite{salimans2016improved, arjovsky2017wasserstein, mao2016least,gulrajani2017improved}. We discuss some of these problems in the following subsection.
\citet{nowozin2016f} show that it is possible to train GANs with a variety of f-divergence measures besides JSD. Wasserstein GANs (WGANs) \cite{arjovsky2017wasserstein} minimize the earth mover's distance or Wasserstein distance, while Least Squares GANs (LSGANs) \cite{mao2016least} replace the log loss with an L2 loss. WGAN-GP \cite{gulrajani2017improved} incorporates a gradient penalty term on the discriminator's loss in the WGAN objective, which acts as a regularizer.
In this work, we will compare some of these objectives in the context of natural language generation.
\subsection{Importance of Wasserstein GANs}
\citet{arjovsky2017towards} argue that part of the problem in training regular GANs is that they seek to minimize the JSD between $G(z)$ and $P(x)$. When the generator is trying to optimize $\log(1 - D(G(z)))$, the gradients that it receives vanish as the discriminator is trained to optimality. The authors also show that when trying to optimize the more practical alternative, $-\log(D(G(z)))$, the generator might not suffer from vanishing gradients but receives unstable training signals. It is also important to consider the fact that highly structured data like images and language lie in low-dimensional manifolds (as is evident by studying their principal components). Wasserstein GANs \cite{arjovsky2017wasserstein} overcome some of the problems in regular GAN training by providing a softer metric to compare distributions lying in low-dimensional manifolds. A key contribution of this work was identifying the importance of a Lipschitz constraint, which is achieved by clamping the weights of the discriminator to lie in a fixed interval. The Lipschitz constraint and training the discriminator multiple times for every generator gradient update create a strong learning signal for the generator.
\citet{gulrajani2017improved} present an alternative to weight clamping that they call a gradient penalty to enforce Lipschitzness, since model performance was reported to be highly sensitive to the clamping hyperparameters. They add the following penalty to the discriminator training objective -- $(||\nabla_{G(z)} D(G(z))||_{2} - 1)^2$.
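The penalty is straightforward to implement with automatic differentiation; a PyTorch sketch (ours) follows. Note that \citet{gulrajani2017improved} evaluate the gradient at random interpolates between real and generated samples, which is what the sketch does:
\begin{verbatim}
import torch

def gradient_penalty(D, x_real, x_fake):
    b = x_real.size(0)
    eps = torch.rand(b, *([1] * (x_real.dim() - 1)))
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grad, = torch.autograd.grad(outputs=D(x_hat).sum(), inputs=x_hat,
                                create_graph=True)
    norms = grad.reshape(b, -1).norm(2, dim=1)
    return ((norms - 1.0) ** 2).mean()   # added to the critic loss
\end{verbatim}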
A potential concern regarding our strategy to train the discriminator to distinguish between sequences of 1-hot vectors from the true data distribution and sequences of probabilities from the generator is that the discriminator can easily exploit the sparsity in the 1-hot vectors to reach optimality. However, the Wasserstein distance with a Lipschitz constraint / gradient penalty provides good gradients even under an optimal discriminator, so this isn't a problem for us in practice. Even though it is possible to extract some performance from a regular GAN objective with the gradient penalty (as we show in one of our experiments), WGANs still provide better gradients to the generator since the discriminator doesn't saturate often.
\section{Model architecture}
Let $\textbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ be the input to our generator network $G$ from which we will attempt to generate natural language. For implementation convenience, the sample $\mathbf{z}$ is of shape $n \times d$ where $n$ is the length of sequence and $d$ is a fixed length dimension of the noise vector at each time step. The generator then transforms $\mathbf{z}$ into a sequence of probability distributions over the vocabulary $G(z)$ of size $n \times k$ where $k$ is the size of our true data distribution's vocabulary. The discriminator network $D$ is provided with fake samples $G(z)$ and samples from the true data distribution $P(x)$. Samples from the true distribution are provided as a sequence of 1-hot vectors with each vector serving as an indicator of the observed word in the sample. As described in section 2, the discriminator is trained to discriminate between real and fake samples and the generator is trained to fool the discriminator as in Fig. \ref{cnn_arch}.
We investigate recurrent architectures as in \cite{lamb2016professor,yu2016seqgan,che2017maximum} and convolutional architectures in both the generator as well as the discriminator. The following subsections detail our architectures.
\begin{figure}
\begin{center}
\hspace{0.7cm}
\includegraphics[width=8cm,height=6cm,keepaspectratio]{Architecture}
\end{center}
\caption{Model architecture}
\label{cnn_arch}
\end{figure}
\subsection{Recurrent Models}
Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory networks (LSTMs) \cite{hochreiter1997long} and Gated Recurrent Networks \cite{cho2014learning}, are powerful models that have been successful at modeling sequential data \cite{graves2009offline,mikolov2010recurrent}. They transform a sequence of input vectors $\mathbf{x} = x_1 \ldots x_n$ into a sequence of hidden states $\mathbf{h} = h_1 \ldots h_n$, where each hidden state maintains a summary of the input up until then. RNN language models are autoregressive in nature since the input to the network at time $t$ depends on the output at time $t-1$. However, in the context of generating sequences from noise, the inputs are pre-determined and there is no direct correspondence between the output at time $t-1$ and the input at time $t$; this fundamentally changes the auto-regressiveness of the RNN. The RNN does, however, carry forward information about its output at time $t$ through subsequent time steps via its hidden states $\mathbf{h}$, as evident from its recurrent transition function. In order to incorporate an explicit dependence between subsequent RNN outputs, we add a peephole connection between the \textit{output} probability distribution $\mathbf{y_{t-1}}$ at time $t-1$ and the hidden state $\mathbf{h_t}$ at time $t$, as shown in the LSTM equations below.
Typical RNN language models have an affine transformation matrix $\mathbf{W_{out}}$, shared across all time steps, that projects each hidden state vector to a vector of the same size as the target vocabulary, generating a sequence of outputs $\mathbf{y} = y_1 \ldots y_n$. Subsequently, a softmax function is applied to each vector to turn it into a probability distribution over the vocabulary.
\begin{align*}
\mathbf{y}_{t} &= \mathrm{softmax}(\mathbf{W}_{out}\mathbf{h}_{t} + \mathbf{b}_{out}),
\end{align*}
During inference, an output is sampled from the softmax distribution and becomes the input at the subsequent time step, while during training the inputs are pre-determined. In all of our models, we perform greedy decoding where we always pick $\operatorname{argmax} y_t$. When using the LSTM as a discriminator, we use a simple binary logistic regression layer on the last hidden state $h_{n}$ to determine the probability of the sample being from the generator's data distribution or from the real data distribution: $P(real) = \sigma(\mathbf{W}_{pred}\mathbf{h}_{n} + \mathbf{b}_{pred})$.
The LSTM update equations with an output peephole are :
\begin{align*}
\mathbf{i}_{t} &= \sigma(\mathbf{W}_{xi}\mathbf{x}_{t} + \mathbf{W}_{hi}\mathbf{h}_{t-1} + \mathbf{W}_{pi}\mathbf{y}_{t-1} + \mathbf{b}_{i})\\
\mathbf{f}_{t} &= \sigma(\mathbf{W}_{xf}\mathbf{x}_{t} + \mathbf{W}_{hf}\mathbf{h}_{t-1} + \mathbf{W}_{pf}\mathbf{y}_{t-1} + \mathbf{b}_{f})\\
\mathbf{o}_{t} &= \sigma(\mathbf{W}_{xo}\mathbf{x}_{t} + \mathbf{W}_{ho}\mathbf{h}_{t-1} + \mathbf{W}_{po}\mathbf{y}_{t-1} + \mathbf{b}_{o})\\
\tilde{\mathbf{c}}_{t} &= \tanh(\mathbf{W}_{xc}\mathbf{x}_{t} + \mathbf{W}_{hc}\mathbf{h}_{t-1} + \mathbf{W}_{pc}\mathbf{y}_{t-1} + \mathbf{b}_{c})\\
\mathbf{c}_{t} &= \mathbf{f}_{t} \odot \mathbf{c}_{t-1} + \mathbf{i}_{t} \odot \tilde{\mathbf{c}}_{t}\\
\mathbf{h}_{t} &= \mathbf{o}_{t}\odot\tanh(\mathbf{c}_{t}),
\end{align*}
where $\sigma$ is the element-wise sigmoid function, $\odot$ is the Hadamard product, and $\tanh$ is the element-wise $\tanh$ function. $\mathbf{W_{\cdot}}$ and $\mathbf{b_{\cdot}}$ are learnable parameters of the model, and $\mathbf{i_t}$, $\mathbf{f_t}$, $\mathbf{o_t}$, $\tilde{\mathbf{c}}_t$ and $\mathbf{c_t}$ constitute the input, forget and output gates, the candidate cell state, and the cell state of the LSTM, respectively.
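A compact PyTorch rendering of this cell (ours; the peephole matrices $\mathbf{W}_{p\cdot}$ are fused with the input and recurrent weights into a single linear map, an equivalent parametrization):
\begin{verbatim}
import torch
import torch.nn as nn

class OutputPeepholeLSTMCell(nn.Module):
    def __init__(self, x_dim, h_dim, vocab_size):
        super().__init__()
        self.gates = nn.Linear(x_dim + h_dim + vocab_size, 4 * h_dim)
        self.proj = nn.Linear(h_dim, vocab_size)     # W_out, b_out

    def forward(self, x_t, h, c, y_prev):
        i, f, o, g = self.gates(
            torch.cat([x_t, h, y_prev], dim=-1)).chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        y = torch.softmax(self.proj(h), dim=-1)      # fed back at t + 1
        return h, c, y
\end{verbatim}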
\subsection{Convolutional Models}
Convolutional neural networks (CNNs) have also shown promise at modeling sequential data using 1-dimensional convolutions \cite{dauphin2016language, zhang2015character}. Convolution filters are convolved across time and the input dimensions are treated as channels. In this work, we explore convolutional generators and discriminators with residual connections \cite{he2016deep}.
\citet{gulrajani2017improved} use a convolutional model for both the generator and discriminator. The generator consists of 5 residual blocks with 2 1-D convolutional layers each. A final 1-D convolution layer transforms the output of the residual blocks into a sequence of un-normalized vectors for each element in the input sequence (noise). These vectors are then normalized using the softmax function. All convolutions are ``same'' convolutions with a stride of 1 followed by batch-normalization \cite{ioffe2015batch} and the ReLU \cite{nair2010rectified, glorot2011deep} activation function without any pooling so as to preserve the shape of the input. The discriminator architecture is identical to that of the generator with the final output having a single output channel.
\subsection{Curriculum Learning}
In likelihood based training of generative language models, models are only trained to make one-step ahead predictions and as a result it is possible to train these models on relatively long sequences even in the initial stages of training. However, in our adversarial formulation, our generator is encouraged to generate entire sequences that match the true data distribution without explicit supervision at each step of the generation process. As a way to provide training signals of incremental difficulty, we use curriculum learning \cite{bengio2009curriculum} and train our generator to produce sequences of gradually increasing lengths as training progresses.
\section{Experiments \& Data}
GAN based methods have often been critiqued for lacking a concrete evaluation strategy \cite{salimans2016improved}, however recent work \cite{wu2016quantitative} uses an annealed importance based technique to overcome this problem.
In the context of generating natural language, it is possible to come up with a simpler evaluation approach: compute the likelihoods of generated samples under a known data generating distribution. We synthesize a data generating distribution under which likelihoods can be computed in a tractable manner. Concretely, we propose to evaluate adversarial methods of generating natural language by constructing the data generating distribution from a CFG or PCFG. It is possible to determine whether a sample belongs to the CFG, or the probability of a sample under a PCFG, by using a constituency parser that is provided with all of the productions in a grammar. \citet{yu2016seqgan} also present a simple idea to estimate the likelihood of generated samples by using a randomly initialized LSTM as their data generating distribution. While this is a viable strategy to evaluate generative models of language, a randomly initialized LSTM provides little visibility into the complexity of the data distribution itself and presents no obvious way to increase its complexity. CFGs and PCFGs, however, provide explicit control of the complexity via their productions. They can also be learned via grammar induction \cite{brill1993automatic} on large treebanks of natural language, so the data generating distribution is not synthetic as in \cite{yu2016seqgan}.
Typical language models are evaluated by measuring the likelihood of samples from the true data distribution under the model. However, with GANs it is impossible to measure likelihoods under the model itself and so we measure the likelihood of the model's samples under the true data distribution instead.
We divide our experiments into four categories:
\begin{itemize}
\item Generating language that belongs to a toy CFG and an induced PCFG from the Penn Treebank \cite{marcus1993building}.
\item Chinese poetry generation with comparisons to \cite{yu2016seqgan} and \cite{che2017maximum}.
\item Generated samples from a dataset consisting of simple English sentences, the 1-billion-word and Penn Treebank datasets.
\item Conditional GANs that generate sentences conditioned on certain sentence attributes such as sentiment and questions.
\end{itemize}
\subsection{Simple CFG}
We use a simple and publicly available CFG\footnote{\url{http://www.cs.jhu.edu/~jason/465/hw-grammar/extra-grammars/holygrail}} that contains 248 productions. We then generate two sets of data from this CFG: one consisting of samples of length 5 and another of length 11. Each set contains 100,000 samples selected at random from the CFG. The first set has a vocabulary of 36 tokens, while the second has 45 tokens. We evaluate our models on this task by measuring the fraction of generated samples that satisfy the rules of the grammar, and we also measure the diversity of our generated samples. We do this by generating 1,280 samples from noise and computing the fraction of those that are valid under our grammar using the Earley parsing algorithm \cite{earley1970efficient}. In order to measure sample diversity, we simply count the number of unique samples; while this assumes that all samples are orthogonal, it still serves as a proxy measure of the entropy. We compare various generator, discriminator and GAN objectives on this problem.
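A sketch of this evaluation loop is shown below, assuming NLTK's Earley chart parser and a grammar file already converted to NLTK's production syntax; \texttt{generate\_samples} is a hypothetical stand-in for the trained generator.
\begin{verbatim}
import nltk

grammar = nltk.CFG.fromstring(open("grammar.cfg").read())
parser = nltk.parse.EarleyChartParser(grammar)

def is_valid(tokens):
    # A sample is valid iff the Earley parser finds a derivation.
    try:
        return any(True for _ in parser.parse(tokens))
    except ValueError:  # a token outside the grammar's vocabulary
        return False

samples = generate_samples(n=1280)  # hypothetical: list of token lists
accuracy = sum(is_valid(s) for s in samples) / len(samples)
uniqueness = len({tuple(s) for s in samples}) / len(samples)
\end{verbatim}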
\subsection{Penn Treebank PCFG}
To construct a more challenging problem than a simple CFG, we use sections 0--21 of the WSJ subsection of the Penn Treebank to induce a PCFG using simple count statistics of all productions:
\begin{align*}
P(A \rightarrow B\,C) = \dfrac{\mathrm{count}(A \rightarrow B\,C)}{\mathrm{count}(A \rightarrow \ast)}
\end{align*}
We train our model on all sentences in the treebank and restrict the output vocabulary to the top 2,000 most frequently occurring words. We evaluate our models on this task by measuring the likelihood of a sample using a Viterbi chart parser \cite{klein2003parsing}. While such a measure mostly captures the grammaticality of a sentence, it is still a reasonable proxy of sample quality.
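The induction and scoring steps can be sketched with NLTK, whose \texttt{induce\_pcfg} implements exactly the relative-frequency estimate above; note that NLTK ships only a fragment of the WSJ treebank, while the full sections 0--21 require the LDC release.
\begin{verbatim}
import nltk
from nltk.corpus import treebank

# Collect all productions observed in the parsed sentences.
productions = []
for tree in treebank.parsed_sents():
    productions += tree.productions()

# Relative-frequency estimate of the production probabilities.
pcfg = nltk.induce_pcfg(nltk.Nonterminal("S"), productions)

# Score a generated sample with a Viterbi chart parser.
viterbi = nltk.parse.ViterbiParser(pcfg)
def neg_logprob(tokens):
    try:
        trees = list(viterbi.parse(tokens))
    except ValueError:  # token outside the induced vocabulary
        return float("inf")
    return -trees[0].logprob() if trees else float("inf")
\end{verbatim}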
\subsection{Chinese Poetry}
\citet{zhang2014chinese} present a dataset of Chinese poems that were used to evaluate adversarial training methods for natural language in \cite{yu2016seqgan} and \cite{che2017maximum}. The dataset consists of 4-line poems with a variable number of characters in each line. We treat each line in a poem as a training example and use lines of length 5 (poem-5) and 7 (poem-7) with the train/validation/test split\footnote{\url{http://homepages.inf.ed.ac.uk/mlap/Data/EMNLP14/}} specified in \cite{che2017maximum}. We use BLEU-2 and BLEU-3 to measure model performance on this task. Since there is no obvious ``target'' for each generated sentence, both works report corpus-level BLEU measures using the entire test set as the reference.
\subsection{Language Generation}
We generate language from three different datasets of varying size and complexity: a dataset of simple English sentences\footnote{\url{https://github.com/clab/sp2016.11-731/tree/master/hw4/data}}, which we will henceforth refer to as CMU-SE; the version of the Penn Treebank commonly used in language modeling experiments \cite{zaremba2014recurrent}; and the Google 1-billion word dataset \cite{chelba2013one}. We perform experiments on generating language at both the word and character level. The CMU-SE dataset consists of 44,016 sentences with a vocabulary of 3,122 words, while the Penn Treebank consists of 42,068 sentences with a vocabulary of 10,000 words. We use a random subset of 3 million sentences from the 1-billion word dataset and constrain our vocabulary to the top 30,000 most frequently occurring words. We use a curriculum learning strategy in all of our LSTM models (with and without the output peephole connection) that starts training on sentences of length 5 at the word level and 13 at the character level, and increases the sequence length by 1 after a fixed number of epochs based on the size of the data. Convolutional methods in \cite{gulrajani2017improved} are able to generate long sequences even without a curriculum; however, we found it critical for generating long sequences with an LSTM.
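The schedule itself is simple; a sketch with illustrative parameters (the exact increment period depends on the dataset size):
\begin{verbatim}
def curriculum_length(epoch, start_len=5, epochs_per_step=10,
                      max_len=30):
    # Start at short sequences and grow the training length by
    # one token after every fixed number of epochs.
    return min(start_len + epoch // epochs_per_step, max_len)

# e.g. epochs 0-9 train on length 5, epochs 10-19 on length 6, ...
\end{verbatim}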
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm,height=4cm]{plot}
\end{center}
\caption{Negative log-likelihood of generated samples under the PCFG using an LSTM trained with the WGAN-GP, GAN-GP and a standard MLE objective on the PTB dataset}
\label{pcfg}
\end{figure}
\begin{table*}[h!]
\begin{center}
\begin{tabular}{| c| c| c| c| c| c| c|}
\hline
Gen & Disc & Objective & \multicolumn{2}{c|}{Length 5} & \multicolumn{2}{c|}{Length 11}\\
\hline
& & & Acc (\%) & Uniq & Acc (\%) & Uniq \\
\hline
LSTM & LSTM & GAN & 99.06 & 0.913 & 0 & 0.855\\
\hline
LSTM & LSTM & LSGAN & 99.45 & 0.520 & 0 & 0.855\\
\hline
LSTM & LSTM & WGAN & 93.98 & 0.972 & 98.04 & 0.924\\
\hline
LSTM-P & LSTM & WGAN & 97.96 & 0.861 & 99.29 & 0.653\\
\hline
LSTM & LSTM & WGAN-GP & 99.21 & 0.996 & 96.25 & 0.992\\
\hline
CNN & CNN & WGAN-GP & 98.59 & 0.990 & 97.01 & 0.771\\
\hline
LSTM-P & LSTM & GAN-GP & 98.68 & 0.993 & 96.32 & 0.995\\
\hline
\end{tabular}
\end{center}
\caption {Accuracy and uniqueness measures of samples generated by different models. LSTM-P refers to the LSTM model with the output peephole connection; WGAN-GP and GAN-GP refer to models that use a gradient penalty in the discriminator's training objective}
\label{cfg}
\end{table*}
\subsection{Conditional Generation of Sequences}
GANs are able to leverage explicit conditioning on high-level attributes of data \cite{mirza2014conditional, gauthier2014conditional,radford2015unsupervised} to generate samples which contain these attributes. Recent work \cite{hu2017controllable} generates sentences conditioned on certain attributes of language, such as sentiment, using variational autoencoders (VAEs) \cite{kingma2013auto} and holistic attribute discriminators. In this paper, we use two features inherent in language: sentiment and questions. To generate sentences that are questions, we use the CMU-SE dataset and label sentences that contain a ``?'' as being questions and the rest as being statements. To generate sentences of positive and negative sentiment we use the Amazon review polarity dataset collected in \cite{zhang2015character} and use the first 3 million \textit{short} reviews with a vocabulary of the top 4,000 most frequently occurring words. Conditioning on sentence attributes is achieved by concatenating a single feature map, containing either entirely ones or zeros to indicate the presence or absence of the attribute, to the output of each convolutional layer, as in \cite{radford2015unsupervised}. The conditioning is applied to both the generator and the discriminator. We experiment with conditional GANs using only convolutional methods, since methods for adding conditioning information have been well studied in these architectures.
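A sketch of this conditioning step, assuming PyTorch (\texttt{h} stands for the output of one convolutional layer):
\begin{verbatim}
import torch

def concat_attribute(h, attr):
    # h:    (batch, channels, seq_len) feature maps from a conv layer
    # attr: (batch,) tensor of 0/1 attribute labels
    # Append one constant channel of all ones or all zeros, signaling
    # the presence or absence of the attribute at every time step.
    fmap = attr.float().view(-1, 1, 1).expand(-1, 1, h.size(2))
    return torch.cat([h, fmap], dim=1)
\end{verbatim}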
\begin{table*}[htb!]
\begin{center}
\begin{tabular}{| c| c| c| c| c| c| c| c| c|}
\hline
Models & \multicolumn{4}{c|}{Poem 5} & \multicolumn{4}{c|}{Poem 7}\\
\hline
& \multicolumn{2}{c|}{BLEU-2} & \multicolumn{2}{c|}{BLEU-3} & \multicolumn{2}{c|}{BLEU-2} & \multicolumn{2}{c|}{BLEU-3} \\
\hline
& Val & Test & Val & Test & Val & Test & Val & Test \\
\hline
MLE \cite{che2017maximum} & - & 0.693 & - & - & - & 0.318 & - & -\\
\hline
Sequence GAN \cite{yu2016seqgan} & - & 0.738 & - & - & - & - & - & -\\
\hline
MaliGAN-basic \cite{che2017maximum} & - & 0.740 & - & - & - & 0.489 & - & -\\
\hline
MaliGAN-full \cite{che2017maximum} & - & 0.762 & - & - & - & 0.552 & - & -\\
\hline
LSTM (ours) & 0.840 & 0.837 & 0.427 & \textbf{0.372} & 0.660 & 0.655 & 0.386 & \textbf{0.405}\\
\hline
LSTM Peephole (ours) & 0.845 & \textbf{0.878} & 0.439 & 0.363 & 0.670 & \textbf{0.670} & 0.327 & 0.355\\
\hline
\end{tabular}
\end{center}
\caption {BLEU scores on the poem-5 and poem-7 datasets}
\label{poem_generation}
\end{table*}
\subsection{Training}
All models are trained using the back-propagation algorithm, updating our parameters with the Adam optimization method \cite{kingma2014adam} over stochastic gradient descent (SGD) mini-batches of size 64. A learning rate of $2 \times 10^{-3}$ with $\beta_1 = 0.5$ and $\beta_2 = 0.999$ is used in our LSTM generators and discriminators, while convolutional architectures use a learning rate of $1 \times 10^{-4}$. The noise prior and all LSTM hidden dimensions are set to 128, except for the Chinese poetry generation task where we set them to 64.
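For concreteness, the stated hyper-parameters correspond to the following setup (a sketch, assuming PyTorch; \texttt{generator} and \texttt{discriminator} are the models described above):
\begin{verbatim}
import torch

# LSTM models: lr = 2e-3, beta1 = 0.5, beta2 = 0.999
g_opt = torch.optim.Adam(generator.parameters(),
                         lr=2e-3, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(),
                         lr=2e-3, betas=(0.5, 0.999))
# Convolutional generators/discriminators use lr = 1e-4 instead.
\end{verbatim}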
\section{Results and Discussion}
Table~\ref{cfg} presents quantitative results on generating sentences that adhere to the simple CFG described in Section 4.1. The Acc column reports the accuracy with which our model generates samples from the CFG, computed over a sample of 1,280 generations. We observe that all models are able to fit sequences of length 5, but only the WGAN and WGAN-GP objectives are able to generalize to longer sequences of length 11; this motivated us to use only these objectives in our subsequent experiments. The GAN-GP criterion appears to perform reasonably well, but we restrict our experiments to the WGAN and WGAN-GP criteria. GANs have been shown to exhibit the phenomenon of ``mode dropping'', where the generator fails to capture a large fraction of the modes present in the data generating distribution \cite{che2016mode}. It is therefore important to study the diversity in our generated samples. The Uniq column reports the fraction of unique samples in a sample of 1,280 generations and serves as a rough indicator of sample diversity. The WGAN-GP objective appears to encourage the generation of diverse samples while also fitting the data distribution well.
Fig.~\ref{pcfg} shows the negative log-likelihood of generated samples under the PCFG for LSTM generators trained with the WGAN-GP, GAN-GP and MLE criteria. The sequence length is set to 7, and the likelihoods are evaluated at the end of every epoch on a set of 64 samples.
Table~\ref{poem_generation} contains quantitative results on the Chinese poetry generation dataset. The results indicate that our straightforward strategy to overcome back-propagating through discrete states is competitive and outperforms more complicated methods.
Table~\ref{cond_gen} contains sequences generated by our model conditioned on sentiment (positive/negative) and questions/statements. The model is able to pick up on certain consistent patterns in questions, as well as when expressing sentiment, and uses them while generating sentences.
Tables \ref{1_billion} and \ref{ptb_cmu} contain sequences generated at the word and character-level by our LSTM and CNN models. Both models are able to produce realistic sentences. The CNN model with a WGAN-GP objective appears to be able to maintain context over longer time spans.
\begin{table*}[htb!]
\begin{center}
\begin{tabular}{|p{2cm}|p{2cm}|p{11cm}|}
\hline
Level & Method & 1-billion-word \\
\hline
\multirow{12}{*}{Word}
& \multirow{6}{*}{LSTM} &An opposition was growing in China . \\
& & This is undergoing operation a year . \\
& & It has his everyone on a blame . \\
& &Everyone shares that Miller seems converted President as Democrat .\\
& & Which is actually the best of his children . \\
& & Who has The eventual policy and weak ?
\\\cline{2-3}
& \multirow{4}{*}{CNN} & Companies I upheld , respectively patented saga and Ambac. \\
& & Independence Unit have any will MRI in these Lights \\
& & It is a wrap for the annually of Morocco \\
& & The town has Registration matched with unk and the citizens \\
\hline
\multirow{4}{*}{Character}
& \multirow{4}{*}{CNN} & To holl is now my Hubby ,\\
& & The gry timers was faller\\
& & After they work is jith a\\
& & But in a linter a revent\\
\hline
\end{tabular}
\end{center}
\caption{Word and character-level generations on the 1-billion word dataset}
\label{1_billion}
\end{table*}
\begin{table*}[htb!]
\begin{center}
\begin{tabular}{|p{0.8cm}|p{0.9cm}|p{6.5cm}|p{6.8cm}|}
\hline
Level & Model & PTB & CMU-SE \\
\hline
\multirow{12}{*}{Word}
& \multirow{7}{*}{LSTM} & what everything they take everything away from . & \textless s\textgreater will you have two moment ? \textless/s\textgreater \\
& & may tea bill is the best chocolate from emergency . & \textless s\textgreater i need to understand deposit length . \textless/s\textgreater \\
& & can you show show if any fish left inside . & \textless s\textgreater how is the another headache ? \textless/s\textgreater \\
& &room service , have my dinner please . & \textless s\textgreater how there , is the restaurant popular this cheese ? \textless/s\textgreater
\\\cline{2-4}
& \multirow{4}{*}{CNN} & meanwhile henderson said that it has to bounce for. & \textless s\textgreater i 'd like to fax a newspaper . \textless/s\textgreater \\
& & I'm at the missouri burning the indexing manufacturing and through . & \textless s\textgreater cruise pay the next in my replacement . \textless/s\textgreater \\
& & & \textless s\textgreater what 's in the friday food ? ? \textless/s\textgreater \\
\hline
\end{tabular}
\end{center}
\caption{Word level generations on the Penn Treebank and CMU-SE datasets}
\label{ptb_cmu}
\end{table*}
\begin{table*}[htb!]
\begin{center}
\begin{tabular}{|p{7.5cm}|p{6.8cm}|}
\hline
POSITIVE & NEGATIVE \\
\hline
best and top notch newtonmom .& usuall the review omnium nothing non-functionable \\
good buy homeostasis money well spent & \\
kickass cosamin of time and fun . & extreme crap-not working and eeeeeew \\
great britani ! I lovethis. & a horrible poor imposing se400 \\
\cline{1-2}
\hline
QUESTION &STATEMENT \\
\hline
\textless s\textgreater when 's the friday convention on ? \textless /s\textgreater & \textless s\textgreater i report my run on one mineral . \textless /s\textgreater \\
\textless s\textgreater how many snatched crew you have ? \textless /s\textgreater& \textless s\textgreater we have to record this now . \textless /s\textgreater
\\
\textless s\textgreater how can you open this hall ? \textless /s\textgreater& \textless s\textgreater i think i deeply take your passenger .\textless /s\textgreater\\
\hline
\end{tabular}
\end{center}
\caption{Conditional generation of text. The top row shows samples from a model conditionally trained on the Amazon review polarity dataset with the two attributes `positive' and `negative'. The bottom row has samples conditioned on the `question' attribute}
\label{cond_gen}
\end{table*}
\section{Conclusion and Future work}
In conclusion, this work presents a straightforward but effective method to train GANs for natural language. The simplicity lies in \textit{forcing the discriminator to operate on continuous values} by presenting it with a sequence of probability distributions from the generator and a sequence of 1-hot vectors corresponding to data from the true distribution. We propose an evaluation strategy that involves learning the data distribution defined by a CFG or PCFG. This lets us evaluate the likelihood of a sample belonging to the data generating distribution. The WGAN and WGAN-GP objectives produce realistic sentences on datasets of varying complexity (CMU-SE, Penn Treebank and the 1-billion word dataset). We also show that it is possible to perform conditional generation of text on high-level sentence features such as sentiment and questions.
In future work, we would like to explore GANs in other domains of NLP such as non goal-oriented dialog systems where a clear training and evaluation criterion does not exist.
\section*{Acknowledgements}
The authors would like to thank Ishaan Gulrajani, Martin Arjovsky, Guillaume Lample, Rosemary Ke, Juneki Hong and Varsha Embar for their advice and insightful comments. We are grateful to the Fonds de Recherche du Québec -- Nature et Technologie for their financial support. We would also like to acknowledge NVIDIA for donating a DGX-1 computer used in this work.
\section*{Appendix}
We demonstrate that our approach to solve the problem of discrete outputs produces reasonable outputs even when applied to images. Figure \ref{binary_mnist} shows samples generated on the binarized MNIST dataset \cite{salakhutdinov2008quantitative}. We used a generator and discriminator architecture identical to \cite{radford2015unsupervised} with the WGAN-GP criterion. The generator's outputs are continuous while samples from the true data distribution are binarized.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm,height=6cm,keepaspectratio]
{binarized_mnist_final}
\end{center}
\caption{Binarized MNIST samples using a DCWGAN with gradient penalty}
\label{binary_mnist}
\end{figure}
\clearpage
\section{Introduction}
\label{sec:intro}
Traditionally, the design of industrial systems was based on an isolation model, where the control of the operational technology was separated from the information technology. Today, both operational and information technology are integrated. Industrial physical processes are controlled by \CH{Cyber-Physical Systems (CPS)} that integrate modern computation and networking resources into traditional physical environments. They have emerged mainly in the industrial control system domain, using data acquisition and processing over networked control systems~\cite{ge_distributed_2017} to automate the remote execution of industrial tasks~\cite{zhang_survey_2016}.
Such integration has several advantages, for example, low maintenance costs, high reliability, flexibility, efficiency, and effectiveness to control the physical process~\cite{6305473}. The use of computation and networking resources to build a new generation of \gls*{cps} plays an important role in current critical nation-wide infrastructures, such as electrical transmissions, energy distribution, manufacturing, supply chain, waste recycling, public transportation, health care, industrial process control, water infrastructure, and several others~\cite{ge_distributed_2017, lun_cyber-physical_2019}.
\CH{\gls*{cps} are composed of a physical process, sensors, actuators and controllers. The sensors collect information about the physical process and send it to the controllers. Then, the controllers analyze the received information and calculate how to optimize the behavior of the physical process. As a result, the controllers send commands to the actuators to execute corrective actions on the physical process, for example, to maintain its stability.} However, \gls*{cps} can be disrupted by cyber-physical attacks \cite{teixeira2012attack, Teixeira2015}, i.e., situations resulting from a cyber-attack, but manifesting physical effects, such as performance degradation~\cite{7954148}. These situations may put human safety at risk, cause harm to natural environments, interrupt industrial process continuity, and violate environmental regulations. Hence, cyber-physical attacks can lead to large economic losses, generate legal problems, and damage the reputation of the affected organizations~\cite{alguliyev_cyber-physical_2018}.
Many concerns have been raised about the vulnerabilities of control systems.
Recent history provides several cases of attacks on industrial infrastructures, which illustrate the threat that they represent. \CH{In particular, the security of industrial \gls*{cps} is drawing great attention after the Stuxnet malware~\cite{falliere2011w32,SPpanel2014} that considerably affected the performance of a uranium enrichment plant. The consequences of this event showed the dangers of successful cyber-threats carried out against \gls*{cps}.} Also, the well-known Ukraine attack \cite{case} targeted power distribution networks causing outages as well as lasting damage. Another example is the Australian water services attacked by a disgruntled employee who infiltrated the system network and altered the control signals \cite{10.1007/978-0-387-75462-8_6}. The adversary took control of 150 sewage pumping stations resulting in the evacuation of one million liters of untreated sewage, over three months, into stormwater drains and on to local waterways. More examples of similar events can be found in \cite{sanchez_bibliographical_2019}.
Although pure cyber-attacks have shown limited damages to recent \gls*{cps}~\cite{HUANG200973}, far greater damage is feasible when considering adversaries that perpetrate control-theoretic manipulation, resulting from cyber-attacks but leveraging physical disruption. This puts the focus on cyber-physical integrity attacks, which can rapidly move the system to unsafe states.
Ensuring the control of \gls*{cps} data exchanges is a challenging problem that requires a combination of both network and industrial control security.
In addition, cyber-physical attacks may be hard to detect \cite{rubio2016nordsec,Rubio17ETT}. For this reason, resilience\footnote{In this article, we use the words \textit{resilience} and \textit{cyber-resilience} indifferently.} is especially relevant~\cite{10.1145/3462513}. Developing \gls*{cps} that can safely survive an attack is a current challenge~\cite{book_resilience_scott}.
Ensuring safety using only information security tools is not enough in the \gls*{cps} domain. Cybersecurity approaches do not cover all the possible vulnerabilities in the cyber components. For example, specific vulnerabilities may not have remediation mechanisms, or they may be too expensive to implement. \CH{Even when the approach is implemented, detection algorithms are not free of false negatives and the remediation techniques may not be triggered.} As pointed out in \cite{8239925}, large research efforts have focused on intrusion detection for \gls*{cps}, but there is little discussion about what to do after the intrusion is detected, i.e., about remediation approaches that mitigate the effects of an attack. Most of the responses are manual or hardwired with a fixed response that cannot be configured. For this reason, attack tolerance should be enforced in critical systems to provide a correct service under the presence of successful attacks against the system \cite{RATHNAYAKA2022103123}. The resulting \gls*{cps} should satisfy high availability\footnote{In our work, availability means that legitimate users and processes have access to the system (and the resources of the system) whenever they need.} requirements to guarantee the execution of the critical tasks. It should be able to guarantee that the whole system remains operational even in the presence of attacks, even if that means working under graceful degradation modes. As a result, cybersecurity approaches should be complemented with secure control theory, which provides attack models and a description of the interaction between the physical world and the control system. This will provide a better understanding of the attacks' consequences, the development of new detection methods, response mechanisms, and architectures. It will also make the control systems more resilient to possible attacks and failures.
In this article, we focus on cyber-resilience techniques to build \gls*{cps} tolerant to cyber-physical attacks. We consider that the \gls*{cps} is a combination of cyber and physical components working together under discrete and continuous industrial environments \cite{ZHANG2013HighSystems}. We devote our work to protection techniques addressing networked control systems, i.e., a subset of \gls*{cps} dedicated to industrial control processes, usually performing critical functions. We analyze strategies that combine or have the potential to combine cybersecurity and control-theoretic approaches to build a solution that contemplates the cyber and the physical components of a \gls*{cps} to face the challenges created by cyber-physical adversaries. We differentiate research work from traditional risk management approaches, based on the general acceptance that it is unfeasible to prevent and mitigate all possible risks threatening a \gls*{cps}. We also discuss questions and research challenges, with a focus on the practical aspects of cyber-resilience, such as the use of metrics and evaluation methods, as well as testing and validation environments.
\begin{figure*}[!h]
\centering
\includegraphics[width=.6\textwidth]{images/survey-toc}
\caption{\CH{Organization of this article.}}
\label{fig:classif}
\end{figure*}
\CH{The remainder of this article is outlined as follows.
Section~\ref{sec:ch2_CPS} explains how to model a \gls*{cps} and define the feedback control executed in the \gls*{cps} controller. Section~\ref{sec:CPS_attacks} provides a control-theoretic model of the cyber-physical integrity and availability attacks that we address in this article. Section~\ref{sec:ch2_techniques} provides our literature survey on cyber-resilience techniques to address the previously defined attacks. The selected literature was analyzed and classified based on risk-oriented techniques and resilience-by-design techniques.
These two approaches are closely related. Remediation techniques are sometimes considered as part of cyber-resilience. For this reason, we analyze the difference between them and we classify the collected proposals into two categories -- (1) \textit{detection and reaction} techniques, and (2) \textit{resilience-by-design} proposals. The approaches in each category are further classified into subcategories, as depicted in Figure~\ref{fig:classif}.
Sections~\ref{sec:ch2_why_CT} and~\ref{sec:ch6_future_work} discuss research challenges and present open issues in the cyber-resilience area to lead future research. Section~\ref{sec:conclusion} concludes the article with our conclusions
and main remarks.}
\section{Background on Cyber-Physical Systems}
\label{sec:ch2_CPS}
Cyber-Physical Systems (CPS), mathematically modeled in our work as networked control systems, are composed of distributed control systems and autonomous agents that need to make decisions in real-time. They consist of two main parts. First, a cyber layer, containing the computing and network functionalities. Second, a physical layer, representing dynamic automation processes. Both together manage the distributed resources that monitor the behavior of physical phenomena and take the necessary actions to get control over them~\cite{ge_distributed_2017}. The \gls*{cps} becomes easier to automate at the cost of increasing the interaction between physical and cyber layers~\cite{zhang_survey_2016}. However, as a consequence, they become more vulnerable to new threats. Malicious actions in these systems are usually conducted by cross-layer adversaries that aim at harming the physical processes through the integration of physical and cyber layer attacks to cause, e.g., physical damages \cite{sanchez_bibliographical_2019}.
\gls*{cps} use a model able to manage and control the physical evolution of the system states. Controlling the states is a challenge since they follow the laws of the involved physical process, e.g., energy, water, or moving systems \cite{urbina2016survey}. For this reason, the physical properties of the system are used to create a model represented for the feedback control. This feedback control has to be able to regulate and manage the behavior of the system, i.e., a model able to confirm that the commands sent to the physical layer are executed correctly and the information coming from the physical states (through the sensors) is consistent with the predicted behavior of the system.
In a \gls*{cps}, the {\it plant} (also referred to as \textit{system} by some authors) is the physical process that we want to control. The {\it actuators} perform physical actions over that process and the {\it sensors} collect the modifications produced at the physical layer. Using the data collected by the sensors, the feedback {\it controller} generates a residue between the data received from the sensors and the reference obtained after modeling the system. This residue, named {\it control error} by some authors, is used by the controller to create the {\it control input} to rectify, if necessary, the physical states using the actuators. The threat models explained in Section~\ref{sec:CPS_attacks} use some of the parameters and equations explained next.
\vspace{-.35cm}
\subsection{Physical Model}
How to obtain the model used in the feedback controller is a well-known problem in the control domain. Different techniques have been developed to provide a reference and generate the control input at each time step \cite{ljung2010perspectives,error_estimated_TF_Goodwin,ljung1987system,ARX_ARMAX}; and also to create feedback control \cite{ricker1993model,barenthin2008complexity,Lee2004RobustEstimation}. The model can be obtained using a representation that relates each possible input signal to the corresponding output signal. The two main mathematical approaches to model this are the \textit{transfer function} and the \textit{state-space model}. Both representations are equivalent since they are based on the differential equations that model the behavior of the physical process being controlled.
Normally, a \gls*{cps} design process starts with the transfer function, since it is the most direct form starting from the differential equations of the physical process. The transfer function $G(s)$ is the ratio of the Laplace transform, in the complex variable $s$, of the output $Y(s)$ to that of the input $U(s)$. It is represented as shown in Equation~\eqref{eq:trFn} by the division of two polynomials: the numerator is built from the coefficients $b_i$ that multiply the input terms of the differential equation, and the denominator from the coefficients $a_i$ that multiply the output terms.
\vspace{-0.3cm}
\begin{equation}
\label{eq:trFn}
G(s) =
\dfrac
{Y(s)}
{U(s)} =
\dfrac
{\sum\limits_{i=0}^{m} b_is^{m-i}}
{\sum\limits_{i=0}^{n} a_is^{n-i}}
\end{equation}
A transfer function with multiple inputs and multiple outputs is usually represented as a matrix which indicates the relationship between each input and each output of the system. Using well-known control theory techniques~\cite{Ogata}, it is possible to transform the transfer function into a state-space model by expressing the differential equations in matrix form, cf. Equation~\eqref{eq:ch2_statespace} as follows:
\vspace{-0.6cm}
\begin{equation}
\centering
\label{eq:ch2_statespace}
\left.
\begin{array}{ll}
x_{k+1}=Ax_{k}+Bu_{k}+ w_{k} \\
y_{k}=Cx_{k}+v_{k}
\end{array}
\right.
\end{equation}
where $x_{k}\in \mathbb{R}^n$ is the vector of the state variables at the $k$-th time step, $u_{k}\in \mathbb{R}^p$ is the control signal and $w_{k}\in\mathbb{R}^n$ is the process noise that is assumed to be a zero-mean Gaussian white noise with covariance $Q$, {\it i.e.} $w_k \sim N(0,Q)$. Controllers are normally implemented in discrete form.
Moreover, $A\in \mathbb{R}^{n\times n}$ and $B\in \mathbb{R}^{n\times p}$ are respectively the {\it state} matrix and the {\it input} matrix.
The output vector $y_{k} \in \mathbb{R}^m$ represents the measurements produced by the sensors, affected by a noise $v_{k}$ assumed to be a zero-mean Gaussian white noise with covariance $R$, {\it i.e.} $v_k \sim N(0,R)$, and $C\in \mathbb{R}^{m\times n}$ is the output matrix that maps the state $x_k$ to the system output.
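Equation~\eqref{eq:ch2_statespace} is straightforward to simulate; the following NumPy sketch uses illustrative matrices rather than a specific plant:
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])  # state matrix (illustrative)
B = np.array([[0.0], [0.1]])            # input matrix
C = np.array([[1.0, 0.0]])              # output matrix
Q = 1e-4 * np.eye(2)                    # process-noise covariance
R = 1e-2 * np.eye(1)                    # measurement-noise covariance
rng = np.random.default_rng(0)

def plant_step(x, u):
    # x_{k+1} = A x_k + B u_k + w_k ;  y_k = C x_k + v_k
    w = rng.multivariate_normal(np.zeros(2), Q)
    v = rng.multivariate_normal(np.zeros(1), R)
    return A @ x + B @ u + w, C @ x + v
\end{verbatim}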
\subsection{Feedback Control}
The previous equations define mathematically the behavior of a physical system. These equations are used by the feedback control to generate a closed-loop system. The output of the feedback control influences the input signal, e.g., to rectify the possible errors generated by the system. To build this type of feedback, two relevant mechanisms are \textit{Proportional-Integral-Derivative} (PID) controllers and \textit{Linear Quadratic Gaussian} (LQG) controllers.
LQG controllers provide feedback that holds better results than PID controllers \cite{LQG_PID}. LQG is a well-known technique for designing optimal dynamic feedback control laws. This optimal solution combines a Linear-Quadratic Estimator (LQE) with a Linear-Quadratic Regulator (LQR). These two components are independent, but work together taking into account the measurement noise and process disturbance.
The goal of an LQG controller is to produce a control law $u_k$ such that a quadratic cost $J$, that is a function of both the state $x_k$ and the control input $u_k$, is minimized:
\vspace{-0.3cm}
\begin{equation}
J = \lim_{n \rightarrow \infty} E\left[\frac{1}{n}\sum_{i=0}^{n-1}(x_i^T \Gamma x_i + u_i^T \Omega u_i) \right]
\label{eq:ch3_control_cost}
\end{equation}
where $\Gamma$ and $\Omega$ represent positive definite cost matrices~\cite{CDS_1998}.
It is well-known that a \textit{Kalman filter}-based LQE can be combined with a traditional LQR to solve the aforementioned control problem, as follows:
\begin{enumerate}
\item \textit{Kalman filter}-based LQEs use noisy measurements and produce an optimal state estimation $\hat x_k$ of $x$ (state);
\item the LQR, based on the state estimation $\hat x_k$, provides the control law $u_k$ that solves the problem (cf. Equation~(\ref{eq:ch3_control_cost})).
\end{enumerate}
A Kalman filter can estimate the state as follows:
\begin{itemize}
\item Predict (\textit{a priori}) system state $\hat{x}_{k|k-1}$ and covariance:
\vspace{-0.6cm}
\begin{equation*}
\hat{x}_{k|k-1}=A\hat{x}_{k-1} + Bu_{k-1}
\end{equation*}
\vspace{-0.6cm}
\begin{equation*}
\label{eq:ch3_Covariance_error_apriori}
P_{k|k-1}=AP_{k-1}A^T + Q
\end{equation*}
\item Update parameters and (\textit{a posteriori}) system state and covariance:
\vspace{-0.6cm}
\begin{equation*}
\label{eq:ch3_Kalman_gain}
K_{k}=(P_{k|k-1}C^T)(CP_{k|k-1}C^T + R)^{-1}
\end{equation*}
\vspace{-0.6cm}
\begin{equation*}
\hat{x}_{k}=\hat{x}_{k|k-1} + K_{k}(y_{k} - C\hat{x}_{k|k-1})
\end{equation*}
\vspace{-0.6cm}
\begin{equation*}
\label{eq:ch3_Covariance_error}
P_{k}=(I - K_{k}C)P_{k|k-1}
\end{equation*}
\end{itemize}
\noindent where $K_k$ and $P_{k}$ denote, respectively, the Kalman gain and the \textit{a posteriori} error covariance matrix, and $I$ is the identity matrix of appropriate dimensions.
The optimal control law $u_k$ provided by the LQR is a linear controller:
$u_{k}=L\hat{x}_{k}$,
where $L$ denotes the feedback gain of the LQR that minimizes the control cost (cf.~ Equation~(\ref{eq:ch3_control_cost})), which is defined as follows \cite{Mo_2015}:
\vspace{-0.4cm}
\begin{equation*}
L=-(B^{T}SB + \Omega)^{-1}B^{T}SA
\end{equation*}
with $S$ being the matrix that solves the following discrete-time
algebraic Riccati equation:
\vspace{-0.4cm}
\begin{equation*}
S=A^{T}SA + \Gamma - A^TSB[B^{T}SB + \Omega]^{-1}B^{T}SA
\end{equation*}
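Putting the pieces together, one LQG iteration can be sketched with NumPy/SciPy; \texttt{solve\_discrete\_are} solves the Riccati equation above, and \texttt{A}, \texttt{B}, \texttt{C}, \texttt{Q}, \texttt{R}, \texttt{Gamma}, \texttt{Omega} denote the matrices $A$, $B$, $C$, $Q$, $R$, $\Gamma$, $\Omega$ already introduced:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

# Offline: LQR feedback gain L = -(B'SB + Omega)^{-1} B'SA.
S = solve_discrete_are(A, B, Gamma, Omega)
L = -np.linalg.solve(B.T @ S @ B + Omega, B.T @ S @ A)

def lqg_step(x_hat, P, u_prev, y):
    # Kalman predict (a priori)
    x_pred = A @ x_hat + B @ u_prev
    P_pred = A @ P @ A.T + Q
    # Kalman update (a posteriori)
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_hat = x_pred + K @ (y - C @ x_pred)
    P = (np.eye(x_hat.size) - K @ C) @ P_pred
    # LQR control law on the state estimate
    return x_hat, P, L @ x_hat
\end{verbatim}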
\section{Background on Cyber-Physical Threats}
\label{sec:CPS_attacks}
Control systems use safety mechanisms to handle failures and avoid accidents. Nevertheless, these control mechanisms cannot detect intentional malicious actions, such as cyber-physical attacks. Next, we present some existing cyber-physical adversary models and attack families.
\vspace{-0.2cm}
\subsection{Adversary Models}
The consequences of a successful cyber-physical attack can be more damaging than aggressions on other networks because control systems are at the core of many critical infrastructures. We differentiate three main adversary models~\cite{rubio2017EurasipWatermak}:
\begin{itemize}
\item \textbf{Physical Adversary --} The adversary has physical access to the \gls*{cps} and can damage it by performing physical actions. For example, the adversary may cut the brakes of a connected autonomous car, destroy the valves that release the pressure in an industrial system, or perturb temperature sensor measurements by modifying their local surroundings \cite{weerakkody_resilient_2020, Teixeira2015}.
\item \textbf{Cyber Adversary --} The adversary can perform cybersecurity attacks (e.g., man-in-the-middle, buffer overflow, shell exploits, or others). The adversary has only knowledge about computation, storage and network resources. Because of that, the attack can be easily detected by control-theoretic fault detection techniques~\cite{smith2015covert}.
Authors have systematized existing \gls*{cps} security research analyzing the taxonomy of threats, vulnerabilities, and attacks from the \gls*{cps} components perspective, with a special focus on cyber components~\cite{humayed_cyber-physical_2017} and cyber adversaries~\cite{alguliyev_cyber-physical_2018}. They also present the main difficulties and solutions in the estimation of the consequences of cyber-attacks, in terms of modeling, detection, and the development of security architectures.
\item \textbf{Cyber-Physical Adversary --} The adversary perpetrates cyber-attacks to cause tangible damage to physical components, for instance, by adding disturbances to a physical process via the exploitation of vulnerabilities in some computing and networking resources of the system.
The cyber-physical adversary is a combination of the two previous adversaries \cite{krotofil2015rocking}. First, the adversary uses a cyber-attack to gain position into the system from a remote location. Then, the adversary learns about the physical model to generate an attack with physical consequences but without being physically placed in the \gls*{cps} physical location. It can be hard to detect and locate a cyber-physical adversary, whose attacks may often be confused with faults in the system.
\end{itemize}
\subsection{Attack Families}
\label{sec:ch2_attack_taxonomy}
Different cyber-physical attack families have been reported in the literature. Authors in \cite{HUANG200973} provide control-theoretic models for integrity and denial-of-service (DoS) attacks. Similar techniques have been reported in \cite{DIBAJI2019394}, naming them deception and disruption cyber-physical attacks, respectively. The work in \cite{HUANG200973} shows that a traditional DoS attack does not have a significant effect when the system is in a steady state. However, the violation of integrity properties in such attacks can rapidly move the system to unsafe states.
A convenient attack classification in the existing literature is the one proposed in \cite{Teixeira2015}, which introduced the attack space as a three-dimensional graphical characterization of the attacks. It considers the following three dimensions: the adversary's a priori knowledge of the system's model, the disruption of resources, and the disclosure of resources. The knowledge of the system's model allows the adversary to develop sophisticated attacks, which have more severe consequences and are harder to detect with traditional approaches. The disclosure of resources lets the adversary obtain sensitive information, which may be used to generate knowledge about the system, but cannot be used to disrupt the system operation. Finally, the disruption of resources can be used to affect the system operation (e.g., by compromising the stability of the system).
Fig. \ref{fig:ch2_all} depicts block diagrams representation of cyber-physical adversaries attacking a control loop. The $\bigoplus$ symbol represents a \textit{summing junction}, i.e., the sum of input signals.
To take control of the physical process, the adversary may send a malicious command $u_{attack}$ to the \textit{System} that will be executed by the actuators. After that, to deceive the controller and go unnoticed, the adversary may modify the sensors' readings $y_{attack}$ to inject a measurement value $y$. The adversary may use a combination of different commands $u$ and measurements $y$ to deceive the controller and damage the system.
Next, we outline some cyber-physical attacks following the taxonomy presented in \cite{Teixeira2015}. Cyber-physical adversaries use integrity attacks to exploit vulnerabilities in the control mechanism and take control of the physical process. For this reason, all the attacks are assumed to inject malicious traffic. However, they are classified into different categories because they exploit different vulnerabilities in the control loop. As a consequence, these attacks produce different effects on the physical process and they may require different approaches to be solved.
\begin{figure}[!hptb]
\begin{center}
\includegraphics[width=\columnwidth]{images/fig1-all-in-one}
\caption{(a) Stealth attack. (b) Replay attack. (c) Covert attack. \label{fig:ch2_all}}
\end{center}
\vspace{-0.6cm}
\end{figure}
\subsubsection{False-Data Injection Attack or Stealth Attack}
In this attack family (cf. Fig.~\ref{fig:ch2_all}(a)), the adversary modifies some sensors readings by applying physical interference, at the sensor device, or by perturbing the communication channel to disrupt the system \cite{Cardenas_Risk_Detection, teixeira2012attack, sanchez_bibliographical_2019}.
To carry out attacks from this family, the adversary needs knowledge about the behavior of the system, such as the system dynamics, the command signals, and the control detection threshold. The adversary slowly drives the control decisions away from the correct behavior, producing wrong control decisions that cause a malfunction in the system. From a control-theoretic perspective, the injected false data should not affect the system residues (cf. Section~\ref{sec:ch2_CPS}). This means that the injected data should not alter the sensor measurement variations. Otherwise, the attack would be easily detected.
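As a toy illustration of this stealthiness constraint, consider an additive bias on one sensor reading that drifts slowly enough for each innovation to remain below the detector threshold (all values are illustrative):
\begin{verbatim}
def stealthy_reading(y_true, k, ramp=1e-3, bias_max=0.5):
    # The per-step increment stays well below the residue
    # threshold, so the detector only sees noise-like innovations
    # while the controller's view of the state slowly drifts.
    return y_true + min(ramp * k, bias_max)
\end{verbatim}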
\subsubsection{Replay Attack}
Fig.~\ref{fig:ch2_all}(b) shows adversaries conducting a cyber-physical replay attack by modifying some sensor readings (e.g., by replicating previous measurements, corresponding to normal operating conditions). Then, the adversaries modify the control input to affect the system state. These adversaries are not required to know the system process model, but access to all the sensors is required to carry out a successful attack. This type of adversary is undetectable by a monitor that only verifies sensor measurements. To detect the attack, it is required to add some protection to the control input signal $u_k$ \cite{Mo_2014}, defined in Section~\ref{sec:ch2_CPS}.
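A minimal sketch of the replay mechanism (the window length and the switch to a malicious command are illustrative):
\begin{verbatim}
from collections import deque

class ReplayAttacker:
    # Record a window of normal measurements, then replay them
    # cyclically to the controller while the plant receives the
    # adversary's command u_attack instead of the legitimate one.
    def __init__(self, window=100):
        self.buffer = deque(maxlen=window)
        self.recording = True

    def observe(self, y):
        if self.recording:
            self.buffer.append(y)
            return y                # pass-through while recording
        y_replay = self.buffer[0]
        self.buffer.rotate(-1)      # cycle through the recording
        return y_replay
\end{verbatim}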
\subsubsection{Covert Attack}
Adversaries, depicted in Fig.~\ref{fig:ch2_all}(c), read and inject data into both the control commands and the sensor measurements. The difference with the replay attack is that the adversary needs \textit{a priori} knowledge about the system process to create a transformation that is correlated with the control model, i.e., the attack requires knowing the behavior of the physical system as well as the behavior of the feedback control. This type of adversary is considered undetectable if the measurements are compatible with the physical process. In other words, the attack cannot be distinguished from regular system operations \cite{smith2015covert}.
\subsubsection{DoS Attack}
A denial-of-service attack (DoS\footnote{We can also consider distributed denial-of-service (DDoS), where multiple nodes attack one or many other components.}) aims at disrupting the communication between the remote elements (e.g., elements related to supervisory control and data acquisition protocols) and local elements closely related to the system (e.g., terminal units and programmable logic controllers connected to the sensors and actuators of the system), hence disrupting the availability of feedback control~\cite{Dos_Yuan}. By disconnecting the controller from the physical device, it is possible to bypass process monitoring and leave the system vulnerable to other malicious actions \cite{Dos_injectionAt_Wei}.
It is worth noting that cyber-physical DoS attacks are launched using integrity attacks to cause significant damage. In this case, the attack compromises the integrity of the messages, as shown in Fig.~\ref{fig:ch2_all}(b), with two objectives. First, to disrupt the communication between the controller and the system, generating a loss of system supervision that may not be easy to detect. Second, to inject malicious messages that move the system away from its stability point. This way, the adversary makes the system unavailable to authorized users while keeping it available for the malicious actions. As a result, this adversary affects the integrity of the system to also generate an availability problem.
\subsubsection{Command Injection Attack} This attack exploits protocol and device vulnerabilities to inject false commands into the control systems to disrupt control actions or system settings. It is similar to the attack shown in Fig.~\ref{fig:ch2_all}(a), but the adversary injects the malicious traffic into the control command, i.e., into $u_k$. For example, by overwriting the remote registers associated with some supervisory control or by exploiting the data acquisition protocols \cite{gao2014cyber}.
\subsubsection{Zero Dynamics Attack}
This attack family assumes vulnerabilities present in the dynamics of the system concerning properties used to monitor and control the behavior. This attack is similar to the command injection attack, but it makes an unobservable state unstable and disrupts this unobservable part of the system without being detected by the controller~\cite{Teixeira2015, Dynamic_attackChen}. A solution to avoid this kind of attack is to update the architecture of the system to make all the states observable, e.g., by deploying more sensors to avoid unobservable situations in the system.
As we have seen in this section, the existence of availability and integrity vulnerabilities is the main security issue in \gls*{cps}. Although pure cyber-attacks may have a limited impact on the system, combined with control-theoretic strategies they may cause severe physical damage~\cite{HUANG200973}. Indeed, cyber-physical integrity attacks can rapidly move the system to unsafe states. Also, cyber-physical DoS attacks can benefit from integrity issues to cause significant damage. In this case, the integrity of the messages is compromised with two objectives. First, to disrupt the communication between the controller and the system, hence leading to supervision loss (which is hard to detect). Second, to inject malicious messages that move the system away from its stability point. This way, the adversary makes the system unavailable to authorized users, e.g., keeping it available only for the malicious actions. In the next section, we present existing resilience techniques to face cyber-physical adversaries and reduce the impact they may have on system safety.
\section{Systematic Survey on Cyber-Resilience Literature}
\label{sec:ch2_techniques}
Cyber-resilience is the ability of a system to \emph{prepare, absorb, recover}, and \emph{adapt} to adverse effects \cite{book_resilience}.
The \emph{preparation} phase is characterized by identifying the critical functions or services and stakeholders.
It is important to understand the critical functionalities to guide the planning actions. The \emph{absorption} phase involves the capacity of the system to contain the attack under degraded performance. It is the ability of a system to tolerate the stress. Thresholds are important to determine whether a system can absorb a shock or not. During the \emph{recovery} phase, the system starts the process to restore its normal behavior as quickly and efficiently as possible. Finally, the \emph{adaptation} phase involves a postmortem evaluation to improve the response and learn from past experiences.
Although the previously mentioned definition provides a clear view of the resilience stages, it may also be too broad for the \gls*{cps} domain.
A given \gls*{cps} with unlimited resources (e.g., unlimited time) will eventually recover from all failures and attacks. Hence, resilience should be established considering a minimum set of conditions, e.g., in terms of temporal and computational resources. Under this assumption, and with the \gls*{cps} context in mind, a more appropriate definition of resilience points to
the necessity of providing~\cite{clark_cyber-physical_2019}: (1) full correctness maintenance of the core set of crucial functionalities despite ongoing adversarial misbehavior (i.e., non-crucial functionalities may be affected temporarily, e.g., partially degraded or failing completely); and (2) guaranteed recovery of the normal operation of the affected functionalities within a predefined cost limit. In addition, attack tolerance and graceful degradation are two properties that we may want to satisfy in a resilient system. Attack tolerance assumes that attacks can happen and be successful; the overall system must remain operational and provide a correct service. Graceful degradation is the ability of a system to continue functioning, even at lower performance, after parts of the system have been damaged, compromised, or destroyed. The efficiency of a system working in graceful degradation is usually lower than its normal performance, and it may decrease as the number of failing components grows. The purpose is to prevent a catastrophic failure of the system.
\begin{table}[!b]
\vspace{-0.4cm}
\begin{center}
\caption[Resilience Approaches for CPS]{\label{tab:techniques}Proposed resilience approaches for CPS. (Top) Resilient Control Techniques. (Bottom) Cyber-Resilience Techniques.}
\small
\begin{tabular}{| p{5.6cm} | c | c | c | c | p{4.1cm} |}\hline
\multirow{2}{*}{\textbf{\Longunderstack{Resilient Control Techniques\\Section \ref{resControl}}}} &
\multicolumn{4}{c|}{\textbf{Layer}} & \multirow{2}{*}{\textbf{Proposals}} \\ \cline{2-5}
&
\rotatebox{90}{\textbf{Physical~}} &
\rotatebox{90}{\textbf{Network~}} &
\rotatebox{90}{\textbf{Control}} &
\rotatebox{90}{\textbf{Cyber}} & \\ \hline\hline
\textbf{Detection} & \multicolumn{4}{c|}{}&\\ \hline
Data-based Approach & &\checkmark & & \checkmark & \Longunderstack{\cite{10.5555/1162264}, \cite{shawe-taylor_cristianini_2004}, \cite{Hofmann_ML}, \cite{10.1145/2542049}, \cite{cheminod_review_2013}, \cite{6942184}, \cite{ahmed_survey_2015}, \cite{ding_survey_2018}, \cite{6786081}}\\ \hline
Model-based Approach & & &\checkmark & & \Longunderstack{\cite{Mo_2015}, \cite{Miao2013StochasticDetection}, \cite{rubio2017EurasipWatermak}, \cite{do2014statistical}, \cite{arvani2014detection}, \cite{Correlation_detectorLokhov}, \cite{Wang2014},\cite{anomaly_detectionChen}, \\ \cite{6307833}, \cite{detection_using_modelbased_Dehghani}, \cite{Zhu2015Game-theoreticSystems}, \cite{bobbadetecting}, \cite{pasqualetti2015control}, \cite{luenberger_introduction_1971}, \cite{shoukry_event-triggered_2016}, \cite{schellenberger_detection_2017}, \cite{weerakkody_resilient_2020}}\\ \hline \hline
\textbf{Reaction} & \multicolumn{4}{c|}{}& \\ \hline \hline
Resilient State Estimation & & & \checkmark& & \Longunderstack{\cite{weerakkody_resilient_2020}, \cite{fawzi_secure_2014}, \cite{pajic_robustness_2014}, \cite{pajic_attack-resilient_2017}, \cite{6881627}, \cite{doi:10.1080/00207721.2014.906683}, \cite{weimer_attack-resilient_2014}, \cite{shoukry_secure_2017}, \cite{mishra_secure_2017}, \cite{10.1145/1995376.1995394}, \\ \cite{7299903}, \cite{6730927}, \cite{TAN2017313}}\\ \hline
Reconfiguration & \checkmark& \checkmark & \checkmark& \checkmark & \Longunderstack{\cite{giraldo_security_2017}, \cite{8443136}, \cite{Dos_Yuan}, \cite{reflect_Ana}, \cite{8530771}, \cite{10.1145/3232848}, \cite{6425820}, \cite{DIBAJI2019394}, \cite{Cetinkaya2016}, \\ \cite{YANG2017145}, \cite{LEI2016286} } \CH{\cite{Sun__event_triggered_2022}}\\ \hline
Programmable Networking & &\checkmark & & & \Longunderstack{\cite{Campbell_opensignal}, \cite{Tennenhouse_active_network}, \cite{Netconf}, \cite{Kreutz_SDN_survey}, \cite{sahay}, \cite{hadega}, \cite{itl2018}, \cite{molina_software-defined}, \cite{PIEDRAHITA2018}}\\
\hline
\hline
\end{tabular}
\vspace{0.15cm}
\begin{tabular}{| p{4cm} | c | c | c || c | c | c | c | p{3.8cm} |}\hline
\hline
\multirow{2}{*}{\textbf{\Longunderstack{Cyber-Resilience Techniques\\Section \ref{subsec:ch2_resilience_sota}}}} &
\multicolumn{3}{c|}{\textbf{Phase}} &
\multicolumn{4}{c|}{\textbf{Layer}} & \multirow{2}{*}{\textbf{Proposals}} \\ \cline{2-8}
&
\rotatebox{90}{\textbf{Absorb}} &
\rotatebox{90}{\textbf{Survive}} &
\rotatebox{90}{\textbf{Recover}} &
\rotatebox{90}{\textbf{Physical~}} &
\rotatebox{90}{\textbf{Network~}} &
\rotatebox{90}{\textbf{Control}} &
\rotatebox{90}{\textbf{Cyber}} & \\ \hline\hline
\textbf{Architecture Design} & \multicolumn{7}{c|}{}&\\ \hline
Diversity & &\checkmark & &\checkmark & \checkmark & & \checkmark & \Longunderstack{\cite{larsen_sok_2014}, \cite{8029792}, \cite{532621}, \cite{chaves_improving_2017}, \cite{COHEN1993565}, \cite{Forrest_buildingdiverse}, \cite{10.1145/2508859.2516675},\\ \cite{Jackson2011}, \cite{10.1007/978-3-642-00730-9_10}, \cite{10.1007/978-1-4614-5416-8_8}, \cite{10.1007/978-3-540-70542-0_1}, \cite{10.1145/948109.948146}, \cite{6494997}, \cite{cispa450}}\\ \hline
Segmentation &\checkmark & & & & \checkmark & & & \cite{gengeSegmentation}, \cite{10.1007/978-3-642-41488-6_12}\\ \hline \hline
\textbf{Reconfiguration} & \multicolumn{7}{c|}{}&\\ \hline
Isolation and Containment &\checkmark & & &\checkmark & \checkmark & &\checkmark & \Longunderstack{\cite{bellini_cyber_2019}, \cite{chen_robustness_2020}, \cite{avizienis_concept_2016}, \cite{haque_modeling_2019}, \cite{kwasinski_modeling_2020}, \cite{xu_islanding_2020}}\\ \hline
Dynamic Network Composition & \checkmark & \checkmark & & & & & & \Longunderstack{\cite{PIEDRAHITA2018}, \cite{segovia_reflective_2020}, \cite{januario_distributed_2019}, \cite{marshall_context-driven_2019}, \cite{chen_adaptive_2020}} \\ \hline
Non-Persistence & \checkmark & \checkmark & & & & & & \cite{9147232}, \cite{PRADHAN2016344} \\ \hline \hline
\textbf{Moving Target Defense (MTD)} & \multicolumn{7}{c|}{}&\\ \hline
Network MTD & \checkmark & \checkmark & \checkmark & & \checkmark & & & \Longunderstack{\cite{kanellopoulos_moving_2019}, \cite{10.1007}, \cite{ANTONATOS20073471}, \cite{8390877}, \cite{10.1145/2663474.2663479}, \cite{Macfarl_thesdn}, \cite{6924217}, \cite{Aseeri}, \\ \cite{6682715}, \cite{978-3-319-50011}, \cite{lei_moving_2018}, \cite{zheng_survey_2019}, \CH{\cite{Bradley_Potteiger_2022},
\cite{Xiaoyu_moving_2022}, \cite{Azab_moving_2022}}}\\ \hline
Node MTD & \checkmark & \checkmark & \checkmark & & & \checkmark & \checkmark & \Longunderstack{\cite{kanellopoulos_moving_2019}, \cite{griffioen_moving_2019},\cite{giraldo_moving_2019}, \cite{10.1145/2663474.2663479}, \cite{lei_moving_2018}, \cite{zheng_survey_2019}, \cite{9266030}, \cite{giraldo_moving_2019}, \\ \cite{weerakkody_moving_2016}, \cite{9266030} \CH{\cite{giraldo_moving_2022}, \cite{Liu_moving_2021}
}}\\ \hline \hline
\textbf{Dynamic Software Evolution} & \checkmark & \checkmark & & & & & \checkmark & \cite{10.1016/j.cose.2011.08.007}, \cite{reflect_Ana}, \cite{he_software_2008}, \cite{kon_case_2002}\\ \hline \hline
\textbf{Consensus \& Distributed Trust} &\checkmark & \checkmark & & & \checkmark &\checkmark & & \Longunderstack{\cite{5605238}, \cite{severson_resilient_2020}, \cite{wen_distributed_2018}, \cite{mahmoud_distributed_2013}, \cite{amini_performance_2019}, \\ \cite{saldana_resilient_2017}, \cite{meng_studies_2014}, \cite{yan_resilient_2020}, \cite{usevitch_resilient_2019}, \cite{shabbir_resilient_2020}, \cite{zegers_event-triggered_2019}} \\ \hline \hline
\textbf{Game Theory} & \checkmark & \checkmark & & & & & \checkmark & \Longunderstack{\cite{hasan_game-theoretic_2020}, \cite{huang_dynamic_2020},
\cite{sanjab_bounded_2016}, \cite{kanellopoulos_non-equilibrium_2019}, \cite{zhu_game-theoretic_2013}, \cite{rao_resilience_2015}}\\ \hline \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Risk Management vs. Cyber-Resilience}
\label{sec:risk_resilience}
Risk management and resilience are different but related concepts~\cite{arghandeh_definition_2016}. Although they are both grounded in a similar mindset (e.g., reviewing systems for weaknesses and identifying policies or actions that could mitigate or resolve such weaknesses), substantial differences exist \cite{linkov_science_2019}.
On the one hand, a risk is assessed by the likelihood of an undesirable event and the consequence of that event, using probability distribution functions. On the other hand, resilience is about the remediation of unexpected, rare, extreme failures, whose likelihood cannot be estimated from historical data. Risk management is concerned with analyzing threat-by-threat to derive a precise quantitative understanding of how a given threat generates harmful consequences. Such an exercise works well when the threats are categorized and understood, yet develops limitations when working with complex interconnected systems. Building from this limitation, resilience complements traditional risk-management approaches by reviewing how systems perform and function in a variety of scenarios, agnostic of any specific threat.
In addition, resilience requires thinking in terms of how to manage systemic, cascading effects to other directly and indirectly connected nodes. While risk management centers around the probability of hitting the weak points of a system, resilience is grounded upon ensuring system survival. It finds strategies to keep the functionality of the core system in the face of extreme events. Hence, resilience is based on a general acceptance that it is virtually impossible to prevent or remediate all categories of risk simultaneously, and before they occur~\cite{book_flammini}.
New adversary models, such as those presented in Section \ref{sec:CPS_attacks}, create new challenges for achieving
resilient systems. Indeed, achieving security in a \gls*{cps} requires solutions that extend beyond what is offered by state-of-the-art cybersecurity products. As a result, a new research area must focus on strategies to face cyber-physical adversaries. In the control-theoretic community, this new area is known as \textit{Resilient Control}~\cite{weerakkody_resilient_2020}. It is worth noting that although these approaches are called \textit{resilient} by control theorists, from a cybersecurity standpoint, resilient control still depends on a \emph{detection} and \emph{reaction} paradigm. In other words, although resilient control augments traditional fault-tolerant control with new strategies to face cybersecurity breaches, it still aims at determining how a controller can detect attacks, correctly estimate the system state, and recalculate the required commands despite malicious data. It also aims at responding to attacks with appropriate countermeasures, to achieve stability and graceful degradation while the system is under attack. This objective can be achieved through a system-theoretical analysis of the \gls*{cps}.
Next, we present our survey of solutions in both categories. First, we survey in Section~\ref{resControl} \emph{resilient control} techniques under the traditional \emph{detection} and \emph{reaction} paradigm.
Then, we survey in Section~\ref{subsec:ch2_resilience_sota} \emph{cyber-resilience} techniques that provide system recovery without depending on a detection-triggered response. Our literature survey is summarized in Table~\ref{tab:techniques}.
\vspace{-.25cm}
\subsection{Resilient Control}
\label{resControl}
Detection and mitigation of cyber-physical attacks are not trivial. They require incorporating control-theoretic strategies into traditional cybersecurity approaches to address the new vulnerabilities. In this section, we present resilient control strategies based on detection and reaction mechanisms for \gls*{cps}.
\subsubsection{\textbf{Detection Approaches}}
\label{detection}
There are two main strategies for attack detection in \gls*{cps}: \textit{data-based} and \textit{model-based} approaches~\cite{7954148}. The two are complementary: together, they consider the interaction between the cyber and physical layers.
\subsubsubsection{Data-based Approach}
This approach does not require system and attack models for detection. It is based on traditional machine learning and pattern recognition techniques \cite{10.5555/1162264, shawe-taylor_cristianini_2004, Hofmann_ML} for analyzing hidden patterns in the observed training dataset, for example, control signals and sensor measurements. Mitchell \textit{et al.}~ \cite{10.1145/2542049}, Cheminod \textit{et al.}~ \cite{cheminod_review_2013}, and Han \textit{et al.}~ \cite{6942184} provide surveys of intrusion detection techniques focusing only on data-based approaches using traditional intrusion detection systems. Ahmed \textit{et al.}~ \cite{ahmed_survey_2015} provide a survey of trust-based detection and isolation approaches for malicious nodes in sensor networks. In addition, Ding \textit{et al.}~ \cite{ding_survey_2018} survey the development of attack detection for industrial \gls*{cps} and discuss control and state estimation in the case of an attack. Also, Beaver \textit{et al.}~ \cite{6786081} provide an evaluation of machine learning methods to detect malicious communications in supervisory control and data acquisition protocols.
\textit{Advantages and limitations:} This detection technique considers cyber and network patterns to identify attacks. For this reason, it can detect cyber-attacks. However, it is not able to detect all kinds of cyber-physical attacks, since it does not consider the control model. It has a partial view that does not include the physical components.
\subsubsubsection{Model-based Approach} This approach uses the model of the systems to detect attacks. The decision is based on the comparison between system observations and model outputs. The system is under attack if the observed data is no longer consistent with the estimated outputs of the normal mode. This comparison may not be obvious because of the presence of model uncertainties, nuisance parameters, and random noise.
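To make this comparison concrete, the following minimal sketch (in Python; the linear model, noise covariance, and threshold are illustrative choices of ours, and a real detector would estimate the state with a filter rather than assume it) raises an alarm when the normalized residual between an observation and the model-predicted output exceeds a chi-square threshold.

\begin{verbatim}
import numpy as np

# Illustrative linear model x_{k+1} = A x_k + B u_k, y_k = C x_k + v_k
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
Sigma = np.array([[0.01]])        # assumed measurement-noise covariance

def residual_detector(x_est, u, y, threshold=3.84):
    """Alarm when the normalized residual exceeds a chi-square
    threshold (3.84 = 95% quantile with 1 degree of freedom)."""
    x_pred = A @ x_est + B @ u                    # model-predicted state
    r = y - C @ x_pred                            # observation vs. model output
    g = (r.T @ np.linalg.inv(Sigma) @ r).item()   # normalized residual energy
    return g > threshold

x_est, u = np.array([[0.0], [1.0]]), np.array([[0.0]])
print(residual_detector(x_est, u, y=np.array([[0.1]])))  # False: consistent
print(residual_detector(x_est, u, y=np.array([[5.0]])))  # True: flagged
\end{verbatim}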
There are five main strategies for control-theoretic model-based attack detection \cite{rubiohernan:tel-01810321}: \textit{watermark-based detectors}, \textit{signal-based detectors}, \textit{state relation-based detectors}, \textit{cross layer-based resilient detectors}, and \textit{auxiliary systems detectors}. Next, we summarize the main ideas underlying each strategy.
\begin{itemize}
\item
In the case of \textit{watermark-based detectors}, a low-amplitude noise signal, called a watermark, is added to the control signal so that a detection mechanism can verify that the sensor measurements and commands have not been modified, i.e., the sensor measurements have to remain correlated with the watermarked control signal (see the sketch after this list). For example, Mo \textit{et al.}~ \cite{Mo_2015} propose the use of Kalman filters to detect cyber-physical replay attacks by adapting traditional failure detection mechanisms via watermarking. Miao \textit{et al.}~ \cite{Miao2013StochasticDetection} improve the performance of the aforementioned detection mechanism using a stochastic game approach. The work has also been improved by Rubio-Hernan \textit{et al.}~ \cite{rubio2017EurasipWatermak} to incorporate more advanced adversaries capable of learning the physical model. In the same way, Do \textit{et al.}~ \cite{do2014statistical} propose a detection approach based on the knowledge of the system's behavior and its stochastic variations to detect data manipulation.
\item
\textit{Signal-based detectors} use the statistical properties of the signal and the system behavior to detect attacks. For example, Arvani \textit{et al.}~ \cite{arvani2014detection} describe a model to detect and identify random-signal data-injection attacks. It is based on discrete wavelet transform analysis to exploit the statistical properties of the signal and the dynamic model of the system. It also uses a chi-square detector to identify anomalies. Lokhov \textit{et al.}~ \cite{Correlation_detectorLokhov} present a protocol for detection and localization of disturbances based on a special correlation matrix. The matrix allows (1) detecting anomalies using spectral methods; (2) localizing a subset of anomalous nodes within the system; and (3) identifying the functional role of the inferred anomaly based on the sensor labels.
\item
\textit{State relation-based detectors} use the correlation of system states and the system behavior to identify anomalies. For example, Wang \textit{et al.}~ \cite{Wang2014} propose a relation-graph-based detector scheme to detect false data injection attacks, even when the injected data may seemingly fall within a valid and normal range. A correlation model extracts the relation among the different variables of the system to create a graph model with the possible valid system states. The correlation model uses a forward correlation that is not affected by time and a feedback correlation that depends on time. Chen \textit{et al.}~ \cite{anomaly_detectionChen} present a distributed anomaly detection algorithm using graph theory and spatiotemporal correlations to analyze the physical process in real-time. Amin \textit{et al.}~ \cite{6307833} develop a model-based scheme for detection and isolation. The scheme is based on a group of unknown input observers designed for a linear delay-differential system obtained as an analytically approximate model. The generated conditions are delay-dependent, and can also incorporate communication network-induced time-delays in the sensor-control data. To detect and isolate the attacks, they use a residual generation procedure. Also, Dehghani \textit{et al.}~ \cite{detection_using_modelbased_Dehghani} present a static state estimation algorithm able to detect integrity attacks against smart grids.
\item
\textit{Cross-layer based resilient detectors} combine control and cyber techniques in a single intrusion detection system. For example, Zhu \textit{et al.}~ \cite{Zhu2015Game-theoreticSystems} propose a game-theoretic framework that integrates the discrete-time Markov model for modeling the evolution of cyber states with continuous-time dynamics describing the controlled physical process. The cross-layer design is created between physical and cyber detection layers to maximize the chances of identifying security events. Bobba \textit{et al.}~ \cite{bobbadetecting} show that protecting only a set of basic measurements is enough to detect both physical and network malicious actions. In addition, Pasqualetti \textit{et al.}~ \cite{pasqualetti2015control} use geometric control theory to optimize cross-layer resilient control systems. They conclude that by using a geometric model of the system, it is possible to detect and estimate the system state in the presence of unknown inputs.
\item
\textit{Auxiliary system detectors} use state observer techniques (e.g., Luenberger observers \cite{luenberger_introduction_1971}) to build a digital copy of the system and control its behavior. For example, Shoukry and Tabuada~\cite{shoukry_event-triggered_2016} describe a Luenberger-observer-based algorithm for reconstructing the state from corrupted sensor measurements. Also, Schellenberger \textit{et al.}~~\cite{schellenberger_detection_2017} extend an original plant with an auxiliary system that does not add additional delay into the system. The auxiliary system is designed as a linear discrete-time digital copy with similar dynamics to the original system, but capable of conducting attack detection. For this detection strategy, a model of the overall system dynamics and the switching signal of the auxiliary system are needed. The residuals of the Luenberger observer are then monitored for deviations from zero, which indicate the existence of attacks.
\end{itemize}
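As an illustration of the watermarking idea from the first bullet above, the following sketch superimposes a private random sequence on the control signal of a toy static-gain plant and checks that it reappears, correlated, in the sensor data; a replay attack is emulated by returning time-shifted data that never carried the current watermark. The plant gain, watermark amplitude, and correlation threshold are invented for the example, and the sketch is far simpler than the Kalman-filter formulations of the cited works.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def watermark_detector(replayed=False, n=500, plant_gain=2.0, tau=0.5):
    """Add a secret watermark to the control signal and test whether
    it reappears, correlated, in the sensor data (static-gain plant)."""
    u = np.sin(0.05 * np.arange(n))            # nominal control signal
    w = 0.05 * rng.standard_normal(n)          # secret watermark
    y = plant_gain * (u + w) + 0.01 * rng.standard_normal(n)
    if replayed:                               # replay: stale data that never
        y = np.roll(y, n // 2)                 # carried the current watermark
    r = y - plant_gain * u                     # residual vs. expected output
    return np.corrcoef(w, r)[0, 1] < tau       # True => alarm (no watermark)

print(watermark_detector(replayed=False))  # False: watermark found, no alarm
print(watermark_detector(replayed=True))   # True: correlation lost, alarm
\end{verbatim}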
\textit{Advantages and limitations:} This detection technique considers the physical model to identify attacks, which makes it suitable for identifying cyber-physical attacks that exploit the feedback control loop. However, its view of traffic patterns and its ability to identify purely cyber attacks may be limited. For this reason, it is complementary to the previous approach, which is based on cyber and network data. Both techniques, working together, provide a more complete and integral view of the system.
\subsubsection{\textbf{Reaction Approaches}}
\label{reaction}
As pointed out in \cite{8239925}, large research efforts have focused on intrusion detection. There is much less discussion about what to do after the intrusion is detected, i.e., about remediation approaches that mitigate the effects of an attack. Most of the responses in \gls*{cps} are manual or hardwired with a fixed response that cannot be configured. In the sequel, we survey some representative proposals under the reaction (after detection) paradigm.
\subsubsubsection{Resilient State Estimation} When an adversary modifies data, system recovery requires knowing the real state of the system. For this reason, resilient state estimation is a technique that can help in terms of system reaction. It allows a remote defender to maintain an understanding of the system state under attack, even when a subset of inputs and outputs are under the control of an adversary \cite{weerakkody_resilient_2020}. As a result, the defender can still have reliable state information to apply an appropriate feedback control law, to better understand the portions of the system that have been compromised and to design attack-specific countermeasures.
\CH{Approaches for resilient state estimation can be found in the following literature. Fawzi \textit{et al.}~ \cite{fawzi_secure_2014} propose an efficient state reconstructor inspired by techniques used in compressed sensing and error correction over the real numbers. They also characterize the maximum number of attacks that can be detected and corrected as a function of the system state matrices. Pajic \textit{et al.}~ \cite{pajic_robustness_2014} present a method for state estimation in the presence of attacks, for systems with noise and modeling errors such as jitter, latency, and synchronization problems that are mapped into parameters of the state estimation procedure. Pajic \textit{et al.}~ \cite{pajic_attack-resilient_2017} also propose a state estimation approach in the presence of bounded-size noise for sensor attacks where any signal can be injected via compromised sensors.}
\CH{In addition, Mo and Sinopoli \cite{6881627} propose a state estimator based on $m$ measurements that can be potentially manipulated by an adversary. The adversary is assumed to have full knowledge about the true value of the state to be estimated and about the value of all the measurements. If the adversary can manipulate up to $l$ of the $m$ measurements, then the estimator works properly when the adversary compromises less than half of the measurements, i.e., $(l < m/2)$. The solution is formulated as an optimization problem where one seeks to construct an optimal estimator that minimizes the worst-case expected cost against all possible manipulations by the adversary. Keller \textit{et al.}~ \cite{doi:10.1080/00207721.2014.906683} propose a state estimation of stochastic discrete-time linear systems in the case of malicious disturbance that switches between unknown input and constant bias. This means that when corrupted control signals are received by the controller, detectors based on Kalman Filters are used to estimate the state of the system and the exogenous unknown input of the system (i.e., the malicious inputs). In addition, the malicious control signal is blocked at the occurrence of data losses, and the unknown input is transformed to a constant bias at the input of the system. Weimer \textit{et al.}~ \cite{weimer_attack-resilient_2014} introduce a resilient estimator for stochastic systems using a mean squared error for the state that remains finitely bounded and is independent of attacks in measurements.}
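A simple way to build intuition for the $l < m/2$ bound is the coordinate-wise median, shown below as a sketch with invented sensor values; it is a much simpler estimator than the optimization-based one discussed above, but it shares the property that strictly fewer than half corrupted readings cannot drag the estimate outside the range of the honest ones.

\begin{verbatim}
import numpy as np

def resilient_estimate(readings):
    """Median of m redundant readings of the same quantity: with
    l < m/2 corrupted values, the result stays within the interval
    spanned by honest sensors, however extreme the corruption."""
    return float(np.median(readings))

honest = [10.1, 9.9, 10.0, 10.2, 9.8]        # true value is around 10
attacked = honest[:3] + [500.0, -500.0]      # l = 2 of m = 5 compromised
print(resilient_estimate(attacked))          # 10.0, still close to truth
\end{verbatim}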
Shoukry \textit{et al.}~ \cite{shoukry_secure_2017} and Mishra \textit{et al.}~ \cite{mishra_secure_2017} propose secure state estimation algorithms for linear dynamical systems under sensor attacks and in the presence of noise. The approaches are based on satisfiability modulo theory, which is a technique used to express problems that should satisfy constraints, i.e., decision problems using logical formulas expressed in first-order logic \cite{10.1145/1995376.1995394, 7299903}.
Another technique used to improve the state estimation accuracy is to consider multiple sensor systems instead of one single sensor system \cite{6730927, TAN2017313}. In this case, data fusion is a process in which the received data is integrated from different sensors observing the same system.
\textit{Advantages and limitations:} This approach is useful when sensors, actuators, or network traffic have been compromised. It provides a reliable state of the system even when an adversary injects malicious traffic into the system. As a result, this technique helps system recovery because it allows maintaining an understanding of the state under attack, even when a subset of inputs and outputs are malicious. The limitation is that it can only repair a bounded number of compromised values. In addition, it is hard to ensure that the control commands are executed correctly by using state estimation techniques alone; complementary actions need to be included in the response plan to ensure that the estimated data properly reaches its destination.
\vspace{-.25cm}
\subsubsubsection{Reconfiguration} Once the system is compromised, it is required to ensure that the control commands arrive correctly at the actuators. One possibility is to dynamically alter the configuration of the system to minimize the effects of the attack, for example, by changing the network topology, the configuration of the devices, or the firewall rules, or by quarantining (rerouting) traffic. In other words, the system structure is modified to face the attacks. For instance, one option would be to increase the number of sensors such that attacks are identified faster, or to add extra layers of security to those elements that are more vulnerable to cyber-attacks \cite{giraldo_security_2017}. Components may also be isolated. Li \textit{et al.}~ \cite{8443136} propose a decision-making approach for intrusion response aiming to determine the optimal security strategy against the attacks. The strategy tries to secure attack paths with higher priority, in addition to responding to functional failures. The authors assess both cyber and physical domains with an in-depth analysis of attack propagation. Yuan \textit{et al.}~ \cite{Dos_Yuan} propose a resilient controller design for \gls*{cps} under DoS attacks. The proposal uses a framework that incorporates an IDS and robust control. The robust control in the physical layer is based on an algorithm with value iteration methods and linear matrix inequalities, e.g., for computing the optimal security policy and control laws. The cyber state is modeled as a continuous Markov process to defend against malicious behavior.
Other techniques dynamically incorporate new on-demand capabilities to face the attacks, for example, using pre-configured virtual machines to help affected components, adding new cloud-based services to mitigate denial-of-service attacks, or distributing tasks across different organizations.
Ismail \textit{et al.}~ \cite{8530771} propose an optimization of the defense countermeasures deployment. To design the approach, the available information is presented in an attack graph, representing the evolution of the state of the attacker in the system. Then, they find the optimal security policy to maximize the system protection using Markov decision processes. This way, countermeasures are prioritized to respond efficiently to the intrusion. Also, game-theoretic approaches can be used to improve the system response. Kiennert \textit{et al.}~ \cite{10.1145/3232848} survey strategies capable of analyzing the interactions between attackers and defenders, then responding to attacks, via game theory and Markov decision processes.
Based on how frequently attacks occur, \textit{event-triggered control} schemes, instead of time-triggered schemes, have emerged as appropriate tools to increase the resilience of control systems \cite{6425820, DIBAJI2019394}. The application of event-triggered control to the resilience of \gls*{cps} has been studied in \cite{Cetinkaya2016, YANG2017145, LEI2016286}, where the triggering function to generate a new control input is based on the errors of the state variables. Sun \textit{et al.}~ \cite{Sun__event_triggered_2022} propose an adaptive event-triggered resilient control to resist asynchronous data-injection attacks in industrial \gls*{cps} network communication. Their proposal uses a threshold that changes dynamically and adjusts the control strategy according to the attack.
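The following sketch illustrates the common structure of such triggering rules (the scalar plant, gain, and threshold are illustrative and not taken from the cited works): a new control input is computed and transmitted only when the error between the current state and the last-sampled state exceeds a threshold proportional to the state magnitude, so the unstable plant is stabilized with roughly half the transmissions of a time-triggered scheme.

\begin{verbatim}
def event_triggered_run(steps=60, a=1.1, k=-0.5, sigma=0.7):
    """Unstable scalar plant x+ = a*x + u stabilized by u = k*x_s,
    where the sample x_s is refreshed only when the triggering
    condition |x - x_s| > sigma*|x| fires."""
    x = 5.0
    x_s, u, events = x, k * x, 1           # initial transmission
    for _ in range(steps):
        x = a * x + u                      # plant evolves with held input
        if abs(x - x_s) > sigma * abs(x):  # triggering function on the error
            x_s, u, events = x, k * x, events + 1
    return x, events

x_final, events = event_triggered_run()
print(f"state {x_final:.2e} reached with {events} of 60 possible updates")
\end{verbatim}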
\textit{Advantages and limitations:} This approach provides a flexible and dynamic response mechanism that can act only when the system is under attack to provide graceful degradation. It may be designed to protect sensors, actuators, controllers, or the network traffic. However, this approach may be hard to test, and it is difficult to ensure the stability of the control feedback when combining malicious and defensive actions over the physical process. It may have hidden undesirable actions or cascade effects that may be harmful to the system. In addition, it increases the complexity, making it complicated to test all possible combinations of malicious actions and dynamic defensive configurations.
\vspace{-.25cm}
\subsubsubsection{Programmable Networks} Some other proposals are based on programmable networking, which enables efficient network configuration that can be used for neutralizing attacks. New networking functionality can be programmed using a minimal set of APIs (Application Programming Interfaces) to compose high-level services. This idea was proposed as a way to facilitate network evolution. Solutions such as Open Signaling \cite{Campbell_opensignal}, Active Networking \cite{Tennenhouse_active_network}, and Netconf \cite{Netconf}, among others, are early programmable networking efforts and precursors to current technologies such as Software Defined Networking (SDN) \cite{Kreutz_SDN_survey}. In particular, SDN is a programmable networking paradigm in which the forwarding hardware is decoupled from control decisions. SDN proposes three different functionality planes: (1) the data plane, (2) the control plane, and (3) the management plane. The data plane corresponds to the networking devices, which are responsible for forwarding the data. The control plane represents the protocols used to manage the data plane, for example, to populate the forwarding tables of the network devices. The management plane includes the high-level services and tools used to remotely monitor and configure the control functionality. Security aspects may have an impact on the different planes. For example, a network policy is defined in the management plane; the control plane then enforces the policy, and the data plane executes it by forwarding data accordingly.
The idea of using programmable networks for improving security includes the management of denial-of-service (DoS) attacks~\cite{sahay} and the segmentation of malicious traffic \cite{hadega,itl2018}. Programmable networks provide higher global visibility of the system, which is favorable for attack detection. In addition, a centralized control plane may allow further possibilities to achieve dynamic reconfiguration of network properties, e.g., the application of countermeasures. Molina \textit{et al.}~ \cite{molina_software-defined} survey approaches for SDN controllers that are able to establish different paths between sensors and actuators. Piedrahita \textit{et al.}~ \cite{PIEDRAHITA2018} use SDN and network function virtualization to facilitate automatic incident response to a variety of attacks against industrial networks. The resources are assigned after an attack is detected. SDN and cloud-enabled virtual infrastructure help to respond automatically to sensor attacks and controller attacks by rerouting malicious traffic to a honeypot and transferring the services from the compromised device to a new virtualized device.
\textit{Advantages and limitations:} Programmable networks also provide a dynamic reconfiguration capability to respond at runtime to malicious actions in the network traffic. These approaches are flexible; however, it may be hard to analyze how a new network configuration affects the network delay and jitter, which is vital in real-time applications. Also, the reconfiguration increases the network complexity, and the restoration work may induce hidden undesirable behaviors within the system.
\medskip
In this section, we have presented detection and reaction mechanisms for cyber-physical adversaries. However, despite the implemented mechanisms, it is still possible to have a system breach. For this reason, it is desirable to implement cyber-resilience-by-design approaches to absorb, survive, or recover from threats. Another cyber-resilience taxonomy can be found in \cite{book_resilience}. Cyber-resilience demands a system design that provides flexibility, adaptability, and agility to react in real-time to disturbances. In the next section, we survey techniques to build cyber-resilient systems.
\subsection{Cyber-Resilience Approaches}
\label{subsec:ch2_resilience_sota}
A growing number of technologies and architectural practices can be used to improve cyber-resilience. In the rest of this section, we cover techniques that may be used to build resilient systems. We provide a taxonomy of cyber-resilience techniques and a literature survey of different proposals that apply them.
We analyze the techniques according to the cyber-resilience phase in which they act and the \gls*{cps} layer they protect. A resilience solution may work in the absorb, survival, or recovery phase. The absorb phase limits the damage of the attack or extends the surface that the adversary has to attack to be successful, for example, by isolating resources, limiting adversary access, or changing or removing resources.
The objective of the survival phase is to maintain, or maximize the duration of, the correct functioning of the essential system mission. The recovery phase aims at transforming or reconstituting the resources to recover the functionalities after the attack. We also analyze at which level of the system design the resilience approach works. For example, it may be at the physical level, considering the hardware of the components; at the control level, to face adversaries that exploit the control-theoretic mechanisms running in the controllers; or at the network or cyber level, considering the communications or the software of the system. Table \ref{tab:techniques} sums up the different cyber-resilience strategies and the scientific proposals that use them.
\subsubsection{\textbf{Architecture Design}}
These strategies involve modifying the system architecture to improve the system's ability to absorb or survive the attack impact \cite{9646342}.
\subsubsubsection{Diversity} This technique uses a heterogeneous set of technologies to minimize the impact of an attack. Different technologies have different and independent vulnerabilities, which makes the adversary's task harder to achieve. In addition, this technique increases the adversary's uncertainty and the resources required for a successful attack.
This technique can be applied, for example, using different hardware, software, firmware, or protocols \cite{9760016}. It is worth noting that this technique requires adding new components. These components should be different from the existing ones, because merely adding redundancy leaves the system exploitable by the same adversaries using the same vulnerabilities as in the primary components.
When designing a software diversification technique, it is required to decide what to diversify and when to diversify it \cite{larsen_sok_2014}. Regarding what to diversify, a common technique is randomization, which works like a compiler optimization and can be applied, for example, at the instruction level by substituting equivalent instructions or instruction sequences, by randomizing the register allocation, or by reordering instructions. This technique can also be applied at the block, loop, function, data, or even program level. For example, at the function level, it is possible to randomize the order of function parameters or their layout in the stack to prevent buffer overflow attacks. At the program level, similar strategies can be applied to randomize the order of the functions within executables and libraries. Regarding when to apply the diversification, options are at implementation time (i.e., when coding) \cite{532621}, when compiling and linking the source code \cite{COHEN1993565, Forrest_buildingdiverse, 10.1145/2508859.2516675, Jackson2011, 10.1007/978-3-642-00730-9_10, 10.1007/978-1-4614-5416-8_8, 10.1007/978-3-540-70542-0_1}, or at installation, loading, or execution time \cite{10.1145/948109.948146, 6494997, cispa450, 4768651}.
Other diversity solutions may also work in a detection-reaction manner. For example, Ouffoué \textit{et al.}~ \cite{8029792, ouffoue:hal-03113828} use diversification to create attack-tolerant web services. They model the services to extract different implementations using variations in style, encoding, and language. The multiple service implementations allow monitoring for attacks and reacting by changing the active implementation.
In the case of hardware diversification, it is required to decide whether all the different components will be active at the same time or whether they will act as a cold backup that is activated after the primary system is attacked. For example, the authors in \cite{chaves_improving_2017} use diversity to improve cyber-resilience for industrial control systems. The strategy is implemented using primary and redundant PLCs from different vendors to enhance cyber-resilience.
\textit{Advantages and limitations:} When diversity is implemented with different implementations, it helps to create a system with independent vulnerabilities. This way, when a component is attacked, it can be disabled so that the system continues working with the diversified copy. This may be applied to sensor, actuator, and controller attacks. The advantage is that the system keeps all functionalities working and it is possible to ensure the correct behavior of the control. However, this approach may be expensive due to the required extra hardware, and it only addresses attacks at the endpoints. Also, it requires extra management and maintenance effort, for example, to apply software updates to a wider and more diverse group of components.
\vspace{-.25cm}
\subsubsubsection{Segmentation} The design of a \gls*{cps} must consider from the beginning how to prevent attacks and be more tolerant to intrusions. The network segmentation strategy separates the components logically or physically to reduce the attack surface and to contain and limit the damage of a successful attack. The components may be separated based on their criticality, trustworthiness, or functionality \cite{gengeSegmentation, 10.1007/978-3-642-41488-6_12}.
According to the results achieved in \cite{gengeSegmentation}, this technique also contributes to building more intrusion-tolerant \gls*{cps}. Network segmentation may be designed considering the Process-Aware Control approach presented in \cite{10.1007/978-3-642-41488-6_12}. It establishes that attacks on some components generate a greater risk than attacks on other components of the same system. For this reason, it is important to classify the different network components and control loops according to the impact they may have on the operation of the \gls*{cps}. This approach allows protecting the essential components in a better way. Following this idea, it also allows having the notion of \textit{more insecure} nodes (for example, a node that uses wireless communication technologies) and therefore placing them in a network segment separate from the other nodes that are considered a trust zone.
A segmented architecture can help to absorb the impact of a compromise and prevent cascading failures \cite{9693217}. A network susceptible to large cascading failures is likely to suffer severe damage from disturbances, which limits the absorption and recovery capabilities required to build a resilient system. For this reason, the dependencies and links between nodes should be designed to minimize the likelihood that a failure propagates easily from one node to another.
\textit{Advantages and limitations:} This approach is easy to implement and effective in containing the consequences of a compromised component. It also limits cascading effects and the propagation of the attack within the network. The main limitation is that it does not help to recover the system to its normal behavior.
\subsubsection{\textbf{Reconfiguration}}
There are different possible reconfiguration options. This technique requires situational awareness to select pre-considered options, ensuring the intended consequences. For example, in a denial-of-service (DoS) attack, we might dynamically over-provision additional processing capabilities. If an attack comes from the outside, we may reconfigure boundary protections and security policies. During a failure, we may shut down non-essential functions or initialize alternative capabilities to execute critical processing. We classify possible reconfigurations into the following categories.
\subsubsubsection{Isolation and Containment} These strategies aim at limiting the spread of the adversary by separating compromised from non-compromised components. For example, if an adversary controls a part of the system, it may be necessary to temporarily shut it down to close the adversary's channel while critical mission functions are completed in another portion of the system.
Kwasinski \cite{kwasinski_modeling_2020} analyzes this problem for the power grid and shows how service buffers, such as energy storage or an assured time for reestablishing data connectivity, help limit the impact of intra-dependencies on resilience. He explains that without service buffers, failures in an infrastructure component may immediately cascade within the system or onto other infrastructures. For this reason, resource buffers play a critical role in understanding cyber-physical interactions, limiting the negative effect of intra-dependencies, and improving resilience.
Xu \textit{et al.}~ \cite{xu_islanding_2020} show that isolation and reconfiguration are effective approaches for service restoration and resilience enhancement. They propose a multi-stage switch strategy based on dynamic programming that considers both isolation and fault reconfiguration. They first construct numerous expected fault scenarios and select typical ones using their information entropy; then, for each typical scenario, they derive a multi-stage switch strategy, considering both isolation and fault reconfiguration, through dynamic programming.
Bellini \textit{et al.}~ \cite{bellini_cyber_2019} analyze IoT resilience considering a network-based epidemic spreading approach. The mathematical model assesses infection and communication interactions to reduce a malware outbreak while maintaining the network functionalities at an acceptable level. Since disconnecting a network region compromises connectivity, the mobility of resources to an affected area is of critical value for the immediate local control of outbreaks and for preventing the spread.
Chen \textit{et al.}~ \cite{chen_robustness_2020} analyze how attacks on communication networks may cause cascading failures in a physical power grid. They find that clusters in the physical power grid and the communication network are mutually interdependent: even when operating as isolated subsystems, the clusters depend on each other to stay alive when cascading attacks occur. Hence, they consider survival clusters to adjust intra- and inter-links and study the robustness of the system in various attack scenarios.
Haque \textit{et al.}~ \cite{haque_modeling_2019} analyze resilience for energy delivery systems considering the criticality of cyber components and services. They estimate the criticality using the graph Laplacian matrix and the network performance after removing links (i.e., disabling control functions or services), and they also analyze cyber resilience by determining the critical devices using the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP (Analytical Hierarchy Process) methods. They consider paths as sequences of services or control functions and treat the removal of a link as disabling the service or deactivating the control function rendered by the particular device.
\textit{Advantages and limitations:} As with the previous approach, this technique is effective in containing the consequences of an attack, but it does not help to recover the system to its normal behavior. Isolating or disconnecting a component or part of the system in case of compromise prevents spread and cascading failures. However, this might also be detrimental to the overall resilience of the system if the isolated component is needed to support other components that execute damage-absorbing actions. The recovery actions should be planned with this in mind.
\vspace{-.25cm}
\subsubsubsection{Dynamic Network Composition} This technique designs the system with dynamic capabilities to face attacks, for example, by distributing tasks across different organizations. Januario \textit{et al.}~ \cite{januario_distributed_2019} propose a hierarchical multi-agent framework that is implemented over a distributed middleware with distributed physical devices. The architecture uses Software-Defined Networks and cloud-based virtual infrastructures. Physical and cyber vulnerabilities are taken into account, and state and context awareness of the whole system are targeted. Each agent executes a specific task and adapts its behavior depending on its location and environmental changes. In addition, Chen \textit{et al.}~ \cite{chen_adaptive_2020} propose an approach to improve resilience using the synchronization of multi-agent systems that addresses faults and uncertainties on communication links. For that, they transform the resilient control problem into one of distributed state observers.
Marshall \textit{et al.}~ \cite{marshall_context-driven_2019} present a context-driven decision engine for adaptive resilient control. It integrates diagnostic and prognostic heuristics to establish situational awareness and drive actions. The proposal assesses the system's state of health based on operational availability and drives control decisions based on scenario-specific constraints and priorities. Similarly, Ratasich \textit{et al.}~ \cite{ratasich_self-healing_2017} present a self-healing framework that uses structural adaptation, by adding and removing components, or by changing their interaction, at runtime. Segovia \textit{et al.}~ \cite{segovia_reflective_2020} propose an attenuation strategy that uses software-defined networks and software reflection. In case of attack, the approach dynamically creates a component in the network domain, on the fly, to assist or assume the functions of the victim node.
\textit{Advantages and limitations:} This approach changes the configuration of the system periodically and increases the attack effort. In addition, the new configuration may force the adversary to re-implement the attack with each system change. This technique may be effective against attacks that compromise controllers or the network traffic. The limitation of this approach is that it may be hard to ensure the stability of the control feedback when combining malicious and defensive actions over the physical process. Also, it increases the complexity of the system, which makes it harder to test, manage, and debug.
\subsubsubsection{Non-Persistence} This technique reduces the adversaries' opportunity to identify and exploit vulnerabilities or maintain access over resources whose access is not continuous in time. It can be applied, for example, to data, applications, or connectivity, making them only accessible during a particular time. In addition, with this technique, a system can periodically refresh to a known previous image to ensure that the current image complies with a secure configuration.
Another option is to implement reversibility. This way, components are designed in a manner that allows them to revert to a safe mode when failed or compromised. This means, first, that a component in the failed mode should not cause any further harm to other components in the system, and second, that it should be possible to reverse the state of the component in the process of recovering the system.
For example, Griffioen \textit{et al.}~ \cite{9147232} present a decentralized control system and a procedure to determine when agents should communicate with one another after having been disconnected from the network for a period of time. When agents communicate with one another, they guarantee system resilience against malicious adversaries by utilizing software rejuvenation, a prevention mechanism against unanticipated and undetectable attacks on cyber-physical systems. Without implementing any detection algorithm, the system is periodically refreshed with a secure and trusted copy of the control software to eliminate any malicious modifications to the run-time code and data that may have corrupted the controller.
Pradhan \textit{et al.}~ \cite{PRADHAN2016344} present a runtime infrastructure that provides autonomous resilience via self-reconfiguration. The approach relies on the implicit encoding of all possible states a system can reach (the configuration space), which consists of relevant information about different system goals, functionalities, services, resources, and constraints. At any given time, there is exactly one configuration point that represents the current state of a platform. At runtime, when a configuration point is deemed faulty, the self-reconfiguration infrastructure computes a valid new configuration point that belongs to the same configuration space, and then transitions, migrates, or reconfigures to the newly computed configuration point such that failures or anomalies are mitigated.
\textit{Advantages and limitations:} This approach returns the system to a previous safe and known state, which ensures correct behavior. It is effective against attacks that compromise specific devices such as sensors, actuators, controllers, routers, or switches. However, this solution does not last long, because the vulnerabilities exploited by the adversary are still present in the restored image and can be exploited again.
\subsubsection{\textbf{Moving Target Defenses}}
A static structure allows adversaries to collect information and perform long-term analysis. In addition, the uniformity of components allows adversaries to expand the damage scope after they find one vulnerability. For this reason,
Moving Target Defense (MTD) approaches provide strategies that change the system over time to increase its complexity and attack cost, or to limit the exposure of vulnerabilities \cite{zheng_survey_2019}. The mechanisms are usually applied at the network or the node level~\cite{lei_moving_2018}. Next, we summarize proposals for both levels, as well as approaches specially designed for \gls*{cps}.
\subsubsubsection{Network MTD Approaches} The \textit{endpoint information} (such as the MAC address, IP address, port, protocol, or encryption algorithm) and the \textit{forwarding path} (links and routing nodes) are two key elements of network transmission that can be used to identify the source and destination nodes. Hence, it is important to protect this information as part of the attack surface.
Some approaches that protect the endpoint information are as follows.
Antonatos \textit{et al.}~ \cite{ANTONATOS20073471} propose the use of Network Address Space Randomization (NASR) to handle worm attacks. The method analyzes and discriminates the potentially infected endpoints, and the nodes are forced to frequently change their IP address using the DHCP protocol.
Al-Shaer \textit{et al.}~ \cite{10.1007} propose Random Host Mutation, which assigns virtual IP addresses that change randomly and synchronously in a distributed way over time. To prevent disruption of active connections, the IP address mutation is managed by network appliances and is transparent to the end-host.
MacFarland \textit{et al.}~ \cite{Macfarl_thesdn} hide the endpoint MAC, IP, and port numbers by setting up a DNS hopping controller that supplies synthetic addressing information in place of the real one with the help of NAT rules. The synthetic addresses can be chosen at random within certain validity constraints.
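The common core of these endpoint-mutation schemes can be sketched as follows (the host names, address pool, and hopping interval are hypothetical; real deployments synchronize the mutation through DHCP, NAT rules, or DNS as described above): each host is reachable only through a virtual address that is re-drawn from an unused pool at every hopping interval, so addresses harvested during reconnaissance quickly go stale.

\begin{verbatim}
import random

class AddressHopper:
    """Remap each real host to a fresh virtual IP drawn from an unused
    pool at every hopping interval (sketch; names are hypothetical)."""
    def __init__(self, hosts, pool, seed=0):
        self.hosts, self.pool = hosts, pool
        self.rng = random.Random(seed)
        self.mutate()

    def mutate(self):
        """Called once per hopping interval, e.g., by a controller timer."""
        fresh = self.rng.sample(self.pool, len(self.hosts))
        self.vmap = dict(zip(self.hosts, fresh))   # host -> virtual IP

    def resolve(self, host):
        """Translation step played by the NAT rules or hopping DNS."""
        return self.vmap[host]

pool = [f"10.0.0.{i}" for i in range(2, 254)]
hopper = AddressHopper(["plc-1", "hmi-1"], pool)
print(hopper.resolve("plc-1"))   # current virtual address
hopper.mutate()                  # next interval: the old address goes stale
print(hopper.resolve("plc-1"))
\end{verbatim}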
Other approaches protect the forwarding path information, i.e., they randomly select routing nodes to change the forwarding paths while ensuring reachability. For example, Dolev \textit{et al.}~ \cite{6924217} use a secret-sharing technique to encrypt the data and create \textit{n} shares, such that fewer than \textit{k} shares are allowed to transit along the same path. In addition, to reconstruct the data, the destination needs to have at least \textit{k} of the \textit{n} shares that were sent. The approach's objective is to provide private and secure interconnection between data centers.
Aseeri \textit{et al.}~ \cite{Aseeri} propose an approach to improve the diversity of forwarding paths to deal with eavesdropping attacks in the SDN data plane. It uses bidirectional multiple routing paths to reduce the severity of data leakage. The SDN controller applies the multipath mechanism both ways, from the sender side and the receiver side. By negotiating migrating paths between source and destination, the forwarding path is changed randomly during transmission.
Duan \textit{et al.}~ \cite{6682715} propose a Random Route Mutation technique that randomly changes the routes of multiple flows in a network simultaneously to defend against reconnaissance, eavesdropping, and DoS attacks while preserving end-to-end QoS properties.
Ma \textit{et al.}~ \cite{978-3-319-50011} propose an approach for self-adaptive end-point hopping, which is based on adversary strategy awareness and implemented using SDN. This method periodically changes the network configuration used by communicating endpoints. \CH{Potteiger \textit{et al.}~ \cite{Bradley_Potteiger_2022} propose implementing MTD techniques such as address space randomization (ASR) and data space randomization (DSR) in a mixed time- and event-triggered architecture in order to maintain safety and availability during an attack. Mixing both architectures allows the system to support predictable operation during normal circumstances while maintaining rapid detection and reconfiguration during an attack. Xu \textit{et al.}~ \cite{Xiaoyu_moving_2022} propose an MTD technique with a routing randomization method based on deep reinforcement learning. This proposal improves security against eavesdropping attacks by providing finer-grained routing randomization, real-time and accurate network state awareness, and powerful decision-making. Azab \textit{et al.}~ \cite{Azab_moving_2022} propose a novel MTD approach using multi-controller management of SDN. The objective of this multi-controller approach is to detour the runtime workload among multiple controllers and to detect control misbehavior without impacting \gls*{cps} performance.}
\textit{Advantages and limitations:} This approach is similar to \textit{Programmable Networks}; the difference is that the re-configurations are periodic and not triggered by any detection. Because the system changes are pre-configured, it may be possible to better predict the response of the system after each change, including in case of an attack. The approach is effective against attacks that compromise network traffic. One limitation is that the re-configurations increase the network complexity, negatively impacting the debugging and management effort. They may also impact the network performance in each reconfiguration period, for instance, by creating loops until all the paths are updated or by affecting the latency between nodes.
\subsubsubsection{Node MTD Approaches} The platform environment and software applications can be diversified to protect against adversaries. Diversity proposes to have many forms of the same object, because this design can reduce the probability of intrusion \cite{verissimo_intrusion-tolerant_2003}. Address space, instruction, or data randomization are three typical ways to achieve platform environment diversification \cite{Forrest}. Another technique is software application isomerization. In software engineering, isomerization is a mechanism that changes code dynamically to enhance the heterogeneity of software applications while ensuring functional equivalence. Depending on the stage of the application software life cycle, it can be divided into transformation mechanisms applied during software compilation and linking, and transformation mechanisms applied during software loading and execution \cite{lei_moving_2018}. In addition, programmable reflection is a meta-programming technique that has the potential to allow a programmable system to manipulate itself at runtime~\cite{reflect_Ana}.
The previous techniques are software techniques that can be applied to a wide variety of systems. Some CPS-specific MTD approaches have been proposed to control adversaries situated in the end devices, i.e., actuators and sensors. For example, in~\cite{giraldo_moving_2019}, Giraldo \textit{et al.}~ propose an MTD strategy that randomly changes the availability of the sensor data, so that it is harder for adversaries to achieve stealthy attacks. This approach uses switched control systems that allow detecting sensor compromise and minimizing the impact of false-data injection attacks. \CH{In \cite{giraldo_moving_2022}, Giraldo \textit{et al.}~ present a novel approach for MTD using IoT-enabled Data Replication (MTD-IDR). They use linear matrix inequalities to formulate the optimization problem that selects and optimizes the number of replicas of each communicated signal in the system. This approach prevents stealthy attacks and reduces the adversary's ability to learn the system's model; nevertheless, the energy consumption increases and the available bandwidth is reduced.}
Griffioen \textit{et al.}~ \cite{griffioen_moving_2019} propose an MTD approach for recognizing and isolating \gls*{cps} integrity attacks on a set of sensors and actuators by introducing stochastic time-varying parameters in the control system. The underlying random dynamics of the system limit the adversary's knowledge of the model. \CH{Liu \textit{et al.}~ \cite{Liu_moving_2021} propose a strategy that proactively perturbs the primary control gains of the power converter devices in DC microgrids (DCmGs) to defend against deception attacks. They highlight the importance of providing explicit conditions on the magnitude and frequency of the perturbation in order to ensure the voltage stability of the system.}
Weerakkody \textit{et al.}~ \cite{weerakkody_moving_2016} propose a MTD approach to minimize identification in \gls*{cps}, i.e., to limit the adversary's knowledge of the system model to identify sensor attacks by changing the dynamics of the system as a function of time.
Kanellopoulos \textit{et al.}~ \cite{kanellopoulos_moving_2019} propose an approach to mitigate sensor and actuator attacks by formulating a control algorithm based on MTD that provides a proactive and reactive defense mechanism. It uses a stochastic switching structure to alter the parameters of the system and make it more difficult for the adversary to perform system reconnaissance. Segovia \textit{et al.}~ \cite{9266030} propose an MTD approach that periodically changes the \gls*{cps} physical model executed in each node. The system is modeled as a switched control system to improve resilience.
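A minimal sketch of this switching idea follows (the plant and gain set are invented; an actual design must verify the stability of the switched system, for instance via a common Lyapunov function): the controller hops at random among several individually stabilizing gains, so a closed-loop model identified by the adversary at one instant is already obsolete at the next.

\begin{verbatim}
import random

rng = random.Random(1)

# Illustrative scalar plant x+ = a*x + u with u = -k*x. Every gain in
# GAINS is individually stabilizing (|a - k| < 1), so the common
# Lyapunov function V(x) = x^2 decays under arbitrary switching.
a = 1.2
GAINS = [0.9, 1.0, 1.1]

def mtd_switched_run(steps=50, x0=4.0):
    x, used = x0, set()
    for _ in range(steps):
        k = rng.choice(GAINS)     # random parameter hop (the MTD move)
        x = (a - k) * x           # closed loop under the current gain
        used.add(k)
    return x, used

x_final, used = mtd_switched_run()
print(f"state {x_final:.2e} while hopping among gains {sorted(used)}")
\end{verbatim}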
\textit{Advantages and limitations:} As in the previous approach, the re-configurations are periodic and not triggered by any detection. This is an advantage because, by modeling the system as a switched control system, it is possible to better predict the stability of the system after each change and thereby ensure its correct behavior. In this case, control theory provides strong mathematical models to understand and limit the damage and to predict the system behavior under attack. This approach may be effective against sensor, actuator, controller, or network attacks, depending on how it is designed. One limitation is that the re-configurations increase the system complexity, making the debugging and testing effort larger.
\subsubsection{\textbf{Dynamic Software Evolution}}
Dynamic software evolution uses code generation or modification at runtime to adapt the system behavior and face adversaries. We can differentiate two main approaches: \textit{Runtime Code Generation} and \textit{Software Reflection}.
The former, Runtime Code Generation, is a particular case of code generation techniques used to create source code at runtime. Some languages support this feature, for example, .NET, which provides a mechanism that produces source code in multiple programming languages at runtime, based on a single model that represents the code to render in a language-independent object model. This way, programs can be dynamically created, compiled, and executed at runtime. Code generation involves creating code that never has to be modified once it is generated; if a problem arises, it should be fixed in the code generator, not in the generated source files. This technique may be used to generate diversity in the created software.
The latter, Software Reflection or Self-Modifying Code, is a technique that allows a system to adapt itself through the ability to examine and modify its execution behavior at runtime. As a mitigation technique, software reflection has the potential to allow a system to react and defend itself against availability threats. When malicious activity is detected, the system dynamically changes the implementation to activate remediation techniques that guarantee the system will continue to work. Software reflection provides the ability to analyze, inspect, and modify the structure and behavior of an application at runtime. This allows the code to inspect other code within the same system, or even itself. Reflection allows inspecting classes, examining fields, changing accessibility flags, dynamic class loading, method invocation, and attribute usage at runtime, even if that information is unavailable at compile time. It is also possible to use data marshaling to pull data from an outside source, load it into a Java object, and use reflection to execute it.
He \textit{et al.}~ \cite{he_software_2008} propose an approach to modify the software runtime architecture through meta-operators based on reflection. Similarly, Kon \textit{et al.}~ \cite{kon_case_2002} propose a reflective middleware to deal with highly dynamic environments, supporting the development of flexible and adaptive systems and applications. Mavrogiannopoulos \textit{et al.}~ \cite{10.1016/j.cose.2011.08.007} present a taxonomy of self-modifying code with the purpose of obfuscation.
\textit{Advantages and limitations:} This approach is flexible and dynamic, and it works for attacks on sensors, actuators, and controllers. However, it is harder to keep control over what is being executed in each node and over its effects on system stability. Also, due to the difficulty of understanding what is being executed, it may be harder to test, debug, and protect the system.
\subsubsection{\textbf{Consensus, Secret Sharing and Distributed Trust}}
Both consensus and distributed trust approaches have been largely investigated for general computer science problems where some of the subsystems are untrustworthy.
Consensus protocols provide resilience to the Byzantine problem, i.e., the presence of malicious nodes that send incorrect messages to deceive the system. These consensus approaches may be applied at the network level, which has been largely studied by the distributed computing research community \cite{Lamport_byzantine, Fekete_consensus, leblanc_resilient}, or at the control level, which is an active research area in the control theory community. In the latter case, at each update, the controller ignores suspicious values and computes the control input with the non-suspicious values. For example, using Distributed Kalman Filters for resilient state estimation \cite{mahmoud_distributed_2013, wen_distributed_2018} or other distributed observer strategies to manage sensor compromise \cite{severson_resilient_2020, 8270595}.
Other strategies are distributed function calculation in the presence of malicious agents \cite{5605238}, distributed multi-agent consensus \cite{amini_performance_2019, saldana_resilient_2017, meng_studies_2014, 7906513, DIBAJI2017123}, resilient vector consensus \cite{yan_resilient_2020, shabbir_resilient_2020} and resilient leader-followers consensus approaches \cite{usevitch_resilient_2019, zegers_event-triggered_2019}.
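The trimming idea shared by many of these resilient consensus protocols can be sketched as follows (a simplified MSR-style update with invented values, ignoring the network-topology conditions that the cited works establish): each node discards the $F$ largest and $F$ smallest received values before averaging, so up to $F$ Byzantine neighbors cannot drag the update outside the range of the honest values.

\begin{verbatim}
def msr_update(own, received, F=1):
    """MSR-style consensus step: discard the F largest and F smallest
    received values, then average the rest together with own value."""
    vals = sorted(received)
    trimmed = vals[F:len(vals) - F]       # drop F extremes on each side
    pool = trimmed + [own]
    return sum(pool) / len(pool)

honest_neighbors = [10.0, 10.4, 9.8]      # true quantity is around 10
byzantine = [1000.0]                      # one malicious neighbor (F = 1)
print(msr_update(own=10.2, received=honest_neighbors + byzantine))  # 10.2
\end{verbatim}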
Techniques such as secret sharing schemes \cite{Shamir_secret, Brickell_secret, beimel_secret} and distributed trust \cite{Abdul_distribTrust, Josang_distribTrust} may be used to implement, for example, mechanisms that divide the control into shares, such that the system needs to reach a given threshold before granting control, i.e., data $D$ is divided into $n$ pieces in such a way that $D$ is easily reconstructable from any $k$ pieces, but even complete knowledge of $k-1$ pieces reveals no information about $D$.
Secret-sharing schemes are important tools in cryptography used in many security problems such as multiparty computation, Byzantine agreement, threshold cryptography, access control, attribute-based encryption, distributed certificate authorities, distributed information storage, key management in ad-hoc networks, electronic voting, and many others.
The main approaches to building secret-sharing schemes are the following. Shamir's threshold approach \cite{Shamir_secret} divides the data $D$ using a random polynomial of degree $k-1$ whose constant term is the secret; the correctness and privacy of this scheme follow from Lagrange's interpolation theorem. The undirected s-t-connectivity approach \cite{beimel_secret} builds the scheme on an undirected graph whose edges are mapped to parties, so that a set of parties can reconstruct the secret exactly when their edges contain a path connecting the two distinguished nodes $s$ and $t$. Other existing schemes are based on monotone formulas, for example, the proposal of Ito \textit{et al.}~\cite{Ito_4430720906}, the monotone formulas construction \cite{10.1007/0-387-34799-2_3}, and the monotone span programs construction \cite{10.1007/3-540-46885-4_45, 336536}. Here, a monotone Boolean function is one whose output cannot decrease when any input changes from false to true. Every monotone formula computes a monotone function, and every monotone function can be implemented using only AND and OR operators.
Benaloh and Leichter \cite{10.1007/0-387-34799-2_3} proved that if an access structure can be described by a monotone formula then it has an efficient perfect secret-sharing scheme.
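As an illustration of Shamir's construction, the following self-contained Python sketch splits a secret with a degree-$(k-1)$ polynomial over a prime field and reconstructs it via Lagrange interpolation (the tiny modulus is for readability only and offers no real security):
\begin{verbatim}
import random

P = 2087  # small prime modulus, illustration only

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share x is the evaluation of the degree-(k-1) polynomial at x.
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=1234, n=5, k=3)
print(reconstruct(shares[:3]))  # 1234 -- any 3 of 5 shares suffice
\end{verbatim}
Fewer than $k$ shares leave the secret information-theoretically hidden, which is exactly the property exploited to split control among devices.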
Distributed trust aims at interacting with the most secure, honest, and trustworthy entities, because this minimizes the exposure to risky transactions.
One strategy for distributed trust is a human-like mechanism based on reputation that distinguishes between benevolent and malicious behavior. Then, using relationships and inference rules, different levels of trust are derived for other entities \cite{Josang_distribTrust}. This way, reputation is an assessment based on the history of interactions with or observations of an entity, either directly with the evaluator (personal experience) or as reported by others (recommendations or third-party verification). A second mechanism to determine trust is using policies that describe the conditions necessary to obtain trust, and can also prescribe actions and outcomes if certain conditions are met \cite{ARTZ200758}. Policies frequently involve the exchange or verification of credentials, which are information issued (and sometimes endorsed using a digital signature) by one entity, and may describe qualities or features of another entity. Also, Distributed Ledger Technologies, like Blockchain, are characterized by transparency, traceability, and security by design. These features make the adoption of Blockchain attractive to enhance information security, privacy, and trustworthiness in very different contexts, including distributed trust \cite{8970496}.
\textit{Advantages and limitations:} This approach is useful to address compromised components and it is effective up to a bounded number of compromised devices. As a result, the information used for the feedback control is more accurate and it is harder to execute commands based on fake information. However, when the majority is wrong, the stability and the correct behavior of the system are also compromised. Another limitation is the time required to synchronize the information between all the nodes. For this reason, the decision process can take a long time, which is not suitable for real-time applications.
\subsubsection{\textbf{Game Theory}}
Approaches based on game-theoretic strategies use mathematical models to analyze situations where players choose different actions in an attempt to maximize their returns \cite{ILAVENDHAN201846}. Game theory studies the decisions made in an environment in which multiple players interact with each other in a strategic setup. This means that game-theoretic approaches provide resilience by trying to maximize the cost of attacking the system or to minimize the damage that an adversary can inflict on the system. For that, each player tries to optimize an objective function. This objective function depends on the choices of the other players in the game. Thus, each player cannot optimize its objective independently of the choices of the other players.
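As a toy illustration of such coupled objectives, the sketch below computes a defender's optimal randomized strategy in a $2\times 2$ zero-sum defender/adversary game by minimizing the worst-case loss (the payoff values are purely illustrative):
\begin{verbatim}
import numpy as np

# L[i, j]: defender's expected loss when the defender plays
# action i and the adversary responds with action j.
L = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Defender mixes actions with probabilities (p, 1 - p) and
# minimizes the worst case over the adversary's pure responses.
p = np.linspace(0.0, 1.0, 10001)
worst = np.maximum(p * L[0, 0] + (1 - p) * L[1, 0],
                   p * L[0, 1] + (1 - p) * L[1, 1])
i = worst.argmin()
print(f"p* = {p[i]:.3f}, value = {worst[i]:.3f}")
# Closed form for this matrix: p* = 0.25, value = 2.5.
\end{verbatim}
Richer models in the surveyed papers replace this single-shot matrix with dynamic, multi-stage, or incomplete-information games, but the fixed-point structure of mutually dependent objectives is the same.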
This technique has been proposed to respond to attacks where the defender chooses the optimal response according to the adversary actions. Game theory provides tools to model advanced adversaries who know the defense strategies and can adjust the attack strategies accordingly. In addition, it is possible to define games in both physical and cyber layers.
In recent years, there have been many proposals on game-theoretic approaches for \gls*{cps}. For example, Huang and Zhu \cite{huang_dynamic_2020} propose a dynamic game for long-term interaction between a stealthy adversary and a proactive defender. The stealthy and deceptive behaviors are captured by a multi-stage game of incomplete information, where each player has private information unknown to the other. Both players act strategically according to their beliefs, which are formed by multi-stage observation and learning. In addition, Hasan \textit{et al.}~ \cite{hasan_game-theoretic_2020} design an adversary-defender game-theoretic model for power systems. The adversary can identify the chronological order in which the critical substations and their protection assemblies can be attacked to maximize the overall system damage. The defender can intelligently identify the critical substations to protect such that the system damage can be minimized. Ismail \textit{et al.}~ \cite{10.1007/978-3-319-47413-7_10} model the interactions between an attacker and a defender and derive the minimum defense resources required and the optimal strategy of the defender that minimizes the risk. The solution is analyzed in power systems. Also, Rao \textit{et al.}~ \cite{rao_resilience_2015} propose a resilience approach using a game-theoretic formulation to face adversaries. Their objective functions consist of an infrastructure survival probability and a cost expressed in terms of the number of components attacked and reinforced. Zhu and Basar \cite{zhu_game-theoretic_2013} propose a game-theoretic approach to manipulate the attack surface of the network and create a moving target defense. The notion of attack surface is defined as the set of vulnerabilities of the system that can potentially be exploited by the adversary. The essential goal is to find an optimal configuration policy for the defender to shift the attack surface in a way that minimizes its risk and damage.
Game-theoretic approaches have also been proposed to learn adversary models and estimate their knowledge about the system dynamics. For example, Sanjab and Saad \cite{sanjab_bounded_2016} propose a game-theoretic approach to analyze the interactions between one defender and one adversary over a \gls*{cps}. In this game, the adversary launches cyber-attacks on several cyber components of the \gls*{cps} to maximize the potential harm to the physical system while the system chooses to defend a set of cyber nodes to thwart the attacks and minimize potential damage to the physical side. Similarly, Kanellopoulos and Vamvoudakis \cite{kanellopoulos_non-equilibrium_2019} consider the problem of identifying the cognitive capabilities of adversaries. To categorize them, they use an iterative method of optimal responses that determine the policy of an agent with a determined level of intelligence. Then, they formulate a learning algorithm to train the different intelligence levels without any knowledge about the physics of the system.
\textit{Advantages and limitations:} This approach provides a quantitative mechanism for deciding the optimal strategy to face an attack. It may be effective for attacks on sensors, actuators, controllers, or network traffic depending on how the approach is designed. However, most of the existing proposals focus on cyber or network aspects, without considering the physical model of the process. In addition, as the decisions are calculated at runtime it may be hard to analyze and predict the stability of the physical process.
\section{Discussion}
\label{sec:ch2_why_CT}
Control theory and cybersecurity are research areas that provide significant contributions to solve security issues in \gls*{cps} from different perspectives. Similarly to the IoT domain~\cite{10.1145/3462513}, resilience in the \gls*{cps} domain is a dual problem with one part in the cyber world and the other part in the physical one. As pointed out in \cite{sanchez_bibliographical_2019, 6580348}, these are complementary disciplines that, working together, have the potential to provide more efficient and effective solutions.
Control theory provides models that precisely describe the underlying physical process, which enables the prediction of future behavior and unforeseen deviations from it. It models the system to analyze attacks and their corresponding detection, mitigation, and recovery schemes. The cybersecurity research community also offers different approaches for numerous security problems in \gls*{cps}. Such approaches typically focus on the cyber aspects, such as communication networks, protocols, software, and data.
According to \cite{6580348}, \gls*{cps} security can be divided into two main categories: information security, which focuses on cyber and data security and provides methods that are effective on software layers without using any physical model; and secure control theory, which studies how cyber-attacks affect the control system's physical dynamics. Ensuring safety using only information security tools is not sufficient for \gls*{cps}. Therefore, such tools should be complemented with secure control theory, which provides an attack model and a description of the interaction between the physical world and the control system. This enables a better understanding of the attacks' consequences, and the development of new detection methods, algorithms, and architectures that make control systems more resilient to possible attacks and failures.
Certain attacks are undetectable by traditional control-theoretic approaches, for example in situations when the adversary modifies inputs and outputs to be correlated with the estimated model, or when the values are chosen by the adversary to fulfill certain properties as described in \cite{teixeira2012attack,Teixeira2015}. The incorporation of cybersecurity strategies into control-theoretic approaches provided new tools to address this issue, as explained in Section \ref{sec:ch2_techniques}. Moreover, cybersecurity approaches do not cover all the possible vulnerabilities in the cyber components. Mechanisms to protect against specific vulnerabilities may not exist or may be too expensive to implement, and even when they are implemented they are not free of false negatives.
Furthermore, due to the strong coupling between cyber and physical domains, the tools and methodologies developed to ensure cybersecurity are insufficient to secure \gls*{cps}. For instance, they can fail against purely physical attacks. As an example \cite{weerakkody_resilient_2020}, the confidentiality of encrypted sensor measurements can be violated by placing unencrypted malicious sensors in close proximity to encrypted sensors. The integrity of sensor measurements can be modified by changing a sensor’s local environment while control inputs can be changed by directly manipulating system actuators. In such a scenario, message authentication codes or digital signatures fail to recognize an attack. Availability can be compromised by physically shielding sensors and actuators. In this case, anti-jamming and denial-of-service techniques will fail.
The large scale of a \gls*{cps} may make physical protection impractical, leaving the system vulnerable to the previous examples. However, in addition to the exposed vulnerabilities created by basic physical attacks, it is possible to create more advanced cyber-physical attacks that generate the same physical effects but use a remote connection and inject malicious traffic. As shown in Section \ref{sec:CPS_attacks}, malicious traffic can be confused with legitimate traffic and be undetectable. This way, by using control theory models, it is possible to implement new advanced and coordinated attacks to exploit \gls*{cps}. These attacks are capable of bypassing cyber detection as discussed in the literature: the false data injection attack \cite{Mo2010FalseDI, mo2010false}, the replay attack \cite{Mo_2014}, the zero-dynamics attack \cite{teixeira2012revealing}, and the covert attack \cite{smith2011decoupled}. Last but not least, insider adversaries and human errors that generate security breaches also have to be considered to ensure safety.
\section{Open challenges}
\label{sec:ch6_future_work}
The limitations highlighted in the previous section open several guidelines for future research work on the subjects surveyed in this article that would be beneficial for wider adoption of resilience methods and techniques for \gls*{cps}. Some representative guidelines are briefly presented next, in this section.
\vspace{-.35cm}
\subsection{System Modeling}
In terms of modeling, a \emph{higher interaction between system components}, e.g., cyber and physical components, would make the results more consistent and convincing. Indeed, a proper combination of the cyber-network and control-physical layers could be expanded towards next-generation cyber-physical systems able to properly correlate and repair cross-layer security incidents.
Most of the existing resilience techniques and measures focus on protecting the network, software or physical components in an independent manner. In a \gls*{cps}, these elements work together, and coordinated actions that attack vulnerabilities in the different components may have dangerous consequences.
More integration between the different layers creates systems with better capabilities to react to and defend against adversaries.
For that reason, resilience techniques should integrate these concepts and have a global view of the components and their interaction, because approaching the problem with partial and independent views is not enough to solve the existing security issues.
Concerning \emph{resilient control and attack models}, the control theory domain appears more mature than the computer science and cybersecurity fields. However, the integration of both domains creates new challenges that need to be addressed, for example, how to create attack-tolerant control, i.e., how to design robust control that considers possible attacks. Proactive algorithms and system architectures that are robust to attacks, ensure stability and meet performance thresholds are still required. In addition, the state of the cyber and network components should also be taken into account to consider factors such as the nodes' states and quality of service. To achieve that, it is also necessary to improve the existing attack models, i.e., create attack models that better characterize the capabilities of the adversaries. One adversary model, developed in \cite{Teixeira2015}, is based on the resources available to an adversary. However, better models are still required, including information such as the adversaries' computational power, the type of access they may have, the data they collect, their collaborative capabilities, and the signals they have access to. This information helps to understand the logic behind the associated defense mechanisms, to improve them, and to compare them with other security mechanisms.
A promising research opportunity related to the topics of this article can be explored around the use of \emph{digital twins}. In Section \ref{sec:ch2_CPS}, we presented how to design control loops using Kalman filters as estimators for the stochastic case.
Such filters explicitly use a noise model for both state and output processes, considering the stochastic nature of the dynamical system. Thus, they are more appropriate for \gls*{cps} and, in general, perform better for stochastic systems. Conversely, an observer, such as the Luenberger observer \cite{luenberger_introduction_1971}, is typically restricted to the deterministic case, i.e., when there is no randomness in the states. Observers are used to estimate unmeasured states of a system and have been proposed to detect attacks in \gls*{cps}. The principles of estimators and observers are similar: an observer is a continuous-time dynamical system that takes as input the measured input and measured output of the system, and produces an estimate of the state of the system as output.
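For concreteness, the following sketch implements a Luenberger-style observer for an assumed two-state system (discrete-time for simplicity; the matrices and the observer gain are illustrative values chosen so that $A-KC$ is stable, not taken from any cited work):
\begin{verbatim}
import numpy as np

# Illustrative plant x+ = A x + B u, y = C x, with observer gain K.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.5], [0.5]])   # makes A - K C stable

x = np.array([[1.0], [0.0]])   # true (unmeasured) state
x_hat = np.zeros((2, 1))       # observer's estimate

for _ in range(50):
    u = np.array([[0.0]])
    y = C @ x                  # only the output is measured
    # Luenberger update: model copy + output-error correction.
    x_hat = A @ x_hat + B @ u + K @ (y - C @ x_hat)
    x = A @ x + B @ u

print(np.round((x - x_hat).T, 4))  # estimation error decays to ~0
\end{verbatim}
Residuals such as $y - C\hat{x}$ are precisely the signals that model-based detectors monitor, so a digital twin naturally doubles as an attack detector.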
\vspace{-.35cm}
\subsection{Metrics and Evaluation Methods}
More effort is needed on \emph{complexity management to anticipate impacts on resilience}, e.g., evaluating, for a given resilience approach, how to manage the complexity of the proposal and how to anticipate the impact it may have on the system resilience. The resilience of a system is influenced by several factors that can be managed or exploited to enhance resilience \cite{linkov_fundamental_2019, book_resilience}. Every resilience-enhancing measure can also cause a negative effect leading to an overall reduction in resilience. For example, to improve resilience, it may be necessary to introduce more complexity, such as new connections, new components, or more diversity. As the number and heterogeneity of components grow, they offer more opportunities to regenerate the system. Agents may be able to use additional links to different elements or find replacement resources to ultimately restore the system's functions. However, high complexity may lead to interactions that are hard to understand, analyze and protect, causing unforeseen side effects. As a result, greater complexity may also reduce the resiliency of the system. Another example is fail-safe designs that disconnect a component or part of the system in case of compromise. This action prevents the spread of failures and cascade effects. However, this might be detrimental to the overall resilience of the system if the component is needed to support other components that execute damage-absorbing actions.
The increase in complexity may lead to lower resilience by increasing the number of ways in which one failed component may cause the failure of another. Therefore, in most cases, greater complexity should be avoided when possible unless it directly supports resilience functions.
As a consequence, an analysis of the performance impact of an approach is not enough on its own. Resilience proposals may also have hidden impacts on the system behavior and complexity that should be evaluated to account for a possible reduction in the overall resilience. The quantification and evaluation of this aspect is not trivial. An approach should never be implemented in production systems without an appropriate evaluation of these factors. How to appropriately analyze and measure the resilience enhancement, in order to reveal potential negative impacts and systemic effects, is another direction for future research.
Another line in terms of metrics and evaluation methods relies on \emph{safety assurance and testing automation}. \gls*{cps} normally provide critical functionalities. It is essential to ensure stability and correct behavior even under an attack, when the inputs are specially crafted for malicious purposes. In addition, triggering defensive actions increases the complexity of the system. Hence, with all these aspects happening at the same time, it may be hard to ensure that safety-critical functions will continue to work properly in any context or situation. Testing and validating security proposals to ensure physical safety is still an open issue.
\subsection{Testing and Validation Environments}
There is a need to develop global approaches in terms of \emph{scalability validation}. Indeed, real \gls*{cps} may scale into networks with hundreds or thousands of devices. Conti \textit{et al.}~ \cite{conti2021survey} survey validation testbeds and datasets for \gls*{cps}. From their analysis, we can observe that scalability makes it difficult to test the system in an integrated manner considering physical, network and cyber components. To test scalability, simulation tools are normally used, but they abstract away or omit the physical process, which is the essential part of the \gls*{cps}. The ideal validation option is experimental testbeds, which may be expensive; moreover, only a limited number of stable testbed scenarios exist. Thus, testing scalability while combining physical process, network and software components is still a challenge. We highlight the need for better \gls*{cps} testing and validation environments. Numeric simulation tools, such as Matlab\textsuperscript{\textregistered} and Simulink\textsuperscript{\textregistered}, do not integrate the network and cyber aspects. Network simulation tools are conceived for traditional IT systems and do not integrate the physical process. Hence, performance validation in simulation platforms only gives a partial overview of the whole problem.
In particular, for testing network aspects, it is not enough to test with a reduced number of devices. This presents two issues. First, creating such a testbed is not easy due to the required investment. Second, the existing testbed scenarios consider only a limited number of devices.
The lack of realistic scenarios is mainly due to the complexity of creating system models describing the different aspects of a physical process, such as the existing physical process reactions, the physical model involved in those reactions, the physical equipment or components required, the safety and operating constraints, the operating cost function, the sensor signal noise, and the process randomness, among others \cite{krotofil2015rocking}. Designing such a system is a huge effort, and insights into real industrial systems are often not possible due to justified confidentiality concerns.
\section{Conclusion}
\label{sec:conclusion}
\CH{In Cyber-Physical Systems (CPS), adversaries may disrupt physical processes by injecting malicious traffic; e.g., cyber-physical attacks may use coordinated cross-layer techniques to get control over the cyber or network layers and disrupt the physical devices. For this reason, attacks on critical processes may end up affecting people, physical environments and companies. To develop comprehensive protection for \gls*{cps}, it is required to layer the three following protection mechanisms: prevention to postpone the attack as much as possible, detection-reaction to identify attacks and attenuate them, and cyber-resilience to contain the impact of the attack while keeping the essential services running and restoring normal operation as soon as possible.}
\CH{Cyber-resilience is essential for critical systems which monitor industrial and complex infrastructures based on networked control systems \cite{10.1007/978-3-319-76687-4_3}. If the defense strategy relies only on detection and reaction approaches, the system is not protected in case of false negatives, i.e., undetectable attacks or extremely rare events that are not considered in risk management. Attacks might also come from inside, for example, from highly skilled employees acting as malicious insiders. The knowledge that such insiders possess about the system gives them unrestricted access to steal or modify data or even deactivate critical functionalities. It is important to have a \gls*{cps} capable of maintaining the stability of the system during such situations. The system should be protected at all times, including the time required for detecting and responding to attacks. Otherwise, the system could experience disruption, leading to damages.}
\CH{In this article, we presented a systematization of knowledge about existing scientific efforts on making \gls*{cps} cyber-resilient. We systematically surveyed recent literature addressing the topic, with a specific focus on techniques that may be used on \gls*{cps}. We started by surveying control-theoretic formalisms for \gls*{cps} and cyber-physical attacks.}
Then, we analyzed detection and mitigation techniques to protect \gls*{cps}. We surveyed some current trends in terms of detection based on control-theoretic model-based approaches that incorporate the physical model to detect cyber-physical adversaries. We also surveyed
mitigation techniques aiming to optimize the recovery response of a system under attack. The proposals to build cyber-resilient systems revolve around techniques such as diversity, segmentation, resilient control, system reconfiguration, dynamic software evolution, moving target defense, consensus and game theory paradigms. These techniques provide the ability to absorb, survive or recover from an attack.
We discussed how the techniques have evolved and we brought clarity to this complex field by treating the major axes of resilience techniques. We identified that the difference between the detection-reaction paradigm and resilience is not clearly defined in the literature, and often the two concepts are confused. This problem has several causes. Firstly, resilient designs are not easy to conceive: our natural way of reasoning about security is to detect the problem and then react. Another reason is probably that control theory and computer science have different definitions of the resilience concept. Control theory calls resilient a controller that can keep an understanding of the system state and calculate correct control signals despite malicious information injected at any point of the control loop. To achieve this, the control theory community normally uses approaches that in computer science are considered detection-reaction approaches. On the other hand, from a computer science perspective, a resilient system is capable of preparing for, absorbing, recovering from, and adapting to adverse effects. Or, as we prefer to define it, a resilient system is capable of maintaining the core set of critical functionalities despite ongoing adversarial misbehavior and of guaranteeing the recovery of normal operation within a predefined cost limit.
\CH{As a result of the literature analysis, we identified plenty of research efforts in terms of detection techniques and state estimation to maintain an awareness of the system state despite an attack. However, much less effort exists in terms of remediation approaches to attenuate attacks. We identified a lack of resilience techniques adapted to the particular needs of \gls*{cps}. The research in resilience for \gls*{cps} can be extended, and we pointed out several promising directions for future work, with a focus on the practical aspects of cyber-resilience, such as the use of metrics and evaluation methods, as well as testing and validation environments.}
\medskip
\noindent \textbf{Acknowledgements ---} \CH{The authors thank the anonymous referees for their valuable comments and helpful suggestions. The authors also acknowledge support from the Cyber CNI chair of the Institut Mines-T\'el\'ecom, as well as support from the European Commission, under grant agreement 830892 (H2020 SPARTA project).}
\bibliographystyle{unsrt_MS}
\section{Introduction}
Low-Power Wide-Area Networks (LPWAN) form a new class of technologies providing massive connectivity for the Internet-of-Things (IoT).
LPWAN technologies focus on Ma\-chi\-ne-Type Communications (MTC), especially on lightweight sensor network applications.
The most prominent LPWAN technologies are LoRaWAN, SigFox, and NB-IoT.
LoRaWAN has been widely used in academia due to openness and because it works in the unlicensed Industrial, Scientific, and Medical (ISM) bands~\cite{Centenaro:IEEEWC:2016}.
Several independent initiatives pushed the technology forward, making it available virtually everywhere.
Recent research on LoRaWAN shows that it may embrace the requirements of massive IoT applications.
Georgiou and Raza~\cite{Georgiou:WCL:2017} propose an analytic model of LoRaWAN disconnection and collision probabilities in Rayleigh fading channels. Disconnection considers the average probability that the signal-to-noise ratio (SNR) of a packet is below a reception threshold, while collision considers the probability that the signal-to-interference ratio (SIR) of the same packet is below a threshold. The model captures the LoRaWAN sensitivity to collisions due to increased network usage, even though their SIR model only considers the dominant interferer.
Hoeller \textit{et al.}~\cite{Hoeller:Access:2019} extend~\cite{Georgiou:WCL:2017} and adapt the SIR model to consider several interference sources.
Mahmood \textit{et al.}~\cite{Mahmood:2019}, as well as \cite{Georgiou:WCL:2017} and \cite{Hoeller:Access:2019}, use stochastic geometry to build analytic coverage probability models for LoRaWAN and propose a path loss-based method to define network geometry.
Reynders \textit{et al.}~\cite{Reynders:ICC:2017} propose a power and data rate (spreading factor, SF) allocation method based on clustering for the NS-3 simulator.
Aligned to the problem we address, Abdelfadeel \textit{et al.}~\cite{Abdelfadeel:WoWMoM:2018} assess the performance of Adaptive Data Rate (ADR)-enabled LoRaWAN, achieving results similar to our theoretical analysis, and Li \textit{et al.} \cite{Li:Globecom:2018} study ADR convergence, both through simulations.
In this work, we review the analytic models for single-cell LoRaWAN and propose an adaptation to include the ADR feature.
Although multi-cell systems are likely to shape the topology of LoRaWAN networks in dense urban deployments, single-cell systems are still of interest for deployments in small towns or villages, industrial plants, and in the agribusiness sector, where a dedicated single-cell LoRaWAN system may support a known number of users and applications.
Analytic models allow for faster evaluation and insights that are hard to obtain from simulations.
We validate our analytic model through Monte Carlo simulations.
Following \cite{Hoeller:Access:2019}, we use our model to plan the network deployment to respect a maximum outage probability.
We show that power control considerably reduces interference, increasing network capacity by up to $56.7\%$ and reducing average transmit power by roughly 27\%.
The main contributions in this letter are the performance analysis of ADR-enabled LoRaWAN and a simple closed expression for its outage probability in steady-state operation.
We assume the network reaches steady-state when ADR converges for all nodes, and their SF and transmit power configuration remain unchanged, as defined in \cite{Li:Globecom:2018}.
The performance analysis shows that ADR is an important feature of the technology and that it must be taken into account.
The closed-form expression assumes, as in \cite{Li:Globecom:2018}, that a network with static nodes converges to RSSI-based SF and transmit power figures, implementing, in practice, a truncated channel inversion scheme \cite{ElSawy:TWC:2014}.
Also, transient periods occur when channel or network conditions change, and the time to return to steady-state depends on application and deployment scenario~\cite{Li:Globecom:2018}.
\section{Baseline LoRaWAN Model}\label{sec:baseline}
LoRaWAN employs LoRa transceivers in the PHY layer, operating in sub-GHz frequencies (\textit{e.g.}, 868~MHz in Europe, 915~MHz in USA and Brazil) with Chirp Spread Spectrum modulation~\cite{Semtech:SX1276:2019}.
A key feature of LoRa modulation is the configurable SF rate.
As shown in Table~\ref{tab:lora}, higher SF rates increase signal robustness at the expense of transmission rate.
Since LoRa is a form of frequency modulation, it features the capture effect, where the receiver retrieves a colliding packet if it is sufficiently stronger than the interference.
The SIR threshold for the successful reception of a packet is $6$~dB~\cite{Semtech:SX1276:2019}.
A typical LoRa transceiver can use different transmit power ($\mathcal{P}$).
The Semtech SX1276 LoRa transceiver under European regulations admits 16 levels of transmit power between -1dBm and +14dBm, in 1dB steps.
\vspace{-.3cm}
\begin{table}[h]
\centering
\caption{LoRaWAN Uplink characteristics for packets of 19 bytes (13-bytes header, 6-bytes payload)~\cite{Semtech:SX1276:2019}.}
\label{tab:lora}
\scalebox{0.95}{
\begin{tabular}{@{}ccccc@{}}
\toprule
\textbf{\begin{tabular}[c]{@{}c@{}}SF\\ $i$\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}ToA\\ $t_i$ (ms)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Bitrate\\$Rb_i$ (kbps)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Receiver Sensitivity\\ $\mathcal{S}_i$ (dBm)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}SNR threshold\\ $\psi_i$ (dB)\end{tabular}} \\ \midrule
7 & 51.46 & 5.46 & -123 & -6 \\
8 & 102.91 & 3.12 & -126 & -9 \\
9 & 185.34 & 1.75 & -129 & -12 \\
10 & 329.73 & 0.97 & -132 & -15 \\
11 & 741.38 & 0.53 & -134.5 & -17.5 \\
12 & 1318.91 & 0.29 & -137 & -20 \\ \bottomrule
\end{tabular}
}
\end{table}
In its most commonly used operating mode, known as class A, LoRaWAN implements a variation of unslotted ALOHA in a star network topology where nodes reach the gateway, which in turn connects to a network server via an IP network.
\vspace{-.3cm}
\subsection{Network Model}
We model the spatial distribution and activity of LoRaWAN nodes with stochastic geometry~\cite{Haenggi:Book:2012}.
We divide the network into SF rings according to the distance from the node to the gateway.
The vector $L=[l_0,\ldots,l_6], l_0=0,$ defines the SF ring edges, with $R=l_6$ as the coverage radius.
For simplicity, $S=\{1,\ldots,6\}$ is the set of SF rings, and each ring uses a respective SF in the set $\{7,\ldots,12\}$.
We consider that all nodes run the same application.
Thus network usage differs for each SF because of different data rates (see Time-on-Air/ToA in Table~\ref{tab:lora}).
We also assume that devices generate a packet for transmission once every $T$ seconds and that the packet is transmitted with a given probability according to the pure ALOHA protocol.
The transmission probability is a vector $p = [p_1, \ldots, p_6], p_i \in (0,1]~ \forall i \in S$, and $p_i = t_i/T$, where $t_i$ is the ToA of the packet with the SF of ring $i$.
For example, Figure~\ref{fig:nodes} presents a network configuration with $\overline{N}=250$ nodes and network geometry ($L$), obtained to ensure $0.99$ connection probability according to the method we describe in Section~\ref{sec:planning}.
\begin{figure}[tb]
\centering
\includegraphics[width=.7\columnwidth]{hoeller_WCL1630_fig1.eps}
\caption{Sample of $\overline{N}=250$ nodes uniformly distributed in an area of radius $1200$m and with SF allocation for 1\% maximum disconnection probability.}
\label{fig:nodes}
\end{figure}
Each SF ring constitutes a separate PPP $\Phi_i$ with intensity $\alpha_i=p_i\rho_i$ in its area $V_i = \pi (l_i^2 - l_{i-1}^2)$, where $l_{i-1}$ and $l_i$ form its inner and outer edges.
$\rho_i = \overline{N}_i/V_i$ is the spatial density of nodes in ring $i$.
The average number of nodes in $\Phi_i$ is $\overline{N}_i = \rho_i V_i$.
The average total number of nodes is $\overline{N} = \sum_{i \in S} \overline{N}_i$.
The coverage area is $V=\pi R^2$.
For instance, take ring $i=5$ (SF$_{11}$) in Figure~\ref{fig:nodes}, defined by two circles of radii $l_4=789.5$m and $l_5=973.4$m.
The ring area is $V_5=\pi(l_5^2 - l_4^2)=1.02$ km\textsuperscript{2}.
With $\overline{N}_5=\rho_5 V_5=50$ nodes in the ring, the spatial density is $\rho_5=\overline{N}_5/V_5=49.1$ nodes/km\textsuperscript{2}.
If the transmit probability is $p_5=0.01$, then the intensity of $\Phi_5$ is $\alpha_5=p_5\rho_5=0.49$.
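The numbers in this example are easy to reproduce; a minimal Python sketch:
\begin{verbatim}
import math

# Ring 5 (SF11): edges l4, l5 in meters, 50 nodes, p5 = 0.01.
l4, l5, N5, p5 = 789.5, 973.4, 50, 0.01

V5 = math.pi * (l5**2 - l4**2) / 1e6  # ring area in km^2
rho5 = N5 / V5                        # spatial density, nodes/km^2
alpha5 = p5 * rho5                    # intensity of the PPP Phi_5

print(f"V5={V5:.2f} km^2, rho5={rho5:.1f} /km^2, alpha5={alpha5:.2f}")
# -> V5=1.02 km^2, rho5=49.1 /km^2, alpha5=0.49
\end{verbatim}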
In our analysis, $d_k$ is the Euclidean distance between the $k$-th node and the gateway, and $d_1$ denotes the distance of the node of interest to the gateway. We use the subscript ``1'' whenever a variable refers to the node under analysis.
Nodes use a transmit power $\mathcal{P}_k$ to send signal $s_k$, and both path loss and Rayleigh fading $h_k$ affect the signal $r_1$ received at the gateway.
Path loss follows $g_k = \left (\frac{\lambda}{4 \pi d_k} \right )^\eta$, with wavelength $\lambda$, and path loss exponent $\eta>2$.
Therefore
\begin{align}
r_1 &= s_1\sqrt{\mathcal{P}_1g_1}h_1 + \sum\nolimits_{k \in \Phi_i} {s_k\sqrt{\mathcal{P}_kg_k}h_k} + n,
\end{align}
where the first term is the attenuated signal of interest, the second is interference, $i$ is the ring of $s_1$, and $n$ is the zero-mean additive white Gaussian noise (AWGN) of variance $\mathcal{N}$.
\vspace{-.5cm}
\subsection{Outage Probability}
We consider that communication outage occurs due to disconnection or interference, which are, respectively, conditioned on the realization of the SNR and the SIR of a transmitted packet.
We base our analysis on the stochastic geometry model of the SINR of Poisson Bipolar Networks with Rayleigh fading in~\cite[Theorem 5.7]{Haenggi:Book:2012}.
Disconnection depends on distance and happens if the SNR is below the threshold $\psi_i$ (see Table~\ref{tab:lora}).
The disconnection probability is~\cite{Georgiou:WCL:2017}
\begin{align}
H_0(d_1,\mathcal{P}_1) = \mathbb{P}[\textup{SNR} < \psi_i] = \mathbb{P} \left[ \frac{\mathcal{P}_1 g_1 |h_1|^2}{\mathcal{N}} < \psi_i ~\biggr|~ d_1 \right], \nonumber
\end{align}
with $i$ indicating the SF ring in use by the node under analysis.
With known $d_1$ and $\mathcal{P}_1$, we condition $H_0$ to the probability of the Rayleigh fading power in $|h_1|^2\sim\exp(1)$, so
\begin{align}
H_0(d_1,\mathcal{P}_1) &= 1 - \textup{exp}\left( - \frac{\psi_i\mathcal{N}}{\mathcal{P}_1 g_1} \right). \label{eqn:h0}
\end{align}
The outage due to interference ({\it i.e.}, collision with other packets) considers the capture effect.
Thus, the collision probability concerning the SIR threshold $\delta$ is~\cite{Hoeller:Access:2019}
\begin{align}
Q_0(d_1,\mathcal{P}_1) \!=\! \mathbb{P}\![\textup{SIR} \!<\! \delta | d_1]
\!=\! \mathbb{P} \left[ \frac{\mathcal{P}_1 g_1 |h_1|^2}{\sum_{k\in\Phi_i} \mathcal{P}_k g_k |h_k|^2} \!<\! \delta \biggr| d_1 \right]. \label{eqn:q0}
\end{align}
\section{Power Allocation for LoRaWAN}\label{sec:model}
When considering transmit power allocation, $\mathcal{P}_k$ may be different for each node.
We assume that nodes at the edge of each SF ring use the highest available transmit power ($\mathcal{P}_{max}$) to extend the coverage area.
Considering a predefined target outage due to disconnection ($\mathcal{T}_{H_0}$), we define the network geometry by making $H_0(l_i,\mathcal{P}_{max}) = \mathcal{T}_{H_0}$, so that
\begin{align}
l_i &= \frac{\lambda}{4\pi} \left( - \frac{\mathcal{P}_{max}\textup{ln}(1-\mathcal{T}_{H_0})}{\mathcal{N}\psi_i} \right)^{\frac{1}{\eta}}. \label{eqn:l_i}
\end{align}
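Numerically, \eqref{eqn:l_i} is straightforward to evaluate; the sketch below does so for all SF rings (the path-loss exponent $\eta=2.8$ is an assumed illustrative value for a suburban scenario, so the printed radii are indicative only):
\begin{verbatim}
import math

def ring_edge(psi_dB, P_max_dBm=14.0, T_H0=0.01, eta=2.8,
              f_c=868e6, N_dBm=-117.0):
    """Outer edge l_i of an SF ring (closed form above)."""
    lam = 3e8 / f_c                    # wavelength [m]
    P_max = 10 ** (P_max_dBm / 10)     # dBm -> mW
    N = 10 ** (N_dBm / 10)
    psi = 10 ** (psi_dB / 10)          # dB -> linear
    return (lam / (4 * math.pi)
            * (-P_max * math.log(1 - T_H0) / (N * psi)) ** (1 / eta))

for sf, psi_dB in zip(range(7, 13), [-6, -9, -12, -15, -17.5, -20]):
    print(f"SF{sf}: l = {ring_edge(psi_dB):7.1f} m")
\end{verbatim}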
We also use~\eqref{eqn:h0} to define the minimum transmit power the $k$-th device must use to ensure $\mathcal{T}_{H_0}$ as
\begin{align}
\mathcal{P}_{k_{min}} &= - \frac{\mathcal{N}\psi_i}{\textup{ln}(1-\mathcal{T}_{H_0}) g_k}. \label{eqn:pkmin}
\end{align}
In practice, $\mathcal{P}_{k_{min}}$ should be rounded up to the immediately higher value available.
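A sketch of the resulting allocation rule from~\eqref{eqn:pkmin}, including the rounding to the discrete SX1276 grid (again with the assumed $\eta=2.8$; a node outside coverage would need more than 14~dBm and is not handled here):
\begin{verbatim}
import math

LEVELS_dBm = range(-1, 15)  # available transmit powers (EU)

def p_min_dBm(d, psi_dB, T_H0=0.01, eta=2.8,
              f_c=868e6, N_dBm=-117.0):
    """Continuous minimum power (closed form above) in dBm."""
    lam = 3e8 / f_c
    g = (lam / (4 * math.pi * d)) ** eta
    p_mW = (-10 ** (N_dBm / 10) * 10 ** (psi_dB / 10)
            / (math.log(1 - T_H0) * g))
    return 10 * math.log10(p_mW)

def p_alloc_dBm(d, psi_dB):
    """Round up to the next available power level."""
    p = p_min_dBm(d, psi_dB)
    return min(l for l in LEVELS_dBm if l >= p)

# Node at 300 m using SF7:
print(p_min_dBm(300.0, -6), p_alloc_dBm(300.0, -6))
\end{verbatim}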
Additionally, we obtain the network average transmit power by averaging~\eqref{eqn:pkmin} over the area, \textit{i.e.},
\begin{align}
\mathcal{P}_{avg} &= \frac{2\pi}{V} \sum_{i\in S} \int_{l_{i-1}}^{l_i} -\frac{\mathcal{N}\psi_i}{\textup{ln}(1-\mathcal{T}_{H_0}) g_k} d_k ~\textup{d}d_k \nonumber \\
&= -\frac{2\pi\mathcal{N}}{V\textup{ln}(1-\mathcal{T}_{H_0})} \left( \frac{4\pi}{\lambda} \right)^\eta \sum_{i\in S} \frac{\psi_i}{\eta+2} (l_i^{\eta+2} - l_{i-1}^{\eta+2}). \label{eqn:pavg}
\end{align}
\subsection{Outage Probability with Transmit Power Allocation}
Rewriting the disconnection probability in~\eqref{eqn:h0} with the power allocation method defined by~\eqref{eqn:pkmin} yields
\begin{align}\label{eqn:h1final}
H_0(d_1,\mathcal{P}_{1_{min}}) = 1 - \textup{exp}\left( - \frac{\psi_i\mathcal{N}}{\mathcal{P}_{1_{min}} g_1} \right) = \mathcal{T}_{H_0},
\end{align}so that transmit power control compensates for path loss, makes $H_0$ independent of $\mathcal{P}_1$ and $d_1$, and ensures $\mathcal{T}_{H_0}$ for all nodes.
Similarly, rewriting~\eqref{eqn:q0} with~\eqref{eqn:pkmin} yields
\begin{align}
Q_0(i) &= \mathbb{P} \left[ \frac{|h_1|^2}{\sum_{k\in\Phi_i} |h_k|^2} < \delta \right],
\end{align}
and therefore $Q_0$ becomes independent of transmit powers and distances from the gateway, being only dependent on fading.
If we define $X_i = \sum_{k\in\Phi_i} |h_k|^2$ and $Y_i = \frac{|h_1|^2}{X_i}$, then $Q_0(i) = \mathbb{P}\left[ Y_i < \delta \right] = F_{Y_i}(\delta)$, with the cdf of $Y_i$ obtained as
\begin{align}
F_{Y_i}(y) = \int_0^\infty F_{|h_1|^2}(xy) f_{X_i}(x)~\textup{d}x,\label{eqn:Fh1X}
\end{align}
where $|h_1|^2 \sim \textup{exp}(1)$, $F_{|h_1|^2}(z) = 1 - e^{-z}$, $X_i$ is Gamma distributed, $X_i\sim\Gamma(N_{\Phi_i},1)$, $f_{X_i}(x)=\frac{1}{\Gamma(N_{\Phi_i})}x^{N_{\Phi_i}-1}e^{-x}$, and $\Gamma(\cdot)$ is the Gamma Function~\cite{NIST:Book:2010}.
Following the duality of notation of PPPs~\cite[Box 2.3]{Haenggi:Book:2012}, $N_{\Phi_i}\sim\textup{Poiss}(\beta_i)$ is a Poisson random variable of mean $\beta_i=\alpha_iV_i=p_i\overline{N}_i$ describing the average number of \emph{active} interferers in PPP $\Phi_i$.
Thus,
\begin{align}
Q_0(i) = \mathbb{E}_{N_{\Phi_i}} \left[ \int_0^\infty (1 - e^{-x\delta}) \frac{1}{\Gamma(N_{\Phi_i})} x^{N_{\Phi_i}-1} e^{-x} \textup{d}x \right],
\end{align}
which is solved by distributing the multiplication, factoring out independent terms, and applying the identity $\int_0^\infty x^n e^{-ax} \textup{d}x = \frac{\Gamma(n+1)}{a^{n+1}}$~\cite{NIST:Book:2010}. Thus, the $N_{\Phi_i}$-dependent collision probability is
\begin{align}
Q_0(i) = \mathbb{E}_{N_{\Phi_i}} \left[ 1 - (\delta + 1)^{-N_{\Phi_i}} \right].
\end{align}
Since the pmf of $N_{\Phi_i}$ is $f_{N_{\Phi_i}}(z) = \frac{\beta_i^z e^{-\beta_i}}{z!}$,
\begin{align}
Q_0(i) = 1 - \textup{exp}\left( - \frac{\delta}{\delta + 1}\beta_i \right). \label{eqn:q0final}
\end{align}
Finally, the total outage probability for each SF ring $i$ is
\begin{align}
C_0(i) \triangleq H_0 + Q_0(i) - H_0Q_0(i). \label{eqn:c0}
\end{align}
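Both outage terms \eqref{eqn:q0final} and \eqref{eqn:c0} are now elementary to evaluate; for instance (the value of $\beta_i$ below is arbitrary and for illustration only):
\begin{verbatim}
import math

def Q0(beta, delta_dB=6.0):
    """Collision probability for a ring with beta active interferers."""
    delta = 10 ** (delta_dB / 10)      # SIR threshold, dB -> linear
    return 1 - math.exp(-delta / (delta + 1) * beta)

def C0(T_H0, beta):
    """Total outage: union of disconnection and collision."""
    q = Q0(beta)
    return T_H0 + q - T_H0 * q

print(f"Q0 = {Q0(0.005):.4f}, C0 = {C0(0.01, 0.005):.4f}")
# -> Q0 = 0.0040, C0 = 0.0139
\end{verbatim}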
Our model preserves the PPP properties for each point as long as the fixed communication distances and transmit powers sa\-tis\-fy $\frac{\mathcal{P}_1g_1}{\mathcal{P}_kg_k}=1$ in~(3), which is guaranteed by~\eqref{eqn:pkmin}.
\section{Network planning}\label{sec:planning}
We use the outage probability in~\eqref{eqn:c0} as a tool to plan the deployment of single-cell LoRaWANs. We assume a target maximum outage $\mathcal{T}_{C_0}$ for all nodes, $C_0(i) \leq \mathcal{T}_{C_0}, \forall i$.
We use this reliability constraint to maximize coverage radius and network usage.
After a closer look at~\eqref{eqn:c0}, we observe that, for each ring, $C_0(i)$ depends on the outer limit $l_i$ and the average number of active interferers $\beta_i$. Unfortunately, it is not possible to solve such optimization for both variables simultaneously, so, here, we explore the trade-off between coverage radius and network usage.
Assuming that the largest coverage radius and highest network usage occur in the worst-case scenario where $C_0(i)=\mathcal{T}_{C_0}, \forall i$, we represent the trade-off, following from~\eqref{eqn:c0}, as $\mathcal{T}_{C_0} = \mathcal{T}_{H_0} + Q_0(i) - \mathcal{T}_{H_0} Q_0(i)$, from which we obtain either the maximum $\beta_i$ for a given $\mathcal{T}_{H_0}$ as
\begin{align}
\beta_i = -\frac{\delta + 1}{\delta} \textup{ln} \left( \frac{1-\mathcal{T}_{C_0}}{1-\mathcal{T}_{H_0}} \right), \label{eqn:bi}
\end{align}
or the maximum $\mathcal{T}_{H_0}$ assuming a given $\beta_i$ as
\begin{align}
\mathcal{T}_{H_0} = \frac{\mathcal{T}_{C_0} - Q_0(i)}{1 - Q_0(i)}. \label{eqn:th0}
\end{align}
Note that $\beta_i=p_i\overline{N}_i$, so we use~\eqref{eqn:bi} to obtain the maximum average number of nodes in each ring, assuming that all nodes in a ring use the same duty-cycle $p_i$.
Similarly, because of~\eqref{eqn:l_i}, we obtain the SF ring range $l_i$ with $\mathcal{T}_{H_0}$ from~\eqref{eqn:th0}.
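For illustration, plugging the duty cycles $p$ of Table~\ref{tab:param} into~\eqref{eqn:bi} with $\delta=6$~dB yields the per-ring node budgets below (the split $\mathcal{T}_{H_0}=0.005$ of the total outage budget $\mathcal{T}_{C_0}=0.01$ is an assumed example value):
\begin{verbatim}
import math

p = [57.1e-6, 114.3e-6, 205.9e-6, 366.3e-6, 823.7e-6, 1465.4e-6]

def max_nodes(T_C0=0.01, T_H0=0.005, delta_dB=6.0):
    """Maximum average nodes per SF ring, beta_i / p_i."""
    delta = 10 ** (delta_dB / 10)
    beta = -(delta + 1) / delta * math.log((1 - T_C0) / (1 - T_H0))
    return [beta / pi for pi in p]

N = max_nodes()
print([round(n) for n in N], "total =", round(sum(N)))
# -> [110, 55, 31, 17, 8, 4] total = 225
\end{verbatim}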
\section{Numerical Results}\label{sec:results}
\begin{table}[tb]
\centering
\caption{Model and simulation parameters.} \label{tab:param}
\scalebox{.95}{
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Parameter} & \textbf{Value } \\ \midrule
$f_c$ & 868 MHz \\
$B$ & 125 kHz \\
$NF$ & 6 dB \\
$\mathcal{N}$ & $-174 + NF + 10 \textup{log}_{10}(B) = -117$dBm \\
$T$ & Every 15 minutes \\
$p~(\times 10^{-6})$ & $\{57.1, 114.3, 205.9, 366.3, 823.7, 1465.4 \}$ \\
$\mathcal{P}_k$ & $\{-1, 0, \ldots, 14\}$ dBm \\
$\mathcal{P}_{avg}$ & 12.63 dBm \\
$\mathcal{P}_{max}$ & 14 dBm \\
$\delta$ & 6 dB \\
$\mathcal{T}_{C_0}$ & 0.01 \\
$R_{min}$ & 1200 m \\ \bottomrule
\end{tabular}
}
\end{table}
We assume the parameters in Tables~\ref{tab:lora} and~\ref{tab:param} to mimic a suburban deployment of a single-cell LoRaWAN under European regulations.
The figures show our theoretical model (solid lines) and Monte Carlo simulations (marks).
Figure~\ref{fig:ptx_alloc} shows the power allocation using~\eqref{eqn:pkmin} and the average power in the network.
The dashed curve shows the continuous power allocation according to distance and considering different SFs. It shows that $SF_7$ uses a wider range of transmit power because its nodes are closer to the gateway. The power variation is 3dB in $SF_8$, $SF_9$, and $SF_{10}$, and 2.5dB in $SF_{11}$ and $SF_{12}$. That matches the variation of the SNR threshold in Table~\ref{tab:lora} ($\psi_i$) and is also aligned with the ADR power and SF allocation method defined by LoRaWAN.
Still, in Figure~\ref{fig:ptx_alloc}, the dotted curve shows the discrete practical power allocation, obtained by rounding up the continuous values of~\eqref{eqn:pkmin}.
That mostly impacts the power of nodes closer to the gateway. Figure~\ref{fig:ptx_alloc} also shows the average power in the network from~\eqref{eqn:pavg} as $12.63$~dBm -- an average power reduction of $27\%$.
\begin{figure}[tb]
\centering
\includegraphics[width=.94\columnwidth]{hoeller_WCL1630_fig2.eps}
\caption{Power allocation as a function of distance.}
\label{fig:ptx_alloc}
\end{figure}
Figures~\ref{fig:disc_ptx} and~\ref{fig:full_ptx} show results using two approaches: power allocation as in~\eqref{eqn:pkmin}, and all nodes with maximum power ($14$~dBm). The most noticeable aspect is that proper power allocation allows all nodes in the network to experience similar outage probabilities close to the target $\mathcal{T}_{C_0}=0.01$. When nodes use constant power, $\mathcal{T}_{C_0}$ is reached only on the edges of each SF ring. In the constant power scenario, the nodes closer to the ring inner edge use more power than needed, thus spending more energy and causing more interference.
\begin{figure}[tb]
\centering
\includegraphics[width=.98\columnwidth]{hoeller_WCL1630_fig3.eps}
\caption{System performance with power allocation. $\overline{N}=247$.}
\label{fig:disc_ptx}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=.98\columnwidth]{hoeller_WCL1630_fig4.eps}
\caption{System performance with fixed power. $\overline{N}=225$.}
\label{fig:full_ptx}
\end{figure}
The method in Figure~\ref{fig:disc_ptx}, besides using less average power than that in Figure~\ref{fig:full_ptx}, also serves more users. We observe a gain of 9.3\% in the number of supported users, on average, from 225 to 247 nodes. If we consider a scenario with fixed transmit power equal to the average power used in Figure~\ref{fig:disc_ptx}, then power allocation leads to a gain of 56.7\% in the number of users, from 157 to 247. Our results show that adequate power allocation in LoRaWAN contributes to the network capacity due to the interference reduction while being more energy-efficient.
\section{Conclusion}\label{sec:conclusion}
We modeled the performance of LoRaWAN with power allocation, considering two outage conditions: disconnection and interference. We determined the maximum number of users to ensure a maximum outage probability. Numerical results show that power allocation increases network reliability due to the reduction of interference while being more energy-efficient than fixed transmit power. In the future, we plan to investigate the performance of LoRaWAN with power control under inter-SF and external interference.
\bibliographystyle{IEEEtran}
\section{Introduction}
Kinetic Brownian motion is a stochastic process that describes a stochastic perturbation of the geodesic flow and has the property that the perturbation affects only the direction of the velocity but preserves its absolute value. It has been studied in the past years by several authors in pure mathematics \cite{FLJ07, angst, Li16,alexis, BT18} but versions of this diffusion process have been developed independently as surrogate models for certain textile production processes (see e.g. \cite{GKMW07,GS13, KSW13}).
Kinetic Brownian motion $(Y_t^\ensuremath{\gamma})_{t\geq 0}$ in the setting of a compact Riemannian manifold $(\ensuremath{\mathbb{M}}, g)$ can be informally described in the following way: $(Y_t^\ensuremath{\gamma})_{t\geq 0}$ is a stochastic process with continuous paths described by a stochastic perturbation of the geodesic flow on the sphere bundle $S\ensuremath{\mathbb{M}} =\{\xi\in T\ensuremath{\mathbb{M}}, \|\xi\|_g=1\}.$
More precisely, if we denote the geodesic flow vector field by $X$ and the (positive) Laplace operator on the fibers of $S\ensuremath{\mathbb{M}}$ by $\Delta_\ensuremath{\mathbb{S}}$, then the kinetic Brownian motion is generated by the differential operator
\[
\widetilde P_\ensuremath{\gamma} = -X +\frac 12 \ensuremath{\gamma} \Delta_\ensuremath{\mathbb{S}}\colon L^2(S\ensuremath{\mathbb{M}})\to L^2(S\ensuremath{\mathbb{M}}).
\]
The connection to the stochastic process $(Y_t^\ensuremath{\gamma})_{t\geq 0}$ is given via
\[
e^{-t\widetilde P_\ensuremath{\gamma}}f(x) = \mathbb E_x[f(Y_t^\ensuremath{\gamma})] \quad\text{with}\quad f\in L^2(S\ensuremath{\mathbb{M}}), x\in S\ensuremath{\mathbb{M}}.
\]
Observe that the parameter $\ensuremath{\gamma}>0$ controls the strength of the stochastic perturbation and it is a natural question to study the behavior of $\widetilde P_\ensuremath{\gamma}$ and $Y_t^\ensuremath{\gamma}$ in the regimes $\ensuremath{\gamma}\to 0$ as well as $\ensuremath{\gamma}\to\infty$. By hypoellipticity of $\widetilde P_\ensuremath{\gamma}$ one can show that $\widetilde P_\ensuremath{\gamma}$ has discrete $L^2$-spectrum. For negatively curved manifolds, Drouot \cite{alexis} has studied the convergence of this discrete spectrum of $\widetilde P_\ensuremath{\gamma}$ in the limit $\ensuremath{\gamma}\to 0$ and has shown that it converges to the Pollicott-Ruelle resonances of the geodesic flow. These resonances are a replacement for the spectrum of $X$, since its $L^2$-spectrum is equal to $i\ensuremath{\mathbb{R}}$, and they can be defined for hyperbolic flows in various degrees of generality as poles of the meromorphically continued resolvent \cite{Liv04, FS11, DZ16a, DG16, DR16,BW17}.
In the limit of large random noise Li \cite{Li16} and Angst-Bailleul-Tardif \cite{angst} proved that $\pi(Y_{\ensuremath{\gamma} t}^\ensuremath{\gamma})$ converges weakly to the Brownian motion on $ \ensuremath{\mathbb{M}}$ with speed 2 as $\ensuremath{\gamma}\to \infty$ where $\pi\colon S\ensuremath{\mathbb{M}}\to \ensuremath{\mathbb{M}}$ is the projection.
This rescaled kinetic Brownian motion is generated by $P_\ensuremath{\gamma} =\ensuremath{\gamma}\widetilde P_\ensuremath{\gamma}$ whereas the Brownian motion on the base manifold is generated by the Laplace operator $\frac 12 \Delta_\ensuremath{\mathbb{M}}$.
Therefore, one may conjecture that the discrete spectrum of $P_\ensuremath{\gamma}$ converges to the Laplace spectrum.
We will give a proof of this fact including explicit error estimates, in the case of constant negative curvature surfaces:
\begin{theorem}\label{thm:evofPg}
Let $(\ensuremath{\mathbb{M}}, g)$ be an orientable compact surface of constant negative curvature scaled to $\kappa = -1$. For every $\eta\in\sigma(\Delta_\ensuremath{\mathbb{M}})$ with multiplicity $n$ there is an analytic function $\lambda_ \eta \colon ]2\sqrt{4\eta +6},\infty[ \to \ensuremath{\mathbb{C}}$ such that $\lambda_\eta(\gamma)$ is an eigenvalue of $P_\gamma$ with multiplicity at least $n$ and for every $\ensuremath{\gamma} > 2\sqrt{4\eta+6}$ the following estimate holds:
\begin{equation}\label{eq:thm_error_estimate}
|\lambda_\eta (\gamma)- \eta|\leq \frac {8\eta +12}{\ensuremath{\gamma} ((4 \eta+6)^{-1/2}-2\ensuremath{\gamma}\ensuremath{^{-1}})}.
\end{equation}
A fortiori, $\lambda_\eta (\ensuremath{\gamma}) \to \eta$ as $\ensuremath{\gamma}\to\infty$.
\end{theorem}
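To get a concrete sense of scale for the bound \eqref{eq:thm_error_estimate}, take for illustration $\eta=2$: the theorem applies for $\ensuremath{\gamma}>2\sqrt{14}\approx 7.48$, and
\[
|\lambda_2(\ensuremath{\gamma})-2|\;\leq\;\frac{28}{\ensuremath{\gamma}\left(14^{-1/2}-2\ensuremath{\gamma}^{-1}\right)}\approx
\begin{cases}
1.13, & \ensuremath{\gamma}=100,\\
0.11, & \ensuremath{\gamma}=1000,
\end{cases}
\]
so the error decays at rate $(8\eta+12)\sqrt{4\eta+6}\,\ensuremath{\gamma}^{-1}$ as $\ensuremath{\gamma}\to\infty$.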
Another question to ask is whether the kinetic Brownian motion converges to equilibrium, i.e.
\[\mathbb E_x[f(Y_{\ensuremath{\gamma} t}^\ensuremath{\gamma})] \stackrel{t\to\infty}\longrightarrow \int_ {S\ensuremath{\mathbb{M}}}f.
\]
Baudoin-Tardif \cite{BT18} showed exponential convergence, i.e.
\[\left \|e^{-t P_\ensuremath{\gamma}} f- \int_{S\ensuremath{\mathbb{M}}} f \right \|\leq C e^{-C_\ensuremath{\gamma} t} \left \|f-\int_{S\ensuremath{\mathbb{M}}}f\right \|, \quad f\in L^2(S\ensuremath{\mathbb{M}}).\]
We should point out that the given rate $C_\ensuremath{\gamma}$ converges to 0 as $\ensuremath{\gamma}\to\infty$ but they conjecture that the optimal rate converges to the spectral gap of $\Delta_\ensuremath{\mathbb{M}}$ which is the smallest non-zero Laplace eigenvalue $\eta_1$ (see \cite[Section 3.1]{BT18}).
A direct consequence of Theorem \ref{thm:evofPg} shows that the optimal rate $C_\ensuremath{\gamma}$ is less than $\Re \lambda_{\eta_1}(\ensuremath{\gamma})$ for surfaces of constant negative curvature. Hence $\limsup_{\ensuremath{\gamma}\to \infty} C_\ensuremath{\gamma}\leq \eta_1$.
For a more explicit study of the convergence towards equilibrium we prove the following spectral expansion:
\begin{theorem}\label{thm:convergencetoequilibrium}
Let $(\ensuremath{\mathbb{M}}, g)$ be an orientable compact surface of constant negative curvature scaled to $\kappa = -1$.
For all $ \varepsilon > 0$, $ \ensuremath{\gamma} > \max \{4\sqrt{4C{\varepsilon} \ensuremath{^{-1}} +6} , 4\sqrt{32}\}$, and $f\in H^2(S\ensuremath{\mathbb{M}})$ with $\|f\|_{H^2(S\ensuremath{\mathbb{M}})}\leq C$ (for the precise definition of the used Sobolev norm see Section \ref{sec:hyperbolicsurfaces})
it holds
\[ \bigg \|e^{-tP_\ensuremath{\gamma}}f -\sum_{\substack{\eta \in \sigma(\Delta_\ensuremath{\mathbb{M}})\\ \eta \leq C{\varepsilon}\ensuremath{^{-1}}}}e^{-t \lambda_\eta (\ensuremath{\gamma})}\Pi_{\lambda_\eta(\ensuremath{\gamma})} f \bigg \|_{L^2(S\ensuremath{\mathbb{M}})}\leq \varepsilon + \frac 8 {\ensuremath{\gamma}^2t}e^{-\ensuremath{\gamma}^2t/4} \|f\|_{L^2(S\ensuremath{\mathbb{M}})}\]
where $\lambda_\eta(\ensuremath{\gamma})$ is an eigenvalue of $P_\ensuremath{\gamma}$ converging to $ \eta$ as $\ensuremath{\gamma}\to \infty$ from Theorem \ref{thm:evofPg} and $\Pi_{\lambda_\eta(\ensuremath{\gamma})}$ is a spectral projector for $P_\ensuremath{\gamma}$ of operator norm less than 2.
\end{theorem}
Note that this does not provide an asymptotic expansion for $t\to\infty$ due to the arbitrarily small but constant error term $\varepsilon$. However, in contrast to asymptotic expansions in general, all coefficients, including the remainder term, are explicitly controllable. As a corollary we get
an estimate on $\|e^{-t P_\ensuremath{\gamma}} f- \int_{S\ensuremath{\mathbb{M}}} f \|$.
\begin{korollar}\label{cor:equilibrium}
Let $(\ensuremath{\mathbb{M}}, g)$ be an orientable compact surface of constant negative curvature scaled to $\kappa = -1$. There is a constant $C_0$ such that for all $C>0,\varepsilon > 0,B\geq 1$ and $f\in H^2(S\ensuremath{\mathbb{M}})$ with $\|f\|_{H^2(S\ensuremath{\mathbb{M}})}\leq C$ and $\ensuremath{\gamma} >\max \{4B(4C{\varepsilon} \ensuremath{^{-1}} +6)^{3/2} , 4\sqrt{32}\}$ it holds
\begin{align*}
\norm{e^{-tP_\ensuremath{\gamma}}f - \int_{S\ensuremath{\mathbb{M}}} fd\mu}_{L^2(S\ensuremath{\mathbb{M}})} \leq \varepsilon &+ C_0 C \varepsilon\ensuremath{^{-1}} e^{-t(\eta_1-B\ensuremath{^{-1}})}\|f\|_{L^2(S\ensuremath{\mathbb{M}})}\\
&+\frac 8 {\ensuremath{\gamma}^2t}e^{-\ensuremath{\gamma}^2t/4} \|f\|_{L^2(S\ensuremath{\mathbb{M}})}
\end{align*} where $\eta_1\coloneqq \min \sigma(\Delta_\ensuremath{\mathbb{M}})\setminus\{0\}$.
\end{korollar}
Note that a problem related to the kinetic Brownian motion in $S\ensuremath{\mathbb{M}}$ is the study of the hypoelliptic Laplacian on $T\ensuremath{\mathbb{M}}$ introduced by Bismut \cite{Bis05}. Like the kinetic Brownian motion, the hypoelliptic Laplacian interpolates between the geodesic flow and the Brownian motion. In \cite[Chapter 17]{BL08} Bismut and Lebeau prove the convergence of the spectrum of the hypoelliptic Laplacian to the spectrum of the Laplacian on $\ensuremath{\mathbb{M}}$ using semiclassical analysis. It seems plausible that their techniques can also be transferred to the setting of kinetic Brownian motion and might give the spectral convergence without any curvature restriction. The purpose of this article is, however, not to attack this general setting but to show that, under the assumption of constant negative curvature, harmonic analysis allows to drastically reduce the analytical difficulties. In fact we are able to reduce the problem to standard perturbation theory. This is also the reason why we are able to obtain the explicit error estimates \eqref{eq:thm_error_estimate}. The approach of applying harmonic analysis to spectral problems related to geodesic flows on manifolds of constant negative curvature (or more generally locally symmetric spaces) has also been pursued in \cite{FF03, DFG15, GHW18, GHW18a, KW17} for the analysis of Pollicott-Ruelle resonances, and these results have been a major motivation for the present article.
Let us give a short outline of the proof of Theorem~\ref{thm:evofPg}: By the assumption of constant negative curvature, the manifold $\ensuremath{\mathbb{M}}$ is, up to scaling of the Riemannian metric, isometrically isomorphic to a hyperbolic surface $\ensuremath{\Gamma}\backslash \H$ where $\ensuremath{\Gamma}\leq PSL_2(\ensuremath{\mathbb{R}})$ is a cocompact torsion-free discrete subgroup and $\H$ is the upper half plane. $\H$ itself can be written as the homogeneous space $PSL_2(\ensuremath{\mathbb{R}})/PSO(2)$.
Under these identifications the sphere bundle can also be written as a homogeneous space $S\ensuremath{\mathbb{M}} =\ensuremath{\Gamma}\backslash PSL_2(\ensuremath{\mathbb{R}})$, on which $PSL_2(\ensuremath{\mathbb{R}})$ acts by right translations.
Since the manifold is compact we can decompose the corresponding $L^2(S\ensuremath{\mathbb{M}})$ into unitary irreducible $PSL_2(\ensuremath{\mathbb{R}})$-representations $\ensuremath{\mathcal} H_\pi$ and the generator $P_\ensuremath{\gamma}$ can be expressed by the right $\ensuremath{\mathfrak} {sl}_2(\ensuremath{\mathbb{R}})$-action.
As a consequence $P_\ensuremath{\gamma}$ preserves the decomposition $L^2(S\ensuremath{\mathbb{M}})=\oplus \ensuremath{\mathcal} H_\pi$ and we can study the restriction $P_\ensuremath{\gamma}\colon \ensuremath{\mathcal} H_\pi\to\ensuremath{\mathcal} H_\pi$ for each occurring representation separately. In each of these irreducible representations the spectral asymptotics of $P_\gamma$ can then be handled by standard perturbation theory of a holomorphic family of type (A) in the sense of Kato.
We want to point out that Theorem~\ref{thm:evofPg} can be obtained without using the representation theory of $SL_2(\ensuremath{\mathbb{R}})$. Even more, in an updated article \cite{kww20} we prove Theorem~\ref{thm:evofPg} in the case of constant curvature surfaces, i.e. we extend our result to flat and positively curved surfaces. There we do not use the representation theory of the corresponding isometry groups of the universal cover. Instead the results are proven using eigenspace decompositions of certain commuting differential operators.
The article is organized as follows:
We will give a short overview of the kinetic Brownian motion and the connection between constant curvature surfaces and the representation theory of $PSL_2(\ensuremath{\mathbb{R}})$ in Section \ref{sec:preliminaries}.
After that we will recall a few results of perturbation theory for unbounded linear operators (Section \ref{sec:pertth}) which are mostly taken from \cite{kato}.
In the limit $\ensuremath{\gamma}\to\infty$ one would like to consider the geodesic vector field as a perturbation of the spherical Laplacian.
The major difficulty is that $\frac{1}{\ensuremath{\gamma}}X$ is not a small perturbation in comparison with $\Delta_\ensuremath{\mathbb{S}}$.
After the symmetry reduction there is a precise way to consider $X$ as a small operator in each irreducible component.
Afterwards we will give a proof of the convergence of the spectra (Theorem \ref{thm:evofPg}).
In the last part (Section \ref{sec:equilibrium}) we will prove Theorem \ref{thm:convergencetoequilibrium}.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Kinetic Brownian Motion}\label{sec:kbb}
Let $\ensuremath{\mathbb{M}}$ be a compact Riemannian manifold of dimension $d\geq 2$ with sphere bundle $S\ensuremath{\mathbb{M}}=\{(x,v)\in T\ensuremath{\mathbb{M}}\mid \|v\| =1\}$. We introduce the spherical Laplacian $\Delta_\ensuremath{\mathbb{S}}$ as follows: for every $x \in \ensuremath{\mathbb{M}}$ the tangent space $T_x\ensuremath{\mathbb{M}}$ is a Euclidean vector space via the Riemannian metric and $S_x\ensuremath{\mathbb{M}}=\{v\in T_x\ensuremath{\mathbb{M}}\mid \|v\|=1\}$ is a submanifold of $T_x\ensuremath{\mathbb{M}}$. The inner product on $T_x\ensuremath{\mathbb{M}}$ induces a Riemannian structure on $S_x\ensuremath{\mathbb{M}}$. Hence, the (positive) Laplace-Beltrami operator $\Delta_\ensuremath{\mathbb{S}} (x)\coloneqq \Delta_{S_x\ensuremath{\mathbb{M}}}$ of $S_x\ensuremath{\mathbb{M}}$ defines an operator $C^\infty(S_x\ensuremath{\mathbb{M}})\to C^\infty (S_x\ensuremath{\mathbb{M}})$. We now obtain the spherical Laplace operator $\Delta_\ensuremath{\mathbb{S}}$ by
\[\Delta_\ensuremath{\mathbb{S}}: C^\infty(S\ensuremath{\mathbb{M}})\to C^\infty(S\ensuremath{\mathbb{M}}),\quad \Delta_\ensuremath{\mathbb{S}} f (x,v):= (\Delta_\ensuremath{\mathbb{S}}(x)f(x,\cdot))(v).\]
For $(x,v)\in S\ensuremath{\mathbb{M}}$ and $w \in T _{(x,v)} S\ensuremath{\mathbb{M}}$ we define $\theta_{(x,v)} (w) =g_x (v, T_{(x,v)} \pi \,w)$ where $\pi \colon S\ensuremath{\mathbb{M}}\to \ensuremath{\mathbb{M}}$ is the projection and $g$ is the Riemannian metric on $\ensuremath{\mathbb{M}}$. Then $\theta$ is a 1-form on $S\ensuremath{\mathbb{M}}$ and $\nu =\theta \wedge (d\theta)^{d-1}$ defines the Liouville measure on $S\ensuremath{\mathbb{M}}$ which is invariant under the geodesic flow $\phi_t$. The vector field $X = \d \phi_t^\ast$ is called the geodesic vector field.
Let us consider the operator $P_\ensuremath{\gamma} = -\ensuremath{\gamma} X + \frac {\ensuremath{\gamma}^2}2 \Delta_\ensuremath{\mathbb{S}}$ with domain $\dom(P_\ensuremath{\gamma})=\{u\in L^2(S\ensuremath{\mathbb{M}}) \mid P_\ensuremath{\gamma} u\in L^2(S\ensuremath{\mathbb{M}})\}$ for $\ensuremath{\gamma} > 0$. Note that the action of $P_\ensuremath{\gamma} $ has to be interpreted in the sense of distributions. We first want to collect some properties of $P_\ensuremath{\gamma}$.
\begin{proposition}
\label{prop:kbb}
$P_\ensuremath{\gamma}$ is a hypoelliptic operator with
$$\norm{f}_{H^{2/3}}\leq C(\norm f_{L^2}+\norm {P_\gamma f}_{L^2})\quad \text{for}\quad f\in \dom (P_\gamma).$$
$P_\gamma$ is accretive (i.e. $\Re\langle P_\ensuremath{\gamma} f ,f\rangle \geq 0$) and coincides with the closure of $P_\ensuremath{\gamma}|_{C^\infty}$.
Therefore, $P_\ensuremath{\gamma}$ has compact resolvent, discrete spectrum with eigenspaces of finite dimension, and the spectrum is contained in the right half plane. $P_\ensuremath{\gamma}$ generates a positive strongly continuous contraction semigroup $e^{-tP_\ensuremath{\gamma}}$.
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
\subsection{\texorpdfstring{Representation Theory of $SL_2(\ensuremath{\mathbb{R}})$}{Representation Theory of SL(2,R)}}\label{sec:sl2}
\begin{definition}
The \emph{special linear group} $SL_2(\ensuremath{\mathbb{R}})$ is defined by \[SL_2(\ensuremath{\mathbb{R}})\coloneqq \left \{\begin{pmatrix}
a&b\\c&d
\end{pmatrix}\in \ensuremath{\mathbb{R}}^{2\times2} \colon ad-bc=1\right\}\]
and the \emph{projective special linear group} by $PSL_2(\ensuremath{\mathbb{R}})\coloneqq SL_2(\ensuremath{\mathbb{R}}) / \{\pm I\}$.
We abbreviate $PSL_2(\ensuremath{\mathbb{R}})$ by $G$.
Both groups are Lie groups with Lie algebra \[\ensuremath{\mathfrak} g\coloneqq \ensuremath{\mathfrak}{sl}_2(\ensuremath{\mathbb{R}})= \left \{\begin{pmatrix}
a&b\\c&d
\end{pmatrix} \in \ensuremath{\mathbb{R}}^{2\times2}\colon a+d=0\right\}.\]
\end{definition}
\begin{notation}
We introduce the following elements of $\ensuremath{\mathfrak} g$ resp. $\ensuremath{\mathfrak} g \otimes\ensuremath{\mathbb{C}}$.
\[ \Xi=\frac 12\begin{pmatrix}0&1\\-1&0\end{pmatrix},\quad H=\frac{1}{2}\begin{pmatrix}
1&0\\0&-1
\end{pmatrix}, \quad
B=\frac{1}{2}\begin{pmatrix}
0&1\\1&0
\end{pmatrix}\text{ and } X_\pm = -H\mp i B.
\]
The following commutator relations hold:
$$ [\Xi, H] = -B, \quad [\Xi, B] = H, \quad [H,B] = \Xi, $$ $$ [\Xi,X_\pm] =\pm i X_\pm, \quad [X_+,X_-] = -2i\Xi.$$
The Casimir element is given by $$\Omega =4\Xi^2-4H^2-4B^2=4\Xi^2-2(X_+X_-+X_-X_+)\in \ensuremath{\mathcal} U(\ensuremath{\mathfrak} g).$$
The maximal compact subgroup $K$ of $G$ is $PSO(2)\coloneqq \{\exp (\theta \Xi)\mid \theta \in\ensuremath{\mathbb{R}}\} / \{\pm I\}$.
\end{notation}
It follows by a simple calculation that $$[\Omega, \Xi] = [\Omega , H] = [\Omega, B] = 0, $$
hence $\Omega \in Z(\ensuremath{\mathcal} U(\ensuremath{\mathfrak} g))$.
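For the reader's convenience, these identities can also be checked numerically in the defining two-dimensional matrix realization, e.g. with the following short Python snippet (a sanity check only; it is not used anywhere in the arguments below):
\begin{verbatim}
import numpy as np

Xi = 0.5 * np.array([[0, 1], [-1, 0]], dtype=complex)
H  = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
B  = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Xp, Xm = -H - 1j * B, -H + 1j * B           # X_+ and X_-

comm = lambda A, C: A @ C - C @ A
assert np.allclose(comm(Xi, H), -B)          # [Xi, H] = -B
assert np.allclose(comm(Xi, B), H)           # [Xi, B] = H
assert np.allclose(comm(H, B), Xi)           # [H, B] = Xi
assert np.allclose(comm(Xi, Xp), 1j * Xp)    # [Xi, X_+] = i X_+
assert np.allclose(comm(Xi, Xm), -1j * Xm)   # [Xi, X_-] = -i X_-
assert np.allclose(comm(Xp, Xm), -2j * Xi)   # [X_+, X_-] = -2i Xi
# both expressions for the Casimir element coincide:
assert np.allclose(4 * Xi @ Xi - 4 * H @ H - 4 * B @ B,
                   4 * Xi @ Xi - 2 * (Xp @ Xm + Xm @ Xp))
\end{verbatim}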
Let $(\pi, \ensuremath{\mathcal} H_\pi)$ be an irreducible unitary representation of $PSL_2(\ensuremath{\mathbb{R}})$.
Then $\pi(\Omega)$ acts as a scalar $\lambda_\pi$ on $\ensuremath{\mathcal} H_\pi$ by Schur's lemma.
Since $PSO(2)$ is compact, $\ensuremath{\mathcal} H_\pi$ decomposes as a $PSO(2)$-representation, i.e. we have an orthogonal direct sum \begin{equation}
\ensuremath{\mathcal} H_\pi =\widehat\bigoplus_{k\in \ensuremath{\mathbb{Z}}} V_k \quad\text{with}\quad \pi(\exp (\theta \Xi)) = e^{ik\theta} \quad \text {on} \quad V_k.\label{eq:K-types}\end{equation}
One can show that each $V_k$ consists of analytic vectors for $\pi$ and is at most one-dimensional. Let $\phi_k$ denote a normalized element in $V_k$ if $V_k\neq 0$. In particular, $\pi(\Xi) \phi_k = ik \phi_k$ on $V_k$.
The operators $X_\pm$ are raising resp. lowering operators, that is, $X_\pm \colon V_k\to V_{k\pm 1}$. Indeed,
$$\Xi X_\pm v = X_\pm \Xi v +[\Xi, X_\pm]v = ik X_\pm v \pm i X_\pm v = i(k\pm 1) X_\pm v, \quad v\in V_k.$$
Moreover,
$$-4X_\mp X_\pm = \Omega -4\Xi^2 \mp 4i \Xi = \lambda_\pi + 4k^2 \pm 4k = (2k \pm 1)^2 +\lambda_\pi -1.$$
Since $X_\pm^\ast = - X_\mp$, the norm of $X_\pm$ is given by \[\|X_\pm\|_{V_k\to V_{k\pm 1}} =\frac 12 \sqrt{(2k\pm1)^2+\lambda_\pi -1}.\]
The scalar $\lambda_\pi$ essentially classifies the unitary irreducible representations of $G$.
\begin{theorem}[see {\cite[Ch. 8 Thm. 2.2]{taylor}}]
Each non-trivial irreducible unitary representation of $PSL_2(\ensuremath{\mathbb{R}})$ is unitarily equivalent to one of the following types:
\begin{itemize}
\item (Anti-)Holomorphic discrete series: $\pi^\pm_{\pm 2n}$, $n\in\ensuremath{\mathbb{N}}$, with $\pi^\pm_{\pm 2n}(\Omega)=1-(2n-1)^2$ and $\frac 1i \sigma (\pi^\pm_{\pm 2n}(\Xi))=\pm (n+\ensuremath{\mathbb{N}}_0)$
\item Principal series: $\pi_{is},$ $s\in\ensuremath{\mathbb{R}}$, with $\pi_{is}(\Omega)=1+s^2$ and $\frac 1i \sigma (\pi_{is}(\Xi))=\ensuremath{\mathbb{Z}}$
\item Complementary series: $\pi_s$, $s\in (-1,1)\setminus\{0\}$ with $\pi_{s}(\Omega)=1-s^2$ and $\frac 1i \sigma (\pi_{s}(\Xi))=\ensuremath{\mathbb{Z}}$.
\end{itemize}
There are no unitary equivalences except for $\pi_{is}\simeq \pi_{-is}$ and $\pi_{s}\simeq \pi_{-s}$.
\end{theorem}
In our setting we do not have to distinguish between principal and complementary series representations. Hence, we only distinguish between non-trivial irreducible unitary representations $\pi$ with $\lambda_\pi \leq 0$ and $\lambda_\pi > 0$ (and the trivial representation). In the former case we have $\ensuremath{\mathcal} H_{\pi^\pm_{\pm 2n}} = \bigoplus _ {\pm k\geq n} V_k$ with $\dim V_k =1$ for $\pm k \geq n$, and in the latter case we have $\ensuremath{\mathcal} H_\pi =\bigoplus_{k\in \ensuremath{\mathbb{Z}}} V_k$ with $\dim V_k =1$ for all $k\in \ensuremath{\mathbb{Z}}$.
\subsubsection{Sobolev Regularity for Unitary Representations}\label{sec:sobolev}
Let $(\pi ,\ensuremath{\mathcal} H_\pi)$ be a unitary representation of a real Lie group $G$ and $X_1,\ldots , X_n$ be a basis of $\ensuremath{\mathfrak} g$.
We define the Laplacian $\Delta$ (depending on the basis) as $$\Delta = -\sum X_i^2.$$ The Laplacian acts as an essentially self-adjoint operator
on $\ensuremath{\mathcal} H_\pi$. The Sobolev space $\ensuremath{\mathcal} H_\pi^2$ of order $2$ is the domain of the closure of $I+\Delta$, i.e.
$\ensuremath{\mathcal} H_\pi^2=\{u \in \ensuremath{\mathcal} H_\pi \mid (I+\Delta) u \in \ensuremath{\mathcal} H_\pi\}.$ Here $(I+\Delta) u$ is seen as an element of $(C^\infty (\ensuremath{\mathcal} H_\pi))^\ast$ where $C^\infty (\ensuremath{\mathcal} H_\pi)$ denotes the set of smooth vectors for $\pi$.
$ \ensuremath{\mathcal} H_\pi^2$ is a Hilbert space with the inner product $\langle u_1, u_2\rangle_2 = \langle (I+\Delta) u_1,(I+\Delta) u_2\rangle$.
Let $\ensuremath{\mathcal} U_k(\ensuremath{\mathfrak} g_\ensuremath{\mathbb{C}})$ be the subspace of $\ensuremath{\mathcal} U(\ensuremath{\mathfrak} g_\ensuremath{\mathbb{C}})$ spanned by $Y_{1}\cdots Y_{l}$ with $Y_{i}\in \ensuremath{\mathfrak} g_\ensuremath{\mathbb{C}}$ and $l\leq k$.
By \cite[Lemma 6.1]{Nel} we have $$ \forall \,B \in \ensuremath{\mathcal} U_2(\ensuremath{\mathfrak} g_\ensuremath{\mathbb{C}})\,\exists\, C>0 \colon \quad \norm{Bu}\leq C\norm{(I+\Delta)u} \quad \forall\, u\in C^\infty(\ensuremath{\mathcal} H_\pi).$$
In particular, $\ensuremath{\mathcal} H^2_\pi$ is independent of the choice of basis (in contrast to $\langle\cdot,\cdot\rangle_2$ which depends on the choice of the basis). We will need a slightly more general lemma which is analogous to the ordinary elliptic regularity estimates in $\ensuremath{\mathbb{R}}^n$ (see e.g. \cite[Thm. 7.1]{zworski}).
\begin{lemma}
Let $Q =\Delta + A$ with $A\in \ensuremath{\mathcal} U_1(\ensuremath{\mathfrak} g_\ensuremath{\mathbb{C}})$. Then $\ensuremath{\mathcal} H^2_\pi=\{u\in\ensuremath{\mathcal} H_\pi \mid Qu\in \ensuremath{\mathcal} H_\pi\}$ and there is $C>0$ s.t. $\norm{u}_2 \leq C(\norm{Qu}+\norm u)$.
\end{lemma}
\begin{proof}
Since $\norm{\Delta u } \leq \norm{Qu} + \norm{Au}$ for $u \in C^\infty(\ensuremath{\mathcal} H_\pi)$ it remains to show that $\norm{X_i u} \leq C(\norm{Qu}+\norm{u})$. Let therefore $A=\sum_{i=1}^n a_i X_i + b$ with $a_i,b\in \ensuremath{\mathbb{C}}$.
Then we have by the Cauchy-Schwarz inequality
\begin{align*}
- |a_i\langle X_i u, u\rangle | &\geq - \norm{ X_i u} |a_i| \norm{u} = \frac 12 ((\norm{X_i u}-|a_i|\norm u)^2 -\norm{X_i u}^2-|a_i|^2 \norm{u}^2) \\& \geq -\frac 12 \norm{X_i u}^2 - \frac 12 |a_i|^2\norm{u}^2.
\end{align*}
Since $\norm{Qu - u}^2\geq 0$ we infer that
\begin{align*}
\norm{Qu}^2 + \norm u ^2 &\geq 2 \Re \langle Qu, u\rangle \\
&=2\sum \langle X_i u, X_i u\rangle + 2\sum \Re ( a_i \langle X_i u, u\rangle) + 2\Re b\langle u,u\rangle \\
&\geq 2\sum \norm{X_i u}^2 - 2\sum |a_i \langle X_i u, u\rangle| - 2|b| \norm{u}^2 \\
&\geq \sum \norm{X_i u}^2 - \left (\sum |a_i|^2+2|b| \right ) \norm u ^2.
\end{align*}
It follows that $\sum \norm{X_i u}^2 \leq \norm{Qu}^2 + \left( 1+ \sum |a_i|^2 + 2|b| \right ) \norm u ^2$. This completes the proof.
\end{proof}
So far we have considered arbitrary unitary representations. Now let $(\pi, \ensuremath{\mathcal} H_\pi)$ be an irreducible unitary representation of $G=PSL_2(\ensuremath{\mathbb{R}})$. Consider the basis $\Xi, H,B$ of $\ensuremath{\mathfrak} g$. Then we have $\Delta = -\Xi^2-H^2-B^2 = -2\Xi^2 + \Omega/4$. Note that $\Omega$ acts as a scalar since $\pi$ is irreducible. Hence, $\ensuremath{\mathcal} H^2_\pi = \{u\in \ensuremath{\mathcal} H_\pi\mid \Xi^2u\in \ensuremath{\mathcal} H_\pi\} = \{u\in \ensuremath{\mathcal} H_\pi\mid (-\Xi^2 + A)u \in \ensuremath{\mathcal} H_\pi\}$ for every $A\in \ensuremath{\mathcal} U_1(\ensuremath{\mathfrak} g_\ensuremath{\mathbb{C}})$.
\subsection{Hyperbolic Surfaces}\label{sec:hyperbolicsurfaces}
Let $\ensuremath{\mathbb{M}}$ be an orientable compact Riemannian manifold of dimension 2 and constant negative curvature $-1$.
Since $\ensuremath{\mathbb{M}}$ has finitely many connected components, let us assume without loss of generality that $\ensuremath{\mathbb{M}}$ is connected.
By the uniformization theorem
$\ensuremath{\mathbb{M}}$ is isometrically isomorphic to $\ensuremath{\Gamma}\backslash \H$ where $\H = \{x+iy\mid y>0\}$ is the upper half plane with the metric $y^{-2} (dx^2+dy^2)$ and $\ensuremath{\Gamma} \subseteq \isom^+(\H)$ is a discrete subgroup of orientation preserving isometries on $\H$ acting freely and properly discontinuously on $\H$.
Note that $G=PSL_2(\ensuremath{\mathbb{R}})$ acts on $\H$ by M\"obius transformations. Even more, $G$ is the group of orientation preserving isometries $\isom^+(\H)$, which acts transitively on $\H$. With this action $G/K\simeq\H$ and $ G \simeq S\H $ via $g.(z,v)=(g.z, T_zg\,v)$.
We infer that $\ensuremath{\mathbb{M}}$ is a locally symmetric space $\ensuremath{\Gamma}\backslash G/K$ with sphere bundle $S\ensuremath{\mathbb{M}} = \ensuremath{\Gamma}\backslash G$.
We have a unitary representation of $G$ on $L^2(\ensuremath{\Gamma}\backslash G, m)$ (with the Haar measure $m$ on $\ensuremath{\Gamma}\backslash G$) given by
\[g.f(x)\coloneqq f(xg),\quad\quad f\in L^2(\ensuremath{\Gamma}\backslash G),\, x\in \ensuremath{\Gamma}\backslash G,\,g\in G\]
which we call the \emph{regular representation}.
We obtain a Lie algebra representation of $\ensuremath{\mathfrak} g$ on $C^\infty(\ensuremath{\Gamma}\backslash G)$ by derivation:
\[Af(x) = \d f(x\exp (tA)),\qquad f\in C^\infty(\ensuremath{\Gamma}\backslash G),\, x\in \ensuremath{\Gamma}\backslash G, \, A\in\ensuremath{\mathfrak} g.\]
The geodesic vector field and the (spherical) Laplacian can be expressed by elements of $\ensuremath{\mathcal} U(\ensuremath{\mathfrak} g)$.
\begin{proposition}\label{prop:geometricoperators}
The geodesic vector field $X$, the spherical Laplace operator $\Delta_\ensuremath{\mathbb{S}}$ on $S\ensuremath{\mathbb{M}}$ and the Laplace operator on $\ensuremath{\mathbb{M}}$ are given by \[X = H, \qquad \Delta_\ensuremath{\mathbb{S}} = -\Xi^2\qquad \text{and}\qquad \Delta_\ensuremath{\mathbb{M}}=-H^2-B^2.\]
Note that $-H^2-B^2$ is a $K$-invariant element in $\ensuremath{\mathcal} U(\ensuremath{\mathfrak} g)$ so that it defines a right $K$-invariant differential operator on $C^\infty(\ensuremath{\Gamma}\backslash G)$ that descends to a differential operator on $C^\infty(\ensuremath{\Gamma} \backslash G/K)=C^\infty(\ensuremath{\mathbb{M}})$.
\end{proposition}
\begin{proof} See Appendix.
\end{proof}
As a consequence, the Haar measure $m$ converts to a $\phi_t$-invariant smooth measure on $S\ensuremath{\mathbb{M}}$ under the identification $S\ensuremath{\mathbb{M}}\simeq \ensuremath{\Gamma}\backslash G $. Recall that the Liouville measure $\mu$ provides a $\phi_t$-invariant smooth measure as well. As geodesic flows on compact negatively curved manifolds are known to have a unique smooth invariant measure (up to scaling), the Haar measure can be suitably scaled such that it coincides with the Liouville measure.
Not only can the geometric operators be expressed by the $\ensuremath{\mathcal} U(\ensuremath{\mathfrak} g)$-action, but the Sobolev spaces $H^\alpha(S\ensuremath{\mathbb{M}})$ can be described in terms of it as well. More precisely, $-H^2-B^2-\Xi^2$ is an elliptic operator on $L^2(S\ensuremath{\mathbb{M}})$ so that we have $$H^\alpha(S\ensuremath{\mathbb{M}})= \{u \in L^2(S\ensuremath{\mathbb{M}})\mid (I-H^2-B^2-\Xi^2)^{\alpha/2} u \in L^2(S\ensuremath{\mathbb{M}})\}.$$
In particular, $\langle u_1, u_2\rangle_\alpha = \langle (I-H^2-B^2-\Xi^2)^{\alpha} u_1, u_2\rangle$ is a possible choice for an inner product on $H^\alpha(S\ensuremath{\mathbb{M}})$ that we will use for our results.
\subsection{Direct Decompositions}\label{sec:decomp}
Our main tool to investigate the spectrum of the kinetic Brownian motion on $L^2(S\ensuremath{\mathbb{M}})$ will be the following theorem.
\begin{theorem}[{see \cite[Ch. 8.6]{taylor}}]\label{thm:decomp}
The regular representation on $L^2(S\ensuremath{\mathbb{M}})$ decomposes discretely into unitary irreducible representations of $G$. For a principal or complementary series representation $\pi$ the multiplicity in $L^2(S\ensuremath{\mathbb{M}})$ is given by the multiplicity of the eigenvalue $\frac 14\lambda_\pi$ of the Laplace operator $\Delta_\ensuremath{\mathbb{M}}$ on $\ensuremath{\mathbb{M}}$. Moreover, the multiplicity of $\pi^\pm_{\pm n}$ is $(n-1)(g-1)$ for even $n\geq 4$ and $g$ for $n=2$ where $g$ is the genus of $\ensuremath{\mathbb{M}}$. The trivial representation occurs once in $L^2(S\ensuremath{\mathbb{M}})$.
Hence,
\begin{align*}
L^2(S\ensuremath{\mathbb{M}}) = \bigoplus_{s\in (0,1)} m(\pi_s) \ensuremath{\mathcal} H_{\pi_{s}} \oplus \bigoplus_{s \geq 0} m (\pi_{is}) \ensuremath{\mathcal} H_{\pi_{is}}\oplus \bigoplus _ {n\in\ensuremath{\mathbb{N}}} m(\pi_{\pm 2n}^\pm) \ensuremath{\mathcal} H_{\pi_{\pm 2n}^\pm} \oplus \ensuremath{\mathbb{C}}
\end{align*}
where $m(\pi_s) = \dim\ker (\Delta_\ensuremath{\mathbb{M}} - \frac 14(1-s^2))$, $m(\pi_{is}) = \dim\ker (\Delta_\ensuremath{\mathbb{M}} - \frac 14(1+s^2))$, $m(\pi_{\pm 2}^\pm)=g$ and $m(\pi_{\pm 2n}^\pm)=(2n-1)(g-1)$.
The Sobolev space decomposes into
\begin{align*}
H^2(S\ensuremath{\mathbb{M}}) = \bigoplus_{s\in (0,1)} m(\pi_s) \ensuremath{\mathcal} H^2_{\pi_{s}} \oplus \bigoplus_{s \geq 0} m (\pi_{is}) \ensuremath{\mathcal} H^2_{\pi_{is}}\oplus \bigoplus _ {n\in\ensuremath{\mathbb{N}}} m(\pi_{\pm 2n}^\pm) \ensuremath{\mathcal} H^2_{\pi_{\pm 2n}^\pm} \oplus \ensuremath{\mathbb{C}}.
\end{align*}
\end{theorem}
\subsection{Perturbation Theory}\label{sec:pertth}
We want to collect some basic results from perturbation theory for linear operators that can be found in \cite{kato}.
First, we introduce families of operators we want to deal with.
\begin{definition}[{see \cite[Ch. VII \S2.1]{kato}}]
A family $T(x)$ of closed operators on a Banach space $X$ where $x$ is an element in a domain $D\subseteq \ensuremath{\mathbb{C}}$ is called \emph{holomorphic of type (A)} if the domain of $T(x)$ is independent of $x$ and $T(x)u$ is holomorphic for every $u\in\operatorname{dom}(T(x))$.
\end{definition}
Without loss of generality let us assume that 0 is contained in the domain $D$. We call $T=T(0)$ the unperturbed operator and $A(x)=T(x)-T$ the perturbation. Furthermore, let $R(\zeta,x)= (T(x)-\zeta)\ensuremath{^{-1}}$ be the resolvent of $T(x)$ and $R(\zeta)=R(\zeta,0)$. If $\zeta\notin \sigma(T)$ and $1+A(x)R(\zeta)$ is invertible then $\zeta \notin \sigma(T(x))$ and the following identity holds:
\begin{align}\label{eq:resolventformula2}R(\zeta,x)=R(\zeta)(1+A(x)R(\zeta))\ensuremath{^{-1}}.\end{align}
Let us assume that $\sigma(T)$ is separated into two parts by a simple closed $C^1$-curve $\ensuremath{\Gamma}$. Then there is $r>0$ such that $R(\zeta,x)$ exists for $\zeta\in \ensuremath{\Gamma}$ and $|x|<r$.
If the perturbation is linear (i.e. $T(x)=T+xA$) then a possible choice for $r$ is given by $\min_{\zeta\in\Gamma}\|AR(\zeta)\|^{-1}$.
In particular, we obtain that $\Gamma\subseteq \ensuremath{\mathbb{C}}\setminus\sigma(T(x))$ for $|x|<r$, i.e. the spectrum of $T(x)$ still splits into two parts by $\Gamma$. Let us define $\sigma_{\interior}(x)$ as the part of $\sigma(T(x))$ lying inside $\Gamma$ and $\sigma_{\ext}(x)=\sigma(T(x))\setminus\sigma_{\interior}(x)$.
The decomposition of the spectrum gives a $T(x)$-invariant decomposition of the space $X= M_{\interior}(x)\oplus M_{\ext}(x)$ where $M_{\interior}(x)=P(x)X$ and $M_{\ext}(x)=\ker P(x)$ with the bounded-holomorphic projection \[P(x)=-\frac{1}{2\pi i}\int_\Gamma R(\zeta,x)d\zeta.\]
Furthermore, $\sigma(T(x)|_{M_{\interior}(x)})=\sigma_{\interior}(x)$ and $\sigma(T(x)|_{M_{\ext}(x)})=\sigma_{\ext}(x)$.
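As an illustration of this contour-integral formula (purely finite-dimensional and not needed for the proofs), the following Python snippet approximates the projection by discretizing the integral for a $2\times 2$ matrix whose spectrum $\{0,3\}$ is split by the unit circle:
\begin{verbatim}
import numpy as np

T = np.array([[0.0, 1.0], [0.0, 3.0]])   # eigenvalues 0 and 3
N = 2000                                  # contour discretization points
P = np.zeros((2, 2), dtype=complex)
for th in 2 * np.pi * np.arange(N) / N:
    z = np.exp(1j * th)                               # zeta on |zeta| = 1
    R = np.linalg.inv(T - z * np.eye(2))              # R(zeta) = (T - zeta)^(-1)
    P += -(1 / (2j * np.pi)) * R * (1j * z) * (2 * np.pi / N)  # dzeta = i zeta dtheta
print(np.round(P.real, 6))        # [[1, -1/3], [0, 0]], projection onto ker T
print(np.linalg.norm(P @ P - P))  # approx. 0, i.e. P is indeed a projection
\end{verbatim}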
To remove the dependence on $x$ in the space $M_{\interior}(x)$ we will use the following proposition.
\begin{proposition}[see {\cite[Ch. II \S4.2]{kato}}]
Let $P(x)$ be a bounded-holomorphic family of projections on a Banach space $X$ defined in a neighbourhood of 0.
Then there is a bounded-holomorphic family of operators $U(x)\colon X\to X$ such that $U(x)$ is an isomorphism for every $x$ and $U(x)P(0)=P(x)U(x)$. In particular, $U(x) P(0) X = P(x)X$ and $U(x)\ker P(0) = \ker P(x)$.
\end{proposition}
Denoting $U(x)^{-1} T(x) U(x)$ as $\widetilde{T}(x)$ we observe \[\sigma(\widetilde T(x)|_{M_{\interior}(0)})= \sigma(\widetilde T(x)) \cap \operatorname{int}(\ensuremath{\Gamma}) = \sigma(T(x)) \cap \operatorname{int}(\ensuremath{\Gamma})\] since $U(x)$ is an isomorphism. Here we denote the interior of $\ensuremath{\Gamma}$ by $\operatorname{int}( \ensuremath{\Gamma})$.
Let us from now on suppose that $\Gamma$ is a circle with radius $\rho$ centered at an eigenvalue $\mu$ of $T$ with finite multiplicity and encloses no other eigenvalues of $T$. Then $\sigma_{\interior}(0)=\{\mu\}$ and $M_{\interior}(0)$ is finite dimensional.
Hence, $\widetilde T(x)|_{M_{\interior}(0)}$ is a holomorphic family of operators on a finite dimensional vector space. It follows that the eigenvalues of $T(x)$ are continuous as functions of $x$.
In addition to the previous assumptions, let us suppose that the eigenvalue $\mu$ is simple.
Then $M_{\interior}(0)$ is one-dimensional and $\widetilde T(x)|_{M_{\interior}(0)}$ is a scalar operator.
We obtain that there is a holomorphic function $\mu\colon B_r\to\ensuremath{\mathbb{C}}$ (with $r=\min_{\zeta\in\Gamma}\|AR(\zeta)\|^{-1}$ as above) such that $\mu(x)$ is an eigenvalue of $T(x)$, $\mu(x)$ is inside $\Gamma$ and $\mu(x)$ is the only part of $\sigma(T(x))$ inside $\Gamma$ since $\sigma_{\interior}(x)=\sigma(\widetilde T(x)|_{M_{\interior}(0)})$.
As a consequence, \[|\mu (x) -\mu|<\rho \qquad \forall\ |x|<r.\] By Cauchy's inequality we infer $|\mu^{(n)}|\leq \rho r^{-n}$ for the Taylor series $\mu(x)=\sum x^n \mu^{(n)}$. Hence,
\begin{align}\left |\mu(x)-\sum_{n=0}^N x^n \mu^{(n)}\right|\leq \rho\cdot \frac{|x|^{N+1}}{r^{N}(r-|x|)}\quad \forall\ |x|<r.\label{eq:error_estimate}
\end{align}
We now want to calculate the Taylor coefficients of $\mu(x)$ in order to get an approximation of $\mu(x)$ in the case where $X=\ensuremath{\mathcal} H$ is a Hilbert space and $T(x)$ is a holomorphic family of type (A) with symmetric $T$ but not necessarily symmetric $T(x)$ for $x\neq 0$.
To this end let $\varphi(x)$ be a normalized holomorphic family of eigenvectors (obtained from $P(x)$).
Consider the Taylor series $\mu(x)=\sum x^n \mu^{(n)}$, $\varphi(x)=\sum x^n \varphi^{(n)}$ and $T(x)u=\sum x^n T^{(n)} u $ for every $u \in \dom(T)$ which converges on a disc of positive radius independent of $u$. This is due to the fact that Taylor series of holomorphic functions converge on every disc that is contained in the domain.
We compare the Taylor coefficients in \begin{align*}
(T(x)-\mu(x))\varphi(x)=0\qquad \text{and} \qquad\langle(T(x)-\mu(x))\varphi(x),\varphi(x)\rangle=0
\end{align*}
and obtain \begin{equation*}
(T-\mu^{(0)})\varphi^{(l)}=-\sum_{n=1}^l(T^{(n)}-\mu^{(n)})\varphi^{(l-n)}\end{equation*} and
\begin{align*}
\mu^{(k)}= &\langle T^{(k)}\varphi^{(0)},\varphi^{(0)}\rangle+\sum_{n=1}^{k-1}\langle(T^{(n)}-\mu^{(n)})\varphi^{(k-n)}, \varphi^{(0)}\rangle.
\end{align*}
A fortiori,
\begin{align}
\mu^{(1)}&= \langle T^{(1)}\varphi^{(0)},\varphi^{(0)}\rangle \label{eq:firstderivative} \\
\mu^{(2)}&= \langle T^{(2)}\varphi^{(0)},\varphi^{(0)}\rangle+\langle(T^{(1)}-\mu^{(1)})\varphi^{(1)}, \varphi^{(0)}\rangle, \label{eq:secondderivative} \end{align}
where $\varphi^{(1)}$ fulfils \begin{equation}(T-\mu^{(0)})\varphi^{(1)}=-(T^{(1)}-\mu^{(1)})\varphi^{(0)}.\label{eq:derivativevector}
\end{equation}
Although $\varphi^{(1)}$ is not uniquely determined by this equation, $\mu^{(2)}$ can be calculated in our setting.
Here $\varphi^{(1)}=v+c\varphi^{(0)}$ with a unique $v\in \ker(T^{(0)}-\mu^{(0)})^\perp$, as $T^{(0)}$ is symmetric.
We infer that
\begin{align*}
\langle(T^{(1)}-\mu^{(1)})\varphi^{(1)}, \varphi^{(0)}\rangle&=\langle(T^{(1)}-\mu^{(1)})v, \varphi^{(0)}\rangle-c\mu^{(1)}+c\langle T^{(1)}\varphi^{(0)}, \varphi^{(0)}\rangle \\&= \langle(T^{(1)}-\mu^{(1)})v, \varphi^{(0)}\rangle.
\end{align*}
Therefore, $\mu^{(2)}$ depends only on $v$ and not on $c$.
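As a quick numerical plausibility check of \eqref{eq:firstderivative} and \eqref{eq:secondderivative} (with randomly generated matrices; this is an illustration only), one can compare the predicted Taylor polynomial with the actual eigenvalue of the perturbed matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
T0 = Q @ np.diag([0.0, 1, 2, 3, 4, 5]) @ Q.T   # symmetric, simple eigenvalue 0
T1 = rng.standard_normal((n, n))                # perturbation, not symmetric
phi0 = Q[:, 0]                                  # normalized eigenvector for 0

mu1 = phi0 @ T1 @ phi0                          # mu^(1) = <T1 phi0, phi0>
rhs = -(T1 - mu1 * np.eye(n)) @ phi0            # right-hand side for phi^(1)
v = np.linalg.pinv(T0) @ rhs                    # the solution orthogonal to ker T0
mu2 = phi0 @ ((T1 - mu1 * np.eye(n)) @ v)       # mu^(2), independent of c

for x in [1e-2, 1e-3]:
    mu_x = min(np.linalg.eigvals(T0 + x * T1), key=abs)
    print(abs(mu_x - (x * mu1 + x**2 * mu2)))   # errors of order x^3
\end{verbatim}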
\section{Perturbation Theory of the Kinetic Brownian Motion}
\label{sec:der}
We want to establish the limit $\gamma\to \infty$ of the spectrum of $P_\ensuremath{\gamma}$.
To do so we write $P_\ensuremath{\gamma}= \frac{\ensuremath{\gamma}^2}2 (\Delta_\ensuremath{\mathbb{S}}-2\ensuremath{\gamma}^{-1} X )=\frac{\ensuremath{\gamma}^2}2 T(-2\ensuremath{\gamma}^{-1})$ where $T(x)=\Delta_\ensuremath{\mathbb{S}}+xX$ and we want to use the methods established in Section \ref{sec:pertth}.
In order to have finite dimensional eigenspaces and holomorphic families of type (A) we will use the orthogonal decomposition of $L^2(S\ensuremath{\mathbb{M}})$ derived in Theorem \ref{thm:decomp}:
\[L^2(S\ensuremath{\mathbb{M}})\simeq L^2(\ensuremath{\Gamma}\backslash G)=\bigoplus\nolimits_{\pi\in\widehat G} m(\pi) \ensuremath{\mathcal} H_\pi.\]
Here, $T(x)$ is given by $-\Xi^2 + x H$ by Proposition \ref{prop:geometricoperators}. We denote the restriction of $T(x)$ to $\ensuremath{\mathcal} H_\pi$ by $T_\pi(x)$ and its resolvent by $R_\pi(\zeta,x)$ and $R _\pi(\zeta)=R_\pi(\zeta,0)$.
\begin{remark}
It follows from Section~\ref{sec:sobolev} that $\dom(T_\pi(x))=\{u\in \ensuremath{\mathcal} H_\pi \mid T_\pi(x) u \in \ensuremath{\mathcal} H_\pi\}=\ensuremath{\mathcal} H^2_\pi$. Furthermore, $T_\pi(x)$ is closed as a restriction of a closed operator.
We conclude that $T_\pi(x)$ is a holomorphic family of type (A) on the complex plane with domain $ \ensuremath{\mathcal} H^2_\pi$.
\end{remark}
\begin{remark}
One can realize the principal series representation on $\ensuremath{\mathcal} H_{\pi_{is}} = L^2(S^1)$ (see \cite[Ch. 4.3]{taylor}). Here $-\Xi^2$ is mapped to $\Delta_{S^1}$ such that $\ensuremath{\mathcal} H_{\pi_{is}}^2=H^2(S^1)$. The remark from above then follows from the elliptic estimate $$\|u\|_{H^2(S^1)}\leq C(\|u\|_{L^2(S^1)}+\| (\Delta_{S^1}+ a(\vartheta) \partial _\vartheta +b(\vartheta) )u\|_{L^2(S^1)})$$ noting that $H$ is a first order differential operator.
\end{remark}
We use the structure of the $G$-representations to obtain a more precise version of elliptic regularity.
\begin{lemma}\label{la:Tbdd}
$H$ is $\Xi^2$-bounded on $\ensuremath{\mathcal} H_\pi$, more precisely $$\|Hu\|^2\leq \frac{|\lambda_\pi|}{4}\|u\|^2 + \frac 3 2\|\Xi^2 u\|^2 \quad\text{for all}\quad u\in\ensuremath{\mathcal} H_\pi^2.$$
\end{lemma}
\begin{proof}
Let us express $u\in \ensuremath{\mathcal} H_\pi^2$ in its Fourier expansion according to $K$-types (see \eqref{eq:K-types}), i.e. $u=\sum_{n\in\ensuremath{\mathbb{Z}}} a_n \phi_n$. Since $H=-\frac 12 (X_++X_-)$ with the raising/lowering operators $X_\pm \colon V_n\to V_{n\pm 1}$ we can compute
\begin{align*}
\|Hu\|^2=& \langle -H^2 u,u\rangle\\
=&-\sum_n a_n \overline{a_n}\langle H^2\phi_n,\phi_n\rangle + a_n \overline{a_{n+2}}\langle H^2 \phi_n,\phi_{n+2}\rangle+ a_n \overline{a_{n-2}} \langle H^2\phi_n,\phi_{n-2}\rangle\\
=&-\frac 14 \sum_n |a_n|^2 \langle (X_+X_-+X_-X_+)\phi_n,\phi_n\rangle + a_n \overline{a_{n+2}}\langle X_+^2 \phi_n,\phi_{n+2}\rangle+\\
\phantom=&\phantom{-\frac 14 \sum}+a_n \overline{a_{n-2}} \langle X_-^2\phi_n,\phi_{n-2}\rangle.
\end{align*}
Since $\Omega =4\Xi^2-2(X_+X_-+X_-X_+)$ and $\Xi=in$ on $V_n$ we infer that \[\langle (X_+X_-+X_-X_+)\phi_n,\phi_n\rangle=-2n^2-\frac 12 \lambda_\pi.\]
Moreover, $\|X_\pm\|_{V_n\to V_{n\pm 1} }= \frac 12 ((2n\pm1)^2+\lambda_\pi -1)^{1/2}$ by Section \ref{sec:sl2}.
Hence, \[|\langle X_\pm^2\phi_n,\phi_{n\pm 2}\rangle| = \frac 14 ((2n\pm1)^2+\lambda_\pi -1)^{1/2} ((2n\pm3)^2+\lambda_\pi -1)^{1/2}.\]
With the Cauchy-Schwarz inequality we obtain
\begin{align*}
\big|\sum_n a_n &\overline{a_{n\pm2}}\langle X_\pm^2 \phi_n,\phi_{n\pm2}\rangle\big|^2\\
&\leq \frac 1{16} \sum_n |a_n|^2|(2n\pm1)^2+\lambda_\pi -1|\sum_n |a_{n\pm2}|^2|(2n\pm3)^2+\lambda_\pi -1|\\
&= \frac 1{16} \sum_n |a_n|^2|(2n\pm1)^2+\lambda_\pi -1|\sum_n |a_{n}|^2|(2n\mp1)^2+\lambda_\pi -1|\\
&\leq\frac1{16}\left(\sum_n|a_n|^2(|\lambda_\pi|+4n^2+4|n|)\right)^2.\\
\end{align*}
We conclude \begin{align*}
\|Hu\|^2&\leq\frac 14 \sum_n |a_n|^2 (2n^2+\frac 12 |\lambda_\pi|+ 2\cdot\frac 14 (|\lambda_\pi| + 4n^2 +4|n|))\\
&\leq \frac 14 |\lambda_\pi| \|u\|^2+\frac 32 \|\Xi^2 u\|^2.\qedhere
\end{align*}
\end{proof}
The eigenspaces of the unperturbed operator $-\Xi^2$ are $V_0$ and $V_k\oplus V_{-k}$ which are finite dimensional.
As we have seen in Section \ref{sec:pertth} the eigenvalues of a holomorphic family of type (A) are continuous as functions of $x$ in this case. We deduce that for the eigenvalues $\mu(x)$ of $T_\pi(x)$ that arise from non-zero eigenvalues $\mu = \mu(0)$ of $-\Xi^2$ the limit $\gamma\to \infty$ of $\frac{\gamma^2}2\mu(-2 \gamma\ensuremath{^{-1}})$, which is an eigenvalue of $P_\gamma$, is $\infty$.
Therefore, we disregard the non-zero eigenvalues at first.
Since the spectrum of $\Xi$ does not contain $0$ in the discrete series representations, we start with a principal or complementary series representation $(\pi,\ensuremath{\mathcal} H_\pi)$.
Here the eigenspace for the eigenvalue 0 of $T_\pi(0)$ is $\langle\phi_0\rangle$ which is one-dimensional.
This means that there is an analytic eigenvalue $\mu(x) = \sum x^n \mu^{(n)}$ of $T_\pi(x)$ and its eigenvector $\varphi(x)=\sum x^n \varphi^{(n)}$ is analytic on some $B_r(0)$ which will be determined later on.
Note that $\mu^{(0)} =0 $, $\varphi^{(0)}=\phi_0$, $T=T^{(0)}=-\Xi^2$ and $T^{(1)}=H$ in this case.
We can use Equation \eqref{eq:firstderivative} from Section \ref{sec:pertth}:
\begin{align*}
\mu^{(1)}=\langle T^{(1)} \varphi^{(0)}, \varphi^{(0)}\rangle=\langle H \phi_0,\phi_0\rangle=\frac12\langle - (X_++X_-)\phi_0,\phi_0\rangle.
\end{align*}
Due to the fact that $X_\pm$ are raising respectively lowering operators, i.e. $X_\pm V_k\subseteq V_{k\pm1}$, we conclude that $\mu'(0)=\mu^{(1)}=0$.
We now want to find the second derivative $\mu''(0)= 2 \mu^{(2)}$ of $\mu$.
According to Section \ref{sec:pertth} we first have to calculate $\varphi^{(1)}$ via $-\Xi^2\varphi^{(1)}=-H\phi_0$ (see Equation~\eqref{eq:derivativevector}).
Notice that $-H\phi_0\in V_{-1}\oplus V_1 = \{u\mid -\Xi^2u =u\}$. Furthermore $\ker(-\Xi^2)=V_0 = \langle \phi_0\rangle$, and consequently $\varphi^{(1)}=-H\phi_0+c\phi_0$ for some $c\in\ensuremath{\mathbb{C}}$. Let us recall that $\mu''(0)$ is independent of $c$.
Consequently by Equation \eqref{eq:secondderivative}, \begin{align*}
\mu''(0) = 2 \mu^{(2)}=& 2\langle H(-H\phi_0), \phi_0\rangle=-\frac{1}{2}\langle (X_++X_-)^2\phi_0,\phi_0\rangle\\ =& -\frac12 \langle ( X_+^2+ X_+X_-+ X_-X_++ X_-^2)\phi_0,\phi_0\rangle.
\intertext{Again, $X_\pm$ are raising/lowering operators. Therefore, }
\mu''(0)&=-\frac12\langle (X_+X_- +X_-X_+)\phi_0,\phi_0\rangle \\
&=\frac 14 \langle \Omega \phi_0,\phi_0\rangle = \frac {\lambda_\pi}{4}
\end{align*}
as the Casimir operator $\Omega$ equals $4\Xi^2-2(X_+X_-+X_-X_+)$ and $\Xi \phi_0=0$.
Summarizing, we arrive at the following situation.
\begin{proposition}\label{thm:evofTpi}
For a principal or complementary series representation $\pi$ there is $r_\pi>0$ and an analytic function $\mu\colon B_{r_\pi}(0) \to \ensuremath{\mathbb{C}}$ such that $\mu(x)$ is an eigenvalue of $T_\pi(x)$ with multiplicity 1 and $\mu (x)=x^2\frac 12 \frac{\lambda_\pi}4 + \ensuremath{\mathcal} O (x^{3})$. A fortiori, $x^{-2}\mu (x) \to \frac 12 \frac{\lambda_\pi}{4}$ as $x\to 0$.
\end{proposition}
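Proposition \ref{thm:evofTpi} can be tested numerically by truncating the $K$-type decomposition at $|k|\leq K$ and using the matrix coefficients of $X_\pm$ from Section \ref{sec:sl2}. The following Python snippet (a sanity check with an uncontrolled truncation, not part of the proof) compares the eigenvalue of the truncated $T_\pi(x)$ closest to $0$ with $\frac{\lambda_\pi}{8}x^2$:
\begin{verbatim}
import numpy as np

lam = 2.0   # lambda_pi = 1 + s^2 with s = 1 (a principal series representation)
K = 40      # truncation of the K-type decomposition at |k| <= K
ks = np.arange(-K, K + 1)
n = len(ks)

# X_+ phi_k = c_k phi_{k+1} with |c_k| = (1/2) sqrt((2k+1)^2 + lam - 1);
# taking c_k > 0 (a choice of phases) and X_- = -X_+^* gives
# H phi_k = -(c_k phi_{k+1} - c_{k-1} phi_{k-1}) / 2.
c = 0.5 * np.sqrt((2.0 * ks + 1) ** 2 + lam - 1)
H = np.zeros((n, n))
for i in range(n - 1):
    H[i + 1, i] = -c[i] / 2
    H[i, i + 1] = c[i] / 2

T0 = np.diag(ks.astype(float) ** 2)   # -Xi^2 acts as k^2 on V_k

for x in [0.1, 0.05, 0.025]:
    mu = min(np.linalg.eigvals(T0 + x * H), key=abs)
    print(x, mu.real, lam * x ** 2 / 8)   # the two values agree to higher order
\end{verbatim}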
We want to determine error estimates for the eigenvalues and a lower bound for the radius $r_\pi$ used above.
Let $\ensuremath{\Gamma}$ be the circle with radius $\frac 12$ centered at 0.
Hence the spectrum of $T_\pi(0)$ for a principal or complementary series representation is separated by $\ensuremath{\Gamma}$ where the only eigenvalue inside $\ensuremath{\Gamma}$ is 0.
As we have seen in Section \ref{sec:pertth} a choice for $r_\pi$ is $r_\pi=\min_{\zeta\in\Gamma} \|HR_\pi(\zeta)\|^{-1}$.
\begin{lemma}\label{la:normhr}
Let $\sigma$ be the spectrum of $-i\Xi$ on $\ensuremath{\mathcal} H_\pi$, i.e. $\sigma = \ensuremath{\mathbb{Z}}$ if $\pi = \pi_{is}$ or $\pi = \pi_s$, and $\sigma = \{k\mid\pm k\geq n\}$ if $\pi = \pi_{\pm 2n}^\pm$. For $\zeta \in \ensuremath{\mathbb{C}}\setminus \sigma^2$ we have
$$ \|R_\pi(\zeta)\| = \sup_{k\in \sigma} |k^2-\zeta|^{-1}.$$
Additionally we can estimate:
\[\|HR_\pi(\zeta)\|^2\leq \left(\frac{|\lambda_\pi|}{4}+3|\zeta|^2\right) \|R_\pi(\zeta)\|^2 +3.\]
\end{lemma}
\begin{proof}
Let us first evaluate the norm of $R_\pi(\zeta)$. On the one hand $R_\pi(\zeta)\phi_k=(k^2-\zeta)\ensuremath{^{-1}}\phi_k$ and we infer $\|R_\pi(\zeta)\|\geq |k^2-\zeta|\ensuremath{^{-1}}$ for all $k\in \sigma$. On the other hand, \[\left\|R_\pi(\zeta)u\right\|=\left\|\sum_{k\in \sigma} a_k (k^2-\zeta)\ensuremath{^{-1}} \phi_k\right\|=\sqrt{\,\sum_{k\in \sigma} |a_k|^2 |k^2-\zeta|^{-2}}\leq \sup_{k\in \sigma} |k^2-\zeta|\ensuremath{^{-1}} \|u\|\]
for $u=\sum_k a_k\phi_k$. Thus $\|R_\pi(\zeta)\| = \sup_{k\in \sigma} |k^2-\zeta|\ensuremath{^{-1}}$.
Using Lemma \ref{la:Tbdd} it follows that
\begin{align*}
\|HR_\pi(\zeta)\|^2&\leq \frac {|\lambda_\pi|} 4 \|R_\pi(\zeta)\|^2+ \frac 32 \left\|-\Xi^2\left(-\Xi^2 -\zeta\right)\ensuremath{^{-1}} \right\|^2\\
&\leq \frac {|\lambda_\pi|} 4 \|R_\pi(\zeta)\|^2+ \frac 32 \left\| 1+\zeta R_\pi(\zeta)\right\|^2\\
&\leq \left(\frac{|\lambda_\pi|}{4}+3|\zeta|^2\right)\|R_\pi(\zeta)\|^2 +3
\end{align*}
where we used $(x+y)^2\leq 2(x^2+y^2)$ in the last step.
\end{proof}
\begin{korollar}
\begin{enumerate}[(i)]
\item
Let $\pi$ be a principal or complementary series representation. Then $R_\pi(\zeta,x)$ exists for all $|\zeta|\geq \frac 12$, $\Re \zeta\leq \frac 12$ and $|x|< (\lambda_\pi+6)^{-1/2}$ and we have $$\| R_\pi(\zeta,x)\|\leq |\zeta|\ensuremath{^{-1}} \left(1-|x|\sqrt{\lambda_\pi+6}\right)\ensuremath{^{-1}}.$$
\item
Let $\pi$ be a discrete series representation $\pi_{\pm 2n}^\pm$. Then $R_\pi(\zeta,x)$ exists for all $\Re \zeta\leq \frac 12$ and $|x|<1/\sqrt{32}$ and we have $$\| R_\pi(\zeta,x)\|\leq |\zeta-n^2|\ensuremath{^{-1}} \left(1-|x|\sqrt{32}\right)\ensuremath{^{-1}}.$$
\end{enumerate}
\label{cor:resofT}
\end{korollar}
\begin{proof}
Let $\Re \zeta\leq \frac 12$ and $|\zeta|\geq \frac 12$. Then $\|R_\pi(\zeta)\| = \sup_{k\in \ensuremath{\mathbb{Z}}} |k^2 -\zeta|\ensuremath{^{-1}} = |\zeta|\ensuremath{^{-1}}$ in the first case.
A simple consequence of Lemma \ref{la:normhr} is \[\|HR_\pi(\zeta)\|^2\leq \frac{\lambda_\pi}{4|\zeta|^2} +6\leq \lambda_\pi +6.\]
Combining this with Equation \eqref{eq:resolventformula2} we infer that $\zeta\not\in \sigma(T_\pi(x))$ for every $x$ with $|x|<(\lambda_\pi+6)^{-1/2}$.
The stated estimate is a consequence of Equation \eqref{eq:resolventformula2}, too.
In the case of $\pi=\pi_{\pm 2n}^\pm$, we have $\|R_\pi(\zeta)\| = \sup_{k \geq n} |k^2-\zeta|^{-1} = |n^2-\zeta|\ensuremath{^{-1}}$ if $\Re \zeta \leq \frac 12$.
Consequently by Lemma \ref{la:normhr},
\begin{align*}
\|HR_\pi(\zeta)\|^2&\leq\left(\frac{|\lambda_\pi|}{4}+3|\zeta|^2\right) \|R_\pi(\zeta)\|^2 +3\\
&= \frac{(2n-1)^2-1}{4|n^2-\zeta|^2}+3\frac{|\zeta|^2}{|n^2-\zeta|^2} + 3\\
&\leq\frac{n^2-n}{(n^2-1/2)^2}+3\frac{|\zeta|^2}{|1-\zeta|^2} + 3\\
&\leq \frac{1}{n^2-1/2}+3\left(1+ \frac{1}{|1-\zeta|}\right)^2+3\\
&\leq 2+3\cdot 9 +3 = 32.
\end{align*}
Using again Equation \eqref{eq:resolventformula2} finishes the proof.
\end{proof}
Now we can prove the following theorem on the spectrum of $T_\pi(x)$.
\begin{theorem}
\begin{enumerate}[(i)]
\item Let $\pi$ be a principal or complementary series representation and $r_\pi=(\lambda_\pi +6)^{-1/2}$. Then, there is a holomorphic function $\mu\colon B_{r_\pi}(0)\to \ensuremath{\mathbb{C}}$ such that $\mu(x)$ is an eigenvalue of $T_\pi(x)$ with multiplicity 1, $|\mu(x)|\leq \frac 12$ and $\sigma(T_\pi(x))\cap \{\zeta\mid \Re \zeta\leq \frac 12\} = \{\mu(x)\}$ for all $x\in B_{r_\pi}(0)$.
Furthermore, \[\left|\mu(x)-\frac 12 \frac { \lambda_\pi}4 x^2\right|\leq \frac 12 \frac {|x|^3}{r_\pi^2(r_\pi-|x|)}\qquad \forall \ |x|< r_\pi.\]
\item Let $\pi$ be a discrete series representation. Then $\Re \sigma (T_\pi(x))> \frac 12$ for all $x$ with $|x|<1/\sqrt{32}$.
\end{enumerate}\label{thm:specT}
\end{theorem}
\begin{proof}
We have seen before that $\mu(x)$ is the only eigenvalue with absolute value smaller than $\frac 12$ if $|x|< \min_{|\zeta|=1/2} \|HR_\pi(\zeta)\|^{-1}$. Since $\|HR_\pi(\zeta)\|\leq \frac 1{r_\pi}$ by Corollary \ref{cor:resofT} this is the case if $|x|<r_\pi$.
In Proposition \ref{thm:evofTpi} we calculated $\mu''(0)=\frac{\lambda_\pi}{4}$ and with Equation \eqref{eq:error_estimate} we obtain the error estimate.
The statement about the discrete series that remains to be proven follows directly from Corollary \ref{cor:resofT}.
\end{proof}
\begin{remark}\label{bem:nonuniform}
Unfortunately, the radius $r_\pi$ depends on $\lambda_\pi$ which is given by $1+s^2$ for $\pi = \pi_{is}$. As the $\ensuremath{\mathcal} H_{\pi_{is}}$ are contained in $L^2(\ensuremath{\Gamma}\backslash G)$ for arbitrarily large $s$, we do not obtain a uniform bound on $r_\pi$.
Since \[\sup_{|\zeta|=1/2} \|HR_\pi(\zeta)\| \geq \|HR(1/2)\|\geq \|HR(1/2)\phi_0\|=2\|H\phi_0\| =2\sqrt{\frac 12 \frac{\lambda_\pi}4}\]
we cannot get rid of the dependence on $\lambda_\pi$.
\end{remark}
Reformulated in terms of $x=-2\ensuremath{\gamma}\ensuremath{^{-1}}$ we obtain Theorem \ref{thm:evofPg} for the generator of the kinetic Brownian motion on $S\ensuremath{\mathbb{M}}$.
\begin{proof}[Proof of Theorem \ref{thm:evofPg}]
As we have seen in Theorem \ref{thm:decomp}, $L^2(\ensuremath{\Gamma} \backslash G)$ decomposes discretely into unitary irreducible representations and the multiplicity of a principal or complementary series representation $\pi$ in $L^2(\ensuremath{\Gamma}\backslash G)$ is given by the multiplicity of the eigenvalue $\frac {\lambda_\pi} 4$ of $\Delta_\ensuremath{\mathbb{M}}$.
Thus, if $\eta$ is a $\Delta_\ensuremath{\mathbb{M}}$-eigenvalue of multiplicity $n$ then there is a principal or complementary series representation $(\pi,\ensuremath{\mathcal} H_\pi)$ such that $\eta=\frac{\lambda_\pi}{4}$ and that occurs $n$ times in $L^2(\ensuremath{\Gamma}\backslash G)$.
For this representation Theorem \ref{thm:specT} states that there is $\mu\colon B_{r_\pi}(0)\to \ensuremath{\mathbb{C}}$ for $r_\pi=(\lambda_\pi+6)^{-1/2}=(4\eta +6)^{-1/2}$ such that $\mu(x)$ is an eigenvalue of $T_\pi(x)$.
Since $P_\ensuremath{\gamma}=\frac{\ensuremath{\gamma} ^2}2T(-2\ensuremath{\gamma}\ensuremath{^{-1}})$ and $T_\pi$ is the restriction of $T$ to $\ensuremath{\mathcal} H_\pi$ we obtain that $\frac{\ensuremath{\gamma}^2}2\mu(-2\ensuremath{\gamma}\ensuremath{^{-1}})$ is an eigenvalue with multiplicity $n$ of $P_\ensuremath{\gamma}$ if $\ensuremath{\gamma}>2\sqrt{4\eta+6}$. The given estimate follows from Theorem \ref{thm:specT} as well.
\end{proof}
\section{Convergence to Equilibrium}\label{sec:equilibrium}
In this section we want to analyse the convergence of the kinetic Brownian motion to equilibrium.
As mentioned above, this convergence is described by the propagator $e^{-tP_\ensuremath{\gamma}}$.
In general, the resolvent $(A + \zeta)\ensuremath{^{-1}}$ of a generator $A$ of a contraction semigroup on a Banach space $X$ is the Laplace transform of $e^{-tA}$ by the Hille-Yosida theorem (e.g. \cite[Thm. X.47a]{reedsimon}). Hence, we can obtain $e^{-tA}$ by the inverse Laplace transform of $(A+\zeta)\ensuremath{^{-1}}$.
More precisely we have the following proposition.
\begin{proposition}[{e.g. \cite[Ch. III Cor. 5.15]{engelnagel}}]\label{prop:inversion}
If $A$ generates the strongly continuous contraction semigroup $e^{-tA}$ on a Banach space $X$ then we have for all $u\in \dom(A)$ and $w<0$:
\[e^{-tA}u = \frac 1 {2\pi i} \lim_{n\to \infty} \int _{w-in}^{w+in} e^{-\zeta t} R(\zeta) u\,d\zeta.\]
\end{proposition}
Unfortunately, the integral does not converge absolutely. We will solve this issue by using integration by parts and the explicit estimates obtained in Corollary~\ref{cor:resofT}.
\begin{proposition}\label{prop:semigrouprestriction}
$T_\pi(x)$ generates a contraction semigroup $e^{-tT_\pi(x)}$ for real $x$, $\ensuremath{\mathcal} H_\pi$ is $e^{-tP_\ensuremath{\gamma}}$-invariant, and we have $$\left. e^{-tP_\ensuremath{\gamma}}\right|_{\ensuremath{\mathcal} H_\pi} = e^{-(t\frac{\ensuremath{\gamma}^2}2)T_\pi(-2\ensuremath{\gamma}\ensuremath{^{-1}})}.$$
\end{proposition}
\begin{proof}
Since $T_\pi(x)$ is the restriction of a multiple of $P_{-2x\ensuremath{^{-1}}}$, it generates a contraction semigroup for real $x$ as well. The last statements follow from Proposition~\ref{prop:inversion} together with the observation that $\dom(T_\pi(x)) =\ensuremath{\mathcal} H _\pi^2$ is dense in $\ensuremath{\mathcal} H_\pi$.
\end{proof}
We are now going to analyse the decay rate of $e^{-tP_\ensuremath{\gamma}}$ restricted to a fixed unitary representation.
\begin{theorem}\label{thm:contractionpi}
Let $\pi$ be a complementary or principal series representation, $\mu(x)$ the eigenvalue of $T_\pi(x)$ from Theorem \ref{thm:specT}, $r_\pi = (\lambda_\pi +6)^{-1/2}$ and $$P(x)=-\frac 1{2\pi i} \int_{|\zeta|=1/2} R(\zeta,x)\, d\zeta, \qquad |x|<r_\pi,$$ the projection onto the eigenspace corresponding to $\mu(x)$.
Then we have
\[e^{-tT_\pi(x)} u = e^{-\mu(x)t} P(x)u + \frac 1t \frac{1}{2\pi i} \int_{1/2-i\infty}^{1/2+i\infty}e^{-\zeta t}R_\pi(\zeta,x)^2u\,d\zeta\]
for all $u\in \ensuremath{\mathcal} H_\pi^2$, $x\in \ensuremath{\mathbb{R}}$ with $|x|<r_\pi$. Furthermore,
\[\|e^{-tT_\pi(x)} u - e^{-\mu(x)t} P(x)u \| \leq \frac 4t e^{-t/2}\|u\|\]
if $|x|\leq r_\pi/2$.
If $\pi$ is a discrete series representation $\pi_{\pm 2n}^\pm$ we have
\[e^{-tT_\pi(x)} u = \frac 1t \frac{1}{2\pi i} \int_{1/2-i\infty}^{1/2+i\infty}e^{-\zeta t}R_\pi(\zeta,x)^2u\,d\zeta\] and \[\|e^{-tT_\pi(x)} u \| \leq \frac 2{t(n^2-1/2)} e^{-t/2}\|u\|\leq \frac 4t e^{-t/2}\|u\|\quad \text{if} \quad |x|\leq \frac 1{2\sqrt{32}}.\]
\end{theorem}
\begin{proof}
From Proposition \ref{prop:inversion} we obtain that
\[e^{-tT_\pi(x)}u = \frac 1 {2\pi i} \lim_{n\to \infty} \int _{w-in}^{w+in} e^{-\zeta t} R_\pi(\zeta,x) u\,d\zeta\]
if $w<0$ and $u\in \dom(T_\pi(x))=\ensuremath{\mathcal} H_\pi^2$.
Since $|x|<r_\pi$ we infer with Theorem \ref{thm:specT} that $\sigma(T_\pi(x)) \cap \{\Re \zeta \leq 1/2\} = \{\mu(x)\}$ and $|\mu(x)|<1/2$.
Hence the only pole of $R_\pi(\zeta,x)$ in the considered domain is $\mu(x)$ which has order 1.
Applying the residue theorem we get
\begin{align*}\int _{w-in}^{w+in} e^{-\zeta t} &R_\pi(\zeta,x) u\,d\zeta + \int _{w+in}^{1/2+in} e^{-\zeta t} R_\pi(\zeta,x) u\,d\zeta + \int _{1/2+in}^{1/2-in} e^{-\zeta t} R_\pi(\zeta,x) u\,d\zeta \\ &+\int _{1/2-in}^{w-in} e^{-\zeta t} R_\pi(\zeta,x) u\,d\zeta= -2\pi i \res _{\zeta =\mu(x)}( e^{-\zeta t} R_ \pi (\zeta ,x)u)\end{align*}
By Corollary \ref{cor:resofT} (i) we have \[
\int _{w\pm in}^{1/2 \pm in} e^{-\zeta t} R_\pi(\zeta,x) u\,d\zeta \stackrel{n\to\infty}{\longrightarrow} 0.
\]
Integration by parts yields
\begin{align*}
\int _{1/2-in}^{1/2+in}& e^{-\zeta t} R_\pi(\zeta,x) u\,d\zeta=t\ensuremath{^{-1}} \int _{1/2-in}^{1/2+in} e^{-\zeta t}\frac{d}{d\zeta} R_\pi(\zeta,x) u\,d\zeta -\left. t\ensuremath{^{-1}} e^{-\zeta t}R_\pi(\zeta,x)u\right|_{1/2-in}^{1/2+in}\\
&=t\ensuremath{^{-1}} \int _{1/2-in}^{1/2+in} e^{-\zeta t} R_\pi(\zeta,x)^2 u\,d\zeta-\left. t\ensuremath{^{-1}} e^{-\zeta t}R_\pi(\zeta,x)u\right|_{1/2-in}^{1/2+in}.
\end{align*}
Using Corollary \ref{cor:resofT} (i) we furthermore calculate for $|x|\leq r_\pi/2$:
\begin{align*}
&\lim_{n\to\infty} \left\| \int _{1/2-in}^{1/2+in} e^{-\zeta t} R_\pi(\zeta,x) u\,d\zeta\right\| = \lim_{n\to\infty} \left\|t\ensuremath{^{-1}} \int_{1/2-in}^{1/2+in}e^{-\zeta t}R_\pi(\zeta,x)^2u\,d\zeta\right\| \\& \leq t\ensuremath{^{-1}} \int_{-\infty}^\infty e^{-t/2} \|R_\pi(1/2 + is,x)\|^2 \,ds\|u\| \leq 4 t\ensuremath{^{-1}} e^{-t/2} \int_{-\infty}^\infty \frac 1{1/4 +s^2}\,ds \|u\| \\
&= 8 t\ensuremath{^{-1}} e^{-t/2} \int_{-\infty}^\infty \frac 1{1 +s^2}\,ds \|u\|=8\pi t\ensuremath{^{-1}} e^{-t/2}\|u\|.
\end{align*}
In particular, the limit exists.
Notice that
\begin{align*}
\res&_{\zeta =\mu(x)}( e^{-\zeta t} R_ \pi (\zeta ,x)u) =e^{-\mu(x) t}\res_{\zeta =\mu(x)}( R_ \pi (\zeta ,x)) u \\
&=e^{-\mu(x) t}\frac 1 {2\pi i} \int _{|\zeta|=1/2} R_\pi(\zeta,x)\,d\zeta u= -e^{-\mu(x) t}P(x)u\end{align*}
as the pole has order 1.
Hence,
\[e^{-tT_\pi(x)}u = e^{-\mu(x)t}P(x)u + \frac 1t \frac{1}{2\pi i} \int_{1/2-i\infty}^{1/2+i\infty}e^{-\zeta t}R_\pi(\zeta,x)^2u\,d\zeta \] and the estimate follows from the above calculation.
For the case of a discrete series representation the proof is the same except that we do not collect a residue and use the estimate of Corollary \ref{cor:resofT} (ii).
\end{proof}
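In the truncated finite-dimensional model used after Proposition \ref{thm:evofTpi} one can also test the estimate of Theorem \ref{thm:contractionpi} numerically; the following Python snippet (again only a sanity check, with an uncontrolled truncation) computes both sides of the estimate:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lam, K = 2.0, 15                      # principal series lambda_pi = 2, truncation
ks = np.arange(-K, K + 1)
n = len(ks)
c = 0.5 * np.sqrt((2.0 * ks + 1) ** 2 + lam - 1)
H = np.zeros((n, n))
for i in range(n - 1):
    H[i + 1, i], H[i, i + 1] = -c[i] / 2, c[i] / 2
x = 0.05                              # |x| <= r_pi / 2 here
T = np.diag(ks.astype(float) ** 2) + x * H

mu = min(np.linalg.eigvals(T), key=abs).real   # the eigenvalue inside |zeta| = 1/2
# spectral projection P(x) = -(1 / 2 pi i) contour integral of R(zeta, x)
N = 4000
P = np.zeros((n, n), dtype=complex)
for th in 2 * np.pi * np.arange(N) / N:
    z = 0.5 * np.exp(1j * th)         # zeta on |zeta| = 1/2, dzeta = i zeta dtheta
    P += -(1 / (2j * np.pi)) * np.linalg.inv(T - z * np.eye(n)) \
         * (1j * z) * (2 * np.pi / N)
P = P.real

for t in [2.0, 5.0, 10.0]:
    lhs = np.linalg.norm(expm(-t * T) - np.exp(-mu * t) * P, 2)
    print(t, lhs, 4 / t * np.exp(-t / 2))   # lhs stays below the bound
\end{verbatim}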
With the decomposition of $L^2(S\ensuremath{\mathbb{M}})$ we will now prove Theorem \ref{thm:convergencetoequilibrium}.
\begin{proof}[Proof of Theorem \ref{thm:convergencetoequilibrium}]
Recall that $L^2(S\ensuremath{\mathbb{M}})$ decomposes discretely by Theorem \ref{thm:decomp}. Let $f_\pi$ be the projection of $f$ on $m(\pi)\ensuremath{\mathcal} H_\pi$. If $\pi$ is a complementary or principal series representation it corresponds to the eigenvalue $\eta = \frac 14 \lambda_\pi>0$ of $\Delta_\ensuremath{\mathbb{M}}$. In this case we write $f_\eta$ instead of $f_\pi$. If $\eta=0$ we define $f_\eta$ to be the orthogonal projection of $f$ onto the trivial representation in $L^2(S\ensuremath{\mathbb{M}})$.
With the norm of Section \ref{sec:hyperbolicsurfaces} we have
\begin{align*}
C^2\geq&\, \|(-H^2-B^2-\Xi^2)f\|_{L^2(S\ensuremath{\mathbb{M}})}^2 \\\geq &\sum_{\eta\in\sigma(\Delta_\ensuremath{\mathbb{M}})} \|(-H^2-B^2-\Xi^2)f_\eta\|^2
+\sum_{\pi =\pi_{\pm2n}^\pm} \|(-H^2-B^2-\Xi^2)f_\pi\|^2\\
\geq& \sum_{\eta\in\sigma(\Delta_\ensuremath{\mathbb{M}})} \|(\eta-2\Xi^2)f_\eta\|^2 \\
=& \sum_{\eta\in\sigma(\Delta_\ensuremath{\mathbb{M}})} \langle (\eta-2\Xi^2)f_\eta,(\eta-2\Xi^2)f_\eta \rangle\\
\geq & \sum_{\eta\in\sigma(\Delta_\ensuremath{\mathbb{M}})} \eta^2\|f_\eta\|^2
\end{align*}
since $-\Xi^2$ is a positive operator.
Thus,
\begin{align*}
\sum_ {\eta>C\varepsilon\ensuremath{^{-1}}} \|f_\eta\|^2\leq (C\varepsilon\ensuremath{^{-1}})^{-2} \sum_ {\eta>C\varepsilon\ensuremath{^{-1}}} \eta^2\|f_\eta\|^2\leq(C\varepsilon\ensuremath{^{-1}})^{-2} \sum_ {\eta\in\sigma(\Delta_\ensuremath{\mathbb{M}})} \eta^2\|f_\eta\|^2\leq \varepsilon^2.
\end{align*}
Because of $\|e^{-tP_\ensuremath{\gamma}}\|\leq 1$ we obtain \begin{equation}
\bigg \|e^{-tP_\ensuremath{\gamma}}\sum_ {\eta>C\varepsilon\ensuremath{^{-1}}} f_\eta\bigg\|\leq \varepsilon.\label{eq:gleichung}
\end{equation}
We define $\lambda_\eta (\ensuremath{\gamma})\coloneqq \frac{\ensuremath{\gamma}^2}2 \mu_\eta(-2\ensuremath{\gamma}\ensuremath{^{-1}})$ where $\mu_\eta(x)$ is the eigenvalue of $T_\pi(x)$ obtained in Theorem \ref{thm:specT} and $\pi$ is the representation corresponding to $\eta\in\sigma(\Delta_\ensuremath{\mathbb{M}})$ (see Theorem \ref{thm:evofPg}).
Furthermore, let $\Pi_{\lambda_\eta(\ensuremath{\gamma})}$ be the projection onto the eigenvalue $\lambda_\eta (\ensuremath{\gamma})$ given by $$\Pi_{\lambda_\eta(\ensuremath{\gamma})} f = P_\eta(-2\ensuremath{\gamma}\ensuremath{^{-1}})f_\eta\quad \text{with}\quad P_\eta(x)=-\frac 1{2\pi i} \int_{|\zeta|=1/2} R_\pi(\zeta,x)\, d\zeta.$$
Note that $\|P_\eta(x)\|\leq 2$ for $|x|\leq (2\sqrt{4\eta+6})\ensuremath{^{-1}}$ by Corollary \ref{cor:resofT}. If $\eta=0$ we write $\lambda_\eta(\ensuremath{\gamma})=0$ and $\Pi_{\lambda_\eta(\ensuremath{\gamma})}f = f_\eta$ and it holds $e^{-tP_\ensuremath{\gamma}}f_\eta =f_\eta$.
Then we have by Proposition \ref{prop:semigrouprestriction} and Theorem \ref{thm:contractionpi}
\begin{align*}
\bigg \|e^{-tP_\ensuremath{\gamma}}f -&\sum_{\substack{\eta \in \sigma(\Delta_\ensuremath{\mathbb{M}})\\ \eta \leq C{\varepsilon}\ensuremath{^{-1}}}}e^{-t \lambda_\eta (\ensuremath{\gamma})}\Pi_{\lambda_\eta(\ensuremath{\gamma})} f \bigg \|_{L^2(S\ensuremath{\mathbb{M}})}^2=\sum_{\substack{\eta \in \sigma(\Delta_\ensuremath{\mathbb{M}})\\ \eta \leq C{\varepsilon}\ensuremath{^{-1}}}}\bigg \|e^{-tP_\ensuremath{\gamma}}f_\eta -e^{-t \lambda_\eta (\ensuremath{\gamma})}\Pi_{\lambda_\eta(\ensuremath{\gamma})} f_\eta \bigg \|^2\\
& \phantom\leq+ \bigg \|e^{-tP_\ensuremath{\gamma}}\sum_ {\eta>C\varepsilon\ensuremath{^{-1}}} f_\eta\bigg\|^2 + \sum_{\pi =\pi_{\pm2n}^\pm } \bigg \|e^{-tP_\ensuremath{\gamma}}f_\pi\bigg\|^2\\
&\overset{\text{Eq. \eqref{eq:gleichung}}}\leq \varepsilon^2 + \sum_{\substack{\eta \in \sigma(\Delta_\ensuremath{\mathbb{M}})\\ \eta \leq C{\varepsilon}\ensuremath{^{-1}}}}\bigg \|e^{-t\frac{\ensuremath{\gamma}^2}2T_\eta (-2\ensuremath{\gamma}\ensuremath{^{-1}})}f_\eta -e^{-t \frac{\ensuremath{\gamma}^2}2\mu_\eta(-2\ensuremath{\gamma}\ensuremath{^{-1}})}P_\eta (-2\ensuremath{\gamma}\ensuremath{^{-1}}) f_\eta \bigg \|^2\\
&\phantom= +\sum_{\pi =\pi_{\pm2n}^\pm } \bigg \|e^{-t\frac{\ensuremath{\gamma}^2}2T_\pi(-2\ensuremath{\gamma}\ensuremath{^{-1}})}f_\pi\bigg\|^2\\
&\overset{\text{Thm. \ref{thm:contractionpi}}}\leq \varepsilon^2 + \sum_{\substack{\eta \in \sigma(\Delta_\ensuremath{\mathbb{M}})\\ \eta \leq C{\varepsilon}\ensuremath{^{-1}}}}\left(\frac 8{t\ensuremath{\gamma}^2} e^{-t\ensuremath{\gamma}^2/4}\right)^2\|f_\eta\|^2 + \sum_{\pi =\pi_{\pm2n}^\pm } \left(\frac 8{t\ensuremath{\gamma}^2} e^{-t\ensuremath{\gamma}^2/4}\right)^2\|f_\pi\|^2\\
&\leq \varepsilon^2 +\left(\frac 8{t\ensuremath{\gamma}^2} e^{-t\ensuremath{\gamma}^2/4}\right)^2 \|f\|_{L^2(S\ensuremath{\mathbb{M}})}^2
\end{align*}for every $\ensuremath{\gamma} > \max \{4\sqrt{4C{\varepsilon} \ensuremath{^{-1}} +6} , 4\sqrt{32}\}$ where we have used Proposition~\ref{prop:semigrouprestriction} in the first inequality.
\end{proof}
We end this section with the proof of Corollary~\ref{cor:equilibrium}.
\begin{proof}
By Theorem~\ref{thm:evofPg} we have $|\lambda_{\eta}(\ensuremath{\gamma}) - \eta|\leq B\ensuremath{^{-1}} $ for all eigenvalues $\eta \leq C\varepsilon\ensuremath{^{-1}}$ and $\ensuremath{\gamma} > 4B(4C{\varepsilon} \ensuremath{^{-1}} +6)^{3/2}$. In particular, $\Re \lambda_{\eta}(\ensuremath{\gamma}) \geq \eta_1 - B\ensuremath{^{-1}}$. Hence by Theorem~\ref{thm:convergencetoequilibrium},
\begin{align*}
\norm{e^{-tP_\ensuremath{\gamma}}f - \int_{S\ensuremath{\mathbb{M}}} f d\mu}_{L^2(S\ensuremath{\mathbb{M}})} &\leq \varepsilon + \frac 8 {\ensuremath{\gamma}^2t}e^{-\ensuremath{\gamma}^2t/4} \|f\|_{L^2(S\ensuremath{\mathbb{M}})} \\&\phantom =+ \sum_{\substack{\eta \in \sigma(\Delta_\ensuremath{\mathbb{M}}) \\ 0 \neq \eta \leq C{\varepsilon}\ensuremath{^{-1}}}}\bigg \|e^{-t \lambda_\eta (\ensuremath{\gamma})}\Pi_{\lambda_\eta(\ensuremath{\gamma})} f \bigg \|_{L^2(S\ensuremath{\mathbb{M}})}.
\end{align*}
Furthermore, we have $$\|e^{-t \lambda_\eta (\ensuremath{\gamma})}\Pi_{\lambda_\eta(\ensuremath{\gamma})} f \|_{L^2(S\ensuremath{\mathbb{M}})}\leq 2 e^{-t(\eta_1-B\ensuremath{^{-1}})}\norm{f}_{L^2(S\ensuremath{\mathbb{M}})}$$
and by the Weyl law $$\sup_N \frac {\# \{\eta \in \sigma (\Delta_\ensuremath{\mathbb{M}})\mid \eta\leq N\}}N <\infty.$$
This completes the proof.
\end{proof}
\section{Introduction}
Marine vessels are a large contributor to global CO2 emissions\footnote{Global annual CO2 emissions due to shipping were estimated to be 938 million tonnes in 2012 \cite{imo-ghg3} and 831 million tonnes in 2015 \cite{johansson2017}. A single large ship can burn 40000 tons of fuel and produce 120000 tons of CO2 per year.}. Lately, emphasis has been put on optimizing various aspects of vessel operations, such as route and speed profile selection, which helps reduce emissions and makes shipping more cost-efficient. To be able to run such optimization, predictive models of vessels' fuel consumption are needed.
Moreover, vessel consumption models can be used to assess the global emissions of shipping. One such \textit{bottom-up} approach, where a consumption model is built for essentially every major ship in the world, is described in \cite{jalkanen09,jalkanen12, johansson2017}. The approach utilizes existing methods for ship resistance calculations, where various resistance coefficients are estimated based on different ship characteristics that can be obtained from commercial ship databases, such as \cite{fairplay}. The obtained models are used together with vessels' AIS data\footnote{AIS (Automatic Identification System) is a system through which vessels report their location and speed. The International Maritime Organization (IMO) requires that AIS is used in all ships with gross tonnage larger than 300.}. Another model-based approach for assessing emissions is described in \cite{corbett}. Both of these approaches utilize \textit{white box} modelling, which means that vessel consumption data are not used in training the models. Including such data in the modelling (a \textit{grey box} approach) will improve the accuracy of the models. Moreover, white box modelling usually neglects some major resistance factors such as wind, waves, shallow water resistance and hull fouling, which can influence the vessels' total resistance and contribute significantly to consumption.
Nowadays vessel-specific operational data related to vessels' consumption are becoming increasingly available. With such data, accurate models can be calibrated for each vessel. This enables detailed optimization of vessel operations, monitoring of the vessel's propulsion performance and other detailed ship-specific analytics. Various companies offer such solutions, including Eniram Ltd, a Wärtsilä Company, the collaboration partner in this study\footnote{Part of Wärtsilä, see \url{https://www.wartsila.com/eniram}}.
Due to the reasons listed above, the grey box approach has been selected for propulsion power modelling at Eniram. However, collecting detailed high-fidelity data is costly, which calls for methods to build models also for ships for which we have limited or no data available. For instance, high-fidelity data based on high-frequency logging onboard a ship might be available only for a small number of ships, but there might be, e.g., noon-report type consumption data available for a larger number of ships, where the crew has reported total consumption numbers over certain time intervals (e.g. 24h). Such data is being collected in increasing amounts due to EU MRV and IMO DCS regulations that require consumption reporting for vessels with gross tonnage (GT) higher than 5000. Calibrating consumption models with noon-report data is challenging and calls for statistical methods to include all available information in the resistance coefficient estimation.
The goal of this paper is to illustrate an approach where we can use the data collected from a group of ships, and generalize the information to a larger population of vessels. We use real data collected from 64 cruise ships via the Eniram platform, anonymized due to data ownership questions. Our approach is to build a hierarchical Bayesian model that encompasses all the vessel-specific parameters and coefficients, but also includes a "hyper-model" that links the coefficient values between ships together. The approach is based on the idea that the resistance coefficients between two ships of similar characteristics (e.g. type and dimension) are likely close to each other. Both the vessel specific coefficients and the hyper-model parameters defining between-ship relationships are learned from the available data. The novelty compared to the existing resistance calculations is that the consumption model parameter values are informed by the data, and can thus give more accurate predictions than the classical methods. Moreover, estimating the resistance coefficients for a ship that has only limited data available can be made more robust and stable by including information about other similar ships. For instance, using only a small amount of noon-report consumption data can lead to nonphysical resistance coefficient estimates, but including the hyper-model can help significantly, as demonstrated later in this paper. Finally, the "hyper-model" can be used to predict the consumption of a ship from which we have no data, based only on its characteristics, which enables applications such as the global emission estimation discussed above and optimization of ships operations (e.g. route, speed) at scale, without involving expensive data collection platforms on-board.
We present a prototype of the hierarchical model and show that even such simple data driven approach can compete in prediction accuracy with the classical resistance calculations. We demonstrate how the regularization effect of the hierarchical model makes the results more stable and robust compared to independent vessel-specific models. Due to simplicity and data availability, we restrict ourselves to cruise ships and propulsion power modelling. Here, the goal is to present the hierarchical modeling concept with simple examples; more work is required to increase the sophistication of the model formulations, to generalize to other ship types and to include service power models and engine models to turn power consumption into fuel consumption.
The paper is organized as follows. Section \ref{sec:general} describes the general setup and the applied models. Section \ref{sec:results} describes the numerical examples and results. Section \ref{sec:conclusions} concludes the paper.
\section{Modeling Setup}
\label{sec:general}
This section gives an overview of the two approaches used to model propulsion power consumption $P$, which is the target variable in this paper. We first briefly present a well-known White Box modeling approach that will be used in the numerical comparisons in Section~\ref{sec:results}. We then introduce the data-driven, hierarchical Grey Box model and the rationale behind it.
\subsection{White Box approach: STEAM2}
\label{sec:STEAM2}
We follow the model used in \cite{jalkanen12}, which builds on earlier work, such as the widely used Hollenbach resistance calculations, see, e.g., \cite{schneekluth98, hollenbach98}. In STEAM2, the propulsion power consumption is calculated simply via $P=R_TV$, where $V$ is vessel speed through water and $R_T$ is the total resistance. In this approach, the total resistance is approximated by
\begin{equation}
R_T = R_F + R_R,
\end{equation}
where $R_F$ is the frictional resistance between the water and the vessel's wet surface, and $R_R$ is the "residual resistance" that accounts for other hydrodynamic resistance components such as wave making (the power needed for forming the wave pattern that the vessel generates). Note that many resistance components are ignored here, such as aerodynamic resistance, which we included in the data-based model. Other ignored resistance effects include, for instance, wave breaking resistance (resistance caused by the waves that the vessel needs to propel through), shallow water resistance (additional resistance caused by sailing in shallow waters, also known as squatting, see \cite{schneekluth98} for more details) and hull roughness and biofouling. These effects could be included in the resistance calculations, but then we would need to come up with values for the corresponding resistance coefficients. This is in contrast with the data-based approach, described in the following Section, where we calibrate the coefficients from data and these extra effects could be added in a straightforward manner.
The frictional resistance is calculated as
\begin{equation}
R_F = C_F\frac{\rho}{2} S V^2,
\end{equation}
where $S$ is vessel's wet surface area, $\rho$ is water density and $C_F$ is the frictional resistance coefficient. Here, we follow the widely used ITTC approach\footnote{\url{https://ittc.info/}}, where $C_F=0.075/(\log_{10}(R_n) -2)^2$ and $R_n$ is the Reynolds number, calculated here as $R_n=V L_{wl}/\nu$, where $L_{wl}$ is the waterline length of the vessel and $\nu$ is the kinematic viscosity.
The waterline length $L_{wl}$ is typically available from various commercial ship databases such as IHS Markit \cite{fairplay}, but the wet surface area $S$ is typically unknown. In the Hollenbach calculations, $S$ is estimated using a rather complicated formula that involves various ship dimensions. The formula was obtained in \cite{hollenbach98} via regression analysis applied to model tank test results for 433 ships. The formula and the regression coefficients are reported, for instance, in \cite{schneekluth98} and are not reproduced here for brevity.
The residual resistance is calculated as
\begin{equation}
R_R = C_R\frac{\rho}{2} \left( \frac{B \cdot T}{10} \right) V^2,
\end{equation}
where $B$ is vessel breadth, $T$ vessel draft and $C_R$ the residual resistance coefficient. The residual resistance coefficient $C_R$ is obtained via the Hollenbach method using an approach similar to that used for the wet surface area; the formula and the best-fit regression coefficients are reported in \cite{schneekluth98} and are not reproduced here.
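To make the calculation above concrete, the following is a minimal Python sketch of the STEAM2-style power estimate. It is only a sketch under stated assumptions: the wet surface area $S$ and residual coefficient $C_R$ are passed in as plain arguments (in practice they come from the Hollenbach regressions), and the default density and viscosity are typical sea water values.
\begin{verbatim}
import math

def propulsion_power_steam2(V, L_wl, S, B, T, C_R,
                            rho=1025.0, nu=1.19e-6):
    """STEAM2-style propulsion power estimate P = (R_F + R_R) * V [W].

    V: speed through water [m/s], L_wl: waterline length [m],
    S: wet surface area [m^2], B: breadth [m], T: draft [m],
    C_R: residual resistance coefficient (dimensionless).
    """
    Rn = V * L_wl / nu                                # Reynolds number
    C_F = 0.075 / (math.log10(Rn) - 2.0) ** 2         # ITTC-57 friction line
    R_F = C_F * 0.5 * rho * S * V ** 2                # frictional resistance [N]
    R_R = C_R * 0.5 * rho * (B * T / 10.0) * V ** 2   # residual resistance [N]
    return (R_F + R_R) * V
\end{verbatim}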
Note that in our simple data-based model described in \cref{sec:simple}, we clump everything in front of $V^2$ into one constant parameter. We thus ignore the fact that the frictional resistance reduces a bit as a function of vessel speed. However, this effect is small compared to the overall accuracy of the models, and our goal is to show that a simple parameterization can give prediction accuracy comparable to more complex white box formulations. Moreover, the data-based approach allows us to fine tune the coefficients for each vessel instead of using fixed formulas and coefficients.
\subsection{Simplified hierarchical Grey Box model}
\label{sec:simple}
Let us assume that we have a ship-specific propulsion power model like
\begin{equation}
P_i = f(x_i, \theta_i)+\varepsilon_i, \quad i=1,..., N,
\end{equation}
where $i$ is the ship index, $P_i$ is the observed propulsion power, $x_i$ are the observed model inputs (vessel speed, wind speed and angle, etc.), and $\theta_i$ are the unknown parameters that we want to estimate (e.g. various resistance coefficients). The error term $\varepsilon_i$ denotes the discrepancy between modelled and observed propulsion power.
The traditional approach would be to estimate the parameters for each ship independently, using the data $(x_i, P_i)$ for each ship. Here, instead, we add another layer of modeling; we assume that the parameter values can be predicted with some (unknown) accuracy using various ship characteristics. Thus, we write a model for the ship-specific parameters as
\begin{equation}
\theta_i = g(c_i, \lambda)+\eta,
\end{equation}
where $c_i$ denotes the characteristics, $\lambda$ is a vector of unknown hyper-parameters and $\eta$ describes how accurate this hyper-model $g$ is in predicting the parameter values. The ship characteristics could be related, for instance, to vessel's size (e.g. weight), dimensions (width and length), construction year, or any other vessel metadata that carries some information about $\theta_i$. The general setup is illustrated in Figure~\ref{fig:hier_demo}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{hier_demo_final.png}
\caption{Illustration of the hierarchical model. Based on ship-specific power consumption data and various model inputs (black dots) the goal is to learn both ship-specific parameter values (blue error-bars) and hyper-parameters that link the between-ship parameters together. The ships in the figure are imaginary; the graph is generated with the synthetic demo code available in \url{https://github.com/solbes/stanship}.}
\label{fig:hier_demo}
\end{figure}
The goal is now to learn both the ship-specific coefficients $\theta_i$ and the hyper-parameters $\lambda$ using all the observed data $P_{1:N}$. In Bayesian terms, this amounts to finding the posterior distribution of the parameters given the measured data, $p(\theta_{1:N}, \lambda | P_{1:N})$. In addition, we would like to learn about the error terms $\varepsilon$ and $\eta$, which can be done by fixing the form of the error distributions (e.g. zero mean Gaussians) and including the parameters of the error distributions (e.g. variances of the Gaussians) to the group of parameters that are estimated from the data.
Finally, when we have learned the posterior distribution for all the parameters, we have a model where the ship-specific coefficients are informed both by their own data and by data from similar ships. Full Bayesian analysis of the parameters also lets us predict the behavior of a vessel that is not included in the training data, and gives an idea about how certain we are about the predicted behavior. This feature is missing from the classical resistance calculations.
We demonstrate the hierarchical modeling idea with a simple example. Our vessel-specific model includes only two terms; one describing hydrodynamic resistances (e.g. friction and wave making) and one for aerodynamic resistance. Propulsion power for the ship $i$ is calculated via $P_i=R_{T,i}V_i$, where $R_{T,i}$ is the total resistance, which is here approximated as the sum of hydrodynamic and aerodynamic resistances:
\begin{equation}
R_{T,i} = R_{H,i} + R_{A,i}.
\end{equation}
The hydrodynamic resistance model used here is quite crude; we simply state that the resistance increases proportionally to vessel speed squared: $R_{H,i}=a_iV_i^2$, where $a_i$ is the hydrodynamic resistance coefficient, which is assumed to be an unknown constant. In reality, the hydrodynamic resistance coefficient is not constant though; it varies as a function of vessel speed and draft, for instance. However, for demonstration purposes this approximation is adequate, especially for the cruise ships considered in this study, for which the draft variations are minimal.
For wind, we use the simple approximation that the wind resistance is proportional to relative wind speed squared. When we project the wind resistance force vector to the heading of the ship, we get $R_{A,i} = b_i\cos(\alpha_i)U_{R,i}^2$, where $\alpha_i$ is the relative wind angle, $U_{R,i}$ is the relative wind velocity and $b_i$ is the unknown wind resistance coefficient. Note that this approximation is rather crude; it assumes, for instance, that the contact area between the wind and the vessel hull is constant. Some more sophisticated wind formulas, such as those described in \cite{blendermann1996, schneekluth98}, could be taken into use, but this simple formula is sufficient for demonstration purposes again.
With these approximations, our simplified propulsion power model for ship $i$ reads as
\begin{equation}
P_{i} = a_iV_i^3 + b_i\cos(\alpha_i)U_{R,i}^2V_i + \varepsilon_i.
\end{equation}
Now, the goal is to estimate coefficients $a_i$ and $b_i$ from measured data. This could be done individually for each ship, but that could be problematic if the data is not very informative about the coefficients. That is why we include the hyper-model to tie the coefficients between ships together in one model.
The task of the hyper-model is to predict the values of the resistance coefficients based on some ship characteristics $c_i$. Here, we use the ship's total weight $w_i$ (gross tonnage, GT) as the hyper-model input, and model both coefficients as linear functions of GT:
\begin{equation}\label{eq:Aeb}
\begin{split}
a_i &= \lambda_1 + \lambda_2w_i + \eta_a \\
b_i &= \lambda_3 + \lambda_4w_i + \eta_b,
\end{split}
\end{equation}
where $\eta_a$ and $\eta_b$ are Gaussian error terms.
This is obviously not a very physical model. In more realistic settings, one could model the hydrodynamic and aerodynamic resistance using the vessel's dimensions, for instance. Here we pick GT as the input variable since it is easily available for all ships. The linear model choice comes from empirical observations; individual coefficients seem to roughly scale linearly as a function of vessel mass. Note also that the ability to use such nonphysical parameterizations can be considered as a strength of the data-based approach; we can essentially insert any parameterization and try to use data to figure out the relationships between unknown coefficients and ship characteristics.
The remaining task is to estimate all of the ship-specific resistance coefficients in one model together with the hyper-model parameters $\lambda_i$. Moreover, as the vessel-specific model and hyper-model errors, $\varepsilon_i$, $\eta_a$ and $\eta_b$ are unknown, we will estimate them from the data, as well. We will assume that the errors are normally distributed and zero mean: $\varepsilon_i \sim N(0,\sigma_{i})$, $\eta_a \sim N(0,\sigma_a)$ and $\eta_b \sim N(0,\sigma_b)$. In addition to the resistance coefficients and hyper-model slopes and intercepts, we also estimate the variances $(\sigma_i, \sigma_a, \sigma_b)$. For Bayesian statistical analysis we need to specify prior uncertainties for all the model parameters. We use uniform priors for the resistance coefficients, and uniform priors with positivity constraints for the variance parameters. With less informative data or a smaller number of groups (ships), one might need to constrain the variance parameter more. See \cite{gelman2006} about setting priors for variance parameters in hierarchical models\footnote{See also \url{https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations} about priors}.
The equations for the ship-specific models are simple and linear in parameters, but fitting the full hierarchical model is far from trivial. With 50+ ships the number of estimated parameters becomes rather high -- a few hundred -- and exploring this high-dimensional posterior distribution calls for efficient numerical methods. In recent years, flexible and openly available tools for defining and fitting such hierarchical Bayesian models have been developed, including, for instance, PyMC3 and the probabilistic programming language Stan \cite{pymc3, stan}. Here, the model fitting is carried out with the latter one, which implements a carefully tuned Markov Chain Monte Carlo (MCMC) sampler that is capable of exploring high-dimensional distributions. Model implementation with synthetic data (real data cannot be distributed) is available online\footnote{\url{https://github.com/solbes/stanship}}. The reader is referred to the experimental section for more details.
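To illustrate the structure of the full hierarchical model, below is a minimal sketch using PyMC3 (v3.x, mentioned above as an alternative to Stan; the actual fits in this paper were run in Stan). All dimensions, priors and the synthetic data generation are illustrative assumptions, not values from the real 64-ship dataset.
\begin{verbatim}
import numpy as np
import pymc3 as pm

# Synthetic stand-in data (the real data cannot be distributed).
n_ships, n_obs = 5, 400
rng = np.random.default_rng(0)
ship = rng.integers(0, n_ships, n_obs)      # ship index per observation
w = np.linspace(0.5, 1.5, n_ships)          # scaled gross tonnage per ship
V = rng.uniform(5.0, 11.0, n_obs)           # speed through water
wind = rng.uniform(-20.0, 20.0, n_obs)      # cos(alpha) * U_R^2 term
a_true, b_true = 1.0 + 0.8 * w, 0.5 + 0.3 * w
P_obs = (a_true[ship] * V**3 + b_true[ship] * wind * V
         + rng.normal(0.0, 50.0, n_obs))

with pm.Model():
    # Hyper-model: coefficients linear in gross tonnage.
    lam = pm.Normal("lam", mu=0.0, sigma=10.0, shape=4)
    sigma_a = pm.HalfNormal("sigma_a", sigma=1.0)
    sigma_b = pm.HalfNormal("sigma_b", sigma=1.0)
    # Ship-specific coefficients, partially pooled via the hyper-model.
    a = pm.Normal("a", mu=lam[0] + lam[1] * w, sigma=sigma_a, shape=n_ships)
    b = pm.Normal("b", mu=lam[2] + lam[3] * w, sigma=sigma_b, shape=n_ships)
    sigma = pm.HalfNormal("sigma", sigma=100.0, shape=n_ships)
    # Ship-specific propulsion power model.
    mu = a[ship] * V**3 + b[ship] * wind * V
    pm.Normal("P", mu=mu, sigma=sigma[ship], observed=P_obs)
    trace = pm.sample(1000, tune=1000)
\end{verbatim}
The trace contains posterior samples for both the ship-specific coefficients and the hyper-parameters, from which prior-based predictions for an unseen ship can be simulated.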
\section{Results}
\label{sec:results}
In this Section we present three numerical examples. The first illustrates how the hierarchical modelling regularizes the ship-specific parameter estimation. The second example compares the data-based model to the white box approach of Section~\ref{sec:STEAM2}. The last example demonstrates the ability to obtain uncertainty statistics for the model predictions.
We use real data obtained from the Eniram platform in the experiments to calibrate the grey box models. Propulsion power measurements are obtained from the vessels' automation systems. For vessel speed, we use speed over ground obtained from the vessel (a GPS-based measurement) augmented with ocean current forecasts to get an estimate of speed through water. For wind angle and wind speed, we use values from a weather forecast provider. Results are anonymized.
\subsection{Regularizing effect of hierarchy}
\label{sec:regularization}
Here, we demonstrate how the hierarchical modeling can help to identify the parameters of individual ships, in the case where the ship-specific data are not informative about the unknowns.
We conduct the following experiment. Instead of modelling the momentary power consumption, we attempt to emulate a setting where we only have "noon-report" type of data available; that is, we have total consumption readings over given time intervals (e.g. 24~h) and momentary vessel speed and weather data with higher resolution. We choose this setting for demonstration purposes, since such aggregated data has obviously much less information about the parameters than the momentary data, and using noon-report data to calibrate ship models is thus challenging.
We model the total consumption over a given time interval by integrating both sides of the ship-specific power models. For ship $i$ and a single 24~h period the model now reads as
\begin{equation}
\int_{24h}P_i(t)dt = a_i\int_{24h} V_i(t)^3dt + b_i\int_{24h} \cos(\alpha_i(t))U_{R,i}(t)^2V_i(t)dt + e_i,
\end{equation}
and thus the model remains linear with respect to the parameters. The training data for the ship-specific models are now the daily total consumption and the integrated model input terms. In the experiment here, we replace the integrals by 24~h averages; in this way we do not need to worry if a few data points are missing from some 24~h intervals.
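As a small sketch, the daily training features can be computed from a high-frequency log as below; the column names are hypothetical and the 24~h mean stands in for the integrals.
\begin{verbatim}
import pandas as pd

def daily_features(df):
    """Aggregate a high-frequency log (DatetimeIndex) into daily model terms.

    Expected columns (hypothetical names): P (propulsion power),
    V (speed through water), cos_alpha, U_rel (relative wind angle/speed).
    """
    terms = pd.DataFrame({
        "P": df["P"],
        "V3": df["V"] ** 3,
        "windV": df["cos_alpha"] * df["U_rel"] ** 2 * df["V"],
    })
    # 24 h means replace the integrals, so sporadic missing samples are harmless.
    return terms.resample("24H").mean().dropna()
\end{verbatim}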
Note that in practice the noon-reported consumption is not necessarily reported at even 24h intervals. In real applications there is also added complexity related to handling maneuvering periods (when consumption can be unpredictable), missing data and other consumers in addition to propulsion (service power), for instance.
We use around $100$ daily averages in the model fitting for each ship. The results for the hydrodynamic and aerodynamic coefficients for each ship with and without the hierarchy are given in \cref{fig:reg_demo}. We see that the hydrodynamic coefficients are well identified with the ship-specific data alone, and the hierarchy does not have much of an effect. However, for the wind resistance coefficients the situation is different. There is not enough information in the noisy data to calibrate the coefficients, and thus fitting ships independently yields some unrealistic values (e.g.\ close to zero) and the uncertainty is large. Adding the hierarchy pools the estimates closer to the linear prior and yields more reasonable looking estimates. We expect similar results for other resistance factors that might not be well-informed by the vessel-specific data, such as the shallow water resistance effect.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.9\linewidth]{hier_demo_avg.png}
\caption{Resistance coefficient estimates for each ship with and without the hierarchy. Vessel size index is defined so that the smallest ship has value 1.}
\label{fig:reg_demo}
\end{figure}
\subsection{Comparing the models}
In this Section the goal is to compare the presented hierarchical Bayesian grey box approach to the STEAM2 white box model discussed in \cref{sec:STEAM2}. We compare STEAM2 to two data-based models; one where the resistance coefficients are predicted using the prior model (vessel's gross tonnage), and one that uses the vessel-specific resistance coefficients obtained from the hierarchical model fit. Note that the latter would obviously not be available in cases where vessel consumption data is unavailable, but the results are presented here anyway for reference.
We begin by illustrating a few typical cases in \cref{fig:speed-power} by plotting the speed-power curves obtained with the different models on top of the measured data\footnote{The data-based methods also include wind as an input. Here, we simulate speed-power curves with the median of the wind effect $\cos(\alpha_i)U_{R,i}^2$ calculated from the data.}. In some cases, STEAM2 seems to underestimate the power consumption, e.g.\ panels a.1) and a.2) in \cref{fig:speed-power}. The prior-based model obviously has bias in several cases, but the residuals are typically smaller than for STEAM2. In some other cases STEAM2 seems to under-estimate the power at low speeds but over-estimate it at high speeds, see plots labeled with \textit{b)} in the Figure. Also in these cases the prior-based model works better in general. There are also cases where the STEAM2 model performs better than or as well as the prior-based model, see plots labeled with \textit{c)} and \textit{d)}, respectively. In all cases the ship-specific fits give the best results, which is no surprise.
\begin{figure}
\centering
\includegraphics[width = 5.7in]{res_demo_all.png}
\caption{Left: comparison of data and speed-power curves obtained with different models. For the data-based methods, the simulation uses the median wind effect calculated from the data. Right: residual densities obtained by kernel density estimation.}
\label{fig:speed-power}
\end{figure}
To give a more comprehensive view on the performance of the different models, selected residual quantiles are illustrated for all the ships and all the models in \cref{fig:res-comp}. One can clearly observe the under-estimation of power in STEAM2, whereas the prior-based model residuals are more zero-centered. Thus, the hierarchical approach where the resistance coefficients of a very simple propulsion power model are predicted based only on the vessel's gross tonnage can give more accurate results than a white box approach. Note, however, that these results hold only for cruise vessels whose size is close to the range of ship sizes included in the estimation.
\begin{figure}
\centering
\includegraphics[width = \textwidth]{res_comp.png}
\caption{Residual median (solid line) and 95\% confidence region (filled area) for the different models. The bottom right figure compares the medians of the different models.}
\label{fig:res-comp}
\end{figure}
To conclude the model comparison, we illustrate in \cref{fig:res-vs-stw} how the model residuals behave as a function of vessel speed for the different models. To do this, we fit a smooth residual vs.\ speed through water (STW) curve to the data using the LOWESS method \cite{lowess} (see the top left plot in the Figure for an illustration), and then plot the smoothed curves for all the ships in one figure.
From the Figure we can again observe the under-estimation of power in STEAM2, and also the common over-estimation of power at high speeds. Also, in line with \cref{fig:res-comp}, the data-based models perform better and have less speed-dependent bias.
\begin{figure}
\centering
\includegraphics[width = \textwidth]{res_vs_stw.png}
\caption{Residuals as a function of vessel speed. Top left: illustration of the LOWESS curve fitting to the data. Other plots: LOWESS smoothed residual vs. stw curves for different models over all ships (line color indicates a ship).}
\label{fig:res-vs-stw}
\end{figure}
One possible factor behind the under-estimation of power in the STEAM model is that it ignores many resistance components such as wind, waves, squat and hull roughness. Work is currently underway to include many of these effects in STEAM. The data-based approach does not explicitly include most of these either (only wind), but since the models are fitted to the data, they calibrate to some average contribution of these excluded resistances. For instance, the ship-specific fits calibrate to some average hull condition over the data period included in the model fitting, and the hyper-model calibrates to some average hull condition over the ships. In this sense, the data-based models do take these extra resistance factors into account in some way, and thus the comparison to STEAM is not completely fair. While the data-based models perform well here, within the set of cruise-ship examples, more research is needed for extrapolation to smaller vessels, where the implicitly included resistance factors may impact differently. The purpose of this comparison is thus not to claim that the data-based methods outperform the classical white box resistance calculations, but to demonstrate that the hierarchical modeling concept provides a viable option when enough data is available.
\subsection{Obtaining statistics for predictions}
One benefit of the Bayesian approach is that it is statistical. Model parameters are treated as random variables, and the solution is a distribution of possible parameter values instead of point estimates. This also enables assessing how reliable the estimation results and the predictions made with the model are.
We demonstrate this feature by calculating the uncertainty distributions of the speed-power curves for six selected ships using the prior-based models. Due to incomplete and noisy data, there is uncertainty in the linear hyper-model parameters. Moreover, the linear model itself has errors, the magnitude of which is also estimated in the hierarchical model. Thus, for a given gross tonnage, we can give a range of values where the true speed-power curve likely is. This is illustrated in \cref{fig:uncertainty-demo}. The obtained statistics seem consistent: the "true" speed-power curve falls within the calculated envelope. The wind effect was ignored here for simplicity.
\begin{figure}
\centering
\includegraphics[width = \textwidth]{uncertainty_demo.png}
\caption{Confidence envelopes (50\% and 95\%) for the speed-power curves (without wind effect) predicted based on the vessel's gross tonnage. The red curve comes from the ship-specific parameters and represents where the true speed-power curve roughly is.}
\label{fig:uncertainty-demo}
\end{figure}
\section{Conclusion and future work}
\label{sec:conclusions}
The purpose of this paper was to illustrate a hierarchical Bayesian modeling approach for marine vessels. As a prototype case, we selected cruise vessels and propulsion power prediction. For demonstration purposes, we used a simple two-parameter propulsion power model and a linear hyper-model based on the vessel's gross tonnage to link together the parameters between ships. We demonstrated that the accuracy of such an approach can improve upon methods based on classical white box resistance calculations.
Calibrating these models in one go becomes computationally rather expensive when the number of ships and the amount of data per ship increase. In practical implementations one likely needs to take another approach. One idea is to fit the models sequentially: first obtain the vessel-specific parameter estimates using the current hyper-model as the prior, and then update the hyper-parameters based on the most recent ship-specific estimates. This would give a scalable approximation to the full hierarchical model fitting.
Here we had only one "data type" in the estimations (either simulated noon-report data or high-frequency data). In real life, one would like to combine all data (both the high-fidelity and the noon report data) in the estimation. This would enable efficient borrowing of information from data rich vessels.
We feel that the results can be improved further by introducing more sophisticated propulsion power models and better hyper-models that include more ship characteristics into the estimation. In addition, we estimated only propulsion power; to get a complete picture of the vessel's fuel consumption (and thus emissions), we would need models for non-propulsion related power consumption (service power) and engine models to map power into fuel flow. An obvious topic to be analyzed in more detail is the impact of fouling effects. These topics, and also generalization to other ship types, are left for future work.
\section*{Acknowledgments}
This work was supported by the Academy of Finland, decision number 313827, 'Industrial Internet and Data Analysis in Marine Industries', and by the Centre of Excellence of Inverse Modelling and Imaging (CoE), Academy of Finland, decision number 312122. This work has also been supported by the European Regional Development Fund (Interreg Baltic Sea Region) project C006 CSHIPP.
\bibliographystyle{unsrt}
\section{Introduction}
Future wireless networks are characterized by low latency and high reliability. Thus, machine learning (ML) embedded in each device is an appealing solution, as each user equipment (UE) gains the capability to make decisions based on its local data, even when it loses connectivity to the wireless system. Since the data at each device is limited, the training of on-device ML models always requires data exchange among UEs \cite{9048613}.
However, directly exchanging data among UEs may cause serious risks of privacy leakage and information hijacking \cite{8274963}. To reduce this risk, federated learning (FL) has been proposed, which is a new ML framework that trains an AI model across multiple UEs holding local datasets. In detail, FL trains machine learning models locally at distributed UEs; after that, the UEs share the parameters of their locally trained models with a central server (i.e., the aggregator), where a global model is aggregated. Therefore, the UEs under the FL framework have the capability to cooperatively learn a global model without exchanging their data directly. Moreover, FL has been applied to real-world applications, including health care and autonomous driving \cite{9076082}.
Although FL shows its effectiveness in preserving privacy, it still suffers from several limitations. First, in the FL process, the single centralized aggregator is assumed to be trustworthy and to make fair decisions in terms of user selection and aggregation. However, this assumption is not always appropriate, especially in real-world operations. This is because a biased aggregator can intentionally show prejudice toward a few selected UEs, thereby damaging the learning performance \cite{9048613}. Second, the aim of FL is restricted to applications orchestrated by the centralized aggregator. As a result, the resiliency of the system depends on the robustness of the central server, and a failure of the aggregator could collapse the entire FL network. Third, although local data is not explicitly shared in the original format, it is still possible for adversaries to reconstruct the raw data approximately, especially in the aggregation process. In particular, privacy leakage may happen during model aggregation through outsider attacks. Lastly, the existing design is vulnerable to malicious clients that might upload poisonous models to attack the FL network \cite{9084352}.
As a secure technology, blockchain has the capability to tolerate single point failures with distributed consensus, and it can further implement incentive mechanisms to encourage participants to contribute effectively to the system \cite{8733825}. Therefore, blockchain has been introduced to FL to solve the limitations mentioned above. In \cite{8733825}, a blockchained FL architecture was developed to verify the uploaded parameters, and the related system performances, such as the learning delay and the block generation rate, were investigated. Moreover, work \cite{8843900} proposed a privacy-aware architecture that uses blockchain to enhance security when sharing parameters of machine learning models with other UEs. In addition, the authors in \cite{8905038} proposed a high-level but complicated framework enabling encryption during model transmission and providing incentives to participants, and the work \cite{SHARMA2020102220} further applied this framework in the defensive military network. With the advanced features of blockchain such as tamper-proofness, anonymity and traceability, an immutable audit trail of ML models can be created for greater trustworthiness in tracking and proving provenance \cite{9051184}. In addition, security and privacy issues of the decentralized FL framework are investigated in \cite{9134967,inproceedings1,8843900}, which delegate the responsibility of storing ML models to a trusted community in the blockchain.
However, the assumption of a trusted community may incur the same privacy issue when ML models are transmitted over the air, and the credibility of this community also needs further verification. In addition, these works have either not clearly clarified and fully addressed the attendant issues, such as the long learning delay and the impact of blockchain forking on FL, or are difficult to apply.
Thus, in this work we have fully detailed the whole process of blockchain assisted decentralized FL (BLADE-FL), which has the capability to overcome the single point of failure problem. In addition, we further investigate the residual issues that exist in the BLADE-FL framework, and provide related solutions.
In detail, we present the design of the BLADE-FL framework in Sec.~II, and residual issues including privacy, resource allocation and lazy clients are investigated in Sec.~III. In Sec.~IV we provide extensive experimental results to show the effectiveness of the corresponding solutions. Finally, future directions and conclusion are drawn in Sec.~V.
\section{The Framework of BLADE-FL}
With the aid of blockchain, we aim to build a secure and reliable FL framework. To ensure this, the model updating process of FL is decentralized to each participating client, which makes it robust against the malfunction of traditional aggregators.
In this article, we detail the BLADE-FL framework, to achieve a dynamic client selection and a decentralized learning aggregation process.
The BLADE-FL framework is composed of three layers. In the network layer, the network features a decentralized P2P network that consists of task publishers and training clients, wherein a learning mission is first published by a task publisher, and then completed by the cooperation of several training clients. Different from previous works, in which model aggregation happens in a trusted community in the blockchain \cite{8733825,8843900,8905038,SHARMA2020102220,9051184,9134967,inproceedings1}, we realize a fully decentralized framework in which each client both trains ML models and mines blocks for publishing aggregation results. In the blockchain layer, each FL-related event, such as publishing a task, broadcasting learning models, and aggregating learning results, is tracked by the blockchain. In the application layer, the SC and FL are utilized to execute the FL-related events. Next, we will detail the working flow and key components of the BLADE-FL framework.
\subsection{Working Flow}
As shown in Fig.~\ref{system}, the working flow of the proposed framework operates in the following steps:
\begin{itemize}
\item \textbf{Step 1}: Task publishing and node selection. A task publisher broadcasts a FL task by deploying a SC over the blockchain network. In the deployed SC, the task publisher needs to deposit a reward as a financial incentive for the learning task. The SC selects available training nodes to participate in this learning task.
\item \textbf{Step 2}: Local model broadcast. Each training client runs its local training by using its own data samples and broadcasts its local updates and the corresponding processing information (e.g., computation time and local data size) over the P2P network. Privacy leakage may happen during this transmission, and we further investigate this issue in Sec.~III-A.
\item \textbf{Step 3}: Model aggregation. Upon receiving the local updates from other training nodes before a preset time-stamp, each client updates the global model according to the aggregating rule defined in the SC.
\item \textbf{Step 4}: Block generation. Each training client changes roles from trainer to miner and begins mining until either it finds the required nonce or it receives a generated block from other miners. The learning results are stored in the block as well. When one miner generates a new block, other clients verify the contents of this block (e.g., the nonce, the state changed by SC, the transactions, and the aggregated model). The resource allocation issue happens in each client in this step, and related discussions will be given in Sec.~III-B.
\item \textbf{Step 5}: Block propagation. If a block is verified by the majority of clients, this block will be added on the blockchain and accepted by the whole network. The lazy client issue happens in this step and we further investigate it in Sec.~III-C.
\item \textbf{Step 6}: Global model download and update. Each training client downloads the aggregated model from the block and performs updates before the next round of learning.
\item \textbf{Step 7}: Reward allocation. The SC deployed by the task publisher rewards the training clients according to their contributions in the learning task.
\end{itemize}
Before delving into each step, we elaborate on key designs in the BLADE-FL as follows.
\begin{figure*}
\centering
\includegraphics[width=0.77\textwidth]{workflow1.pdf}
\caption{The working flow of the blockchain assisted decentralized federated learning (BLADE-FL)} \label{system}
\end{figure*}
\subsection{Smart Contract Design}
Smart contracts are self-executing contracts defining rules for negotiating, verifying the fulfilment of rules, and executing the agreement using formal code.
The BLADE-FL framework relies on SC to enable trusted dynamic client selections in terms of desired distributed learning services, without relying on a centralized authority.
Moreover, BLADE-FL enables all clients to verify the learning results that are recorded on the blockchain, whereby distributed clients can be incentivized to participate and untrusted learning models can be detected. Based on the verification results, the reputation of each distributed client can be automatically updated, making the selection of learning nodes more reliable.
In addition, the design of the SC in BLADE-FL also includes the aggregating rules, and thus provides fair and open reward feedback for participating clients. The SC in BLADE-FL enables three main functions as follows:
\textbf{Function 1}: Learning task publishing. A task publisher broadcasts a FL task through SC to all users. The SC contains the task requirements (e.g., the data size, training accuracy, latency, etc.), the aggregating rules and rewards paid by the task publisher.
\textbf{Function 2}: Dynamic bidding for requests and automatic incentive. Distributed training nodes, acting as auctioneers, bid for the task by replying their costs and capabilities. Note that in order to enforce accountability, each training client has to stake a deposit to the SC. The task replies from training nodes are recorded on the blockchain by the SC. Then the SC selects training clients with more valuable replies (e.g., higher capability and lower cost) as the bid winners to jointly execute the FL task. The training clients that lose the bidding will reclaim their deposits from the SC, while the deposits made by winners will be automatically refunded if the learning results are verified to be trustworthy afterward.
\textbf{Function 3}: Learning results aggregation and rewards feedback. Before generating a new block, each client will aggregate the uploaded models according to the aggregating rule in SC, in which the contribution of each one in the aggregated model is also recorded in the newly generated block. Then SC is automatically triggered to reward the miner that helps aggregate the learning model and the training clients that contribute to the FL process.
\subsection{The BLADE-FL Design}
The main purpose of the BLADE-FL is to enable trusted cooperative machine learning among distributed nodes. The decentralized accountability enables all miners to verify the quality of uploaded models that are recorded on the blockchain. In addition, distributed training nodes can be motivated to participate in the FL process, and misbehaving ones can be recognized by the low quality of the FL services they provide. The key steps are illustrated as follows:
\textbf{Local model update and upload}: Training nodes are bid winners with capable devices and available sets of data samples. In each learning iteration, each training node updates a local ML model in a parallel manner by using the global model and its local data samples, and broadcasts its local model in the network. This article considers that local updates can be received by all miners through the gossip protocol \cite{jelasity2011gossip} over the P2P network. In this context, the aggregation process of the traditional FL is decentralized to the clients, each of which stores the uploaded models in its own model pool.
\textbf{Model aggregation}: After collecting the uploaded models in the pool, each client calculates the global model updates according to the aggregating rule in the SC. In the proposed architecture, the clients are designed to aggregate the learning parameters truthfully through a distributed ledger. Similar to the prevailing block structure in \cite{8843900}, each block in the ledger consists of a body and a header. Concretely, the body stores the local model updates, the corresponding processing information (such as the local data size and computing time of the associated training node), and the aggregated learning parameters. The header contains a pointer to the previous block, the block generation rate, and the output value, such as the proof of work (PoW), in the consensus protocol.
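The aggregating rule itself is defined in the SC. As a concrete illustration, assuming a data-size-weighted federated averaging rule (a common choice, not mandated by the framework), the local aggregation step could look as follows:
\begin{verbatim}
import numpy as np

def aggregate(updates, data_sizes):
    """Data-size-weighted FedAvg over the local model pool.

    updates: list of 1-D parameter vectors from the training nodes,
    data_sizes: local sample counts reported with each update.
    """
    weights = np.asarray(data_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, np.asarray(updates)))
\end{verbatim}
Since every client runs the same deterministic rule on the same model pool, the aggregated results can be cross-checked during block verification.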
\textbf{Model recording and publishing}: The clients record the aggregated models in their blocks and publish the recorded models by broadcasting the generated block to the whole network. The blocks can be generated by using distributed or lightweight consensus protocols, such as PoW, proof of stake (PoS), delegated PoS (DPoS), etc. \cite{8869754}. In this article, we consider PoW due to its strong security over decentralized networks. This article uses a synchronous schedule to ensure that all miners start mining at the same time.
Once a client finds the required hash value, its candidate block becomes a new block, and the block generation rate is controlled by the PoW difficulty. Then, this generated block is broadcast to all other miners in the framework. All the other miners need to verify the nonce and the aggregated results contained in this block. For example, clients can compare the aggregated results with the one in the published block or use a public testing dataset to justify the effectiveness of the uploaded models. If the verification result is correct, other clients will accept it as a legal block and record it; otherwise, they will discard this generated block and continue to mine on the previous legal block.
\textbf{Reward allocation}:
The task publisher provides learning rewards for the participating training nodes, and the volume can be proportional to the size of the training data. It is noted that the reward mechanism can be further refined by jointly considering the data size and the quality of the data samples. In this case, clients are responsible for verifying the trustworthiness of local updates after aggregation,
to address the situation in which untruthful UEs may exaggerate their sample sizes with abnormal local model updates. Specifically, when clients calculate the rewards for each training node, they can give scores/reputations to the training nodes based on the model qualities. In the next aggregation, nodes with low scores will be given smaller weights, and can be identified and gradually ignored during the learning. In practice, this can be guaranteed by Intel's software guard extensions, allowing applications to be operated within a protected environment, which has already been used in blockchain technologies \cite{8048837}.
In addition, miners can also obtain rewards from mining and aggregating models, which can be treated as a gas tax in the traditional blockchain.
\section{Unique Issues and Potential Solutions}
In this section, we describe three critical issues that the proposed framework may be confronted with, namely privacy, resource allocation, and lazy clients.
\subsection{Privacy}
In the BLADE-FL, the roles of each client include training and mining. To aggregate the global model, the trained local models are published among clients, which raises privacy issues. Previous works \cite{8733825,8843900,8905038,SHARMA2020102220,9051184,9134967,inproceedings1} usually artificially assign the training and mining tasks to two disjoint sets of clients, and widely assume that the miners are always trustful. However, if there exists an eavesdropper in the wireless environment, the published information of local models can cause privacy leakage. To address this, a differentially private mechanism can be implemented at the client side; a minimal sketch of the noise-injection step is given after the following list. In detail, the key steps are as follows:
\begin{itemize}
\item Each client sets up a self-required privacy level for itself before training. For example, the $i$-th client may have a local privacy budget $\epsilon_i$. Note that a small value of $\epsilon_i$ represents a high local privacy level, and will induce more additive noises on the parameters.
\item To achieve local differential privacy (LDP), each client will add random noise, following a certain distribution, to the uploaded models. For example, a random Gaussian noise $N(0, \sigma^2)$ or a Laplace noise $\mathrm{Lap}(\lambda)$ will be added. Note that a large noise power, i.e., a large $\sigma^2$, implies a high privacy level.
\item Upon receiving the perturbed models, all clients can aggregate the global model locally, and store it in the generated block. Because of the injected noise, the learning convergence as well as the system performance will be negatively affected. A tradeoff between the privacy requirement and the learning performance needs further investigation. In addition, a non-uniform allocation of the additive noise over communication rounds may improve the learning performance. For example, a decay rate for the noise power can be applied when the learning accuracy between two adjacent communication rounds stops improving \cite{9347706}.
\end{itemize}
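The following is a minimal sketch of the per-client noise-injection step, assuming the standard Gaussian mechanism with update clipping; the $(\epsilon, \delta)$ calibration is the textbook bound (valid for $\epsilon < 1$) and all names are illustrative:
\begin{verbatim}
import numpy as np

def gaussian_mechanism(update, clip_norm, epsilon, delta):
    """Clip the local update to L2 norm clip_norm, then add Gaussian
    noise calibrated for (epsilon, delta)-differential privacy."""
    norm = max(np.linalg.norm(update), 1e-12)
    clipped = update * min(1.0, clip_norm / norm)  # bound the sensitivity
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)
\end{verbatim}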
\subsection{Computing Resource Allocation}
Since the computation resource is limited at each client, each participant needs to appropriately allocate the resources for local training and mining to complete the task. Specifically, more computing resources can be devoted to either faster model update or block generation. To meet the specific task requirements, such as learning difficulty, accuracy, and delay, each node optimizes its allocation strategy to maximize its reward under constraints of local capability.
According to the constraints, the computation resource allocation can be formulated as an optimization problem under an accurate mathematical model; a small sketch of the timing budget is given after the following list. In detail:
\begin{itemize}
\item The block generation rate is determined by the computation complexity of the hash function and the total computing power of the blockchain network (i.e., total CPU cycles). The average CPU cycles required to generate a block can be defined as $kc_{\textrm{B}}$, where $k$ denotes the mining difficulty, and $c_{\textrm{B}}$ denotes the average number of total CPU cycles to generate a block. Thus, the average generation time of a block ($t_{\textrm{B}}$) can be expressed as $\frac{kc_{\textrm{B}}}{Nf}$, where $N$ is the number of clients, and $f$ denotes the CPU cycles per second of each client.
\item The training time consumed by each training iteration $t_{\textrm{T}}$ can be expressed as $\frac{|D|c_{\textrm{T}}}{f}$, where $|D|$ denotes the number of samples of each client, and $c_{\textrm{T}}$ denotes the number of CPU cycles required to train one sample.
\item Considering that a typical FL learning task is required to be accomplished within a fixed duration of $T_{\textrm{Sum}}$, it should satisfy that $K (\tau t_{\textrm{T}}+t_{\textrm{B}})\leq T_{\textrm{Sum}}$,
where $K$ denotes the total number of communication rounds, and $\tau$ denotes the number of local training epochs. Thus, to achieve a required learning performance, an appropriate choice of the communication round $K$ should be investigated under a certain ratio between the computing and mining time.
\end{itemize}
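The timing budget above can be made concrete with the following minimal sketch (all numeric inputs are hypothetical):
\begin{verbatim}
def round_times(k, c_B, c_T, D, N, f):
    """Average block generation time and per-epoch training time."""
    t_B = k * c_B / (N * f)  # PoW difficulty k, total network power N*f
    t_T = D * c_T / f        # |D| local samples, c_T CPU cycles per sample
    return t_B, t_T

def max_rounds(T_sum, tau, t_T, t_B):
    """Largest K satisfying K * (tau * t_T + t_B) <= T_sum."""
    return int(T_sum // (tau * t_T + t_B))
\end{verbatim}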
\subsection{Lazy nodes}
As the verification is processed locally, a lazy client may not perform local learning and may directly copy uploaded parameters from other clients to save its computing resources. As a result, such a client can devote more resources to mining and reap mining rewards with a higher probability. However, this action significantly degrades the network learning performance. To investigate the effect of lazy nodes on the system performance, we provide related experimental results in Sec.~IV-D.
To address the lazy client issue, we can implement a signature process at each client, based on a pseudo-noise (PN) sequence. Note that the signature mechanism here is completely different from a digital signature. What we need is a signature that is resilient to noise perturbation, because the lazy clients are likely to perturb the plagiarized local models to hide their misbehavior. This process introduces a negligible burden to the system but provides a high detection accuracy. In detail, the steps are as follows (a sketch of the embedding and detection steps is given after the list):
\begin{itemize}
\item Before broadcasting the local updates, each client will produce a PN sequence of length $L$, where $L$ is usually a very large number (larger than the number of model parameters); the client uses as many values of the sequence as there are model parameters and adds them to the updates. This PN sequence has a high self-correlation coefficient and is hard to detect or reproduce by other clients. At the least, the complexity of detecting the PN sequence should be much larger than that of training the neural network, so as to deter attempts to discover the used PN sequence.
\item Upon receiving local updates from the other clients, each client will use its own PN sequence to check the correlation coefficient with the updates. If there exist high peaks in terms of the cross-correlation coefficient, then a lazy client is detected.
\item Once a lazy client is recognized by a local client, this client can publish the previously used PN sequence to others and invite other honest clients to verify this process. Then any future updates from the lazy client might be discarded as a punishment.
\end{itemize}
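A minimal sketch of the embedding and detection steps, assuming a $\pm 1$ PN sequence, is given below; the detection threshold is illustrative, exploiting the fact that the correlation with an unsigned update concentrates around $1/\sqrt{L}$, which is tiny for $L$ in the tens of thousands:
\begin{verbatim}
import numpy as np

def embed_signature(update, pn, snr_db):
    """Add a +/-1 PN sequence scaled so the ratio of update power
    to signature power equals snr_db."""
    scale = np.sqrt(np.mean(update ** 2) / 10 ** (snr_db / 10))
    return update + scale * pn

def looks_plagiarized(received, my_pn, threshold=0.1):
    """Normalized cross-correlation peaks when my signature is
    embedded in someone else's upload."""
    r = np.dot(received, my_pn) / (np.linalg.norm(received)
                                   * np.linalg.norm(my_pn))
    return abs(r) > threshold
\end{verbatim}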
\section{Experimental Results and Possible Solutions}
In this section, we provide experimental results to illustrate the issues arising from the multi-functional clients (which both train and mine) in the proposed BLADE-FL system, and the effectiveness of the corresponding solutions.
\subsection{System setup}
For each experiment, we first divide the original training data into non-i.i.d.\ training sets, locally compute a stochastic gradient descent (SGD) update on each dataset, and then aggregate the updates to train a globally shared classifier. We evaluate the prototype on the Fashion-MNIST and Cifar-10 datasets.
In the following results, we collect 20 runs for each experiment and report the average results.
For the blockchain setup, we set the total computation resource to $T_{\textrm{Sum}}=200$ for each training node, and the total number of clients is set to $N=20$. In each communication round, each client uses $t_\textrm{B}$ time resources to generate a block and $t_\textrm{T}$ time resources to run one learning epoch, where $t_\textrm{B}=2$ for all experiments. Let $\theta=t_\textrm{T}/t_\textrm{B}$; a larger $\theta$ implies that the client devotes more computing resources to learning in each communication round.
\subsection{Investigation on the local differential privacy}
In this subsection, we apply local differential privacy at each client by adding random Gaussian noise to the uploaded models in each communication round. The testing accuracies on the Fashion-MNIST and Cifar-10 datasets are plotted in Fig.~\ref{p} with respect to different privacy levels $\epsilon$. In addition, an adaptive noise decaying method, which decreases the noise power when the accuracy stops increasing, is compared with the constant one.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{privacy.pdf}
\caption{The learning performance with respect to different privacy levels} \label{p}
\end{figure}
As can be observed in this figure, the system achieves a higher performance with a larger value of $\epsilon$, which corresponds to a weaker privacy protection, and the adaptive method can further improve the learning performance under the same level of privacy protection.
\subsection{Investigation on the resource allocation}
In this subsection, we mainly present the results for the resource allocation; the training loss values with different ratios ($\theta$) for both datasets are presented in Fig.~\ref{rs}.
\begin{figure}
\centering
\subfigure[Fashion-Mnist]{\label{loss1}
\includegraphics[width=0.23\textwidth]{loss1.pdf}}
\subfigure[Cifar-10]{\label{loss2}
\includegraphics[width=0.23\textwidth]{loss2.pdf}}
\caption{Learning performance of different total communication rounds under different resource allocation ratios} \label{rs}
\end{figure}
As can be found in Fig.~\ref{rs}, the system performances for different ratios are investigated with an increasing number of total communication rounds. In detail, we find that there exists an optimal total number of communication rounds ($K$) for each computing ratio $\theta$. For example, the smallest training loss value can be obtained if clients end learning in 14 communication rounds with 15 learning epochs in each round when $\tau=1$ on the Fashion-MNIST dataset. Moreover, for different computing ratios, the optimal loss value tends to be different. This is due to the fact that the optimal number of local learning epochs differs for various $\theta$. In addition, similar trends can be found on the Cifar-10 dataset.
\subsection{Investigation on the lazy nodes}
In this subsection, we investigate the impact of lazy clients on the proposed framework. We use the signal-to-noise ratio (SNR) to denote the ratio between the power of the original model parameters and that of the injected PN sequence, and Table~\ref{pn} reports the detection rate of lazy clients under different SNRs. If the high peaks in terms of the cross-correlation coefficient surpass a predefined threshold, we identify the corresponding client as a lazy one. We generate a PN sequence of length $2^{15}$ and add the first $25400$ values to the parameters. From the results with different SNRs, the detection performance is remarkable and we obtain a nearly $100\%$ rate of recognizing the lazy clients when SNR $=3$~dB. Then Fig.~\ref{pnn} shows the PN sequence protection performance (SNR $=6$~dB) when there are $30\%$ (6) lazy clients in each communication round. As can be found in this figure, the system performance with this percentage of lazy clients degrades sharply, i.e., by $22.1\%$ and $19.6\%$ on the Fashion-MNIST and Cifar-10 datasets, respectively. In addition, the proposed PN sequence protection method achieves $18\%$ and $13.8\%$ performance gains on the respective datasets.
\begin{table}
\centering
\caption{The detection rate with different PN sequence powers in the Fashion-Mnist and Cifar-10 datasets}\label{pn}
\begin{tabular}{|c|c|c|c|}
\hline
{Signal to Noise Ratio} & 9 dB & 6 dB & 3 dB \\
\hline
Fashion-Mnist & 0.931 & 0.989 & 0.999 \\
\hline
Cifar-10 & 0.925 & 0.975 & 0.996\\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{pn.pdf}
\caption{Learning performance with/without lazy clients detection} \label{pnn}
\end{figure}
\section{Future Directions and Conclusion}
In this article, we have reviewed the weaknesses of FL and investigated a blockchain assisted decentralized FL framework, called BLADE-FL. We then showed that BLADE-FL can effectively address the potential issues of the traditional FL system, especially the single point of failure. In addition, we have investigated newly arising issues including privacy, resource allocation and lazy clients. Lastly, we have provided possible solutions and experimental results for these issues, which give guidelines for the design of the BLADE-FL framework. As future directions, asynchronous and heterogeneous settings for clients with different capabilities, such as computing capability, training data size and transmission diversity, as well as smart contract designs that provide reasonable reward allocation for training and mining, can be considered in the system. In addition, lightweight model transmission using quantization and sketching may be an alternative way to reduce the transmission cost.
\bibliographystyle{IEEEtran}
\section{\bf Definitions}
Multiplicity distributions (MD) of particles produced in high energy
collisions are the most typical and widely discussed characteristic of
the interaction dynamics. In a condensed form MD provide information about
the fluctuations of energy spent for multiple particle production during
a collision.
The goal of the present paper is to review briefly the new features of the
multiplicity distributions predicted by higher order QCD.
There are two complementary ways of dealing with multiplicity fluctuations:
-- studying the distribution $P_n\,=\,\sigma_n/\sigma$, i.e.\ the probability to produce $n$ particles in an event, or
-- measuring the inclusive multiplicity correlators.
In practice, one often uses the normalized factorial moments $F_q$ and cumulants $K_q$
(for a review see \cite{DW-D-K}) defined as
\begin{equation} F_q\,=\,\sum^{\infty}_{n=0}n(n-1)...(n-q+1)P_n/\langle n\rangle^q
\,=\,\frac{\langle n(n-1)...(n-q+1)\rangle}{\langle n\rangle^q}\,,
\end{equation}
\begin{equation} K_q\,=\,F_q\,-\,\sum^{q-1}_{m=1}C^{m}_{q-1}K_{q-m}F_m\,.
\end{equation}
Here $C^m_q\,=\,\frac{q!}{m!(q-m)!}$ are the binomial coefficients and
$F_0\,=\,F_1\,=\,K_1\,=\,1$.
These moments have an important advantage over the ordinary moments \cite{BP}.
The average shown in (1) denotes the mean value of the corresponding
expressions over the available set of experimental events. In experiment this
averaging takes into account both statistical and dynamical effects. If one
assumes that random fluctuations due to the limited number of detected particles
are described by the Poissonian distribution, then the total average of the
factorial moments
is equivalent to the dynamical average of the usual moments \cite{BP}.
In the language of Feynman diagrams, $F_q$ corresponds to the set of all graphs
while the
cumulants $K_q$ describe the connected graphs only. The cumulants provide knowledge
about
the ``true'' correlations, non-reducible to products of correlations of lower orders.
At asymptotic energies the normalized factorial moments (as well as the ordinary ones)
do not depend on energy and are functions of their rank only. The higher
the rank of the moment, the more sensitive $F_q$ and $K_q$ are to the ``tail''
of the MD
at large $n$. A steeper decrease of the distribution at large $n$ leads to
smaller values of the high rank factorial moments.
In a theoretical analysis, instead of studying the numerical series $P_n$
it is more convenient
to analyse the function ``generating'' it, namely the generating
function (GF). $F_q$ and $K_q$ are easily calculated if the generating function
$G(u)$ is known \cite{DW-D-K}
\begin{equation} G(u)\,=\,\sum^{\infty}_{n=0}P_n(1\,+\,u)^n\,.
\end{equation}
Then
\begin{equation} P_n\,=\,\frac{1}{n!}\frac{d^nG(u)}{du^n}\biggl |_{u=-1}\,,
\end{equation}
\begin{equation} F_q\,=\,\frac{1}{\langle n\rangle^q}\frac{d^qG(u)}{du^q}\biggl |_{u=0}\,,
\end{equation}
\begin{equation} K_q\,=\,\frac{1}{\langle n\rangle^q}\frac{d^q\ln G(u)}{du^q}\biggl
|_{u=0}\,.
\end{equation}
Thus, the knowledge of the GF gives us the possibility to calculate both the
multiplicity distribution and the cumulant and factorial moments, i.e. (3)-(6)
demonstrate the mathematical equivalence of the descriptions of MD by the functions
$P_n$, $F_q$ and $K_q$. In \cite{Dr1} it has been proposed to use the ratio of
cumulant to factorial moments, $H_q\,\equiv\,K_q/F_q$, which behaves in a qualitatively
different way for various distributions and is more sensitive to specific features
of $P_n$ which are invisible when $P_n$ or even $F_q$ are plotted directly (see Sec.\,3).
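As an illustration, the following short Python sketch computes $F_q$, $K_q$ and $H_q$ directly from a given multiplicity distribution $P_n$ using (1) and (2); the maximal rank is an arbitrary choice. For the Poisson distribution it reproduces $H_q=0$ for $q\geq 2$.
\begin{verbatim}
import numpy as np
from math import comb

def moments(P, qmax=16):
    """Normalized factorial moments F_q, cumulants K_q and the ratio
    H_q = K_q/F_q of a multiplicity distribution P_n, following the
    definitions (1)-(2)."""
    P = np.asarray(P, dtype=float)
    P = P / P.sum()                      # renormalize the distribution
    n = np.arange(len(P))
    nbar = (n * P).sum()
    F = np.ones(qmax + 1)
    for q in range(1, qmax + 1):
        fall = np.ones(len(P))
        for i in range(q):               # falling factorial n(n-1)...(n-q+1)
            fall *= np.maximum(n - i, 0)
        F[q] = (fall * P).sum() / nbar ** q
    K = np.ones(qmax + 1)
    for q in range(2, qmax + 1):         # recursion (2)
        K[q] = F[q] - sum(comb(q - 1, m) * K[q - m] * F[m]
                          for m in range(1, q))
    return F, K, K / F
\end{verbatim}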
\section{\bf Some properties of the multiplicity
distributions }
In pre-QCD times Koba, Nielsen and Olesen published the paper \cite{KNO}
with a hypothesis about the scaling properties of the multiplicity distributions at asymptotic
energies (the KNO scaling). If $z$ is the scaled multiplicity,
$z\,=\,n/\langle n\rangle$, then the KNO scaling implies a universal form
$$\psi (z)\,=\,\langle n\rangle\,P_n$$
of the multiplicity distribution. During the last 30 years the KNO-like
behaviour of
MD was experimentally confirmed in various types of high energy particle
production processes, except for the data on proton-antiproton interactions at
the highest energies $\sqrt{s}\,=\,$546 and 900 GeV obtained by the
UA5 collaboration
\cite{UA5} at CERN.
The negative binomial distribution (NBD)
$$G(u)\,=\,\biggl (1\,-\,\frac{u\langle n\rangle}{k}\biggr )^{-k}\,,$$
\begin{equation}
P_n\,=\,\frac{(n+k-1)!}{n!(k-1)!}\biggl (\frac{\langle n\rangle/k}
{1+\langle n\rangle/k}\biggr )^{n}\biggl (1\,+\,
\frac{\langle n\rangle}{k}\biggr )^{-k}
\end{equation}
$$F_q\,=\,\frac{(k+1)\cdot\cdot\cdot(k+q-1)}{k^{q-1}}\,,\ \ \ \
K_q\,=\,\frac{(q-1)!}{k^{q-1}}$$
\begin{equation}
H_q\,=\,\frac{(q-1)!}{(k+1)\cdot\cdot\cdot(k+q-1)}
\end{equation}
is another example of a distribution which is in good agreement with
experimental data
in full phase space and in smaller phase space domains.
The NBD depends on two parameters,
the average multiplicity $\langle n\rangle$ and a positive
parameter $k$ describing the shape of the distribution.
Here we will mention only two classes of mechanisms proposed to
generate the NBD, (partial) stimulated emission \cite{Car,GV} and cascading
\cite{GV}.
One feature of $H_q$ for the NBD is that it is always positive and tends to zero
with a $q^{-k}$ behaviour at high ranks. For the Poisson distribution $H_q$
is identically equal to zero (except for $H_1\,=\,1$).
\section{What does QCD tell us about the multiplicity distributions ?}
The KNO hypothesis was strongly supported by QCD when the equations for
the generating function were solved in the so-called double logarithmic
approximation (DLA). DLA happens to be too crude, however, for making reasonable
predictions even at asymptotic energies: the predicted KNO shape of
the distribution appeared to be much wider than the experimental one. On the qualitative
level, DLA can be thought to overestimate cascading processes, ignoring
completely energy-momentum conservation since the energy of the radiating
particles remains unchanged after a soft gluon emission. Therefore
DLA apparently overestimates the gluon multiplicity
because the parton characteristic energy is higher and the partons
multiply more actively. Taking into account higher order perturbative
corrections leads to a more accurate control over the parton splitting
processes and energy conservation.
Such an approach has been realized (see \cite{BCM}, \cite{DKMT})
in the framework of the modified leading logarithmic approximation
(MLLA) by a generalization of the standard
LLA scheme following the logic of the famous
Dokshitzer-Gribov-Lipatov-Altarelli-Parisi approach and including the exact
angular ordering (AO) (instead of the strong AO within DLA). Thus the system of
the MLLA integro-differential equations for the quark and gluon GF has been
derived.
A recent series of publications \cite{Dr1}, \cite{YuD}--\cite{DH} was
devoted to solving these equations in the case of $e^+e^-$-collisions, taking
into account different next-to-next-to-leading (NNL) effects. The corresponding
corrections can be looked upon \cite{CT} as being due to a more
accurate account of
energy conservation in the course of parton splitting.
For example, the
approximation used in \cite{YuD} allowed, in the framework of
gluodynamics,
the derivation of
analytical expressions for the asymptotic behaviour of the factorial moments
and the KNO function, which are in better agreement
with the data, by reducing substantially
the width of the theoretical distribution. Cumulant and factorial moments of the multiplicity distributions in
perturbative gluodynamics have been calculated in \cite{Dr1}, \cite{DN}.
Accounting for the degrees of freedom associated with quarks
\cite{DLN} does not change the essential
qualitative features of $F_q$ and $K_q$ and influences $H_q$ only weakly.
The exact solutions of the QCD equations for the quark and gluon GF
are obtained for the case of
fixed coupling in \cite{DH}.
The ratio $H_q$ is more sensitive than $F_q$ to the form
of $P_n$ at large $n$ (see Fig. 1). It was shown in \cite{DLN}
that the predictions for $F_q$ shown in Fig. 1a have qualitatively the same
behaviour and are very close
to each other for $q\leq$\,10. However, $H_q$ (Fig. 1b) demonstrates a much stronger
sensitivity
to the assumptions used. The most typical feature of the ratio $H_q$ predicted by
QCD \cite{DN} is its quasi-oscillating form with a changing sign (Fig. 2).
Such an oscillating behaviour of $H_q$ is a specific property of higher order QCD.
A less complete account of the nonlinearities in the equation for the GF leads \cite{Dr1},
\cite{DLN} only to one minimum with a very small
value of $H_q$ (the solid line in Fig. 1b).
The results of \cite{Dr1} have initiated a search for the peculiarities of $H_q$
in the experimental data. According to the $H_q$ measurements from multiplicity distributions
in $e^+e^-$--annihilation in the energy range from 22 to 91 GeV, and in
$hh$--collisions in the energy range from 24 to 900 GeV, made in \cite{Gia},
its behaviour corresponds to
\newpage
\vspace*{1.0cm}
\hspace{-0.5cm}
\begin{minipage}[t]{7.5cm}
\setlength{\unitlength}{\textwidth}
\begin{picture} (0.5,0.95) (0.1,0)
\mbox{\epsfig{file=fig1s.eps,width=1.20\textwidth,height=1.35\textwidth}}
\end{picture}
{Fig.1. a) The factorial moments for different QCD distributions (\cite{Dr1} -- solid line;
\cite{YuD} -- dotted line)
and for the NBD with $k$=7.6\,(dashed line). b) The ratio $H_q$
for the same distributions as in a).}
\end{minipage}
\hspace{4mm}
\begin{minipage}[t]{7.5cm}
\setlength{\unitlength}{\textwidth}
\begin{picture} (1.0,0.95) (0.02,0)
\mbox{\epsfig{file=fig2s.eps,width=1.20\textwidth,height=1.10\textwidth}}
\end{picture}
{Fig. 2. The ratio $H_q$ predicted by QCD \cite{DN}.}
\end{minipage}
\vspace*{1.0cm}
\noindent
the predictions of higher order QCD. A few examples are presented in Fig. 3.
It is a surprise to us that
the theoretical results,
obtained for hard processes at asymptotic energies, are in qualitative
agreement with experimental data at low and high energies,
both for $e^+e^-$ processes and soft hadronic collisions.
The behaviour of $H_q$ for the NBD shown in Fig. 1, where $H_q$ falls monotonically but
stays always positive, tending to zero at large ranks $q$, is not compatible
with the results shown in Fig. 3.
Therefore, despite the fact that the NBD fits experimental MD very well,
it is not appropriate for the complete description of
MD in particle production processes, as claimed in \cite{DLN}
and \cite{Gia}. However, as will be seen from Sec. 5, after modifications the
NBD is able to generate an oscillating $H_q$ as well.
\section{ Monte Carlo Generators}
All Monte Carlo (MC) generators for high energy physics \cite{Sj1} and, in
particular, those which simulate deep inelastic scattering (DIS) \cite{HERA},
are based on the leading logarithm (LL) picture with two body parton splitting
$a\rightarrow b\,+\,c$. However, as mentioned in the previous section,
higher orders in perturbative QCD are necessary for a proper
description of
multiparticle production at high energies.
At present this can only be achieved in the generators
through approximate methods implemented in different
QCD cascades, e.g. the Lund parton shower (PS) \cite{AGIS},
the color
\newpage
\vspace*{3.0cm}
\hspace{-0.5cm}
\begin{minipage}[h]{7.5cm}
\setlength{\unitlength}{\textwidth}
\begin{picture} (0.5,0.95) (0.1,0)
\mbox{\epsfig{file=fig3s.eps,width=1.25\textwidth,height=1.65\textwidth}}
\end{picture}
{Fig.3. Experimental data \cite{Gia} on $H_q$ for a)-d) $e^+e^-$ ($\sqrt{s}$=29,
34.8, 43.8, 91 GeV) and e)-h) $hh$ ($\sqrt{s}$=62.2, 200, 546, 900 GeV)
collisions. Lines are to guide the eye.}
\end{minipage}
\hspace{4mm}
\begin{minipage}[h]{7.5cm}
\setlength{\unitlength}{\textwidth}
\begin{picture} (1.0,0.95) (0.02,0)
\mbox{\epsfig{file=fig4s.eps,width=1.00\textwidth,height=1.50\textwidth}}
\end{picture}
{Fig. 4. The ratio $H_q$ due to the QCD MC codes:
a) JETSET 7.3, $e^+e^-$, $\sqrt{s}$= 91 GeV;
b) ARIADNE 4.4, $e^+e^-$, $\sqrt{s}$= 91 GeV;
c) PYTHIA 5.5, $e^-p$, $\sqrt{s}$= 314 GeV.
Lines are to guide the eye.}
\end{minipage}
\vspace*{1.0cm}
\noindent
dipole model (CDM) \cite{GAL}.
The LLA used in PS and CDM does
not give a proper treatment of hard emissions. A method
was developed to let a single hard emission be controlled by the exact
$O(\alpha_s)$ or $O(\alpha_s^2)$ QCD matrix elements and then to model the
subsequent radiation using the PS technique.
One can ask a question: are the above-mentioned improvements of the MC models
enough for a proper description of $H_q$? The answer seems to us obvious:
since LLA is the basis of PS one should not expect an oscillatory behaviour of
$H_q$. However, according to our calculations of the correlators with the MC
generators JETSET 7.3 \cite{Sj2}, ARIADNE 4.4 \cite{HERA} ($e^+e^-,
\ \sqrt{s}\,=\,$91 GeV)
and PYTHIA 5.5 \cite{HERA} ($e^-p,\ \sqrt{s}\,=\,$314 GeV), $H_q$ has,
nevertheless, an oscillating form, as shown in Fig. 4.
An explanation of this phenomenon can be found immediately if one recalls
two facts: 1) each MC generator takes special care of both the
local (in the course
of parton splitting) and global energy-momentum conservation in the collision;
2) the finite energy of collisions is the physical origin of the large
$O(\alpha_s)$ corrections \cite{CT}. Thus, the LLA
in conjunction with
energy-momentum conservation in the MC models imitates in part the higher
order corrections leading to the oscillation of $H_q$.
The question arises, though,
how much of the higher order corrections is accounted for?
\newpage
\setlength{\unitlength}{\textwidth}
\begin{picture} (0.5,0.95) (0.1,0)
\mbox{\epsfig{file=fig5s.eps,width=1.10\textwidth,%
height=1.05\textwidth}}
\end{picture}
{Fig.5. The ratio $H_q$ calculated for the NBD with $\langle n\rangle$=9.22,
$k$=17.24. The solid lines are for the NBD truncated at $n_{tr}$ and the dashed
lines are for the NBD without truncation.}
\section{Phenomenological examples}
The conclusions from the previous section can be confirmed by the following
arguments \cite{L1}. Formally, according to (7) the NBD has an infinite
``tail'' at finite collision energy (finite $\langle n\rangle$). This
results in positive $K_q$ and a monotonically declining $H_q$ (8).
On the other hand, an infinite ``tail'' of the MD is possible only for the
production of massless particles or
if energy conservation during the reaction is neglected.
Taking these factors into account leads to a truncation of the MD
``tail'' at some finite multiplicity $n_{tr}(s)$. As a result,
$H_q$ calculated for the
truncated NBD oscillates around the curve $q^{-k}$ with alternating
sign. The amplitude of the oscillations tends to zero quickly
as $n_{tr}\rightarrow\infty$
and $H_q^{(tr)}$ tends to $H_q^{(NBD)}$ (Fig. 5). The same behaviour of
$H_q$ has been found for the truncated Poisson distribution (PD).
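This truncation effect is easy to reproduce numerically. The sketch below evaluates the NBD (7) with the parameters of Fig. 5, truncates it, and computes $H_q$ with the moments helper from the sketch in Sec. 1; the truncation point $n_{tr}=30$ is our arbitrary choice.
\begin{verbatim}
import numpy as np
from math import lgamma

def nbd(nmax, nbar, k):
    """NBD probabilities P_0..P_nmax of Eq. (7), evaluated through
    log-Gamma functions for numerical stability."""
    n = np.arange(nmax + 1)
    lg = np.vectorize(lgamma)
    r = nbar / k
    logp = (lg(n + k) - lg(n + 1) - lgamma(k)
            + n * np.log(r / (1 + r)) - k * np.log(1 + r))
    return np.exp(logp)

# Parameters of Fig. 5; n_tr = 30 is our choice of truncation point.
P_tr = nbd(30, nbar=9.22, k=17.24)     # truncated (renormalized in moments)
P_full = nbd(400, nbar=9.22, k=17.24)  # practically untruncated
_, _, H_tr = moments(P_tr)             # `moments` from the Sec. 1 sketch
_, _, H_full = moments(P_full)
# H_full stays positive and falls like (q-1)!/((k+1)...(k+q-1)), while
# H_tr departs from it and oscillates with alternating sign at moderate q.
\end{verbatim}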
Another example is the behaviour of $H_q$ in soft $p\bar{p}$
collisions at $Sp\bar{p}S$
and Tevatron energies, calculated in \cite{LS} in the framework of the Dual
Parton Model \cite{CTTV}. It was found in \cite{LS} that the properties of $H_q$
(the amplitude of the oscillations, the positions of the minima and maxima) are very
sensitive to the number of cut Pomerons accounted for in the calculation.
\section{ What can be done at HERA\,?}
Among the high energy reactions used in the study of the oscillations of $H_q$
\cite{Gia}, only DIS data are missing.
New data from the $ep$ collider HERA
will be able to rectify this situation. The invariant mass $W$ of
the hadronic final
state in DIS at HERA extends, with significant cross sections, to the phase
space limit ($\sqrt{s}\,=\,$314 GeV). This circumstance allows us to formulate
several problems related to properties of MD which can be studied with the H1
and ZEUS detectors:
1. A detailed study of MD as a function of $z\,=\,n/\langle n\rangle$ over the
whole kinematical region of $W$. Is the KNO
scaling violated at large $W$?
2. A high precision measurement of the ratio $H_q\,=\,K_q/F_q$ of cumulant
and factorial moments, both for the full phase space and for restricted rapidity
windows, for events with 1+1, 1+2, ... jets, etc. Does $H_q$ as
a function of the
order $q$ show oscillations around $H_q\,=\,0$? If so, confronting
the data with the predictions of the MC models we would learn more about
the higher order effects implemented in these MC models.
3. Measurements of $H_q$ at different $W$ will shed more light on the problem of
how the finite energy effects influence the shape of $H_q$.
To conclude, energy-momentum conservation plays a very important role
in the correct description of the multiplicity distributions, $F_q$, $K_q$ and $H_q$, in the
framework of QCD and different phenomenological models. The ratio $H_q$
is extremely sensitive to the length of the MD ``tail''.
In perturbative QCD the behaviour of the MD ``tail'' is controlled by
higher order corrections, while for phenomenological approaches (NBD, PD etc.)
the finite energy effects have to be accounted for
by truncating the MD ``tail''.
\vskip 0.4cm
\noindent{\bf Acknowledgments.} The author thanks I.V. Andreev, I.M. Dremin,
and G. Gianini for discussions, and N. Brook for reading the manuscript
and comments.
This work was supported in part by the
International Science Foundation under grant Ph1 35\,08045 and DESY.
\section{Introduction}
\label{sec1}
\IEEEPARstart{H}{igh}-dimensional data samples emerging in computer vision fields can be viewed as generated from a union of linear subspaces \cite{art_1,RN2518,DBLP:conf/sigmod/AgrawalGGR98,DBLP:journals/tkdd/KriegelKZ09}. Subspace clustering, whose goal is to partition the data samples into several clusters with each cluster corresponding to a subspace, has attracted lots of researchers' attention. In the past decades, many kinds of subspace clustering algorithms have been proposed \cite{RN1940,DBLP:journals/tkde/ChuCYC09,DBLP:journals/tcsv/YiHCC18,DBLP:journals/pami/MaDHW07,DBLP:journals/pami/ElhamifarV13,RN1710}. Among them, spectral-type methods have shown excellent performance in many applications such as motion segmentation, face clustering and so on \cite{DBLP:journals/pami/ElhamifarV13,RN1710,DBLP:conf/iccv/Kanatani01,DBLP:journals/siamrev/MaYDF08}.
\par Without loss of generality, suppose that a clean data matrix $\mathbf{X}= [\mathbf{X}_1,\mathbf{X}_2,$ $\cdots,\mathbf{X}_k]\in\mathcal{R}^{d\times n}$ contains $n$ data samples drawn from $k$ subspaces. $\mathbf{X}_i\subset\mathbf{X}$ denotes the sub-matrix including the $n_i$ data samples lying in the $i$-th subspace, where $\sum_{i=1}^kn_i=n$. And if $i\neq j$ ($i,j =1,2,\cdots,k$), $\mathbf{X}_i\cap\mathbf{X}_j=\emptyset$. The framework of spectral-type subspace clustering algorithms consists of three steps. Firstly, they learn a reconstruction coefficient matrix $\mathbf{Z}\in \mathcal{R}^{n\times n}$ satisfying $\mathbf{X}=\mathbf{XZ}$. Secondly, an affinity matrix $\mathbf{A}$ is built by using the obtained reconstruction coefficient matrix, i.e. $[\mathbf{A}]_{ij}=(|[\mathbf{Z}]_{ij}| + |[\mathbf{Z}^{\top}]_{ij}|)/2$, where $[\mathbf{A}]_{ij}$ and $[\mathbf{Z}]_{ij}$ denote the $(i,j)$-th elements of $\mathbf{A}$ and $\mathbf{Z}$ respectively, and $\mathbf{Z}^{\top}$ is the transpose of $\mathbf{Z}$. Finally, a certain spectral clustering algorithm, e.g. normalized cuts (Ncuts) \cite{DBLP:journals/pami/ShiM00}, is used to get the final clustering results from $\mathbf{A}$. It can be clearly seen that the performance of a spectral-type algorithm relies mainly on the learned reconstruction matrix. An ideal coefficient matrix should have inter-subspace sparsity and intra-subspace connectivity. Namely, if $\mathbf{x}_i$ and $\mathbf{x}_j$ belong to the same subspace, $|[\mathbf{Z}]_{ij}|>0$; otherwise, $|[\mathbf{Z}]_{ij}|=0$.
\par Different spectral-type methods use different regularizers to produce coefficient matrices with different characteristics. For instance, sparse subspace clustering (SSC) \cite{DBLP:journals/pami/ElhamifarV13,conf_1} pursues sparse reconstruction coefficient matrices by introducing a sparse constraint \cite{RN802}. Low-rank representation (LRR) \cite{RN1710,conf_2} seeks a low-rank reconstruction coefficient matrix by minimizing the nuclear norm of the coefficient matrix. Least squares regression (LSR) \cite{Lu:2012} defines a Frobenius norm regularizer and searches for a dense reconstruction coefficient matrix. Block diagonal representation (BDR) \cite{RN2485} provides a $k$ block diagonal reconstruction coefficient matrix by minimizing the sum of the smallest $k$ eigenvalues of the Laplacian matrix of the coefficient matrix. Though these representative methods achieve promising results in different kinds of subspace clustering tasks, the obtained coefficient matrices still have some drawbacks. The coefficient matrices obtained by SSC are usually too sparse to retain connectedness within each subspace. The block diagonal constraint used in BDR may not lead to correct clustering, since each block still may not be fully connected. On the other hand, although connectedness within subspaces is guaranteed in the dense coefficient matrices constructed by LRR and LSR, the coefficients of inter-subspace samples are usually non-zero. To escape these dilemmas, three different types of methods have emerged. Firstly, some extensions of classical regularizers have been developed. For example, Zhang et al. extended the nuclear norm regularizer used in LRR to a kind of Schatten-$p$ norm regularizer \cite{RN2478}. Xu et al. proposed a scaled simplex representation by adding a non-negative constraint and a scaled affine constraint to the coefficient matrix obtained in LSR \cite{RN2691}. Secondly, researchers began to use mixed regularizers of coefficient matrices. Li et al. proposed structured sparse subspace clustering (SSSC) \cite{RN2328} by adding a re-weighted $l_1$-norm regularizer to SSC. The elastic net (EN) method defined a combination of $l_1$-norm and Frobenius regularizers of the coefficient matrices \cite{RN1705,RN2562}. Zhuang et al. combined a sparse constraint and a nuclear norm regularizer to propose a non-negative low-rank and sparse representation method (NNLRSR) \cite{DBLP:journals/tip/ZhuangGTWLMY15}. Tang et al. generalized NNLRSR and devised a structure-constrained LRR (SCLRR) \cite{RN1888}. Lu et al. presented a graph-regularized LRR (GLRR) algorithm which minimizes the nuclear norm and the Laplacian regularizer of the coefficient matrix simultaneously \cite{RN1946}. Tang et al. designed a dense block and sparse representation (DBSR) method which used the $2$-norm (the maximal singular value) and $l_1$-norm regularizers to compute a dense block and sparse coefficient matrix \cite{RN2192}. Thirdly, classical spectral-type subspace clustering algorithms have been integrated to build cascade models. Wei et al. devised a sparse relation representation by stacking SSC and LRR \cite{RN2723}. Sui et al. also provided a similar method showing the effectiveness of cascade models \cite{RN2532}. These extended methods outperform the classical algorithms to a certain extent, but they still may not guarantee to produce ideal coefficient matrices.
\par From the viewpoint of a spectral clustering algorithm, the best affinity matrix $\mathbf{M}^*$ of a data set $\mathbf{X}$ should have the following properties: if $\mathbf{x}_i$ and $\mathbf{x}_j$ belong to the same cluster, then $[\mathbf{M}^*]_{ij} = 1$; otherwise, $[\mathbf{M}^*]_{ij} = 0$. Namely, $\mathbf{M}^*$ should be $k$ block diagonal and have the following formulation:
\begin{equation}\label{e1}
\mathbf{M}^*=\left(
\begin{array}{cccc}
\mathbf{1}_{n_1}\mathbf{1}_{n_1}^{\top}& 0 & \cdots & 0 \\
0 & \mathbf{1}_{n_2}\mathbf{1}^{\top}_{n_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \mathbf{1}_{n_k}\mathbf{1}^{\top}_{n_k}
\end{array}
\right)
\end{equation}
where $\mathbf{1}_{n_i}$ is a column vector with $n_i$ elements, each of which equals $1$. In the correlation clustering domain, $\mathbf{M}^*$ is called a \textbf{membership matrix} \cite{DBLP:conf/soda/MathieuS10,DBLP:conf/soda/Swamy04,DBLP:conf/cvpr/LeeLLK15}. Moreover, researchers have also proved that a variation of the membership matrix, called the \textbf{normalized membership matrix}, is also adequate for spectral clustering. The normalized membership matrix $\mathbf{A}^*$ corresponding to $\mathbf{M}^*$ is expressed as follows:
\begin{equation}\label{e2}
\mathbf{A}^*=\left(
\begin{array}{cccc}
\frac{1}{n_1}\mathbf{1}_{n_1}\mathbf{1}_{n_1}^{\top}& 0 & \cdots & 0 \\
0 & \frac{1}{n_2}\mathbf{1}_{n_2}\mathbf{1}^{\top}_{n_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \frac{1}{n_k}\mathbf{1}_{n_k}\mathbf{1}^{\top}_{n_k}
\end{array}
\right).
\end{equation}
\par Back in the domain of subspace clustering, suppose an affinity matrix $\mathbf{A}$ is a normalized membership matrix. As mentioned above, in a spectral-type subspace clustering algorithm an affinity matrix is defined as $[\mathbf{A}]_{ij}=(|[\mathbf{Z}]_{ij}| + |[\mathbf{Z}^{\top}]_{ij}|)/2$. If we force $\mathbf{Z}=\mathbf{Z}^{\top}$ and $[\mathbf{Z}]_{ij}\geq 0$ (for all $i,j$), the best reconstruction coefficient matrix $\mathbf{Z}^*$ should be the same as $\mathbf{A}^*$, namely $\mathbf{Z}^*$ is also a normalized membership matrix. We can see that such a $\mathbf{Z}^*$ is definitely inter-subspace sparse and intra-subspace connected. The property that each element in a block (i.e., $\mathbf{Z}_i^*$) equals $1/n_i(>0)$ means $\mathbf{Z}^*$ is fully connected in each block. Hence, this kind of coefficient matrix is better than that obtained by BDR. Fig. \ref{f1} presents the coefficient matrices obtained by BDR and by the proposed algorithm on a synthetic data set. We can see that all coefficient matrices are block diagonal, but each block in the coefficient matrices obtained by the proposed algorithm is denser than that obtained by BDR. The subspace structure of the data set is thus revealed more faithfully by the proposed algorithm.
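\par The defining properties of $\mathbf{A}^*$ are easy to verify numerically. The following minimal Python sketch (an illustration only) builds the normalized membership matrix of Eq. (\ref{e2}) from ground truth labels and checks that it is idempotent, doubly stochastic and has trace equal to the number of clusters:
\begin{verbatim}
import numpy as np

def normalized_membership(labels):
    """Build the normalized membership matrix A* from cluster labels:
    [A*]_ij = 1/n_c if samples i and j share cluster c, else 0."""
    labels = np.asarray(labels)
    A = (labels[:, None] == labels[None, :]).astype(float)
    counts = np.bincount(labels)
    return A / counts[labels][:, None]

A = normalized_membership(np.repeat(np.arange(5), 50))  # 5 clusters of 50
print(np.allclose(A, A @ A),          # idempotent
      np.allclose(A.sum(axis=0), 1),  # doubly stochastic
      np.isclose(np.trace(A), 5))     # trace = number of clusters
\end{verbatim}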
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{example1.eps}\\
\caption{We generate $5$ subspaces, each of dimension $5$, in an ambient space of dimension $d = 20$. We sample $50$ data points from each subspace and construct a $d\times 250$ data matrix $\mathbf{X}=[\mathbf{X}_1,\cdots,\mathbf{X}_5]$ without noise. $\mathbf{X}_i (i=1,2,\cdots,5)$ contains the samples from the $i$-th subspace. We then use BDR and the proposed algorithm to compute the coefficient matrices. The two coefficient matrices obtained by BDR are illustrated in (a) and (b). The two coefficient matrices achieved by the proposed algorithm are shown in (c) and (d). }\label{f1}
\end{figure}
\par In \cite{DBLP:conf/cvpr/LeeLLK15}, Lee et al. also suggested constructing a normalized membership matrix for subspace clustering. However, the so-called membership representation (MR) algorithm \cite{DBLP:conf/cvpr/LeeLLK15} takes three steps to finally get the coefficient matrix. Firstly, a certain subspace clustering algorithm, such as SSC or LRR, is used to get an initial coefficient matrix. Secondly, MR seeks a membership matrix by using the obtained initial coefficient matrix. Finally, a normalized membership matrix is computed from the obtained membership matrix. In the last two steps, the augmented Lagrangian method (ALM) \cite{DBLP:journals/corr/LinCM10} is applied to solve the corresponding optimization problems. Hence, besides the computation time used for finding an initial coefficient matrix, the time cost of the last two steps of MR is also high.
\par In this paper, we propose a new method to find a coefficient matrix which is as close to a normalized membership matrix as possible. The motivation of the proposed algorithm is the \textbf{self-expressiveness property of the reconstruction coefficient vectors} obtained by subspace clustering algorithms. As we know, spectral-type subspace clustering algorithms assume that the original data samples obey the self-expressiveness property \cite{DBLP:journals/pami/ElhamifarV13}, i.e., each data point can be well reconstructed by a linear combination of the other points in the given dataset. The self-expressiveness property of the obtained coefficient vectors means that each coefficient vector can be linearly reconstructed by the other coefficient vectors. Based on this proposition and the doubly stochastic constraints \cite{DBLP:conf/iccv/ZassS05,DBLP:conf/nips/ZassS06a}, an idempotent representation (IDR) method for subspace clustering is proposed. For solving the IDR problem, an optimization algorithm is also presented, and the convergence as well as the complexity analysis of the optimization algorithm are given. We also make comparisons between IDR and some related algorithms, which show the superiority of IDR. Finally, extensive experiments conducted on both synthetic and real world databases show the effectiveness and efficiency of the IDR method.
\par The rest of the paper is organized as follows: we introduce the general formulation of spectral-type subspace clustering algorithms in Section \ref{sec2}. In Section \ref{sec3}, we propose the idea of idempotent representation (IDR) and the optimization algorithm for solving the IDR problem. Further discussions of IDR, such as the analysis of the convergence and complexity of the optimization algorithm and the connections between IDR and related algorithms, are given in Section \ref{sec4}. Comparative subspace clustering experiments on both synthetic and real world data sets are presented in Section \ref{sec5}. Section \ref{sec6} presents the conclusions.
\section{Preliminary}
\label{sec2}
Though there is a wide variety of existing spectral-type subspace clustering algorithms, the general objective function of these algorithms can be expressed as follows:
\begin{equation}\label{e3}
\begin{array}{ll}
\min_{\mathbf{Z}} & \Omega (\mathbf{Z}) \\
s.t. & \mathbf{X} = \mathbf{XZ},
\end{array}
\end{equation}
where $\Omega (\mathbf{Z})$ indicates a certain norm regularizer of $\mathbf{Z}$ and $\mathbf{X}\in \mathcal{R}^{d\times n}$ is a data matrix. In real applications, data is often noisy or corrupted. Hence, a more robust version of the above problem can be defined as follows:
\begin{equation}\label{e4}
\begin{array}{ll}
\min_{\mathbf{Z,E}} & \Omega (\mathbf{Z}) + \lambda \Phi (\mathbf{E}), \\
s.t. & \mathbf{X} = \mathbf{XZ} + \mathbf{E},
\end{array}
\end{equation}
where $\mathbf{E}$ is the error term and $\Phi(\mathbf{E})$ is a certain measurement of $\mathbf{E}$. $\lambda$ is a positive parameter which is used to balance the effects of $ \Omega(\mathbf{Z})$ and $\Phi(\mathbf{E})$. Moreover, some algorithms add additional constraints on $\mathbf{Z}$, which can be expressed as $\Theta (\mathbf{Z})$. The main differences between the existing subspace clustering algorithms are then the definitions of $\Omega(\cdot),\Phi(\cdot)$ and $\Theta(\cdot)$. Table \ref{t1} summarizes the formulations of $\Omega(\mathbf{Z}),\Phi(\mathbf{E})$ and $\Theta(\mathbf{Z})$ for some representative subspace clustering algorithms.
\begin{table}
\begin{center}
\scriptsize
\caption{The residual terms, regularizers and additional constraints of coefficient matrices used in some subspace clustering algorithms.}\label{t1}
\begin{tabular}{l|l|l|l}
\hline
Algorithms & $\Omega(\mathbf{Z})$ & $\Phi(\mathbf{E})$ & $\Theta(\mathbf{Z})$\\ \hline
SSC & $\|\mathbf{Z}\|_1$ & $\|\mathbf{E}\|_1$ & $diag(\mathbf{Z})=\mathbf{0}_n$ \\
LRR & $\|\mathbf{Z}\|_*$ & $\|\mathbf{E}\|_{2,1}$ & -\\
LSR & $\|\mathbf{Z}\|_F^2$ & $\|\mathbf{E}\|_F^2$ & $diag(\mathbf{Z})=\mathbf{0}_n$ \\
BDR & $\|\mathbf{Z}\|_{k}$ & $\|\mathbf{E}\|_F^2$ & $diag(\mathbf{Z})=\mathbf{0}_n,$ \\
& & & $\mathbf{Z}=\mathbf{Z}^{\top},$ \\
& & & $\mathbf{Z}\geq 0$ \\
SSSC & $\|(\mathbf{I}_n + \gamma\mathbf{Q})\bigodot \mathbf{Z}\|_1$ & $\|\mathbf{E}\|_1$ & $diag(\mathbf{Z})=\mathbf{0}_n$\\
EN & $\|\mathbf{Z}\|_F^2$ +$\gamma\|\mathbf{Z}\|_1$ & $\|\mathbf{E}\|_{1}$ & $diag(\mathbf{Z})=\mathbf{0}_n$\\
SCLRR & $\|\mathbf{Z}\|_*$ +$\gamma\|\mathbf{Z}\|_1$ & $\|\mathbf{E}\|_{2,1}$ & -\\
GLRR & $\|\mathbf{Z}\|_*$ + $\gamma Tr(\mathbf{ZLZ}^{\top})$ & $\|\mathbf{E}\|_{2,1}$ & -\\
DBSR & $\|\mathbf{Z}\|_2$ + $\gamma\|\mathbf{Z}\|_1$ & $\|\mathbf{E}\|_{2,1}$ & -\\
\hline
\end{tabular}
\end{center}
Notice: $\gamma>0$ is a parameter. $diag(\mathbf{Z})$ denotes a column vector composed by the elements in the diagonal of $\mathbf{Z}$. $\mathbf{0}_n$ is a column vector with each element equals $0$. In BDR, $\|\mathbf{Z}\|_{k}$ is a constraint which forces $\mathbf{Z}$ to be $k$ block diagonal. In SSSC, $\mathbf{Q}\in \mathcal{R}^{n\times n}$ is a weighted matrix updated by the segmentation results in each iteration and $\mathbf{I}_n$ is an $n\times n$ identity matrix. In GLRR, $Tr(\cdot)$ denotes the trace of a matrix and $\mathbf{L}$ is the Laplacian matrix built by using K-nearest-neighbors (KNN) \cite{DBLP:books/lib/DudaHS01} and $\mathbf{X}$.
\end{table}
\par All the algorithms mentioned in Table \ref{t1} can be solved by using ALM. Subsequently, Ncuts is used to get the final subspace clustering results.
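\par For concreteness, the last two steps of this pipeline can be sketched in a few lines of Python; here spectral clustering from scikit-learn is used as a stand-in for Ncuts:
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering

def spectral_step(Z, k):
    """Steps 2-3 of the spectral-type pipeline: symmetrize the learned
    coefficient matrix into an affinity and run spectral clustering."""
    A = 0.5 * (np.abs(Z) + np.abs(Z.T))
    model = SpectralClustering(n_clusters=k, affinity="precomputed",
                               random_state=0)
    return model.fit_predict(A)
\end{verbatim}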
\section{Idempotent representation}
\label{sec3}
\subsection{Motivation}
\label{sec3.1}
The key assumption that spectral-type subspace clustering algorithms have in common is that the data samples in $\mathbf{X}$ obey the self-expressiveness property \cite{DBLP:journals/pami/ElhamifarV13}. Namely, each data sample can be approximately reconstructed by a linear combination of other data points in the given dataset with tolerable errors. Thus, $\mathbf{X}\approx \mathbf{XZ}$ and $\mathbf{Z}$ records the reconstruction relationships of the original data samples.
\par In addition, as described in \cite{RN1710,conf_2}, the obtained coefficient matrix $\mathbf{Z}$ is a representation of the original data matrix $\mathbf{X}$, with $\mathbf{z}_i$ being the representation of $\mathbf{x}_i$. Here, $\mathbf{z}_i$ and $\mathbf{x}_i$ are the $i$-th columns of $\mathbf{Z}$ and $\mathbf{X}$ respectively. It is then reasonable to assume that the coefficient vectors also obey the self-expressiveness property (the \textbf{self-expressiveness property of coefficient vectors}), namely each coefficient vector can be linearly reconstructed by the other coefficient vectors in $\mathbf{Z}$. Therefore, we have
\begin{equation}\label{en1}
\mathbf{Z}=\mathbf{ZT},
\end{equation}
where $\mathbf{T}$ is a reconstruction coefficient matrix corresponding to $\mathbf{Z}$. Moreover, we could hope $\mathbf{T}$ to be close to $\mathbf{Z}$: if $\mathbf{Z}$ is a good representation of $\mathbf{X}$, then $\mathbf{Z}$ should follow the reconstruction relationships of the original data set, and these relationships are exactly what $\mathbf{Z}$ records. Therefore, the following equation holds
\begin{equation}\label{e5}
\mathbf{Z}\approx \mathbf{Z}\times\mathbf{Z} = \mathbf{Z}^2.
\end{equation}
The above equation means that $\mathbf{Z}$ is approximately an idempotent matrix.
\par It is easy to verify that the $n\times n$ identity matrix $\mathbf{I}_n$ is idempotent and a solution to the problem $\mathbf{X}=\mathbf{XZ}$. Hence, in spectral-type subspace clustering algorithms, the above idempotent constraint (Eq. (\ref{e5})) is not sufficient for finding a good coefficient matrix. Fortunately, it can be checked that a normalized membership matrix is also an idempotent matrix. Hence, we will show how to add some necessary constraints to compel an idempotent reconstruction coefficient matrix to be a normalized membership matrix.
\par In fact, Lee et al. pointed out that an idempotent matrix is a normalized membership matrix if and only if it is doubly stochastic \cite{DBLP:conf/cvpr/LeeLLK15}. And a doubly stochastic matrix $\mathbf{Z}\in \mathcal{R}^{n\times n}$ can be completely described by the following doubly stochastic conditions \cite{DBLP:conf/iccv/ZassS05,DBLP:conf/nips/ZassS06a}:
\begin{equation}\label{e6}
\mathbf{1}_n^{\top}\mathbf{Z} = \mathbf{1}_n^{\top}, \mathbf{Z} = \mathbf{Z}^{\top}, \mathbf{Z} \geq \mathbf{0}.
\end{equation}
However, these conditions still cannot prevent $\mathbf{Z}$ from being $\mathbf{I}_n$. As mentioned above, to reveal the subspace structure of a data set $\mathbf{X}$ with $k$ subspaces faithfully, a coefficient matrix should be $k$ block diagonal. Then, for an idempotent and doubly stochastic coefficient matrix $\mathbf{Z}$, we can simply let $Tr(\mathbf{Z})=k$, so that $\mathbf{Z}$ will be $k$ block diagonal. This constraint also prevents $\mathbf{Z}$ from degenerating to the trivial solution, i.e., $\mathbf{Z}=\mathbf{I}_n$. Therefore, by integrating these constraints and the general formulation of subspace clustering algorithms, we can define the idempotent representation (IDR) problem as follows:
\begin{equation}\label{e7}
\begin{array}{ll}
\min_{\mathbf{Z,E}} & \|\mathbf{Z}\|_{id} + \lambda \|\mathbf{E}\|_{2,1}, \\
s.t. & \mathbf{X} = \mathbf{XZ} + \mathbf{E},\\
& \mathbf{1}_n^{\top}\mathbf{Z} = \mathbf{1}_n^{\top}, \mathbf{Z} = \mathbf{Z}^{\top},\mathbf{Z} \geq \mathbf{0},Tr(\mathbf{Z})=k,
\end{array}
\end{equation}
where $\|\mathbf{Z}\|_{id}$ denotes the idempotent regularizer of $\mathbf{Z}$, namely $\|\mathbf{Z}\|_{id} = \|\mathbf{Z}-\mathbf{Z}^2\|_F^2$. In most real applications, some data samples are corrupted, hence we use the $l_{2,1}$ norm to measure the error term $\mathbf{E}$.
\par Furthermore, all these restrictions imposed on $\mathbf{Z}$ will limit its representation capability. To alleviate this problem, we introduce an intermediate term and propose the following relaxed problem:
\begin{equation}\label{e8}
\begin{array}{ll}
\min_{\mathbf{Z,S,E}} & \|\mathbf{Z}-\mathbf{S}\|_F^2+\gamma\|\mathbf{S}\|_{id} + \lambda \|\mathbf{E}\|_{2,1}, \\
s.t. & \mathbf{X} = \mathbf{XZ} + \mathbf{E}, \\
& \mathbf{1}_n^{\top}\mathbf{S} = \mathbf{1}_n^{\top}, \mathbf{S} = \mathbf{S}^{\top},\mathbf{S} \geq \mathbf{0},Tr(\mathbf{S})=k,
\end{array}
\end{equation}
where $\gamma$ is also a positive parameter.
\subsection{Optimization}
Similar to solving the existing subspace clustering problems, we use ALM \cite{DBLP:journals/corr/LinCM10} to find the solutions to the IDR problem (i.e., Eq. (\ref{e8})). Firstly, we transform Eq. (\ref{e8}) into the following equivalent problem:
\begin{equation}\label{e9}
\begin{array}{ll}
\min_{\mathbf{Z,S,C,D,E}} & \|\mathbf{Z}-\mathbf{S}\|_F^2+\gamma\|\mathbf{S}-\mathbf{SC}\|_F^2 + \lambda \|\mathbf{E}\|_{2,1}, \\
s.t. & \mathbf{X} = \mathbf{XZ} + \mathbf{E},\\
& \mathbf{S} = \mathbf{C},\mathbf{S} = \mathbf{S}^{\top},\mathbf{S} \geq \mathbf{0},\\
&\mathbf{1}_n^{\top}\mathbf{C} = \mathbf{1}_n^{\top},\\
& \mathbf{S} = \mathbf{D},Tr(\mathbf{D})=k,
\end{array}
\end{equation}
where $\mathbf{C},\mathbf{D}$ are two auxiliary variables. The corresponding augmented Lagrangian function of Eq. (\ref{e9}) can then be expressed as follows:
\begin{equation}\label{e10}
\begin{array}{ll}
\mathfrak{L} & = \|\mathbf{Z}-\mathbf{S}\|_F^2+\gamma\|\mathbf{S}-\mathbf{SC}\|_F^2 + \lambda \|\mathbf{E}\|_{2,1} \\
&+<\mathbf{Y}_1, \mathbf{X} - \mathbf{XZ} - \mathbf{E}> + <\mathbf{Y}_2,\mathbf{S} - \mathbf{C}>\\
&+ <\mathbf{Y}_3,\mathbf{1}_n^{\top}\mathbf{C} - \mathbf{1}_n^{\top}>+<\mathbf{Y}_4,\mathbf{S} - \mathbf{D}>\\
&+ \mu/2\big(\|\mathbf{X} - \mathbf{XZ} - \mathbf{E}\|_F^2+\|\mathbf{S} - \mathbf{C}\|_F^2\\
&+\|\mathbf{1}_n^{\top}\mathbf{C} - \mathbf{1}_n^{\top}\|_F^2+\|\mathbf{S} - \mathbf{D}\|_F^2\big),
\end{array}
\end{equation}
where $\mathbf{Y}_1,\mathbf{Y}_2$, $\mathbf{Y}_3$ and $\mathbf{Y}_4$ are four Lagrangian multipliers and $\mu>0$ is an additional parameter. By minimizing $\mathfrak{L}$, the variables ${\mathbf{Z},\mathbf{S},\mathbf{C},\mathbf{D},\mathbf{E}}$ can be optimized alternately while fixing the others.
\par \textbf{1. Fix other variables and update $\mathbf{Z}$.} In the $h$-th iteration ($h$ is the iteration index),
\begin{equation}\label{e11}
\begin{array}{l}
\mathbf{Z}^{h+1} = \arg \min_{\mathbf{Z}}\|\mathbf{Z}-\mathbf{S}^h\|_F^2 + <\mathbf{Y}_1^h,\mathbf{X} - \mathbf{XZ} - \mathbf{E}^{h}>\\
+\mu^h/2\|\mathbf{X} - \mathbf{XZ} - \mathbf{E}^{h}\|_F^2\\
=\arg \min_{\mathbf{Z}}\|\mathbf{Z}-\mathbf{S}^h\|_F^2 + \mu^h/2\|\mathbf{X} - \mathbf{XZ} - \mathbf{E}^{h} + \mathbf{Y}^h_1/\mu^h\|_F^2,
\end{array}
\end{equation}
where $\mathbf{S}^h,\mathbf{Y}^h_1$ and $\mu^h$ are the variables obtained in the $h$-th iteration. It can easily be verified that $\mathbf{Z}^{h+1} = \big(2\mathbf{I}_n + \mu^h\mathbf{X}^{\top}\mathbf{X}\big)^{-1}\big(2\mathbf{S}^h + \mu^h(\mathbf{X}^{\top}\mathbf{X} - \mathbf{X}^{\top}\mathbf{E}^h) + \mathbf{X}^{\top}\mathbf{Y}_1^h\big)$, where $\big(2\mathbf{I}_n + \mu^h\mathbf{X}^{\top}\mathbf{X}\big)^{-1}$ is the pseudo-inverse of $(2\mathbf{I}_n + \mu^h\mathbf{X}^{\top}\mathbf{X})$.
\par \textbf{2. Fix other variables and update $\mathbf{S}$.} Similar to updating $\mathbf{Z}$,
\begin{equation}\label{e12}
\begin{array}{l}
\mathbf{S}^{h+1} = \arg \min_{\mathbf{S}}\|\mathbf{Z}^{h+1}-\mathbf{S}\|_F^2 + \gamma\|\mathbf{S}-\mathbf{SC}^h\|_F^2
\\+<\mathbf{Y}^h_2,\mathbf{S}-\mathbf{C}^h> +<\mathbf{Y}^h_4,\mathbf{S}-\mathbf{D}^h>
\\+ \mu^h/2\big(\|\mathbf{S}-\mathbf{C}^h\|_F^2+\|\mathbf{S}-\mathbf{D}^h\|_F^2\big)
\\= \arg \min_{\mathbf{S}}\|\mathbf{Z}^{h+1}-\mathbf{S}\|_F^2 + \gamma\|\mathbf{S}-\mathbf{SC}^h\|_F^2
\\+
\mu^h/2\big(\|\mathbf{S}-\mathbf{C}^h
+\mathbf{Y}^h_2/\mu^h\|_F^2+\|\mathbf{S}-\mathbf{D}^h+\mathbf{Y}^h_4/\mu^h\|_F^2\big).
\end{array}
\end{equation}
Hence, $\mathbf{S}^{h+1} = \big(2\mathbf{Z}^{h+1}+\mu^h\mathbf{C}^h-\mathbf{Y}^h_2+\mu^h\mathbf{D}^h-\mathbf{Y}^h_4\big)\big((2+2\mu^h)\mathbf{I}_n+2\gamma(\mathbf{I}_n-\mathbf{C}^h)(\mathbf{I}_n-\mathbf{C}^h)^{\top}\big)^{-1}$. Because of the non-negative and symmetric constraints on $\mathbf{S}$, we further let $\mathbf{S}^{h+1} = \max (\mathbf{S}^{h+1},\mathbf{0})$ and $\mathbf{S}^{h+1} = \big(\mathbf{S}^{h+1} + (\mathbf{S}^{h+1})^{\top}\big)/2$.
\par \textbf{3. Fix other variables and update $\mathbf{C}$.} We also could find
\begin{equation}\label{e13}
\begin{array}{l}
\mathbf{C}^{h+1} = \arg \min_{\mathbf{C}} \gamma\|\mathbf{S}^{h+1}-\mathbf{S}^{h+1}\mathbf{C}\|_F^2+<\mathbf{Y}^h_2,\mathbf{S}^{h+1}-\mathbf{C}>\\
+<\mathbf{Y}^h_3,\mathbf{1}_n^{\top}\mathbf{C}-\mathbf{1}_n^{\top}>+\mu^h/2\big(\|\mathbf{S}^{h+1}-\mathbf{C}\|_F^2\\
+\|\mathbf{1}_n^{\top}\mathbf{C}-\mathbf{1}_n^{\top}\|_F^2\big)\\
= \arg \min_{\mathbf{C}} \gamma\|\mathbf{S}^{h+1}-\mathbf{S}^{h+1}\mathbf{C}\|_F^2 +\mu^h/2\big(\|\mathbf{S}^{h+1}-\mathbf{C}\\
+\mathbf{Y}^h_2/\mu^h\|_F^2 +\|\mathbf{1}_n^{\top}\mathbf{C} -\mathbf{1}_n^{\top} + \mathbf{Y}^h_3/\mu^h\|_F^2\big).
\end{array}
\end{equation}
Then $\mathbf{C}^{h+1} = \big(2\gamma(\mathbf{S}^{h+1})^{\top}\mathbf{S}^{h+1}+\mu^h(\mathbf{I}_n + \mathbf{1}_n\mathbf{1}_n^{\top})\big)^{-1}\big(2\gamma(\mathbf{S}^{h+1})^{\top}\mathbf{S}^{h+1}+\mathbf{Y}_2^h -\mathbf{1}_n\mathbf{Y}_3^h+\mu^h(\mathbf{S}^{h+1}+\mathbf{1}_n\mathbf{1}_n^{\top})\big)$.
\par \textbf{4. Fix other variables and update $\mathbf{D}$.} For updating $\mathbf{D}$, we get the following problem:
\begin{equation}\label{e14}
\begin{array}{ll}
\min_{\mathbf{D}} &<\mathbf{Y}_4^h,\mathbf{S}^{h+1}-\mathbf{D}>+\mu^h/2\|\mathbf{S}^{h+1}-\mathbf{D}\|_F^2\\
=\min_{\mathbf{D}} &\|\mathbf{S}^{h+1}-\mathbf{D}+\mathbf{Y}_4^h/\mu^h\|_F^2\\
=\min_{\mathbf{D}} &\|\mathbf{D} - \mathbf{M}\|_F^2,\\
s.t. & Tr(\mathbf{D})=k,
\end{array}
\end{equation}
where $\mathbf{M} = \mathbf{S}^{h+1}+\mathbf{Y}_4^h/\mu^h$. Note that the constraint is only imposed on the diagonal elements of $\mathbf{D}$, hence $[\mathbf{D}^{h+1}]_{ij} = [\mathbf{M}]_{ij}$ if $i\neq j$ ($i,j=1,2,\cdots,n$). Let $\mathbf{d} = diag(\mathbf{D})$ and $\mathbf{m} = diag(\mathbf{M})$, then we have
\begin{equation}\label{e15}
\begin{array}{ll}
\min_{\mathbf{d}} &\|\mathbf{d} - \mathbf{m}\|_2^2,\\
s.t. & \mathbf{1}_n^{\top}\mathbf{d}=k.
\end{array}
\end{equation}
This problem could be solved by any off-the-shelf quadratic programming solver. Here we provide a more efficient method to obtain the solution to Problem (\ref{e15}). The Lagrangian function of Problem (\ref{e15}) is
\begin{equation}\label{e16}
\mathcal{L} =\|\mathbf{d} - \mathbf{m}\|_2^2 - \eta(\mathbf{1}_n^{\top}\mathbf{d}-k),
\end{equation}
where $\eta$ is a Lagrange multiplier. The optimal solution $\mathbf{d}$ should satisfy that the derivative of Eq. (\ref{e16}) w.r.t. $\mathbf{d}$ equals zero, so we have
\begin{equation}\label{e17}
2(\mathbf{d} - \mathbf{m})-\eta\mathbf{1}_n=\mathbf{0}_n.
\end{equation}
Then, for the $j$-th element of $\mathbf{d}$, we have
\begin{equation}\label{e18}
d_j - m_j-\eta/2=0,
\end{equation}
where $d_j$ and $m_j$ are the $j$-th elements of $\mathbf{d}$ and $\mathbf{m}$ respectively. According to the constraint $\mathbf{1}_n^{\top}\mathbf{d}=k$ in Problem (\ref{e15}), then
\begin{equation}\label{e19}
\eta = 2(k-\mathbf{1}_n^{\top}\mathbf{m})/n.
\end{equation}
Hence,
\begin{equation}\label{e20}
\mathbf{d} = \mathbf{m} + \mathbf{1}_n(k-\mathbf{1}_n^{\top}\mathbf{m})/n.
\end{equation}
By summarizing the above computations,
\begin{equation}\label{e21}
\mathbf{D}^{h+1} = \mathbf{M} + \mathbf{Diag}\big(\mathbf{1}_n(k-\mathbf{1}_n^{\top}diag(\mathbf{M}))/n\big),
\end{equation}
where $\mathbf{Diag}\big(\mathbf{1}_n(k-\mathbf{1}_n^{\top}diag(\mathbf{M}))/n\big)$ is a diagonal matrix with its diagonal vector being $\mathbf{1}_n(k-\mathbf{1}_n^{\top}diag(\mathbf{M}))/n$.
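\par A minimal Python transcription of this closed-form update (an illustration only; variable names are ours) reads:
\begin{verbatim}
import numpy as np

def update_D(S, Y4, mu, k):
    """Trace-constrained D-update of Eq. (e21): copy the off-diagonal
    of M = S + Y4/mu and shift the diagonal by (k - tr(M))/n so that
    tr(D) = k."""
    M = S + Y4 / mu
    n = M.shape[0]
    D = M.copy()
    D[np.diag_indices(n)] += (k - np.trace(M)) / n
    return D
\end{verbatim}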
\par \textbf{5. Fix other variables and update $\mathbf{E}$.} From Eq. (\ref{e10}), it can easily be obtained that
\begin{equation}\label{e22}
\begin{array}{l}
\mathbf{E}^{h+1} =\arg\min_{\mathbf{E}} \lambda\|\mathbf{E}\|_{2,1} + <\mathbf{Y}_1^h,\mathbf{X}-\mathbf{XZ}^{h+1}-\mathbf{E}>\\
+\mu^h/2\|\mathbf{X}-\mathbf{XZ}^{h+1}-\mathbf{E}\|_F^2\\
=\arg\min_{\mathbf{E}} \lambda\|\mathbf{E}\|_{2,1} + \mu^h/2\|\mathbf{X}-\mathbf{XZ}^{h+1}-\mathbf{E}+\mathbf{Y}^h_1/\mu^h\|_F^2.
\end{array}
\end{equation}
The above problem can be solved by the following lemma presented in \cite{RN1710,conf_2}.
\par \textbf{Lemma 1.} Let $\mathbf{Q}=[\mathbf{q}_1,\mathbf{q}_2,\cdots,\mathbf{q}_i,\cdots]$ be a given matrix. If the optimal solution to
\begin{equation}\label{e22-1}
\min_{\mathbf{P}}\alpha\|\mathbf{P}\|_{2,1} + \frac{1}{2}\|\mathbf{P}-\mathbf{Q}\|_F^2
\end{equation}
is $\mathbf{P}^*$, then the $i$-th column of $\mathbf{P}^*$ is
\begin{equation}\label{e22-2}
\mathbf{P}^*(:,i)=\left\{
\begin{array}{ll}
\frac{\|\mathbf{q}_i\|_2-\alpha}{\|\mathbf{q}_i\|_2}\mathbf{q}_i, & \hbox{if $\alpha<\|\mathbf{q}_i\|_2$;} \\
0, & \hbox{otherwise.}
\end{array}
\right.
\end{equation}
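\par \textbf{Lemma 1} amounts to a column-wise shrinkage, which can be sketched as follows (an illustration only):
\begin{verbatim}
import numpy as np

def prox_l21(Q, alpha):
    """Column-wise shrinkage of Lemma 1: scale each column q_i of Q by
    (||q_i||_2 - alpha)/||q_i||_2 when ||q_i||_2 > alpha, else zero."""
    norms = np.linalg.norm(Q, axis=0)
    scale = np.maximum(norms - alpha, 0.0) / np.maximum(norms, 1e-12)
    return Q * scale[None, :]
\end{verbatim}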
\par \textbf{6. Fix other variables and update the parameters.} The updating schemes for the multipliers and the penalty parameter in Eq. (\ref{e10}) are as follows:
\begin{equation}\label{e23}
\begin{array}{l}
\mathbf{Y}_1^{h+1} = \mathbf{Y}^h_1+\mu^h(\mathbf{X}-\mathbf{XZ}^{h+1}-\mathbf{E}^{h+1}), \\
\mathbf{Y}_2^{h+1} = \mathbf{Y}^h_2+\mu^h(\mathbf{S}^{h+1}-\mathbf{C}^{h+1}), \\
\mathbf{Y}_3^{h+1} = \mathbf{Y}^h_3+\mu^h(\mathbf{1}_n^{\top}\mathbf{C}^{h+1}-\mathbf{1}_n^{\top}), \\
\mathbf{Y}_4^{h+1} = \mathbf{Y}^h_4+\mu^h(\mathbf{S}^{h+1}-\mathbf{D}^{h+1}), \\
\mu^{h+1} = \min(\mu_{max},\rho\mu^h),
\end{array}
\end{equation}
where $\mu_{max}$ and $\rho$ are two given positive parameters.
\subsection{Algorithm}
We summarize the algorithmic procedure of IDR in \textbf{Algorithm 1}. For a data set, once the solutions to IDR are obtained, we use $\mathbf{Z}$ and $\mathbf{S}$ to define two affinity graphs $\mathbf{G}_1$ and $\mathbf{G}_2$ as $[\mathbf{G}_1]_{ij}=\big(|[\mathbf{Z}]_{ij}|+|[\mathbf{Z}^{\top}]_{ij}|\big)/2$ and $[\mathbf{G}_2]_{ij}=\big(|[\mathbf{S}]_{ij}|+|[\mathbf{S}^{\top}]_{ij}|\big)/2$. Ncuts is subsequently performed on the two graphs to get two segmentation results. Finally, the better one is chosen as the final result.
\begin{algorithm}
\small
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand\algorithmicensure {\textbf{Output:}}
\caption{Idempotent representation (IDR)}
\begin{algorithmic}[1]
\REQUIRE ~~\\
Data set $\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{n}]\in \mathcal{R}^{d\times n}$ with each column having unit $l_2$ norm, parameters $\gamma,\lambda$, the number of subspaces $k$, the maximal number of iterations $Maxiter$;
\ENSURE ~~\\
The coefficient matrices $\mathbf{Z}^*$ and $\mathbf{S}^*$ and the noise term $\mathbf{E}^{*}$;
\STATE Initialize the parameters, i.e., $h=0,\mu^h=10^{-6}, \mu_{max}=10^{4}, \rho=1.1, \varepsilon=10^{-7}$ and $\mathbf{Y}_1^h=\mathbf{Y}_2^h=\mathbf{Y}_3^h=\mathbf{Y}_4^h=\mathbf{0},\mathbf{Z}^h=\mathbf{S}^h=\mathbf{C}^h=\mathbf{D}^h=\mathbf{E}^h=\mathbf{0}$.
\WHILE {$\|\mathbf{S}^h-\mathbf{C}^h\|_{\infty}>\varepsilon$, $\|\mathbf{S}^h-\mathbf{D}^h\|_{\infty}>\varepsilon$, $\|\mathbf{1}_n^{\top}\mathbf{C}^h-\mathbf{1}^{\top}_n\|_{\infty}>\varepsilon$ and $h<Maxiter$}
\STATE $h = h + 1$;
\STATE Update $\mathbf{Z}^{h+1} = \big(2\mathbf{I}_n + \mu^h\mathbf{X}^{\top}\mathbf{X}\big)^{-1}\big(2\mathbf{S}^h + \mu^h(\mathbf{X}^{\top}\mathbf{X} - \mathbf{X}^{\top}\mathbf{E}^h) + \mathbf{X}^{\top}\mathbf{Y}_1^h\big)$;
\STATE Update $\mathbf{S}^{h+1} = \big(2\mathbf{Z}^{h+1}+\mu^h\mathbf{C}^h-\mathbf{Y}^h_2+\mu^h\mathbf{D}^h-\mathbf{Y}^h_4\big)\big((2+2\mu^h)\mathbf{I}_n+2\gamma(\mathbf{I}_n-\mathbf{C}^h)(\mathbf{I}_n-\mathbf{C}^h)^{\top}\big)^{-1}$. Then let $\mathbf{S}^{h+1}=\max(\mathbf{0},\mathbf{S}^{h+1})$ and $\mathbf{S}^{h+1} =\big(\mathbf{S}^{h+1} + (\mathbf{S}^{h+1})^{\top}\big)/2$;
\STATE Update $\mathbf{C}^{h+1} = \big(2\gamma(\mathbf{S}^{h+1})^{\top}\mathbf{S}^{h+1}+\mu^h(\mathbf{I}_n + \mathbf{1}_n\mathbf{1}_n^{\top})\big)^{-1}\big(2\gamma(\mathbf{S}^{h+1})^{\top}\mathbf{S}^{h+1}+\mathbf{Y}_2^h -\mathbf{1}_n\mathbf{Y}_3^h+\mu^h(\mathbf{S}^{h+1}+\mathbf{1}_n\mathbf{1}_n^{\top})\big)$.
\STATE Update $\mathbf{D}^{h+1} = \mathbf{M} + \mathbf{Diag}\big(\mathbf{1}_n(k-\mathbf{1}_n^{\top}diag(\mathbf{M}))/n\big)$, where $\mathbf{M}=\mathbf{S}^{h+1}+\mathbf{Y}_4^{h}/\mu^h$;
\STATE Update $\mathbf{E}^{h+1}$ by solving Problem (\ref{e22});
\STATE Update $\mathbf{Y}_1^{h+1},\mathbf{Y}_2^{h+1},\mathbf{Y}_3^{h+1},\mathbf{Y}_4^{h+1}$ and $\mu^{h+1}$ by using Eq. (\ref{e23}).
\ENDWHILE\label{code:recentEnd}
\RETURN $\mathbf{Z}^{*}=\mathbf{Z}^{h}, \mathbf{S}^{*}=\mathbf{S}^{h}, \mathbf{E}^{*}=\mathbf{E}^{h}$.
\end{algorithmic}
\end{algorithm}
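\par For readers who prefer code, a minimal NumPy sketch of \textbf{Algorithm 1} is given below. It follows the update order and the initialization constants of the pseudo-code, reuses \texttt{prox\_l21} from the sketch after \textbf{Lemma 1}, and simplifies the stopping test; it is an illustration, not our optimized implementation.
\begin{verbatim}
import numpy as np

def idr(X, k, gamma=1.0, lam=1.0, max_iter=500, tol=1e-7):
    d, n = X.shape
    I, ones = np.eye(n), np.ones((n, 1))
    XtX = X.T @ X
    Z = np.zeros((n, n)); S = Z.copy(); C = Z.copy(); D = Z.copy()
    E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n)); Y4 = np.zeros((n, n))
    Y3 = np.zeros((1, n))
    mu, mu_max, rho = 1e-6, 1e4, 1.1
    for _ in range(max_iter):
        # step 1: Z-update (closed form)
        Z = np.linalg.solve(2 * I + mu * XtX,
                            2 * S + mu * (XtX - X.T @ E) + X.T @ Y1)
        # step 2: S-update, then project onto nonnegative symmetric matrices
        A = (2 + 2 * mu) * I + 2 * gamma * (I - C) @ (I - C).T
        B = 2 * Z + mu * C - Y2 + mu * D - Y4
        S = np.linalg.solve(A, B.T).T        # S = B A^{-1}, A symmetric
        S = np.maximum(S, 0); S = 0.5 * (S + S.T)
        # step 3: C-update (closed form)
        StS = S.T @ S
        C = np.linalg.solve(2 * gamma * StS + mu * (I + ones @ ones.T),
                            2 * gamma * StS + Y2 - ones @ Y3
                            + mu * (S + ones @ ones.T))
        # step 4: D-update via the trace-constrained projection
        M = S + Y4 / mu
        D = M.copy(); D[np.diag_indices(n)] += (k - np.trace(M)) / n
        # step 5: E-update via the l_{2,1} proximal operator (Lemma 1)
        E = prox_l21(X - X @ Z + Y1 / mu, lam / mu)
        # step 6: multipliers and penalty parameter
        Y1 = Y1 + mu * (X - X @ Z - E)
        Y2 = Y2 + mu * (S - C)
        Y3 = Y3 + mu * (ones.T @ C - ones.T)
        Y4 = Y4 + mu * (S - D)
        mu = min(mu_max, rho * mu)
        if max(np.abs(S - C).max(), np.abs(S - D).max(),
               np.abs(ones.T @ C - ones.T).max()) < tol:
            break
    return Z, S, E
\end{verbatim}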
\section{Further analyses}
\label{sec4}
\subsection{Complexity analysis}
We can see that the complexity of \textbf{Algorithm 1} is mainly determined by the updating of the five variables $\mathbf{Z,S,C,D,E}$. In each iteration, these variables all have closed form solutions. Updating $\mathbf{Z,S,C}$ requires computing the pseudo-inverse of an $n\times n$ matrix, hence the computational burden is $O(n^3)$. Updating $\mathbf{D}$ takes $O(n^2)$ element-wise operations on $n\times n$ matrices. And updating $\mathbf{E}$ by using \textbf{Lemma 1} costs $O(n)$ column-wise shrinkage operations. Hence, the time complexity of \textbf{Algorithm 1} in each iteration is $O(n^3)$. In our experiments, the number of iterations of \textbf{Algorithm 1} is always less than $500$, hence its total complexity is $O(n^3)$.
\subsection{Convergence analysis}
We now present a theoretical convergence proof of the proposed \textbf{Algorithm 1}.
\par \emph{Proposition 1}: \textbf{Algorithm 1} is convergent and the sequence $\{\mathbf{Z}^h,\mathbf{S}^h,$ $\mathbf{C}^h,\mathbf{D}^h,\mathbf{E}^h\}$ generated by \textbf{Algorithm 1} converges to a stationary point.
\par \emph{Proof}: \textbf{Algorithm 1} aims to minimize the Lagrangian function in Eq. (\ref{e10}) by alternately updating the variables $\mathbf{Z,S,C,D,E}$. Firstly, from the updating rule for $\mathbf{Z}^{h+1}$ in Eq. (\ref{e11}), we have
\begin{equation}\label{e24}
\mathbf{Z}^{h+1}=\arg\min_{\mathbf{Z}} \mathfrak{L} (\mathbf{Z},\mathbf{S}^h,\mathbf{C}^h,\mathbf{D}^h,\mathbf{E}^h).
\end{equation}
Note that $\mathfrak{L} (\mathbf{Z},\mathbf{S}^h,\mathbf{C}^h,\mathbf{D}^h,\mathbf{E}^h)$ is $\beta$-strongly convex w.r.t. $\mathbf{Z}$, so the following inequality holds:
\begin{equation}\label{e25}
\begin{array}{l}
\mathfrak{L} (\mathbf{Z}^{h+1},\mathbf{S}^h,\mathbf{C}^h,\mathbf{D}^h,\mathbf{E}^h)
\leq \mathfrak{L} (\mathbf{Z}^{h},\mathbf{S}^h,\mathbf{C}^h,\mathbf{D}^h,\mathbf{E}^h)\\-\beta/2\|\mathbf{Z}^{h+1}-\mathbf{Z}^h\|_F^2.
\end{array}
\end{equation}
Here we have used Lemma B.5 in \cite{DBLP:conf/icml/Mairal13}.
\par Secondly, according to the updating schemes for the rest of the variables, it can be found that these variables, namely $\mathbf{S,C,D,E}$, have properties similar to those of $\mathbf{Z}$. Hence, inequalities analogous to (\ref{e25}) hold for these variables. By adding these inequalities, we have
\begin{equation}\label{e26}
\begin{array}{l}
\mathfrak{L} (\mathbf{Z}^{h+1},\mathbf{S}^{h+1},\mathbf{C}^{h+1},\mathbf{D}^{h+1},\mathbf{E}^{h+1})\leq \mathfrak{L} (\mathbf{Z}^{h},\mathbf{S}^h,\mathbf{C}^h,\mathbf{D}^h,\mathbf{E}^h)\\-\beta/2\Big(\|\mathbf{Z}^{h+1}-\mathbf{Z}^h\|_F^2+\|\mathbf{S}^{h+1}-\mathbf{S}^h\|_F^2
+\|\mathbf{C}^{h+1}-\mathbf{C}^h\|_F^2\\
+\|\mathbf{D}^{h+1}-\mathbf{D}^h\|_F^2+\|\mathbf{E}^{h+1}-\mathbf{E}^h\|_F^2\Big).
\end{array}
\end{equation}
Hence, $\mathfrak{L}(\mathbf{Z}^{h},\mathbf{S}^{h},\mathbf{C}^{h},\mathbf{D}^{h},\mathbf{E}^{h})$ is monotonically decreasing and thus upper bounded. This implies that $\{\mathbf{Z}^h,\mathbf{S}^h,$ $\mathbf{C}^h,\mathbf{D}^h,\mathbf{E}^h\}$ is also bounded. Now, summing inequality (\ref{e26}) over $h=1,2,\cdots$, we have
\begin{equation}\label{e27}
\begin{array}{l}
\sum_{h=1}^{+\infty}\frac{\beta}{2}\Big(\|\mathbf{Z}^{h+1}-\mathbf{Z}^h\|_F^2+\|\mathbf{S}^{h+1}-\mathbf{S}^h\|_F^2 \\+\|\mathbf{C}^{h+1}-\mathbf{C}^h\|_F^2
+\|\mathbf{D}^{h+1}-\mathbf{D}^h\|_F^2+\|\mathbf{E}^{h+1}-\mathbf{E}^h\|_F^2\Big)\\
\leq \mathfrak{L}(\mathbf{Z}^{0},\mathbf{S}^{0},\mathbf{C}^{0},\mathbf{D}^{0},\mathbf{E}^{0}).
\end{array}
\end{equation}
This implies when $h\rightarrow+\infty$,
\begin{equation}\label{e28}
\begin{array}{l}
\mathbf{Z}^{h+1}-\mathbf{Z}^h\rightarrow 0,\\
\mathbf{S}^{h+1}-\mathbf{S}^h\rightarrow 0,\\
\mathbf{C}^{h+1}-\mathbf{C}^h\rightarrow 0,\\
\mathbf{D}^{h+1}-\mathbf{D}^h\rightarrow 0,\\
\mathbf{E}^{h+1}-\mathbf{E}^h\rightarrow 0.
\end{array}
\end{equation}
Moreover, according to its definition, clearly $\mathfrak{L} (\mathbf{Z}^{h},\mathbf{S}^{h},\mathbf{C}^{h},\mathbf{D}^{h},\mathbf{E}^{h})\geq 0$. Therefore, the convergence of \textbf{Algorithm 1} is guaranteed and the sequence $\{\mathbf{Z}^{h},\mathbf{S}^{h},\mathbf{C}^{h},\mathbf{D}^{h},\mathbf{E}^{h}\}$ converges to a stationary point of Eq. (\ref{e9}).
\subsection{Comparative analysis with related algorithms}
We now discuss the relationships between IDR and some related algorithms.
\subsubsection{Comparative analysis with membership representation (MR)}
As mentioned in Section \ref{sec1}, MR also proposes to learn a normalized membership matrix as the reconstruction coefficient matrix \cite{DBLP:conf/cvpr/LeeLLK15}. However, MR is a cascade model which consists of three steps:
\par Firstly, an initial coefficient matrix $\mathbf{W}$ is learned by using SSC or LRR.
\par Secondly, a membership matrix $\mathbf{M}$ is constructed by solving the following problem:
\begin{equation}\label{e30}
\begin{array}{ll}
\min_{\mathbf{M}}&\|\mathbf{W}-\mathbf{W}\bigodot\mathbf{M}\|_1+\lambda\|\mathbf{M}\|_F^2\\
s.t. & diag(\mathbf{M}) = \mathbf{1}_n,\mathbf{M}\geq 0,\mathbf{M}\succeq \mathbf{0},
\end{array}
\end{equation}
where $\mathbf{M}\succeq \mathbf{0}$ requires $\mathbf{M}$ to be positive semi-definite and $\lambda>0$ is a positive parameter.
\par Thirdly, after $\mathbf{M}$ is obtained, a normalized membership matrix $\mathbf{Z}$ is achieved by optimizing the following problem:
\begin{equation}\label{e31}
\begin{array}{ll}
\min_{\mathbf{Z}}&Tr(\mathbf{Z})\\
s.t. & \mathbf{1}_n^{\top}\mathbf{Z}=\mathbf{1}_n^{\top},\mathbf{Z}\geq 0,\mathbf{Z}\succeq \mathbf{0},\\
&\langle\mathbf{H},\mathbf{Z}\rangle\leq c,
\end{array}
\end{equation}
where $\mathbf{H} = \mathbf{1}_n\mathbf{1}_n^{\top}-\mathbf{M}$ and $c = \beta\|\mathbf{H}\|_1/n$, with $\beta>0$ a manually set constant. Note that the symmetric constraint on $\mathbf{Z}$ is omitted in the above problem. Hence, the coefficient matrix found by MR may not be close to a normalized membership matrix.
\par Besides the computation required to find the initial coefficient matrix, Problems (\ref{e30}) and (\ref{e31}) also need to be solved by the ALM method. Clearly, MR is very time-consuming.
\par Additionally, the performance of MR depends on the learned initial coefficient matrix. The value of the parameter in SSC or LRR will influence its performance, and how to choose an initial coefficient matrix is not discussed in \cite{DBLP:conf/cvpr/LeeLLK15}. Moreover, the three hyper-parameters in MR make parameter tuning difficult.
\subsubsection{Comparative analysis with doubly stochastic subspace clustering (DSSC) \cite{DBLP:journals/corr/abs-2011-14859}}
Based on the descriptions in Section \ref{sec3.1}, it can be seen that the normalized membership matrix obtained by IDR is a special case of a doubly stochastic matrix. Recently, Lim et al. devised a doubly stochastic subspace clustering (DSSC) algorithm \cite{DBLP:journals/corr/abs-2011-14859} which pursues a doubly stochastic coefficient matrix. The objective of DSSC can be expressed as follows:
\begin{equation}\label{e32}
\begin{array}{ll}
\min_{\mathbf{Z,A}} & \gamma\|\mathbf{Z}\|_1 +\lambda/2\|\mathbf{Z} -\eta \mathbf{A}\|_F^2+1/2\|\mathbf{X-XZ}\|_F^2\\
s.t. & diag(\mathbf{Z})=0, \mathbf{A} \in \Psi_n,
\end{array}
\end{equation}
where $\gamma,\lambda,\eta>0$ are three parameters and $\mathbf{A} \in \Psi_n$ means that $\mathbf{A}$ is an $n\times n$ doubly stochastic matrix, i.e., $\mathbf{A}$ satisfies the conditions presented in Eq. (\ref{e6}). By using two different strategies to solve the above problem, two different models, joint DSSC (J-DSSC) and approximate DSSC (A-DSSC), are presented. Among them, A-DSSC is a two-step algorithm which first uses LSR or EN to get an initial coefficient matrix and subsequently computes a doubly stochastic matrix. On the other hand, the computational burden of J-DSSC is high, because in each iteration of J-DSSC two intermediate $n\times n$ matrices have to be iteratively updated by using the linearized alternating direction method of multipliers (ADMM) \cite{DBLP:journals/jscic/Ma16}. Moreover, DSSC has three hyper-parameters, which also leads to difficulties in parameter adjustment.
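\par As background for the constraint set $\Psi_n$, recall that a strictly positive matrix can be scaled toward a doubly stochastic one by the classical Sinkhorn-Knopp alternate normalization. The sketch below only illustrates this constraint set; it is not the ADMM solver used by J-DSSC.
\begin{verbatim}
import numpy as np

def sinkhorn_knopp(A, n_iter=1000, eps=1e-12):
    """Scale a nonnegative matrix toward a doubly stochastic one
    (all row and column sums equal to 1)."""
    A = np.asarray(A, dtype=float) + eps      # keep entries positive
    for _ in range(n_iter):
        A = A / A.sum(axis=1, keepdims=True)  # normalize rows
        A = A / A.sum(axis=0, keepdims=True)  # normalize columns
    return A

A = sinkhorn_knopp(np.random.rand(5, 5))
print(np.allclose(A.sum(axis=0), 1), np.allclose(A.sum(axis=1), 1))
\end{verbatim}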
\subsubsection{Comparative analysis with self-representation constrained LRR (SRLRR) \cite{RN2445}}
The idempotent constraint on the coefficient matrix was first proposed in our previous work \cite{RN2445}. SRLRR solves the following problem:
\begin{equation}\label{en33}
\begin{array}{ll}
\min_{\mathbf{Z,E}} & \|\mathbf{Z}\|_* +\gamma\|\mathbf{Z} -\mathbf{Z}^2\|_F^2+\lambda\|\mathbf{E}\|_{2,1}\\
s.t. & \mathbf{X}=\mathbf{XZ}+\mathbf{E}, \mathbf{1}_n^{\top}\mathbf{Z} =\mathbf{1}_n^{\top}.
\end{array}
\end{equation}
The main problem with SRLRR is that solid theoretical connections between $\mathbf{Z}$ and a normalized membership matrix were not established. The nuclear norm minimization and the affine constraint (i.e., $\mathbf{1}_n^{\top}\mathbf{Z} =\mathbf{1}_n^{\top}$) \cite{RN2572} in SRLRR are used to prevent $\mathbf{Z}$ from degenerating to $\mathbf{I}_n$ or $\mathbf{0}_{n\times n}$. This is totally different from IDR.
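\par To illustrate the idempotency exploited by both SRLRR and IDR: an ideal normalized membership matrix is block diagonal with constant blocks $1/n_c$ on clusters of size $n_c$, and such a matrix satisfies $\mathbf{Z}^2=\mathbf{Z}$ exactly, since $\big((1/n_c)\mathbf{1}\mathbf{1}^{\top}\big)^2=(1/n_c)\mathbf{1}\mathbf{1}^{\top}$. A quick numerical check:
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

# ideal normalized membership matrix for clusters of sizes 3 and 2
Z = block_diag(np.full((3, 3), 1/3), np.full((2, 2), 1/2))
print(np.allclose(Z @ Z, Z))          # True: Z is idempotent
print(np.allclose(Z, Z.T),            # symmetric
      np.allclose(Z.sum(axis=0), 1))  # columns sum to one
\end{verbatim}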
\par Based on these comparisons, we can see that the existing subspace clustering methods which also aim to seek normalized membership matrices or doubly stochastic matrices all rely on certain existing regularizers of $\mathbf{Z}$. IDR presents a much different method for tackling subspace clustering problems.
\section{Experiments}
\label{sec5}
\subsection{Experiment setup}
\subsubsection{Datasets}
Both synthetic and real-world data sets are used in our experiments to verify the effectiveness of IDR. Four benchmark databases, including the Hopkins 155 motion segmentation data set \cite{DBLP:conf/cvpr/TronV07}, the ORL face image database \cite{DBLP:conf/wacv/SamariaH94}, the AR face image database \cite{ARface} and the MNIST handwritten digit database\footnote{http://yann.lecun.com/exdb/mnist/}, are used for evaluation.
\subsubsection{Comparison methods} The representative and closely related algorithms SSC \cite{conf_1}, LRR \cite{conf_2,RN1710}, LSR \cite{Lu:2012}, BDR \cite{RN2485}, MR \cite{DBLP:conf/cvpr/LeeLLK15} and DSSC \cite{DBLP:journals/corr/abs-2011-14859} are used for comparison\footnote{We provide the Matlab codes of IDR, MR and DSSC on \url{https://github.com/weilyshmtu/Learning-idempotent-representation-for-subspace-segmentation}. The Matlab codes for SSC and LRR can be found on \url{http://www.vision.jhu.edu/code/} and \url{http://sites.google.com/site/guangcanliu/} respectively, and the Matlab codes of LSR and BDR on \url{https://canyilu.github.io/code/}.}. All the experiments are conducted on a Windows-based machine with an Intel i7-4790 CPU, 20 GB of memory and MATLAB R2017b.
\subsubsection{Parameters Setting}
\label{sec5.1.3}
Because the values of the parameters influence the performance of the evaluated algorithms, for each compared method we tune all the parameters following the suggestions in the corresponding references and retain those with the best performance on each data set. The parameter sets searched for all algorithms are given in Table \ref{t2}. In particular, for MR, when SSC or LRR is used to obtain the initial coefficient matrix, the parameter of these two algorithms is chosen in $[0.001,0.01,0.1,1,5,10]$ according to the description in \cite{DBLP:conf/cvpr/LeeLLK15}.
\begin{table*}
\begin{center}
\caption{Parameters searched over for different methods.}\label{t2}
\scriptsize
\begin{tabular}{c|l}
\hline
Methods & Parameters \\\hline
SSC & $\lambda \in \{0.0001,0.001,0.01,0.1,1,10,20,50,100,200,500,600,800,1000\}$\\\hline
LRR & $\lambda \in \{0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1,2,5,8,10,15,20,50\}$\\\hline
LSR & $\lambda \in \{0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1,2,5,8,10,15,20,50\}$\\\hline
BDR & $\lambda \in \{0.001,0.01,0.05,0.1,0.2,0.5,1,2,3,5,8,10,15,20,50\}$,\\
& $\gamma \in \{0.1,1,10,20,30,40,50,60,70,80\}$\\\hline
MR & $\lambda \in \{0.001,0.01,0.1,1,5,10\},\beta \in \{0.01,0.1,1,5,10\}$\\\hline
DSSC(JDSSC) & $\lambda \in \{0.01,0.25,1,25\}, \eta\in\{0.01,0.05,0.1,0.2\},\gamma\in\{0,0.01\}$\\ \hline
DSSC(ADSSC) & $\lambda \in \{0.1,1,10,25,50\}, \eta\in\{0.0005,0.001,0.01,0.025,0.05,0.1\},\gamma=0$\\\hline
IDR & $\lambda,\gamma\in\{0.001,0.005,0.01,0.02,0.05,0.1,0.2,0.5,1,5,10,50,100,200\}$\\\hline
\end{tabular}
\end{center}
\end{table*}
\subsubsection{Evaluation metrics} For all the evaluated algorithms, we use the obtained coefficient matrices to construct the affinity graphs without any post-processing. For performance evaluation, we use the segmentation accuracy (SA) or the segmentation error (SE), defined as follows:
\begin{equation}\label{e33}
\begin{array}{l}
SA =\frac{1}{n}\sum_{i=1}^n\delta(s_i,f(t_i)),\\
SE = 1 - SA,
\end{array}
\end{equation}
where $s_i$ and $t_i$ represent the ground truth and the output label of the $i$-th point respectively, $\delta(x, y) = 1$ if $x = y$, and $\delta(x, y) = 0$ otherwise, and $f(t_i)$ is the best mapping function that permutes clustering labels to match the ground truth labels.
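\par In practice, the best mapping $f$ is computed with the Hungarian algorithm on the label-overlap matrix; a minimal sketch (the helper name is ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def segmentation_accuracy(s, t):
    """SA as in the equation above: best match of predicted labels t
    to ground-truth labels s."""
    s, t = np.asarray(s), np.asarray(t)
    ls, lt = np.unique(s), np.unique(t)
    # overlap[i, j] = number of points with true label ls[i]
    # and predicted label lt[j]
    overlap = np.array([[np.sum((s == a) & (t == b)) for b in lt]
                        for a in ls])
    row, col = linear_sum_assignment(-overlap)  # maximize total overlap
    return overlap[row, col].sum() / len(s)
\end{verbatim}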
\subsection{Experiments on a synthetic data set}
We generate $5$ subspaces, each of dimension $d = 5$, in an ambient space of dimension $D = 20$. We sample $50$ data points from each subspace and construct a $D\times 250$ data matrix $\mathbf{X}$. Moreover, a certain percentage $p = 0-100\%$ of the data vectors are chosen randomly and corrupted by Gaussian noise with zero mean and variance $0.3\|\mathbf{x}\|_2^2$. Finally, the evaluated algorithms are used to segment the data into $5$ subspaces. For each $p$, the experiments are repeated for $20$ trials. Therefore, there are $220$ subspace clustering tasks in total.
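\par A minimal sketch of this generation protocol follows; the random seed and the QR-based choice of orthonormal bases are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Dim, d, n_sub, n_per = 20, 5, 5, 50

def synthetic_subspaces(p):
    """5 random d-dim subspaces in R^Dim, 50 points each; a fraction p
    of the points is corrupted by N(0, 0.3*||x||_2^2) noise."""
    blocks, labels = [], []
    for c in range(n_sub):
        basis, _ = np.linalg.qr(rng.standard_normal((Dim, d)))
        blocks.append(basis @ rng.standard_normal((d, n_per)))
        labels += [c] * n_per
    X = np.hstack(blocks)
    idx = rng.choice(X.shape[1], int(p * X.shape[1]), replace=False)
    for i in idx:
        var = 0.3 * np.linalg.norm(X[:, i]) ** 2
        X[:, i] += rng.normal(0.0, np.sqrt(var), size=Dim)
    return X, np.array(labels)
\end{verbatim}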
\par Similar experiments can be found in some existing references \cite{conf_2,RN2328}. However, in those experiments the parameters of each evaluated algorithm are fixed when the algorithm is performed on each sub-database; then, by changing the parameter(s), the best results with certain parameter(s) are finally selected. However, performing subspace clustering on a sub-database should be viewed as a separate segmentation task. In our experiments, we hence let the parameter(s) of each algorithm vary in the corresponding sets in Table \ref{t2} and record the highest segmentation accuracy of each evaluated algorithm on each sub-database. The mean of these highest segmentation accuracies (averaged over 20 random trials) of each algorithm versus the percentage of corruption is reported in Fig. \ref{f2}.
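\par This per-task protocol amounts to a plain grid search over the parameter sets of Table \ref{t2}; in the sketch below, \texttt{cluster\_with\_idr} is a placeholder for running IDR followed by spectral clustering, and \texttt{segmentation\_accuracy} is the helper sketched earlier.
\begin{verbatim}
from itertools import product

grid = [0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2,
        0.5, 1, 5, 10, 50, 100, 200]

def best_accuracy(X, labels, cluster_with_idr):
    """Highest SA over the (lambda, gamma) grid for a single task."""
    best = 0.0
    for lam, gamma in product(grid, grid):
        pred = cluster_with_idr(X, lam, gamma)   # placeholder
        best = max(best, segmentation_accuracy(labels, pred))
    return best
\end{verbatim}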
\par In addition, IDR and BDR each produce two coefficient matrices that can be used to compute clustering results. By using different methods (SSC and LRR) to construct initial coefficient matrices, MR can obtain two different results. Based on different strategies, DSSC has two sub-models, namely JDSSC and ADSSC. Hence, we plot the accuracies of all the algorithms by selecting the better of the corresponding two results in Fig. \ref{f2}(a). The detailed segmentation accuracies of IDR and BDR using the two different coefficient matrices, the results of MR based on SSC and LRR (denoted as MR-SSC and MR-LRR), as well as JDSSC and ADSSC, are plotted in Fig. \ref{f2}(b).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{syntheticdataresults.eps}\\
\caption{The segmentation accuracies of each method versus the percentage of corruption. (a) The best results of the evaluated algorithms, (b) the detailed results of IDR, BDR, MR and DSSC, where BDR-Z denotes the results obtained by the reconstruction coefficient matrix $\mathbf{Z}$ and BDR-B indicates the results obtained by the intermediate matrix introduced in the BDR problem.}\label{f2}
\end{figure}
\par From Fig. \ref{f2}(a), we can see that 1) IDR consistently achieves the best results; 2) the performances of LRR, LSR, MR and DSSC are close to each other when the percentage of corruption is smaller than $50\%$; 3) when the percentage of corruption is larger than $50\%$, MR dominates LRR, LSR and DSSC; 4) SSC is inferior to the other algorithms.
\par From Fig. \ref{f2}(b), it can be seen that 1) the results obtained by the two different coefficient matrices of IDR and of BDR are close to each other; 2) the performances of JDSSC and ADSSC are also similar to each other; 3) however, the results of MR-LRR are much better than those of MR-SSC. This means that the performance of MR relies on the initial coefficient matrices.
\par In order to show the sensitivity of IDR to its two parameters $\gamma$ and $\lambda$, we report the segmentation accuracies of IDR as the values of the parameters change. The sub-databases with $p=10\%,50\%,90\%$ are used. The mean segmentation accuracies for the pairs of parameters are illustrated in Fig. \ref{f3}.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{syntheticdataresultssensitity.eps}\\
\caption{The segmentation accuracy against the variation of the parameters of IDR. The vertical axis in each figure denotes the subspace clustering accuracy. The three sub-figures in the left column show the segmentation accuracies obtained by $\mathbf{Z}$ as the parameters vary, and the three sub-figures in the right column record the segmentation accuracies obtained by $\mathbf{S}$ as the parameters vary. The first, second and third rows present the segmentation accuracies of IDR on the sub-databases with corruption percentage equal to $10\%,50\%,90\%$ respectively.}\label{f3}
\end{figure*}
\par From Fig. \ref{f3}, we can see that 1) the performance of IDR is stable when the parameters vary in relatively large intervals; 2) when the corruption percentage is low, IDR is insensitive to $\gamma$.
However, when the corruption percentage is high, small $\gamma$ and $\lambda$ help IDR achieve good results. We believe that when a data set is clean, a normalized membership reconstruction coefficient matrix is easy to obtain, so the idempotent constraint can also be satisfied. However, when most data samples in the data set are corrupted, the normalized membership reconstruction coefficient matrix is difficult to obtain. Hence, in such situations, the corresponding parameter $\gamma$ should be small.
\subsection{Experiments on Hopkins 155 data set}
\par The Hopkins 155 database is a well-known benchmark for testing the performance of subspace clustering algorithms. It consists of 120 sequences of two motions and 35 sequences of three motions. Each sequence is a separate clustering task, so there are 155 clustering tasks in total. The features of each sequence are extracted and tracked along the motion in all frames, and errors are manually removed for each sequence. We illustrate sample images from the Hopkins 155 database in Fig.~\ref{f4}.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{sampleimagesfromhopkins.eps}\\
\caption{The sample images from Hopkins 155 database.}\label{f4}
\end{figure*}
\par We perform the experiments with the original data matrices and with the data matrices projected onto a $4s$-dimensional subspace\footnote{$s$ is the number of subspaces.} obtained by principal component analysis (PCA) \cite{DBLP:books/lib/DudaHS01}. Then the segmentation error (i.e., $SE= 1-SA$) of each evaluated algorithm is computed on each sequence.
\par Firstly, we collect the best results of each algorithm obtained on the 155 sequences with the parameters changing in the given intervals. The mean, median and std. (standard deviation) of the results are reported in Tables \ref{t3} and \ref{t4}. From the two tables, we can see that 1) IDR achieves the best results in these experiments; 2) BDR and LSR also achieve competitive results; 3) MR-LRR and MR-SSC do not outperform their corresponding classical methods LRR and SSC, which means that the post-processing of the coefficient matrices in MR may not always enhance the performance of LRR and SSC; 4) JDSSC fails to achieve satisfying results.
\begin{table*}
\scriptsize
\begin{center}
\caption{The segmentation errors ($\%$) and average computation time (sec.) of different algorithms on the Hopkins 155 database with the original data points. The best results of these algorithms are emphasized in bold.}\label{t3}
\begin{tabular}{c|c|ccc|ccc|ccc}
\hline
\multirow{2}{*}{Methods} & Average time &\multicolumn{3}{c|}{2 motions} &\multicolumn{3}{c|}{3 motions} &\multicolumn{3}{c}{All motions} \\
& (sec.) & mean & median & std. & mean & median & std. & mean & median & std. \\ \hline
IDR-Z & \multirow{2}{*}{$9.15$} & $\mathbf{0.25}$ & $\mathbf{0}$ & $\mathbf{1.15}$ & $\mathbf{1.14}$ & $0.20$ & $\mathbf{2.11}$ & $\mathbf{0.45}$ & $\mathbf{0}$ & $\mathbf{1.47}$ \\
IDR-S & & $0.50$ & $\mathbf{0}$ & $1.89$ & $2.23$ & $0.56$ & $3.49$ & $0.89$ & $\mathbf{0}$ & $2.44$ \\ \hline
SSC & $2.98$ & $1.66$ & $\mathbf{0}$ & $5.13$ & $5.29$ & $1.46$ & $7.35$ & $2.48$ & $\mathbf{0}$ & $5.88$ \\ \hline
LRR & $9.378$ & $1.15$ & $\mathbf{0}$ & $3.19$ & $4.17$ & $1.20$ & $5.99$ & $1.83$ & $\mathbf{0}$ & $4.17$ \\ \hline
LSR & $\mathbf{0.03}$ & $0.56$ & $\mathbf{0}$ & $2.18$ & $1.94$ & $0.21$ & $4.12$ & $0.87$ & $\mathbf{0}$ & $2.79$ \\ \hline
BDR-B & \multirow{2}{*}{$5.50$} & $0.58$ & $\mathbf{0}$ & $2.78$ & $2.72$ & $\mathbf{0}$ & $4.73$ & $1.06$ & $\mathbf{0}$ & $3.42$ \\
BDR-Z & & $0.6$ & $\mathbf{0}$ & $2.76$ & $2.77$ & $\mathbf{0}$ & $5.10$ & $1.09$ & $\mathbf{0}$ & $3.53$ \\\hline
MR-SSC & $41.15$ & $2.71$ & $\mathbf{0}$ & $6.56$ & $9.22$ & $6.05$ & $9.18$ & $4.33$ & $0.21$ & $7.78$ \\
MR-LRR & $43.29$ & $1.39$ & $\mathbf{0}$ & $3.95$ & $6.52$ & $2.85$ & $6.82$ & $2.66$ & $\mathbf{0}$ & $5.28$ \\ \hline
JDSSC & $16.29$ & $12.51$ & $11.45$ & $10.54$ & $24.09$ & $25.06$ & $11.62$ & $15.13$ & $14.48$ & $11.8$ \\
ADSSC & $0.07$ & $2.42$ & $\mathbf{0}$ & $5.67$ & $8.74$ & $5.37$ & $9.65$ & $3.85$ & $\mathbf{0}$ & $7.24$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\scriptsize
\begin{center}
\caption{The segmentation errors ($\%$) and average computation time (sec.) of different algorithms on the Hopkins 155 database with the $4s$-dimensional data points obtained by applying PCA. The best results of these algorithms are emphasized in bold.}\label{t4}
\begin{tabular}{c|c|ccc|ccc|ccc}
\hline
\multirow{2}{*}{Methods} & Average time &\multicolumn{3}{c|}{2 motions} &\multicolumn{3}{c|}{3 motions} &\multicolumn{3}{c}{All motions} \\
& (sec.) & mean & median & std. & mean & median & std. & mean & median & std. \\ \hline
IDR-Z & \multirow{2}{*}{$9.43$} & $\mathbf{0.30}$ & $\mathbf{0}$ & $\mathbf{1.24}$ & $\mathbf{1.20}$ & $\mathbf{0}$ & $\mathbf{2.30}$ & $\mathbf{0.50}$ & $\mathbf{0}$ & $1.58$ \\
IDR-S & & $0.49$ & $\mathbf{0}$ & $1.76$ & $2.16$ & $0.56$ & $3.27$ & $0.86$ & $\mathbf{0}$ & $2.29$ \\ \hline
SSC & $2.29$ & $1.66$ & $\mathbf{0}$ & $5.13$ & $5.29$ & $1.46$ & $7.35$ & $2.48$ & $\mathbf{0}$ & $5.88$ \\ \hline
LRR & $10.27$ & $1.15$ & $\mathbf{0}$ & $3.19$ & $4.17$ & $1.20$ & $5.99$ & $1.83$ & $\mathbf{0}$ & $4.17$ \\ \hline
LSR & $\mathbf{0.03}$ & $0.56$ & $\mathbf{0}$ & $2.18$ & $1.94$ & $0.21$ & $4.12$ & $0.87$ & $\mathbf{0}$ & $2.79$ \\ \hline
BDR-Z & \multirow{2}{*}{$4.41$} & $0.65$ & $\mathbf{0}$ & $2.82$ & $2.91$ & $\mathbf{0}$ & $5.12$ & $1.16$ & $\mathbf{0}$ & $3.59$ \\
BDR-B & & $0.65$ & $\mathbf{0}$ & $2.90$ & $2.68$ & $\mathbf{0}$ & $5.03$ & $1.11$ & $\mathbf{0}$ & $3.58$ \\ \hline
MR-SSC & $39.89$ & $4.62$ & $\mathbf{0}$ & $8.22$ & $10.44$ & $10.47$ & $8.59$ & $6.06$ & $0.47$ & $8.65$ \\
MR-LRR & $44.47$ & $1.39$ & $\mathbf{0}$ & $3.93$ & $6.72$ & $2.85$ & $6.93$ & $2.71$ & $\mathbf{0}$ & $5.34$ \\ \hline
JDSSC & $14.18$ & $12.51$ & $11.45$ & $10.54$ & $24.09$ & $25.06$ & $11.62$ & $15.13$ & $14.48$ & $11.8$ \\
ADSSC & $0.07$ & $2.42$ & $\mathbf{0}$ & $5.67$ & $8.76$ & $5.37$ & $9.65$ & $3.85$ & $\mathbf{0}$ & $7.24$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
\par Moreover, we also report the average computation time of each algorithm on the 155 motion sequences in Tables \ref{t3} and \ref{t4}. Clearly, LSR and ADSSC are much more efficient than the other algorithms. The average computation time of IDR is close to that of LRR, hence the computational burden of IDR is acceptable. We can also see that MR is time-consuming.
\par Secondly, we analyse the experimental results of each algorithm in another way. For each algorithm, we present in Fig. \ref{f5} the percentage of motions whose SEs are less than or equal to a given segmentation error. We can see that the segmentation errors obtained by IDR-Z on all motions are less than $0.2$.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{hopkinscurves.eps}\\
\caption{The percentage of motions versus the variation of segmentation error for each method. (a) The results obtained on the original data samples, (b) The results obtained on the projected data samples.}\label{f5}
\end{figure*}
\par Finally, we also test the sensitivity of IDR to its parameters on the Hopkins 155 database. For a fixed pair $(\gamma,\lambda)$, we compute the segmentation errors of all 155 segmentation tasks and take their mean. Then, by changing the values of $\gamma$ and $\lambda$, we illustrate the performance of IDR against its parameters in Fig. \ref{f6}.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{Hopkinssensitivity.eps}\\
\caption{The segmentation error against the variation of the parameters of IDR on the Hopkins 155 data set. The vertical axis in each figure denotes the subspace clustering error, in contrast to the experiments conducted on the synthetic data sets. The two sub-figures in the left column show the segmentation errors obtained by $\mathbf{Z}$ as the parameters vary, and the two sub-figures in the right column record the segmentation errors obtained by $\mathbf{S}$ as the parameters vary. The first and second rows present the segmentation errors of IDR on the original data sets and the projected data sets respectively.}\label{f6}
\end{figure*}
Based on Fig. \ref{f6}, we can still see that IDR is insensitive to its parameters and that small $(\gamma, \lambda)$ helps IDR achieve better results.
\subsection{Experiments on face image databases}
We now perform experiments on two benchmark face image databases, i.e., the ORL database \cite{DBLP:conf/wacv/SamariaH94} and the AR database \cite{ARface}. Brief information on the two databases is as follows:
\par ORL database contains $400$ face images (without noise) of $40$ persons. Each individual has 10 different images. These images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). In our experiments, each image is resized to $32\times 32$ pixels.
\par The AR database consists of over $4000$ face images of $126$ individuals. For each individual, 26 pictures were taken in two sessions (separated by two weeks) and each session contains 13 images. These images include frontal views of faces with different expressions, illuminations and occlusions. In our experiments, each image is resized to $50\times 40$ pixels.
\par Moreover, the pixel values of the images in the two databases lie in $[0,255]$. For efficient computation, we divide each pixel value by $255$, so that the pixel values of each image fall into $[0,1]$. This does not change the distribution of the original data sets. Some sample images from the ORL and AR databases are shown in Fig. \ref{f7}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{samplesoffaceimagedatabases.eps}\\
\caption{Sample images from ORL database and AR database.}\label{f7}
\end{figure}
\par We first randomly choose images of $q$ persons from the two databases; in the ORL database, $q\in \{6,12,18,24,30,36\}$, and in the AR database, $q\in \{4,8,12,16,20\}$. Then the performances of the evaluated methods are tested on these sub-databases. With the parameters varying, the highest clustering accuracy of each algorithm obtained on each sub-database is collected. These experiments are run for $10$ trials, and the mean and standard deviation of the SAs obtained by each algorithm are reported in Tables \ref{t5} and \ref{t6} respectively.
\begin{table*}
\scriptsize
\begin{center}
\caption{Segmentation accuracies ($\%$) $\pm$ standard deviations of the evaluated algorithms on the sub-databases of the ORL database. The best results (mean) of the algorithms are emphasized in bold.}\label{t5}
\begin{tabular}{c|cccccc}
\hline
\multirow{2}{*}{Methods} & \multicolumn{6}{c}{Number of persons} \\\cline{2-7}
& $6$ & $12$ & $18$ & $24$ & $30$ & $36$ \\\hline
IDR-Z & $92.33\pm7.42$ & $87.08\pm3.91$ & $87.28\pm4.21$ & $83.54\pm2.56$ & $82.40\pm1.74$ & $\mathbf{81.89\pm0.87}$ \\
IDR-S & $\mathbf{94.17\pm6.35}$ & $\mathbf{89.50\pm1.63}$ & $87.39\pm2.66$ & $83.67\pm2.28$ & $\mathbf{83.03\pm1.37}$ & $81.81\pm0.85$ \\\hline
SSC & $89.83\pm7.22$ & $80.25\pm5.81$ & $80.44\pm5.12$ & $79.00\pm2.57$ & $77.93\pm2.88$ & $77.36\pm1.89$ \\\hline
LRR & $89.00\pm7.04$ & $82.50\pm5.61$ & $84.67\pm4.08$ & $80.71\pm2.88$ & $79.43\pm1.85$ & $77.75\pm1.26$ \\\hline
LSR & $91.67\pm5.88$ & $86.08\pm3.95$ & $86.33\pm3.47$ & $82.92\pm3.11$ & $81.60\pm1.14$ & $80.69\pm1.52$ \\\hline
BDR-Z & $90.50\pm7.20$ & $85.67\pm5.03$ & $86.89\pm3.49$ & $83.38\pm2.56$ & $80.53\pm1.31$ & $80.36\pm1.09$ \\
BDR-B & $91.83\pm7.13$ & $86.08\pm2.97$ & $\mathbf{88.28\pm3.22}$ & $84.17\pm2.76$ & $81.20\pm2.66$ & $80.19\pm0.67$ \\\hline
MR-SSC & $91.67\pm6.67$ & $87.17\pm6.59$ & $86.00\pm3.48$ & $\mathbf{84.46\pm2.22}$ & $82.90\pm1.67$ & $80.75\pm1.06$ \\
MR-LRR & $85.00\pm6.57$ & $86.08\pm3.51$ & $84.94\pm3.98$ & $81.71\pm3.54$ & $78.23\pm1.71$ & $78.92\pm0.55$ \\\hline
JDSSC & $88.00\pm7.93$ & $87.83\pm5.14$ & $86.83\pm3.46$ & $81.75\pm2.23$ & $77.83\pm1.89$ & $77.36\pm1.08$ \\
ADSSC & $91.17\pm6.29$ & $85.50\pm4.01$ & $85.83\pm3.49$ & $82.58\pm2.25$ & $81.10\pm1.34$ & $80.03\pm0.79$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\scriptsize
\begin{center}
\caption{Segmentation accuracies ($\%$) $\pm$ standard deviations of the evaluated algorithms on the sub-databases of the AR database. The best results (mean) of the algorithms are emphasized in bold.}\label{t6}
\begin{tabular}{c|ccccc}
\hline
\multirow{2}{*}{Methods} & \multicolumn{5}{c}{Number of persons} \\\cline{2-6}
& $4$ & $8$ & $12$ & $16$ & $20$ \\\hline
IDR-Z & $90.29\pm9.44$ & $88.03\pm7.90$ & $88.40\pm4.79$ & $89.59\pm2.38$ & $87.65\pm4.16$ \\
IDR-S & $\mathbf{91.44\pm8.45}$ & $\mathbf{94.18\pm4.67}$ & $\mathbf{94.04\pm3.89}$ & $\mathbf{93.08\pm3.22}$ & $\mathbf{90.54\pm3.53}$ \\\hline
SSC & $88.27\pm11.28$ & $82.69\pm9.03$ & $81.92\pm6.63$ & $79.18\pm4.04$ & $79.25\pm4.33$ \\\hline
LRR & $86.63\pm10.22$ & $85.53\pm6.89$ & $87.98\pm6.12$ & $87.28\pm3.63$ & $86.27\pm3.04$ \\\hline
LSR & $83.65\pm12.22$ & $85.10\pm7.31$ & $89.78\pm4.66$ & $86.56\pm3.25$ & $86.04\pm3.98$ \\\hline
BDR-Z & $85.29\pm13.40$ & $83.51\pm6.03$ & $85.83\pm6.43$ & $86.13\pm3.94$ & $83.35\pm2.77$ \\
BDR-B & $88.17\pm11.00$ & $85.67\pm7.28$ & $84.65\pm5.32$ & $82.74\pm3.77$ & $81.58\pm4.32$ \\\hline
MR-SSC & $88.17\pm13.58$ & $84.23\pm7.93$ & $82.95\pm7.64$ & $82.12\pm4.20$ & $79.71\pm3.47$ \\
MR-LRR & $84.71\pm14.03$ & $81.15\pm4.57$ & $86.25\pm7.23$ & $88.51\pm3.67$ & $86.11\pm6.68$ \\\hline
JDSSC & $66.83\pm19.56$ & $68.87\pm7.67$ & $76.60\pm6.83$ & $80.65\pm2.13$ & $76.73\pm4.30$ \\
ADSSC & $83.41\pm12.86$ & $80.17\pm5.72$ & $83.41\pm1.72$ & $83.53\pm1.67$ & $81.49\pm4.03$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
Clearly, the two tables show that in most cases IDR outperforms the other algorithms on the two databases. Especially on the AR database, IDR obtains much better results than the other evaluated algorithms.
\par We also compare the computation time of all the evaluated algorithms. For each face image database, on the sub-databases with a fixed $q$ (number of persons), we compute the average computation time of each algorithm. The computation time of each algorithm as a function of $q$ is illustrated in Fig. \ref{f8}. Similar to the results obtained on the Hopkins 155 database, it can be seen that the computation time of IDR is acceptable: when $q$ is relatively small, the computation cost of IDR is close to that of LRR, and when $q$ is relatively large, IDR is more efficient than LRR. However, JDSSC spends much more time than the other algorithms.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{facetimeresults.eps}\\
\caption{The average computation time (sec.) of each algorithm versus the number of persons. (a) the results obtained on ORL database, (b) the results obtained on AR database. }\label{f8}
\end{figure}
\par Finally, we test the convergence of IDR by using all the samples in the ORL database. The residuals of the three variables $\mathbf{Z},\mathbf{S},\mathbf{E}$ in Eq. (\ref{e9}) are defined as $\mathrm{residualZ} = \|\mathbf{Z}^h-\mathbf{Z}^{h+1}\|_F^2$, $\mathrm{residualS} = \|\mathbf{S}^h-\mathbf{S}^{h+1}\|_F^2$ and $\mathrm{residualE} = \|\mathbf{E}^h-\mathbf{E}^{h+1}\|_F^2$. Fig. \ref{f9} plots the residuals versus the number of iterations. It can be seen that the variables $\mathbf{Z},\mathbf{S},\mathbf{E}$ converge to a stationary point within a relatively small number of iterations of \textbf{Algorithm 1}, and when the number of iterations is larger than $200$, the residuals are close to $0$.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{convergence.eps}\\
\caption{The residuals of variables $\mathbf{Z},\mathbf{S},\mathbf{E}$ versus the iterations on the whole ORL database.}\label{f9}
\end{figure*}
\par The performances of the evaluated algorithms on the whole ORL database are reported in Table \ref{t7}. We can see that IDR still achieves the best results. In Table \ref{t7}, the average computation time of each algorithm with different parameters is also reported. Moreover, the sensitivity of IDR to its parameters is illustrated in Fig. \ref{f10}. It again shows that IDR is stable and obtains good results when $\gamma$ is relatively small.
\begin{table*}
\scriptsize
\begin{center}
\caption{The segmentation accuracies ($\%$) and average computation time (sec.) of the evaluated algorithms on the whole ORL database. The best results of the algorithms are emphasized in bold.}\label{t7}
\begin{tabular}{c|cc|c|c|c|cc|c|c|c|c}
\hline
Methods&IDR-Z & IDR-S & SSC & LRR & LSR & BDR-Z & BDR-B & MR-SSC & MR-LRR & JDSSC & ADSSC \\\hline
Segmentation Accuracy&$\mathbf{81.50}$ & $80.75$ & $78.00$ & $76.00$ & $78.25$ & $79.50$ & $81.25$ & $80.25$ & $75.25$ & $73.75$ & $78.25$ \\
Average time (sec.)&\multicolumn{2}{c|}{$36.19$} & $16.23$ & $58.55$ & $\mathbf{0.06}$ & \multicolumn{2}{c|}{$26.54$} & $113.84$ & $136.78$ & $88.43$ & $0.11$\\\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{40sensitity.eps}\\
\caption{The segmentation accuracies of IDR against the variation of parameters on the whole ORL database.}\label{f10}
\end{figure}
\subsection{Experiments on MNIST data set}
\par The MNIST database has 10 subjects, corresponding to the $10$ handwritten digits `$0$'-`$9$'. We first select a subset consisting of the first 100 samples of each subject's training data to form a sub-MNIST database, and each image is resized to $28\times28$ pixels. Some sample images from the database are illustrated in Fig. \ref{f11}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{sampleimagesofMNIST.eps}\\
\caption{Sample images from MNIST database.}\label{f11}
\end{figure}
\par We then follow methodologies similar to those used in the above experiments. Here, we randomly choose images of $\{2,4,6,8,10\}$ digits from the subjects' training data to build sub-databases. We also run the experiments for $10$ trials and record the mean and standard deviation of the segmentation accuracies obtained by each algorithm in Table \ref{t8}.
\begin{table*}
\scriptsize
\begin{center}
\caption{Segmentation accuracies ($\%$) $\pm$ standard deviations of the evaluated algorithms on the sub-databases of the MNIST database. The best results (mean) of the algorithms are emphasized in bold.}\label{t8}
\begin{tabular}{c|ccccc}
\hline
\multirow{2}{*}{Methods} & \multicolumn{5}{c}{Number of digits} \\\cline{2-6}
& $2$ & $4$ & $6$ & $8$ & $10$ \\\hline
IDR-Z & $98.45\pm1.99$ & $84.92\pm7.37$ & $73.67\pm4.25$ & $\mathbf{71.26\pm4.13}$ & $66.65\pm2.38$ \\
IDR-S & $\mathbf{99.35\pm0.91}$ & $\mathbf{85.60\pm9.47}$ & $\mathbf{74.58\pm4.03}$ & $70.97\pm4.35$ & $\mathbf{68.00\pm2.09}$ \\ \hline
SSC & $97.05\pm2.03$ & $77.15\pm10.67$ & $67.83\pm3.48$ & $64.56\pm4.08$ & $62.71\pm2.15$ \\\hline
LRR & $96.80\pm2.54$ & $82.50\pm7.36$ & $68.38\pm3.70$ & $64.46\pm4.10$ & $61.02\pm2.09$ \\\hline
LSR & $94.70\pm5.42$ & $77.05\pm7.54$ & $66.73\pm4.67$ & $61.29\pm5.50$ & $56.55\pm2.53$ \\\hline
BDR-Z & $95.15\pm5.29$ & $76.35\pm6.55$ & $67.87\pm4.85$ & $60.78\pm3.41$ & $57.79\pm2.11$ \\
BDR-B & $93.20\pm6.33$ & $74.78\pm7.56$ & $64.72\pm6.83$ & $56.92\pm3.47$ & $54.21\pm1.65$ \\\hline
MR-SSC & $97.35\pm1.73$ & $76.72\pm9.17$ & $67.33\pm2.91$ & $63.02\pm2.66$ & $59.54\pm1.65$ \\
MR-LRR & $96.25\pm2.76$ & $77.40\pm9.33$ & $66.25\pm3.27$ & $61.46\pm3.51$ & $57.67\pm2.98$ \\\hline
JDSSC & $97.75\pm1.57$ & $77.38\pm13.53$ & $68.73\pm5.14$ & $63.96\pm4.33$ & $62.15\pm1.75$ \\
ADSSC & $95.15\pm4.74$ & $76.13\pm6.96$ & $65.33\pm4.78$ & $60.24\pm3.93$ & $56.87\pm2.79$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
\par From Table \ref{t8}, we can find that IDR still dominates the other algorithms; in fact, IDR achieves much better results than the other algorithms. In addition, we can see that the performances of the other algorithms are close to each other.
\par Moreover, we also plot the average computation time of each algorithm against the number of digits in Fig. \ref{f12}(a), and show how the performances of IDR-Z and IDR-S change with the values of the parameters $\gamma$ and $\lambda$ in Fig. \ref{f12}(b) and Fig. \ref{f12}(c) respectively. For the visualization of IDR's sensitivity, here we use the $10$ sub-databases with $10$ digits.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{mnistresults.eps}\\
\caption{(a) The average computation time (sec.) of each algorithm versus the number of digits. (b) The segmentation accuracies of IDR-Z against the variation of the parameters. (c) The segmentation accuracies of IDR-S against the variation of the parameters.}\label{f12}
\end{figure*}
\par From Fig. \ref{f12}, we can conclude that 1) the computation time of IDR is much less than that of MR and JDSSC; 2) the computation costs of MR-SSC and MR-LRR are much larger than those of the other algorithms; 3) IDR achieves better results with small $\gamma$ and $\lambda$. This coincides with the experiments presented above.
\par Based on all the above experiments, we can draw the following conclusions: 1) IDR obtains satisfying subspace clustering results on different kinds of databases; 2) compared with closely related algorithms, such as MR and DSSC, the computation cost of IDR is acceptable; 3) IDR is insensitive to its two parameters, though small parameter values help IDR achieve better results.
\section{Conclusions}
\label{sec6}
Spectral-type subspace clustering algorithms show excellent performance in subspace clustering tasks. The classical spectral-type methods use different norms of the reconstruction coefficient matrices in the hope of finding coefficient matrices with intra-subspace connectivity and inter-subspace sparsity. In this paper, we design an idempotent constraint for reconstruction coefficient matrices based on the proposition that reconstruction coefficient vectors also obey the self-expressiveness property. By integrating doubly stochastic constraints, we present an idempotent representation (IDR) method for subspace clustering. Subspace clustering experiments conducted on both synthetic and real-world data sets verify the effectiveness and efficiency of IDR.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The study of derivations on C$^*$-algebras, which was started in 1953 by Kaplansky, has undergone several stages during its course: the theory of bounded derivations, unbounded derivations, and noncommutative vector fields, according to \cite{B}. Originally motivated by research on dynamics in statistical mechanics, the development of the theory of unbounded derivations in C$^*$-algebras began much later than its bounded counterpart; see \cite{Sa}. The focus was on closability, generator properties and classification of closed derivations. More recently, classification and generator properties of derivations which are well behaved with respect to the action of a locally compact group were some of the major concerns \cite{BEJ}. Additionally, derivations feature in the theory of noncommutative vector fields \cite{J}, which was inspired by Connes' work on noncommutative geometry \cite{C}.
In this paper we study classification and decompositions of unbounded derivations in Bunce-Deddens-Toeplitz and Bunce-Deddens algebras \cite{BD1}, \cite{BD2}. Given an increasing sequence $\{l_k\}_{k=0}^{\infty}$ of nonnegative integers such that $l_k$ divides $l_{k+1}$ for $k\geq 0$, the Bunce-Deddens-Toeplitz algebra is defined as the C$^*$-algebra of operators on $\ell^2(\mathbb Z_{\geq 0})$ generated by all $l_k$-periodic weighted shifts for all $k \geq 0$. Different sequences $\{l_k\}$ may lead to the same algebras, with the classifying invariant being the supernatural number $N=\prod_{p-\textnormal{prime}} p^{\epsilon_p}$, where $\epsilon_p:=\sup\{j: \exists k\ p^j|l_k\}$. In this paper we adopt a slightly different definition of the Bunce-Deddens-Toeplitz algebra $A(N)$ associated with the supernatural number $N$ that uses $N$ more directly. We consider both finite and infinite $N$.
The algebra $\mathcal K$ of compact operators on $\ell^2(\mathbb Z_{\geq 0})$ is contained in $A(N)$ and the quotient $A(N)/ \mathcal K := B(N)$ is known as the Bunce-Deddens algebra. The structure of all those algebras is quite different depending on whether $N$ is finite or infinite. The main objects of study in this paper are densely defined derivations $d: \mathcal A(N) \rightarrow A(N)$ in the Bunce-Deddens-Toeplitz algebras, where $\mathcal A(N)$ is the subalgebra of polynomials of $l_k$-periodic weighted shifts, as well as derivations $\delta: \mathcal B(N) \rightarrow B(N)$ in the Bunce-Deddens algebras, where $\mathcal B(N)$ is the image of $\mathcal A(N)$ under the quotient map $A(N)\to A(N)/ \mathcal K = B(N)$.
Intriguingly, if $d: \mathcal A(N) \rightarrow A(N)$ is any derivation then $d$ preserves the ideal of compact operators $\mathcal K$, and consequently $[d]: \mathcal B(N) \rightarrow B(N)$ defined by $[d](a+ \mathcal K)= d(a)+ \mathcal K$ is a derivation in $B(N)$. It is a non-trivial problem to describe properties of the map $d\mapsto [d]$. In general, on any C$^*$-algebra, bounded derivations preserve closed ideals and so define derivations on quotients. It was proven in \cite{P} that for bounded derivations and separable C$^*$-algebras the above map is onto, i.e., derivations can be lifted from quotients in separable cases but not in general. We prove here that lifting unbounded derivations from Bunce-Deddens to Bunce-Deddens-Toeplitz algebras is always possible when $N$ is finite and conjecture that it is true for any supernatural number $N$.
The main results of this paper are that any derivation in Bunce-Deddens or Bunce-Deddens-Toeplitz algebras can be uniquely decomposed into a sum of a certain special derivation and an approximately inner derivation. The special derivations are not approximately inner, are explicitly described, and depend on whether $N$ is finite or infinite.
The algebra $A(N)$ has a natural $S^1$ action given by scalar multiplication of the generators, see formula (\ref{rho_action}), which also quotients to $B(N)$. The key technique, like in \cite{BEJ}, is to use Fourier series decomposition with respect to this action. The Fourier components of a derivation $d$ satisfy a covariance property with respect to the $S^1$ action. It turns out that such $n$-covariant derivations can be completely classified and their properties explicitly analyzed. We then use Ces\`aro convergence of Fourier series to infer properties of $d$.
Additionally, we describe implementations of derivations in various GNS Hilbert spaces associated with the algebras. Some of those implementations can be used to construct spectral triples on Bunce-Deddens-Toeplitz and Bunce-Deddens algebras, similarly to what was done in \cite{KMR1},\cite{KMR2}.
\section{Definitions, Notations and preliminary results.}
In this section we introduce notation and terminology used in the paper.
\subsection{$\mathbb Z/N\mathbb Z$ rings}
A {\it supernatural number} $N$ is defined as the formal product:
\[N= \prod_{p-\textnormal{prime}} p^{\epsilon_p}, \;\;\; \epsilon_p \in\{0,1, \cdots, \infty\}.\]
If $\sum \epsilon_p < \infty$ then $N$ is said to be a finite supernatural number (a regular natural number), otherwise it is said to be infinite. If $N'= \prod_{p-\textnormal{prime}} p^{\epsilon_p'}$ is another supernatural number, then their product is given by:
\[NN'= \prod_{p-\textnormal{prime}} p^{\epsilon_p + \epsilon_p'}.\]
A supernatural number $N$ is said to divide $M$ if $M=NN'$ for some supernatural number $N'$, or equivalently, if $\epsilon_p(N) \leq \epsilon_p(M)$ for every prime $p$.
For the remainder of the paper we work with a fixed $N$. We let
\[\mathcal J_N=\{j: \; j|N, j<\infty\}\]
be the set of finite divisors of $N$.
Notice that $(\mathcal J_N, \leq)$ is a directed set where $j_1 \leq j_2$ if and only if $j_1 | j_2 |N$.
Consider the collection of rings $\left\{\mathbb Z/ j\mathbb Z\right\}_{j \in \mathcal J_N}$ and the family of ring homomorphisms
\[\begin{aligned}\pi_{ij}: \mathbb Z/ j\mathbb Z&\rightarrow \mathbb Z/ i\mathbb Z, \;\;\;\; j\geq i\\
\pi_{ij}(x)&=x\ (\textrm{mod } i)\end{aligned}\]
satisfying
\[\pi_{ik} = \pi_{ij} \circ \pi_{jk} \textnormal{ for all } i \leq j \leq k.\]
Then the inverse limit of the system can be denoted as:
\[\mathbb Z/N\mathbb Z:=\lim_{\underset{j\in \mathcal J_N}{\longleftarrow}} \mathbb Z/ j\mathbb Z
=\left\{\{x_j\}\in\prod\limits_{j\in \mathcal J_N}\mathbb Z/ j\mathbb Z : \pi_{ij}(x_j)=x_i\right\},\]
and let $\pi_j: \mathbb Z/N\mathbb Z \ni \{x_j\}\mapsto x_j \in \mathbb Z/ j\mathbb Z$ be the corresponding homomorphisms.
In particular, if $N$ is finite the above definition coincides with the usual meaning of the symbol $\mathbb Z/N\mathbb Z$, while if $N=p^\infty$ for a prime $p$, then the above limit is equal to $\mathbb Z_p$, the ring of $p$-adic integers, see for example \cite{R}.
In general we have the following simple consequence of the Chinese Reminder Theorem.
\begin{prop}
If $N= \prod\limits_{\substack{p-\textnormal{prime} \\ {\epsilon_p \neq 0}}} p^{\epsilon_p}$, then $ \mathbb Z/N\mathbb Z \cong \prod\limits_{\substack{p-\textnormal{prime} \\ {\epsilon_p \neq 0}}} \mathbb Z/ {p^{\epsilon_p}}\mathbb Z$.
\end{prop}
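For example, for the finite supernatural number $N=12=2^2\cdot 3$ the proposition gives
\[\mathbb Z/12\mathbb Z \cong \mathbb Z/4\mathbb Z \times \mathbb Z/3\mathbb Z, \;\;\;\; x\ (\textrm{mod } 12) \mapsto \big(x\ (\textrm{mod } 4),\, x\ (\textrm{mod } 3)\big),\]
while for $N=2^\infty\cdot 3$ one obtains $\mathbb Z/N\mathbb Z \cong \mathbb Z_2 \times \mathbb Z/3\mathbb Z$.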
When the ring $ \mathbb Z/N\mathbb Z $ is equipped with the Tychonoff topology it forms a compact, abelian topological group. Thus it has a unique normalized Haar measure $d_Hx$. Also, if $N$ is an infinite supernatural number then $ \mathbb Z/N\mathbb Z$ is a Cantor set \cite{W}.
Let $q_j: \mathbb Z \rightarrow \mathbb Z/j\mathbb Z$ be the quotient maps and let $q: \mathbb Z \rightarrow \mathbb Z/N\mathbb Z$ be defined by:
\begin{equation}\label{q_def}
q(x)=\{x\ (\textrm{mod } i)\}.
\end{equation}
We have the following simple property:
$$\pi_j\circ q=q_j.$$
As a consequence of this and the structure of cylinder sets, we obtain the following observation, needed later in the description of Bunce-Deddens algebras.
\begin{prop}
The range of $q$ is dense in $ \mathbb Z/N\mathbb Z $.
\end{prop}
We denote by $\mathcal E( \mathbb Z/N\mathbb Z)$ the space of locally constant functions on $ \mathbb Z/N\mathbb Z$. This is a dense subspace of the space of continuous functions on $ \mathbb Z/N\mathbb Z$. For $f \in \mathcal E( \mathbb Z/N\mathbb Z)$, consider the sequence:
$$a_f(k)= f(q(k)),\ \ k\in\mathbb Z_{\geq 0}.$$
Then we have the following observation:
\begin{prop}\label{loc_const}
If $f \in \mathcal E( \mathbb Z/N\mathbb Z)$, then there exists $j \in \mathcal J_N$ such that $a_f(k+j)= a_f(k)$ for every $k\in \mathbb Z_{\geq 0}$. Conversely, if $a(k)$ is a $j$-periodic sequence for some $j \in \mathcal J_N$, then there is a unique $f \in \mathcal E( \mathbb Z/N\mathbb Z)$ such that $a(k)=a_f(k)$.
\end{prop}
\begin{proof}
The result follows from an observation that any locally constant function on $\mathbb Z/N\mathbb Z$ is a pullback via $\pi_j$ of a function on $\mathbb Z/j\mathbb Z$ for some $j|N$, see \cite{RV}.
\end{proof}
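For example, if $2\in\mathcal J_N$ and $f=g\circ\pi_2$ is the pullback of a function $g$ on $\mathbb Z/2\mathbb Z$, then $a_f(k)=g(k\ (\textrm{mod } 2))$ is the $2$-periodic sequence $g(0),g(1),g(0),g(1),\dots$; conversely, the alternating sequence $1,0,1,0,\dots$ arises in this way from the indicator function of $0\in\mathbb Z/2\mathbb Z$.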
\subsection{BD algebras}
Consider the Hilbert space $\ell^2(\mathbb Z_{\geq 0})$ equipped with the canonical basis $\{E_k\}_{k=0}^{\infty}$. Let $U: \ell^2(\mathbb Z_{\geq 0}) \rightarrow \ell^2(\mathbb Z_{\geq 0})$ be the unilateral shift given by $UE_k=E_{k+1}$.
The adjoint of $U$ is given by:
\[U^*E_k=\begin{cases}
E_{k-1} & \textnormal{ if } k\geq 1\\
0 & \textnormal{ if } k=0,
\end{cases}\]
and we have the relation:
\begin{equation*}
U^*U=I.
\end{equation*}
We also use the following diagonal label operator:
$$\mathbb K E_k= kE_k.$$
If $\{a(k)\}_{k=0}^\infty$ is a bounded sequence, then $a(\mathbb K)$ is a bounded operator given by:
$$a(\mathbb K) E_k= a(k) E_k.$$
In numerous formulas below we use the convention $a(-1)=0$, so that, for example, we have:
\[a({\mathbb K}-I)E_k=\begin{cases}
a(k-1)E_{k} & \textnormal{ if } k\geq 1\\
0 & \textnormal{ if } k=0.
\end{cases}\]
Given a supernatural number $N$, we define the following algebra of diagonal operators:
\[\mathcal A_{\textrm{diag, per}}(N)= \left\{ a(\mathbb K) : \; a(k) \textnormal{ is $j$-periodic for some $j|N$}\right\}.\]
The norm closure of $\mathcal A_{\textrm{diag, per}}(N)$, denoted by $ A_{\textrm{diag, per}}(N)$, is a commutative unital C$^*$-algebra which, by Proposition \ref{loc_const}, is canonically isomorphic to the C$^*$-algebra of continuous functions on $ \mathbb Z/N\mathbb Z$:
\begin{equation}\label{diag_id}
\overline{\mathcal A_{\textrm{diag, per}}(N)}=: A_{\textrm{diag, per}}(N) \cong C( \mathbb Z/N\mathbb Z).
\end{equation}
\begin{defin}
Given a supernatural number $N$, the {\it Bunce-Deddens-Toeplitz algebra}, denoted by $A(N)$, is the C$^*$-algebra of operators in $\ell^2(\mathbb Z_{\geq 0})$ generated by $U$ and $\mathcal A_{\textrm{diag, per}}(N)$:
\[A(N)= \textnormal{C}^*(U, \mathcal A_{\textrm{diag, per}}(N)).\]
\end{defin}
It is easy to see that for infinite $N$ this definition coincides with the original definition \cite{BD1}, \cite{BD2} given in the introduction.
Let $A_{\textnormal{diag}}(N)$ be the commutative $^*$-subalgebra of $A(N)$ consisting of operators diagonal with respect to the canonical basis $\{E_k\}$ of $\ell^2(\mathbb Z_{\geq 0})$. If the space of sequences which are eventually zero is denoted by $c_{00}$, we define:
\[\mathcal A_{\textrm{diag}}(N):= \{a(\mathbb K): a(k)= a_0(k) + a_{\textrm{per}}(k), \; a_0(k) \in c_{00} \textrm{ and } a_{\textrm{per}}(\mathbb K) \in \mathcal A_{\textrm{diag, per}}(N)\}\]
which is a separable unital $^*$-algebra. Some useful properties of this algebra are described in the following statement.
\begin{prop}\label{decomp_prop}
$\mathcal A_{\textrm{diag}}(N)$ is a dense $^*$-subalgebra of $A_{\textnormal{diag}}(N)$. If the space of sequences converging to zero is denoted by $c_0$, then we have the identification:
\[A_{\textnormal{diag}}(N)= \overline{\mathcal A_{\textrm{diag}}(N)}= \{a(\mathbb K): \; a(k)= a_0(k)+a_{\textrm{per}}(k), \ a_0(k)\in c_0,\ a_{\textrm{per}}(k) \in C(\mathbb Z/ N\mathbb Z) \}.
\]
\end{prop}
\begin{proof}
Other than $A_{\textrm{diag, per}}(N)$, the algebra $A_{\textnormal{diag}}(N)$ also contains additional diagonal operators that are in the algebra generated by the unilateral shift $U$. Those are precisely the compact diagonal operators: $\{a_0(\mathbb K): \ a_0(k)\in c_0\}$, see \cite{KMR1}. The additive decomposition $a(k)= a_0(k)+a_{\textrm{per}}(k)$ in $\mathcal A_{\textrm{diag}}(N)$ persists in the completion $A_{\textnormal{diag}}(N)$ because compact diagonal operators form an ideal in $A_{\textnormal{diag}}(N)$, with the quotient isomorphic to $C(\mathbb Z/ N\mathbb Z)$. In fact, we have the following easy estimate:
\begin{equation*}
\|a(\mathbb K)\|= \|a_0(\mathbb K)+a_{\textrm{per}}(\mathbb K)\|\geq \|a_{\textrm{per}}(\mathbb K)\|,
\end{equation*}
which implies directly the decomposition when passing to limits.
\end{proof}
Let $\mathcal A(N)$ denote the $^*$-algebra generated algebraically by $U, U^*$ and $\mathcal A_{\textrm{diag, per}}(N)$. We have the following description of $\mathcal A(N)$.
\begin{prop}\label{curlA}
$\mathcal A(N)$ is a dense $^*$-subalgebra of $A(N)$. Moreover, we have the following description:
\[\begin{aligned} \mathcal A(N):= \Big\{ &a\in A(N): \; a= \sum_{n\geq 0} U^n a_{n,0}^+(\mathbb K) + \sum_{n\geq 1} a_{n,0}^-(\mathbb K)(U^*)^n+ \sum_{n\geq 0} U^n a_{n,\textrm{per}}^+(\mathbb K)\\
&+ \sum_{n\geq 1} (U^*)^na_{n,\textrm{per}}^-(\mathbb K), \; a_{n,0}^{\pm}(k) \in c_{00}, a_{n, \textrm{per}}^{\pm}(\mathbb K) \in \mathcal A_{\textrm{diag, per}}(N), \textrm{ finite sums} \Big\} .\end{aligned}\]
\end{prop}
\begin{proof}
By Proposition 3.1 of \cite{KMR1} the polynomials in $U$ and $U^*$ which are compact operators are precisely the finite sums of the form:
$$\sum_{n\geq 0} U^n a_{n,0}^+(\mathbb K) + \sum_{n\geq 1} a_{n,0}^-(\mathbb K)(U^*)^n,$$
where $a_{n,0}^{\pm}(k) \in c_{00}$. They form an ideal in $\mathcal A(N)$ so that, using additionally the commutation relation \eqref{the_com_rel} below, all the remaining polynomials in $U, U^*$ and $\mathcal A_{\textrm{diag, per}}(N)$ can be written as the last two terms in the statement of the proposition.
\end{proof}
If $a(\mathbb K) \in A_{\textnormal{diag}}(N)$, then we have the commutation relation:
\begin{equation}\label{the_com_rel}
a(\mathbb K)U= Ua(\mathbb K+I).
\end{equation}
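This relation can be sanity-checked on finite truncations. The sketch below (illustration only, since the actual operators act on the infinite-dimensional space $\ell^2(\mathbb Z_{\geq 0})$) verifies it for the upper-left $M\times M$ corner of the matrices, where it holds exactly:
\begin{verbatim}
import numpy as np

M = 8
U = np.eye(M, k=-1)          # truncated shift: U E_k = E_{k+1}, U E_{M-1} = 0
a = np.random.rand(M + 1)    # values a(0), ..., a(M) of a bounded sequence
aK  = np.diag(a[:M])         # a(K):   E_k -> a(k) E_k
aK1 = np.diag(a[1:M + 1])    # a(K+I): E_k -> a(k+1) E_k
print(np.allclose(aK @ U, U @ aK1))   # True: a(K) U = U a(K+I)
\end{verbatim}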
In fact, $A(N)$ is the partial crossed product of $ A_{\textnormal{diag}}(N)$ with $\mathbb Z_{\geq 0}$ where the action of $\mathbb Z_{\geq 0}$ on $ A_{\textnormal{diag}}(N)$ is translation by one \cite{E}, \cite{St}. In the trivial case of $N=1$, the algebra $A(1)$ is the Toeplitz algebra, i.e., the C$^*$-algebra generated by $U$. If $N$ is finite, we can also identify $A(N)$ as the tensor product of the Toeplitz algebra with matrices of size $N \times N$ (see \cite{D} and also Section 4):
\[A(N) \cong A(1) \otimes M_N(\mathbb C). \]
If $\mathcal K$ are the compact operators in $\ell^2(\mathbb Z_{\geq 0})$, then $\mathcal K$ is an ideal in $A(N)$, and we have the short exact sequence:
\[0 \rightarrow \mathcal K \rightarrow A(N) \xrightarrow{\xi} B(N) \rightarrow 0\]
where $B(N):= A(N)/\mathcal K$ and $\xi : A(N)\to A(N)/\mathcal K$ is the quotient map.
For any supernatural number $N$, we will call $B(N)$ the {\it Bunce-Deddens algebra}. For infinite $N$ the Bunce-Deddens algebras are simple, mutually non-isomorphic, and have a unique tracial state \cite{BD1},\cite{BD2},\cite{D}.
\subsection{Structure of BD algebras}
We now proceed to a more detailed description of the Bunce-Deddens algebras $B(N)$.
Suppose $\{E_l\}_{l\in \mathbb Z}$ is the canonical basis of $\ell^2(\mathbb Z)$, we let $V:\ell^2(\mathbb Z) \rightarrow \ell^2(\mathbb Z)$ be the bilateral shift given by:
$$VE_l=E_{l+1},$$
let $\mathbb L$ be the diagonal label operator:
$$\mathbb L E_l= lE_l,$$
and let $\mathcal B_{\textnormal{diag}}(N)$ be defined as:
\[\mathcal B_{\textnormal{diag}}(N):= \{b(\mathbb L): \; b(l+j)=b(l) \textnormal{ for some } j\mid N\}.\]
Notice that $B_{\textnormal{diag}}(N):= \overline{\mathcal B_{\textnormal{diag}}(N)}$ is naturally isomorphic to $C( \mathbb Z/N\mathbb Z )$, just like in (\ref{diag_id}).
Similarly to \eqref{the_com_rel} we have the commutation relation:
\begin{equation}\label{the_com_rel2}
b(\mathbb L)V= Vb(\mathbb L+I).
\end{equation}
For any $N$ we introduce the Toeplitz-like operator $T: \mathcal B(\ell^2(\mathbb Z)) \rightarrow \mathcal B(\ell^2(\mathbb Z_{\geq 0}))$ given by the formula:
\begin{equation}\label{Toep_def}
T(b)f= Pbf,
\end{equation}
where $f\in \ell^2(\mathbb Z_{\geq 0}) $, and $P: \ell^2(\mathbb Z) \rightarrow \ell^2(\mathbb Z)$ is the orthogonal projection onto the subspace $S=$ span$\{E_l : l\geq 0\}$, which is naturally isomorphic with $\ell^2(\mathbb Z_{\geq 0})$. It is clear that we have:
$$T(I|_{\ell^2(\mathbb Z)})=I|_{\ell^2(\mathbb Z_{\geq 0})}.$$
The operator $T$ is a linear, continuous, and $*$-preserving map between the spaces of bounded operators on $\ell^2(\mathbb Z)$ and $\ell^2(\mathbb Z_{\geq 0})$, and moreover it has the following properties:
\begin{lem} \label{Toep_lemma}
For every $a,b \in \mathcal B(\ell^2(\mathbb Z))$ and any bounded diagonal operator $b (\mathbb L)$:
\begin{enumerate}[(i)]
\item $T(b\,V^n)= T(b) U^n$ and $T(V^{-n}b)= (U^*)^n T(b)$ for $n \geq 0$
\item $T(a\,b (\mathbb L))= T(a) b (\mathbb K)$
\item $T( b (\mathbb L)\,a)= b (\mathbb K) T(a)$.
\end{enumerate}
\end{lem}
\begin{proof}
Those statements are obtained via direct calculations. For example, we have:
\begin{equation*}
T(bV^n)f=PbV^nf=Pb PV^nf =T(b) U^n f
\end{equation*}
because for $n\geq 0$ the operator $V^n$ preserves $S$. Other calculations are similar.
\end{proof}
Since any element in C$^*(V, \mathcal B_{\textnormal{diag}}(N))$ can be approximated by a finite sum of the form:
\begin{equation}\label{pol_In_B(N)}
\sum_{n \in \mathbb Z} V^nb_{n} (\mathbb L),
\end{equation}
with $b_{n} (\mathbb L) \in \mathcal B_{\textnormal{diag}}(N)$, it is clear that $T$ maps C$^*(V, \mathcal B_{\textnormal{diag}}(N))$ into $A(N)$.
\begin{prop}\label{iden_2}
For any supernatural number $N$ the algebras $B(N)$ and C\,$^*(V, \mathcal B_{\textnormal{diag}}(N))$ are isomorphic.
\end{prop}
\begin{proof}
For any $b_1, b_2 \in $ C$^*(V, \mathcal B_{\textnormal{diag}}(N))$, it can be shown just like for regular Toeplitz operators, that:
\[T(b_1b_2)= T(b_1)T(b_2) + K\]
for some compact operator $K \in \mathcal K$. Now, the map
\[[T]: \textnormal{C}^*(V, \mathcal B_{\textnormal{diag}}(N))\rightarrow A(N)/\mathcal K\]
defined by:
\[[T](b)= T(b) + \mathcal K\]
gives the required isomorphism.
\end{proof}
Let $\mathcal B(N)$ be the $^*$-algebra generated algebraically by $V, V^{-1}$ and $\mathcal B_{\textnormal{diag}}(N)$. Notice that we have:
$$\mathcal B(N)= \mathcal A(N)/(\mathcal A(N) \cap \mathcal K),$$
i.e. $\mathcal B(N)$ is the image of $\mathcal A(N)$ under the quotient map $\xi$. Also, because of the commutation relation \eqref{the_com_rel2}, the elements of $\mathcal B(N)$ are precisely the finite sums of the form given in \eqref{pol_In_B(N)}.
We have the following further identification of $B(N)$, see \cite{E}.
\begin{prop}\label{cross_iden}
For infinite $N$ the algebra $B(N)$ can be identified with the crossed product of $C(\mathbb Z/N\mathbb Z)$ with $\mathbb Z$, acting on $C(\mathbb Z/N\mathbb Z)$ via shifts. i.e.,
\[B(N) \cong C(\mathbb Z/N\mathbb Z) \rtimes_{\sigma} \mathbb Z\] where for $f \in C(\mathbb Z/N\mathbb Z)$, $\sigma f(x)= f(x+1)$.
\end{prop}
For finite $N$ one can identify $B(N)$ with $C(S^1) \otimes M_N(\mathbb C)$. This is useful for the purpose of classifying derivations in $A(N)$ and $B(N)$ in the next section. We describe this identification in detail below.
\begin{prop}\label{iden_3}
For a finite supernatural number $N$ there is an isomorphism:
$$\textnormal{C}^*(V, \mathcal B_{\textnormal{diag}}(N)) \cong C(S^1) \otimes M_N(\mathbb C).$$
\end{prop}
\begin{proof}
We first relabel the basis elements of $\ell^2(\mathbb Z)$ as follows:
$$\{E_{kN+j}\; \vert \; k \in \mathbb Z, 0 \leq j <N\}.$$
Consider the following sequence:
\begin{equation}\label{eN_def}
e_N(l)=\begin{cases}
1 & \textnormal{ if } N \mid l\\
0 & \textnormal{otherwise}.
\end{cases}
\end{equation}
Then clearly we have periodicity
$$e_N(l+N)=e_N(l),$$
and the following formula:
$$e_N(\mathbb L) E_{kN+j}= \delta_{j,0} E_{kN+j}.$$
For $0 \leq s,r < N$, we define the operators:
$$P_{sr}:= V^s e_N(\mathbb L) V^{-r}.$$
It is easy to verify using the above formulas that $P_{sr}$ have the following properties:
\begin{enumerate}[(i)]
\item $P_{sr}^*= P_{rs}$
\item $P_{sr} P_{tq}= \delta_{tr}P_{sq}$.
\end{enumerate}
As a consequence,
if $E_{sr}$ are the standard basis elements of $M_N(\mathbb C)$, then the map $P_{sr} \mapsto E_{sr}$ induces the isomorphism C$^*(P_{sr}) \cong M_N(\mathbb C)$. Moreover, any element of $\mathcal B_{\textnormal{diag}}(N)$ can be written as a linear combination of $P_{rr}$, $0 \leq r < N$. We also have the relation:
\[V= P_{10}+P_{21}+ \cdots+ P_{(N-1)(N-2)} + V^N P_{0(N-1)},\]
which can be verified by a direct calculation on basis elements.
Therefore, we obtain:
$$\textnormal{C}^*(V, \mathcal B_{\textnormal{diag}}(N)) \cong {\textnormal{C}}^*(P_{sr}, V^N).$$
Consequently, because $V^N$ commutes with the operators
$P_{sr}$ for all $0 \leq s,r < N$, we have:
\[\textnormal{C}^*(V, \mathcal B_{\textnormal{diag}}(N)) \cong \textnormal{C}^*(V^N) \otimes \textnormal{C}^*(P_{sr}) \cong C(S^1) \otimes M_N(\mathbb C).\]
Here C$^*(V^N)$ is isomorphic with $C(S^1)$ because $V^N$ is equivalent to the usual bilateral shift.
\end{proof}
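For example, for $N=2$ the operators above act on the basis by
\[P_{00}E_{2k}=E_{2k},\; P_{00}E_{2k+1}=0,\; P_{11}E_{2k+1}=E_{2k+1},\; P_{11}E_{2k}=0,\]
\[P_{10}E_{2k}=E_{2k+1},\; P_{10}E_{2k+1}=0,\; P_{01}E_{2k+1}=E_{2k},\; P_{01}E_{2k}=0,\]
and the displayed relation reads $V=P_{10}+V^2P_{01}$, as one checks directly on the vectors $E_{2k}$ and $E_{2k+1}$.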
\section{Covariant Derivations}
\subsection{Derivations.}
A {\it derivation} $d$ in $A(N)$ with domain $\mathcal A(N)$ is a linear map $d: \mathcal A(N) \rightarrow A(N)$ which satisfies the Leibniz rule:
\[d(ab)= d(a)b+ ad(b)\]
for all $a,b \in \mathcal A(N)$. In this paper we only study derivations $d$ with domain $\mathcal A(N)$, and derivations $\delta$ in $B(N)$ with domain $\mathcal B(N)$, so we will not explicitly mention domains below.
A derivation $d$ is called {\it approximately inner} if there are $a_n\in A(N)$ such that
$$d(a) = \lim_{n\to\infty}[a_n,a]$$
for $a\in\mathcal A(N)$.
The first important observation is that any derivation in $A(N)$ preserves compact operators.
\begin{theo}
If $d: \mathcal A(N) \rightarrow A(N)$ is a derivation, then $d: \mathcal A(N)\cap\mathcal K \rightarrow \mathcal K$.
\end{theo}
\begin{proof}
It is enough to prove that $d(P_0)$ is compact, where $P_0$ is the orthogonal projection onto the one-dimensional subspace spanned by $E_0$, because $\mathcal A(N)\cap\mathcal K$ consists of linear combinations of expressions of the form $U^rP_0(U^*)^s$. The result then follows immediately from the Leibniz property. To see that $d(P_0)$ is compact, simply apply $d$ to both sides of the relation $P_0=P_0^2$ to obtain:
$$d(P_0)=d(P_0)P_0+P_0d(P_0)\in \mathcal K,$$
which completes the proof.
\end{proof}
As a consequence of the above theorem, if $d:\mathcal A(N) \rightarrow A(N)$ is a derivation in $A(N)$, then $[d]: \mathcal B(N) \rightarrow B(N)$ defined by
\[[d](a+\mathcal K) := da+ \mathcal K\]
gives a derivation in $B(N)$ where, as before, $\mathcal B(N)$ is the image of $\mathcal A(N)$ under the quotient map $\xi$.
\subsection{Classification of covariant derivations.}
For each $\theta \in [0, 2\pi)$, let $\rho^{\mathbb K}_{\theta}: A(N) \rightarrow A(N)$ be defined by:
\[\rho^{\mathbb K}_{\theta} (a)= e^{i\theta \mathbb K} a e^{-i\theta \mathbb K}.\]
Then we have:
\begin{equation}\label{rho_action}
\rho^{\mathbb K}_{\theta}(U)= e^{i\theta}U,\ \rho^{\mathbb K}_{\theta}(U^*)= e^{-i\theta}U^* \textrm{ and } \rho^{\mathbb K}_{\theta}(a(\mathbb K))= a(\mathbb K).
\end{equation}
Thus, $\rho^{\mathbb K}_{\theta}$ is a well-defined automorphism of $A(N)$, and $\rho^{\mathbb K}_{\theta}$ preserves $\mathcal A(N)$.
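More generally, on monomials the action reads:
$$\rho^{\mathbb K}_{\theta}\left(U^ra(\mathbb K)(U^*)^s\right)= e^{i(r-s)\theta}\, U^ra(\mathbb K)(U^*)^s,$$
so the difference $r-s$ plays the role of a degree with respect to this circle action.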
\begin{defin}
Given $n \in \mathbb Z$, a derivation $d$ in $A(N)$ is said to be an {\it $n$-covariant derivation} if the relation
$$(\rho^{\mathbb K}_{\theta})^{-1}d(\rho^{\mathbb K}_{\theta}(a))= e^{-in\theta} d(a)$$ holds for all $a\in \mathcal A(N)$ and all $\theta\in[0,2\pi)$.
\end{defin}
Similarly, for $\theta \in [0, 2\pi)$, we let $\rho^{\mathbb L}_{\theta}$ be the automorphism of $B(N)$, preserving $\mathcal B(N)$, defined by:
\[\rho^{\mathbb L}_{\theta}(b)= e^{i\theta \mathbb L} b e^{-i\theta \mathbb L}.\]
\begin{defin}
Given $n \in \mathbb Z$, a derivation $\delta$ in $B(N)$ is said to be an {\it $n$-covariant derivation} if the relation
$$(\rho^{\mathbb L}_{\theta})^{-1}\delta(\rho^{\mathbb L}_{\theta}(b))= e^{-in\theta} \delta(b)$$ holds for all $b\in \mathcal B(N)$ and all $\theta\in[0,2\pi)$.
\end{defin}
An important step in classifying derivations on Bunce-Deddens-Toeplitz algebras is the classification of the $n$-covariant derivations in $A(N)$, since they arise as Fourier components of general derivations. First we establish the following useful description of covariant subspaces in $A(N)$.
\begin{prop}
We have the following equality:
\[A_{\textnormal{diag}}(N)= \{a\in A(N): \; \rho^{\mathbb K}_{\theta}(a)= a\}.\]
\end{prop}
\begin{proof}
Clearly if $a\in A_{\textnormal{diag}}(N)$, then $ \rho^{\mathbb K}_{\theta}(a)= a$ by formula (\ref{rho_action}).
Conversely, if $a\in A(N)$ satisfies $ \rho^{\mathbb K}_{\theta}(a)= a$ then the equation:
$$(E_k, e^{i\theta \mathbb K} a e^{-i\theta \mathbb K} E_l)= (E_k, aE_l)$$
implies that we have:
$$e^{i\theta (k-l)} (E_k, aE_l)= (E_k, aE_l)$$
for every $\theta \in [0, 2\pi)$ and every $k,l$, from which it follows that $a$ is a diagonal operator.
\end{proof}
\begin{prop}\label{n_spec_subsp}
Denote by $A_n(N)$ the $n$-th spectral subspace of $\rho^{\mathbb K}_{\theta}$:
\[A_n(N):= \{a\in A(N): \; \rho^{\mathbb K}_{\theta}(a)= e^{i n\theta} a\}.\]
Then we have:
\[A_n(N)= \begin{cases}
\{U^na(\mathbb K): \; a(\mathbb K)\in A_{\textnormal{diag}}(N)\} & \textnormal{ if } n\geq 0\\
\{a(\mathbb K)(U^*)^{-n}: \; a(\mathbb K)\in A_{\textnormal{diag}}(N)\} & \textnormal{ if } n<0.
\end{cases}\]
\end{prop}
\begin{proof}
We will give the proof for $n>0$; the proof for $n < 0$ works similarly.
Since we have:
$$\rho^{\mathbb K}_{\theta}(U^n a(\mathbb K))=e^{in\theta}U^n a(\mathbb K),$$
one containment clearly follows. Conversely, if $a\in A_n(N)$ then $a(U^*)^n \in A_{\textnormal{diag}}(N)$, hence by the previous proposition it is of the form $a(U^*)^n= b(\mathbb K)$ for some $b(\mathbb K)\in A_{\textnormal{diag}}(N)$. Consequently, we have:
$$a=b(\mathbb K)U^n = U^n b(\mathbb K+nI),$$
which shows the other containment.
\end{proof}
It turns out that $n$-covariant derivations in $A(N)$ can be described explicitly.
\begin{theo} \label{n_cov_der_formula}
If $d$ is an $n$-covariant derivation in $A(N)$, then there exists a diagonal operator $\beta_n(\mathbb K)$ such that $d$ can be written as:
\begin{equation}\label{d_com_formulas}
d(a)=\begin{cases}
[U^n\beta_n(\mathbb K), a] & \textnormal{ if } n\geq 0\\
[\beta_n(\mathbb K)(U^*)^{-n}, a] & \textnormal{ if } n< 0,
\end{cases}
\end{equation}
where the operator $\beta_n(\mathbb K)$ satisfies the following conditions: if $N$ is infinite and $n \neq 0$ or $N$ is finite but $N \nmid n$, then
$$\beta_n(\mathbb K) \in A_{\textnormal{diag}}(N),$$ (so in particular it is bounded); otherwise:
$$\beta_n(\mathbb K)-\beta_n(\mathbb K-I)\in A_{\textnormal{diag}}(N).$$
The operator $\beta_n(\mathbb K)$ is unique except when $n=0$ where $\beta_0(\mathbb K)$ is unique up to an additive constant. Conversely, given any $\beta_n(\mathbb K)$ satisfying those properties, the formulas above define $n$-covariant derivations in $A(N)$.
\end{theo}
\begin{proof}
Suppose $n>0$ and $d$ is an $n$-covariant derivation in $A(N)$. The covariance relation implies that $d(a(\mathbb K)) \in A_n(N)$, and hence the formula:
\begin{equation}\label{d_tilde}
d(a(\mathbb K))= U^n \tilde d(a(\mathbb K))
\end{equation}
for some $\tilde d(a(\mathbb K)) \in A_{\textnormal{diag}}(N)$ by Proposition \ref{n_spec_subsp}.
Similarly, there exists $\alpha_n(\mathbb K) \in A_{\textnormal{diag}}(N)$ such that:
\begin{equation}\label{d_on_U_U*}
\begin{aligned} d(U^*)&= -U^{n-1}\alpha_n(\mathbb K) \;\;\;\textnormal{ and }\\
d(U)&= U^{n+1}\alpha_n(\mathbb K+I),
\end{aligned}\end{equation}
where the last equation follows from the relation $d(U^*)U+U^*d(U)=0$.
From formula \eqref{d_tilde} for every $a(\mathbb K), b(\mathbb K) \in \mathcal A_{\textnormal{diag}}(N)$ we have the following:
\[\begin{aligned} \tilde d(a(\mathbb K)b(\mathbb K))&= (U^*)^n d(a(\mathbb K)b(\mathbb K)) \\
&=(U^*)^nd(a(\mathbb K)) b(\mathbb K) + (U^*)^n a(\mathbb K)d(b(\mathbb K)) \\
&=\tilde d(a(\mathbb K))b(\mathbb K) + a(\mathbb K +nI) \tilde d(b(\mathbb K)). \end{aligned}\]
Since $\tilde d(a(\mathbb K)b(\mathbb K))= \tilde d(b(\mathbb K)a(\mathbb K))$, it follows that:
\begin{equation}\label{a-bequ}
\tilde d(a(\mathbb K))[b(\mathbb K )-b(\mathbb K +nI)]= \tilde d(b(\mathbb K))[a(\mathbb K )-a(\mathbb K +nI)].
\end{equation}
For given $n$ we can always choose $a(\mathbb K)$ such that $a(k )-a(k+n) \neq 0$ for every $k$. Using such $a(\mathbb K)$ we define:
$$\beta_n(\mathbb K)= \tilde d(a(\mathbb K))(a(\mathbb K )-a(\mathbb K +nI))^{-1},$$
which is independent of the choice of $a$ by the formula (\ref{a-bequ}). It follows that:
\begin{equation}\label{d_on_a(K)}
d(a(\mathbb K))= U^n\beta_n(\mathbb K) [a(\mathbb K )-a(\mathbb K +nI)]
\end{equation}
for any $a(\mathbb K)$: indeed, if $a(k)-a(k+n) = 0$ for some $k$, then formula (\ref{a-bequ}) implies that both sides of the above equation vanish at such $k$.
Next, applying $d$ to the commutation relation $U^*a(\mathbb K)=a(\mathbb K +I)U^*$ we obtain:
\[(\beta_n(\mathbb K ) - \beta_n(\mathbb K-I) - \alpha_n(\mathbb K)) [a(\mathbb K )-a(\mathbb K +nI)]=0.\]
It follows that we must have:
\[ \alpha_n(\mathbb K) =\beta_n(\mathbb K ) - \beta_n(\mathbb K-I).\]
This leads to formulas:
\begin{equation*}
\begin{aligned} d(U^*)&= -U^{n-1}(\beta_n(\mathbb K ) - \beta_n(\mathbb K-I) ) \;\;\;\textnormal{ and }\\
d(U)&= U^{n+1}(\beta_n(\mathbb K + I) - \beta_n(\mathbb K)),
\end{aligned}\end{equation*}
and so we have that $d(a)= [U^n\beta_n(\mathbb K), a]$ holds true for all the generators and hence for every $a\in \mathcal A(N)$ and $n>0$. Notice also that we can compute $\beta_n$ in terms of $\alpha_n$ by the formula:
\[\beta_n(k)= \sum_{i=0}^{k}\alpha_{n}(i).\]
The proof for $n<0$ works similarly.
If $n=0$, the formulas for $d(U)$ and $d(U^*)$ are:
\[d(U)=U\alpha_0(\mathbb K), \;\;\; d(U^*)=-\alpha_0(\mathbb K) U^*.\]
We claim that in this case we have:
$$d(a(\mathbb K))=0.$$
This is because for an invariant derivation we have:
$$d:\mathcal A_{\textnormal{diag}}(N)\to A_{\textnormal{diag}}(N),$$
and elements of $\mathcal A_{\textnormal{diag}}(N)$ are finite linear combinations of diagonal orthogonal projections. If $P\in \mathcal A_{\textnormal{diag}}(N)$ is such a projection, by applying $d$ to $P^2=P$, and using the fact that $d(P)$ is diagonal and hence commutes with $P$, we obtain:
$$(I-2P)d(P)=0,$$
which implies $d(P)=0$ because $I-2P$ is invertible, as $(I-2P)^2=I$.
To obtain $d(a)=[\beta_0({\mathbb K}),a]$, we define the operator $\beta_0(\mathbb K)$ as the solution of the equation:
$$\alpha_0(\mathbb K)=\beta_0(\mathbb K+I)-\beta_0(\mathbb K),$$
and so it is determined only up to an additive constant. We usually make a particular choice:
\[\beta_0(k)= \sum_{i=0}^{k-1}\alpha_{0}(i),\]
with $\beta_0(0)=0$.
There are additional restrictions on $\beta_n(\mathbb K)$. For $N$ infinite and $n \neq 0$, we must have
\begin{equation*}
\tilde d(a(\mathbb K))=\beta_n(\mathbb K) [a(\mathbb K )-a(\mathbb K +nI)] \in A_{\textnormal{diag}}(N)
\end{equation*}
for every $a(\mathbb K) \in \mathcal A_{\textnormal{diag}}(N)$. By choosing for example $a(k)= e^{2\pi ik/l}$, an $l$-periodic sequence, with $l\mid N$ but $l \nmid n$, it is clear that $a( k +n)-a( k) \neq 0$ for every $k$ and so the operator $a({\mathbb K} +n)-a({\mathbb K})$ is invertible. It follows that $\beta_n(\mathbb K)$ must belong to $A_{\textnormal{diag}}(N)$.
If $N$ is finite and $N\nmid n$ then we can choose an $N$-periodic sequence and argue as above to show that $\beta_n(\mathbb K) \in A_{\textnormal{diag}}(N)$.
Conversely, given any $\beta_n(\mathbb K)$ satisfying the properties and derivation $d$ given by the commutator formula (\ref{d_com_formulas}), the expressions for $d$ on generators (\ref{d_on_U_U*}), (\ref{d_on_a(K)}) imply that $d$ is a well defined derivation in $A(N)$.
In particular, if $N$ is finite and $N\mid n$, then any $l$-periodic sequence $a(k)$ with $l\mid N$ is also $n$-periodic, so that $a(k)-a(k+n)=0$. Moreover, if $a(k)$ is a sequence that is eventually zero, then $a(k)-a(k+n)=0$ for all large enough $k$. Thus $\tilde d(a(\mathbb K))$ is an eventually zero sequence, and there are no additional restrictions on $\beta_n(\mathbb K)$ in this case.
\end{proof}
From this theorem it is clear that if $N$ is infinite and $n \neq 0$ or $N$ is finite but $N \nmid n$, then the $n$-covariant derivation $d$ in $A(N)$ is inner. Otherwise such an $n$-covariant derivation $d$ is in general not inner.
We also state here, without a detailed proof, a similar classification of $n$-covariant derivations on Bunce-Deddens algebras. In fact, all of the arguments in the unilateral shift case of Theorem \ref{n_cov_der_formula} work the same way (if anything, more simply) in the bilateral case needed for Bunce-Deddens algebras.
\begin{theo}\label{BDncov}
If $\delta$ is an $n$-covariant derivation in $B(N)$, then there exists $\eta_n(\mathbb L)$ such that
\begin{equation*}
\delta(a)=[V^n\eta_n(\mathbb L), a]
\end{equation*}
for every $a$ in $\mathcal B(N)$.
If $N$ is infinite and $n \neq 0$ or $N$ is finite but $N \nmid n$ then:
$$\eta_n(\mathbb L)\in B_{\textnormal{diag}}(N),$$
otherwise we have:
$$\eta_n(\mathbb L+I)-\eta_n(\mathbb L)\in B_{\textnormal{diag}}(N).$$
Conversely, given any $\eta_n(\mathbb L)$ satisfying those properties, the formulas above define $n$-covariant derivations in $B(N)$.
\end{theo}
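As a simple illustration, the unbounded diagonal operator $\eta_0(\mathbb L)=\mathbb L$ satisfies the second condition, since $\eta_0(\mathbb L+I)-\eta_0(\mathbb L)=I\in B_{\textnormal{diag}}(N)$, and it produces the invariant derivation:
$$\delta(b)=[\mathbb L, b], \quad\textrm{ with }\quad \delta(V)=V,\ \ \delta(a(\mathbb L))=0,$$
which reappears in the classification theorems below.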
\subsection{Properties of covariant derivations}
In general, if an $n$-covariant derivation is approximately inner then it can also be approximated by inner $n$-covariant derivations.
\begin{prop}\label{CovAppProp}
Suppose $d$ is an $n$-covariant derivation in $A(N)$. If $d$ is approximately inner, then there exists a sequence of inner $n$-covariant derivations $\{d^M\}$ in $A(N)$ such that, for every $a \in \mathcal A(N)$, we have:
\[d(a)= \lim_{M \rightarrow \infty} d^M(a). \]
\end{prop}
\begin{proof}
Given an element $a\in A(N)$, define its $\rho^{\mathbb K}_{\theta}$ $n$-th Fourier component by:
\[(a)_n= \frac 1{2\pi} \int_0^{2\pi} e^{-in\theta} \rho^{\mathbb K}_{\theta}(a) d\theta.\]
If $d$ is approximately inner then there is a sequence $\{z^M\}$, with $z^M \in A(N)$ such that:
$$d(a)= \lim \limits_{M \rightarrow \infty} [a, z^M]$$
for every $a \in \mathcal A(N)$. It can be easily checked that the Fourier component $(z^M)_n$ is in $A_{n}(N)$. So, it is sufficient to show that:
\[d(a)= \lim_{M \rightarrow \infty} [a, (z^M)_n]\]
on the generators $U$, $ U^*$ and $a({\mathbb K})$ as the result then follows from Proposition \ref{n_spec_subsp}.
Since
$$Uz^M- z^M U \rightarrow d(U),$$
we equivalently have:
$$e^{-in \theta} (z^M -U^*z^M U) \rightarrow e^{-in\theta} U^* d(U).$$
So, given $\epsilon >0$, there is an integer $m$ such that for every $M \geq m$, we can estimate:
\[\|e^{-in\theta} (z^M -U^*z^M U - U^*d(U)) \| < \epsilon .\]
Since $\rho^{\mathbb K}_{\theta}(U^* d(U))= e^{in\theta} U^*d(U)$, the estimate can be written as:
\[\begin{aligned} \|e^{-in\theta} \rho^{\mathbb K}_{\theta}(z^M) - e^{-in\theta}U^*\rho^{\mathbb K}_{\theta}(z^M) U - U^*d(U) \| = \|e^{-in\theta} \rho^{\mathbb K}_{\theta}(z^M - U^*z^M U - U^*d(U)) \| < \epsilon.
\end{aligned}\]
Consequently, we have:
\[\begin{aligned}&\| (z^M)_n - U^* (z^M)_nU - U^*d(U) \| \leq \\
&\leq\frac 1{2\pi} \int_0^{2\pi} \|e^{-in\theta} \rho^{\mathbb K}_{\theta}(z^M) - e^{-in\theta}U^*\rho^{\mathbb K}_{\theta}(z^M) U - U^*d(U) \| d\theta < \epsilon,\end{aligned}\]
thus proving the convergence:
$$d(U)= \lim_{M \rightarrow \infty} [U, (z^M)_n].$$
A similar proof works for $d$ acting on $U^*$ and $a({\mathbb K})$, so the result follows.
\end{proof}
The explicit formulas of Theorem \ref{n_cov_der_formula} allow us to discuss when an $n$-covariant derivation is approximately inner. There are several cases to consider. When $N$ is infinite we separately consider the case when $n=0$, while when $N$ is finite, there are differences depending on whether $n$ is a multiple of $N$ or not.
First consider an invariant derivation $d$ in $A(N)$ given by
\[d(U)=U\alpha_0(\mathbb K), \;\;\; d(U^*)=-\alpha_0(\mathbb K) U^*,\;\;\; d(a(\mathbb K))=0.\]
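A direct check confirms that these formulas agree with the commutator description $d(a)=[\beta_0(\mathbb K),a]$ of Theorem \ref{n_cov_der_formula}: using the commutation relation $\beta_0(\mathbb K)U= U\beta_0(\mathbb K+I)$ and $\alpha_0(\mathbb K)=\beta_0(\mathbb K+I)-\beta_0(\mathbb K)$, we get:
$$[\beta_0(\mathbb K), U]= U\left(\beta_0(\mathbb K+I)-\beta_0(\mathbb K)\right)= U\alpha_0(\mathbb K),$$
and similarly $[\beta_0(\mathbb K), U^*]=-\alpha_0(\mathbb K)U^*$.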
\begin{lem}\label{inv_app_inn_c_0}
Suppose $d$ is an invariant derivation in $A(N)$ with $N$ infinite. If $\alpha_0(k) \in c_0$ then $d$ is approximately inner.
\end{lem}
\begin{proof} As in \cite{KMR1}, we define $\alpha_{0}^M (\mathbb K) \in A_{\textnormal{diag}}(N)$ by:
\[\alpha_{0}^M (k)= \begin{cases}
\alpha_{0}(k) & \textnormal{ if } k\leq M\\
0 & \textnormal{otherwise}.
\end{cases}\]
Then we see that $ \alpha_{0}^M (\mathbb K) $ converges to $\alpha_{0}(\mathbb K)$ in norm as $M$ tends to infinity because:
\[\| \alpha_{0}(\mathbb K) - \alpha_{0}^M (\mathbb K) \| = \sup_{k} |\alpha_{0}(k) - \alpha_{0}^M (k)|= \sup_{k>M} |\alpha_{0}(k)| \xrightarrow{M \rightarrow \infty} 0.\]
The sequence $\beta_{0}^M (k)$ defined by:
\[\beta_{0}^M (k):= \sum_{j=0}^{k-1} \alpha_{0}^M (j)\]
is eventually constant and in particular, it is bounded. Therefore, we have that
$$d^M(a):= [\beta_{0}^M(\mathbb K), a]$$
is an inner derivation. To prove that $d^M(a) \rightarrow d(a)$ as $M \rightarrow \infty$ for every $a\in \mathcal A(N)$, it is enough to check that on the generators $U$ and $U^*$. But this follows easily since we have:
\[
d(U)- d^M(U)= U(\alpha_{0}(\mathbb K) - \alpha_{0}^M (\mathbb K)),\]
and similarly for $U^*$.
Thus $d$ is an approximately inner derivation.
\end{proof}
The second case of invariant approximately inner derivations is described next.
\begin{lem}\label{inv_app_inn_per}
Suppose $d$ is an invariant derivation in $A(N)$ with an infinite supernatural number $N$. If $\alpha_0(k)=f(q(k))$ where $f \in C( \mathbb Z/N\mathbb Z)$ with $\int_ {\mathbb Z/N\mathbb Z} f(x) d_Hx=0$ and $q$ is the quotient map introduced in \eqref{q_def}, then $d$ is approximately inner.
\end{lem}
\begin{proof}
If $f \in C( \mathbb Z/N\mathbb Z)$, then
$$f(x)= \lim_{M \rightarrow \infty} f^M(x)$$
uniformly for some sequence of locally constant functions $f^M(x)$ on $ \mathbb Z/N\mathbb Z$. By Proposition \ref{loc_const} there is a sequence of numbers $\{j^M\}$ such that $j^M \mid N$ and for every $M$ the sequence $f^M(q(k))$ is $j^M$-periodic. Moreover, since the uniform convergence and the assumption on $f$ imply:
$$\int_{\mathbb Z/N\mathbb Z} f^M(x) d_Hx \xrightarrow{M \rightarrow \infty} 0,$$
by subtracting these constants if necessary, we can choose $f^M$ so that:
\begin{equation}\label{zero_int}
\int_{\mathbb Z/N\mathbb Z} f^M(x) d_Hx=0.
\end{equation}
Now consider the sequence $\{\alpha_0^M(k)\}:=\{f^M(q(k))\} $. A simple calculation shows that the equation (\ref{zero_int}) is equivalent to the following condition:
$$\sum_{i=0}^{j^M-1}\alpha_0^M(i)=0.$$
Furthermore, defining
\[\beta_0^M(k)= \sum_{i=0}^{k-1}\alpha_0^M(i),\]
we have that $\beta_0^M$ is also $j^M$-periodic because:
\[\beta_0^M(k+j^M)= \sum_{i=0}^{k-1}\alpha_0^M(i) + \sum_{i=k}^{k+j^M-1}\alpha_0^M(i)= \sum_{i=0}^{k-1}\alpha_0^M(i) + \sum_{i=0}^{j^M-1}\alpha_0^M(i)=\beta_0^M(k).\]
Let $d^M: \mathcal A(N) \rightarrow A(N)$ be the derivation defined by:
\[d^M(U)=U\alpha_0^M(\mathbb K), \;\;\; d^M(U^*)=-\alpha_0^M(\mathbb K) U^*,\;\;\; d^M(a(\mathbb K))=0.\]
Thus, we have:
$$d^M(a)= [a, \beta_0^M(\mathbb K) ]$$
for every $a\in \mathcal A(N)$ and, since $\beta_0^M(\mathbb K) \in A_{\textnormal{diag}}(N)$, it follows that $d^M$ is an inner derivation. Moreover, the sequence $\{d^M\}$ approximates $d$ because:
\[\|d(U)- d^M(U)\|= \sup_k |\alpha_0(k)- \alpha_0^M(k)|= \sup_k |f(q(k))- f^M(q(k))| \rightarrow 0\] as $M \rightarrow \infty$. Similarly, we obtain that:
$$\lim_{M \rightarrow \infty} d^M(U^*)= d(U^*),$$
and therefore, $d$ is approximately inner.
\end{proof}
Now we consider the case of finite $N$ and $N \mid n$. Further examples of approximately inner $n$-covariant derivations are described by the following lemma.
\begin{lem}\label{n_cov_app_inn}
Suppose $d$ is an $n$-covariant derivation in $A(N)$ where $N$ is finite and $N \mid n$. Then $d$ is approximately inner if $\alpha_n(k) \in c_0$.
\end{lem}
\begin{proof}
The proof is essentially the same as that of Lemma \ref{inv_app_inn_c_0}.
\end{proof}
To complete the classification of $n$-covariant derivations we introduce special derivations $d_{n,\mathbb K}$ in $A(N)$ given by:
\[d_{n,\mathbb K}(a)=\begin{cases}
[U^n(\mathbb K +I), a] & \textnormal{ if } n\geq 0\\
[(\mathbb K + I)(U^*)^{-n}, a] & \textnormal{ if } n<0.
\end{cases}\]
Notice that by Theorem \ref{n_cov_der_formula}, derivations $d_{n,\mathbb K}(a)$ are well-defined in $A(N)$ when $N$ is infinite and $n = 0$ or when $N$ is finite and $N \mid n$, because we have the following relation for the diagonal operator coefficients:
$$(\mathbb K+I)-\mathbb K=I\in A_{\textnormal{diag}}(N),$$
and there are no other restrictions on the coefficients for those cases.
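For example, on the generator $U$ and for $n\geq 0$, the commutation relation $(\mathbb K+I)U= U(\mathbb K+2I)$ gives:
$$d_{n,\mathbb K}(U)= U^n(\mathbb K+I)U- U^{n+1}(\mathbb K+I)= U^{n+1}\left((\mathbb K+2I)-(\mathbb K+I)\right)= U^{n+1},$$
a formula which is used in the proof below.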
\begin{theo}\label{ncov_N|n}
If $d$ is an $n$-covariant derivation in $A(N)$ where $N$ is infinite and $n = 0$ or when $N$ is finite and $N \mid n$, then there exists a unique constant $C_n$ such that
\[d(a)= C_n d_{n,\mathbb K}(a) + \tilde d (a)\] for every $a\in \mathcal A(N)$, where $\tilde d$ is an approximately inner derivation.
\end{theo}
\begin{proof}
Consider the case $n>0$ and finite $N$. We then have the formula: $d(a)=[U^n\beta_n(\mathbb K), a]$, and the condition: $\alpha_n(\mathbb K) =\beta_n(\mathbb K) - \beta_n(\mathbb K-I) \in A_{\textnormal{diag}}(N)$.
We apply Proposition \ref{decomp_prop} to $ \alpha_n(k)$, and refine it in the following way:
\[\alpha_n(k)=\alpha_{n,0}(k) +C_n + \alpha_{n, \textnormal{per}}(k)\]
where $C_n$ is a constant, $\alpha_{n,0}(k) \in c_0$, $ \alpha_{n, \textnormal{per}}(k+N)=\alpha_{n, \textnormal{per}}(k)$ and
$$\sum_{k=0}^{N-1} \alpha_{n, \textnormal{per}}(k) =0.$$
We then decompose $\beta_n$ using the following:
\[\beta_{n,0}(k):= \sum_{j=0}^{k}\alpha_{n,0}(j), \;\;\; \beta_{n, \textnormal{per}}(k):= \sum_{j=0}^{k}\alpha_{n,\textnormal{per}}(j).\]
It is easy to verify that $ \beta_{n, \textnormal{per}}(k)$ is $N$-periodic, just as in Lemma \ref{inv_app_inn_per}. We then obtain:
\[\beta_n(k)=\beta_{n,0}(k) + C_n(k +1)+ \beta_{n, \textnormal{per}}(k).\]
So, for $n>0$, the derivation $d$ decomposes as follows:
\[d(a)= [U^n\beta_{n,0}(\mathbb K), a] + C_nd_{n,\mathbb K}(a) + [U^n \beta_{n, \textnormal{per}}(\mathbb K), a]. \]
We know that $[U^n\beta_{n,0}(\mathbb K), a]$ is approximately inner by Lemma \ref{n_cov_app_inn}. Moreover, since $\beta_{n, \textnormal{per}}(\mathbb K) \in A_{\textnormal{diag}}(N)$, the commutator $[U^n \beta_{n, \textnormal{per}}(\mathbb K), a]$ is an inner derivation.
To conclude the theorem for $n>0$, and verify the uniqueness, it only remains to show that $d_{n,\mathbb K}(a) $ is not approximately inner. This easily follows from the methods of Theorem 4.4 in \cite{KMR1}, in the following way.
Assume to the contrary that $d_{n,\mathbb K}$ is approximately inner. By Proposition \ref{CovAppProp} there exists a sequence $\mu^M({\mathbb K})\in A_{\textnormal{diag}}(N)$, $M=1,2,\ldots$, such that:
\begin{equation*}
d_{n,\mathbb K}(a) = \lim_{M\to\infty} [U^n\mu^M({\mathbb K}),a]
\end{equation*}
for all $a\in\mathcal{A}(N)$. In particular, we must have:
\begin{equation*}
d_{n,\mathbb K}(U) = U^{n+1} = \lim_{M\to\infty} U^{n+1}(\mu^M({\mathbb K}+I)-\mu^M({\mathbb K})).
\end{equation*}
Without loss of generality assume $\mu^M(k)$ are real, or else in the argument below simply consider the real part of $\mu^M(k)$.
The above equation implies that:
\begin{equation*}
\lim_{M\to\infty}\,\underset{k}{\textrm{sup}}\,|\mu^M(k+1)-\mu^M(k) - 1| = 0.
\end{equation*}
Therefore, for any small $\varepsilon>0$, there is $M$ large enough so that for every $k$ we have:
\begin{equation*}
1-\varepsilon \le \mu^M(k+1) - \mu^M(k)\le 1+ \varepsilon.
\end{equation*}
By telescoping $\mu^M(k)$, we get:
\begin{equation*}
\mu^M(k) = (\mu^M(k) - \mu^M(k-1)) + \cdots + (\mu^M(k_0+1) - \mu^M(k_0)) + \mu^M(k_0)
\end{equation*}
for some fixed $k_0$. Together the last two formulas imply that:
$$\mu^M(k)\ge (1-\varepsilon)(k-k_0) + \mu^M(k_0),$$
which goes to infinity as $k$ goes to infinity. This contradicts the fact that $\mu^M({\mathbb K})\in A_{\textnormal{diag}}(N)$ which completes the proof for $n>0$ and finite $N$.
Cases $n=0$ and $n<0$ can be proved very similarly.
\end{proof}
We summarize the remaining cases of our classification of $n$-covariant derivations in $A(N)$ in the next theorem.
\begin{theo}\label{n_cov_inner}
Suppose $d$ is an $n$-covariant derivation in $A(N)$. If $N$ is infinite with $n \neq 0$ or if $N$ is finite with $N\nmid n$, then $d$ is an inner derivation.
\end{theo}
\begin{proof}
From Theorem \ref{n_cov_der_formula} we already know that $\beta_n(\mathbb K) \in A_{\textnormal{diag}}(N)$ when $N$ is infinite with $n \neq 0$ or when $N$ is finite with $N\nmid n$. Thus $d$ is an inner derivation.
\end{proof}
This concludes the classification of $n$-covariant derivations in $A(N)$. Classification of $n$-covariant derivations in $B(N)$ is somewhat simpler and can be obtained by applying the same methods as used in the classification of $n$-covariant derivations in $A(N)$.
\begin{theo}
If $\delta$ is an $n$-covariant derivation in $B(N)$, where $N$ is infinite and $n \neq 0$ or $N$ is finite but $N \nmid n$, then $\delta$ is an inner derivation. Otherwise there exists a unique constant $C_n$ such that
\[\delta(a)= C_n [V^n\mathbb L, a] + \tilde \delta (a)\]
for every $a\in \mathcal B(N)$, where if $N$ is finite and $N \mid n$, then $\tilde \delta$ is an inner derivation, and if $N$ is infinite and $n=0$ then $\tilde \delta$ is an approximately inner derivation.
\end{theo}
\section{General unbounded derivations}
This section contains our main results: the classification of derivations in $A(N)$ and $B(N)$. The structure of such derivations differs depending on whether $N$ is finite or infinite, and is interesting even for the simplest case of $N=1$, when $A(1)$ is the Toeplitz algebra. The main technique is the use of Fourier series with respect to the $S^1$ action $\rho^\mathbb{K}_\theta$ on $A(N)$, and $\rho^{\mathbb L}_\theta$ on $B(N)$. The Fourier coefficients of derivations are defined in the following way.
\begin{defin}\label{Fou_comp}
If $d$ is a derivation in $A(N)$, the {\it $n$-th Fourier component} of $d$ is defined as:
$$d_n(a)= \frac 1{2\pi} \int_0^{2\pi} e^{in\theta} (\rho^{\mathbb K}_{\theta})^{-1}d\rho^{\mathbb K}_{\theta}(a)\; d\theta.$$
\end{defin}
\begin{defin}\label{Fou_comp1}
If $\delta$ is a derivation in $B(N)$, the {\it $n$-th Fourier component} of $\delta$ is defined as:
$$\delta_n(b)= \frac 1{2\pi} \int_0^{2\pi} e^{in\theta} (\rho^{\mathbb L}_{\theta})^{-1}\delta\rho^{\mathbb L}_{\theta}(b)\; d\theta.$$
\end{defin}
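For instance, if $d=[z,\,\cdot\,]$ is an inner derivation in $A(N)$, then $(\rho^{\mathbb K}_{\theta})^{-1}d\rho^{\mathbb K}_{\theta}(a)=[(\rho^{\mathbb K}_{\theta})^{-1}(z),a]$, and a change of variables in the integral shows that the Fourier components are again inner:
$$d_n(a)= [(z)_n, a],$$
where $(z)_n$ is the $n$-th Fourier component of the element $z$ in the sense used in the proof of Proposition \ref{CovAppProp}.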
We have the following simple observation.
\begin{prop}
If $d$ is a derivation in $A(N)$, then $d_n$ is an $n$-covariant derivation and well-defined on $\mathcal A(N)$.
\end{prop}
\begin{proof}
It is straightforward to see that $d_n$ is a derivation and is well-defined on $\mathcal A(N)$.
The following computation verifies that $d_n$ is $n$-covariant:
\[\begin{aligned} (\rho^{\mathbb K}_{\theta})^{-1}d_n\rho^{\mathbb K}_{\theta}(a)&=\frac 1{2\pi} \int_0^{2\pi} e^{in\phi} (\rho^{\mathbb K}_{\theta})^{-1}(\rho^{\mathbb K}_{\phi})^{-1}d \rho^{\mathbb K}_{\phi}\rho^{\mathbb K}_{\theta}(a)\; d\phi\\
&= \frac 1{2\pi} \int_0^{2\pi} e^{in\phi} (\rho^{\mathbb K}_{\theta + \phi})^{-1}d \rho^{\mathbb K}_{\theta + \phi}(a)\; d\phi .
\end{aligned}\]
Changing to the new variable $\theta + \phi$, and using the translation invariance of the measure, it now follows that $(\rho^{\mathbb K}_{\theta})^{-1}d_n\rho^{\mathbb K}_{\theta}(a)= e^{-in\theta} d_n(a)$.
\end{proof}
We have the following key Ces\`aro mean convergence result for Fourier components of $d$, which is more generally valid for unbounded derivations in any Banach algebra with the continuous circle action preserving the domain of the derivation.
\begin{lem} If $d$ is a derivation in $A(N)$ then:
\begin{equation}\label{Ces_eq}
d(a)=\lim_{M \rightarrow \infty } \frac 1{M+1} \sum_{j=0}^M \left(\sum_{n=-j}^j d_n(a)\right),
\end{equation}
for every $a\in \mathcal A(N)$.
\end{lem}
\begin{proof}
We need to show that:
\begin{equation*}
\frac 1{M+1} \sum_{j=0}^M \left(\sum_{n=-j}^j d_n(a)-d(a)\right) \xrightarrow {M \rightarrow \infty }0
\end{equation*}
for all $a\in \mathcal A(N)$. Using the standard Fourier analysis \cite{K} we can write:
\[\frac 1{M+1} \sum_{j=0}^M \left(\sum_{n=-j}^j d_n(a)-d(a)\right)= \frac 1{2\pi} \int_0^{2\pi} F_M(\theta) \left((\rho^{\mathbb K}_{\theta})^{-1}d\rho^{\mathbb K}_{\theta}(a)- d(a)\right) d \theta,\]
where:
$$F_M(\theta)= \frac 1{M+1} \left(\frac{\sin\left(\frac{(M+1)\theta}2\right)}{\sin\left(\frac{\theta}2\right)}\right)^2$$
is the Fej\'er kernel, which is manifestly positive and satisfies:
$$\frac 1{2 \pi}\int_0^{2\pi} F_M(\theta) d\theta =1.$$
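For completeness we note that the first equality above is a consequence of the standard identity:
$$\frac 1{M+1} \sum_{j=0}^M \sum_{n=-j}^j e^{in\theta}= \sum_{|n|\leq M}\left(1-\frac{|n|}{M+1}\right)e^{in\theta}= F_M(\theta),$$
applied under the integral sign.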
Since $(\rho^{\mathbb K}_{\theta})^{-1}d\rho^{\mathbb K}_{\theta}(a)- d(a)$ is continuous in $\theta$ and vanishes at $\theta=0$ and $\theta=2\pi$, given $\epsilon >0$ we can find small $\omega >0$ so that we have estimates:
\[\begin{aligned} \frac 1{2\pi} \int_0^{\omega} F_M(\theta) \|(\rho^{\mathbb K}_{\theta})^{-1}d\rho^{\mathbb K}_{\theta}(a)- d(a)\| d \theta &\leq \frac{\epsilon}{3} \;\;\textnormal{ and}\\
\frac 1{2\pi} \int_{2\pi- \omega}^{2\pi} F_M(\theta) \|(\rho^{\mathbb K}_{\theta})^{-1}d\rho^{\mathbb K}_{\theta}(a)- d(a)\| d \theta &\leq \frac{\epsilon}{3} .
\end{aligned}\]
Moreover, on the remaining interval we can estimate as follows:
\[\frac 1{2\pi} \int_{\omega}^{2\pi-\omega} F_M(\theta) \|(\rho^{\mathbb K}_{\theta})^{-1}d\rho^{\mathbb K}_{\theta}(a)- d(a)\| d \theta \leq \frac {\textrm{const}}{(M+1) \sin^2(\omega /2)}\]
for some constant in the numerator. Consequently, we can choose $M$ large enough so that we get:
\[\left\|\frac 1{M+1} \sum_{j=0}^M \left(\sum_{n=-j}^j d_n(a)-d(a)\right)\right\| \leq \epsilon,\]
which completes the proof of \eqref{Ces_eq}.
\end{proof}
The first case we consider is a description of derivations for infinite $N$.
\begin{theo}\label{inf_N_class}
Suppose $d$ is a derivation in $A(N)$ with $N$ infinite. Then there exists a unique constant $C$ such that
\[d(a)= C[\mathbb K, a] + \tilde d(a)\]
where $\tilde d$ is approximately inner.
\end{theo}
\begin{proof}
Let $d_0$ be the $0$-th Fourier component of $d$. It is an invariant derivation, so by Theorem \ref{ncov_N|n} we have the unique decomposition:
\begin{equation*}
d_0(a)= C d_{0,\mathbb K}(a) + \tilde d_0 (a)= C[\mathbb K, a] + \tilde d_0(a),
\end{equation*}
for every $a\in \mathcal A(N)$, where $\tilde d_0$ is an approximately inner derivation.
From Theorem \ref{n_cov_inner} we have that the Fourier components $d_n$, $n \neq 0$ are inner derivations. It follows from \eqref{Ces_eq}, by extracting $d_0$, that we have:
\begin{equation*}
d(a)=d_0(a)+\lim_{M\to\infty}\frac 1{M+1} \sum_{j=1}^M \left(\sum_{|n|\leq j,\, n\ne 0}d_n(a)\right).
\end{equation*}
The terms under the limit sign are all finite linear combinations of the inner derivations $d_n$, $n\neq 0$, and so they are inner derivations themselves; consequently the limit is an approximately inner derivation, which ends the proof.
\end{proof}
In exactly the same way we obtain the corresponding classification result for unbounded derivations in Bunce-Deddens algebras for infinite $N$.
\begin{theo}
Suppose $\delta$ is a derivation in $B(N)$ with $N$ infinite. Then there exists a unique constant $C$ such that
\[\delta(b)= C[\mathbb L, b] + \tilde \delta(b)\]
where $\tilde \delta$ is approximately inner.
\end{theo}
We now turn to the classification of derivations in Bunce-Deddens and Bunce-Deddens-Toeplitz algebras for finite $N$. We start with the following simple observation.
\begin{lem}\label{center}
If $\delta: \mathcal B(N) \rightarrow B(N)$ is a derivation, then $\delta(V^N) \in \textnormal{C}^*(V^N)$.
\end{lem}
\begin{proof}
Applying $\delta$ to the relation $V^N P_{sr}= P_{sr}V^N$, we see that $\delta(V^N)$ commutes with $P_{sr}$ for every $r,s$ and so it must be in C$^*(V^N)$.
\end{proof}
By Propositions \ref{iden_2} and \ref{iden_3}, we know that for finite $N$ we have an isomorphism of C$^*$-algebras: $B(N) \cong C(S^1) \otimes M_N(\mathbb C)$ and $\mathcal B(N)$ can be identified with the set of $N$ by $N$ matrix-valued trigonometric polynomials $F(t)$ on $S^1$. For any $f\in C(S^1)$ we define the following special derivation $\delta_f$ in $B(N)$:
\[\delta_f(F(t))= f(t) \frac 1i \frac d{dt} F(t).\]
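For instance, for $f\equiv 1$ the derivation $\delta_1$ is just the differentiation $\frac 1i \frac d{dt}$ applied entrywise, so that $\delta_1(e^{it}A)= e^{it}A$ for every $A\in M_N(\mathbb C)$. More generally, under the identification $V^N\leftrightarrow e^{it}\otimes I$ we have:
$$\delta_f(V^N)= f(V^N)V^N,$$
consistently with Lemma \ref{center}.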
Derivations $\delta_f$ are used in the following theorem which gives very concrete and explicit classification of derivations in $B(N)$.
\begin{theo}
Suppose $N$ is finite and $\delta$ is a derivation in $B(N)$. Then there exists a unique $f\in C(S^1)$ such that
\[\delta = \delta_f + \tilde \delta\]
where $ \tilde \delta$ is inner.
\end{theo}
\begin{proof}
If $p(t)$ is a trigonometric polynomial and $A \in M_N(\mathbb C)$, then we have:
\[\delta(p(t)A) = \delta(p(t))A + p(t) \delta(A).\]
Here $C(S^1) \cong $ C$^*(V^N)$ and by Lemma \ref{center}, there is $f\in C(S^1)$ such that $\delta(e^{it})= f(t) e^{it}$ and hence we have:
\[\delta(p(t))= f(t) \frac 1i \frac d{dt} p(t).\]
Moreover, given any derivation $\delta: M_N(\mathbb C) \rightarrow C(S^1,M_N(\mathbb C) )$, the following continuous matrix-valued function $H(t) \in C(S^1,M_N(\mathbb C) )$ given by:
\[H(t)= \frac 1N \sum_{r,s=1}^N \delta(P_{rs})(t) P_{sr}\]
satisfies the easily verifiable relation:
$$\delta(A)(t)=[H(t), A].$$
Consequently, we have:
\[\delta(p(t)A)= \left(f(t) \frac 1i \frac d{dt} p(t)\right)A + p(t)[H(t), A] =
\delta_f (p(t)A) + [H(t), p(t)A],\]
which completes the proof.
\end{proof}
It remains to classify derivations in $A(N)$ for finite $N$. For any $f\in C(S^1)$ we define a special derivation $d_f$ in $A(N)$ to be the unique derivation such that:
\begin{equation}\label{d_f_formulas}
d_f(a_{\textnormal{per}}(\mathbb K))=0, \;\; d_f(U)=\frac 1N UT(f(V^N)), \;\; d_f(U^*)=-\frac 1N T(f(V^N))U^*,
\end{equation}
where $a_{\textnormal{per}}(\mathbb K)$ is any element of $A_{\textrm{diag, per}}(N)$.
Here $T(f(V^N))$ is the Toeplitz operator of formula \eqref{Toep_def}, where $f(V^N)$ is an operator in $\ell^2(\mathbb Z)$ defined by the functional calculus.
The derivation $d_f$ is given on generators of $\mathcal A(N)$, hence, if it exists it is unique; to see that it is unambiguously defined on all of $\mathcal A(N)$ we need an additional argument.
Let $f(t)=\sum_{n\in\mathbb Z}f_ne^{int}$ be a trigonometric polynomial which we decompose as:
$$f(t)=f^+(t)+f^-(t),$$
where $f^+(t)=\sum_{n\geq 0}f_ne^{int}$ and $f^-(t)$ is given by a similar formula. We then claim that we have the following formula for $d_f$:
\begin{equation}\label{df_formula}
d_f(a) = \frac{1}{N}\left[T(f^+(V^N))({\mathbb K}+I) + ({\mathbb K}+I)T(f^-(V^N)), a\right].
\end{equation}
To verify \eqref {d_f_formulas} we calculate using Lemma \ref{Toep_lemma}:
\begin{equation*}
Nd_f(a_{per}({\mathbb K})) = T\left(\left[f^+(V^N),a_{per}(\mathbb L)\right]\right)({\mathbb K}+I) + ({\mathbb K}+I)T\left(\left[f^-(V^N),a_{per}(\mathbb L)\right]\right).
\end{equation*}
Since $a_{per}(\mathbb L)$ is $N$-periodic, it commutes with $V^N$; thus the above commutators are zero and hence $d_f(a_{per}({\mathbb K}))=0$.
Next, notice that $UT(f^+(V^N)) = T(f^+(V^N))U$ since $f^+(V^N)$ only contains nonnegative powers of $V$. Using this fact, the commutation relation \eqref{the_com_rel},
and Lemma \ref{Toep_lemma} we have
\begin{equation*}
\begin{aligned}
Nd_f(U)
&=T(f^+(V^N))\left[({\mathbb K}+I)U - U({\mathbb K}+I)\right] + \left[({\mathbb K}+I) - {\mathbb K} UU^*\right]T(f^-(V^N)V) \\
&=\left(T(f^+(V^N)) + T(f^-(V^N))\right)U = T(f(V^N))U.
\end{aligned}
\end{equation*}
For similar reasons as above we have $U^*T(f^-(V^N)) = T(f^-(V^N))U^*$. Using this, the commutation relation \eqref{the_com_rel}, and again Lemma \ref{Toep_lemma}, we obtain the last part of formula \eqref{d_f_formulas}.
This completes the proof of the existence of $d_f$ for polynomial $f$; it is clear from those formulas that $d_f$ is a well-defined derivation $\mathcal A(N) \rightarrow A(N)$.
For a general $f\in C(S^1)$ we use an approximation argument to construct $d_f$. Namely if $\{f^M\}$ is a sequence of trigonometric polynomials converging uniformly to $f$ then, by formulas \eqref{d_f_formulas}, the sequence of derivations $\{d_{f^M}\}$ converges on generators of $\mathcal A(N)$, and hence it converges for every $a\in\mathcal A(N)$. The limit, which must be a derivation in $A(N)$, gives a construction of $d_f$.
Derivations $d_f$ are used in the theorem below.
Compared to the proof of Theorem \ref{inf_N_class}, the classification of derivations in $A(N)$ for finite $N$ gets more complicated since in this case a derivation may have infinitely many non-inner Fourier components. To handle those difficulties we need the following lemma which is more generally valid for unbounded derivations in any algebra if the domain is finitely generated.
\begin{lem}\label{app_der_lem}
If $N$ is finite, $d$ is a derivation in $A(N)$, and there is a sequence $\{d^M\}$ of approximately inner derivations such that for every $a\in \mathcal A(N)$:
$$d(a)=\lim_{M\to\infty}d^M(a),
$$
then $d$ is also approximately inner.
\end{lem}
\begin{proof} For finite $N$ the algebra $\mathcal A(N)$ is finitely generated; for example we can choose the following set of generators:
\begin{equation*}
G:=\{U, U^*, e_N({\mathbb K})\},
\end{equation*}
where the sequence $e_N(k)$ was defined in \eqref{eN_def}. Also, $d^M$ are approximately inner which means that there is a sequence $\{z^{M,W}\}$ of elements of $A(N)$ such that for every $a\in \mathcal A(N)$:
\begin{equation*}
d^M(a)=\lim_{W\to\infty}[z^{M,W},a].
\end{equation*}
For every positive integer $j$ we can choose $M_j$ such that for every $a\in G$ we have:
\begin{equation*}
\left\|d(a)-d^{M_j}(a)\right\|\leq \frac{1}{2j},
\end{equation*}
which can be done because the generating set $G$ is finite. Then choose $W_j$ such that for every $a\in G$ we have:
\begin{equation*}
\left\|d^{M_j}(a)-[z^{M_j,W_j},a]\right\|\leq \frac{1}{2j}.
\end{equation*}
By the triangle inequality we obtain:
\begin{equation*}
\left\|d(a)-[z^{M_j,W_j},a]\right\|\leq \frac{1}{j},
\end{equation*}
which means that we have:
\begin{equation*}
d(a)=\lim_{j\to\infty}[z^{M_j,W_j},a]
\end{equation*}
for every $a\in G$, which by the Leibniz identity implies the above convergence for every $a\in \mathcal A(N)$. Consequently, $d$ is approximately inner, finishing the proof.
\end{proof}
With this preparation we are now ready to state our classification result for derivations in $A(N)$ with finite $N$.
\begin{theo}
Suppose $N$ is finite and $d$ is a derivation in $A(N)$. Then there exists a unique function $f\in C(S^1)$ such that:
\[d = d_f + \tilde d,\]
where $ \tilde d$ is approximately inner and $d_f$ is defined by formula \eqref{d_f_formulas}.
\end{theo}
\begin{proof}
Consider the derivation $[d]: \mathcal B(N) \rightarrow B(N)$ given by:
$$[d](a+\mathcal K)= d(a)+\mathcal K.$$
It is easy to see from Definitions \ref{Fou_comp} and \ref{Fou_comp1} that we have the following equality for the Fourier components:
$$[d]_n= [d_n].$$
To construct the function $f$ in the statement of the theorem we notice that Lemma \ref{center} states that $[d](V^N)\in \textnormal{C}^*(V^N)$ and hence we can write:
$$[d](V^N)=f(V^N)V^N$$
for some $f\in C(S^1)$. It follows that we have:
\begin{equation}\label{d_n_one}
[d]_n(V^N)=\begin{cases}
f_jV^{jN+N} & \textnormal{ if } n=jN\\
0 & \textnormal{otherwise,}
\end{cases}
\end{equation}
where $f_j$ are the Fourier coefficients of $f$.
On the other hand, from Theorem \ref{n_cov_der_formula} we know that
\[d_n(U^N)=[U^n \beta_n(\mathbb K), U^N]= U^{n+N}\left(\alpha_n(\mathbb K+(N-1)I) + \cdots + \alpha_n(\mathbb K)\right). \]
Next, for $N\mid n$, we decompose $\alpha_n(k)$ as in the proof of Theorem \ref{ncov_N|n}:
\[\alpha_n(k)= \alpha_{n,0}(k)+C_n + \alpha_{n, \textnormal{per}}(k),\]
where $C_n$ is a constant, $\alpha_{n,0}(k) \in c_0$, $ \alpha_{n, \textnormal{per}}(k+N)=\alpha_{n, \textnormal{per}}(k)$ and
$$\sum_{k=0}^{N-1} \alpha_{n, \textnormal{per}}(k) =0.$$
This decomposition is also valid for $N\nmid n$ but with $C_n=0$ by Theorem \ref{n_cov_inner}.
It follows that we have:
\[d_n(U^N)=U^{n+N}\left(\alpha_{n,0} (\mathbb K +(N-1)I)+ \cdots + \alpha_{n,0}(\mathbb K) + NC_n\right),\]
and consequently, we obtain:
\begin{equation}\label{d_n_two}
[d_n](V^N)= NC_nV^{n+N} = \begin{cases}
NC_{jN}V^{jN+N} & \textnormal{ if } n=jN\\
0 & \textnormal{otherwise.}
\end{cases}
\end{equation}
Comparing equations \eqref{d_n_one} and \eqref{d_n_two} implies the following formulas for the constants $C_n$:
\begin{equation*}
C_n=\begin{cases}
\frac{1}{N}f_j & \textnormal{ if } n=jN\\
0 & \textnormal{otherwise.}
\end{cases}
\end{equation*}
It then follows from the formula \eqref{df_formula} that we have:
\begin{equation}\label{df_fourier}
(d_f)_n(a)= \frac{1}{N}f_j d_{n,\mathbb K}(a)=\begin{cases}
[U^n C_n (\mathbb K +I), a] & \textnormal{ if } n \geq 0\\
[C_n (\mathbb K +I) (U^*)^{-n}, a] & \textnormal{ if } n <0.
\end{cases}
\end{equation}
As in the proof of Theorem \ref{ncov_N|n} we decompose $\beta_n$ using:
\[\beta_{n,0}(k):= \sum_{j=0}^{k}\alpha_{n,0}(j), \;\;\; \beta_{n, \textnormal{per}}(k):= \sum_{j=0}^{k}\alpha_{n,\textnormal{per}}(j).\]
This gives the following formulas for the Fourier components of the difference between $d$ and $d_f$:
\[\tilde d_n(a):=(d-d_f)_n(a)= \begin{cases}
[U^n \left(\beta_{n,0}(\mathbb K) + \beta_{n, \textnormal{per}}(\mathbb K)\right), a] & \textnormal{ if } n \geq 0\\
[\left(\beta_{n,0}(\mathbb K) + \beta_{n, \textnormal{per}}(\mathbb K)\right) (U^*)^{-n}, a] & \textnormal{ if } n <0.
\end{cases}\]
From Theorem \ref{n_cov_inner} we know that $\tilde d_n$ is an inner derivation if $N\nmid n$. If we denote by $(\tilde d_0)_n$ and $(\tilde d_{\textnormal{per}})_n$ the following derivations on $ \mathcal A(N)$:
\[(\tilde d_0)_n(a)= \begin{cases}
[U^n\beta_{n,0}(\mathbb K) , a] & \textnormal{ if } n \geq 0\\
[\beta_{n,0}(\mathbb K) (U^*)^{-n}, a] & \textnormal{ if } n <0,
\end{cases}\]
\[(\tilde d_{\textnormal{per}})_n(a)= \begin{cases}
[U^n \beta_{n, \textnormal{per}}(\mathbb K) , a] & \textnormal{ if } n \geq 0\\
[\beta_{n, \textnormal{per}}(\mathbb K) (U^*)^{-n}, a] & \textnormal{ if } n <0
\end{cases}
\]
then, when $N\mid n$, we know that $(\tilde d_{\textnormal{per}})_n$ is inner while $(\tilde d_0)_n$ is approximately inner from Theorem \ref{ncov_N|n}.
To conclude that $\tilde d$ is approximately inner we first use formula \eqref{Ces_eq} which says, in view of the above discussion, that $\tilde d$ is a limit of approximately inner derivations. Consequently, using Lemma \ref{app_der_lem}, we see that $\tilde d$ is approximately inner.
To show the uniqueness of this decomposition, it is sufficient to prove that $d_f$ is approximately inner if and only if $f=0$. If $f=0$, it is clear that $d_f$ is approximately inner. To prove the converse statement, notice that if $d_f$ is approximately inner then so are the Fourier components $(d_f)_n$, which by formula \eqref{df_fourier} are proportional to derivations $d_{n,\mathbb K}$, which in turn were proved in Theorem \ref{ncov_N|n} not to be approximately inner. This gives a contradiction and finishes the proof of the theorem.
\end{proof}
\section{Implementations}
The purpose of this section is to investigate implementations of unbounded derivations in Bunce-Deddens algebras $B(N)$ as operators in Hilbert spaces.
This study is inspired by the following noncommutative geometry concept of first order elliptic operator with respect to a C$^*$-algebra: an unbounded operator $D$ acting in a Hilbert space $\mathcal H$ which carries a representation $\pi$ of a C$^*$-algebra $A$ is called a first order elliptic operator with respect to $A$ if it satisfies two properties:
\begin{enumerate}
\item $[D,\pi(a)]$ is bounded for all $a$ in some dense $^*$-subalgebra $\mathcal A$ of $A$.
\item $D$ has compact parametrices, which by the appendix of \cite{KMR1} is equivalent to the two operators $(I + D^*D)^{-1/2}$ and $(I + DD^*)^{-1/2}$ being compact operators.
\end{enumerate}
Such a first order elliptic operator with respect to $A$ is a key component of the notion of a spectral triple in noncommutative geometry; see \cite{CPR}.
If a first order elliptic operator $D$ is an implementation of a densely-defined unbounded derivation in $A$ then the first condition of the above definition is automatically satisfied. Hence we are mainly interested in establishing when implementations of derivations in $B(N)$ have compact parametrices. We only consider here the representations of $B(N)$ in Hilbert spaces obtained from the GNS construction as those are the most geometrical representations of those algebras.
In \cite{KMR1} and \cite{KMR2} implementations of 0-covariant, that is invariant, and 1-covariant derivations in the quantum disk (Toeplitz algebra) and the quantum annulus were studied to see if it was possible to construct spectral triples on those quantum domains. Here we continue this analysis for $n$-covariant derivations in $B(N)$.
A state $\tau: B(N) \to \mathbb C$ is called a $\rho^{\mathbb L}_{\theta}$-{\it invariant state} on $B(N)$ if for all $a\in B(N)$ it satisfies the following:
\[\tau(\rho^{\mathbb L}_{\theta}(a))= \tau(a).\]
It is not difficult to describe the $\rho^{\mathbb L}_{\theta}$-invariant states on $B(N)$.
To do this we use the identification $B(N) \cong C(\mathbb Z/N\mathbb Z) \rtimes_{\sigma} \mathbb Z$; see Proposition \ref{cross_iden}.
There is a natural expectation $E: B(N) \rightarrow C(\mathbb Z/N\mathbb Z)$, a positive, linear map such that $E^2=E$. For an element
\[b= \sum\limits_{n \in \mathbb Z} V^n b_n(x) \in \mathcal B(N),\]
see \eqref{pol_In_B(N)}, it is given by:
$$E(b)= \frac{1}{2\pi}\int_0^{2\pi}\rho^{\mathbb L}_{\theta}(b)\,d\theta= b_0(x) \in C(\mathbb Z/N\mathbb Z).$$
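Indeed, on a monomial $V^nb_n(x)$ we have $\rho^{\mathbb L}_{\theta}(V^nb_n(x))= e^{in\theta}\,V^nb_n(x)$, so that:
$$E\left(V^nb_n(x)\right)= \frac{1}{2\pi}\int_0^{2\pi}e^{in\theta}\,d\theta\;\, V^nb_n(x)=\begin{cases}
b_0(x) & \textnormal{ if } n=0\\
0 & \textnormal{otherwise},
\end{cases}$$
i.e., $E$ extracts the zeroth Fourier component of $b$.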
Since $C(\mathbb Z/N\mathbb Z)$ is the fixed point algebra for $\rho^{\mathbb L}_{\theta}$, we immediately obtain the following observation:
if $\tau : B(N)\to\mathbb C$ is a $\rho^{\mathbb L}_{\theta}-$invariant state on $B(N)$ then there exists a state $t : C(\mathbb Z/N\mathbb Z)\to\mathbb C$ such that:
$$\tau(b) = t(E(b)).$$
Conversely, given a state $t : C(\mathbb Z/N\mathbb Z)\to\mathbb C$, the formula $\tau(b) = t(E(b))$ defines a $\rho^{\mathbb L}_\theta-$invariant state on $B(N)$.
Therefore the invariant states are given by probability measures on $\mathbb Z/N\mathbb Z$.
We will concentrate below on the following two most interesting and natural $\rho^{\mathbb L}_{\theta}$-invariant states on $B(N)$, namely $\tau_0$ and $\tau_{\textnormal{Haar}}$ defined by:
\[\tau_0(b)= E(b)(0)\quad\textrm{ and }\quad \tau_{\textnormal{Haar}}(b)= \int_{\mathbb Z/N\mathbb Z}E(b)(x)\ d_Hx, \]
where $d_Hx$ is the unique normalized Haar measure. Denote by $\mathcal{H}_0$ and $\mathcal{H}_{\textnormal{Haar}}$ the GNS Hilbert spaces corresponding to $\tau_0$ and $\tau_{\textnormal{Haar}}$ respectively and let $\pi_0, \pi_{\textnormal{Haar}}$ be the corresponding representations.
We have the following concrete description of those Hilbert spaces and representations.
\begin{prop}\label{gns_hilbert_spaces}
The GNS Hilbert spaces $\mathcal{H}_0$ and $\mathcal{H}_{\textnormal{Haar}}$ are naturally isomorphic to the following:
\begin{equation*}
\mathcal{H}_0 \cong \ell^2(\mathbb Z)\quad\textrm{ and }\quad \mathcal{H}_{\textrm{Haar}} \cong L^2(\mathbb Z\times\mathbb{Z}/N\mathbb{Z}).
\end{equation*}
The representation $\pi_0: B(N) \rightarrow \mathcal B(\ell^2(\mathbb Z))$ is the defining representation of $B(N)$, i.e. $\pi_0(a)=a$ for all $a\in B(N)$.
The representation $\pi_{\textnormal{Haar}}: B(N) \rightarrow \mathcal B(L^2(\mathbb Z\times\mathbb{Z}/N\mathbb{Z}))$ is completely described by:
\begin{equation*}
\begin{aligned}
&1.\ \pi_{\textrm{Haar}}(V)f(m,x)= f(m-1,x) \\
&2.\ \pi_{\textrm{Haar}}(a(q(\mathbb L)))f(m,x)=a(x+m)f(m,x),
\end{aligned}
\end{equation*}
where $f(m,x)\in L^2(\mathbb Z\times\mathbb{Z}/N\mathbb{Z})$ and $a(x)\in C(\mathbb Z/N\mathbb Z)$.
\end{prop}
\begin{proof}
To properly identify the Hilbert space
\[
\mathcal{H}_0= \overline{B(N)/\{b\in B(N): \tau_0(b^*b)=0\}}
\]
we must study $\tau_0(b^*b)=0$ for $b\in B(N)$. In fact, due to the continuity of $\tau_0$, we only need to work on the dense subalgebra $\mathcal{B}(N)$. For any $b\in\mathcal{B}(N)$ given by equation \eqref{pol_In_B(N)}, a straightforward calculation yields:
\begin{equation*}
\tau_0(b^*b) = \sum_{n\in\mathbb Z}|b_{n}(0)|^2.
\end{equation*}
Therefore, if $\tau_0(b^*b) = 0$, it follows that $b_{n}(0)=0$ for all $n$. Then the formula:
$$
\mathcal{H}_0\ni[b]\mapsto \{b_n(0)\}\in\ell^2(\mathbb{Z})
$$
gives an isomorphism $\mathcal{H}_0 \cong \ell^2(\mathbb{Z})$, similar to the proof of Proposition $5.4$ in \cite{KMR1}. Notice that the class $[I]$ in the completion of the quotient $\overline{B(N)/\{b\in B(N): \tau_0(b^*b)=0\}}$ corresponds to the basis element $E_0$ in $ \ell^2(\mathbb{Z})$.
From the formula:
\begin{equation*}
Vb = \sum_{n\in\mathbb Z}V^nb_{n-1}(\mathbb{L}),
\end{equation*}
it follows that we have:
$$\pi_0(V)[b]= \{b_{n-1}(0)\}_{n\in\mathbb{Z}}.$$
An analogous calculation shows:
$$\pi_0(a(\mathbb L))[b]=\{a(n)b_{n}(0)\}_{n\in\mathbb{Z}}$$
for $a(\mathbb L)\in B_{\textnormal{diag}}(N)$. This proves the first part of the proposition.
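As an illustration, the monomial $V^n$ has Fourier coefficients $b_m=\delta_{mn}$, so its class $[V^n]$ corresponds to the basis element $E_n\in\ell^2(\mathbb Z)$; the formulas above then reduce to $\pi_0(V)E_n= E_{n+1}$ and $\pi_0(a(\mathbb L))E_n= a(n)E_n$, which is indeed the defining action of those operators on $\ell^2(\mathbb Z)$.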
In the second example we have $\tau_{\textrm{Haar}}(b^*b)=0$ if and only if $b=0$. If $b\in\mathcal B(N)$ is given by:
$$b=\sum_{n \in \mathbb Z} V^n b_{n} (\mathbb L)=
\sum_{n \in \mathbb Z} V^n f_{n} (q(\mathbb L))
$$
then the corresponding function in $L^2(\mathbb Z\times\mathbb{Z}/N\mathbb{Z})$ is given by:
$$[b](m,x)=f_m(x).
$$
The remaining calculations with $\tau_{\textrm{Haar}}$ are very similar.
\end{proof}
We remark here briefly that because $B(N)$ is defined as the quotient of $A(N)$ with the ideal of compact operators, an invariant state on $B(N)$ lifts to an invariant state on $A(N)$. The corresponding GNS Hilbert spaces are the same as for $B(N)$, with compact operators represented trivially.
In general, for a GNS Hilbert space $\mathcal{H}_\tau$ of $B(N)$ with respect to a state $\tau$ we have that $B(N)\subseteq \mathcal{H}_\tau$ is dense in $\mathcal{H}_\tau$ and $[I]\in \mathcal{H}_\tau$ is cyclic.
Consequently, the subspace
$$\mathcal{D}_\tau := \pi_\tau(\mathcal{B}(N))\cdot[I]$$
is dense in $\mathcal{H}_\tau$. Define $V_{\tau ,\theta} : \mathcal{H}_\tau\to \mathcal{H}_\tau$ via the equation:
$$V_{\tau ,\theta}[b] = [\rho^\mathbb{L}_\theta(b)].$$
Notice that for every $\theta$, the operator $V_{\tau ,\theta}$ extends to a unitary operator in $\mathcal{H}_\tau$. Moreover by a direct calculation we get:
\begin{equation*}
V_{\tau ,\theta}\pi_\tau(b)V_{\tau ,\theta}^{-1} = \pi_\tau(\rho^\mathbb{L}_\theta(b)),
\end{equation*}
meaning that $V_{\tau ,\theta}$ is an implementation of $\rho^\mathbb{L}_\theta$.
It follows from the definitions that we have the following inclusions:
$$V_{\tau ,\theta}(\mathcal{D}_\tau)\subseteq\mathcal{D}_\tau \textrm{ and }\pi_\tau(\mathcal{B}(N))(\mathcal{D}_\tau)\subseteq\mathcal{D}_\tau.$$
Let $\delta$ be an $n$-covariant derivation in $B(N)$ and let $\tau$ be a $\rho^{\mathbb{L}}_\theta-$invariant state. Implementations of $\delta$ in the GNS Hilbert space $\mathcal H_{\tau}$ are defined in the following way.
\begin{defin}
An operator $D_\tau :\mathcal{D}_\tau \to \mathcal{H}_\tau$ is called a {\it covariant implementation} of an $n$-covariant derivation $\delta$ if
$$[D_\tau, \pi_\tau(b)] = \pi_\tau(\delta(b))$$ and
$$V_{\tau,\theta} D_\tau V_{\tau,\theta}^{-1} = e^{in\theta} D_\tau.$$
\end{defin}
Below we find all covariant implementations of $n$-covariant derivations on the two GNS Hilbert spaces $\mathcal{H}_0$ and $\mathcal{H}_{\textrm{Haar}}$ of Proposition \ref{gns_hilbert_spaces}, and establish when they have compact parametrices. We start by recapping Theorem \ref{BDncov} with additional details needed for the formulation of the implementation results.
Any $n$-covariant derivation $\delta$ in $B(N)$ is of the form:
$$\delta(b)=[V^n\eta_n(\mathbb L),b],
$$
where, for $N$ infinite with $n\ne 0$, or for $N$ finite with $N\nmid n$, the operator $\eta_n(\mathbb L)$ is in $B_{\textnormal{diag}}(N)$; hence it comes from a function $h_n$ in $C(\mathbb Z/N\mathbb Z)$, so that we have:
$$\eta_n(\mathbb L)=h_n(q(\mathbb L)).$$
In other cases the increment
$$\gamma_n(\mathbb L):=\eta_n(\mathbb L)-\eta_n(\mathbb L-I)$$
is in $B_{\textnormal{diag}}(N)$, so it can be written as:
$$\gamma_n(\mathbb L)=g_{n}(q(\mathbb L))$$
for some $g_{n}(x)\in C(\mathbb Z/N\mathbb Z)$. It follows that there is a constant $C_n$ such that we have decompositions:
$$g_n(x)=C_n+\tilde g_n(x) \textrm{ and } \eta_n(\mathbb L)=C_n\mathbb L +\tilde\eta_n(\mathbb L),
$$
where the function $\tilde{g}_{n}(x)\in C(\mathbb Z/N\mathbb Z)$ satisfies the property:
\begin{equation*}
\int_{\mathbb{Z}/N\mathbb{Z}}\tilde{g}_{n}(x)\ d_H x = 0,
\end{equation*}
and we have:
$$
\tilde g_n(q(\mathbb L))=\tilde\eta_n(\mathbb L)-\tilde\eta_n(\mathbb L-I).
$$
When $N$ is infinite and $n=0$, in general, it is possible for $\tilde\eta_0(l)$ to be unbounded. However, when $N$ is finite and $N\mid n$ then $\tilde\eta_n(l)$ must be in the finite dimensional vector space $C(\mathbb Z/N\mathbb Z)$, and so we have:
$$\tilde\eta_n(\mathbb L)=\tilde h_{n}(q(\mathbb L))$$
for some $\tilde h_{n}(x)\in C(\mathbb Z/N\mathbb Z)$. All of this notation is used in the following implementation statements.
\begin{theo}
Any covariant implementation $D_{\tau_0} :\mathcal{D}_{\tau_0} \to \ell^2(\mathbb{Z})$ of an $n$-covariant derivation $\delta$ in $B(N)$ is of the form:
\begin{equation*}
D_{\tau_0} =\left\{
\begin{aligned}
&V^n\eta_n(\mathbb{L}) &&\textrm{for }n\ne0\\
&\eta_0(\mathbb{L}) + c\cdot I &&\textrm{for }n=0,\\
\end{aligned}\right.
\end{equation*}
with arbitrary constant $c$ for $n=0$.
If $N$ is infinite and $n\neq0$ or if $N$ is finite and $N\nmid n$, then the operator $D_{\tau_0}$ is bounded, so it does not have compact parametrices. In all other cases, $|\eta_n(l)|\to\infty$ as $l\to\infty$ is a necessary and sufficient condition for $D_{\tau_0}$ to have compact parametrices.
\end{theo}
\begin{proof}
It is easy to see that $\mathcal{D}_{\tau_0}$ coincides with $c_{00}\subseteq \ell^2(\mathbb{Z})$. The formulas for $D_{\tau_0}$ follow from simple calculations, just like in \cite{KMR1}. See also the next theorem for more details of similar calculations in the Haar measure state case.
From the appendix of \cite{KMR1}, $D_{\tau_0}$ has compact parametrices if and only if $(I + D_{\tau_0}^*D_{\tau_0})^{-1/2}$ and $(I + D_{\tau_0}D_{\tau_0}^*)^{-1/2}$ are compact operators. A direct calculation yields the following formula:
\begin{equation*}
I + D_{\tau_0}^*D_{\tau_0} =\left\{
\begin{aligned}
&I + |\eta_n|^2(\mathbb{L}) &&\textrm{for }n\neq0 \\
&(1+|c|^2)\cdot I + 2\,\textrm{Re}\left(\bar{c}\,\eta_0(\mathbb{L})\right) + |\eta_0|^2(\mathbb{L}) &&\textrm{for }n=0, \\
\end{aligned}\right.
\end{equation*}
which is a diagonal operator for all $n$. Therefore, it follows that $(I + D_{\tau_0}^*D_{\tau_0})^{-1/2}$ is compact if and only if $|\eta_n(l)|$ goes to infinity, which in particular happens when $C_n\ne 0$. An analogous computation works for $(I + D_{\tau_0}D_{\tau_0}^*)^{-1/2}$, thus completing the proof.
\end{proof}
Similar analysis can also be performed for implementations of $n$-covariant derivations in the GNS Hilbert space corresponding to the invariant state on $B(N)$ determined by the Haar measure on $\mathbb Z/N\mathbb Z$.
\begin{theo}
For every covariant implementation $D_{\tau_{\textrm{Haar}}} : \mathcal{D}_{\tau_{\textrm{Haar}}} \to L^2(\mathbb Z\times\mathbb{Z}/N\mathbb{Z})$ of $\delta$ there exists a function $\psi(x)\in L^2(\mathbb{Z}/N\mathbb{Z},d_Hx)$ such that $D_{\tau_{\textrm{Haar}}}$ is of the form:
\begin{equation*}
\left(D_{\tau_{\textrm{Haar}}}f\right)(m,x) = h_{n}(x+m-n)f(m-n,x) + (\psi(x) - h_{n}(x))f(m-n,x+n),
\end{equation*}
if $N$ is infinite, $n\neq0$, or if $N$ is finite, $N\nmid n$, and
\begin{equation*}
\left(D_{\tau_{\textrm{Haar}}}f\right)(m,x) = \left(C_0 m+ (\tilde{g}_{0}(x+m-1)+\cdots+\tilde{g}_{0}(x)) + \psi(x)\right)f(m,x),
\end{equation*}
if $N$ is infinite, $n=0$, or
\begin{equation*}
\left(D_{\tau_{\textrm{Haar}}}f\right)(m,x) = \left(C_n\cdot(m-n)+\tilde h_{n}(x+m) - \tilde h_{n}(x) +\psi(x) \right)f(m-n,x),
\end{equation*}
if $N$ is finite, $N\mid n$.
For $N$ finite and $N\mid n$ a necessary and sufficient condition for $D_{\tau_{\textrm{Haar}}}$ to have compact parametrices is $C_n\neq0$. In all other cases $D_{\tau_{\textrm{Haar}}}$ does not have compact parametrices.
\end{theo}
\begin{proof}
First notice that $[I] = \chi_0(m,x)$ where $\chi_0(m,x) =1 $ when $m=0$ and zero for all other values of $m$. Given $b\in\mathcal{B}(N)$ we compute as follows:
\begin{equation*}
\begin{aligned}
D_{\tau_{\textrm{Haar}}}[b] &= D_{\tau_{\textrm{Haar}}}\pi_{\textrm{Haar}}(b)[I] = [D_{\tau_{\textrm{Haar}}},\pi_{\textrm{Haar}}(b)][I] + \pi_{\textrm{Haar}}(b)D_{\tau_{\textrm{Haar}}}[I] \\
&=\pi_{\textrm{Haar}}(\delta(b))[I] + \pi_{\textrm{Haar}}(b)D_{\tau_{\textrm{Haar}}}[I].
\end{aligned}
\end{equation*}
Applying the covariance condition $V_{\tau_{\textrm{Haar}},\theta}D_{\tau_{\textrm{Haar}}}V_{\tau_{\textrm{Haar}},\theta}^{-1} = e^{in\theta}D_{\tau_{\textrm{Haar}}}$ to $[I] = \chi_0(m,x)$ shows that there exists a function $\psi(x)\in L^2(\mathbb{Z}/N\mathbb{Z},d_Hx)$ such that
$$D_{\tau_{\textrm{Haar}}}\chi_0(m,x) = \psi(x)(\pi_{\textrm{Haar}}(V^n)\chi_0)(m,x).$$
It follows that we have the formula:
\begin{equation*}
\pi_{\textrm{Haar}}(b)D_{\tau_{\textrm{Haar}}}[I](m,x) = \psi(x)(\pi_{\textrm{Haar}}(bV^n)\chi_0)(m,x)=\psi(x)[b](m-n, x+n),
\end{equation*}
because of the following calculation with Fourier components of $b$:
$$bV^n=\sum_{m \in \mathbb Z} V^{m+n} f_{m} (q(\mathbb L)+n\cdot I)=\sum_{m \in \mathbb Z} V^{m} f_{m-n} (q(\mathbb L)+n\cdot I).
$$
This implies the following general expression of the operator $D_{\tau_{\textrm{Haar}}}$:
\begin{equation*}
\begin{aligned}
(D_{\tau_{\textrm{Haar}}}[b])(m,x) &=
\left[\sum_{k\in\mathbb Z}V^{k}\left(\eta_{n}(\mathbb{L} + (k-n)\cdot I)b_{k-n}(\mathbb{L}) - \eta_{n}(\mathbb{L})b_{k-n}(\mathbb{L}+n\cdot I)\right)\right](m,x)\\
&+\psi(x)[b](m-n, x+n).
\end{aligned}
\end{equation*}
If $N$ is infinite and $n\neq 0$ or if $N$ is finite and $N\nmid n$ then $\eta_{n}(\mathbb{L})$ is in $B_{\textrm{diag}}(N)$ hence it comes from a function $h_n(x)$ in $C(\mathbb Z/N\mathbb Z)$. Consequently, we have the formula:
\begin{equation*}
\left(D_{\tau_{\textrm{Haar}}}[b]\right)(m,x) = h_{n}(x+m-n)[b](m-n,x) + (\psi(x) - h_{n}(x))[b](m-n,x+n).
\end{equation*}
The terms of the above expression containing $h_n$ define bounded operators, and hence $D_{\tau_{\textrm{Haar}}}$ has compact parametrices if and only if the remaining term has compact parametrices by the results in the appendix of \cite{KMR1}. That term is unitarily equivalent to the operator:
$$f(m,x)\mapsto \psi(x)f(m,x),
$$
which, for every $m$, is the operator of multiplication by an $L^2$-function in $L^2(\mathbb Z/N\mathbb Z,d_Hx)$; therefore $D_{\tau_{\textrm{Haar}}}$ cannot have compact parametrices.
In the case when $N$ is infinite and $n=0$, there is in general no function $h_0(x)$ such that $\eta_0(\mathbb L)=h_0(q(\mathbb L))$. We therefore write the difference $\eta_0(\mathbb L+m\cdot I) - \eta_0(\mathbb L)$ in terms of $\gamma_0(\mathbb L)=\eta_0(\mathbb L)-\eta_0(\mathbb L-I)=g_0(q(\mathbb L))=C_0\cdot I+\tilde g_0(q(\mathbb L))$ and obtain the following expression:
\begin{equation*}
\left(D_{\tau_{\textrm{Haar}}}[b]\right)(m,x) = \left(C_0 m+ (\tilde{g}_{0}(x+m-1)+\cdots+\tilde{g}_{0}(x)) + \psi(x)\right)[b](m,x).
\end{equation*}
As in the first case, since for each fixed $m$ the above formula defines a diagonal operator of multiplication by an $L^2$-function, it is again impossible for $D_{\tau_{\textrm{Haar}}}$ to have compact parametrices.
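For transparency, the telescoping computation behind the preceding expression is the following (a sketch, written for $m>0$; negative $m$ is treated analogously):
$$\eta_0(\mathbb L+m\cdot I)-\eta_0(\mathbb L)=\sum_{j=1}^{m}\gamma_0(\mathbb L+j\cdot I)=C_0\, m\cdot I+\sum_{j=1}^{m}\tilde g_0(q(\mathbb L)+j\cdot I),
$$
which, evaluated in the $(m,x)$ variables, produces the sum $\tilde{g}_{0}(x+m-1)+\cdots+\tilde{g}_{0}(x)$ appearing above.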
Finally in the last case, when $N$ is finite and $N\mid n$, the Hilbert space $L^2(\mathbb Z/N\mathbb Z,d_Hx)$ is now a finite dimensional Hilbert space. Hence, we can decompose $\eta_n(\mathbb L)$ as follows:
$$\eta_n(\mathbb L)=C_n\mathbb L+ \tilde h_n(q(\mathbb L)).$$
Using $N$-periodicity in $x$ we arrive at the expression:
\begin{equation*}
\left(D_{\tau_{\textrm{Haar}}}[b]\right)(m,x) = \left(C_n\cdot(m-n)+\tilde h_{n}(x+m) - \tilde h_{n}(x) +\psi(x) \right)[b](m-n,x).
\end{equation*}
Notice that since $\tilde h_{n}(x)$ is $N$-periodic then $\tilde h_{n}(x+m)$ is uniformly bounded in $m$ and $x$.
It now follows that $D_{\tau_{\textrm{Haar}}}$ has compact parametrices if and only if $C_n\neq0$. This completes the proof.
\end{proof}
\section{Introduction}\label{sec1}
Liquid-liquid phase separation (LLPS) has emerged as a fundamental mechanism by which eukaryotic cells organize themselves into membraneless compartments {called biomolecular condensates} that carry out important cellular functions \citep{brangwynne2009germline, hyman2014liquid, alberti2019considerations}. Despite its recognition over the last decade as a means of self-organization in biology, there is an emerging skepticism regarding the supporting evidence for LLPS \citep{mcswiggen2019evaluating, Leslie336}.
A major criticism levelled at the current approaches to study LLPS is their reliance on qualitative measures and the lack of concomitant quantitative modeling.
For instance, Fluorescence Recovery After Photobleaching (FRAP) -- a technique used commonly to ascertain the liquid-like nature of {biomolecular condensates} -- measures the time it takes for photobleached molecules to diffuse out of these {liquid droplets}.
However, the observed diffusion times from FRAP studies \citep{mcswiggen2019evaluating} show a huge spread (from a few seconds to minutes) and are likely to be sensitive to the {droplet's complex environment (such as the elasticity of biopolymer networks in the medium surrounding the droplet)}, yet this connection is poorly understood.
Further, qualitative approaches are not immune to experimental limitations (like the diffraction limit), post-processing induced artifacts, and the presence of alternate mechanisms that lead to similar outcomes.
Given the small size of {biomolecular condensates} and the complex nature of the physiological environment inside cells, purely qualitative observations can be misleading without accounting for the various physical effects that are significant at cellular length scales.
Therefore, there is a need to supplement the current methods of investigation with rigorous quantitative analyses that establish the crucial link between {the medium's} material properties and the outcome of LLPS, and can help guide and interpret future studies.
Recent experiments have revealed that the sea of \textit{elastic} networks contained in cells -- which imparts to them an elastic-solid-like mechanical resistance to deformation -- plays a very important role in LLPS by providing elastic resistance to the growth of {biomolecular condensates} \citep{zhang2020mechanical, lee2021chromatin, boddeker2021non, fernandez2021putting}.
This mechanism may hold answers to why LLPS inside the cellular milieu leads to tightly regulated {condensate} sizes and associated dynamics, unlike the classic example of oil-water phase separation.
{Synthetic polymer based systems have provided a simpler \textit{droplet-in-polymer} analog to \textit{condensate-in-cell} systems to experimentally and theoretically study the effects of elastic resistance \citep{style2018liquid, rosowski2020elastic, rosowski2020elastic2, ronceray2021liquid, vidal2021cavitation, mukherjee2021statistical, biswas2021thermodynamics,paulin2021fluid, wei2020modeling}.
These systems have raised some intriguing questions.}
Existing studies frequently treat droplet growth as a cavitation process \citep{ball1982discontinuous}, which assumes a constant, {size-independent pressure inside the droplet}, and thus does not energetically distinguish between a single large droplet and multiple small droplets of the same overall volume. Why, then, do we see the latter and not the former?
Furthermore, how does elasticity affect the commonly measured experimental quantities, such as size of the droplets, their coarsening dynamics, and their spatial localization?
{In our previous work on liquid-liquid phase separation inside synthetic polymers \citep{kothari2020effect}, we showed that the elastic resistance imposed by the medium delays phase separation, arrests droplet growth, and can even drive flux of liquid from regions of high stiffness to low stiffness.
For the specific problem at hand, our analysis in \citep{kothari2020effect} was restricted to the assumption of infinite domains and employed a simple transport law where liquid flux was assumed to be driven by concentration gradients.}
{To quantitatively understand the role of elasticity in intracellular phase separation, where the material properties and length scales differ substantially from synthetic systems studied so far in the literature, there is a need to port the knowledge from synthetic systems to biological systems and to address the above limitations.}
In particular, in recent years there has been an accumulation of evidence that strain stiffening plays an important role in various functions of the cell \cite{li2021nonlinear,han2018cell,van2016strain,hu2019high}. {Could this effect be crucial, even at the cellular length scales,} in explaining the size regulation of membraneless organelles?
Motivated by these questions, we develop a theoretical model that builds on \citep{kothari2020effect}, and apply it to explain experimental observations in relevant cellular systems. We then extend the analysis beyond the experimental range to broaden our understanding of the constitutive sensitivities.
\section{Model}\label{sec2}
{The model presented in this section closely follows the authors' previous work \citep{kothari2020effect} with a key point of departure: in this work we employ a generalized kinetic model by prescribing the flux as proportional to its thermodynamic conjugate, i.e. gradients in chemical potential\footnote{{The chemical potential is defined here as the change in Gibbs free energy of a system when a molecule is removed or added to it at constant temperature and pressure.}}, as opposed to prescribing the flux proportional to concentration gradients.
This kinetic law allows us to model a much broader range of material heterogeneities and is capable of describing both \textit{downhill} (from higher to lower concentration) and \textit{uphill} diffusion (from lower to higher concentration), thereby widening the applicability of the theory to real-world systems.}
Our model system {comprises} an elastic network of cross-linked polymer, permeated by a liquid mixture that is made of the following two components: free liquid of the corresponding un-crosslinked or partially crosslinked polymer chains \citep{jensen2015wetting} (denoted by A), and another liquid of a different species (denoted by B).
The model system is a {simplified representation} of cellular milieus that show phase separation.
{For example, in cells,} proteins and RNAs are mixed inside the cytoplasm, which contains elastic network as well as many other liquids.
Proteins and RNAs phase separate to create P granules, that segregate but coexist with cytoplasm \citep{brangwynne2009germline, hyman2014liquid}.
Similarly, inside the nucleus, the nucleolar protein fibrillarin (FIB-1) phase separates from the nucleoplasm to create nucleoli \citep{berry2015rna}.
Typically, there are two main time scales associated with phase separation in heterogeneous materials.
{\textit{Short timescale:}} as the mixture {(supersaturated with liquid B) is stimulated, droplets of liquid B} nucleate and grow inside the elastic matrix before achieving a quasi-equilibrium size, which is governed by a balance between the local chemical and elastic properties of the mixture.
If the nucleation and growth process is rapid, the global heterogeneity does not influence the local behavior of the droplets, which involves only short range migration of liquid and thus only depends on local elastic properties.
{\textit{Long timescale:}} heterogeneity can dominate transport and drive long-range migration of liquid across the material at a longer timescale.
In the current work, we will restrict our attention to scenarios where the timescales are well separated. This allows us to treat nucleation and growth separately from the ensuing dynamics.
We make the following assumptions in our model.
We consider situations in which the matrix does not swell appreciably in the entire process, and thus changes in elastic energy due to the swelling of the matrix can be neglected.
Accordingly, we model the process as mixing between liquids A and B, where the cross-linked network only provides elastic resistance to droplet growth, and only liquid B can migrate spatially.
Restricting our attention to the dilute limit, we also ignore the strain energy due to elastic interaction among droplets,
{and we assume the droplet distribution to be monodisperse.}
The concentration of liquid B in the mixture is denoted by $\phi$, and is defined with respect to the matrix volume (i.e. excluding the droplet volume).
Concentration of the liquid in the droplet phase is denoted by $\phi_D =\frac{4\pi}{3}r^3 n_d$, where $r$ is the radius of the droplets, and $n_{d}$ is the number of droplets per unit volume; $\phi_D$ is defined with respect to the total volume within a representative element.
As a supersaturated mixture of initial concentration $\phi^{sup}$ phase-separates, the droplets nucleate and start to grow in the elastic matrix.
Phase separation is driven by the lowering of the mixing free energy ($\Delta \bar{G}_{mix}$). At later stages of phase separation, reduction of surface energy ($\Delta \bar{G}_{sur}$) drives coarsening; however, the growth and coarsening of droplets incurs substantial elastic energy ($\Delta \bar{G}_{el}$).
This energy competition drives the system to choose a droplet size that minimizes the total free energy. In terms of driving forces, at equilibrium, the chemical potential of liquid B $(\mu)$ in the matrix balances the chemical potential of the droplet $(\mu_D)$.
The change in total free energy density of the system, calculated per unit volume, can be expressed as the sum of contributions from mixing, surface, and elastic energies
\begin{multline}\label{eq:total-free-energy}
\Delta \bar{G}(\phi,\phi_D) = \Delta \bar{G}_{mix}( \phi, \phi_D) + \Delta \bar{G}_{sur}(\phi_D) \\ +\Delta \bar{G}_{el}(\phi_D),
\end{multline}
respectively. In this work we will use the Flory–Huggins solution theory \citep{flory1953principles} to represent changes in mixing free energy,\footnote{We assume that the number of lattice sites occupied by a single chain of the liquid $N_A \gg 1$ for simplicity.} $$ \Delta \bar{G}_{mix}= \frac{k T}{\nu_m}(1-\phi_D)(\phi \ln\phi+ \chi(T) \phi(1-\phi)),$$ where $k$ denotes the Boltzmann constant, $T$ denotes the temperature, $\nu_m$ denotes the molecular volume of liquid B, and $\chi$ denotes the Flory-Huggins parameter{, which can be inferred by energy minimization for a given saturation concentration of liquid B} (see Appendix A). Changes in surface energy are written as $ \Delta \bar{G}_{sur}(\phi_D)= 4\pi r^2\Gamma n_d$, where $\Gamma$ is the surface energy between the two liquids, and $\Delta \bar{G}_{el}(\phi_D)=\frac{4\pi}{3}r^3W(r) n_d$, where $W(r)$ denotes the elastic strain energy density due to growth of a single droplet (see Appendix B).
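Note that, using $\phi_D=\frac{4\pi}{3}r^3 n_d$, the surface term can equivalently be written as $\Delta \bar{G}_{sur}=3\Gamma\phi_D/r$; this form is used in the minimization in Appendix C.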
In the short timescale, before long-range migration occurs, the local concentration of the absorbed liquid, must be conserved, namely
\begin{equation}\label{eq:mass-conservation}
\phi^{sup} = \phi(1-\phi_D) +\phi_D,
\end{equation}
{where $\phi^{sup}$ denotes the concentration of the supersaturated mixture prior to phase separation.}
With the specific response functions chosen, the short-timescale equilibrium droplet size can now be calculated by minimizing the total free energy \eqref{eq:total-free-energy} subject to the mass conservation constraint \eqref{eq:mass-conservation}.
Minimizing the free energy is equivalent to finding the concentration, $\phi$, that equilibrates the chemical potential of the liquid in the matrix {($\mu$)} with the chemical potential of the liquid in the {droplet} {($\mu_D$).}
{When the droplets and matrix are in equilibrium (minimum energy state), exchange of a molecule will not alter the total energy. In other words, their chemical potentials are the same. } The derivative
${\rm d}\Delta \bar{G}/{\rm d} \phi =0$ gives
\begin{eqnarray}\label{eq:quasi-eqm1}
\begin{split}
\underbrace{kT(\ln\phi + (1-\phi) +\chi(1-\phi)^2)}_{\mu}= \quad\quad \\ \underbrace{\nu_m\left(\frac{2\Gamma}{r}+ W(r) +\frac{r}{3}W'(r) \right)}_{\mu_D},
\end{split}
\end{eqnarray}
where we have used ${\rm d}(\phi(1-\phi_D)+\phi_D)=0$ from \eqref{eq:mass-conservation} (see Appendix C for details).
{Here, $\mu$ arises from changes in the free energy of the mixed state with changes in the concentration, while $\mu_D$ arises from changes in the {droplets'} surface energy and elastic energy upon a change in their size.}
The solution of equation \eqref{eq:quasi-eqm1} along with mass conservation constraint \eqref{eq:mass-conservation}, for each homogeneous region in the problem, fully describes the quasi-equilibrium state of the system, and serves as the initial condition for the long timescale dynamics of heterogeneous systems.\footnote{For a detailed treatment of the short timescale behavior from nucleation to growth of droplets, we refer the readers to \citep{kothari2020effect}.}
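For concreteness, the following is a minimal numerical sketch (in Python) of this quasi-equilibrium calculation, solving \eqref{eq:quasi-eqm1} together with \eqref{eq:mass-conservation} with the Mooney-Rivlin form of $W(r)$ from Appendix B. The parameter values follow Fig. 1 where available; $kT$, the supersaturation, and all function names are illustrative assumptions rather than part of any established package.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

kT      = 4.1e-21    # J, thermal energy (room temperature, assumed)
nu_m    = 1e-23      # m^3, molecular volume of liquid B
Gamma   = 5e-7       # N/m, surface energy
chi     = 2.14       # Flory-Huggins parameter
n       = 0.95       # strain-stiffening parameter
E       = 100.0      # Pa, matrix stiffness
n_d     = 1e13 * E   # m^-3, droplet number density, n_d = alpha*E
r0      = 1e-7       # m, stress-free radius
phi_sup = 0.10       # supersaturation (illustrative)

def W(r):
    # Mooney-Rivlin work of expanding a droplet from r0 to r (Appendix B)
    s = r0 / r
    return (n*E*(5/6 - s - s**3/3 + s**4/2)
            + (1 - n)*E*(1/(2*s) - 1/3 - s**2 + 5*s**3/6))

def dW(r):
    h = 1e-6 * r                       # central-difference derivative W'(r)
    return (W(r + h) - W(r - h)) / (2*h)

def residual(r):
    # mu(phi) - mu_D(r); a root solves the quasi-equilibrium condition
    phi_D = 4*np.pi/3 * r**3 * n_d
    phi   = (phi_sup - phi_D) / (1 - phi_D)    # mass conservation
    mu    = kT*(np.log(phi) + (1 - phi) + chi*(1 - phi)**2)
    mu_D  = nu_m*(2*Gamma/r + W(r) + r/3*dW(r))
    return mu - mu_D

# bracket the root between r0 and the radius exhausting the supersaturation
r_max = (3*phi_sup/(4*np.pi*n_d))**(1/3)
r_eq  = brentq(residual, 1.001*r0, 0.999*r_max)
print("equilibrium droplet radius: %.2f micron" % (r_eq*1e6))
\end{verbatim}
With these inputs the root lies in the micron range, consistent with the droplet sizes shown in Fig. 1.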
At the longer timescale, spatial heterogeneities in material properties can give rise to long-range migration in phase-separated systems.
{Rather than modeling the individual droplets, we consider a 1D aggregate representation with variations in droplet size and liquid concentration in the matrix represented by the two field variables $\phi_D(x)$ and $\phi(x)$, respectively. Such a 1D description is consistent with the symmetry of the problems we will study in this work.}
{Next, prescribing the flux $J$ {of liquid B} as proportional to its thermodynamic conjugate, i.e. gradients in chemical potential, we write}
\begin{equation}\label{eq:dynamics-flux}
J = -\frac{D\phi}{kT} \nabla \mu,
\end{equation}
where $D$ is the diffusivity of the liquid (upon linearization \eqref{eq:dynamics-flux} reduces to the commonly used Fick's law).
As the liquid starts to migrate across the matrix, the droplet-matrix equilibrium is disturbed $(\mu_D\neq\mu)$ and a cascading effect follows where droplet size may increase or decrease.
This can be modeled by thinking of droplets as a source/sink term which gives the following equations for the dynamics of the system,
\begin{eqnarray}\label{eq:dynamics-pde}
&\frac{\partial \phi(x,t)}{\partial t} + {\rm Div} J = s(x,t) \\
&\frac{\partial \phi_D(x,t)}{\partial t} = -s(x,t).
\end{eqnarray}
Here, $s(x,t)$ is the source term that captures the behavior of the droplets, which can dissolve back into the matrix to replenish it or grow in size by absorbing the excess liquid in the matrix. This process is driven by the difference in chemical potential {of liquid B}; a thermodynamically consistent form of the source term is thus chosen as,
\begin{eqnarray}\label{eq:source}
s(x,t) = \frac{K}{\nu_m}\left(\mu_D - \mu \right)H(\phi_D), \quad \mathcal{K} =\frac{ kT L^2K}{D\nu_m},
\end{eqnarray}
where $H(\phi_D)$ is the Heaviside function to ensure that the source is exhausted at $\phi_D = 0$, $L$ is a characteristic length scale of the system, and $K$ is a material property, which in its non-dimensionalized form, $\mathcal{K}$, is {called the \textit{dissolution number}} — it quantifies the relative eagerness of the droplet to give out the liquid to the matrix \citep{kothari2020effect}.
{The specific value of $K$ is not known and estimating it remains an area of future work.}
The kinetics of the process are governed primarily by the dissolution number --- a larger dissolution number makes the source term bigger and makes the droplets respond faster to any changes in the matrix.
Equations \eqref{eq:dynamics-flux}-\eqref{eq:source} complete the set of governing equations, which must be supplemented with appropriate initial and boundary conditions to constitute a well-defined problem.
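A minimal method-of-lines sketch of these governing equations (explicit time stepping on a uniform 1D grid with no-flux boundaries, as used for Case II below; an infinite domain, as in Case I, can be mimicked by a sufficiently large box) is given below. It reuses \texttt{W}, \texttt{dW} and the material parameters from the previous sketch; the grid, time step, and the assumed value $\mathcal{K}=1$ are illustrative.
\begin{verbatim}
L, Ngrid = 50e-6, 200                    # domain size (m) and grid points
x  = np.linspace(-L/2, L/2, Ngrid)
dx = x[1] - x[0]
dt = 1e-4                                # s, small enough for stability
Dif  = 5e-11                             # m^2/s, diffusivity of liquid B
Kcal = 1.0                               # dissolution number (assumed)
K    = Kcal * Dif * nu_m / (kT * L**2)   # dimensional rate constant

def mu(phi):
    return kT*(np.log(phi) + (1 - phi) + chi*(1 - phi)**2)

def mu_D(phi_D):
    r = (3*np.maximum(phi_D, 1e-30)/(4*np.pi*n_d))**(1/3)
    return nu_m*(2*Gamma/r + W(r) + r/3*dW(r))

def step(phi, phi_D):
    m = mu(phi)
    J = np.zeros(Ngrid + 1)              # fluxes at cell faces; J=0 at ends
    J[1:-1] = -(Dif*0.5*(phi[1:] + phi[:-1])/kT)*(m[1:] - m[:-1])/dx
    s = (K/nu_m)*(mu_D(phi_D) - m)*(phi_D > 0)   # droplet source/sink
    phi_new   = phi + dt*(-(J[1:] - J[:-1])/dx + s)
    phi_D_new = np.maximum(phi_D - dt*s, 0.0)
    return phi_new, phi_D_new
\end{verbatim}
The fields are initialized from the short-timescale quasi-equilibrium state of each homogeneous region, after which repeated calls to \texttt{step} evolve the long-timescale transport.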
In the following, we will apply the model to two cases of interest.
\section{Results and Discussion}\label{sec3}
\subsection{Case I - Size Regulation and Coarsening}\label{subsec2}
The size of various cellular organelles must be tightly regulated for their proper function.
However, a distribution of droplets resulting from phase separation typically tends to coarsen over time, where larger droplets grow at the expense of smaller droplets --- a process known as Ostwald ripening. How do cells then achieve a stable distribution of multiple droplets?
A few possible explanations have been proposed.
For instance, chemical reactions taking place inside the droplets can potentially arrest the Ostwald ripening \citep{zwicker2015suppression}.
Alternatively, certain cellular components can act as surfactants to stabilize the droplet size by reducing the surface energy of the droplet-matrix interface \citep{cuylen2016ki}.
In the current work, we {explore} another mechanism that can stabilize the droplet distribution --- the elastic resistance to the growth and merger of droplets imposed by the medium.
{This mechanism was considered for phase separation in synthetic polymers in \citep{style2018liquid, kothari2020effect}.
Here, we extend it to biopolymers and show that this mechanism can still be important at the scales relevant for the cells.}
Biopolymers often show strain-stiffening behavior, where they progressively stiffen as they are stretched.
Using our model and specializing $W(r)$ for strain-stiffening materials (see Appendix B), we study how elasticity regulates the size of droplets.
To illustrate the results, we select representative material values for chromatin -- a biopolymer found in nucleus -- from the literature \citep{ronceray2021liquid, zhang2020mechanical, berry2015rna}.
{Since the values of strain-stiffening parameter $n$ and the number density of droplets $n_d$ are not reported in these studies, we select $n$ to represent moderate strain-stiffening (with more values reported in Appendix D), and choose $n_d$ to be proportional to the elastic modulus, as motivated by the experimental finding from Style et al. \citep{style2018liquid}. Accordingly, we set $n_d = \alpha E$, where the choice of $\alpha = 10^{13}$N$^{-1}$m$^{-1}$ is made to obtain droplet sizes in the ballpark of those reported in \citep{zhang2020mechanical}. }
The droplet sizes at short timescales (or in absence of heterogeneity that can lead to long timescale migration) are shown in Fig. \ref{fig:droplet_size} for two different supersaturations and moderate strain-stiffening.
In contrast to the commonly assumed neo-Hookean response, in a strain-stiffening matrix the elastic energy cost of expanding a single large droplet is higher than growing multiple small droplets for the same total volume. Thus, strain-stiffening provides a clear explanation for the observed multi-droplet state.
\begin{figure}[ht!]
\centering
\makebox[0.45\textwidth][c]{\includegraphics[width=0.4\textwidth]{Figure1_droplet_size_v2.pdf}}
\caption{{\textbf{Typical droplet size as a function of chromatin stiffness at short timescales.} Material Properties: $n=0.95, \Gamma = 5\times10^{-7}$ Nm$^{-1}$, $\nu_m = 1\times10^{-23}$ m$^{3}$, $\alpha = 10^{13}$ N$^{-1}$m$^{-1}$, $\chi = 2.14$. Inset: optogenetically nucleated droplets in chromatin network show typical size of a few microns \citep{zhang2020mechanical}.}} \label{fig:droplet_size}
\end{figure}
\begin{figure}[h!]%
\centering
\includegraphics[width=0.48\textwidth]{fig_withschematic_coarsening_v3.pdf}
\caption{{\textbf{Elasticity-driven coarsening in heterogeneous systems.} Normalized droplet radius, $\tilde{R}$, for droplets at the interface on the right side. Material Properties: $E = 100$ Pa (Left); $E = 50$ Pa (Right); $\Gamma = 5\times10^{-7}$ Nm$^{-1}$, $\nu_m = 5\times10^{-24}$ m$^{3}$, $D = 5\times10^{-11}$m$^{2}$s$^{-1}$, $\alpha = 10^{13}$ N$^{-1}$m$^{-1}$, $\chi = 2.14$ (both sides).}}\label{fig:coarsening}
\end{figure}
\begin{figure*}[h!]%
\centering
\includegraphics[width=\textwidth]{Fig3.pdf}
\caption{{\textbf{Dynamics of coarsening in heterogeneous systems.} Same system as shown in the schematic in Fig. 2. Material Properties: $E = 100$ Pa (Left); $E = 50$ Pa (Right); $\Gamma = 5\times10^{-7}$ Nm$^{-1}$, $\nu_m = 5\times10^{-24}$ m$^{3}$, $D = 5\times10^{-11}$m$^{2}$s$^{-1}$, $\alpha = 10^{13}$ N$^{-1}$m$^{-1}$, $\chi = 2.14$ (both sides). (A) Rate of coarsening for different values of strain stiffening and dissolution numbers; $\tilde{R}$ denotes the normalized droplet radius. Rate of Ostwald ripening is shown for comparison in blue. (B) Spatial distribution of droplet sizes for $t= 120$ mins. Also see Appendix E for coarsening behavior at longer times.}}\label{fig:coarseningdynamics}
\end{figure*}
We find that even before onset of long timescale effects, the response of the system is highly nonlinear in its dependence on the constitutive properties.
{By choosing a reasonable value for $\alpha$, the droplet sizes obtained from the model are in good agreement with the experimentally observed range of droplet sizes \cite{zhang2020mechanical}. This supports the hypothesis that elastic resistance from the medium can play an important role in size regulation of the droplets. However, experimental measurements of $n$ and $n_d$ are required to provide accurate predictions}.
Beyond the intuitive trends that are captured (i.e. that a stiffer medium results in smaller droplets), the rapid decay of droplet radius, from a maximum value to a nearly unaffected size, indicates a narrow zone of high sensitivity within a range of stiffnesses that are accessible to active biological materials.
Namely, biological systems can use this process as a \textit{mechanical switch}, whereby activating relatively small changes in stiffness can induce large changes in droplet size. (Also see Appendix D for strain-stiffening effects.)
While at the long timescale the surface tension driven coarsening is hindered by the presence of elastic networks, these systems may show limited coarsening driven by gradients in stiffness.
To study this coarsening behavior in an elastically heterogeneous medium, {we take inspiration from \citep{rosowski2020elastic} and} construct the simplest heterogeneous setup by considering two homogeneous connected regions of differing stiffnesses, as shown in Fig. \ref{fig:coarsening}.
{We require continuity of flux at the interface, and consider an infinite domain (i.e. $x\in(-\infty,\infty)$).}
After the initial nucleation and growth of droplets is complete, a longer timescale transport emerges that causes the liquid to flow along the direction of decreasing chemical potential.
Fig. \ref{fig:coarsening} shows the results for the normalized droplet size {at the interface, denoted by $\tilde{R}$,} evolving with time, {where we use the initial droplet radius -- at the end of the short timescale -- to normalize: $\tilde{R} = r(x=0^+, t)/r(x=0^+, t=0)$.}
For strain-stiffening media, the droplets eventually reach a final size
{(Figs. E3 and E4 in Appendix E show coarsening plots up to 1200 minutes that show the size convergence).}
{Figs. \ref{fig:coarseningdynamics}(A1-A3) show the rate of coarsening for a range of strain-stiffening parameters and dissolution numbers, {where the blue slopes show the Ostwald ripening rate for comparison.}
At the initial times, the coarsening rate follows the Ostwald ripening power law ($\Tilde{R}^3 -1 \propto t$), and diverges from it at longer times.
Cases with $\mathcal{K}=10$ are the first to show a significant deviation because of their faster dynamics, whereas the cases with smaller $\mathcal{K}$ will take longer to diverge.
In particular, we find that even for moderate levels of strain-stiffening (i.e. Figs. \ref{fig:coarseningdynamics}(A2, A3)), significant deviation from Ostwald ripening can occur and the coarsening is eventually arrested.
For a neo-Hookean material $(n=1)$, the coarsening continues indefinitely. Nonetheless, it eventually becomes slower than Ostwald ripening.}
{Finally, Figs. \ref{fig:coarseningdynamics}(B1-B3) show the spatial distribution of droplet sizes for all the different cases at $t =120$ minutes.}
{Fig. \ref{fig:coarsening} and Figs. E2-E4 (in Appendix E) also highlight the role of the dissolution number: since it controls the dynamics of matrix-droplet transport, a higher dissolution number leads to faster coarsening, and the droplets arrive at their final size faster (in the case of a strain-stiffening medium).}
\subsection{Case II - Spatial Localization of Droplets in Finite Domains}\label{subsubsec2}
Cells can spatially localize condensates by employing phase separation.
{For instance, Brangwynne et al. \citep{brangwynne2009germline} showed that in \textit{Caenorhabditis elegans}, P granules, starting from a uniformly distributed state, localize to the posterior of the $\sim 50 \mu m$-long cell over a period of $\sim$ 10 minutes.}
Cells are also highly dynamic and regulate their internal structure as well as mechanical properties both spatially and temporally \citep{heidemann2004towards}.
These two observations naturally raise the question: how does the change of stiffness in the {cellular} medium impact the localization of the droplets\footnote{{While phase separation in \citep{brangwynne2009germline} was shown to be controlled by concentration of polarity proteins, we use spatiotemporal localization of condensates as a motivating example to explore the role of elasticity.}}?
To answer this question, we consider a scenario where the system develops a stiffness gradient over time and study its impact on the localization of droplets.
Initially, the system is homogeneous and has a uniform distribution of droplets, the size of which can be obtained by solving equations \eqref{eq:mass-conservation} and \eqref{eq:quasi-eqm1}.
Over a timescale of minutes, the system develops a linear gradient in stiffness, where the stiffness of the left end {($x = -L/2$)} starts to increase, while that of the right end {($x = L/2$)} remains fixed: {$E(x,t) = 100 - 10(x/L-1/2)t$ Pa, where $t$ is the time in minutes and $x \in [-L/2,L/2].$}
By establishing a stiffness gradient, the system alters the chemical potential of the droplets, which now exceeds that of the matrix, thus disturbing the equilibrium and causing the liquid to migrate from left to right (along the decreasing stiffness).
{We apply a \textit{no flux} boundary condition on both the ends (i.e. $J=0$ at $x=-L/2, L/2$).
{Note that we take $n_d$ to be constant during this process, corresponding to $E = 100$Pa.}
In the initial process of nucleation and growth, the regions near the droplet-matrix interface experience high strains and, thus, inevitable damage. These locations also serve as preferential spots for droplet growth. Furthermore, the elastic networks prevent the droplets from coalescing. Thus, once the short timescale equilibrium has been achieved, $n_d$ remains the same.}
\begin{figure}[ht]
\centering
\makebox[0.5\textwidth][c]{\includegraphics[width=0.48\textwidth]{localization_figure_aug11_2022.pdf}}
{ \caption{\textbf{Spatial localization of droplets by stiffness gradient.} A linear stiffness gradient is established across a 50 $\mu$m region, which is previously homogeneously stiff. Material Properties: $E(x,t) = 100 - 10(x/L-1/2)t$ Pa where $t$ is in minutes, $n_d = 10^{15}$ m$^{-3}$, $\Gamma = 5\times10^{-7}$ Nm$^{-1}$, $\nu_m = 5\times10^{-24}$ m$^{3}$, $D = 5\times10^{-11}$m$^{2}$s$^{-1}$, $\chi = 2.14$, $n = 0.95$.} }\label{fig:stiffness_gradient}
\end{figure}
Fig. 4 shows the results of this process over time for {a $50 \mu m$-long domain}. We find that the droplets localize over biologically relevant timescales ($\sim$ 10 mins) to the {right side.}
We also see that a higher dissolution number promotes faster localization.
{When $\mathcal{K}$ is sufficiently large, the droplet-matrix system can quickly adapt to changes in the matrix stiffness and continuously maintain a quasi chemical equilibrium. If $\mathcal{K}$ is too low, the droplet dynamics will lag behind the stiffness changes in the matrix, and the system will always be out of chemical equilibrium.}
This effect can be especially important in the tightly regulated cellular environments.
We speculate that the dissolution number lumps together factors like local stress, swelling of the network, and diffused damage around the droplet \citep{kim2020extreme}; a multi-scale approach can better resolve these local dynamics and is beyond the scope of the current work.
\section{Conclusion}\label{sec13}
In conclusion, these quantitative predictions in a simple and experimentally realizable setting are aimed at uncovering the role of elastic driving forces in intracellular phase separation.
Even at the cellular length scales, the elastic resistance from the cellular media can impact the growth and dynamics of biomolecular condensates -- both of which are critical to the functioning of cells.
While we have established that elasticity-driven mechanisms are accessible to biological systems, the question of whether and to what extent cells use these mechanisms remains an open question.
Future work in this area must focus on understanding if the changes in cellular stiffness are fast and significant enough to affect the outcome of LLPS.
The recent development of advanced optogenetic techniques to selectively initiate phase separation \cite{Spatio2017}, together with the capability to create spatially heterogeneous crosslinking can be a promising technique to quantify the dynamics arising due to differences in elastic properties, and to test the predictions of our model. {Finally, the model presented here is not without limitations. Future work should extend beyond 1D geometries to reveal more complex phase separation phenomena, and should account for additional physical mechanisms that may become important in cellular systems, such as strain dependent surface energy \citep{jensen2017strain, xu2017direct}.}
\backmatter
\bmhead{Acknowledgments}
The authors acknowledge the support of the Office of Naval Research, United States of America and Dr. Timothy B. Bentley, Program Manager, under award number N00014-20-1-2561.
\section*{Declarations}
The authors declare no conflict of interests.
\begin{appendices}
\section{Flory-Huggins Parameter}\label{secA1}
{The Flory-Huggins parameter $\chi$ can be derived for a given solubility of liquid B in the mixture, $\phi^{sat}$, as \cite{kothari2020effect},
\begin{equation}
\chi = -\frac{\log\phi^{sat} + (1-\phi^{sat})}{(1-\phi^{sat})^2},
\end{equation}
in the absence of any elastic and surface effects. Throughout this paper, we use $\chi = 2.14$ which translates to a solubility concentration of $\phi^{sat} =0.06$.}
\section{Strain Energy of Droplet Growing Inside a Strain-Stiffening Elastic Matrix}\label{S1}
To capture nonlinear material response at large strains, we use the incompressible Mooney-Rivlin constitutive model for the elastic response of the matrix \citep{kothari2020effect}. The work done in expanding a single droplet from stress-free radius $r_0$ to an expanded radius $r$ is given as,
\begin{eqnarray} \label{eq:elastic-work}
\begin{split}
W(r) = nE\bigg(\frac{5}{6}-\frac{r_0}{r}- \frac{r_0^3}{3r^3 }+ \frac{r_0^4}{2r^4} \bigg)+ \\ (1-n)E\bigg(\frac{r}{2r_0}-\frac{1}{3} -\frac{r_0^2}{r^2} + \frac{5}{6}\frac{r_0^3}{r^3}\bigg),
\end{split}
\end{eqnarray}
where $E$ is the stiffness of the crosslinked polymer, and $0 \leq n\leq 1$ is the strain-stiffening parameter; $n=1$ represents the neo-Hookean material (no strain stiffening) and the level of strain stiffening increases with decreasing $n$.
{The stress-free radius is the length scale at which the elastic strain energy dominates over the surface energy in the growth of the droplet, and throughout this paper we choose $r_0=0.1 \mu m$.}
As explained in \citep{kothari2020effect}, $r_0$ may be different from the pore size of the elastic network in the matrix.
{We also note that our choice of $r_0$ is consistent with [6]: Fig. 3(b) in [6] shows that in a cross-linked matrix, the pressure initially drops as the droplet grows, before increasing again when the radius is $\sim 0.1 \mu m$, which identifies the region of elastic resistance from chromatin to the growth of droplets.}
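For reference, the derivative $W'(r)$ entering the droplet chemical potential in Eq.~\eqref{eq:quasi-eqm1} follows directly from Eq.~\eqref{eq:elastic-work}:
\begin{eqnarray}
\begin{split}
W'(r) = nE\bigg(\frac{r_0}{r^2}+\frac{r_0^3}{r^4}-\frac{2r_0^4}{r^5}\bigg) \\ + (1-n)E\bigg(\frac{1}{2r_0}+\frac{2r_0^2}{r^3}-\frac{5}{2}\frac{r_0^3}{r^4}\bigg),
\end{split}
\end{eqnarray}
which vanishes at $r=r_0$, consistent with the stress-free reference state.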
\section{Minimization of Free Energy}\label{S0}
Following \eqref{eq:total-free-energy}, the total free energy can be written as,
\begin{eqnarray}
\begin{split}
\Delta \bar{G}(\phi,\phi_D) = (1-\phi_D)\frac{k T}{\nu_m}\{\phi \ln\phi+ \chi(T) \phi(1-\phi)\} \\+ 4\pi r^2\Gamma {n_d} + \frac{4\pi}{3}r^3W(r) {n_d}.
\end{split}
\end{eqnarray}
The system also satisfies the mass conservation constraint,
\begin{equation}
\phi^{sup} = \phi(1-\phi_D) +\phi_D.
\end{equation}
The energy minimization condition
\begin{equation}
\frac{{\rm d}\Delta \bar{G}}{{\rm d}\phi} = 0
\end{equation}
can then be evaluated and simplified using the following relations: $\frac{\partial\left(1-\phi_D\right)}{\partial \phi} = \frac{(1-\phi_{D})}{(1-\phi)}$ and $\phi_D = \frac{4\pi}{3}r^3 n_d$,
\begin{eqnarray}
\begin{split}
&\frac{{\rm d}\Delta \bar{G}_{mix}}{{\rm d}\phi} = \frac{k T}{\nu_m}\frac{\partial\left(1-\phi_D\right)}{\partial \phi}\{\phi \ln\phi+ \chi(T) \phi(1-\phi) \} \\ & +\frac{k T}{\nu_m}(1-\phi_D)\{1+ \ln\phi+ \chi(T)(1-2\phi)\}\\
=&\frac{k T}{\nu_m}\frac{\left(1-\phi_D\right)}{(1-\phi)}\{ \ln\phi+ (1-\phi) + \chi(T) (1-\phi)^2 \} \\ =& \frac{\left(1-\phi_D\right)}{(1-\phi)}\frac{\mu}{\nu_m}
\end{split}
\end{eqnarray}
\begin{eqnarray}
\begin{split}
&\left(\frac{{\rm d}\Delta \bar{G}_{sur}}{{\rm d}\phi} + \frac{{\rm d}\Delta \bar{G}_{el}}{{\rm d}\phi}\right) = \\& \frac{\partial}{\partial \phi_D}\left\{\phi_D\left( \frac{3\Gamma}{r}+ W(r) \right) \right\}\frac{\partial \phi_D}{\partial\phi}\\
=& - \frac{\left(1-\phi_D\right)}{(1-\phi)}\left\{\frac{2\Gamma}{r} + W(r) +\frac{r}{3}W'(r) \right\} \\
=& - \frac{\left(1-\phi_D\right)}{(1-\phi)}\frac{\mu_D}{\nu_m}
\end{split}
\end{eqnarray}
Finally, these equations yield
\begin{equation}
\frac{{\rm d}\Delta \bar{G}}{{\rm d}\phi} = \frac{\left(1-\phi_D\right)}{(1-\phi)}\left(\frac{\mu -\mu_D}{\nu_m}\right) =0
\end{equation}
as the equilibrium condition shown in \eqref{eq:quasi-eqm1}.
\section{Equilibrium Droplet Size Variation with Strain-Stiffening} \label{S2}
The equilibrium droplet size is determined by the solution of \eqref{eq:quasi-eqm1} together with the mass conservation constraint \eqref{eq:mass-conservation}.
Using the form outlined in \eqref{eq:elastic-work} for $W(r)$, we study the sensitivity of the equilibrium droplet sizes to the level of strain stiffening, as governed by the parameter $n$.
Figure \ref{fig:SI_droplet_sizes} shows that the increasing strain-stiffening decreases the equilibrium droplet size.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{SI_droplet_sizes_v2.pdf}
\caption{Equilibrium droplet sizes for different levels of strain stiffening.}
\label{fig:SI_droplet_sizes}
\end{figure}
\section{Coarsening Plots for Longer Times} \label{S3}
{The following figures show normalized droplet radius, $\tilde{R}$, over time, for droplets at the interface on the right side. The three figures are for $n = 1, 0.95$ and $0.9$ respectively. Material Properties: $E = 100$ Pa (Left); $E = 50$ Pa (Right); $\Gamma = 5\times10^{-7}$ Nm$^{-1}$, $\nu_m = 5\times10^{-24}$ m$^{3}$, $D = 5\times10^{-11}$ m$^{2}$s$^{-1}$, $\alpha = 10^{13}$ N$^{-1}$m$^{-1}$, $\chi = 2.14$ (both sides) which translates to a solubility concentration of $\phi^{sat} = 0.06$ in the absence of elastic and surface effects. In strain-stiffening media, the coarsening is arrested, leading to final size as is evident from Figs. E3 and E4. The systems with smaller $\mathcal{K}$ take longer time to reach the final state. }
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{n1_fullprofile_1200mins.pdf}
\caption{{Coarsening dynamics for $n = 1$.}}
\label{fig:coarsening1}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{n095_coarsening.pdf}
\caption{Coarsening dynamics for $n = 0.95$.}
\label{fig:coarsening2}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{n09_coarsening.pdf}
\caption{Coarsening dynamics for $n = 0.9$.}
\label{fig:coarsening3}
\end{figure}
\hfill
\clearpage
\clearpage
\end{appendices}
\section{Introduction}
The properties of multi-electron systems can in principle be predicted by solving the interacting many-body Schr\"odinger equation. However, numerical solutions are only feasible for small systems consisting of a few interacting electrons due to the exponential scaling of the computational demand with the number of particles. One possible way to overcome this so-called ``exponential wall'' \cite{kohnNobel} is density functional theory (DFT) \cite{DFT,DFTbooks}, which has been successfully applied to many-body systems in a wide range of areas in physics and chemistry. DFT is based on the existence of an energy functional whose mini\-miza\-tion yields the ground state density. This minimization is usually performed via the so-called Kohn-Sham (KS) construction \cite{KS} where the interacting multi-particle system is mapped to a unique system of non-interacting particles having the same ground-state density. The non-interacting problem decouples into (non-linear) one-particle equations with an effective Hamiltonian depending on the density. The advantage of DFT thus originates from the fact that the solution of $N$ one-particle equations is less involved than solving one exponentially scaling $N$-particle problem. The crucial ingredient in the KS construction is the Hartree-exchange-correlation (Hxc) potential which, if it was exact, would comprise all many-body effects. Its exact form is, however, unknown in general, and approximations have to be employed in practice.
Time-dependent density functional theory (TDDFT) extends DFT to time-dependent problems \cite{Runge,TDDFTbooks,Leeuwen}. The existence of a time-dependent KS system is, however, no longer based on a minimization principle but on the local force equation of quantum mechanics \cite{TDDFTbooks,ruggibauer}. The exact time-dependent Hxc potential has additional, subtle features: it depends on the initial state (both interacting and non-interacting) and on the density at previous times (that is, it has ``memory'') \cite{inistatedep}. Therefore the construction of better approximations to the Hxc potential within TDDFT is much more involved than in DFT. This is even more of a problem as it turns out that the Hxc approximations known from DFT often fail when applied to TDDFT beyond linear response \cite{exampleswhereTDDFTfails}.
One might think that if TDDFT is employed to the study of multi-particle systems subject to time-periodic external potentials, e.g., an atom interacting with a monochromatic laser field, one could involve the Floquet theo\-rem. Indeed, in such situations the interacting many-body time-dependent Schr\"odinger equation is a partial differential equation with time-periodic coefficients and thus admits a time-periodic basis. As a consequence, the problem can be converted into an infinite set of time-independent equations by virtue of the Floquet theorem \cite{floquet,floquetclassics,floquetinbooks,floquetanalysis}.
Already at the beginning of the application of density-functional theory to time-dependent systems attempts were made to incorporate Floquet theory in a density-functional framework \cite{deb}. A minimization principle was proposed, which was perturbative in nature and hence valid only for weak and off-resonant fields. However, even if these conditions are met there are problems with defining a proper adiabatic limit, which is fundamental to the proposed minimization procedure \cite{Hone}. The problems arise due to the fact that Floquet theory maps the quasi-spectrum of the time-dependent problem into an interval of length $\omega$, i.e., the frequency of the periodicity employed. In any interval $I = \{x-\omega/2, x+\omega/2 \}$ for $x \in \mathbb{R}$ arbitrary we find infinitely many quasi-eigenenergies (they are dense in $I$) and thus infinitely many eigenfunctions around every point in the quasi-spectrum. A consequence of this is that there is no unique final state to which the system tends as the external perturbation is turned off adiabatically. In order to restore the adiabatic limit a truncation to a finite basis is usually employed, which is anyway unavoidable in practical calculations.
In Refs.~\cite{TDDFTstates1,examplerev,floquetreview,TDDFTstates2} Floquet-DFT approaches were pursued for non-perturbative fields and later criticized in Refs.~\cite{floquetcritic,criticfinite} where the authors also suggested to embark upon the problem from a TDDFT point of view, thereby avoiding the minimization problem. The basic question then remains whether a Floquet basis can be found for the associated KS system, i.e., whether the KS Hamiltonian itself is periodic. Known explicit expressions for the exchange-correlation potential in the time-dependent KS Hamiltonian such as the adiabatic local density approximation or generalized gradient approximations \cite{DFTbooks,TDDFTbooks} have the feature that a periodic density will lead to a periodic KS Hamiltonian (with the same period) since the adiabatic Hxc potentials depend on the instantaneous density only. However, the density does not have to be periodic and, in fact, it generally is not, as we will demonstrate in this work. On the other hand, even if an approximate functional leads to an aperiodic KS potential because of, e.g., an aperiodic density, this does not yet demonstrate the incompatibility of TDDFT and Floquet theory, because the unknown {\em exact} KS potential nevertheless could be periodic. In this work we will show by means of numerical and analytical counter examples that this, unfortunately, is not the case and thus TDDFT is, in general, not compatible with Floquet theory.
The paper is structured as follows. In Sec.~\ref{preliminaries} we review the basics of Floquet theory from a TDDFT perspective. In Sec.~\ref{exactpotential} we compute the exact KS potential for a two-electron model system, present the Fourier-transformed exact KS potential, and investigate whether the Floquet theorem is applicable to the KS Hamiltonian. In Sec.~\ref{generalresults} an analytical example is given to analyze the initial-state dependence of the KS potential and its relation to the periodicity of the KS Hamiltonian. We conclude in Sec.~\ref{conclude}.
For simplicity, we restrict ourselves to one-dimensional systems in this work. Such systems are frequently used in the theory of laser-matter interaction because they can be solved numerically exactly, and they are known to capture many of the essential features of their three-dimensional analogs. All equations in this work can be straightforwardly extended to the three-dimensional case.
Atomic units $\hbar=m_e=|e|=4\pi\varepsilon_0=1$ are used throughout unless stated otherwise.
\section{Basic Theory \label{preliminaries}}
Consider a system of $N$ interacting electrons governed by the Hamiltonian
\begin{equation}
\hat{H}(t)=\hat{ T} +\hat{V}_{\mathrm{ee}}+ \hat{ V}(t) \label{inthamop}
\end{equation}
with, in position-space representation, the kinetic energy operator
\begin{equation} \hat{ T}=\sum_{i=1}^N -\frac{1}{2}\frac{\partial^2}{\partial x_i^2} \end{equation}
the interaction potential
\begin{equation} \hat{V}_{\mathrm{ee}}=\frac{1}{2}\sum_{i\neq j}^N v_{\mathrm{ee}}(|x_i-x_j|),\end{equation}
and the external potential
\begin{equation} \hat{ V}(t)=\sum_{i=1}^N v(x_i,t). \end{equation}
We assume the interaction to be Coulombic. In one-dimensional models the Coulomb-interaction is usually smoothed by a softening parameter $\epsilon>0$,
\begin{equation} v_{\mathrm{ee}}(|x_i-x_j|)=\frac{1}{\sqrt{(x_i -x_j)^2+ \epsilon }}. \end{equation}
We further specialize on external potentials consisting of the interaction with a (static) nucleus of charge $Z$ and a laser field $E(t)$ in dipole approximation, i.e.,
\begin{equation}
\label{externalfield}
v(x_i,t) = -\frac{Z}{{\sqrt{ x_{i}^2+\epsilon} }} +x_{i}E(t). \end{equation}
The eigenstates and eigenenergies of the laser field-free system at time $t=0$ are obtained via the solution of the time-independent Schr\"odinger equation
\begin{equation} \hat{H}(0)\Psi(x_{1}\sigma_{1} \cdots x_{N}\sigma_{N})=\mathcal{E}\Psi(x_{1}\sigma_{1} \cdots x_{N}\sigma_{N}).\end{equation}
Here $\Psi(x_{1}\sigma_{1} \cdots x_{N}\sigma_{N})$ is an antisymmetric $N$-particle eigenfunction of the space and spin variables $x_i$, $\sigma_i$, and $\mathcal{E}$ is its eigenenergy.
In order to obtain $\Psi(x_{1}\sigma_{1} \cdots x_{N}\sigma_{N},t)$ for $t>0$ one may solve the time-dependent Schr\"odinger equation (TDSE)
\begin{equation}\mathrm{i}\partial_{t}\Psi(x_{1}\sigma_{1} \cdots x_{N}\sigma_{N}, t)=\hat{H}(t)\Psi(x_{1}\sigma_{1} \cdots x_{N}\sigma_{N},t)\label{TDSE}
\end{equation}
for a fixed initial state $\Psi_0(x_{1}\sigma_{1} \cdots x_{N}\sigma_{N})$. However, due to the ``exponential wall'' \cite{kohnNobel} it is computationally very challenging to solve this equation. In fact, in the case of intense laser fields where the numerical grids need to be large it is feasible only for $N\leq 3$.
Now we turn our attention to the non-interacting KS system that, by construction, yields the same single-particle density $n(x,t)$ as the interacting system. For simplicity, we assume that we are dealing with spin-neutral systems. The KS Hamiltonian then reads
\begin{equation}
\hat{H}_\mathrm{KS}([n];t)=-\frac{1}{2} \frac{\partial^2}{\partial x^2} + v(x,t) + v_{\mathrm{Hxc}}([n];x,t),
\end{equation}
where $v(x,t)$ is the external potential \reff{externalfield} and $v_{\mathrm{Hxc}}([n];x,t)$ is the Hxc potential which is a functional of the single-particle density $n(x,t)$ (for notational simplicity we do not indicate the dependence on the initial states). The two potential terms combined are called the KS potential, i.e.,
\begin{equation}
v_{\mathrm{KS}}([n]; x,t)=v(x,t) + v_{\mathrm{Hxc}}([n];x,t).\label{KSpot}
\end{equation}
In what follows we assume that the external laser field is monochromatic with frequency $\omega_1$.
The time-dependent KS equation reads
\begin{equation} \mathrm{i}\partial_{t}\Phi_{k}(x,t)=\hat{H}_\mathrm{KS}([n];t)\Phi_{k}(x,t), \label{KS}\end{equation}
where $\Phi_{k}(x,t)$ is the $k$-th KS orbital for the KS particle with initial state $\Phi_{k}(x,0)$.
The time-dependent one-particle density $n(x,t)$ then is
\begin{equation}
n(x,t)=\sum_{k=1}^{N} | \Phi_{k}(x,t)|^2 \label{density}.
\end{equation}
Now we make the basic assumption of any Floquet approach in a density-functional framework: if the Hamiltonian $\hat{H}(t)$ describing the $N$ interacting electrons is periodic with the frequency $\omega_1$, i.e., $E(t+T)=E(t)$ with $T=2\pi/\omega_1$, then we assume the same periodicity for the KS Hamiltonian as well. We neglect for the moment potential problems with respect to the non-linear nature of the KS equations, which will be discussed in detail in the subsequent Sections of this work.
If the KS Hamiltonian is periodic with $T$ then, by virtue of the Floquet theorem, we can write the KS orbitals in a time-periodic (Floquet) basis $\{\phi_{\alpha}(x,t)\}_{\alpha \in \mathbb{N}}$ as
\begin{equation} \Phi_{k}(x,t)=\sum_{\alpha} c_{k \alpha} \mathrm{e}^{-\mathrm{i}\xi_{\alpha} t} \phi_{\alpha}(x,t), \label{psixt}\end{equation}
where the $\xi_{\alpha}$ are the so-called quasi-energies and $c_{k\alpha} = \braket{\phi_{\alpha}(t=0)}{\Phi_{k}(t=0)}$.
Further, the $\phi_{\alpha}(x,t)$ are periodic in $T$, i.e.,
\begin{equation} \phi_{\alpha}(x,t)=\phi_{\alpha}(x,t+T). \label{periodicphi}\end{equation}
The Floquet orbitals $\phi_{\alpha}(x,t)$ fulfill the eigenvalue equation
\begin{equation} \hat{\cal H}(t) \phi_{\alpha}(x,t) = \xi_\alpha\phi_{\alpha}(x,t) \label{fhami}\end{equation}
with
\begin{equation} \hat{\cal H}(t) = \hat{H}_\mathrm{KS}([n];t) - \mathrm{i}\partial_t, \label{ffham}\end{equation}
i.e., $\xi_\alpha$ assumes the role of an eigenvalue and $\phi_{\alpha}(x,t)$ is the corresponding eigenstate.
If so, also
\begin{equation} \xi_\alpha'=\xi_\alpha+m\omega_1,\quad \phi_{\alpha}'(x,t)=\mathrm{e}^{\mathrm{i} m \omega_1 t} \phi_{\alpha}(x,t),\quad m\in\mathbb{Z} \end{equation}
are solutions of the eigenvalue equation \reff{fhami}. Owing to the time periodicity of $\phi_{\alpha}(x,t)$ we can write
\begin{equation} \phi_{\alpha}(x,t)=\sum_{l} \varphi_{\alpha,l}(x) \mathrm{e}^{-\mathrm{i} l\omega_1 t}, \qquad l \in \mathbb{Z}.\label{fourexp}\end{equation}
With Eqs.~\reff{psixt} and \reff{fourexp} the KS orbital can thus be written as
\begin{equation} \Phi_{k}(x,t)=\sum_{l,\alpha} c_{k\alpha}\mathrm{e}^{-\mathrm{i}(\xi_{\alpha}+ l \omega_1) t} \varphi_{\alpha,l}(x) \label{fourex},\end{equation}
where the eigenstates $\{\varphi_{\alpha,l}(x)\}_{\alpha\in\mathbb{N},l\in\mathbb{Z}}$ form the time-independent Floquet basis.
We divide the Hamiltonian $\hat{H}_\mathrm{KS}([n];t)$ into a time-independent part
\begin{equation} \hat{H}_0 = -\frac{1}{2}\frac{\partial^2}{\partial x^2} -\frac{Z}{{\sqrt{ x^2+\epsilon} }}, \end{equation}
the coupling to the monochromatic external field
\begin{equation} x E(t)=v^{+}(x)\mathrm{e}^{\mathrm{i} \omega_1 t} +v^{-}(x)\mathrm{e}^{-\mathrm{i} \omega_1 t},\label{mono} \end{equation}
and $ v_{\mathrm{Hxc}}([n];x,t)$.
Since we tentatively assume time-periodicity of the whole KS Hamiltonian we can write
\begin{eqnarray}
v_{\mathrm{Hxc}}([n];x,t) = \sum_{l}\mathrm{e}^{-\mathrm{i} l \omega_1 t}\left[v_{\mathrm{Hxc}}([n];x)\right]_l,\label{Hartree}
\end{eqnarray}
$l \in \mathbb{Z}$.
Plugging the expansions \reff{fourexp}, \reff{mono}, and \reff{Hartree} in Eq.~\reff{fhami} we obtain the TDDFT-Floquet equations \cite{TDDFTstates1}
\begin{eqnarray} \lefteqn{(\xi_\alpha +l \omega_{1} -\hat{H}_0)\varphi_{\alpha,l}(x)} \label{flqeq} \\ & = & v^{+}(x)\varphi_{\alpha,l-1}(x) + v^{-}(x)\varphi_{\alpha,l+1}(x)\nonumber \\ &+& \sum_{m}\left[v_{\mathrm{Hxc}}([n];x)\right]_{l-m}\varphi_{\alpha,m}(x). \nonumber \end{eqnarray}
The index $l$ of a Floquet state $\varphi_{\alpha,l}(x)$ is known as the ``block index,'' which may be interpreted as the number of photons involved in the process under study (provided one arranges that the $l=0$-block adiabatically connects to the field-free situation). The Floquet equation \reff{flqeq} couples any Floquet block $l$ to its neighboring blocks $l\pm 1$ via absorption or emission of a photon. Contributions of non-neighboring blocks may only be included through the Fourier-components of the Hxc potential. This is different from the Floquet equations for the interacting TDSE which couple only neighboring blocks because $E(t)$ is the only time-dependent element in the TDSE Hamiltonian. However, in the TDSE case the Floquet basis functions depend on all spatial variables, not just on a single one as in the KS case.
In principle, Eq.~\reff{flqeq} is an infinite-dimensional set of coupled partial differential equations, in practice, it is truncated so that $l_{\min} \leq l \leq l_{\max}$ where $|l_{\min}|$ and $| l_{\max}|$ should be large enough to capture all the relevant processes in which photons are emitted or absorbed.
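To make the truncation concrete, the following is a minimal sketch (Python/NumPy) of assembling and diagonalizing the truncated Floquet operator for a \emph{fixed}, linear KS potential, i.e., with the Fourier components of the Hxc potential frozen to zero; grid parameters are illustrative, and the non-linearity discussed below is deliberately ignored here.
\begin{verbatim}
import numpy as np

Nx, dx = 200, 0.5                 # spatial grid (a.u.)
x = (np.arange(Nx) - Nx//2) * dx
lmin, lmax = -4, 4                # retained Floquet blocks
w1, E0 = 0.056, 0.063             # laser frequency and amplitude (a.u.)
Z, eps = 2.0, 1.0

# field-free Hamiltonian H0 (second-order finite differences)
T = (np.diag(np.full(Nx, 1.0/dx**2))
     - 0.5/dx**2*(np.diag(np.ones(Nx-1), 1) + np.diag(np.ones(Nx-1), -1)))
H0 = T + np.diag(-Z/np.sqrt(x**2 + eps))

vpm = np.diag(0.5*E0*x)           # v^+ = v^- = x E0/2 for E(t)=E0 cos(w1 t)

nblk = lmax - lmin + 1
F = np.zeros((nblk*Nx, nblk*Nx))
for i, l in enumerate(range(lmin, lmax + 1)):
    blk = slice(i*Nx, (i+1)*Nx)
    F[blk, blk] = H0 - l*w1*np.eye(Nx)      # diagonal Floquet block
    if i > 0:                               # coupling to neighboring block
        prev = slice((i-1)*Nx, i*Nx)
        F[blk, prev] = vpm
        F[prev, blk] = vpm

quasi, states = np.linalg.eigh(F)           # quasi-energies, Floquet states
\end{verbatim}
The quasi-energies come out replicated modulo $\omega_1$ within the retained window; the truncation breaks this replication near the edge blocks, which is why $|l_{\min}|$ and $|l_{\max}|$ must be chosen large enough.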
If Eq.~\reff{flqeq} was valid, the periodic time-dependent many-body problem would be significantly simplified because the time-dependence had been eliminated via Floquet theory and the ``exponential wall'' via DFT.
\section{Periodic or aperiodic KS Hamiltonian? \label{exactpotential}}
In order to prove that Floquet theory is generally not applicable to TDDFT it certainly is sufficient to find one counterexample. However, a Floquet approach might still be useful as an approximate method, especially given the fact that TDDFT in practice is approximate anyway.
Hence, we analyze under which circumstances the KS Hamiltonian is periodic or not. In order to do so we employ a widely used, numerically exactly solvable one-dimensional model helium atom~\cite{helium,exactpotential2,rabidieter}. In this model both electrons move along the laser-polarization direction only, and the Coulomb interaction is replaced by a soft-core potential as introduced in Sec.~\ref{preliminaries}. The TDSE Hamiltonian of the model system thus corresponds to the Hamiltonian \reff{inthamop} with $N=2$. The smoothing parameter was $\epsilon=1$, as, e.g., in \cite{rabidieter}.
The initial TDSE state is chosen to be the spin-singlet ground state of the interacting system
\begin{equation}
\Psi_0(x_1 \sigma_1, x_2 \sigma_2) = \Psi_0(x_1,x_2) \frac{1}{\sqrt{2}} \left(\ket{\uparrow_1} \ket{\downarrow_2} -\ket{\downarrow_1} \ket{\uparrow_2} \right).
\end{equation}
Since the Hamiltonian is spin-independent, the system remains in a spin-singlet configuration also during the dipole interaction with a laser field, and we can concentrate on the symmetric spatial part $\Psi_0(x_1,x_2)$ of the wave function only. The TDSE~\reff{TDSE} is solved numerically using the Crank-Nicolson propagator to obtain the time-dependent spatial wavefunction $\Psi(x_1,x_2,t)$.
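For illustration, a minimal sketch of a Crank-Nicolson step is given below for one spatial dimension; the actual two-electron propagation uses the analogous scheme on the two-dimensional $(x_1,x_2)$ grid. The grid parameters and the pulse envelope are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

Nx, dx, dt = 1000, 0.2, 0.05              # grid and time step (a.u.)
x = (np.arange(Nx) - Nx//2) * dx
w1, E0 = 0.056, 0.063

def E(t):                                  # 2-cycle ramp, then constant
    Tc = 2*np.pi/w1
    return E0 * min(t/(2*Tc), 1.0) * np.sin(w1*t)

def hamiltonian(t):                        # one-particle H(t), Z=2, eps=1
    v = -2.0/np.sqrt(x**2 + 1.0) + x*E(t)
    main = 1.0/dx**2 + v
    off  = -0.5/dx**2 * np.ones(Nx - 1)
    return diags([off, main, off], [-1, 0, 1], format='csc')

def cn_step(psi, t):
    # (1 + i dt/2 H) psi(t+dt) = (1 - i dt/2 H) psi(t), H at the midpoint
    H = hamiltonian(t + dt/2)
    A = identity(Nx, format='csc') + 0.5j*dt*H
    B = identity(Nx, format='csc') - 0.5j*dt*H
    return splu(A).solve(B.dot(psi))
\end{verbatim}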
In Ref.~\cite{floquetanalysis} we introduced a method to extract the populated Floquet states of the interacting system directly from $\Psi(x_1,x_2,t)$.
By controlling the laser parameters we can either have an adiabatic evolution of the field-free state $\Psi_0(x_1,x_2)$ to a field-dressed (Floquet) state or a non-adiabatic one, where several Floquet states are populated. The laser intensity, frequency, and ramping time determine the adiabaticity of the time evolution of the interacting system. For adiabatic evolution we have in the TDSE-Floquet calculation only one relevant Floquet-state index $\alpha$ in the TDSE analog of \reff{fourex},
\begin{equation} \Psi(x_1,x_2,t)=\sum_{\alpha} c_{\alpha}\mathrm{e}^{-\mathrm{i}\xi_{\alpha} t} \sum_l \mathrm{e}^{-\mathrm{i} l\omega_1 t} \varphi_{\alpha,l}(x_1,x_2) \label{fourexTDSE}.\end{equation}
Hence, in this case
\begin{equation} \Psi(x_1,x_2,t) \sim \mathrm{e}^{-\mathrm{i}\xi_{\alpha} t} \sum_{l} \mathrm{e}^{-\mathrm{i} l\omega_1 t} \varphi_{\alpha,l}(x_1,x_2), \end{equation}
and the density $n(x,t)=2\int\mathrm{d} x'\, |\Psi(x,x',t)|^2$ will only have frequency components at integer multiples of the laser frequency $\omega_1$. The KS Hamiltonian depends on the density. If the KS potential only contains frequency components at integer multiples of the laser frequency there is no problem because $\hat{H}_\mathrm{KS}([n(t+T)];t+T)=\hat{H}_\mathrm{KS}([n(t)];t)$, and thus the Floquet theorem still holds. Instead, fractional harmonics or, even worse, incommensurate frequencies in $\hat{H}_\mathrm{KS}([n(t)];t)$ would render the Floquet theorem inapplicable. If more than one Floquet state is populated, say $\alpha=\alpha_1$ and $\alpha_2$, the Fourier-transformed density $n(x,\omega)$ will also have frequency components at the quasi-energy difference $|\xi_{\alpha_2} - \xi_{\alpha_1}|$ (and at its combinations with integer multiples of $\omega_1$). It would be mind-boggling if the unknown {\em exact} $v_\mathrm{xc}([n];x)$ was able to remove such frequencies from $\hat{H}_\mathrm{KS}([n(t)];t)$. However, in order to {\em prove} that in general the exact $v_\mathrm{xc}([n];x)$ contains frequency components different from integer multiples of $\omega_1$ we construct the exact $v_\mathrm{xc}([n];x)$ explicitly in the following for both the adiabatic as well as the non-adiabatic evolution of the field-free state to the field-dressed states.
Once we have obtained $\Psi(x_1,x_2, t)$ by solving the TDSE \reff{TDSE} we can construct the exact KS orbital and the potential following Refs.~\cite{exactpotential2,exactpotential1}. In the two-electron spin-singlet case the KS wave function consists of only one spatial orbital $\Phi(x,t)$, i.e.,
\begin{align*}
\Phi(x_1 \sigma_1 &,x_2 \sigma_2,t)
\\
&= \Phi(x_1,t)\Phi(x_2,t) \frac{1}{\sqrt{2}} \left(\ket{\uparrow_1} \ket{\downarrow_2} -\ket{\downarrow_1} \ket{\uparrow_2} \right).
\end{align*}
The KS orbital can be written as
\begin{equation} \Phi(x,t)=\sqrt{n(x,t)/2}\ \mathrm{e}^{\mathrm{i} S(x,t)}, \label{KSdensphase} \end{equation}
where $n(x,t)$ is the exact particle density and $S(x,t)$ is the exact phase of the KS orbital. The expression for the phase in terms of density is given by the continuity equation as \cite{exactpotential1, QR}
\begin{align}
\label{continuity}
- \partial_x \left[ n(x,t) \partial_x S(x, t)\right] = \partial_t n(x,t).
\end{align}
Equation \reff{KS} can be inverted to write the KS potential in terms of the KS orbital as \cite{exactpotential1}
\begin{align}
\label{potentialvarphi}
v_{\mathrm{KS}}&(x,t) = \frac{\mathrm{i} \partial_t \Phi(x,t)+ \frac{1}{2} \partial_{x}^2\Phi(x,t)}{\Phi(x,t)} \nonumber
\\
&=\frac{1}{2} \frac{\partial_x^2 \sqrt{n(x,t)}}{\sqrt{n(x,t)}} -\partial_t S(x,t) - \frac{1}{2}\left[ \partial_x S(x,t)\right]^2.
\end{align}
The imaginary part of the potential is zero due to the continuity equation~(\ref{continuity}). The density $n(x,t)$ and the phase $S(x,t)$ are computed from $\Psi(x_1,x_2,t)$ \cite{exactpotential2}, and by the above construction we obtain the exact KS potential. Such a straightforward construction is possible only if we have a single spatial orbital. In the general case of several KS orbitals one would need to employ a computationally more demanding fixed-point method, as demonstrated in Refs.~\cite{Godby, Soeren}. Once the exact KS potential is computed, it is Fourier transformed in time to investigate its periodicity.
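A bare-bones numerical version of this construction reads as follows; it is an illustrative sketch of Eqs.~\reff{continuity} and \reff{potentialvarphi}, assuming a density history $n[t,x]$ on a uniform grid, a vanishing current at the left boundary, and simple finite differences.
\begin{verbatim}
import numpy as np

def exact_ks_potential(n, dx, dt):
    """Reconstruct v_KS(x,t) from a density history n[t, x].
    Sketch: finite differences; the current is assumed to vanish
    at the left grid boundary."""
    dndt = np.gradient(n, dt, axis=0)
    # continuity: -d/dx[n dS/dx] = dn/dt  =>  n dS/dx = -int dn/dt dx'
    dSdx = -np.cumsum(dndt, axis=1)*dx / n
    S = np.cumsum(dSdx, axis=1)*dx        # phase up to a constant
    dSdt = np.gradient(S, dt, axis=0)
    sq = np.sqrt(n)
    curv = np.gradient(np.gradient(sq, dx, axis=1), dx, axis=1)
    return 0.5*curv/sq - dSdt - 0.5*dSdx**2
\end{verbatim}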
Besides the basic problem of the periodicity of the KS potential for a given interacting density, there is the inherent non-linearity of the KS scheme. Even though the exact KS potential might be periodic for a certain problem, it is far from obvious that one can employ a Floquet-based KS scheme to predict it. For instance, an adiabatic approximation such as the exact exchange-only approximation in the two-electron spin-singlet case, $v_\mathrm{Hx}^\mathrm{(exact)}([n];x)=\int\mathrm{d} x'\, [n(x',t)/2]/\sqrt{(x-x')^2+\epsilon}$, does inherit the periodicity of the density; nevertheless, it is not guaranteed that the non-linear KS equations produce a periodic $n(x,t)$. This becomes obvious when we consider the iterative solution of the KS equations, where we start with an initial guess for the density that is periodic with $\omega_1$. We then have a periodic KS Hamiltonian from which we can (since in every iterative step we have a linear partial differential equation) infer a Floquet basis. We then solve the resulting linear equations and obtain a new density. This density will in general not be periodic, and we no longer find a Floquet basis with period $2\pi/\omega_1$ only. This makes the problem of the non-linearity in connection with a Floquet approach evident.
\subsection{Adiabatic and periodic example}
First we consider an $800$-nm ($\omega_1=0.056$) laser pulse with two cycles ramp-up and $16$ cycles of constant amplitude. The electric field amplitude is $\hat{E}=0.063$, corresponding to a laser intensity of $1.4\times 10^{14}$~W/cm$^2$. It turns out that in this case the density dynamics are periodic with the laser period. In Fig.~\ref{fig:800nm} we plot the exact $ \arrowvert {v}_\mathrm{KS}(x,\omega)\arrowvert^2 $ over four orders of magnitude vs the harmonic order $\omega/\omega_1$. At all spatial points of the KS potential only harmonics of the laser frequency are visible. The Floquet theorem is applicable in this case, as $\hat{H}_\mathrm{KS}([n(t+T)];t+T)=\hat{H}_\mathrm{KS}([n(t)];t)$ holds to a high degree of accuracy.
\begin{figure}\includegraphics[width=0.5\textwidth]{fig1.png}
\caption{(Color online) Logarithmically scaled plot of $\arrowvert{v}_\mathrm{KS}(x,\omega)\arrowvert^2$ for $\omega_1=0.056$, $\hat{E}=0.063$, two-cycle ramp-up, and $16$ cycles constant amplitude. Notice that only harmonics of the laser frequency are present over a dynamic range of four orders of magnitude. Superpositions of Floquet states do not play a role, the dynamics are sufficiently adiabatic, the Floquet theorem is applicable to $\hat{H}_\mathrm{KS}([n(t)];t)$. \label{fig:800nm}}
\end{figure}
\subsection{Non-adiabatic and aperiodic example}\label{nonadiabaticexample}
As a second example we choose a short-wavelength $17.5$-nm ($\omega_1=2.6$) laser pulse with four cycles ramp-up and $172$ cycles of constant amplitude. The electric field amplitude $\hat{E}=0.34$ corresponds to a laser intensity of $4 \times 10^{15}$~W/cm$^2$. The fast ramping induces a non-adiabatic time evolution and results in a superposition of Floquet states in the TDSE result. The exact KS potential oscillates with periods related to the inverse of the quasi-energy differences. In Fig.~\ref{fig:3nm}, this new timescale manifests itself as sidebands around the multiples of the laser frequency. The quasi-energy differences are determined by the field-free spectrum of the system under study and by the ac Stark shifts, so that it may well happen that they are irrational fractions or multiples of $\omega_1$. In that case not even a $T' > T=2\pi/\omega_1$ for which $\hat{H}_\mathrm{KS}(t+T')=\hat{H}_\mathrm{KS}(t)$ exists.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig2.png}
\caption{(Color online) As Fig.~\ref{fig:800nm} but for $\omega_1=2.6$, $\hat{E}=0.34$, four-cycle ramp-up, and $172$ cycles constant amplitude. The Fourier-transformed potential displays anharmonic frequency components, i.e., it is aperiodic.\label{fig:3nm}}
\end{figure}
\subsection{Resonant interaction}\label{resonantinteraction}
When the laser is tuned to the exact resonance between the initial (ground) state and a dipole-accessible excited state, Rabi-oscillations set in, typically on a time scale that is much longer than the laser period so that for the Rabi frequency $\Omega$ one has $\Omega\ll\omega_1$. In this case the density is periodic with the Rabi-frequency $\Omega$, not with the laser frequency $\omega_1$. At time $T_{1/2}=\pi/\Omega$ the upper state is populated, at time $2\pi/\Omega$ the initial state is populated again. The Rabi-frequency depends on the electric field amplitude $\hat{E}$ of the laser and the transition dipole matrix element $\mu_{01}$ between the two bound states involved. Rabi-oscillations are not captured in TDDFT when known and practicable adiabatic exchange-correlation potentials are used. Of course, the density dynamics between the two states are correctly described when the exact KS potential is used, for instance for the numerically exactly solvable model-He system employed in this work. It is known that after the time $T_{1/2}$, when the single-particle density $n(x,T_{1/2})$ is that of the {\em excited interacting system}, the exact KS potential is the {\em ground state potential} to that density \cite{rabidieter,helbig}. In fact, there is no stationary state in the KS potential to which the population may be transferred. Hence, the exact KS system governs the dynamics by an ``adiabatic deformation'' of the ground state density. Despite this extremely simple ``Rabi-flopping'' dynamics, resonant interactions are among the worst cases for TDDFT with known and practicable exchange-correlation potentials.
It is well known that a Floquet treatment of the TDSE leads to avoided crossings of the two field-dressed state energies when plotted as a function of laser frequency \cite{floquetinbooks}. At exact resonance the two Floquet states are equally populated and separated in energy by $\hbar \Omega$. Hence, resonant interaction is a prime example where a superposition of Floquet states plays a role even if the laser pulse was turned on adiabatically.
The laser frequency in our model simulation was tuned to the resonance between the ground spin-singlet state and the first excited spin-singlet state of the model Helium atom, $ \omega = \mathcal{E}_{1}-\mathcal{E}_{0}= 0.533$ \cite{rabidieter}. For the chosen field amplitude $\hat{E}=0.016$ (corresponding to a laser intensity of $9 \times 10^{12}$~W/cm$^2$) the ground-state population reaches zero at $T_{1/2}\approx 174$, i.e., $\Omega=0.018$. Figure~\ref{fig:rabi} shows $\arrowvert{v}_\mathrm{KS}(x,\omega)\arrowvert^2$ for two cycles ramp-up and $148$ cycles of constant amplitude. The Fourier-transformed potential shows strong sideband peaks at $q\omega_{1}\pm p\Omega$ with $q,p \in \mathbb{Z}$. Hence, while in the previous example of non-adiabatic ramping one might argue that the anharmonic peaks in the spectra are weak and therefore could be ignored, a resonant interaction generates sideband peaks of strengths comparable to the harmonics.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig3.png}
\caption{(Color online) As Fig.~\ref{fig:800nm} but for the resonant interaction with $ \omega = \mathcal{E}_{1}-\mathcal{E}_{0}= 0.533$, $\hat{E}=0.016$, two cycles ramp-up, and $148$ cycles constant amplitude. Peaks at positions $q\omega_{1}\pm p\Omega$ with $q,p \in \mathbb{Z}$ are seen. \label{fig:rabi}}
\end{figure}
\section{Initial state choice \label{generalresults}}
For the above examples of non-adiabatic ramping or resonant interaction a minimization procedure with a finite Floquet basis would lead to a laser-aperiodic KS Hamiltonian that renders the Floquet theorem inapplicable in the first place. From the TDDFT perspective we obtain a laser-aperiodic KS Hamiltonian because of the time evolution starting from the chosen initial state. However, in TDDFT $\hat{H}_\mathrm{KS}([n(t)];t)$ should actually read $\hat{H}_\mathrm{KS}([n(t),\Psi_0,\{\Phi_{0k}\}];t)$ because of the dependence of the time-dependent KS potential on both the interacting initial state $\Psi_0$ and the KS initial states $\Phi_{0k}$ \cite{TDDFTbooks,inistatedep}. Thus a loophole for even the most stubborn proponent of TDDFT-Floquet theory remains: a different choice of initial KS states $\Phi_{0k}(x)=\Phi_k(x,t=0)$ could keep the KS Hamiltonian periodic with period $T$. In this section we give a counterexample for which {\em all} possible initial states lead to laser-aperiodic KS potentials if the density is laser-aperiodic. To do so analytically we construct a KS system of two non-interacting electrons on a quantum ring of circumference $L$, so that we have periodic boundary conditions \cite{QR}. This makes it an ideal system to analyze the time-periodicity of the KS Hamiltonian for the various possible initial states. For spin-singlet states of these electrons one can describe the system by a single KS orbital \reff{KSdensphase}, as in our model Helium system above (in the limit $L \rightarrow \infty$ the quantum ring becomes equivalent to the Helium model). Following the procedure outlined in Sec.~\ref{exactpotential} the potential can be written in terms of the density and the phase of the KS orbital as
\begin{eqnarray} \lefteqn{{v}_\mathrm{KS}([m,n],x,t)} \label{potential}\\
&=& \frac{1}{2} \frac{\partial_x^2 \sqrt{n(x,t)}}{\sqrt{n(x,t)}} -\partial_{t}S([m,n],x,t)-\frac{1}{2}[\partial_{x}S([m,n],x,t)]^2,\nonumber \end{eqnarray}
which is an explicit functional of the density $n=n(x,t)$ and an integer number $m\in\mathbb{Z}$. As shown in Ref.~\cite{QR}, for periodic boundary conditions the phase $S$ can be written in the integral form
\begin{eqnarray}
S([m,n],x,t)&=&\int_{0}^{L}dy\,K_{t}(x,y)\partial_{t}n(y,t) \label{phase} \\ &+& \frac{2\pi m}{\int_{0}^{L}\frac{dz}{n(z,t)}} \int_{0}^{x}\frac{dz}{n(z,t)}, \nonumber
\end{eqnarray}
where the Green's function $K_{t}(x,y)$ is defined as
\begin{eqnarray}
K_{t}(x,y)&=&\frac{1}{2}[\theta(y-x)-\theta(x-y)]\int_{x}^{y}\frac{dz}{n(z,t)} \nonumber \\ &-&\frac{\eta(x,t)\eta(y,t)}{\int_{0}^{L}\frac{dz}{n(z,t)}},\label{kernel}
\end{eqnarray}
with $\theta$ the Heaviside step function and
\begin{equation} \eta(x,t)=\frac{1}{2}\left(\int_{0}^{x}\frac{dy}{n(y,t)} + \int_{L}^{x}\frac{dy}{n(y,t)} \right). \end{equation}
Since the KS orbital obeys the periodic boundary conditions, the phase $S$ has to satisfy
\begin{eqnarray}
S(L,t)&=&S(0,t)+2\pi m, \\
\partial_{x}S(L,t)&=&\partial_{x}S(0,t).
\end{eqnarray}
Hence the integer $m$ plays the role of labeling all the possible KS orbitals (for different initial-state choices) that are consistent with the density $n(x,t)$.
If we assume that \begin{equation} n(x,t+T)=n(x,t), \end{equation} we have
\begin{equation} \int_{x}^{y}\frac{dz}{n(z,t)}=\int_{x}^{y}\frac{dz}{n(z,t+T)}.
\end{equation}
Since $K_{t}(x,y)$ in Eq.~\reff{kernel} consists only of such time-periodic integrals, we have
\begin{equation}
K_{t}(x,y)=K_{t+T}(x,y).
\end{equation}
Also, since the time derivative inherits the periodicity of the density,
\begin{equation}
\partial_{t}n(x,t)=\partial_{t}n(x,t+T),
\end{equation}
we conclude from Eq.~\reff{phase} that
\begin{equation}
S([m,n],x,t)=S([m,n],x,t+T).
\end{equation}
The first term on the right hand side of Eq.~\reff{potential} is also periodic with the same period as the density. This implies that the entire potential is periodic with the same period as the density, i.e.,
\begin{equation}
{v}_\mathrm{KS}([m,n],x,t)={v}_\mathrm{KS}([m,n],x,t+T).
\end{equation}
Hence, for any possible initial state (labeled by the index $m$) and a density periodic with the period of the external field, we find that the KS Hamiltonian is also periodic with the period of the external field.
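This statement can also be checked numerically. The Python sketch below discretizes the phase functional \reff{phase} and \reff{kernel} on a uniform ring grid, using the simplifications $\eta(x,t)=F(x,t)-F(L,t)/2$ and $[\theta(y-x)-\theta(x-y)]\int_x^y dz/n = |F(y,t)-F(x,t)|$ with $F(x,t)\equiv\int_0^x dz/n(z,t)$; the breathing density used for the test is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

def ring_phase(n, dndt, dx, m=0):
    """Phase S([m,n],x,t) on a ring, discretizing Eqs. (phase)-(kernel)."""
    F = np.concatenate(([0.0], np.cumsum(dx/n)))[:-1]  # F(x)=int_0^x dz/n
    FL = np.sum(dx/n)                                  # F(L)
    eta = F - 0.5*FL
    K = 0.5*np.abs(F[None, :] - F[:, None]) - np.outer(eta, eta)/FL
    return K @ dndt * dx + 2*np.pi*m*F/FL

# A density breathing with period T: S depends on t only through n and
# dn/dt, hence it shares their period for every initial-state label m.
Lr, M, T = 10.0, 200, 5.0
x = np.arange(M)*(Lr/M)
n  = lambda t: (2/Lr)*(1 + 0.3*np.cos(2*np.pi*x/Lr)*np.cos(2*np.pi*t/T))
dn = lambda t: -(2/Lr)*0.3*np.cos(2*np.pi*x/Lr)*(2*np.pi/T)*np.sin(2*np.pi*t/T)
for m in (0, 1, 2):
    S_t  = ring_phase(n(1.0),   dn(1.0),   Lr/M, m)
    S_tT = ring_phase(n(1.0+T), dn(1.0+T), Lr/M, m)
    print(m, np.allclose(S_t, S_tT))       # True for every m
\end{verbatim}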
For the Floquet theorem to be applicable in a TDDFT framework, the time-dependent KS Hamiltonian must be periodic with the period of the external field $T=2\pi/\omega_1$ only, i.e.,
\begin{equation}
{v}_\mathrm{KS}([m,n],x,t)={v}_\mathrm{KS}([m,n],x,t+T).\label{singlyperiodic}
\end{equation}
Consider now the density being periodic with a period $T'$ different from the period of the external field,
\begin{equation}
n(x,t)=n(x,t+T'),
\end{equation}
as in the above examples in Secs.~\ref{nonadiabaticexample} and \ref{resonantinteraction}.
The periods $T$ and $T'$ are in general incommensurate. We have just proven that the KS potential is periodic with the same period as the density, which implies that
\begin{equation}
{v}_\mathrm{KS}([m,n],x,t)={v}_\mathrm{KS}([m,n],x,t+T^\prime).\label{doublyperiodic}
\end{equation}
This contradicts the assumption of only one period $T=2\pi/\omega_1$ in Eq.~\reff{singlyperiodic}, which would allow the Floquet theorem to be applied in the first place. Hence, the Floquet theorem cannot be applied.
Here, for an example for which we are able to write down an explicit expression for the KS potential, we have thus proven that for \textit{any} initial KS state it is impossible to obtain a laser-periodic KS potential when the density has a different period.
\section{Conclusions \label{conclude}}
We investigated the applicability of the Floquet theorem to time-dependent Kohn-Sham Hamiltonians.
By employing analytically and numerically exactly solvable counterexamples we showed that, in general, Floquet theory is not compatible with time-dependent density functional theory. The reason is that, while periodic drivers such as laser fields of course render the interacting many-body Hamiltonian periodic, the corresponding Kohn-Sham Hamiltonian, in general, is aperiodic. We discussed how the periodicity properties of the single-particle density translate to the Kohn-Sham potential.
If in the Floquet analysis of the many-body time-dependent Schr\"odinger wave function more than one Floquet state plays a role---such as for non-adiabatic ramping or resonant interactions---the exact Kohn-Sham potential is aperiodic, so that the Floquet theorem is inapplicable. Furthermore, we showed that the initial-state dependence of the time-dependent Kohn-Sham Hamiltonian cannot be exploited to restore its periodicity either. Of course, one may view Kohn-Sham-Floquet theory as an approximate approach for the study of laser-matter phenomena in which resonances and non-adiabaticities are expected to be irrelevant.
\section*{Acknowledgment}
This work was supported by the SFB 652 of the German Science Foundation (DFG). M.R. acknowledges financial support by the Erwin Schr\"odinger Fellowship J 3016-N16 of the FWF (Austrian Science Fund).
\section{Introduction}
Modern cities are diverse in their spatial structure: some cities are monocentric, with a single center of business, retail and other types of activity, while some exhibit polycentric patterns in which multiple activity clusters are distributed across space~\cite{Giuliano1991,Mcmillen2001,Tsai2005,Green2007,Meijers2008}. It is well known that a city structure affects economic productivity, environmental conditions and other aspects of human life~\cite{Newman1989,Ewing2015,Li2018,Kwon2018,Li2019}. Importantly, spatial structures of cities change over time in intricate ways, with the process of intra-urban migration being one of the main drivers of the city evolution~\cite{Batty2006,Batty2013,Schneider2013,Arcaute2015,Barthelemy2016book,Arcaute2016,Barthelemy2019,Sahasranaman2019,Crosato2018,Dynamic_resettlement_paper}. Yet, there is a lack of models that can quantitatively explain and accurately predict the city evolution in terms of intra-urban migration and the resultant patterns of an urban spatial structure.
In existing models of cities, the intra-urban migration is usually considered as a fast process in which an \textit{equilibrium} is reached very quickly \cite{Fujita1982, Harris1978, Louf2013, Crosato2018, Ellam2018, Dynamic_resettlement_paper}. Such an equilibrium is typically defined by the spatial distribution of infrastructure and employment \cite{Fujita1982, Louf2013, Crosato2018, Wu2019, Slavko2020, Crosato2020}, and the intra-urban dynamics is typically not considered as an example of a non-equilibrium process in the thermodynamic sense. A non-equilibrium approach is also hard to validate explicitly due to the difficulty of collecting the data.
In this work, we consider the intra-urban migration as an irreversible process, and explicitly derive the dynamics of the corresponding non-equilibrium evolution. In doing so, we shall draw on an analogy with diffusive relaxation. This opens a way towards a systematic and coherent framework describing the human migration within cities (both temporally and spatially), as opposed to an unconstrained evolution of cities through expansion.
Human relocation has been widely considered as diffusion in open systems \cite{Woube2005, Vahia2017}. This approach employs an apparent and direct analogy between human migration and molecular diffusion, and has shown good predictions at different scales \cite{Barthelemy2019, Barbosa2018, Balcan2009, Barthelemy2016book, Arcaute2016, Bouchaud2013}, from city growth \cite{Lenormand2015, Gonzalez2008, Louf2013} and epidemic spread \cite{Gustafson2017, Wen2018, Balcan2009} to inter-continental migration \cite{Woube2005}. These migration processes can be characterized as expansive, as they increase the area of human habitat, viewing the human society as an open system. In the modern world, however, most of the migration processes result in a redistribution of the population across already occupied locations, rather than expanding to non-occupied areas. This constraint essentially confines the migration to happen in a closed system, with a fixed area and population. In this paper we propose a diffusion model describing migration processes in a \textit{closed system} from the perspective of non-equilibrium thermodynamics.
In general, migration from rural to urban areas, as well as expansion of metropolitan boundaries, are relatively slow and long-term processes. In contrast, the intra-urban migration happens at a much faster rate \cite{Weidlich1990, Barthelemy2013Planning, Wu2004, Crosato2018, Dynamic_resettlement_paper, Slavko2020, Crosato2020}. Nevertheless, we argue that the intra-urban migration is a fundamental driving force shaping the long-term evolution of spatial urban structure, with external migration and spatial expansion playing only a secondary role. Thus, we aim to model urban transitions as dynamics developing in a closed system, at least to a first approximation.
Typically, the intra-urban human mobility has been considered as a process driven by certain attractiveness of various locations within a city, perceived in terms of proximity to schools, business centres, recreational facilities, etc. \cite{Ellam2018, Crosato2018}. This notion of attractiveness was modelled both explicitly, using specific socio-economic indicators \cite{Kim2005, Perez2003}, and implicitly, being reconstructed directly from the migration data \cite{Slavko2020}. The latter approaches can be classified as ``microscopic" \cite{Simini2012, Weidlich1988}, as they focus on a specific mechanism of human relocation.
In this paper we propose a concise ``phenomenological'' approach which considers the migration flows from the perspective of diffusion. Importantly, we do not make any assumptions on the particular choices which may motivate individuals to relocate. Instead, we analyze their collective movement and show that this process is similar to diffusion. As a result, we reveal the trends of population movement, analogous to the macroscopic movement of a fluid. In doing so, we introduce a rigorous definition of an equilibrium state as the spatial configuration to which the apparent evolution relaxes, and a decomposition of the population into two distinct groups with different relocation frequencies.
This paper is organized as follows. In \secr{sec:markov} we present a general framework of irreversible evolution of a city driven by residential relocation. In \secr{sec:results} we apply this framework to the Australian Capital cities. In particular, we analyse the dynamic relocation patterns in \secr{sec:one_vs_two_components_data}, {develop the analogy between intra-urban resettlement and diffusion in \secr{sec:diffusion}, and predict} the equilibrium population distribution in \secr{sec:long_term_structure_predictions}. In \secr{sec:robustness}, we analyze {the robustness of the model}. Finally, in \secr{sec:conclusions} we summarize the findings of this work.
\section{{Dynamics of intra-urban migration}}\label{sec:markov}
We consider an urban area as a set of $N$ suburbs $i$ with a certain residential population $x_i(t)$ at time $t$. The total population at any time is fixed: $\sum_{i=1}^{N} x_i(t)=\overline{x}$. We assume that time is discrete, with the choice of the time step dictated by the resolution of the available data. A migration flow $T_{ij}(t)$ is defined as a change in residential location from suburb $i$ to suburb $j$. This flow is uni-directional, so that in general $T_{ij}(t) \neq T_{ji}(t)$, and the net flow $J_{ij}(t) \equiv T_{ij}(t) - T_{ji}(t) \neq 0$. A non-zero net flow indicates that the system is out of equilibrium. In a diffusive system, the net flow gradually decays to zero with time, as the system evolves towards an equilibrium. Such an equilibrium state is stationary on the ``macroscopic'' level, showing no change in the population of each suburb. However, on the ``microscopic'' level there still exists some movement of people, resulting in non-zero uni-directional flows $T_{ij}(t)$. In an equilibrium, these uni-directional flows satisfy a microscopic detailed balance, so that $T_{ij}(t) = T_{ji}(t)$, resulting in a zero net flow $J_{ij}(t)$ between each pair of suburbs.
The uni-directional flow matrix allows one to predict the future population of any suburb. In particular, the population at the next time step $t+1$ can be expressed through the migration flow $T_{ij}(t)$ at the current time step $t$ as $x_j(t+1)=\sum_{i=1}^{N}T_{ij}(t)$, where the sum includes the term $T_{jj}(t)$ accounting for the immobile population. Introducing the fraction of relocated people as $p_{ij}(t) \equiv T_{ij}(t)\,/ x_i(t)$, we can write the population evolution equation as
\begin{equation}\label{eq:forceflux1}
X(t+1) = X(t) P(t),
\end{equation}
where $X$ is the (row) vector of the suburbs' population and $P$ is the relocation matrix denoting the fractions of relocating people between each pair of suburbs, with the diagonal elements $p_{jj}$ denoting the fraction of non-relocating residents. The column sum for each row of the relocation matrix is equal to $\sum_{j=1}^{N}p_{ij} = 1$, so $P$ is a row-stochastic matrix \cite{Grinstead2012}.
{The population evolution \eqr{eq:forceflux1} represents a simple Markov process, converging to a distinct stationary state $X_{eq}$, which we identify as the equilibrium state. In equilibrium, the population of each suburb $x_{i,eq}$ does not change in time, such that}
\begin{equation}\label{eq:stationary_total}
X_{eq} \cdot P_{eq} = X_{eq}
\end{equation}
where $P_{eq} \equiv \lim_{t\to\infty}\,P(t)$.
{We assume that the urban area is a closed system with no external shocks and, thus,} the relocation matrix does not change in time, i.e., $P_{eq} \approx P(0) \equiv P$.
Since matrix $P$ is row-stochastic, it has a unit eigenvalue \cite{Grinstead2012} and the vector $X_{eq}$ can be found as a left eigenvector of matrix $P$ that corresponds to the unit eigenvalue. This eigenvector is unique (up to a constant multiplier) if some power of matrix $P$ has strictly positive elements \cite{Grinstead2012}.
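As a minimal illustration, $X_{eq}$ can be computed with a few lines of standard linear algebra; the following Python sketch uses an illustrative three-suburb relocation matrix, not actual Census numbers.
\begin{verbatim}
import numpy as np

def equilibrium_population(P, total):
    """Stationary population: left eigenvector of the row-stochastic
    relocation matrix P for the unit eigenvalue, scaled to the total."""
    w, V = np.linalg.eig(P.T)          # right eigenvectors of P^T
    v = np.abs(np.real(V[:, np.argmin(np.abs(w - 1.0))]))
    return total * v / v.sum()

# Toy three-suburb example (numbers are illustrative only)
P = np.array([[0.90, 0.07, 0.03],
              [0.05, 0.92, 0.03],
              [0.04, 0.06, 0.90]])
print(equilibrium_population(P, total=100_000))
\end{verbatim}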
Our next step is to explicitly factorize the relocation dynamics into spatial and temporal parts. We decompose the relocation matrix according to the following structure
\begin{equation}\label{eq:migration_matrix_decomposition}
P = (1-\epsilon) I + \epsilon H,
\end{equation}
where $\epsilon$ is the share of people who relocate to a different suburb within one time step. Such a decomposition is known as the mover-stayer model \cite{Blumen1955}, which has been used to describe relocation phenomena in biology, economics and social sciences \cite{Fuchs1988, Cook2002, Fougere2003, Frydman2004}.
Here, matrix $H$ shows the relocation structure of those residents who moved to a different suburb (i.e., have not stayed in the same suburb). The matrix $H$ captures the spatial structure of the system and characterizes the variation in microscopic ``attractiveness'' between different suburbs. Without loss of generality, we assume $h_{ii}=0$. Furthermore, the coefficient $\epsilon$ can alternatively be written as $\epsilon \equiv 1/\tau$, where $\tau$ is the characteristic relocation time. We will refer to $\epsilon$ as the relocation frequency or the population mobility.
We next extend the simple decomposition \eqref{eq:migration_matrix_decomposition}, so that the population mobility has a more complex structure than a single frequency $\epsilon$. Without loss of generality, we assume that the relocation dynamics is governed by a discrete set of relocation frequencies $\epsilon_k$, where $k=1,2,\dots,K$. In the context of the population structure, this suggests that the urban population comprises several distinct groups, which differ in their mobility. There may exist a number of classifications which differentiate population groups by their mobility, based on ownership status (renters and home-owners), family status (singles and families), or employment status (students, professionals, retirees). In this work we abstract away from the specific nature of these groups, assuming only their existence.
Expanding the structure of the population mobility does not affect the spatial structure of the relocation, i.e., the matrix $H$. {We therefore assume that the spatial structure of the relocation dynamics is the same for each population group. Although this is, in principle, a strong condition, we show in \secr{sec:heterogeneous_H} that it does not affect the results in practice.}
The equilibrium population of each component is obtained similarly to \eqr{eq:stationary_total}, as
\begin{equation}\label{eq:stationary}
X_{k,eq} \cdot P_{k} = X_{k,eq},
\end{equation}
where $P_{k}=(1-\epsilon_k) I + \epsilon_k H$ is the relocation matrix of component $k$.
We show in \appr{appendix:same_equilibrium} that the population of each component $X_{k}(t)$ converges to the equilibrium population structure
\begin{equation}\label{eq:fraction}
X_{k,eq} = \alpha_k X_{eq},
\end{equation}
where $\alpha_{k}$ is the total fraction of the city population belonging to the component $k$, so that $\sum_{k=1}^{K} \alpha_{k} = 1$, and $X_{eq}$ is the total equilibrium population structure, which is independent of $\epsilon_k$ and $\alpha_k$. This is also illustrated in \figr{fig:convergence_example}. Thus, the total equilibrium $X_{eq}$ can be obtained using the full spatial matrix $H$, without the component-specific relocation matrices $P_k$ or even the components' fractions $\alpha_k$. This is very convenient, as the component structure of the population is not known \textit{a priori}, and in practice it is only matrix $H$ which can be obtained from the data directly.
\section{Results}\label{sec:results}
{In this section, we present the results of our framework for the Australian capital cities. First, in \secr{sec:one_vs_two_components_data}, we demonstrate that the model with homogeneous population fails to consistently describe the intra-urban migration dynamics, while a heterogeneous model resolves this issue.
Second, in \secr{sec:diffusion}, we develop an analogy between an intra-urban migration and diffusion. This allows us to interpret the heterogeneous dynamics of intra-urban migration as diffusion in a multi-component fluid mixture.
Third, in \secr{sec:long_term_structure_predictions}, we predict the equilibrium configuration of the considered cities.}
\subsection{{Revealing two-component structure of intra-urban evolution}}\label{sec:one_vs_two_components_data}
We first analyze the human relocation flows in eight Australian Greater Capital Areas, which represent populated metropolitan areas with diverse cultural and economic activities. The model is calibrated using the data from the Australian Census \cite{ABS2016}, which are reported as the migration flows $T_{ij}$ between each pair of suburbs within 1 year and within 5 years, denoted as $T_{ij;1Y}$ and $T_{ij;5Y}$ respectively. This suggests 1 year as the natural choice for the time step. The data are available for two census years, 2011 and 2016, with the migration recorded retrospectively. Intra-suburb migration is not considered in this analysis. The data resolution we use, Statistical Area 2, is the finest in the Australian Census for which the migration data are available.
A naive approach suggests calculating the one-year migration matrix directly as
\begin{equation*}
p_{ij;\,1Y} = T_{ij;\,1Y}/x_i.
\end{equation*}
This, however, produces results which are inconsistent with the 5-year migration data.
Indeed, the 5-year migration matrix is, by definition, $p_{ij;\,5Y} = \left[P^5\right]_{ij}$, where $P$ is the matrix of migration rates $p_{ij;\,1Y}$ and $\left[P^5\right]_{ij}$ stands for the element in row $i$ and column $j$ of matrix $P^5$. The 5-year migration flow extrapolated from the 1-year migration flow is $\hat{T}_{ij;\,5Y} = p_{ij;\,5Y}\,x_i(t)$. Comparing the 5-year number of movers obtained from the actual migration flow, $\sum_{j \neq i} T_{ij;\,5Y}(2016)$, with that obtained from the predicted migration flow, $\sum_{j \neq i}\hat{T}_{ij;\,5Y}(2016)$, as shown in \figr{fig:migration_5Y_prediction_non_relocating}, we observe a systematic disagreement: the predicted numbers of movers are consistently higher than the actual numbers. In particular, in all Greater Capital Areas the average share of people who do not change their place of residence within 1 year is about $0.87-0.92$. The analogous share within 5 years is about $0.7-0.78$, while the predicted one is approximately $0.9^5 \approx 0.6$, as shown in table \ref{table:stayer_share}.
We note that there exist, in principle, several alternative ways to calibrate the single-group model. They, however, give the same result: the single-group model is not capable of explaining the 5-year migration patterns from the 1-year migration patterns. We refer to Appendix \ref{appendix:alternative_calibration} for the details of these calibrations.
\begin{figure}[h!]
\centering
\includegraphics[height=7in]{migration_matrix_non_relocating_1Y_to_5Y_predicted_scatter_with_latent_groups__2016.png}
\caption{Number of movers in five-year migration data: actual ($\sum_{i \neq j} T_{ij;\,5Y}(2016)$) vs predicted ($\sum_{i \neq j}\hat{T}_{ij;\,5Y}(2016)$), with each dot representing one suburb. Red dots correspond to the one-component model, the green ones correspond to the two-component model. The blue solid line has the slope of $1$, showing the ideal prediction. The corresponding calibration errors are shown in table~\ref{table:relative_error} in the Appendix.}
\label{fig:migration_5Y_prediction_non_relocating}
\end{figure}
\begin{table}[h!]
\centering
\caption{Share of people who do not change their place of residence (actual vs predicted). The values are based on 2016 Census data.}
\vspace{3mm}
\label{table:stayer_share}
\begin{tabular}{|| c || c | c | c | c || }
\hline
\multirow{2}{*}{GCA} & \multirow{2}{*}{1Y actual} & \multirow{2}{*}{5Y actual} & 5Y predicted & 5Y predicted \\
& & & 1-component & 2-component \\
\hline\hline
Sydney & 0.909 & 0.733 & 0.630 & 0.735 \\
\hline
Melbourne & 0.905 & 0.734 & 0.618 & 0.736 \\
\hline
Brisbane & 0.889 & 0.702 & 0.569 & 0.704 \\
\hline
Adelaide & 0.911 & 0.752 & 0.635 & 0.754 \\
\hline
Perth & 0.895 & 0.71 & 0.584 & 0.712 \\
\hline
Hobart & 0.923 & 0.783 & 0.678 & 0.788 \\
\hline
Darwin & 0.874 & 0.713 & 0.527 & 0.717\\
\hline
Canberra & 0.904 & 0.720 & 0.615 & 0.722\\
\hline
\end{tabular}
\end{table}
In order to resolve this problem, we extend the model, assuming that the population comprises two groups instead of one, while staying within the general framework \eqref{eq:conductivty_components}. Each group is characterized by its own relocation frequency, $\epsilon_1$ or $\epsilon_2$, which, in general, differ from each other. Furthermore, we restrict ourselves to the case where the population share of each group, $\alpha_1 \equiv \alpha$ and $\alpha_2 = 1 - \alpha$, is the same across all suburbs in the short term and is equal to the total population share. If $\alpha$ were different for each suburb, the model would have an excessive number of parameters, which might improve the goodness of fit but would reduce the calibration robustness.
{We point out that the number of population groups with distinct relocation frequencies does not have to be equal to two: it simply has to differ from one. This is a crucial departure from a homogeneous population model which is not capable of explaining the actual relocation dynamics.
}
Within this framework, we deduce the three parameters $\epsilon_1$, $\epsilon_2$ and $\alpha$ from the data sets described above. There exist multiple estimation algorithms for similar models with a parametric structure of the matrices (see, e.g., \cite{Goodman1961, Cook2002, Frydman2018} for more details). Here we use a simple calibration technique, selecting the parameters $\epsilon_1$, $\epsilon_2$ and $\alpha$ without specifying a parametric functional form for the elements of the relocation matrix $H$.
We calculate migration flows $T_{ij}$ as the sum of two components:
\begin{equation}\label{eq:migration_flow_combined}
\hat{T}_{ij;\,5Y}(2016) = x_i \left(\alpha \left[P_1^5\right]_{ij} + (1-\alpha) \left[P_2^5\right]_{ij}\right),
\end{equation}
where $P_k=(1-\epsilon_k)I + \epsilon_k H$ and $\left[P_k^5\right]_{ij}$ stands for the element in row $i$ and column $j$ of the matrix $P^5_k$. Matrix $H$ is estimated as follows:
\begin{equation}\label{eq:H_estimate}
h_{ij}=\frac{T_{ij;\,1Y}(2016)}{\sum_{k: k \neq i}T_{ik;\,1Y}(2016)},
\end{equation}
where $T_{ij;\,1Y}(2016)$ is the number of people who migrated from suburb $i$ to suburb $j$ within the one-year period 2015-2016.
For a given composition $\alpha$, the relocation frequencies $\epsilon_k$ can be found from the conditions \begin{equation}\label{eq:mobility_calibration}
\begin{array}{lclcl}
\alpha (1-\epsilon_1) &+& (1-\alpha)(1-\epsilon_2) &=&s_{1Y}, \\
\alpha (1-\epsilon_1)^5 &+& (1-\alpha)(1-\epsilon_2)^5 &=&s_{5Y},\\
0 \leq \epsilon_1 \leq 1, &\quad& 0 \leq \epsilon_2 \leq 1,
\end{array}
\end{equation}
where $s_{1Y}$ is the average share of stayers within a one-year period and $s_{5Y}$ is the average share of stayers within a five-year period, both calculated from the Census data. The actual values of $s_{1Y}$ and $s_{5Y}$ for the Australian Capital Areas are such that the solution to \eqr{eq:mobility_calibration} exists and is unique. It should also be noted that the available data do not allow us to calibrate the value of $\alpha$, and thus it has to be fixed beforehand. The general question of existence and uniqueness of the solution to \eqr{eq:mobility_calibration} with respect to $\alpha$ and $\epsilon_k$ is discussed in more detail in \secr{sec:two_components_calibration_analysis}.
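As an illustration, the system \eqr{eq:mobility_calibration} can be solved numerically for a fixed $\alpha$ by reducing it to a single scalar root-finding problem; the Python sketch below (with Sydney-like input values) is illustrative and not the exact routine used for the reported results.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def calibrate(s1, s5, alpha):
    """Solve Eq. (mobility_calibration) for eps1, eps2 at fixed alpha."""
    # substitute u = 1-eps1; the 1-year condition fixes v = 1-eps2
    v = lambda u: (s1 - alpha*u) / (1 - alpha)
    g = lambda u: alpha*u**5 + (1 - alpha)*v(u)**5 - s5
    lo = max(0.0, (s1 - (1 - alpha)) / alpha)   # feasibility: 0 <= v <= 1
    hi = min(1.0, s1 / alpha)
    us = np.linspace(lo, hi, 2001)
    gs = np.array([g(u) for u in us])
    i = np.nonzero(np.sign(gs[:-1]) != np.sign(gs[1:]))[0][0]
    u = brentq(g, us[i], us[i + 1])             # bracketed root
    return 1 - u, 1 - v(u)

eps1, eps2 = calibrate(s1=0.909, s5=0.733, alpha=0.9)  # Sydney-like values
print(eps1, eps2)
\end{verbatim}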
The magnitudes of the migration outflow for each suburb predicted by this model for $\alpha=0.9$ are plotted against their actual magnitudes in \figr{fig:migration_5Y_prediction_non_relocating} (numbers of movers, $\sum_{j \neq i}T_{ij;\,5Y}(2016)$). It is evident that the values predicted by the two-component model are in much better agreement with the actual data than the predictions obtained by the one-component model.
The same analysis can be performed with the 2011 migration data, and the corresponding predictions are shown in \figr{fig:migration_5Y_prediction_non_relocating_2011} in the Appendix.
Here, we again observe that the naive model produces a systematic bias in its predictions, while the two-component model provides a good fit to the data. From this comparison, we conclude that the described methodology predicts the migration flows with high precision, once the systematic bias produced by the one-component model is eliminated.
\subsection{{Intra-urban migration as diffusion}}\label{sec:diffusion}
{In this section, we describe intra-urban migration as an irreversible process of diffusion. We first follow the general description in \secr{sec:markov}, building the analogy for a general multi-component fluid. Next, we illustrate the analogy for a specific case of intra-urban migration in Sydney, using the results from \secr{sec:one_vs_two_components_data}.}
The difference $U(t) \equiv X(t)-X_{eq}$ between the actual population at time $t$ and the equilibrium population $X_{eq}$ shows how far the system is away from equilibrium. Introducing the rate of population change $Q(t) \equiv X(t+1) - X(t)$ as the difference between two subsequent time steps and using \eqr{eq:stationary_total}, we can rewrite the population evolution equation \eqref{eq:forceflux1} as
\begin{equation}\label{eq:forceflux}
Q(t) = U(t) \cdot L
\end{equation}
where $L \equiv P-I$ and $I$ is the identity matrix. We view \eqr{eq:forceflux} as the central expression underlying the analogy between intra-urban migration and diffusion. Indeed, if we consider two reservoirs with different fluid concentrations connected by a thin channel, there will exist a flow of fluid through that channel until these concentrations equilibrate. The rate of the concentration change in each of the reservoirs is proportional to the difference between the current concentration and the equilibrium concentration, with the proportionality coefficient being related to the diffusion coefficient, following the same dependency as \eqr{eq:forceflux}. In general, \eqr{eq:forceflux} has the form of a typical transport equation in non-equilibrium thermodynamics \cite{deGrootMazur}, which describes the irreversible evolution of a thermodynamic system. For a closed system, this corresponds to a relaxation phenomenon, with $U(t)$ being the driving force, which drives the system towards equilibrium, and $Q(t)$ being the rate of material change. Furthermore, $L$ is the matrix of transport coefficients, or simply the transport matrix, comprising the transport coefficients between each pair of suburbs. The transport matrix determines how fast the system relaxes towards equilibrium. In the context of urban dynamics, the irreversibility is ensured by the constancy of the relocation matrix $P$: the fractions of residents migrating between two suburbs, $p_{ij}$, remain constant during relaxation (while the flows $T_{ij}$ and populations $x_i$ keep changing). In other words, once the equilibrium is reached, there is no driving force to reverse the relocation dynamics \eqref{eq:forceflux1}.
The transport coefficients are central in describing the irreversible evolution of a thermodynamic system. Similarly, the knowledge of the transport matrix is central in predicting the relocation dynamics in an urban system. An essential property of a transport coefficient in a physical system is that, in a closed system, it does not depend explicitly on time. This reflects the microscopic reversibility of molecular motion. While such a principle does not exist \textit{a priori} for an urban system, requiring that the transport matrix $L$ (and, therefore, the relocation matrix $P \equiv L + I$) does not change with time amounts to an analogous microscopic reversibility of intra-urban relocation. We will see below that this assumption is supported by actual data, helping us to derive the transport matrix from the Census data on relocation.
Substituting the decomposition \eqref{eq:migration_matrix_decomposition} into $L = P - I$, we obtain
\begin{equation}\label{eq:conductivty}
L = -\epsilon\,(I - H),
\end{equation}
so that the transport matrix factorizes into two terms. The temporal term, the coefficient $\epsilon$, shows the speed of relaxation towards equilibrium and characterizes the rate of the system irreversibility. The spatial term, the matrix $H$, shows the spatial distribution of a migration potential.
{We next point out that the population groups, introduced above, correspond to distinct components in a multi-component fluid mixture. Indeed, extending \eqr{eq:conductivty} to multiple population groups, we can write }
\begin{equation}\label{eq:conductivty_components}
L_{k} = -\epsilon_{k}\,(I - H).
\end{equation}
{Here, the temporal term $\epsilon_{k}$ is different for each population group, corresponding to a fluid component with a distinct relaxation rate. In contrast, the spatial term $H$ is the same for all components, corresponding to an external potential field.}
Writing the transport equation for each component separately, we obtain:
\begin{equation}\label{eq:forceflux_components}
Q_{k}(t) = U_{k}(t)\cdot L_{k}
\end{equation}
where $L_{k}$ is the component-specific transport matrix defined by \eqr{eq:conductivty_components}, while $Q_k(t) \equiv X_k(t+1) - X_k(t)$ and $U_k(t) \equiv X_{k}(t)-X_{k,\,eq}$.
With this analogy, the overall dynamics of intra-urban evolution follows a profile of diffusive relaxation. Specifically, at large $t$, the decay of the driving force is asymptotically exponential, $\Vert U_k(t)\Vert \sim \lambda_k^t$, where $\lambda_k < 1$ is the second-largest (by modulus) eigenvalue of the relocation matrix $P_k$. Since $P_k=(1-\epsilon_k)I+\epsilon_k H$, the eigenvalues of $P_k$ and $H$ are related by $\lambda = 1-\epsilon_k(1-h)$, where $h$ is the corresponding eigenvalue of $H$.
This implies that near equilibrium (when the values of $U_k(t)$ are small), the rate $Q_k(t)$ is asymptotically proportional to the driving force:
\begin{equation}\label{eq:equilibrium_2}
\Vert Q_k(t) \Vert \sim (1-\lambda_k) \Vert U_k(t) \Vert.
\end{equation}
We illustrate the long-term relaxation dynamics for the specific case of intra-urban migration in Sydney. As revealed in the previous subsection, there exist two population groups in Sydney, which correspond to two components in a fluid mixture. \figr{fig:diffusion_convergence} shows the relaxation profiles for these components. In particular, the left panel shows that the driving force decays exponentially with time, as expected. Furthermore, the right panel shows that the near-equilibrium rate of relaxation is linearly proportional to the driving force, according to \eqr{eq:equilibrium_2}. This is, indeed, in agreement with the framework of linear irreversible thermodynamics \cite{deGrootMazur}, where the near-equilibrium relaxation rate is linearly proportional to the driving force, with the coefficient of proportionality characterized by the second eigenvalue of the relocation matrix.
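The relaxation profile of \figr{fig:diffusion_convergence} can be reproduced qualitatively with a toy relocation matrix; the following Python sketch (with illustrative numbers, chosen symmetric so that all eigenvalues are real) iterates \eqref{eq:forceflux1} and compares the asymptotic decay factor of $\Vert U(t)\Vert$ with the second-largest eigenvalue.
\begin{verbatim}
import numpy as np

def relaxation_profile(P, x0, steps=100):
    """Iterate X(t+1) = X(t) P and track ||U(t)|| = ||X(t) - X_eq||."""
    w, V = np.linalg.eig(P.T)
    order = np.argsort(-np.abs(w))
    xeq = np.abs(np.real(V[:, order[0]]))
    xeq *= x0.sum() / xeq.sum()
    lam2 = np.abs(w[order[1]])          # second-largest eigenvalue modulus
    x, norms = x0.astype(float), []
    for _ in range(steps):
        norms.append(np.linalg.norm(x - xeq))
        x = x @ P
    return np.array(norms), lam2

P = np.array([[0.90, 0.06, 0.04],       # symmetric, hence real spectrum
              [0.06, 0.88, 0.06],
              [0.04, 0.06, 0.90]])
norms, lam2 = relaxation_profile(P, np.array([60_000., 25_000., 15_000.]))
print(norms[-1] / norms[-2], lam2)      # decay factor approaches lambda_2
\end{verbatim}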
\begin{figure}[h!]
\centering
\includegraphics[height=3in]{exponential_convergence_Sydney_2016.png}
\caption{Exponential convergence of $U_k(t)$ for each of the population groups, $k=1,2$: (A). $\log \Vert U_k(t) \Vert$ is plotted against time step $t$; (B). $\Vert Q_k(t)\Vert$ is plotted against $\Vert U_k(t)\Vert$ (thick dotted curves) and the tangential lines with the slope $1-\lambda_k$ (solid straight lines), where $\lambda_k$ is the second eigenvalue of the group relocation matrix. For illustration purposes, both $U_k(t)$ and $Q_k(t)$ are normalized by the total number of residents, $\alpha_k \overline{x}$, in the corresponding group.}
\label{fig:diffusion_convergence}
\end{figure}
\subsection{{Predicting equilibrium population distribution}}\label{sec:long_term_structure_predictions}
We next build a long-term forecast for the spatial structure of the Australian cities, assuming that the current migration flows remain stable in the following years.
As mentioned above, the equilibrium structure $X_{eq}$ is independent of $\alpha$, $\epsilon_1$ and $\epsilon_2$; we calculate it as the leading left eigenvector of matrix $H$, which is obtained using \eqr{eq:H_estimate}.
The corresponding predictions are shown in \figr{fig:population_prediction}. To test their consistency, we compare the forecasts derived from the 2016 data with the analogous ones based on the 2011 data. \figr{fig:long_term_prediction_comparison} demonstrates that the outcomes based on the 2011 and 2016 configurations are in good agreement with each other. This indicates that the relocation trends are stable in time, supporting the assumption of constancy of the relocation matrix $P$.
\begin{figure}[h!]
\centering
\includegraphics[height=7in]{predicted_eigenvector_heatmap__2016_bar.png}
\caption{Long-run population structure prediction based on eigenvectors of migration matrix{, obtained from 2016 Census data. Scale bars in lower-left corners indicate distances equivalent to 20 km.}}
\label{fig:population_prediction}
\end{figure}
These results reveal that the equilibrium states of three out of the eight capital cities (Sydney, Melbourne and Perth) are more spread out compared with their current structure (shown in \figr{fig:actual_population} in the Appendix). The equilibrium structures of Brisbane, Adelaide and Darwin are similar to the current ones, while the structures of Hobart and Canberra are more compact than the current ones.
These qualitative observations can be quantified by the spreading-index method \cite{Louail2014,Volpati2018,Slavko2020}, which determines the degree of polycentricity and dispersal, as opposed to monocentricity and compactness, of the city. The values of the spreading index (calculated both for the actual configurations of the Australian capital cities and for the predicted ones) are shown in table \ref{table:spreading_index}. It is remarkable that our long-term prediction is independent of $\alpha$, $\epsilon_k$, and even the number of heterogeneous components (which can be larger than two in reality), as long as the spatial migration pattern described by matrix $H$ is shared by the entire population.
\begin{table}[h!]
\centering
\caption{Spreading index calculated for both current and predicted long-run structure of the Australian cities.}
\vspace{3mm}
\label{table:spreading_index}
\begin{tabular}{|| c || c | c || }
\hline
GCA
& Current
& Predicted \\
\hline\hline
Sydney & 0.29 & 0.74 \\
\hline
Melbourne & 0.26 & 0.43 \\
\hline
Brisbane & 0.32 & 0.37 \\
\hline
Adelaide & 0.54 & 0.53 \\
\hline
Perth & 0.44 & 0.58 \\
\hline
Hobart & 0.38 & 0.25 \\
\hline
Darwin & 0.75 & 0.79 \\
\hline
Canberra & 1.06 & 0.92 \\
\hline
\end{tabular}
\end{table}
\section{Robustness evaluation}\label{sec:robustness}
In the previous section, we have shown that our non-equilibrium framework of diffusive intra-urban relaxation explains the short-term migration data and is able to provide long-term predictions. An important element of the model is the assumption that the population comprises multiple components with different relocation frequencies, which, in the context of our framework, correspond to different relaxation rates. In this section, we investigate the robustness of this claim, analyzing the extent of its applicability. In particular, in \secr{sec:two_components_calibration_analysis} we study the two-component model, arguing that this case is sufficient to consistently describe the short-term migration. In \secr{sec:heterogeneous_H} and \secr{sec:heterogeneous_sydney}, we explore the sensitivity of the equilibrium configuration to the spatial migration patterns (captured by matrix $H$) varying between different population components. In \secr{sec:heterogeneous_H} we do this for an abstract city with extreme migration patterns, while in \secr{sec:heterogeneous_sydney} we extend this analysis to the specific case of Sydney.
\subsection{Heterogeneity of the population}
In \secr{sec:one_vs_two_components_data}, we reported that the baseline one-component model systematically predicts higher rates of five-year migration than observed in reality. Here we show that a homogeneous population mobility cannot produce consistent migration predictions; hence, the population has to be heterogeneous with respect to its mobility. A population comprising two groups with two distinct relocation frequencies is a minimal realization of such heterogeneity.
}
One may expect that the 5-year relocation rate observed in reality is lower than the one predicted from the 1-year relocation rate due to the low mobility of recently relocated people. Indeed, if an individual has moved into a new home recently, there might be little incentive for them to move again relatively quickly. To represent this constraint, we assume that people do not relocate for $\tau$ years after their last relocation (immobility assumption), and calculate the share of those who have not relocated for different values of this parameter $\tau$. \figr{fig:stayer_curve_with_memory} compares how this share changes in time for homogeneous populations (the full solution of this model is provided in \secr{appendix:memory_model} in the Appendix) and for a two-component population without the immobility assumption. It is evident from the figure that the immobility assumption does not improve the model. Indeed, no value of the immobility period $\tau$ can match the five-year relocation flow. In such a setting, the presence of residents who do not relocate within a certain period implies a higher relocation frequency for those who do. As a result, the 5-year relocation rate predicted by the homogeneous model with the immobility assumption is even higher than that produced by the simple homogeneous model (the baseline model), an outcome opposite to the expected one.
\begin{figure}[h!]
\centering
\includegraphics[height=2.7in]{stayer_curve_Sydney_memory_model.png}
\caption{The share of people who do not change their place of residence within period $t$ plotted against the length of this period. Green dots correspond to the actual values for Sydney ($s_{0Y}=1$; $s_{1Y}=0.91$; $s_{5Y}=0.73$). Solid curves correspond to the model where people do not relocate within $\tau$ years after their last relocation ($\tau$ ranges from 0 to 5), see \eqr{eq:migration_5Y_memory}. The dotted curve corresponds to the two component model ($\alpha = 0.9$), see \eqr{eq:migration_flow_combined}. All models are calibrated to the Sydney relocation data. All solid curves pass through the actual one-year relocation rate $s_{1Y}=0.91$ but go well below the corresponding five-year value, $s_{5Y}=0.73$.}
\label{fig:stayer_curve_with_memory}
\end{figure}
This analysis shows that a lower mobility of the recently relocated people cannot be a valid explanation for the lower actual 5-year relocation rate. Therefore, we conclude that a homogeneous population comprising only one component is not sufficient to make the model consistent with the data.
\subsection{Solution space of the two-component model}\label{sec:two_components_calibration_analysis}
In \secr{sec:one_vs_two_components_data}, we have calculated $\epsilon_1$ and $\epsilon_2$ using the average shares of stayers within one-year ($s_{1Y}$), five-years period ($s_{5Y}$) and conditions \eqref{eq:mobility_calibration}.
The values of $\epsilon_1$ and $\epsilon_2$ depend on $\alpha$, which is not known without specifying the nature of the groups. This, however, does not affect the possibility of splitting the population into two groups and obtaining consistent predictions of the five-year migration patterns from the one-year migration patterns.
The non-linear system of algebraic equations \eqref{eq:mobility_calibration} allows one to calculate the relocation frequencies $\epsilon_1$, $\epsilon_2$ for a given composition $\alpha$. It consists of two equations containing three unknown variables ($\epsilon_1$, $\epsilon_2$ and $\alpha$), and for a given $\alpha$ it can have up to 5 real roots. Some of the roots may not belong to the range from 0 to 1 and, therefore, cannot be valid solutions for $\epsilon_1$ and $\epsilon_2$. \figr{fig:calibration_diagram} demonstrates that, depending on $s_{1Y}$, $s_{5Y}$ and $\alpha$, there exist 2, 1 or 0 such solutions. Furthermore, it is evident from \figr{fig:calibration_diagram} that for any $s_{5Y}$ ranging from $s_{1Y}^5$ to $s_{1Y}$ there exists at least one solution for the pair $\epsilon_1$ and $\epsilon_2$. This means that it is always possible to calibrate the model \eqref{eq:mobility_calibration} as long as $s_{5Y} \geq s_{1Y}^5$ (the other inequality, $s_{5Y} \leq s_{1Y}$, holds automatically), which is the case for the actual migration data.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{actual_alpha_vs_alternative_mobility.png}
\caption{Number of solutions to \eqref{eq:mobility_calibration} depending on $\alpha$ and $s_{5Y}$ for A. $s_{1Y}=0.2$, B. $s_{1Y}=0.3$, C. $s_{1Y}=0.4$, D. $s_{1Y}=0.5$, E. $s_{1Y}=0.6$, F. $s_{1Y}=0.7$, G. $s_{1Y}=0.8$, H. $s_{1Y}=0.9$, I. $s_{1Y}=0.98$, J. $s_{1Y}=0.99$. Yellow areas correspond to two distinct solutions, green areas represent one solution, dark purple stands for no solution.}
\label{fig:calibration_diagram}
\end{figure}
Although it is not feasible to estimate $\alpha$ directly from the current data set, it would be possible to do so with a longer record of internal migration. In particular, if we also knew people's places of residence 10 years ago, 15 years ago, etc., we would be able to determine the value of $\alpha$ which predicts the share of movers and stayers with a higher precision. This idea is illustrated in \figr{fig:forward_stayer_curve}, which demonstrates the stayer-share values predicted by the two-component model. Parameters $\epsilon_1$ and $\epsilon_2$ are calibrated to $s_{1Y}=0.91$ and $s_{5Y}=0.73$ (Sydney values are used as an example). Values of $\alpha$ vary from 0.7 to 0.95 (a solution of \eqref{eq:mobility_calibration} exists and is unique if $0.05 \leq \alpha \leq 0.33$ or $0.67 \leq \alpha \leq 0.95$, but the solutions for $\alpha = a$ and $\alpha = 1 - a$ are equivalent due to symmetry).
For the 30-year horizon, the predictions for the share vary from 0.2 (if $\alpha=0.95$) to 0.6 (if $\alpha=0.7$). This means that the two-component model can consistently fit a larger variety of data than the one-component model.
If, however, the actual structure of the population is more complex, e.g., made of a larger number of components, the two-component model would not be able to adequately account for the corresponding data.
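The family of curves in \figr{fig:forward_stayer_curve} can be generated directly from the calibrated parameters; the sketch below reuses the illustrative \texttt{calibrate} routine from \secr{sec:one_vs_two_components_data} above and prints the long-horizon stayer shares for several values of $\alpha$.
\begin{verbatim}
import numpy as np

# Stayer-share curves s(t) = a(1-eps1)^t + (1-a)(1-eps2)^t, each pinned
# to s(1)=0.91 and s(5)=0.73 (Sydney values); a longer migration record
# (10Y, 15Y, ...) would single out the empirically correct alpha.
for alpha in (0.70, 0.80, 0.90, 0.95):
    eps1, eps2 = calibrate(s1=0.91, s5=0.73, alpha=alpha)
    t = np.arange(31)
    s = alpha*(1 - eps1)**t + (1 - alpha)*(1 - eps2)**t
    print(alpha, round(s[10], 3), round(s[30], 3))  # curves diverge late
\end{verbatim}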
\begin{figure}
\centering
\includegraphics[height=2.7in]{stayer_curve_Sydney.png}
\caption{The share of people who do not change their place of residence within period $t$ plotted against the length of this period. Dotted curve corresponds to the naive single-component model (calibrated to one-year value, $s_{1Y}=0.91$). Solid lines describe a family of the two-component model predictions matching actual one-year and five-year values (Sydney values, $s_{1Y}=0.91$ and $s_{5Y}=0.73$, are taken as an example) for different levels of $\alpha$. All solid curves pass through 3 common points (green): $s_{0Y}=1$; $s_{1Y}=0.91$; $s_{5Y}=0.73$. The dotted curve passes through the first two green points and its five-year prediction is marked in red.}
\label{fig:forward_stayer_curve}
\end{figure}
\subsection{Component-specific relocation matrix}\label{sec:heterogeneous_H}
As has been shown previously, heterogeneity in mobility rates $\epsilon_k$ does not affect the equilibrium structure of the city as long as all groups have the same relocation matrix $H$. This assumption, however, may not be valid if we do not observe $H$ for each group directly. This might be important, as there exists empirical evidence that different social groups (such as renters and mortgagors) have different migration patterns \cite{Crosato2020}. Thus, it is important to assess the possibility for the matrices $H_k$ to be component-specific, or in other words, heterogeneous.
If matrices $H_k$ are heterogeneous, the equilibrium population structure $X_{eq}$ is no longer independent of the compositions $\alpha_k$ and relocation rates $\epsilon_k$. In particular, matrices $H_k$ may have different stationary vectors $X_{k,eq}$, in which case it is impossible to estimate the stationary population structure unless matrices $H_k$ are observed directly. The equilibrium structure $X_{eq}=\sum_{k=1}^{C} X_{k,eq}$ depends on the individual matrices $H_k$ and cannot be expressed via the aggregated relocation matrix $\hat{H} = \sum_{k=1}^{C} \alpha_k H_k$.
In this section, we show that the structure calculated using the aggregated matrix $\hat{H}$ can still give a reasonable approximation for the stationary population structure $X_{eq}$, even if the individual vectors $X_{k,eq}$ differ drastically.
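Throughout this section, each equilibrium $X_{k,eq}$ is computed as the stationary distribution of the row-stochastic matrix $H_k$ (the relocation frequencies $\epsilon_k$ drop out of this fixed point, consistently with the discussion above). A minimal helper, with illustrative names:
\begin{verbatim}
# Sketch: stationary population structure for component-specific
# relocation matrices H_k (rows: origin, columns: destination,
# each row summing to one).
import numpy as np

def stationary(H, iters=5000):
    """Stationary distribution of a row-stochastic matrix, by power
    iteration; the relocation frequency eps_k does not affect it."""
    x = np.ones(len(H)) / len(H)
    for _ in range(iters):
        x = x @ H
    return x

def equilibria(alphas, Hs):
    """Ground-truth X_eq and its approximation from the aggregated
    matrix H_hat = sum_k alpha_k H_k."""
    X_eq = sum(a * stationary(H) for a, H in zip(alphas, Hs))
    H_hat = sum(a * H for a, H in zip(alphas, Hs))
    return X_eq, stationary(H_hat)
\end{verbatim}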
To demonstrate this, we consider two artificial examples. In the first example, the population components $1$ and $2$ generate the migration flows with opposite directions. In the second example, there are two groups of suburbs (A and B), and the members of component $1$ always relocate to the suburbs within $A$, while the members of component $2$ always relocate to suburbs within $B$. These two examples are considered for a linear toy city comprising 99 suburbs. All suburbs are located along a line, such that suburb 50 is the ``central'' one.
In the first example, matrix $H_1$ consists of elements $h_{1;\,ij}$ given by:
\begin{equation}
h_{1;\,ij} = \frac{e^{-\beta(d_j-d_i)}}{\sum_{k=1}^{99} e^{-\beta(d_k-d_i)}},
\end{equation}
where $d_i=|i-50|$ is the distance from $i$ to the ``central'' suburb (suburb $50$), and $\beta=0.1$. Elements of $H_2$ are defined as follows:
\begin{equation}
h_{2;\,ij} = \frac{e^{-\beta(d_i-d_j)}}{\sum_{k=1}^{99} e^{-\beta(d_i-d_k)}},
\end{equation}
so that each row of $H_2$, like each row of $H_1$, sums to one. This form of $H_1$ and $H_2$ means that the members of component 1 prefer to relocate to more central suburbs (those close to suburb 50), while the members of component 2 relocate more frequently to the peripheral suburbs (those far from suburb 50). The equilibrium population distributions $X_{1,eq}$ and $X_{2,eq}$ are displayed in \figr{fig:heterogeneous_H_opposite_flows_example} (left column). As one might have anticipated, the population of component $1$ forms a monocentric structure around the ``central'' suburb, while the population of component $2$ predominantly inhabits the peripheral suburbs. The corresponding total population structure $X_{eq}$ and its approximation $\hat{X}_{eq}$ obtained from matrix $\hat{H} = \sum_{k=1}^{C} \alpha_k H_k$ are shown in the right column. In all three cases, (A) $\alpha = 0.1$, (B) $\alpha = 0.5$ and (C) $\alpha = 0.9$, the actual population structures $X_{eq}$ (green bars) lie very close to the corresponding approximations $\hat{X}_{eq}$ (red solid line).
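This comparison can be reproduced with a short script, reusing the power-iteration helpers sketched above (parameter values as in the text):
\begin{verbatim}
# Sketch: opposite-flow example for the linear toy city of 99 suburbs.
import numpy as np

n = 99
d = np.abs(np.arange(1, n + 1) - 50)           # distance to suburb 50
beta = 0.1
E1 = np.exp(-beta*(d[None, :] - d[:, None]))   # e^{-beta(d_j - d_i)}
H1 = E1 / E1.sum(axis=1, keepdims=True)        # centre-seeking component
E2 = np.exp(-beta*(d[:, None] - d[None, :]))   # e^{-beta(d_i - d_j)}
H2 = E2 / E2.sum(axis=1, keepdims=True)        # periphery-seeking component

for alpha in (0.1, 0.5, 0.9):
    X_eq, X_hat = equilibria([alpha, 1 - alpha], [H1, H2])
    print(alpha, np.max(np.abs(X_eq - X_hat)))  # small deviations
\end{verbatim}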
\begin{figure}
\centering
\includegraphics[height=7in]{heterogeneous_H_opposite_flows_example.png}
\caption{Stationary population structure for the case where components 1 and 2 have migration flows with opposite directions: (A) $\alpha = 0.1$; (B) $\alpha = 0.5$; (C) $\alpha = 0.9$. Componentwise population structure is shown in the left column. Total population structure is shown in the right column. For all values of $\alpha$, the approximations $\hat{X}_{eq}$ obtained from the \emph{observable} matrix $\Hat{H}$ (red solid line) are almost indistinguishable from the ground-truth equilibria $X_{eq}$ (green bars).}
\label{fig:heterogeneous_H_opposite_flows_example}
\end{figure}
In the second example, we fill columns 26--74 of matrix $H_1$ with positive random numbers and the other columns are filled with zeros. In contrast, to fill matrix $H_2$, we assign zero values to columns 26--74 and positive random numbers to columns 1--25 and 75--99. Each row in both matrices is normalized so that its elements sum to one.
It is natural to anticipate that, in the equilibrium, all members of component $1$ will live in suburbs 26--74, while the members of component $2$ will live in suburbs 1--25 and 75--99 (left column in \figr{fig:heterogeneous_H_segregated_flows_example}; cases A, B and C correspond to $\alpha=0.1$, $0.5$ and $0.9$, respectively). In the right column of \figr{fig:heterogeneous_H_segregated_flows_example}, we again observe that, regardless of $\alpha$, the actual equilibrium structure $X_{eq}$ (green bars) does not deviate significantly from its approximation $\hat{X}_{eq}$ (red solid line).
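A sketch of this construction, again reusing the helpers above (the random seed is arbitrary):
\begin{verbatim}
# Sketch: segregated-flow example with block-random relocation matrices.
import numpy as np

rng = np.random.default_rng(0)
n = 99
inA = np.abs(np.arange(1, n + 1) - 50) <= 24   # suburbs 26-74
H1 = rng.random((n, n)) * inA[None, :]         # component 1 moves into A
H2 = rng.random((n, n)) * (~inA)[None, :]      # component 2 moves outside A
H1 /= H1.sum(axis=1, keepdims=True)
H2 /= H2.sum(axis=1, keepdims=True)

for alpha in (0.1, 0.5, 0.9):
    X_eq, X_hat = equilibria([alpha, 1 - alpha], [H1, H2])
    print(alpha, np.max(np.abs(X_eq - X_hat)))
\end{verbatim}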
\begin{figure}
\centering
\includegraphics[height=7in]{heterogeneous_H_segregated_flows_example.png}
\caption{Stationary population structure for the case where the first group members always relocate to the central districts while the second group members migrate to the peripheral ones: (A) $\alpha = 0.1$; (B) $\alpha = 0.5$; (C) $\alpha = 0.9$. Componentwise population structure is shown in the left column. Total population structure is shown in the right column. For all values of $\alpha$, the approximations $\hat{X}_{eq}$ obtained from the \emph{observable} matrix $\Hat{H}$ (red solid line) are almost indistinguishable from the ground-truth equilibria $X_{eq}$ (green bars).}
\label{fig:heterogeneous_H_segregated_flows_example}
\end{figure}
\subsection{Sydney case study}\label{sec:heterogeneous_sydney}
To demonstrate the robustness of the results presented in \secr{sec:long_term_structure_predictions} with respect to the heterogeneity of relocation patterns, we extend this analysis to the Greater Sydney Capital Area.
In a real city, the migration matrices $H_1$ and $H_2$ are not normally observed separately. Moreover, these matrices cannot be assigned arbitrarily, as they need to be consistent with the actual migration data. In particular, following the procedure suggested in \secr{sec:heterogeneous_H}, the component-specific matrices have to be defined such that $\alpha H_1 + (1-\alpha)H_2 = H$, with $H_1$ accounting for the relocations flowing primarily into central districts, and $H_2$ corresponding to the relocations flowing primarily into the peripheral areas.
To accomplish this task, we choose a distance threshold $\overline{d}$ and select suburb groups $A(\overline{d})$ and $B(\overline{d})$ such that $A(\overline{d})$ contains only the suburbs whose distance to the central business district is less than $\overline{d}$, while $B(\overline{d})$ contains the rest of the suburbs. Next, we assume that when relocating, the members of component $1$ almost always choose suburbs from set $A(\overline{d})$, while the members of component $2$ choose suburbs from set $B(\overline{d})$.
Finally, we calibrate $H_1$ and $H_2$ to the actual relocation data. Denoting $a_i \equiv \sum_{j:d_j \leq \overline{d}}\,h_{ij}$ for each row $i$, we define the elements $h_{1;\,ij}$ of $H_1$ as follows:
\begin{equation*}
h_{1;\,ij}=
\begin{cases}
\displaystyle \frac{h_{ij}}{\alpha}, \quad \text{if } d_j \leq \overline{d},\\\\
\displaystyle \frac{\alpha-a_i}{\alpha\left(1-a_i\right)}h_{ij}, \quad \text{if } d_j > \overline{d},
\end{cases}
\end{equation*}
if $a_i \leq \alpha$, and
\begin{equation*}
h_{1;\,ij}=
\begin{cases}
\displaystyle \frac{h_{ij}}{a_i}, \quad \text{if } d_j \leq \overline{d},\\\\
0, \quad \text{if } d_j > \overline{d},
\end{cases}
\end{equation*}
if $a_i > \alpha$. The elements of $H_2$ are then given by:
\begin{equation*}
h_{2;\,ij}=\frac{1}{1-\alpha}(h_{ij}-\alpha \,h_{1;ij}).
\end{equation*}
In other words, members of the first component predominantly
move to areas inside $A(\overline{d})$ and members of the second component to areas inside $B(\overline{d})$, but the total share $a_i$ of people from $i$ who move to suburbs inside $A(\overline{d})$ might differ from $\alpha$. If $a_i \leq \alpha$, we assume that all the people who relocate from $i$ to $A(\overline{d})$ belong to the first component, and so does the proportion $(\alpha-a_i)/(1-a_i)$ of the people migrating to the other suburbs, while the rest of $i$'s residents belong to the second component. Conversely, if $a_i > \alpha$, we assume that only the proportion $\alpha/a_i$ of those who relocate to $A(\overline{d})$ belong to the first component, while the others belong to the second component.
It is easy to see that, in this case, $\alpha H_1 + (1-\alpha)H_2 = H$, that all elements $h_{1;\,ij}$ and $h_{2;\,ij}$ are non-negative, and that in each row $i$ we have $\sum_{j=1}^N h_{1;\,ij}=1$ and $\sum_{j=1}^N h_{2;\,ij}=1$, as required by construction.
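A direct implementation of this splitting rule, together with the consistency checks just listed, is given below (names are illustrative; $H$ is any row-stochastic relocation matrix and $d$ the vector of distances to the central business district):
\begin{verbatim}
# Sketch: component-specific split of an observed relocation matrix H
# into H1 (flows into A) and H2 (flows into B), following the
# case-based rule above.
import numpy as np

def split_H(H, d, dbar, alpha):
    inA = d <= dbar
    a = H[:, inA].sum(axis=1)                 # a_i: share moving into A
    H1 = np.zeros_like(H)
    for i in range(len(H)):
        if a[i] <= alpha:
            H1[i, inA] = H[i, inA] / alpha
            H1[i, ~inA] = (alpha - a[i]) / (alpha*(1 - a[i])) * H[i, ~inA]
        else:
            H1[i, inA] = H[i, inA] / a[i]     # row stays zero outside A
    H2 = (H - alpha*H1) / (1 - alpha)
    assert np.allclose(alpha*H1 + (1 - alpha)*H2, H)
    assert np.allclose(H1.sum(axis=1), 1) and np.allclose(H2.sum(axis=1), 1)
    assert (H1 >= 0).all() and (H2 >= -1e-12).all()
    return H1, H2
\end{verbatim}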
The resulting equilibrium structure of the population density is shown in \figr{fig:Sydney_heterogeneous_H} for $\alpha=0.9$ and $\overline{d}=22$ km (the median distance to the central business district). Similarly to the previous examples, the approximation $\hat{X}_{eq}$ (\figr{fig:Sydney_heterogeneous_H}D) does not differ significantly from the actual value $X_{eq}$ (\figr{fig:Sydney_heterogeneous_H}C), although $X_{1,eq}$ and $X_{2,eq}$ do differ drastically (\figr{fig:Sydney_heterogeneous_H}A and B).
\begin{figure}
\centering
\includegraphics[height=6in]{heatmap_example_different_H_Sydney_2016.png}
\caption{Equilibrium population density in Sydney for the case of heterogeneous relocation matrices $H_k$. (A) first group equilibrium structure $X_{1,eq}$; (B) second group equilibrium structure $X_{2,eq}$; (C) total equilibrium structure $X_{eq}$; (D) approximation $\hat{X}_{eq}$ obtained from the overall relocation matrix $H$. {The scale bar in lower-left corner indicates the distance equivalent to 20 km.}}
\label{fig:Sydney_heterogeneous_H}
\end{figure}
From these examples, we can conclude that heterogeneity in the matrices $H_k$ has a limited effect on the long-term population structure, and it is possible to obtain an accurate prediction by using only the aggregated relocation matrix $\hat{H} = \sum_{k=1}^{C} \alpha_k H_k$.
\section{Conclusions}\label{sec:conclusions}
We have introduced a diffusive migration framework, which describes intra-urban migration as an irreversible evolution of the urban population. The results have been tested against residential relocation data available from the Australian Census for eight Greater Capital areas over 10 years.
Using this framework, we were able to explain the medium-term (5-year) migration patterns from the short-term (1-year) migration patterns.
We have shown that this is possible only if the population is not homogeneous, i.e., has an internal structure. In particular, such a population should comprise at least two components, each with a distinct relocation frequency; each relocation frequency corresponds to a particular relaxation time of its component.
This heterogeneity of migration frequencies has an intuitive interpretation. For example, the group of residents who migrate more often can be interpreted as renters (who are less attached to their place of residence and are relatively free to change it as soon as they identify a better option), while the group that migrates less often can be interpreted as home-owners (for whom changing the place of residence may be more problematic due to transaction costs, peculiarities of the housing market, and individual circumstances).
Using this diffusive migration framework, we produced a long-term prediction for the structures of the Australian capital cities, based on the short-term migration data, under the sole assumption of temporal stability of the migration rates. According to our predictions, the largest capital cities (Sydney, Melbourne and Perth) are moving towards more spread-out configurations, while Hobart and Canberra exhibit a more compact structure in the equilibrium.
The other capitals, Brisbane, Adelaide and Darwin, are likely to preserve their current configuration in the long-run. These results are consistent with the previous studies predicting the possibility of polycentric transition in Sydney and Melbourne \cite{Crosato2018,Crosato2020}.
Our predictions are robust with respect to the composition of the migration components, as well as possible heterogeneity of their relocation patterns, both dynamic and spatial. In particular, we have analytically shown that the long-run equilibrium is independent of the size of each community and their relocation rates. The robustness with respect to spatial heterogeneity of relocation has been shown numerically through an abstract illustrative example. In this example, the relocation communities have opposite preferences regarding their destination: members of the first group prefer central districts, while their counterparts prefer the peripheral ones.
The temporal stability of the migration flows is a crucial element of our long-term analysis. Despite being consistent within the period of observation (2006--2016), the flows may be affected by multiple factors in the future: human migration is a complicated non-linear process involving multiple interdependent factors, often leading to various phase transitions and critical phenomena \cite{Weidlich1988,Harris1978,Wilson2008,Osawa2017,Crosato2018,Dynamic_resettlement_paper,Crosato2020}. However, the relocation data may contain some unique features that are not captured in other static human mobility and land use data (e.g., \cite{Clarke1998,Schneider2013,Crosato2018,Ellam2018,Crosato2020}). Thus, we believe that the proposed dynamic framework for intra-urban migration, enabling robust long-term predictions, offers a principled approach to modeling out-of-equilibrium urban development.
\vspace{6pt}
\textbf{Author Contributions.} Conceptualization, B.S., M.P. and K.G.; methodology, B.S., M.P. and K.G.; software, B.S.; formal analysis, B.S. and K.G.; data curation, B.S.; writing--review and editing, B.S., M.P. and K.G.; supervision, K.G. and M.P.
\textbf{Funding.} This work was funded by the Australian Research Council Discovery Project \\ DP170102927.
\textbf{Conflicts of Interest.} The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
Non-extremal black holes in anti-de Sitter (AdS) spacetime have a crucial role to play in the AdS/CFT correspondence as dual to thermal states of a boundary conformal field theory (CFT) \cite{Maldacena:1997re}. The correspondence is best understood when the gravitational theory is a maximally supersymmetric gauged supergravity theory. In four dimensions, a gauged theory with maximal number of supercharges exists, namely $\mathcal{N} = 8$, $\textrm{SO}(8)$ gauged supergravity \cite{de Wit:1981eq, de Wit:1982ig}\footnote{We are not considering the recently discovered theories \cite{Dall'Agata:2012bb}.}. This theory was originally obtained by gauging a global symmetry of the ungauged $\mathcal{N} = 8$ supergravity \cite{Cremmer:1978km, Cremmer:1978iv}. Alternatively, it can be obtained by reduction of 11-dimensional supergravity on $S^7$ \cite{de Wit:1986iy}. Notably, the theory admits an $\mathcal{N} = 2$ abelian truncation to the Cartan subgroup $\textrm{U}(1)^4$ of $\textrm{SO}(8)$ \cite{Cvetic:1999xp}, and it can be further truncated to $\mathcal{N} = 2$, $\textrm{U}(1)^2$ supergravity by setting the four gauge fields pairwise equal.
In $\mathcal{N} = 2$, $\textrm{U}(1)^4$ gauged supergravity, a general black hole solution might carry 4 electric charges and 4 magnetic charges. However, only partial progress has been made in finding a general rotating AdS black hole solution that admits the maximum number of independent charges. Although general solutions, with the maximum number of independent angular momenta and charges, of analogous theories have been found in $D = 5$ \cite{Wu:2011gq} and $D = 6$ \cite{Chow:2008ip}, less is explicitly known in $D = 4$. One reason for further difficulty in $D = 4$ is that a black hole can support both electric and magnetic charges. Nevertheless, even for the solutions with only electric charges, the gauged generalizations that include rotation are not known.
In ungauged supergravities, the presence of hidden symmetries allows us to generate charged solutions from uncharged solutions using inverse scattering methods or coset methods (see e.g.\ \cite{Belinski:1978aa, Belinski:1979aa, Breitenlohner:1987dg}). In gauged supergravities, the presence of a scalar potential term generically breaks all hidden symmetries, and it is therefore harder to generate black hole solutions of gauged, compared to ungauged, supergravities. More precisely, bosonic Lagrangians of $\mathcal{N} = 2$ gauged supergravities have the generic form
\begin{equation}
\mathcal{L}_{\textrm{gauged}} = \mathcal{L}_4 + g^2 V(\Phi^A)\star 1 ,\label{gauged0}
\end{equation}
where $g$ is the gauge-coupling constant and $V(\Phi^A)$ is a scalar potential depending upon scalars of the ungauged Lagrangian $\mathcal{L}_4$. There are a variety of heuristic methods to obtain a new gauged solution by staring at an ungauged solution, guessing the gauged solution, and then checking that the resulting ansatz obeys the gauged supergravity field equations. These methods have been successfully used to obtain several asymptotically AdS black hole solutions of 4-dimensional gauged supergravity \cite{Duff:1999gh, Chong:2004na, Chow:2010sf, Chow:2010fw}, and also in other dimensions.
In this paper we find new static charged solutions of $\mathcal{N} = 2$, $\textrm{U}(1)^4$ supergravity and rotating charged solutions to $\mathcal{N} = 2$, $\textrm{U}(1)^2$ supergravity with the maximum number of independent charges. We derive the first law of thermodynamics for these solutions and further discuss a new interplay between boundary conditions and thermodynamics that arises in this context.
We first consider static dyonic black holes of $\textrm{U}(1)^4$ gauged supergravity \cite{Cvetic:1999xp}. We find a 10 parameter family of asymptotically AdS black hole solutions, parameterized by mass, 4 electric charges, 4 magnetic charges, and the gauge-coupling constant, or equivalently the AdS radius, of the theory. To find this solution, we used the recently found asymptotically flat solution \cite{Chow:2013tia} (see \cite{Cvetic:1995kv} for a more implicit generating solution) of the ungauged theory, which is also known as the STU model (named after the 3 complex scalar fields sometimes denoted S, T and U). To generalize to a solution of gauged supergravity, it suffices to modify the metric by replacing a single function, with all matter fields unchanged. The solution generalizes previously known static solutions of this theory \cite{Duff:1999gh, Lu:2013eoa, Lu:2013ura}, which were found by the same method. This simple method has also been used to find analogous asymptotically AdS solutions in $D = 5$ \cite{Behrndt:1998jd}, $D = 6$ \cite{Cvetic:1999un}, $D = 7$ \cite{Cvetic:1999ne, Liu:1999ai}, and higher dimensions \cite{Chow:2011fh}.
The general solution can, in principle, be embedded into 11-dimensional supergravity. However, explicit formulae are not known in general. The embedding is explicitly known when there are no axions \cite{Cvetic:1999xp}, which suffices for the solutions discussed in \cite{Duff:1999gh, Lu:2013ura}. It is also known for when the gauge fields are pairwise equal \cite{Cvetic:1999au}, which suffices for the solutions discussed in \cite{Lu:2013eoa}.
The static solutions just discussed have spherical horizons, but solutions with planar horizons (black branes) are of more interest for studying applications of the AdS/CFT correspondence, since the dual field theory lives on a plane. For example, a particular asymptotically AdS$_4$ electrically charged planar black hole solution of supergravity has been studied in \cite{Gubser:2009qt}, and asymptotically AdS$_4$ dyonic planar black holes with scalars have been studied in \cite{Goldstein:2010aw, Cadoni:2011kv}. Planar black holes are obtained from taking a limit of spherical black holes that effectively zooms in on a narrow cone, say around the north pole. We find the explicit metric and matter fields of the planar black hole with 4 independent electric and 4 independent magnetic charges. We also analytically continue the solution with a spherical horizon to a solution with a hyperbolic horizon.
We secondly consider rotating dyonic black holes of $\mathcal{N} = 4$, $\textrm{SO} (4)$ gauged supergravity \cite{Das:1977pu}. The theory was originally obtained by gauging a global symmetry of the ungauged $\mathcal{N} = 4$ supergravity \cite{Das:1977uy, Cremmer:1977tt}. The explicit reduction ansatz for embedding solutions of $\mathcal{N} = 4$, $\textrm{SO}(4)$ gauged supergravity into 11-dimensional supergravity is known \cite{Cvetic:1999au}. To find black hole solutions, we work with an $\mathcal{N} = 2$ abelian truncation to $\textrm{U} (1)^2$ gauged supergravity. This is an $\mathcal{N} = 2$ gauged supergravity coupled to one vector multiplet. It is related to the $\textrm{U}(1)^4$ gauged theory by setting the 4 gauge fields pairwise equal.
We find a 7 parameter family of asymptotically AdS black hole solutions, parameterized by mass, rotation, 2 electric charges, 2 magnetic charges, and the gauge-coupling constant, or equivalently the AdS radius, of the theory. We also find the generalization when Newman--Unti--Tamburino (NUT) charge is present, leading to an 8 parameter family of solutions. A special case of the solution is the dyonic Kerr--Newman--Taub--NUT--AdS solution \cite{Plebanski}. Two special cases have been particularly useful for finding this solution. One special case is the ungauged limit, which is obtained by setting pairwise equal the 4 gauge fields in the general ungauged solution \cite{Chow:2013tia}. The second special case is when both gauge fields only contain electric charges \cite{Chong:2004na}. The gauged solution can be found from the ungauged solution by simply replacing two functions in the metric with everything else untouched, including the matter fields, as done in \cite{Chong:2004na}. However, whilst this straightforwardly gives a solution locally, more detailed analysis is necessary to examine its physical properties. In order to write it in an asymptotically AdS coordinate system, an analysis \textit{\`{a} la} Griffiths--Podolsk\'{y} \cite{Griffiths:2005qp} is necessary, which we perform here. It turns out that the metric can be set in a generalized Griffiths--Podolsk\'{y} form with one additional parameter as compared to the non-accelerating metric considered in \cite{Griffiths:2005qp}.
The derivation of the thermodynamic properties of dyonic black holes in AdS requires some care. It turns out that there is a subtle interplay between consistent boundary conditions for the gauge fields and the existence of a consistent thermodynamics. Let us first review the definition of boundary conditions for gauge fields in AdS$_4$. We consider pure Einstein--Maxwell theory for simplicity. Working in radial gauge, $\textrm{U}(1)$ gauge fields admit the Fefferman--Graham expansion
\begin{equation}
A_i = A^{(0)}_{i}+ \frac{1}{r}A^{(1)}_{i}+O(r^{-2}),
\end{equation}
where $i$ are tangent indices to a surface of constant radius (for definiteness $i=t,\theta,\phi$). The standard Dirichlet boundary condition for the gauge field consists of fixing the boundary magnetic field $b^i = \epsilon^{ijk}\partial_j A^{(0)}_k$. The boundary electric field $e_i = A^{(1)}_{i}$ is then freely varying. These boundary conditions admit the $\textrm{SO}(3,2)$ conformal group as the asymptotic symmetry group when $b^i = 0$ \cite{Henneaux:1985tv}, and they correspond in the AdS/CFT correspondence to allowing a dual conserved current in the CFT \cite{Witten:1998qj}. Now, an $\textrm{SL}(2,\mathbb Z)$ ambiguity exists in the quantization of gauge fields in AdS$_4$ \cite{Witten:2003ya}. The S transformation mapping $b^i \rightarrow e_i$, $e_i \rightarrow -b^i$ maps the gauge field $A_\mu$ to a dual gauge field $\widetilde A_\mu$. Applying the standard AdS/CFT correspondence to $\widetilde A_\mu$ is equivalent in terms of the original gauge field $A_\mu$ to imposing Neumann boundary conditions with $e_i$ fixed. The T operation consists of shifting the $\theta$ angle by $2\pi$. S and T together generate an $\textrm{SL}(2,\mathbb Z)$ family of boundary conditions. More general Lorentz-invariant mixed boundary conditions can also be imposed, and are dual to conformal field theories with multi-trace deformations \cite{Marolf:2006nd}. For each choice of boundary condition, the electric and magnetic fields cannot be independently varied. This statement corresponds to the fact that the gauge field corresponds to a $\textrm{U}(1)$ current in the boundary CFT and not a $\textrm{U}(1)\times \textrm{U}(1)$ current.
One could then na\"{\i}vely deduce that boundary conditions with both electric and magnetic charges varying in the bulk are generally inconsistent. However, a loophole in the above argument is that only Lorentz-invariant boundary conditions were considered, consistent with the AdS/CFT correspondence. Varying independently electric and magnetic charges simply means that the time components of the electric and magnetic fields $e_{t} = Q$, $b^t = P$ are dynamical. One could still hold fixed the spatial components of the electric and magnetic fields. These boundary conditions are not Lorentz-invariant and the resulting theory will therefore not be dual to a CFT. Note that a dual non-relativistic field theory would also be consistent with the existence of two conserved bulk quantities for a given gauge field. The existence of these boundary conditions therefore has to be studied from first principles in the bulk, independently of any holographically dual picture. One classical criterion for the existence, or not, of these boundary conditions is the existence of a conserved symplectic structure. We performed this analysis which we summarize as follows.
We have four gauge fields $A^I$, with $I = 1, 2, 3, 4$, with corresponding electric charges $Q_I$ and magnetic charges $P^I$. First, the covariant phase space and the mass are defined in the case with only electric charges (all $P^I = 0$) or only magnetic charges (all $Q_I = 0$), which is expected, since these boundary conditions are consistent with a boundary CFT dual description \cite{Witten:2003ya}. Na\"{\i}vely, one would infer the existence of an $\textrm{SL}(2,\mathbb Z)^4$ family of boundary conditions since there are four gauge fields. However, the gauge fields are coupled to scalar fields, which themselves need to admit consistent boundary conditions. After a detailed analysis, we identified three distinct classes of mixed boundary conditions. The first boundary condition amounts to imposing exactly equal or opposite electric and magnetic charges, $P^I = \pm Q_I$, $I=1,2,3,4$, with an even number of minus signs. The second boundary condition consists of setting to zero two sets of charges, say $Q_1=Q_2=P^1=P^2=0$, while imposing on the two remaining sets of charges the constraints $Q_3 = \pm P^3$, $Q_4 = \pm P^4$ for any choice of signs. The third boundary condition consists of setting all but one set of charges to zero, say all but $Q_1$ and $P^1$, and imposing $Q_1 = \pm P^1$. The last case was discussed in \cite{Lu:2013ura}. For all such cases, the mass is defined and the first law of thermodynamics holds.
More generally, it turns out that for the generic static black hole with 4 independent electric and 4 independent magnetic charges, the symplectic structure is not conserved. This results in the non-existence of a Hamiltonian, and the black hole mass is ill-defined. We therefore deduce that boundary conditions with such varying fields are not consistent. The non-vanishing symplectic flux at infinity can be seen to be related to the backreaction of the scalar fields due to the gauge fields. The result also holds when only one gauge field is turned on with independent electric and magnetic charges, which explains why the first law does not hold in general in the analysis of \cite{Lu:2013ura}.
Now, there are also cases where Lorentz-violating boundary conditions are consistent (at least in the restricted phase space of black hole solutions). One such example is when the gauge fields are set pairwise equal and both electric and magnetic charges are allowed to be varied. This includes as a subcase the dyonic Reissner--Nordstr\"{o}m--AdS black hole. One can study the first law of thermodynamics using standard treatments, and the result is as expected: the first law holds, which provides a non-trivial check of our expressions.
The rotating solutions that we find possess hidden symmetries in the form of various types of Killing tensors. This is a widespread feature of charged, rotating black holes in supergravity, in diverse dimensions, that generalize the Kerr solution, see e.g.\ \cite{Chow:2008fe}. From the form of our rotating solutions, we are motivated to consider a wider class of metrics. These general metrics possess Killing--Yano tensors with torsion, a known feature of some other black hole solutions in supergravity \cite{Kubiznak:2009qi, Houri:2010fr}. These are antisymmetric tensors $Y_{\mu \nu}$ satisfying $\nabla_{(\mu}^T Y_{\nu) \rho} = 0$, where the connection has a torsion that is totally antisymmetric. Unfortunately, for our most general rotating solutions, the physical significance of the torsions is unclear. In fact, we find these tensors in two different conformal frames, string frame and Einstein frame. Other types of Killing tensors may be constructed from the Killing--Yano tensors with torsion. The existence of these Killing tensors is related to the separability of the Hamilton--Jacobi equation for geodesic motion in the two conformal frames, and of the massive Klein--Gordon equation in Einstein frame.
The rest of the paper is organized as follows. We first present in Section \ref{action} two equivalent formulations of the $\mathcal{N} =2$, $\textrm{U}(1)^4$ gauged supergravity of interest, and review its truncation to $\mathcal{N} =2$, $\textrm{U}(1)^2$ gauged supergravity. We also provide the formula for its symplectic structure. We present the general static AdS black hole of $\mathcal{N} =2$, $\textrm{U}(1)^4$ gauged supergravity in Section \ref{BH1} and discuss its thermodynamics. We move to the general rotating black hole of $\mathcal{N} =2$, $\textrm{U}(1)^2$ gauged supergravity in Section \ref{BH2} and derive its thermodynamics as well. We generalize the rotating metrics to a wider class of metrics that have various Killing tensors, and study separability. We finally conclude, and provide an appendix with the details of the general static solution with a spherical horizon.
\section{Gauged supergravities}
\label{action}
The $\mathcal{N}=2$, $\textrm{U}(1)^4$ gauged supergravity theory is an $\mathcal{N} = 2$ gauged supergravity coupled to 3 vector multiplets. The bosonic fields are the metric, 4 $\textrm{U} (1)$ gauge fields $A^I$, 3 dilatons $\varphi_i$ and 3 axions $\chi_i$. We label the gauge fields by $I = 1, 2, 3, 4$, and label the dilatons and axions by $i = 1, 2, 3$. It is common in the literature to denote $x_i = - \chi_i$ and $y_i = \expe{-\varphi_i}$, which can be united as a complex scalar $z_i = x_i + \textrm{i} y_i$. Since gauge fields can be dualized in four dimensions, several formulations exist which depend on the duality frame. We will discuss two such formulations. We will also discuss the truncation of $\mathcal{N}=2$, $\textrm{U}(1)^4$ gauged supergravity to $\mathcal{N}=2$, $\textrm{U}(1)^2$ gauged supergravity.
\subsection{From maximal gauged supergravity}
\label{sec1}
The original formulation of $\mathcal{N}=2$, $\textrm{U}(1)^4$ gauged supergravity comes from the $\mathcal{N} = 2$ abelian truncation of the maximal $\mathcal{N} = 8$, $\textrm{SO} (8)$ gauged supergravity that can be obtained from $S^7$ reduction of 11-dimensional supergravity. The gauged Lagrangian can be found in \cite{Cvetic:1999xp} and consists of the ungauged Lagrangian $\mathcal{L}_4$ with an extra scalar potential,
\begin{equation}
\mathcal{L}_{\textrm{gauged}} = \mathcal{L}_4 + g^2 \sum_{i = 1}^3 (2 \cosh \varphi_i + \chi_i^2 \expe{\varphi_i}) \star 1 ,\label{gauged}
\end{equation}
where $g$ is the gauge-coupling constant. We denote the gauge fields in this formulation as $A^I$, for $I=1,2,3,4$. The field strengths are $F^I = \textrm{d} A^I$ and the dual field strengths are $\widetilde{F}_I = \textrm{d} \widetilde{A}_I$, where $\widetilde{A}_I$ are dual potentials. The 6 scalar fields have a squared mass equal to $m^2 = -2g^2$, which lies in the Breitenlohner--Freedman range \cite{Breitenlohner:1982jf},
\begin{equation}
m_{\textrm{BF}}^2 \leq m^2 < m_{\textrm{BF}}^2 + g^2 ,
\end{equation}
where $m_{\textrm{BF}}^2=-\tfrac{9}{4}g^2$.
The derivation of the abelian $\mathcal{N} = 2$ truncation from the full $\mathcal{N} = 8$, $\textrm{SO} (8)$ gauged theory treats the 4 gauge fields $A^I$ on an equal footing, corresponding to the $\textrm{U}(1)^4$ Cartan subgroup of the full $\textrm{SO}(8)$ gauge group \cite{Cvetic:1999xp}. Since dualization is not possible for the general non-abelian gauged theory, this is the preferred formulation of the gauged theory. We shall generally use the terminology ``electric'' and ``magnetic'' according to the nature of $F^I$. This form of the ungauged Lagrangian is complicated, so we shall not use it directly. However, dualization is possible for the abelian truncation, which allows for interesting alternative formulations. We describe one such other formulation in the next section, which has the advantage of a simpler Lagrangian.
\subsection{Dual formulation}
\label{thirdS}
A second formulation of the ungauged theory is from directly reducing 6-dimensional bosonic string theory
\begin{equation}
\mathcal{L}_6 = R \star 1 - \tfrac{1}{2} \star \textrm{d} \varphi \wedge \textrm{d} \varphi - \tfrac{1}{2} \expe{-\sqrt{2} \varphi}\star F_{(3)} \wedge F_{(3)}
\end{equation}
on $T^2$, and then dualizing the 4-dimensional 2-form potential $B_{(2)}$ to an axion (Here $F_{(3)}=\textrm{d} B_{(2)}$). As before, gauging adds the potential \eqref{gauged}. The 6-dimensional theory can be regarded as minimal $\mathcal{N} = 2$ supergravity coupled to a tensor multiplet. After relabelling and changing the signs of some axions, the Lagrangian of \cite{Cvetic:1999xp, Chong:2004na} can be written as
\footnote{Our field strengths are related to the hatted field strengths of \cite{Chong:2004na} by $\mathcal{F}^1 = \widehat F_{(2){2}}$, $\widetilde{\mathcal{F}}^2 = \widehat F_{(2)1}$, $\widetilde{\mathcal{F}}^3 = \widehat{\mathcal{F}}_{(2)}^1$, $\mathcal{F}^4 = \widehat{\mathcal{F}}_{(2)}^2$, and the signs of $\chi_1$ and $\chi_3$ are flipped while that of $\chi_2$ is kept fixed.}
\begin{align}
\mathcal{L}_4 & = R \star 1 - \frac{1}{2} \sum_{i = 1}^3 (\star \textrm{d} \varphi_i \wedge \textrm{d} \varphi_i + \expe{2 \varphi_i} \star \textrm{d} \chi_i \wedge \textrm{d} \chi_i) \nonumber \\
& \quad - \frac{1}{2} \expe{-\varphi_1} (\expe{\varphi_2 + \varphi_3} \star \mathcal{F}^1 \wedge \mathcal{F}^1 + \expe{\varphi_2 - \varphi_3} \star \widetilde{\mathcal{F}}_2 \wedge \widetilde{\mathcal{F}}_2 \nonumber \\
& \quad + \expe{- \varphi_2 + \varphi_3} \star \widetilde{\mathcal{F}}_3 \wedge \widetilde{\mathcal{F}}_3 + \expe{- \varphi_2 - \varphi_3} \star \mathcal{F}^4 \wedge \mathcal{F}^4) \nonumber \\
& \quad + \chi_1 (F^1 \wedge F^4 + \widetilde{F}_2 \wedge \widetilde{F}_3) ,\label{Lthird}
\end{align}
where
\begin{align}
\mathcal{F}^1 & = F^1 + \chi_3 \widetilde{F}_2 + \chi_2 \widetilde{F}_3 - \chi_2 \chi_3 F^4 , \nonumber \\
\widetilde{\mathcal{F}}_2 & = \widetilde{F}_2 - \chi_2 F^4 , \nonumber \\
\widetilde{\mathcal{F}}_3 & = \widetilde{F}_3 - \chi_3 F^4 , \nonumber \\
\mathcal{F}^4 & = F^4 .
\end{align}
Note that the parity-odd terms may equivalently be written as $\chi_1 (\mathcal{F}^1 \wedge \mathcal{F}^4 + \widetilde{\mathcal{F}}_2 \wedge \widetilde{\mathcal{F}}_3)$.
The first formulation is recovered, by dualizing $\widetilde{F}_2$ and $\widetilde{F}_3$, and by performing the S-duality that replaces $S = \chi_1 + \textrm{i} \expe{-\varphi_1}$ by $-1/S$. To dualize $\widetilde{F}_2$, we first add to the Lagrangian an extra term
\begin{equation}
A^2 \wedge \textrm{d} \widetilde{F}_2 = F^2 \wedge \widetilde{F}_2 - \textrm{d} (A^2 \wedge \widetilde{F}_2) .
\end{equation}
Here $A^2$ is a Lagrange multiplier enforcing the Bianchi identity $\textrm{d} \widetilde{F}_2 = 0$. Varying the modified Lagrangian with respect to $\widetilde{F}_2$, we obtain the dual field strength
\begin{align}
F^2 &= \expe{-\varphi_1 + \varphi_2 - \varphi_3} \star (\widetilde{\mathcal{F}}_2 +\chi_3 e^{2\varphi_3} \mathcal{F}^1) \nonumber \\
& \quad - \chi_1 (\widetilde{\mathcal{F}}_3+\chi_3 F^4) .
\label{dualF21}
\end{align}
One can similarly obtain the expression for $F^3$. Substituting $\widetilde{F}_2$ and $\widetilde{F}_3$ in terms of $F^2$ and $F^3$ and performing the S-duality leads to the formulation of the previous section in terms of $(A^1, A^2, A^3, A^4)$.
It is also useful to write the Lagrangian \eqref{Lthird} in the general form
\begin{align}
\mathcal{L}_4 & = \textrm{d}^4 x \, \sqrt{-g} [ R - \tfrac{1}{2}f_{AB}(\Phi)
\partial_\mu \Phi^A \partial^\mu \Phi^B \nonumber \\
& \quad - \tfrac{1}{4}k_{IJ}(\Phi)\mathbf F^I_{\mu\nu}\mathbf F^{J\mu\nu}
+ \tfrac{1}{4}h_{IJ}(\Phi)\epsilon^{\mu\nu\rho\sigma}\mathbf F^I_{\mu\nu} \mathbf F^J_{\rho\sigma} ] , \nonumber \\
\label{generalaction}
\end{align}
where $\Phi^A = (\varphi_1,\varphi_2,\varphi_3,\chi_1,\chi_2,\chi_3)$ are the scalar fields and $\mathbf A^I = (A^1,\widetilde A_2,\widetilde A_3,A^4)$ are the $\textrm{U}(1)$ gauge fields. The kinetic coefficients $f_{AB}$, $h_{IJ}$ are
\begin{align}
f_{AB}& = \textrm{diag} (1,1,1,\expe{2\varphi_1},\expe{2\varphi_2},\expe{2\varphi_3}), \nonumber \\
h_{IJ}& = - \frac{\chi_1}{2}
\begin{pmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0
\end{pmatrix} ,
\end{align}
and $k_{IJ}$ is a longer expression that can be easily deduced from \eqref{Lthird}.
\subsection{Truncation to $\textrm{U}(1)^2$ gauged supergravity}
We can consistently truncate the second formulation of $\textrm{U}(1)^4$ gauged supergravity in \eq{Lthird} by setting $A^1 = A^4$, $\widetilde{A}_2 = \widetilde{A}_3$, $\varphi_2 = \varphi_3 = \chi_2 = \chi_3 = 0$, obtaining
\begin{align}
\mathcal{L}_4 & = R \star 1 - \tfrac{1}{2} \star \textrm{d} \varphi \wedge \textrm{d} \varphi - \tfrac{1}{2} \expe{2 \varphi} \star \textrm{d} \chi \wedge \textrm{d} \chi \nonumber \\
& \quad - \expe{-\varphi} (\star F^1 \wedge F^1 + \star \widetilde{F}_2 \wedge \widetilde{F}_2) + \chi (F^1 \wedge F^1 \nonumber \\
& \quad + \widetilde{F}_2 \wedge \widetilde{F}_2) + g^2 (4 + \expe{\varphi} + \expe{- \varphi} + \chi^2 \expe{\varphi}) \star 1 , \label{LU2}
\end{align}
where $\varphi \equiv \varphi_1$, $\chi \equiv \chi_1$.
Dualizing $\widetilde{F}_2$ as in \eq{dualF21}, we have
\begin{equation}
F^2 = \expe{-\varphi} \star \widetilde{F}_2 - \chi \widetilde{F}_2 .
\end{equation}
We then obtain the Lagrangian
\begin{align}
\mathcal{L}_4 & = R \star 1 - \tfrac{1}{2} \star \textrm{d} \varphi \wedge \textrm{d} \varphi - \tfrac{1}{2} \expe{2 \varphi} \star \textrm{d} \chi \wedge \textrm{d} \chi \nonumber \\
& \quad - \expe{-\varphi} \star F^1 \wedge F^1 + \chi F^1 \wedge F^1 \nonumber \\
& \quad - \frac{1}{1 + \chi^2 \expe{2 \varphi}} (\expe{\varphi} \star F^2 \wedge F^2 + \chi \expe{2 \varphi} F^2 \wedge F^2) \nonumber \\
& \quad + g^2 (4 + \expe{\varphi} + \expe{- \varphi} + \chi^2 \expe{\varphi}) \star 1 ,\label{SO4}
\end{align}
which is a bosonic abelian truncation of $\mathcal{N} = 4$, $\textrm{SO} (4)$ gauged supergravity.
If we further consistently truncate to $A \equiv A^1 = A^2$ and $\varphi = \chi = 0$, we obtain
\begin{equation}
\mathcal{L}_4 = R \star 1 - 2 \star F \wedge F + 6 g^2 \star 1 ,
\end{equation}
which is simply Einstein--Maxwell theory with a negative cosmological constant. This is the bosonic sector of $\mathcal{N} = 2$ minimal gauged supergravity.
\subsection{Symplectic structure}
One fundamental object of interest of any theory is the symplectic structure, which is a two-form on the space spanned by all the fields, which we denote as $\{\Phi^i\}$. The covariant phase space definition (see e.g.\ \cite{Iyer:1994ys}) is
\begin{equation}
\bm{\omega}(\delta_1 \Phi^i , \delta_2 \Phi^i) = \delta_2 \bm{\Theta} (\delta_1 \Phi^i) - \delta_1 \bm{\Theta} (\delta_2 \Phi^i) , \label{sympl}
\end{equation}
where $ \bm{\Theta}$ is the pre-symplectic form defined from the variation of the action
\begin{equation}
\delta \bm{L} = \frac{\delta \bm{L}}{\delta \Phi^i}\delta \Phi^i + \textbf{d}\bm{\Theta} (\delta \Phi^i) .
\end{equation}
The invariant symplectic structure defined solely from the equations of motion \cite{Barnich:2007bf} differs from the covariant phase space definition by a boundary term. Since this boundary term has no influence on the present discussion, we will simply ignore it and only present the covariant phase space definition.
The pre-symplectic form for a Lagrangian of the form $\mathcal{L}_4/16 \pi G$, with $\mathcal{L}_4$ given by the general form \eqref{generalaction}, can be worked out. The fields are $\Phi^i = (g_{\mu\nu},\Phi^A,\mathbf A^I)$. The result is (see e.g.\ \cite{Compere:2009dp})
\begin{align}
\bm{\Theta} & = \star \bm{X} = \frac{1}{3!}\epsilon_{\mu\alpha_1\alpha_2\alpha_3} X^\mu \, \textrm{d} x^{\alpha_1}\wedge \textrm{d} x^{\alpha_2}\wedge \textrm{d} x^{\alpha_3} \ , \nonumber \\
X^\mu & = \frac{1}{16\pi G}\big[
(\nabla_\nu h^{\nu\mu} - \nabla^\mu h) - \,f_{AB}(\Phi)\nabla^\mu\Phi^B \delta\Phi^A \nonumber \\
& \quad - \, (k_{IJ}(\Phi)\mathbf F^{J\mu\nu}
- \, h_{IJ}(\Phi)\epsilon^{\mu\nu\rho\sigma}\mathbf F^J_{\rho\sigma})\delta \mathbf A_\nu^I \big].
\end{align}
Here we defined $h_{\mu\nu}=\delta g_{\mu\nu}$, $h^{\mu\nu} = g^{\mu\alpha}g^{\nu\beta}h_{\alpha\beta}$ and $h = g^{\alpha\beta}h_{\alpha\beta}$.
\section{$\textrm{U}(1)^4$ dyonic static black holes}
\label{BH1}
A generating solution for the general asymptotically flat, static, non-extremal, dyonic black hole of ungauged $\mathcal{N} = 2$, $\textrm{U}(1)^4$ supergravity (and, more generally, for $\mathcal{N} = 8$ supergravity), also known as the STU model, was obtained a long time ago \cite{Cvetic:1995kv}. More recently, the solution with 8 explicit electromagnetic charges was given in a simplified form in \cite{Chow:2013tia}. The spherically symmetric metric has the form
\begin{equation}
\textrm{d} s^2 = -\frac{R}{W} \, \textrm{d} t^2 + \frac{W}{R} \, \textrm{d} r^2 + W (\textrm{d} \theta^2 + \sin^2 \theta \, \textrm{d} \phi^2) , \label{metric1s}
\end{equation}
where $R(r)$ is a quadratic polynomial in $r$, and $W^2(r)$ is a quartic polynomial in $r$. It turns out that it suffices to modify the metric by replacing $R$ by $R+g^2 W^2$, and one obtains a solution of $\mathcal{N} = 2$, $\textrm{U}(1)^4$ gauged supergravity with potential \eqref{gauged}, with all matter fields unchanged. Indeed, we checked that the metric, together with the matter fields, obeys the field equations using Mathematica \cite{Mathematica}. Static AdS$_4$ black holes have been found by this method previously \cite{Duff:1999gh, Lu:2013eoa, Lu:2013ura}, but the function $W^2$ is substantially more complicated in the general case, because it does not factorize. The solution depends upon 10 arbitrary parameters: the mass parameter $m$, the gauge-coupling constant $g$, four electric charge parameters ($\delta_I$), and four magnetic charge parameters ($\gamma_I$), for $I=1,2,3,4$. Regular supersymmetric static black holes with spherical or planar horizons do not exist in the purely electric or purely magnetic cases \cite{Duff:1999gh}. Regular supersymmetric static black holes do, however, exist when the horizon is hyperbolic \cite{Caldarelli:1998hg}.
Let us first present the static black holes with spherical horizons. We will then present the limit with planar horizons, which presents interesting simplifications. Static solutions with hyperbolic horizons can be obtained by analytic continuation of the static solutions with spherical horizons.
\subsection{Spherical black hole}
\subsubsection{Metric}
The metric is
\begin{align}
\textrm{d} s^2 & = -\frac{R + g^2 W^2}{W} \, \textrm{d} t^2 + \frac{ W \, \textrm{d} r^2}{R + g^2 W^2} \nonumber \\
& \quad + W ( \textrm{d} \theta^2 + \sin^2 \theta \, \textrm{d} \phi^2 ) , \label{metric1}
\end{align}
where
\begin{align}
W^2(r) & = R^2(r)+2 R(r) (2M r+V )+L(r)^2,\label{defW2}\\
R(r) & = r^2 -2 m r -n_0^2,\label{defR} \\
L(r) & = \lambda_1 r + \lambda_0 .
\end{align}
Here we already note that $M$ is the physical mass, when it can be defined, which we will discuss in Section \ref{thermo}. The five constants $M, n_0, \lambda_1, \lambda_0, V$ are functions of the parameters $m,\gamma_I,\delta_I$ and can be expressed as follows,
\begin{align}
\label{mass}
M & = m \mu_1 + n_0 \mu_2 , \\
n_0 & = - m \frac{\nu_1}{\nu_2},\label{n0d}\\
\lambda_1 & = 2(m \nu_2 - n_0 \nu_1), \\
\lambda_0 & = 4(m^2 +n_0^2)D, \\
V & = 2 (n_0 \mu_1 - m \mu_2)n_0 +2 (m^2+n_0^2) C,
\end{align}
where the coefficients $\mu_1,\mu_2,\nu_1,\nu_2,D,C$ are functions of only the charge parameters $(\delta_I,\gamma_I)$, and are given in \eq{munu}, \eq{Dconstant} and \eq{Cconstant}. We choose the orientation $\varepsilon_{tr\theta \phi}=1$.
The metric \eqref{metric1} in the ungauged case $g=0$ reduces to the one presented in \cite{Chow:2013tia} upon setting the NUT charge to zero and identifying $V(n_0)$ in \cite{Chow:2013tia} with $V$ here. The parametrization is asymmetrical between the electric ($\delta_I$) and magnetic ($\gamma_I$) parameters but symmetrical under the exchange of the indices $I = 1,2,3,4$. This asymmetry is rooted in the way the solution was obtained in \cite{Chow:2013tia} from coset model techniques, by first charging the Kerr--Taub--NUT seed, with original mass and NUT charges $(m,n)$, magnetically and then electrically. The total NUT charge can be cancelled at the end of the process by fixing the NUT parameter $n = n_0(m,\delta_I,\gamma_I)$ in terms of the remaining parameters, which results in the explicit expression for $n_0$ displayed in \eqref{n0d}.
While it would be interesting to obtain a symmetric parametrization of the black hole, the general features of the solution (the radial dependence of the various functions and all physical properties) do not depend upon the parametrization.
\subsubsection{Matter}
The gauge fields and dual gauge fields are
\begin{align}
A^I & = \zeta^I \, \textrm{d} t + P^I \cos \theta \, \textrm{d} \phi , & \widetilde{A}_I & = \widetilde\zeta_I \, \textrm{d} t - Q_I \cos \theta \, \textrm{d} \phi ,
\end{align}
where the electromagnetic scalars $\zeta^I$ and $\widetilde \zeta_I$ are
\begin{align}
\zeta^I &= \frac{- L (P^I n_0 + L^I) + R (Q_I r + V^I )}{W^2} , \nonumber \\
\widetilde\zeta_I &= \frac{L (Q_I n_0 +\widetilde L_I) + R (P^I r + \widetilde V_I)}{W^2} .
\end{align}
The electric charges $Q_I$ and magnetic charges $P^I$ are
\begin{align}
Q_I &= 2 \bigg( m \frac{\partial \mu_1}{\partial \delta_I}+ n_0 \frac{\partial \mu_2}{\partial \delta_I} \bigg) \equiv m \rho_I^1 + n_0 \rho_I^2,\nonumber \\
P^I &= -2 \bigg( m\frac{\partial \nu_1}{\partial \delta_I} + n_0 \frac{\partial \nu_2}{\partial \delta_I} \bigg) \equiv m \pi^I_1 + n_0 \pi^I_2,
\end{align}
where the last expressions are the definitions of $\rho_I^1,\,\rho_I^2,\,\pi^I_1,\,\pi^I_2$ in terms of $(\delta_I, \gamma_I)$. The remaining functions $L^I(r)$ and $\widetilde L_I(r)$ are given in \eq{linearfunctions}, and the constants $V^I$ and $\widetilde V_I$ are given in \eq{Vconstants}.
The scalar fields are of the form
\begin{align}
\chi_i & = \frac{f_i}{r^2 + n_0^2 + g_i} , & \expe{\varphi_i} & = \frac{r^2 + n_0^2 + g_i}{W} ,\label{defsc}
\end{align}
where
\begin{align}
f_{i} & = 2 (m r + n_0^2) \xi_{i 1} - 2 n_0 (r - m) \xi_{i 2} + 4 (m^2 + n_0^2) \xi_{i 3} , \nonumber \\
g_{i} & = 2 (m r + n_0^2) \eta_{i 1} - 2 n_0 (r - m) \eta_{i 2} + 4 (m^2 + n_0^2) \eta_{i 3} ,
\end{align}
and $\xi_{i1},\,\xi_{i2},\,\xi_{i3},\,\eta_{i1},\,\eta_{i2},\,\eta_{i3}$ are given in \eq{xieta} as functions of the electromagnetic parameters $(\delta_I,\gamma_I)$.
\subsubsection{Special cases}
If $\delta_I = \delta$ and $\gamma_I = \gamma$ for $I=1,2,3,4$, then we have the dyonic Reissner--Nordstr\"{o}m--AdS solution of Einstein--Maxwell theory. The conserved charges are then
\begin{align}
M & = \frac{m [1 + \cosh (4 \delta) \cosh (4 \gamma)]}{2 \cosh (2 \delta) \cosh (2 \gamma)} , \nonumber \\
Q_I & = \frac{m \sinh (2 \delta) \cosh (4 \gamma)}{\cosh (2 \gamma)} , \nonumber \\
P^I & = \frac{m \sinh (2 \gamma)}{\cosh (2 \delta)} ,
\end{align}
for each $I$. If we define $r' = r + M - m$, $Q = Q_I$ and $P = P^I$, then the metric is
\begin{equation}
\textrm{d} s^2 = - f \, \textrm{d} t^2 + \frac{\textrm{d} r'{^2}}{f} + r'{^2} (\textrm{d} \theta^2 + \sin^2 \theta \, \textrm{d} \phi^2) ,
\end{equation}
where $f = 1 - 2 M/r' + (Q^2 + P^2)/r'{^2} + g^2 r'{^2}$, and the gauge fields are
\begin{align}
A^I & = \frac{Q}{r'} \, \textrm{d} t + P \cos \theta \, \textrm{d} \phi , & \widetilde{A}_I & = \frac{P}{r'} \, \textrm{d} t - Q \cos \theta \, \textrm{d} \phi ,
\end{align}
which is recognizable as dyonic Reissner--Nordstr\"{o}m--AdS.
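As a quick numerical consistency check of this special case, one can verify the first law $\delta M = T \delta S + \Phi \delta Q + \Psi \delta P$ for dyonic Reissner--Nordstr\"{o}m--AdS by finite differences, using the standard quantities $T = f'(r_+)/4\pi$, $S = \pi r_+^2$, $\Phi = Q/r_+$ and $\Psi = P/r_+$ in units $G = 1$ (the normalizations of $Q$ and $P$ in this sketch may differ from those used elsewhere in the paper by overall constant factors):
\begin{verbatim}
# Sketch: finite-difference first-law check for dyonic RN-AdS,
# f = 1 - 2M/r + (Q^2+P^2)/r^2 + g^2 r^2, in units G = 1.
import numpy as np

g = 0.5

def M(rp, Q, P):                 # mass from f(r_+) = 0
    return 0.5*(rp + (Q**2 + P**2)/rp + g**2*rp**3)

def T(rp, Q, P):                 # Hawking temperature f'(r_+)/(4 pi)
    return (1/rp - (Q**2 + P**2)/rp**3 + 3*g**2*rp)/(4*np.pi)

rp, Q, P, h = 1.0, 0.3, 0.2, 1e-6
dM = M(rp + h, Q + h, P + h) - M(rp, Q, P)
TdS = T(rp, Q, P)*np.pi*((rp + h)**2 - rp**2)
print(dM, TdS + (Q/rp)*h + (P/rp)*h)   # agree to O(h^2)
\end{verbatim}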
If $g = 0$, then we have the static and asymptotically flat limit of the general ungauged solution, which has recently been found \cite{Chow:2013tia}. It is a solution of the so-called STU model.
If $\gamma_I = 0$, then we have the static AdS black hole with 4 electric charges \cite{Duff:1999gh}. If $\delta_I = 0$, then we have the static AdS black hole with 4 magnetic charges \cite{Duff:1999gh}.
If $\delta_2 = \delta_3 = \delta_4 = \gamma_2 = \gamma_3 = \gamma_4 = 0$, then we have the static AdS black hole with 1 dyonic gauge field \cite{Lu:2013ura}.
If $\delta_1 = \delta_2$, $\gamma_1 = \gamma_2$ and $\delta_3 = \delta_4 = \gamma_3 = \gamma_4 = 0$, then we have the static AdS black hole with 2 equal dyonic gauge fields \cite{Lu:2013eoa}.
\subsection{Planar black hole}
Static black holes with planar horizons and 8 independent electromagnetic charges can be obtained as a limit of the solution with spherical horizons. We replace $t \rightarrow \epsilon t$, $r \rightarrow r/\epsilon$, $\theta \rightarrow \epsilon \rho$, $m \rightarrow m/\epsilon^3$, $\delta_I \rightarrow \epsilon \delta_I$, $\gamma_I \rightarrow \epsilon \gamma_I$, and then take the limit $\epsilon \rightarrow 0$. We can then switch from plane polar coordinates $(\rho, \phi)$ to cartesian coordinates $(x, y)$. We use the notation $\delta_{IJ}=\delta_I \delta_J$, $\delta_{1234}=\delta_1 \delta_2\delta_3 \delta_4$, $\gamma_{IJ}=\gamma_I \gamma_J$, $\gamma_{1234}=\gamma_1 \gamma_2\gamma_3 \gamma_4$.
The solution with planar horizons has the metric
\begin{equation}
\textrm{d} s^2 = -f \, \textrm{d} t^2 +\frac{\textrm{d} r^2}{f} + W ( \textrm{d} x^2 + \textrm{d} y^2 ) ,
\end{equation}
where
\begin{align}
f(r) &= \frac{- 2 m r + g^2 W^2}{W}, \nonumber \\
W^2 (r) & = r^4 + w_3 m r^3 + w_2 m^2 r^2 + w_1 m^3 r + w_0 m^4 ,
\end{align}
and where
\begin{align}
w_3 & = 2 \textstyle \sum_I (\delta_I^2 + \gamma_I^2) , \nonumber \\
w_2 & = 2 [ 3 \textstyle \sum_I \delta_I^2 \gamma_I^2 + 2 \textstyle \sum_{I < J} ( \delta_{I J}^2 + \gamma_{I J}^2 - \delta_{I J} \gamma_{I J} \nonumber \\
& \quad + 2 \gamma_{1 2 3 4} \delta_{I J}/\gamma_{I J} ) ] , \nonumber \\
w_1 & = 2 \{ ( \textstyle \sum_I \delta_I \gamma_I )^2 \sum_J (\delta_J^2 + \gamma_J^2) + 4 \textstyle \sum_I ( \delta_{1 2 3 4}^2/\delta_I^2 \nonumber \\
& \quad + \gamma_{1 2 3 4}^2/\gamma_I^2 - \delta_{1 2 3 4} \gamma_I^2 - \gamma_{1 2 3 4} \delta_I^2 ) \nonumber \\
& \quad + 4 \textstyle \sum_{I < J} [ \delta_{1 2 3 4} \gamma_{I J} ( \delta_I/\delta_J + \delta_J/\delta_I ) \nonumber \\
& \quad + \gamma_{1 2 3 4} \delta_{I J} ( \gamma_I/\gamma_J + \gamma_J/\gamma_I ) - \delta_{I J} \gamma_{I J} (\delta_I^2 + \delta_J^2 + \gamma_I^2 \nonumber \\
& \quad + \gamma_J^2) ] \} , \nonumber \\
w_0 & = [ 4 (\delta_{1 2 3 4} + \gamma_{1 2 3 4}) - \textstyle \sum_I \delta_I^2 \gamma_I^2 + 2 \sum_{I < J} \delta_{I J} \gamma_{I J} ] ^2 .
\end{align}
Note that the coefficient $w_0$ is the square of a Cayley hyperdeterminant, and is invariant under $\textrm{SL}(2, \mathbb{R})^3$ transformations, see e.g.\ \cite{Duff:2006uz}. We choose the orientation $\varepsilon_{t r x y} = 1$.
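For the reader's convenience, the horizon location of the planar black hole can be found numerically from $g^2 W^2(r_+) = 2 m r_+$, using the coefficients $w_0, \ldots, w_3$ listed above. The following sketch uses illustrative parameter values (all $\delta_I$, $\gamma_I$ nonzero, so that the $1/\delta_I$ and $1/\gamma_I$ terms are well defined):
\begin{verbatim}
# Sketch: outer horizon of the planar black hole, g^2 W^2(r) = 2 m r.
import numpy as np
from itertools import combinations
from scipy.optimize import brentq

m, g2 = 1.0, 0.25                        # g2 = g^2 (illustrative values)
dl = np.array([0.3, 0.25, 0.2, 0.15])    # delta_I
gm = np.array([0.2, 0.15, 0.1, 0.05])    # gamma_I
d4, g4 = dl.prod(), gm.prod()            # delta_1234, gamma_1234
pairs = list(combinations(range(4), 2))

w3 = 2*np.sum(dl**2 + gm**2)
w2 = 2*(3*np.sum(dl**2*gm**2)
        + 2*sum((dl[i]*dl[j])**2 + (gm[i]*gm[j])**2
                - dl[i]*dl[j]*gm[i]*gm[j]
                + 2*g4*dl[i]*dl[j]/(gm[i]*gm[j]) for i, j in pairs))
w1 = 2*(np.sum(dl*gm)**2*np.sum(dl**2 + gm**2)
        + 4*np.sum(d4**2/dl**2 + g4**2/gm**2 - d4*gm**2 - g4*dl**2)
        + 4*sum(d4*gm[i]*gm[j]*(dl[i]/dl[j] + dl[j]/dl[i])
                + g4*dl[i]*dl[j]*(gm[i]/gm[j] + gm[j]/gm[i])
                - dl[i]*dl[j]*gm[i]*gm[j]*(dl[i]**2 + dl[j]**2
                                           + gm[i]**2 + gm[j]**2)
                for i, j in pairs))
w0 = (4*(d4 + g4) - np.sum(dl**2*gm**2)
      + 2*sum(dl[i]*dl[j]*gm[i]*gm[j] for i, j in pairs))**2

W2 = lambda r: r**4 + w3*m*r**3 + w2*m**2*r**2 + w1*m**3*r + w0*m**4
rp = brentq(lambda r: g2*W2(r) - 2*m*r, 0.1, 10.0)  # outer horizon
print("r_+ =", rp)
\end{verbatim}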
The gauge fields and dual gauge fields are
\begin{align}
A^I & = \zeta^I \, \textrm{d} t - \tfrac{1}{2} P^I (x \, \textrm{d} y - y \, \textrm{d} x ), \nonumber \\
\widetilde{A}_I & = \widetilde\zeta_I \, \textrm{d} t + \tfrac{1}{2} Q_I \, (x \, \textrm{d} y - y \, \textrm{d} x ) .
\end{align}
The electromagnetic charges are
\begin{align}
Q_I & = 2 m \delta_I , & P^I & = 2 m \gamma_I .
\end{align}
The electromagnetic scalars can be expressed concisely as
\begin{align}
\zeta^I & = \frac{1}{W^2} \bigg( \frac{1}{2} \frac{\partial (W^2)}{\partial \delta_I} + 2 m^3 r \gamma_I Z \bigg) ,\nonumber \\
\widetilde\zeta_I & = \frac{1}{W^2} \bigg( \frac{1}{2} \frac{\partial (W^2)}{\partial \gamma_I} - 2 m^3 r \delta_I Z \bigg) ,
\end{align}
where
\begin{align}
Z & = 2 \sum_I \bigg( (\gamma_I^2 - \delta_I^2) \delta_I \gamma_I + \gamma_{1 2 3 4} \frac{\delta_I}{\gamma_I} - \delta_{1 2 3 4} \frac{\gamma_I}{\delta_I} \bigg) \nonumber \\
& \quad + \sum_I (\delta_I^2 - \gamma_I^2) \sum_J \delta_J \gamma_J .
\end{align}
The scalars are
\begin{align}
\chi_1 & = \frac{2 m [r (\delta_2 \gamma_3 + \delta_3 \gamma_2 - \delta_1 \gamma_4 - \delta_4 \gamma_1) + m f_1]}{r^2 + 2 m r (\delta_2^2 + \delta_3^2 + \gamma_1^2 + \gamma_4^2) + m^2 g_1} , \nonumber \\
\expe{\varphi_1} & = \frac{r^2 + 2 m r (\delta_2^2 + \delta_3^2 + \gamma_1^2 + \gamma_4^2) + m^2 g_1}{W} ,
\end{align}
where
\begin{align}
f_1 & = (\delta_2 \gamma_2 + \delta_3 \gamma_3 - \delta_1 \gamma_1 - \delta_4 \gamma_4 ) (\delta_{1 4} + \delta_{2 3} + \gamma_{1 4} + \gamma_{2 3}),\nonumber \\
g_1 & = 4 (\delta_{2 3} + \gamma_{1 4})^2 + (\delta_1 \gamma_1 + \delta_4 \gamma_4 - \delta_2 \gamma_2 - \delta_3 \gamma_3)^2,
\end{align}
with $\varphi_2$, $\varphi_3$, $\chi_2$ and $\chi_3$ obtained by appropriate interchange of indices.
Note that in the planar limit the parametrization is symmetrical between the electric charges $\delta_I$ and magnetic charges $\gamma_I$. It is possible to periodically identify $x$ and $y$, in which case the horizons are toroidal.
As for spherical horizons, various special cases with planar horizons have been given before \cite{Duff:1999gh, Duff:1999rk, Lu:2013eoa, Lu:2013ura}. The planar solution for Einstein--Maxwell theory has been known for a long time, see e.g.\ \cite{Stephani:2003tm}, and much used in the AdS/CFT correspondence.
\subsection{Hyperbolic horizon}
From the static solution with a spherical horizon, we may perform the analytic continuations $(t, r, \theta, m, \delta_I, \gamma_I) \rightarrow \textrm{i} (t, r, \theta, - m, \delta_I, \gamma_I)$ to obtain a solution with a hyperbolic horizon. Conversely, for the hyperbolic solution, $(t, r, \theta, m, \delta_I, \gamma_I)$ are analytic continuations of the spherical $- \textrm{i} (t, r, \theta, - m, \delta_I, \gamma_I)$, so it is convenient in the hyperbolic case to define $(n_0, \zeta^I, \widetilde{\zeta}_I)$ to be the analytic continuations of the spherical $\textrm{i} (n_0, \zeta^I, \widetilde{\zeta}_I)$. We also define $W^2, Q_I, P^I, \xi_{i 1}, \xi_{i 2}, \xi_{i 3}, \eta_{i 1}, \eta_{i 2}, \eta_{i 3}$ to be the analytic continuations of their spherical counterparts. $W$ is defined as the positive square root of $W^2$, before or after analytic continuation.
The solution with a hyperbolic horizon has the metric
\begin{align}
\textrm{d} s^2 & = - f \, \textrm{d} t^2 + \frac{\textrm{d} r^2}{f} + W (\textrm{d} \theta^2 + \sinh^2 \theta \, \textrm{d} \phi^2) ,
\end{align}
where
\begin{equation}
f = \frac{r^2}{W} \bigg( -1 - \frac{2 m}{r} + \frac{n_0^2}{r^2} + g^2 W^2 \bigg) .
\end{equation}
The gauge fields and dual gauge fields are
\begin{align}
A^I & = \zeta^I \, \textrm{d} t + P^I \cosh \theta \, \textrm{d} \phi , & \widetilde{A}_I & = \widetilde{\zeta}_I \, \textrm{d} t - Q_I \cosh \theta \, \textrm{d} \phi .
\end{align}
The scalars are
\begin{align}
\chi_i & = \frac{f_i}{r^2 + n_0^2 + g_i} , & \expe{\varphi_i} & = \frac{r^2 + n_0^2 + g_i}{W} ,
\end{align}
where
\begin{align}
f_{i} & = 2 (n_0^2 - m r) \xi_{i 1} + 2 n_0 (r + m) \xi_{i 2} + 4 (m^2 + n_0^2) \xi_{i 3} , \nonumber \\
g_{i} & =2 (n_0^2 - m r) \eta_{i 1} + 2 n_0 (r + m) \eta_{i 2} + 4 (m^2 + n_0^2) \eta_{i 3} .
\end{align}
Some special cases have been considered previously. As for the spherical case, the purely electric solutions with $\gamma_I = 0$ and the purely magnetic solutions with $\delta_I = 0$ were given in \cite{Duff:1999gh, Duff:1999rk} (except in our parametrization, real $\delta_I$ and $\gamma_I$ give real gauge fields). Similarly, there are the dyonic solutions \cite{Lu:2013eoa, Lu:2013ura}. The hyperbolic version of Reissner--Nordstr\"{o}m--AdS, with $\delta_I = \delta$ and $\gamma_I = \gamma$, has been known for a long time, see e.g.\ \cite{Stephani:2003tm}, but its physics was studied later \cite{Aminneborg:1996iz, Mann:1996gj, Mann:1997zn, Brill:1997mf,Emparan:1999gf}.
\subsection{Thermodynamics}
\label{thermo}
\subsubsection{Mass}
\label{secmass}
In asymptotically flat spacetimes, the mass can be defined unambiguously from the Arnowitt--Deser--Misner (ADM) surface charge or from a Komar integral, and is evaluated as $M/G$ where $G$ is Newton's constant and $M$ is written in \eqref{mass}. In asymptotically AdS spacetimes, the computation of the mass presents several subtleties (see e.g.\ \cite{Gibbons:2004ai,Chen:2005zj}). The Abbott--Deser formula \cite{Abbott:1981ff}, valid in Einstein gravity with a cosmological constant, is in general not valid when slowly falling scalar fields are present. In the example of the static solution of $\mathcal{N} = 2$ gauged supergravity with four independent electric charges, it was observed that the mass is not given by the Abbott--Deser formula \cite{Chen:2005zj, Lu:2013ura}. In fact, the contributions of the scalar fields to the mass can be evaluated explicitly and uniquely (see e.g.\ \cite{Henneaux:2006hk,Barnich:2002pi,Compere:2009dp}) for any stationary solution, using either Hamiltonian \cite{Regge:1974zd,Brown:1986ed} or Lagrangian \cite{Abbott:1981ff,Iyer:1994ys,Barnich:2001jy,Barnich:2007bf} canonical methods. Holographic methods \cite{Papadimitriou:2005ii} could also be used. Here we simply implement the canonical methods on the theory at hand and uniquely compute the mass. The explicit formulae for the conserved charges of a Lagrangian of the form \eqref{generalaction} can be found in \cite{Compere:2009dp}.
From now on, we set the NUT charge to zero and consider spherical horizons. In order to perform the asymptotic analysis, it is convenient to first expand $W^2(r)$ and the scalar fields $(\varphi_i,\chi_i)$, as
\begin{align}
W^2(r) & = r^4 + w_3 m r^3 + w_2 m^2 r^2 + w_1 m^3 r + w_0 m^4,\nonumber \\
\varphi_i & = \frac{\Sigma_i}{r}+\frac{\Sigma_i^{(2)}}{r^2}+O(r^{-3}),\nonumber \\
\chi_i & = \frac{\Xi_i}{r}+\frac{\Xi^{(2)}_i}{r^2}+O(r^{-3}).
\label{subsc}
\end{align}
The various coefficients can be obtained from the definitions \eqref{defW2} and \eqref{defsc}. It is only important to notice the relationships
\begin{align}
w_3 & = 4 \bigg( \frac{M}{m} - 1\bigg) , \\
w_2 & = \frac{3}{8}w_3^2-\frac{1}{2m^2}\sum_i (\Sigma_i^2 + \Xi_i^2) ,\label{vald}\\
w_1 & = \frac{1}{16}w_3^3 - \frac{5}{12m^2} w_3 \sum_i (\Sigma_i^2 + \Xi_i^2) \nonumber \\
& \quad -\frac{1}{3m^3}\sum_i (\Sigma_i \Xi_i^2 + 2\Sigma^{(2)}_i \Sigma_i + 2 \Xi^{(2)}_i \Xi_i) ,
\end{align}
which can be checked explicitly.
The contribution to an infinitesimal variation of the mass coming from the Einstein term in the action can be obtained, with the result
\begin{equation}
G \, \delta M_{\textrm{Einstein}} = \frac{g^2}{32} \delta \big[ (8w_2-3w_{3}^2)m^2 \big] r +O(r^0).
\label{dM1}
\end{equation}
The linear divergence in $r$ is a clear sign that matter fields are contributing to the mass. One can easily check that the gauge fields are not contributing to the mass using the explicit formulae for the conserved charges \cite{Compere:2009dp}. Since the magnetic monopole fields do not enter the expression for the mass at radial infinity, there is no subtlety associated with Dirac strings or singularities of the gauge fields. The scalar field contribution reads
\begin{align}
G \, \delta M_{\textrm{scalars}} & = - \frac{1}{16\pi }\sum_i \int \! \textrm{d} \theta \, \textrm{d} \phi \, \sqrt{-g} g^{r r} ( \delta \varphi_i \partial_r \varphi_i \nonumber \\
& \quad + \expe{2\varphi_i} \delta \chi_i \partial_r \chi_i ) .
\end{align}
For asymptotically flat black holes, $\delta M_{\textrm{scalars}} =O(r^{-1})$, and the scalars do not contribute to the mass. For asymptotically AdS black holes, there is an additional quartic branch in $g^{rr}$ and we obtain a linearly divergent term as well as a finite term:
\begin{equation}
G \, \delta M_{\textrm{scalars}} = \frac{g^2 r}{8} \sum_i \delta (\Sigma_i^2+\Xi_i^2) +O(r^0) .
\label{dM2}
\end{equation}
Summing up the two contributions \eqref{dM1} and \eqref{dM2} and using \eqref{vald}, which implies $(8 w_2 - 3 w_3^2) m^2 = - 4 \sum_i (\Sigma_i^2 + \Xi_i^2)$, the linearly divergent terms cancel. The variation of the mass then takes the finite form,
\begin{align}
G \, \delta \mathcal M &= \delta M +\frac{g^2}{12} \sum_i [ - (\Xi_i^2 +\Sigma_i^2) \delta (M- m) \nonumber \\
& \quad + \tfrac{1}{2}(M-m) \delta (\Xi_i^2 + \Sigma_i^2) + 2 \Xi_i^{(2)} \delta \Xi_i - \Xi_i \delta \Xi_i^{(2)} \nonumber \\
& \quad + 2\Sigma_i^{(2)} \delta \Sigma_i - \Sigma_i \delta \Sigma_i^{(2)} +\Sigma_i \delta (\Xi_i^2) - 2 \Xi_i^2 \delta \Sigma_i ] .
\end{align}
The variations proportional to $\delta m$ exactly cancel in the last term. Therefore, if the mass is integrable, its value will be $\mathcal M = M/G$. The integrability condition then amounts to
\begin{equation}
\mathcal I = 0 ,
\end{equation}
where
\begin{align}
\mathcal I & \equiv g^2 \sum_i [ \delta_2 (\Sigma^{(2)}_i - \Xi_i^2) \wedge \delta_1 \Sigma_i + \delta_2 \Xi^{(2)}_i \wedge \delta_1 \Xi_i \nonumber \\
& \quad -\tfrac{1}{2} \delta_2 (\Xi_i^2 + \Sigma_i^2) \wedge \delta_1 (M-m) - (1\leftrightarrow 2) ] .\label{II}
\end{align}
Let us now discuss the physical content of the integrability condition. It turns out that for generic values of the electric and magnetic charge parameters, the integrability condition is not obeyed, and therefore, the mass does not exist. The non-integrability of the mass comes more precisely from the presence of non-trivial scalar field profiles when the gauge fields with both electric and magnetic charges are turned on. This non-integrability is rooted in the non-existence of a phase space with such generic variations. Indeed, one can evaluate the symplectic flux \eqref{sympl} on constant $r$ slices when $r\rightarrow \infty$ using \eqref{vald}. The result is
\begin{equation}
\bm\omega (\delta_1 \phi,\delta_2\phi) |_{r \;\text{fixed}}= \mathcal{I} \sin \theta \, \textrm{d} t\wedge \textrm{d} \theta \wedge \textrm{d} \phi +O(r^{-1}),
\end{equation}
where $\mathcal I$ is given in \eqref{II}. One fundamental requirement of consistent boundary conditions is that the symplectic form has to be conserved, i.e.\ the symplectic flux at infinity should be zero. Therefore, the mass need not exist, since there is no classical phase space that contains generic, independently varying electric and magnetic charges. In the case studied in \cite{Lu:2013ura}, where one $\textrm{U}(1)$ gauge field is turned on, one can also check that the mass is not generically defined, which follows from the non-existence of a conserved symplectic structure.
In the cases where we expect a phase space to be defined, such as only electric charges (all $P^I = 0$) or only magnetic charges (all $Q_I = 0$), where a dual CFT description should be available \cite{Witten:2003ya}, the mass should also be defined, and indeed it is. One can check that $\mathcal I=0$ for these cases, and the mass is then $\mathcal{M} = M/G$.
Quite surprisingly, there are also cases with independent electric and magnetic charges where the symplectic flux vanishes ($\mathcal I=0$), and the mass is defined. These boundary conditions will violate boundary Lorentz invariance and therefore will be outside of the standard AdS/CFT description. One such example is when the gauge fields are pairwise equal, e.g.\ $Q_1=Q_4$, $Q_2=Q_3$, $P^1=P^4$, $P^2=P^3$ (which is equivalent to $\delta_1 = \delta_4$, $\delta_2 =\delta_3$, $\gamma_1 = \gamma_4$, $\gamma_2=\gamma_3$). This includes as a subcase the dyonic Reissner--Nordstr\"{o}m--AdS black hole.
Quite intriguingly, at least three additional distinct cases arise where the integrability condition $\mathcal I=0$ is obeyed. The first case is when all four electric and magnetic charges are set equal or opposite to each other, $P^I = \pm Q_I$, $I=1,2,3,4$, with an even number of minus signs. The second case arises with two vanishing sets of charges, say $Q_1=Q_2=P^1=P^2=0$, and with the two remaining sets of charges obeying $Q_3 = \pm P^3$, $Q_4 = \pm P^4$ for any choice of signs. The third case arises when all but one set of charges vanish; for the remaining set, say $Q_1$ and $P^1$, one has $Q_1 = \pm P^1$. The last case was discussed in \cite{Lu:2013ura}. For free gauge fields in AdS$_4$, one has an $\textrm{SL}(2,\mathbb Z)$ family of Lorentz-invariant boundary conditions \cite{Witten:2003ya}. Here, we find a smaller set of mixed boundary conditions, which are very restricted by the interactions with the scalar fields. Other examples might exist, since we were not able to perform a complete classification of the solutions to $\mathcal I=0$.
\subsubsection{Other conserved charges}
\label{consch}
Let us obtain the electromagnetic charges and the angular momentum of the solution. Using the canonical expressions for the charges \cite{Compere:2009dp} for the Lagrangian \eqref{generalaction}, one obtains the electromagnetic charges in geometrical units, $\overline{Q}_I = \tfrac{1}{4G} Q_I$ and $\overline{P}^I = \tfrac{1}{4G} P^I$. Since electric and magnetic charges are present, the definition of angular momentum requires more care.
The angular momentum is usually associated with the large gauge transformation $(\xi,\Lambda^I) = (- \partial / \partial \phi,0)$, where $\xi$ is the generator of a diffeomorphism, and $\Lambda^I$ are generators of $\textrm{U}(1)$ gauge transformations. However, in the presence of magnetic charges, the definition has to be modified. Recall that the gauge field $A^I$ may then be defined in the north and south patches as
\begin{align}
A_{\textrm{north}}^I & = P^I (\cos\theta -1) \, \textrm{d} \phi +O(r^{-1}), \nonumber \\
A_{\textrm{south}}^I & = P^I (\cos\theta +1) \, \textrm{d}\phi + O(r^{-1}).
\end{align}
One requirement for the applicability of the canonical formalism at $r \rightarrow \infty$ is that $\xi^\mu A^I_\mu + \Lambda^I$ has to be continuous across the equator at $r \rightarrow \infty$ \cite{Copsey:2005se}. We denote the magnetic charges of the gauge fields $\mathbf A^I$ as defined in \eqref{generalaction} by $\mathbf{P}^I$, $I=1,2,3,4$, namely $\mathbf{P}^1 = P^1$, $\mathbf{P}^2 = -Q_2$, $\mathbf{P}^3 = -Q_3$ and $\mathbf{P}^4 = P^4$. The angular momentum can then be defined as associated with $(-\partial / \partial \phi,\Lambda^I)$, where $\Lambda^I_{\textrm{north}}=-\mathbf{P}^I$, $\Lambda^I_{\textrm{south}}= \mathbf{P}^I$. One can then check that the angular momentum is zero. Indeed, one has explicitly
\begin{equation}
\delta J = \int \! \frac{(\textrm{d} ^2 x)_{\mu\nu}}{16\pi G} \, \left[ \delta T_I^{\mu \nu} (\mathbf A^I_\phi +\Lambda^I ) + T_I^{\mu\nu} \delta \mathbf A_\phi^I \right] ,
\end{equation}
where $T_I^{\mu\nu} = k_{IJ}\mathbf F^{J \mu \nu}-h_{IJ}\epsilon^{\mu\nu\lambda\sigma}\mathbf F_{\lambda \sigma}^J$. Evaluating on the sphere at infinity $S^2_\infty$ and using
\begin{align}
k_{IJ} & = \delta_{IJ} + O(r^{-1}), & h_{IJ} & = O(r^{-1}),
\end{align}
one then has
\begin{equation}
\delta J = \frac{1}{16 \pi G} \int_{S^2_\infty} \! \textrm{d} \theta \, \textrm{d} \phi \, r^2 \mathbf F_I^{tr} \delta \mathbf A_\phi^I = 0 ,
\end{equation}
after a non-trivial cancellation between the north and south patches. Physically, since the magnetic monopole sits on the electric monopole, no net angular momentum is produced.
\subsubsection{First law}
\label{fl1}
Black hole solutions have horizons at $r = r_\pm$, which are the roots of the polynomial $R_g(r)$ defined as
\begin{equation}
R_g(r) = R(r) + g^2 W^2(r) ,
\end{equation}
where $R$ and $W$ are given in \eqref{defR} and \eqref{defW2}, respectively. From the geometry, we obtain the temperature $T$ and entropy $S$
\begin{align}
T & = \frac{R_{g}' (r_+)}{4 \pi W(r_+)} , & S & = \frac{\pi}{G} W(r_+) ,
\end{align}
where a prime denotes the radial derivative. The electric and magnetic potentials on the horizon are respectively
\begin{align}
\Phi^I & = \zeta^I (r_+) , & \Psi_I & = \widetilde{\zeta}_I (r_+).
\end{align}
In the asymptotically flat case ($g=0$), these expressions can be simplified, using the property $W(r_+)=L(r_+)$, to
\begin{align}
\Phi^I |_{\textrm{flat}} & = -\frac{P^I n_0+L^I(r_+)}{L(r_+)}, & \Psi_I|_{\textrm{flat}} & = \frac{Q_I n_0+\widetilde L_I(r_+)}{L(r_+)},
\end{align}
but there is no obvious simplification in the asymptotically AdS case.
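Concretely, once the functions $W^2(r)$ and $R(r)$ are specified, the outer horizon and the associated temperature and entropy follow from a root-finding computation. The following Python sketch illustrates this, assuming illustrative numerical coefficients in place of the actual expressions from \eqref{defW2} and \eqref{defR}; all variable names are ours.
\begin{verbatim}
import numpy as np

G, g = 1.0, 0.1      # Newton's constant and gauge coupling (illustrative)

# Illustrative stand-ins for eqs. (defW2) and (defR):
# W^2(r) = r^4 + c3 r^3 + c2 r^2 + c1 r + c0 and R(r) = r^2 - 2 m r
W2 = np.poly1d([1.0, 0.8, 0.3, 0.1, 0.01])
m = 5.0
R = np.poly1d([1.0, -2.0 * m, 0.0])

Rg = R + g**2 * W2                       # R_g(r) = R(r) + g^2 W^2(r)

# outer horizon r_+ = largest positive real root of R_g
r_plus = max(r.real for r in Rg.roots
             if abs(r.imag) < 1e-10 and r.real > 0)

W_plus = np.sqrt(W2(r_plus))             # W(r_+), positive square root
T = Rg.deriv()(r_plus) / (4 * np.pi * W_plus)   # temperature
S = np.pi * W_plus / G                          # entropy

print(f"r_+ = {r_plus:.5f}, T = {T:.5f}, S = {S:.5f}")
\end{verbatim}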
For each boundary condition where we explicitly checked that the mass can be defined, we find that the thermodynamic quantities satisfy the first law of thermodynamics
\begin{equation}
\delta \mathcal M = T \, \delta S + \sum_{I = 1}^4 ( \Phi^I \, \delta \overline{Q}_I + \Psi_I \, \delta \overline{P}^I) ,
\end{equation}
where $\overline{Q}_I = \tfrac{1}{4G} Q_I$ and $\overline{P}^I = \tfrac{1}{4G} P^I$ are the electromagnetic charges in geometrical units obtained earlier.
We finally note that the thermodynamics may be studied similarly for the solutions with planar or hyperbolic horizons. The thermodynamic quantities for the planar solutions can easily be read off from the solution. The thermodynamic quantities for hyperbolic horizons can be obtained from those of the solution with a spherical horizon by analytic continuation.
\section{$\textrm{U}(1)^2$ dyonic rotating black holes}
\label{BH2}
Let us now present rotating solutions to the Lagrangian with the four gauge fields set pairwise equal, \eqref{LU2} or equivalently \eqref{SO4}. The general stationary rotating black hole of $\mathcal{N} = 2$, $\textrm{U}(1)^2$ ungauged supergravity was found in \cite{LozanoTellechea:1999my}, and is asymptotically flat or more generally asymptotically Taub--NUT. A generalization of this black hole to $\mathcal{N} = 2$, $\textrm{U}(1)^4$ ungauged supergravity was presented in a simplified form in \cite{Chow:2013tia}. Using the $(r, u)$ symmetric notation \cite{Chong:2004na}, the metric can be written as
\begin{align}
\textrm{d} s^2 & = - \frac{\widehat R}{\widehat W} \bigg( \textrm{d} \widehat t - \frac{\widehat a^2 - \widehat u_1 \widehat u_2}{\widehat a} \, \textrm{d} \widehat \phi \bigg)^2 + \frac{\widehat W}{\widehat{R}} \, \textrm{d} \widehat r^2 \nonumber \\
& \quad + \frac{\widehat U}{\widehat W} \bigg( \textrm{d} \widehat t - \frac{\widehat r_1 \widehat r_2 + \widehat a^2}{\widehat a} \, \textrm{d} \widehat \phi \bigg)^2 + \frac{\widehat{W}}{\widehat{U}} \, \textrm{d} \widehat u^2 ,
\end{align}
where $\widehat W=\widehat r_1 \widehat r_2 + \widehat u_1 \widehat u_2$; $\widehat R(\widehat r)$ and $\widehat U(\widehat u)$ are quadratic polynomials of their arguments; and $\widehat r_a,\widehat u_a$ for $a=1,2$ are linear functions of $\widehat r, \widehat u$, respectively. One can think of $\widehat u$ as a function of $\cos\theta$, where $\theta$ is the polar angle. Following \cite{Chong:2004na}, it turns out that one can obtain a solution to $\textrm{U}(1)^2$ gauged supergravity upon replacing the functions as
\begin{align}
\widehat R & \rightarrow \widehat R+g^2 \widehat r_1 \widehat r_2 (\widehat r_1 \widehat r_2 +\widehat a^2), \\
\widehat U &\rightarrow \widehat U+g^2 \widehat u_1 \widehat u_2 (\widehat u_1 \widehat u_2 -\widehat a^2),
\end{align}
with everything else untouched, including the matter fields. In order to set the metric in an asymptotically AdS coordinate system, an analysis \emph{\`a la} Griffiths--Podolsk\'y \cite{Griffiths:2005qp} is necessary, which we will perform in Section \ref{AAdS}. Since the interest of these solutions is the inclusion of angular momentum, we will only discuss the black hole solutions with spherical horizons.
\subsection{General solution}
\subsubsection{Metric}
We use hatted coordinates to emphasize that the solution is not in an asymptotically AdS coordinate system. The coordinates are rotating at infinity and $\widehat{\phi}$ is not canonically normalized. However, this coordinate system is convenient for expressing the solution in a simple form. Using the notations of \cite{Chong:2004na}, the metric is
\begin{align}
\textrm{d} s^2 & = - \frac{\widehat R_g}{\widehat W} \bigg( \textrm{d} \widehat t - \frac{\widehat a^2 - \widehat u_1 \widehat u_2}{\widehat a} \, \textrm{d} \widehat \phi \bigg)^2 + \frac{\widehat W}{\widehat R_g} \, \textrm{d} \widehat r^2 \nonumber \\
& \quad + \frac{\widehat U_g}{\widehat W} \bigg( \textrm{d} \widehat t - \frac{\widehat r_1 \widehat r_2 + \widehat a^2}{\widehat a} \, \textrm{d} \widehat \phi \bigg)^2 + \frac{\widehat{W}}{\widehat U_g} \, \textrm{d} \widehat{u}^2 ,\label{mads3}
\end{align}
where
\begin{align}
\widehat R_g(\widehat r) & = \widehat r^2 - 2 \widehat m \widehat r + \widehat a^2 + g^2 \widehat r_1 \widehat r_2 (
\widehat r_1 \widehat r_2 + \widehat a^2) , \nonumber \\
\widehat U_g(\widehat u) & = -\widehat u^2 + 2 \widehat n \widehat u +\widehat a^2 + g^2 \widehat u_1 \widehat u_2 (\widehat u_1 \widehat u_2 - \widehat a^2) , \nonumber \\
\widehat W(\widehat r,\widehat u) & = \widehat r_1 \widehat r_2 + \widehat u_1 \widehat u_2 . \label{hRg}
\end{align}
The variables $\widehat r_a$, $\widehat u_a$ are defined as
\begin{align}
\widehat r_a & = \widehat r + \Delta \widehat r_a , & \widehat u_a & = \widehat u + \Delta \widehat u_a ,
\end{align}
where
\begin{align}
\Delta \widehat r_1 & = \widehat m [\cosh (2 \delta_1) \cosh (2 \gamma_2) - 1] \nonumber \\
& \quad + \widehat n \sinh (2 \delta_1) \sinh (2 \gamma_1) , \nonumber \\
\Delta \widehat r_2 & = \widehat m [\cosh (2 \delta_2) \cosh (2 \gamma_1) - 1] \nonumber \\
& \quad + \widehat n \sinh (2 \delta_2) \sinh (2 \gamma_2) , \nonumber \\
\Delta \widehat u_1 & = \widehat n [\cosh (2 \delta_1) \cosh (2 \gamma_2) - 1] \nonumber \\
& \quad - \widehat m \sinh (2 \delta_1) \sinh (2 \gamma_1) ,\nonumber \\
\Delta \widehat u_2 & = \widehat n [\cosh (2 \delta_2) \cosh (2 \gamma_1) - 1] \nonumber \\
& \quad - \widehat m \sinh (2 \delta_2) \sinh (2 \gamma_2) .
\end{align}
It is convenient to define the linear combinations of $\Delta \widehat r_a,\Delta \widehat u_a$, as
\begin{align}
\Sigma_{\Delta r} & = \tfrac{1}{2} (\Delta \widehat r_1 + \Delta \widehat r_2) , & \Delta_{\Delta r} & = \tfrac{1}{2} (\Delta \widehat r_2 - \Delta \widehat r_1) , \nonumber \\
\Sigma_{\Delta u} & = \tfrac{1}{2} (\Delta \widehat u_1 + \Delta \widehat u_2) , & \Delta_{\Delta u} & = \tfrac{1}{2} (\Delta \widehat u_2 - \Delta \widehat u_1) . \label{SigmaDelta}
\end{align}
We define
\begin{align}
M & = \widehat m + \Sigma_{\Delta r} , & N & = \widehat n + \Sigma_{\Delta u} ,
\end{align}
which are the physical mass and NUT charge in the asymptotically flat case when no gauging is present ($g=0$). It turns out that the total NUT charge in asymptotically AdS spacetime vanishes when $N=0$, as we will show explicitly in Section \ref{AAdS}. The NUT charge can therefore be cancelled upon setting $\widehat n=\widehat n_0$, where
\begin{equation}
\widehat n_0 = \frac{\sinh (2 \gamma_1) \sinh (2 \delta_1) + \sinh (2 \gamma_2) \sinh (2 \delta_2)}{\cosh (2 \gamma_1) \cosh (2 \delta_2) + \cosh (2 \gamma_2) \cosh (2 \delta_1)} \widehat m.\label{noNUT1}
\end{equation}
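The statement that $N = \widehat n + \Sigma_{\Delta u}$ vanishes at $\widehat n = \widehat n_0$ can be verified directly with a computer algebra system; a short sympy check (variable names ours):
\begin{verbatim}
import sympy as sp

mh, nh, d1, d2, g1, g2 = sp.symbols('mh nh d1 d2 g1 g2')

# Delta uhat_a as given above, and Sigma_{Delta u} = (Du1 + Du2)/2
Du1 = nh*(sp.cosh(2*d1)*sp.cosh(2*g2) - 1) - mh*sp.sinh(2*d1)*sp.sinh(2*g1)
Du2 = nh*(sp.cosh(2*d2)*sp.cosh(2*g1) - 1) - mh*sp.sinh(2*d2)*sp.sinh(2*g2)
N = nh + (Du1 + Du2) / 2

# nhat_0 from eq. (noNUT1)
n0 = mh*(sp.sinh(2*g1)*sp.sinh(2*d1) + sp.sinh(2*g2)*sp.sinh(2*d2)) / \
     (sp.cosh(2*g1)*sp.cosh(2*d2) + sp.cosh(2*g2)*sp.cosh(2*d1))

assert sp.simplify(N.subs(nh, n0)) == 0   # N = 0 at nhat = nhat_0
\end{verbatim}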
We fix the orientation as $\varepsilon_{t r \phi u} =1$.
\subsubsection{Matter}
The electromagnetic charges are
\begin{align}
Q_1 & = \frac{\partial M}{\partial \delta_1} = \frac{1}{2}\frac{\partial \widehat r_1}{\partial \delta_1}, &Q_2 & = \frac{\partial M}{\partial \delta_2} = \frac{1}{2}\frac{\partial \widehat r_2}{\partial \delta_2}, \nonumber \\
P^1 & = - \frac{\partial N}{\partial \delta_1} = - \frac{1}{2}\frac{\partial \widehat u_1}{\partial \delta_1}, & P^2 & = - \frac{\partial N}{\partial \delta_2} = - \frac{1}{2}\frac{\partial \widehat u_2}{\partial \delta_2} .
\end{align}
The gauge fields and dual gauge fields are
\begin{align}
A^{1} & = {\zeta}^1 (\textrm{d}\widehat t - \widehat a \, \textrm{d}\widehat \phi) + \frac{\widehat r_2 \widehat u_2 \widetilde\zeta_1}{\widehat a}\textrm{d}\widehat \phi, \nonumber \\
A^{2} & = {\zeta}^2 (\textrm{d} \widehat t -\widehat a \,\textrm{d} \widehat \phi) + \frac{\widehat r_1 \widehat u_1 \widetilde\zeta_2}{\widehat a}\textrm{d}\widehat \phi , \nonumber \\
\widetilde{A}_1 & = \widetilde\zeta_1 (\textrm{d} \widehat t - \widehat a \, \textrm{d} \widehat \phi) - \frac{\widehat r_1 \widehat u_1 {\zeta}^1}{\widehat a} \, \textrm{d} \widehat \phi , \nonumber \\
\widetilde{A}_2 & = \widetilde\zeta_2 (\textrm{d} \widehat t - \widehat a \, \textrm{d} \widehat \phi) - \frac{\widehat r_2 \widehat u_2 {\zeta}^2}{\widehat a} \, \textrm{d} \widehat \phi ,
\end{align}
where the 3-dimensional electromagnetic scalars are
\begin{align}
{\zeta}^1 & = \frac{1}{2 \widehat W} \frac{\partial \widehat W}{\partial \delta_1} = \frac{Q_1 \widehat r_2 - P^1 \widehat u_2}{\widehat W}, & \widetilde\zeta_1 & = \frac{Q_1 \widehat u_1 + P^1 \widehat r_1}{\widehat W}, \nonumber \\
{\zeta}^2 & = \frac{1}{2 \widehat W} \frac{\partial \widehat W}{\partial \delta_2} = \frac{Q_2\widehat r_1 - P^2 \widehat u_1 }{\widehat W} , & \widetilde\zeta_2 & = \frac{Q_2 \widehat u_2 + P^2 \widehat r_2}{\widehat W} .
\end{align}
Here, partial differentiation is done for generic $\widehat n$, and the result is then evaluated at $\widehat n=\widehat n_0$. The scalar fields are given by
\begin{align}
\chi & = \frac{\widehat r_2 \widehat u_1 - \widehat r_1 \widehat u_2}{\widehat r_2^2 + \widehat u_2^2} , & \expe{\varphi} & = \frac{\widehat r_2^2+\widehat u_2^2}{\widehat W} . \label{scalarspairwise}
\end{align}
It is quite remarkable that the gauge fields have such simple expressions in terms of the scalars $\zeta^I$ and $\widetilde \zeta_I$, and that the scalars themselves are simple. We checked that the metric \eqref{mads3}, accompanied with the matter fields, is a solution to the field equations of the Lagrangian \eqref{LU2} using Mathematica \cite{Mathematica}.
\subsection{Asymptotically AdS coordinates}
\label{AAdS}
We have presented a local form of the metric and the matter fields. The identification of the total NUT charge and the global identifications of coordinates necessary in order to have a regular solution can be obtained by finding a suitable asymptotically AdS coordinate system in the asymptotic region. We follow the method of Griffiths and Podolsk\'y \cite{Griffiths:2005qp}, which consists of putting the metric into a suitable generalization of the Pleba\'{n}ski--Demia\'{n}ski form, for which the analysis can be done most easily.
Starting from the coordinates $(\widehat r,\widehat u,\widehat \phi,\widehat t)$, we define the new coordinates $(r,p,\overline \phi,t)$ as
\begin{align}
\widehat r & = \beta r - \Sigma_{\Delta r} , \\
\widehat \phi & = \frac{\widehat a}{\beta^3 a}\overline \phi,\\
\widehat u & = \beta (N_g + a p) -\Sigma_{\Delta u} ,\\
\widehat t & = \frac{t}{\beta} + \frac{\widehat a^2 + \Delta_{\Delta u}^2 -(N_g+a)^2 \beta^2 }{a \beta^3} \overline \phi ,
\end{align}
where $a$, $\beta$ and $N_g$ are three constants that we will fix shortly in terms of the parameters of the solution. The metric then reads as
where $a,\;\beta,\; N_g$ are three constants that we will fix shortly in terms of the parameters of the solution. The metric then reads as
\begin{align}
\textrm{d} s^2 & = -\frac{R_g}{W} \big( \textrm{d} t - [2N_g (1- p) + a (1- p^2) ] \, \textrm{d} \overline \phi \big) ^2 \nonumber \\
& \quad + \frac{P_g}{W} \big( a \, \textrm{d} t - [r^2 - v^2 + (N_g + a)^2] \, \textrm{d} \overline \phi \big)^2 \nonumber\\
& \quad + W \bigg( \frac{\textrm{d} r^2}{R_g} + \frac{\textrm{d} p^2}{ P_g} \bigg) ,
\end{align}
with
\begin{align}
W & = r^2 + (N_g + a p)^2 - v^2 , \nonumber \\
R_g & = k + e^2 -2 m r +(\epsilon -2 g^2 v^2)r^2+g^2 r^4, \nonumber \\
P_g & = a_0 +a_1 p+a_2 p^2+a_3 p^3 +a_4 p^4,
\end{align}
where
\begin{align}
a_0 &= a^{-2}(k -N_g^2 \epsilon +2 n N_g +g^2 N_g^4),\nonumber \\
a_1 &= 2a^{-1}(n-N_g \epsilon +2g^2 N_g^3),\nonumber \\
a_2 &= 6 g^2 N_g^2 -\epsilon,\nonumber \\
a_3 &= 4 a g^2 N_g,\nonumber \\
a_4 &= g^2 a^2,
\end{align}
and one can express the new parameters $(\epsilon,k,e,m,n,v)$ in terms of the old ones as
\begin{align}
m &= \beta^{-3}(\widehat m+\Sigma_{\Delta r}),\label{valm}\\
n & = \beta^{-3} (\widehat n+\Sigma_{\Delta u} ) , \label{valn}\\
v^2 &= \beta^{-2} (\Delta_{\Delta r}^2 +\Delta_{\Delta u}^2 ),\label{vv}\\
\epsilon & = \beta^{-2} [1 + g^2 (\widehat a^2+2\Delta^2_{\Delta u} ) ] , \label{valeps}\\
k & = \beta^{-4} [\widehat a^2 - 2 \widehat{n} \Sigma_{\Delta u} - \Sigma_{\Delta u}^2 + g^2 \Delta_{\Delta u}^2 (\widehat a^2+\Delta_{\Delta u}^2 )] , \label{valk}\\
e^2 +k & = \beta^{-4} [\widehat a^2+2 \widehat m \Sigma_{\Delta r} + \Sigma_{\Delta r}^2 - g^2\Delta_{\Delta r} ^2 (\widehat a^2-\Delta_{\Delta r}^2)] .\label{vale}
\end{align}
We have set the metric in a form where the treatment of \cite{Griffiths:2005qp} is applicable. Note that our metric is slightly more general than the non-accelerating metric studied in \cite{Griffiths:2005qp} (obtained by setting their $\alpha$ to zero), since here $v$ is generically non-zero. In generalizing the Griffiths--Podolsk\'y form, we insisted on keeping the same dependence for the $p$-dependent functions, since this dependence is used to discuss the range of the angular coordinates of the solutions. By contrast, the radial dependence is changed by $O(r^{-2})$ terms that do not affect the analysis.
When $P_g$ has two real roots (which is the case of interest here), one can choose to put these real roots at $1$ and $-1$,
\begin{equation}
P_g = (1- p^2)(a_0 -a_3 p-a_4 p^2),
\end{equation}
which implies that the above coefficients obey $a_1+a_3 =0 = a_0+a_2+a_4$. These two conditions provide two linear equations that specify $\epsilon$ and $n$ in terms of $a$ and $N_g$ as
\begin{align}
\epsilon & = \frac{k}{a^2-N_g^2}+g^2 (a^2+3N_g^2),\label{eqepsn0} \\
n & = \frac{k N_g}{a^2-N_g^2}+g^2 N_g(N_g^2-a^2).\label{eqepsn}
\end{align}
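These two linear conditions can also be solved symbolically; the following sympy sketch (variable names ours) reproduces \eqref{eqepsn0} and \eqref{eqepsn}:
\begin{verbatim}
import sympy as sp

a, g, k, eps, n, Ng = sp.symbols('a g k epsilon n N_g')

a0 = (k - Ng**2*eps + 2*n*Ng + g**2*Ng**4) / a**2
a1 = 2*(n - Ng*eps + 2*g**2*Ng**3) / a
a2 = 6*g**2*Ng**2 - eps
a3 = 4*a*g**2*Ng
a4 = g**2*a**2

# impose a1 + a3 = 0 and a0 + a2 + a4 = 0, solve for (epsilon, n)
sol = sp.solve([a1 + a3, a0 + a2 + a4], [eps, n], dict=True)[0]

eps_exp = k/(a**2 - Ng**2) + g**2*(a**2 + 3*Ng**2)
n_exp = k*Ng/(a**2 - Ng**2) + g**2*Ng*(Ng**2 - a**2)

assert sp.simplify(sol[eps] - eps_exp) == 0
assert sp.simplify(sol[n] - n_exp) == 0
\end{verbatim}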
The parameter $N_g$ is then recognized as the total NUT charge since the metric admits a NUT singularity proportional to $N_g$ at the pole $p = -1$. The condition that the NUT charge $N_g$ is zero is equivalent to $n=0$, which from \eq{valn} translates into
\begin{equation}
\widehat{n} + \Sigma_{\Delta u} = 0 ,
\end{equation}
which is equivalent to \eqref{noNUT1}. This therefore justifies our earlier claim.
Let us assume $a_0 > 0$. This assumption can be verified for all Pleba\'{n}ski--Demia\'{n}ski black holes and for our general black holes when $N_g=0$. We can then set $a_0= 1$ using the remaining scaling symmetry. The equation $a_0 = 1$ can be solved for $k$ as
\begin{equation}
k = (a^2 - N_g^2)(1+3g^2 N_g^2). \label{solk}
\end{equation}
Hence, equations \eqref{eqepsn0} and \eqref{eqepsn} become
\begin{align}
\epsilon & = 1 +g^2 (a^2 + 6 N_g^2), \label{soleps} \\
n & = N_g [1 -g^2(a^2-4N_g^2)] .
\end{align}
After setting $p =\cos\theta$ and $\overline\phi = \widetilde{\phi}\, \Xi^{-1}$, where
\begin{equation}
\Xi = 1-a^2 g^2-4 a N_g g^2,
\end{equation}
the metric in $(t,r,\theta,\widetilde{\phi})$ coordinates becomes
\begin{align}
\textrm{d} s^2 & = -\frac{R_g}{W}\left( \textrm{d} t - \frac{a \sin^2 \theta + 4 N_g \sin^2(\theta/2)}{\Xi}\, \textrm{d} \widetilde{\phi} \right)^2 \nonumber \\
& \quad + \frac{\Theta_g \sin^2\theta}{W }\left(a \, \textrm{d} t - \frac{L}{\Xi} \, \textrm{d} \widetilde{\phi} \right)^2 + W \bigg( \frac{\textrm{d} r^2}{R_g} + \frac{\textrm{d} \theta^2}{\Theta_g} \bigg) ,\label{AAdSm}
\end{align}
with
\begin{align}
R_g(r) & = r^2 - 2 m r + a^2 + e^2 - N_g^2 + g^2 [ r^4 \nonumber \\
& \quad + (a^2 + 6 N_g^2-2v^2 )r^2+ 3N_g^2(a^2-N_g^2) ],\label{Rg}\\
\Theta_g(\theta) & = 1 - a^2 g^2 \cos^2\theta - 4 a g^2 N_g \cos\theta , \\
W(r,\theta) & = r^2 + (N_g+a\cos\theta )^2 - v^2, \label{defW} \\
L(r) & = r^2 +(N_g+a)^2 - v^2,\label{defL}
\end{align}
which is a straightforward generalization of the Kerr--Newman--Taub--NUT--AdS solution. The solution is regular at the north and south poles $\theta = 0,\pi$ upon identifying $\widetilde{\phi} \sim \widetilde{\phi} + 2\pi$ with $0 \leq \theta \leq \pi$.
When the NUT charge is zero, the $(t, \widetilde{\phi})$ coordinate frame is rotating at infinity, but provides a concise way of stating the solution. To obtain a coordinate frame that is static at infinity, we should use
\begin{equation}
\phi = \widetilde{\phi} + a g^2 t ,
\label{Omegainf}
\end{equation}
which also has period $2 \pi$. Furthermore, manifestly asymptotically AdS coordinates $(t,r_*,\theta_*,\phi)$ can be obtained from $(t,r,\theta,\phi)$ coordinates as
\begin{align}
r & = r_* \sqrt{1 - a^2 g^2 \sin^2\theta_*} \bigg( 1 \nonumber \\
& \quad + \frac{v^2-a^2\sin^2\theta_* [1-g^2 (v^2-a^2)]}{2(1 - a^2 g^2 \sin^2\theta_*)^2 r_*^2}+O(r_*^{-4}) \bigg) , \nonumber \\
\sin\theta & = \frac{\sqrt{\Xi}}{\sqrt{1-a^2 g^2 \sin^2\theta_*}}\sin\theta_* \bigg( 1 \nonumber \\
& \quad - \frac{a^2 \cos\theta_*}{2(1-a^2 g^2 \sin^2\theta_*)^2 r_*^2}+O(r_*^{-4})\bigg) .
\end{align}
The metric reads as
\begin{align}
\textrm{d} s^2 & = -(1 + g^2 r_*^2) \textrm{d} t^2 + \frac{\textrm{d} r_*^2}{1 + g^2 r_*^2} + r_*^2 (\textrm{d} \theta_*^2 +\sin^2\theta_* \, \textrm{d} \phi^2) \nonumber \\
& \quad + h_{\mu\nu} \, \textrm{d} x^\mu \, \textrm{d} x^\nu \label{adsr}
\end{align}
with
\begin{align}
h_{t t} & = O(r_*^{-1}), \quad h_{t \phi} = O(r_*^{-1}),\quad h_{\phi \phi} = O(r_*^{-1}),\nonumber \\
h_{t r_*} & = h_{t \theta_*} = h_{\phi r_*} = h_{\phi \theta_*} =0,\nonumber \\
h_{r_* r_*} & = -\frac{v^2}{g^2(1-g^2 a^2 \sin^2\theta_*)r_*^4}+O(r_*^{-5}),\nonumber \\
h_{r_* \theta_*} & = O(r_*^{-3}),\qquad h_{\theta_* \theta_*} = O(r_*^{-2}),
\end{align}
and the coordinate range is $0 \leq \theta_* \leq \pi$. This completes our program of finding a global coordinate system for the solution in the asymptotic region.
In the presence of scalar fields, asymptotically AdS boundary conditions are generically modified while the asymptotic symmetry group is generically unchanged, see e.g.\ \cite{Hertog:2004dr,Henneaux:2006hk}. Here, when $v \neq 0$, the metric \eqref{adsr} does not obey the Henneaux--Teitelboim boundary conditions \cite{Henneaux:1985tv}. When only electric charges are allowed, we expect that the asymptotic symmetry group is still the $\textrm{SO}(3,2)$ group. When both electric and magnetic charges are allowed to be varied, boundary conditions for the gauge fields violate Lorentz invariance as we discussed in the introduction. We then expect that the asymptotic symmetry group only consists of the Galilean group of rotations and translations.
In order to express the solution in terms of the original parameters $(\widehat m, \widehat n,\widehat a)$, we can compare the solutions for $(\epsilon,k)$ in \eqref{solk} and \eqref{soleps} with \eqref{valk} and \eqref{valeps}. Comparing $\epsilon$ shows that $\beta$ is given by
\begin{equation}
\beta^2 = \frac{1+g^2(\widehat a^2+2 \Delta_{\Delta u}^2)}{1+g^2 (a^2+6N_g^2)} . \label{solbeta}
\end{equation}
Comparing $k / \epsilon^2$ shows that $\widehat a$ and $a$ are related by a quadratic equation for $\widehat{a}^2$ in terms of $a^2$ and vice versa. When $N_g=0$, this relation reduces to
\begin{equation}
\frac{\widehat a^2 - \Sigma_{\Delta u}^2+g^2 \Delta_{\Delta u}^2 (\widehat a^2 + \Delta_{\Delta u}^2)}{[1+g^2(\widehat a^2 +2\Delta_{\Delta u}^2)]^2} = \frac{a^2}{(1+a^2 g^2)^2}.\label{solhata}
\end{equation}
If furthermore $\Delta_{\Delta u} = 0 = \Sigma_{\Delta u}$, then the two solutions are $\widehat a^2 = a^2$ and $\widehat a^2 = a^{-2} g^{-4}$. We select the branch that is smooth in the limit $g \rightarrow 0$, which reduces in pure gravity to $\widehat a = a$. This uniquely fixes the relationship between $\widehat a$ and $a$. The parameters $(m,n)$ are then related to $(\widehat m,\widehat n)$ by \eqref{valm} and \eq{valn}, where $\beta$ is given by \eqref{solbeta}.
To summarize, the solution depends on the gauge-coupling constant $g$ and 7 other parameters: the mass parameter $m$, the NUT charge $N_g$, the rotation parameter $a$, and four electromagnetic charge parameters $\Sigma_{\Delta r}, \Delta_{\Delta r}, \Sigma_{\Delta u}, \Delta_{\Delta u}$. From the original solution expressed with hatted parameters $(\widehat{m}, \widehat{n}, \widehat{a})$, and charge parameters $(\delta_1, \delta_2, \gamma_1, \gamma_2)$, it is trivial to compute $\Sigma_{\Delta r}, \Delta_{\Delta r}, \Sigma_{\Delta u}, \Delta_{\Delta u}$ from \eq{SigmaDelta}. $a$ is then computed from \eq{solhata}, or its generalization to non-zero NUT charge. $\beta$ is then computed from \eq{solbeta}, with which $m$ and $n$ are computed using \eq{valm} and \eq{valn}. Finally, $v$ and $e$ can be computed using \eq{vv}, \eq{valk} and \eq{vale}.
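This chain of reparametrizations can be implemented numerically. Below is a minimal sketch for $N_g = 0$ (so $\widehat n = \widehat n_0$), selecting the branch of \eq{solhata} that is smooth as $g \rightarrow 0$; the function and variable names are ours.
\begin{verbatim}
import numpy as np

def map_parameters(mh, ah, d1, d2, g1, g2, g):
    """Map hatted parameters to (m, n, a, beta, v^2, e^2), for N_g = 0."""
    ch, sh = np.cosh, np.sinh
    # nhat = nhat_0 cancels the NUT charge, eq. (noNUT1)
    nh = mh*(sh(2*g1)*sh(2*d1) + sh(2*g2)*sh(2*d2)) \
         / (ch(2*g1)*ch(2*d2) + ch(2*g2)*ch(2*d1))
    # shifts Delta rhat_a, Delta uhat_a
    Dr1 = mh*(ch(2*d1)*ch(2*g2) - 1) + nh*sh(2*d1)*sh(2*g1)
    Dr2 = mh*(ch(2*d2)*ch(2*g1) - 1) + nh*sh(2*d2)*sh(2*g2)
    Du1 = nh*(ch(2*d1)*ch(2*g2) - 1) - mh*sh(2*d1)*sh(2*g1)
    Du2 = nh*(ch(2*d2)*ch(2*g1) - 1) - mh*sh(2*d2)*sh(2*g2)
    Sr, Dr = (Dr1 + Dr2)/2, (Dr2 - Dr1)/2   # Sigma_, Delta_{Delta r}
    Su, Du = (Du1 + Du2)/2, (Du2 - Du1)/2   # Sigma_, Delta_{Delta u}
    # eq. (solhata) as a quadratic in X = a^2; the smooth branch is the
    # smaller positive root (the other runs off to infinity as g -> 0)
    lhs = (ah**2 - Su**2 + g**2*Du**2*(ah**2 + Du**2)) \
          / (1 + g**2*(ah**2 + 2*Du**2))**2
    roots = np.roots([g**4*lhs, 2*g**2*lhs - 1, lhs])
    a2 = min(x.real for x in roots if abs(x.imag) < 1e-12 and x.real > 0)
    # eq. (solbeta) with N_g = 0, then eqs. (valm), (valn), (vv), (vale)
    beta = np.sqrt((1 + g**2*(ah**2 + 2*Du**2)) / (1 + g**2*a2))
    m = (mh + Sr) / beta**3
    n = (nh + Su) / beta**3                 # vanishes by construction
    v2 = (Dr**2 + Du**2) / beta**2
    k = a2                                  # eq. (solk) with N_g = 0
    e2 = (ah**2 + 2*mh*Sr + Sr**2
          - g**2*Dr**2*(ah**2 - Dr**2)) / beta**4 - k
    return dict(m=m, n=n, a=np.sqrt(a2), beta=beta, v2=v2, e2=e2)

print(map_parameters(1.0, 0.5, 0.3, 0.2, 0.1, 0.15, 0.2))
\end{verbatim}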
The matter fields can then be expressed in the coordinates $(t,r,\theta,\widetilde\phi)$. In the asymptotically AdS case, setting $N_g = 0$, we notice that $\widehat W = \beta^2 W$ and
\begin{align}
\widehat r_1 & = \beta r -\Delta_{\Delta r}, & \widehat r_2 & = \beta r +\Delta_{\Delta r}, \nonumber \\
\widehat u_1 & = \beta a \cos\theta -\Delta_{\Delta u}, & \widehat u_2 & = \beta a \cos\theta +\Delta_{\Delta u} .
\end{align}
We can immediately express the scalar fields \eq{scalarspairwise} in the new coordinates. The gauge fields are
\begin{align}
A^{1} & = \frac{Q_1}{\beta^2 W}(r+\beta^{-1}\Delta_{\Delta r}) \bigg( \textrm{d} t -\frac{a\sin^2\theta}{\Xi} \, \textrm{d} \widetilde{\phi} \bigg) \nonumber \\
& \quad - \frac{P^1}{\beta^2 W}(a\cos\theta+\beta^{-1}\Delta_{\Delta u}) \bigg(\textrm{d} t - \frac{L}{\Xi a} \, \textrm{d} \widetilde{\phi} \bigg) ,\nonumber \\
A^{2} & = \frac{Q_2}{\beta^2 W}(r-\beta^{-1}\Delta_{\Delta r}) \bigg( \textrm{d} t -\frac{a\sin^2\theta}{\Xi} \, \textrm{d} \widetilde{\phi} \bigg) \nonumber \\
& \quad - \frac{P^2}{\beta^2 W}(a\cos\theta - \beta^{-1}\Delta_{\Delta u}) \bigg(\textrm{d} t - \frac{L}{\Xi a} \, \textrm{d} \widetilde{\phi} \bigg) ,
\end{align}
and the dual gauge fields are
\begin{align}
\widetilde{A}_{1} & = \frac{P^1}{\beta^2 W}( r - \beta^{-1}\Delta_{\Delta r}) \bigg( \textrm{d} t -\frac{a\sin^2\theta}{\Xi} \, \textrm{d} \widetilde{\phi} \bigg) \nonumber \\
& \quad + \frac{Q_1}{\beta^2 W}(a\cos\theta -\beta^{-1}\Delta_{\Delta u}) \bigg( \textrm{d} t- \frac{L}{\Xi a} \, \textrm{d} \widetilde{\phi} \bigg) ,\nonumber \\
\widetilde{A}_{2} & = \frac{P^2}{\beta^2 W}( r + \beta^{-1}\Delta_{\Delta r}) \bigg( \textrm{d} t -\frac{a\sin^2\theta}{\Xi} \, \textrm{d} \widetilde{\phi} \bigg) \nonumber \\
& \quad + \frac{Q_2}{\beta^2 W}(a\cos\theta +\beta^{-1}\Delta_{\Delta u}) \bigg( \textrm{d} t- \frac{L}{\Xi a} \, \textrm{d} \widetilde{\phi} \bigg) ,
\end{align}
where $W(r, \theta)$ is given by \eq{defW} and $L(r)$ is given by \eq{defL}.
\subsection{Known subcases}
\subsubsection{Kerr--Newman--AdS}
When $\delta_1 = \delta_2$, $\gamma_1 = \gamma_2$ and $N = 0$, one recovers the dyonic Kerr--Newman--AdS black hole \cite{Carter, Carter:1968ks}. We provide some details of this solution, showing its embedding in our more general solution, since it might be the case best known to the reader. The scalar fields vanish, $\chi = 0$, $\varphi = 0$, and the gauge fields are equal, $A^1=A^2$, $\widetilde A_1 = \widetilde A_2$. The field equations are the Einstein--Maxwell equations with a cosmological constant. We have $v=0$, $ \beta^4 e^2=(Q_1)^2+(P^1)^2$ and the solution is the standard Kerr--Newman--AdS solution (see e.g.\ \cite{Caldarelli:1999xj}). To match the literature, we use the rescaled charge parameters $\overline{q} = Q_1/\beta^2$ and $\overline{p} = P^1/\beta^2$. Then the metric is
\begin{align}
\textrm{d} s^2 & = -\frac{R_g}{W}\left( \textrm{d} t-\frac{a}{\Xi} \sin^2\theta \, \textrm{d} \widetilde{\phi} \right)^2 + W \bigg( \frac{\textrm{d} r^2}{R_g} + \frac{\textrm{d} \theta^2}{\Theta_g} \bigg) \nonumber \\
& \quad + \frac{\Theta_g \, \sin^2\theta}{W }\left(a \, \textrm{d} t - \frac{r^2+a^2}{\Xi} \, \textrm{d} \widetilde{\phi} \right)^2 \label{AKN}
\end{align}
with
\begin{align}
W(r,\theta) &= r^2 + a^2\cos^2\theta , \nonumber \\
R_g(r) & = (1+ g^2r^2) ( r^2 + a^2 )-2 m r + \overline{q}^2 + \overline{p}^2 ,\nonumber \\
\Theta_g(\theta) & = 1 - a^2 g^2 \cos^2\theta.
\end{align}
The gauge field and dual gauge field are
\begin{align}
A^{1} & = \frac{\overline{q} r }{W} \bigg( \textrm{d} t - \frac{a\sin^2\theta}{\Xi} \, \textrm{d} \widetilde{\phi} \bigg) \nonumber \\
& \quad - \frac{\overline{p} a \cos\theta}{W} \bigg( \textrm{d} t - \frac{r^2+a^2}{\Xi a} \, \textrm{d} \widetilde{\phi} \bigg) , \nonumber \\
\widetilde{A}_{1} & = \frac{\overline{p} r}{W} \bigg( \textrm{d} t -\frac{a\sin^2\theta}{\Xi}\, \textrm{d} \widetilde{\phi} \bigg) \nonumber \\
& \quad + \frac{\overline{q} a \cos\theta}{W} \bigg( \textrm{d} t- \frac{r^2+a^2}{\Xi a} \, \textrm{d} \widetilde{\phi} \bigg) .
\end{align}
The physical electric and magnetic charges will be derived in \eqref{QP}. Note that the parametrization of the electric and magnetic charge is still intricate. One has
\begin{align}
\overline{q} & = \beta^{-2} Q_1 = \beta^{-2}\widehat m \frac{\cosh (4\gamma_1) \, \sinh (2 \delta_1)}{\cosh (2\gamma_1)}, \nonumber \\
\overline{p} & = \beta^{-2} P^1 = \beta^{-2}\widehat m \frac{\sinh (2\gamma_1)}{\cosh (2\delta_1)}. \label{sys}
\end{align}
If, however, the black hole is only electrically or magnetically charged, then $\beta = 1$ and the parametrization is optimal in terms of a single hyperbolic function.
\subsubsection{Ungauged solutions}
In the ungauged limit $g=0$, one recovers the solution of $\mathcal{N} = 4$ supergravity \cite{LozanoTellechea:1999my}, restricting to 2 of the 6 vectors, and to scalars that vanish at infinity. A further specialization, taking $\delta_2 = \gamma_2 = 0$, gives the solution of Einstein--Maxwell--dilaton--axion gravity \cite{Galtsov:1994pd}, restricting to scalars that vanish at infinity. To match with these, take our solution in $(\widehat{t}, \widehat{r}, \widehat{u}, \widehat{\phi})$ coordinates. Note that
\begin{equation}
\widehat{W} = (\widehat{r} + \Sigma_{\Delta r})^2 + (\widehat{u} + \Sigma_{\Delta u})^2 - \upsilon^2 ,
\end{equation}
where
\begin{equation}
(M^2 + N^2) \upsilon^2 = \Delta_2^2 -\Delta_4
\end{equation}
and $\Delta_2$ and $\Delta_4$ are defined as
\begin{align}
\Delta_2 & = \frac{1}{4}\sum_{a=1}^2 \big( (Q_a)^2+(P^a)^2 \big), & \Delta_4 & = \frac{1}{4} (Q_1 Q_2 + P^1 P^2)^2.
\end{align}
Alternatively, we have
\begin{align}
16 (M^2 + N^2) \upsilon^2 & = [(Q_1)^2 + (P^2)^2 - (Q_2)^2 - (P^1)^2]^2 \nonumber \\
& \quad + 4 (Q_1 P^1 - Q_2 P^2)^2 ,
\end{align}
which shows that $\upsilon^2$ is non-negative. To match with \cite{LozanoTellechea:1999my}, we dualize the second gauge field and make the identifications $Q_1 = 2 Q^{(1)}$, $P^1 = 2 P^{(1)}$, $Q_2 = 2 P^{(2)}$, $P^2 = - 2 Q^{(2)}$, so that $\upsilon^2 = | \Upsilon |^2$ there. Now make the coordinate change $\widehat{r} + \Sigma_{\Delta r} = r + M$ and $\widehat{u} + \Sigma_{\Delta u} = N + \alpha \cos \theta$, $\widehat{a} = \alpha$, and the solution is seen to match. To match with \cite{Galtsov:1994pd}, set $\delta_2 = \gamma_2 = 0$, $Q_1 = 2 Q$, $P^1 = 2 P$, and perform similar translations of $\widehat{r}$ and $\widehat{u}$.
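The equivalence of the two expressions for $\upsilon^2$ above is a short algebraic identity, which can be confirmed with sympy:
\begin{verbatim}
import sympy as sp

Q1, Q2, P1, P2 = sp.symbols('Q1 Q2 P1 P2')

Delta2 = sp.Rational(1, 4)*(Q1**2 + P1**2 + Q2**2 + P2**2)
Delta4 = sp.Rational(1, 4)*(Q1*Q2 + P1*P2)**2

# the alternative form, divided by 16
alt = ((Q1**2 + P2**2 - Q2**2 - P1**2)**2 + 4*(Q1*P1 - Q2*P2)**2) / 16

assert sp.expand(Delta2**2 - Delta4 - alt) == 0
\end{verbatim}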
This ungauged solution is also a subset of the general black hole solution discussed in \cite{Chow:2013tia}, which has 4 electric charges and 4 magnetic charges. Indeed, defining the shifted coordinate $\widehat t_0$ as
\begin{equation}
\widehat t = \widehat t_0 - \frac{\Delta \widehat u_1 \Delta \widehat u_2}{\widehat a}\widehat \phi,
\end{equation}
the metric reads as
\begin{align}
\textrm{d} s^2 & = \widehat W \bigg( \frac{\textrm{d} \widehat r^2}{\widehat R_g} + \frac{\textrm{d} \widehat u^2}{\widehat U_g} \bigg) - \frac{\widehat R_g}{\widehat W} \bigg( \textrm{d} \widehat t_0 - \frac{\widehat U(\widehat u) - 2 N \widehat u}{\widehat a} \, \textrm{d} \widehat \phi \bigg)^2 \nonumber \\
& \quad + \frac{\widehat U_g}{\widehat W} \bigg( \textrm{d}\widehat t_0 - \frac{\widehat R(\widehat r) + 2 M \widehat r + V_2}{\widehat a} \, \textrm{d} \widehat \phi \bigg)^2 ,
\end{align}
where
\begin{align}
\widehat W(\widehat r,\widehat u) & = R(\widehat r)- U(\widehat u) + 2 M \widehat r + 2 N \widehat u + V_2,\nonumber \\
\widehat R_g(\widehat r) & = R(\widehat r) + g^2 \widehat r_1 \widehat r_2 (\widehat r_1 \widehat r_2 + \widehat a^2),\nonumber \\
\widehat U_g(\widehat u)& = U(\widehat u) + g^2 \widehat u_1 \widehat u_2 (\widehat u_1 \widehat u_2 - \widehat a^2),\nonumber \\
R(\widehat r) & = \widehat r^2 - 2 \widehat m \widehat r + \widehat a^2 ,\nonumber \\
U(\widehat u) & = -\widehat u^2 + 2 \widehat n \widehat u + \widehat a^2,\nonumber \\
V_2 & = \Delta \widehat r_1 \Delta \widehat r_2 + \Delta \widehat u_1 \Delta \widehat u_2.
\end{align}
One can compare this metric to the one written in \cite{Chow:2013tia}, when one restricts the solution of \cite{Chow:2013tia} to pairwise equal gauge fields in order to be a solution of $\mathcal{N}= 2$, $\textrm{U}(1)^2$ supergravity. The metrics match, upon recognizing that the function $L(r)$ defined in \cite{Chow:2013tia} is given by $2 M r+V_2$ in the case of pairwise equal gauge fields, and making the change of parameters and variables $\widehat{a}^2 = a^2 - n^2$ and $\widehat{\phi}/\widehat{a} = \phi/a$.
\subsubsection{Gauged solutions}
If $\gamma_1 = \gamma_2 = 0$, then we have the solution of \cite{Chong:2004na} that is electrically charged and includes a NUT charge. If furthermore the NUT charge vanishes, then we have the simplifications that $\Delta \widehat u_1 = \Delta \widehat u_2 = 0$, $\widehat a = a$ and $\beta = 1$. We shall later reexamine the supersymmetric asymptotically AdS solutions within this class.
\subsection{Thermodynamics}
\label{thermo2}
\subsubsection{Conserved charges}
\label{thermo2mass}
The crux of the analysis of black hole thermodynamics is the definition of the black hole mass. We follow here the canonical Lagrangian definitions \cite{Regge:1974zd,Abbott:1981ff,Iyer:1994ys,Barnich:2001jy,Barnich:2007bf}. The explicit form of the conserved charge associated with a symmetry of a Lagrangian of the form \eqref{generalaction} has been derived in \cite{Compere:2009dp}. The definition of angular momentum can be obtained from the same formalism. For special cases, the thermodynamical quantities have been previously computed in \cite{Kostelecky:1995ei, Caldarelli:1999xj, Cvetic:2005zi,Papadimitriou:2005ii}.
We will set the NUT charge to zero from now on, which implies $n=N=N_g=0$. As emphasized in \cite{Gibbons:2004ai}, the thermodynamics is best understood using a non-rotating frame at infinity. Since the coordinate frame $(t, r, \theta, \phi)$ is non-rotating at infinity, in these coordinates the mass and angular momentum are associated with the respective Killing vectors $\partial/\partial t$ and $- \partial/\partial \phi$.
The gravitational contribution (which is derived from the Einstein action) to the angular momentum is finite and integrable. The gravitational contribution to the mass is linearly divergent in $r$ and not integrable. The matter sector therefore contributes to the charges. Looking at the asymptotic form of the canonical expression for the conserved charges, we notice that the gauge fields do not contribute to the mass or angular momentum. The scalar fields do not contribute to the angular momentum but they do contribute to the mass. More precisely, the scalar fields contribute to a linearly divergent piece in $r$, but do not contribute to the finite $r^0$ piece. This linearly divergent piece exactly cancels the divergence in the gravitational contribution, after using the explicit asymptotic form
\begin{align}
\varphi & = 2\frac{\Delta_{\Delta r}}{\beta r} + \frac{2\Delta_{\Delta u}(a \beta \cos\theta + \Delta_{\Delta u})}{\beta^2 r^2}+O(r^{-3}), \nonumber \\
\chi & = 2\frac{\Delta_{\Delta u}}{\beta r} - \frac{2 \Delta_{\Delta r}(a \beta \cos\theta + 2 \Delta_{\Delta u})}{\beta^2 r^2}+O(r^{-3}),
\end{align}
and the expression for $v$ in \eqref{vv}. The conserved mass is then finite and integrable.
The final expressions for the mass and angular momentum are
\begin{align}
M & = \frac{m}{G \Xi^2} , & J & = \frac{m a}{G \Xi^2} .
\end{align}
These expressions are already familiar from the Kerr--AdS black hole \cite{Gibbons:2004ai,Deruelle:2004mv,Barnich:2004uw}. Since the matter fields do not contribute to the finite conserved charges, it is not surprising that the mass and angular momentum agree with those of the uncharged black hole.
Using the canonical expressions for the charges derived from the action, one also obtains that the electromagnetic charges in geometrical units are
\begin{align}
\overline{Q}_a & = \frac{Q_a}{2G \beta^2 \Xi}, & \overline{P}^a & = \frac{P^a}{2G \beta^2 \Xi} .\label{QP}
\end{align}
The factor of 2 difference in the normalization of the gauge kinetic terms between \eqref{Lthird} and \eqref{LU2} is responsible for the factor of 2 between the definitions of $\overline{Q}_a,\, \overline{P}^a$ defined here and $\overline{Q}_I,\, \overline{P}^I$, $I=1,2,3,4$ defined in Section \ref{consch}. The factor of $1/\Xi$ is already familiar from the Kerr--Newman--AdS black hole \cite{Kostelecky:1995ei, Caldarelli:1999xj}. The factor of $\beta^{-2}$ is a new feature for dyonic Kerr--Newman--AdS black holes in our parametrization.
\subsubsection{First law}
\label{fl2}
The inner and outer black hole horizons are located at $r=r_\pm$, which are the zeros of the radial function $R_g(r)$ defined in \eqref{Rg}. The Killing generator is $\xi^\mu \, \partial_\mu = \partial_t + \Omega \, \partial_\phi$, where the angular velocity is
\begin{equation}
\Omega = a \bigg( \frac{\Xi}{L(r_+)} + g^2 \bigg) .
\end{equation}
The temperature and entropy of the outer horizon are
\begin{align}
T & = \frac{R_g'(r_+)}{4\pi L(r_+)}, & S & = \frac{\pi L(r_+)}{\Xi \, G},
\end{align}
where $L$ is defined in \eqref{defL}. The electric and magnetic potentials are defined as the difference between the potentials at the horizon and at infinity,
\begin{align}
\Phi^a & = \xi^\mu A^a_{\mu}|_{r=r_+} - \xi^\mu A^a_{\mu}|_{r=\infty}, \nonumber \\
\Psi_a & = \xi^\mu \widetilde{A}_{a \mu}|_{r=r_+}-\xi^\mu \widetilde{A}_{a \mu}|_{r=\infty} .
\end{align}
After remarkable simplifications, one obtains
\begin{align}
\Phi^1 & = \frac{Q_1 (\beta r_+ + \Delta_{\Delta r}) - P^1 \Delta_{\Delta u}}{\beta^3 L(r_+)}, \nonumber \\
\Phi^2 & = \frac{Q_2 (\beta r_+ - \Delta_{\Delta r}) + P^2 \Delta_{\Delta u}}{\beta^3 L(r_+)}, \nonumber \\
\Psi_1 & = \frac{P^1 (\beta r_+ - \Delta_{\Delta r})-Q_1 \Delta_{\Delta u}}{\beta^3 L(r_+)}, \nonumber \\
\Psi_2 & = \frac{P^2 (\beta r_+ + \Delta_{\Delta r})+Q_2 \Delta_{\Delta u}}{\beta^3 L(r_+)} .
\end{align}
The first law
\begin{equation}
\delta M = T \, \delta S + \Omega \, \delta J + \Phi^a \, \delta \overline{Q}_a + \Psi_a \, \delta \overline{P}^a,
\end{equation}
is obeyed for generic variations, which provides a non-trivial check of our expressions. In the static case, we checked explicitly that the symplectic flux is zero at $r \rightarrow \infty$, see Section \ref{secmass}. Boundary conditions with varying electric and magnetic charges in $\mathcal{N}= 2$, $\textrm{U}(1)^2$ gauged supergravity are therefore consistent on the restricted phase space of black hole solutions, and the first law was then expected to hold for static configurations. The closure of the first law in the general rotating case suggests that no qualitative change occurs when angular momentum is turned on. In particular, we checked explicitly that the symplectic flux is zero at $r \rightarrow \infty$ for the Kerr--Newman--AdS solutions.
\subsection{Supersymmetric solutions}
For purely electric solutions, the Bogomolny--Prasad--Sommerfield (BPS) condition for supersymmetric solutions was given in \cite{Cvetic:2005zi} as
\begin{equation}
M = gJ + \overline{Q}_1 + \overline{Q}_2 ,
\end{equation}
after choosing signs so that $g$, $J$, $\overline{Q}_1$ and $\overline{Q}_2$ are non-negative. The subsequent analysis has been performed previously; however, to correct a previous typographical error, we repeat the calculation. The BPS condition is
\begin{equation}
\expe{2 (\delta_1 + \delta_2)} = 1 + \frac{2}{a g} .
\end{equation}
With this condition, we have
\begin{align}
R_g & = g^2 \bigg( r_1 r_2 - \frac{2}{g^2 (\expe{2 (\delta_1 + \delta_2)} - 1)} \bigg) ^2 \nonumber \\
& \quad + \coth^2 (\delta_1 + \delta_2) \bigg( r - \frac{2 m s_{\delta 1} s_{\delta 2}}{\cosh (\delta_1 + \delta_2)} \bigg) ^2 .
\end{align}
This is a sum of two squares, so at a horizon both squares must vanish. In general, the zeros of the two squares are different, so the supersymmetry condition alone leads to a solution that is singular. If there is a horizon, then the second square implies that it is at
\begin{equation}
r = r_0 \equiv \frac{2 m s_{\delta 1} s_{\delta 2}}{\cosh (\delta_1 + \delta_2)} .
\end{equation}
Substituting into the first square and requiring it to vanish gives an additional condition that has to be imposed for the solution to have a regular horizon, namely
\begin{equation}
m^2 g^2 = \frac{\cosh^2 (\delta_1 + \delta_2)}{\expe{\delta_1 + \delta_2} \sinh^3 (\delta_1 + \delta_2) \sinh(2 \delta_1) \sinh(2 \delta_2)} .
\end{equation}
$R_g$ then possesses a double root at $r = r_0$, which indicates that the temperature vanishes, as it must for a supersymmetric solution. The supersymmetric Kerr--Newman--AdS black hole \cite{Kostelecky:1995ei, Caldarelli:1998hg} is recovered when furthermore $\delta_1 = \delta_2$.
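This regularity condition can be verified symbolically. A sympy sketch, using $r_a = r + 2 m \sinh^2 \delta_a$ (the $\gamma_a = 0$, $\widehat n = 0$ specialization of the shifts above) and our own variable names:
\begin{verbatim}
import sympy as sp

m, d1, d2 = sp.symbols('m delta1 delta2', positive=True)
G2 = sp.Symbol('G2', positive=True)     # stands for g^2
s = d1 + d2

r0 = 2*m*sp.sinh(d1)*sp.sinh(d2)/sp.cosh(s)
r1 = r0 + 2*m*sp.sinh(d1)**2            # r_a evaluated at the horizon r_0
r2 = r0 + 2*m*sp.sinh(d2)**2

# the first square vanishes at r_0: r1 r2 = 2 / (g^2 (e^{2s} - 1))
g2_sol = sp.solve(sp.Eq(r1*r2, 2/(G2*(sp.exp(2*s) - 1))), G2)[0]
m2g2 = sp.simplify(m**2 * g2_sol)

expected = sp.cosh(s)**2 / (sp.exp(s)*sp.sinh(s)**3
                            * sp.sinh(2*d1)*sp.sinh(2*d2))
assert sp.simplify((m2g2 - expected).rewrite(sp.exp)) == 0
\end{verbatim}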
There are complications with BPS bounds when magnetic charge is included. For example, in minimal gauged supergravity two different BPS bounds were found \cite{Hristov:2011ye}, corresponding to different superalgebras. The analysis has been generalized to include more general matter couplings \cite{Hristov:2011qr}.
An alternative, and definitive, approach to finding supersymmetric solutions is to find Killing spinors. This approach has recently been carried out in full \cite{Klemm:2013eca} for the Pleba\'{n}ski--Demia\'{n}ski solution \cite{Plebanski:1976gy}, which includes the dyonic Kerr--Newman--Taub--NUT--AdS solution, of Einstein--Maxwell theory with a negative cosmological constant, i.e.\ the bosonic sector of minimal $\mathcal{N} = 2$ gauged supergravity. Within the Kerr--Newman--AdS family, supersymmetric black holes must carry only electric charge \cite{Caldarelli:1998hg}. Supersymmetric solutions of $\mathcal{N} = 2$ gauged supergravity coupled to vector multiplets have been classified \cite{Cacciatori:2008ek, Klemm:2009uw}.
\subsection{Killing tensors and separability}
The spacetime admits a web of various interrelated (conformal) Killing tensors. For reviews of these tensors, see e.g.\ \cite{Yasui:2011pr}. They are related to the separability of equations for geodesic motion and for probe scalar fields.
\subsubsection{Killing tensors}
We now recall some relevant definitions of Killing tensors. A Killing--St\"{a}ckel (KS) tensor $K_{\mu \nu} = K_{(\mu \nu)}$ satisfies $\nabla_{(\mu} K_{\nu \rho)} = 0$. A conformal Killing--St\"{a}ckel (CKS) tensor $Q_{\mu \nu} = Q_{(\mu \nu)}$ satisfies $\nabla_{(\mu} Q_{\nu \rho)} = q_{(\mu} g_{\nu \rho)}$ for some $q_\mu$, given in 4 dimensions by $q_\mu = \tfrac{1}{6} (\partial_\mu Q{^\nu}{_\nu} + 2 \nabla_\nu Q{^\nu}{_\mu})$. A Killing--Yano (KY) tensor $Y_{\mu \nu} = Y_{[\mu \nu]}$ satisfies $\nabla_{(\mu} Y_{\nu) \rho} = 0$. A Killing--Yano tensor with torsion (KYT tensor), $Y_{\mu \nu} = Y_{[\mu \nu]}$, satisfies $\nabla^T_{(\mu} Y_{\nu) \rho} = 0$, where the covariant derivative $\nabla^T$ uses the connection $\Gamma{^\mu}{_{\nu \rho}} + T{^\mu}{_{\nu \rho}}$, including both the Levi-Civita connection $\Gamma{^\mu}{_{\nu \rho}}$ and a torsion $T{^\mu}{_{\nu \rho}}$ such that $T_{\mu \nu \rho} = T_{[\mu \nu \rho]}$, i.e.\ derived from a 3-form. A conformal Killing--Yano tensor with torsion (CKYT tensor), $k_{\mu \nu} = k_{[\mu \nu]}$, satisfies $\nabla^T_{(\mu} k_{\nu) \rho} = k_\rho g_{\mu \nu} - k_{(\mu} g_{\nu) \rho}$ for some $k_\mu$, given in 4 dimensions by $k_\mu = \tfrac{1}{3} \nabla{^T}{_\nu} k{^\nu}{_\mu}$. A CKYT tensor is a closed conformal Killing--Yano tensor with torsion (CCKYT tensor) if furthermore $\nabla{^T}{_{[\mu}} k_{\nu \rho]} = 0$. The literature sometimes uses ``generalized'' to mean ``with torsion''.
We now recall some relevant results about Killing tensors. The Hodge dual of a KYT tensor is a CCKYT tensor (with the same torsion), and vice versa. If $Y_{\mu \nu}$ is a KY(T) tensor, then $K_{\mu \nu} = Y{_\mu}{^\rho} Y_{\rho \nu}$ is a KS tensor. If $Q^{\mu \nu}$ is a CKS tensor, then the components $Q^{\mu \nu}$ give a CKS tensor for any conformally related metric.
There are two metrics of interest: the usual Einstein frame metric $\textrm{d} s^2$, and the conformally related string frame metric
\begin{equation}
\textrm{d} \widetilde{s}^2 = \frac{r^2 + u^2}{W} \, \textrm{d} s^2 .
\label{stringframemetric}
\end{equation}
For black hole solutions of supergravity, usually only the string frame metric admits a KS tensor \cite{Chow:2008fe}, not the Einstein frame metric. However, for the special solutions considered here, both the string frame and Einstein frame metrics admit KS tensors. Both KS tensors have two ``square roots'' given by KYT tensors.
If we make the coordinate change $\tau = \widehat t - a \widehat \phi$, $\psi = \widehat \phi/a$, then the Einstein frame metric \eqref{mads3} is
\begin{align}
\textrm{d} s^2 & = - \frac{R_g}{W} (\textrm{d} \tau + u_1 u_2 \, \textrm{d} \psi)^2 + \frac{W}{R_g} \, \textrm{d} r^2 \nonumber \\
& \quad + \frac{U_g}{W} (\textrm{d} \tau - r_1 r_2 \, \textrm{d} \psi)^2 + \frac{W}{U_g} \, \textrm{d} u^2 .
\end{align}
Here and below, we omit the hat on functions and coordinates for simplicity. The gauge fields are
\begin{align}
A^1 & = \frac{1}{ W} \bigg(Q_1 r_2 (\textrm{d} \tau + u_1 u_2 \, \textrm{d} \psi) -P^1 u_2 (\textrm{d} \tau - r_1 r_2 \, \textrm{d} \psi) \bigg) , \nonumber \\
A^2 & = \frac{1}{ W} \bigg( Q_2 r_1 (\textrm{d} \tau + u_1 u_2 \, \textrm{d} \psi) -P^2 u_1 (\textrm{d} \tau - r_1 r_2 \, \textrm{d} \psi) \bigg) ,
\end{align}
and the dual gauge fields are
\begin{align}
\widetilde{A}_1 & = \frac{1}{ W} \bigg( P^1 r_1 (\textrm{d} \tau + u_1 u_2 \, \textrm{d} \psi) + Q_1 u_1 (\textrm{d} \tau - r_1 r_2 \, \textrm{d} \psi) \bigg) , \nonumber \\
\widetilde{A}_2 & = \frac{1}{ W} \bigg( P^2 r_2 (\textrm{d} \tau + u_1 u_2 \, \textrm{d} \psi) +Q_2 u_2 (\textrm{d} \tau - r_1 r_2 \, \textrm{d} \psi) \bigg) .
\end{align}
In fact, these are probably the simplest coordinates for expressing the solution locally.
Consider more generally the metric
\begin{align}
\textrm{d} s^2 & = - \frac{R}{W} (\textrm{d} \tau + W_u \, \textrm{d} \psi)^2 + \frac{W}{R} \, \textrm{d} r^2 \nonumber \\
& \quad + \frac{U}{W} (\textrm{d} \tau - W_r \, \textrm{d} \psi)^2 + \frac{W}{U} \, \textrm{d} u^2 ,
\end{align}
where $W = W_r + W_u$; $R$ and $W_r$ are arbitrary functions of $r$; and $U$ and $W_u$ are arbitrary functions of $u$. Here and below we drop the subscript $_g$ in $R$ and $U$ for simplicity. Introduce the vielbeins
\begin{align}
e^0 & = \frac{\sqrt{R}}{\sqrt{W}} (\textrm{d} \tau + W_u \, \textrm{d} \psi) , & e^1 & = \frac{\sqrt{W}}{\sqrt{R}} \, \textrm{d} r , \nonumber \\
e^2 & = \frac{\sqrt{U}}{\sqrt{W}} (\textrm{d} \tau - W_r \, \textrm{d} \psi) , & e^3 & = \frac{\sqrt{W}}{\sqrt{U}} \, \textrm{d} u .
\end{align}
Two KYT tensors are
\begin{equation}
Y_\pm = \sqrt{W_u} e^0 \wedge e^1 \pm \sqrt{W_r} e^2 \wedge e^3 ,
\end{equation}
with corresponding torsions
\begin{align}
T_\pm & = \frac{1}{W} \bigg[ U \bigg( \partial_r W_r \mp \frac{\sqrt{W_r}}{\sqrt{W_u}} \partial_u W_u \bigg) \, \textrm{d} r \nonumber \\
& \quad + R \bigg( \partial_u W_u \mp \frac{\sqrt{W_u}}{\sqrt{W_r}} \partial_r W_r \bigg) \, \textrm{d} u \bigg] \wedge \textrm{d} \tau \wedge \textrm{d} \psi .
\end{align}
Squaring either $Y_\pm$ gives the KS tensor
\begin{equation}
K_{\mu \nu} \, \textrm{d} x^\mu \, \textrm{d} x^\nu = W_u (- e^0 e^0 + e^1 e^1) - W_r (e^2 e^2 + e^3 e^3) .
\end{equation}
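That squaring $Y_\pm$ yields this KS tensor is pure frame algebra, with frame metric $\eta = \textrm{diag}(-1,1,1,1)$; a short sympy check (our own conventions):
\begin{verbatim}
import sympy as sp

Wr, Wu = sp.symbols('W_r W_u', positive=True)
eta = sp.diag(-1, 1, 1, 1)    # orthonormal-frame metric (its own inverse)

for sign in (+1, -1):
    # frame components of Y_pm = sqrt(W_u) e^0^e^1 +- sqrt(W_r) e^2^e^3
    Y = sp.zeros(4, 4)
    Y[0, 1], Y[1, 0] = sp.sqrt(Wu), -sp.sqrt(Wu)
    Y[2, 3], Y[3, 2] = sign*sp.sqrt(Wr), -sign*sp.sqrt(Wr)
    K = Y * eta * Y           # K_ab = Y_a^c Y_cb
    assert sp.simplify(K - sp.diag(-Wu, Wu, -Wr, -Wr)) == sp.zeros(4, 4)
\end{verbatim}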
Taking Hodge duals of $Y_\pm$ gives the CCKYT tensors
\begin{equation}
k_\pm = \pm \sqrt{W_r} e^0 \wedge e^1 - \sqrt{W_u} e^2 \wedge e^3 .
\end{equation}
If we make the coordinate change $r' = \sqrt{W_r}$, $u' = \sqrt{W_u}$, and define the functions $f_r = 1/ (\partial_r \sqrt{W_r})$, $f_u = 1/ (\partial_u \sqrt{W_u})$, $R' = R/f_r^2$, $U' = U/f_u^2$, then the metric takes the form
\begin{align}
\textrm{d} s^2 & = - \frac{R' f_r^2}{r'{^2} + u'{^2}} (\textrm{d} \tau + u'{^2} \, \textrm{d} \psi)^2 + \frac{r'{^2} + u'{^2}}{R'} \, \textrm{d} r'{^2} \nonumber \\
& \quad + \frac{U' f_u^2}{r'{^2} + u'{^2}} (\textrm{d} \tau - r'{^2} \, \textrm{d} \psi)^2 + \frac{r'{^2} + u'{^2}}{U'} \, \textrm{d} u'{^2} .
\end{align}
The torsions take the form
\begin{align}
T_\pm & = - \frac{2}{r'{^2} + u'{^2}} \bigg[ \bigg( \frac{f_u}{f_r} \mp 1 \bigg) \frac{r' \sqrt{U'}}{\sqrt{r'{^2} + u'{^2}}} e^1 \nonumber \\
& \quad + \bigg( \frac{f_r}{f_u} \mp 1 \bigg) \frac{u' \sqrt{R'}}{\sqrt{r'{^2} + u'{^2}}} e^3 \bigg] \wedge e^0 \wedge e^2 .
\end{align}
These forms of the metric and torsion $T_+$ manifestly fit into the classification of metrics admitting a KYT tensor \cite{Houri:2012eq}, specifically even-dimensional of type A, after analytically continuing to Riemannian signature.
The string frame metric \eq{stringframemetric} can be expressed in terms of the vielbeins
\begin{align}
\widetilde{e}^0 & = \frac{\sqrt{(r^2 + u^2) R}}{W} (\textrm{d} \tau + W_u \, \textrm{d} \psi) , & \widetilde{e}^1 & = \frac{\sqrt{r^2 + u^2}}{\sqrt{R}} \, \textrm{d} r , \nonumber \\
\widetilde{e}^2 & = \frac{\sqrt{(r^2 + u^2) U}}{W} (\textrm{d} \tau - W_r \, \textrm{d} \psi) , & \widetilde{e}^3 & = \frac{\sqrt{r^2 + u^2}}{\sqrt{U}} \, \textrm{d} u .
\end{align}
Two KYT tensors are
\begin{equation}
\widetilde{Y}_\pm = u \widetilde{e}^0 \wedge \widetilde{e}^1 \pm r \widetilde{e}^2 \wedge \widetilde{e}^3 ,
\end{equation}
with corresponding torsions
\begin{align}
\widetilde{T}_\pm & = \frac{r^2 + u^2}{W} \bigg[ U \bigg( \frac{\partial_r W_r}{W} \mp \frac{2 r}{r^2 + u^2}\bigg) \, \textrm{d} r \nonumber \\
& \quad + R \bigg( \frac{\partial_u W_u}{W} \mp \frac{2 u}{r^2 + u^2} \bigg) \, \textrm{d} u \bigg] \wedge \textrm{d} \tau \wedge \textrm{d} \psi \nonumber \\
& = - \bigg[ \bigg( \frac{\partial_r W_r}{W} \mp \frac{2 r}{r^2 + u^2}\bigg) \frac{\sqrt{U}}{\sqrt{r^2 + u^2}} \widetilde{e}^1 \nonumber \\
& \quad + \bigg( \frac{\partial_u W_u}{W} \mp \frac{2 u}{r^2 + u^2} \bigg) \frac{\sqrt{R}}{\sqrt{r^2 + u^2}} \widetilde{e}^3 \bigg] \wedge \widetilde{e}^0 \wedge \widetilde{e}^2 .
\end{align}
Squaring either $\widetilde{Y}_\pm$ gives the KS tensor
\begin{equation}
\widetilde{K}_{\mu \nu} \, \textrm{d} x^\mu \, \textrm{d} x^\nu = u^2 (- \widetilde{e}^0 \widetilde{e}^0 + \widetilde{e}^1 \widetilde{e}^1) - r^2 (\widetilde{e}^2 \widetilde{e}^2 + \widetilde{e}^3 \widetilde{e}^3) .
\end{equation}
Taking Hodge duals of $\widetilde{Y}_\pm$ gives the CCKYT tensors
\begin{equation}
\widetilde{k}_\pm = \pm r \widetilde{e}^0 \wedge \widetilde{e}^1 - u \widetilde{e}^2 \wedge \widetilde{e}^3 .
\end{equation}
The string frame metric can also be written as
\begin{align}
\textrm{d} \widetilde{s}^2 & = - \frac{R}{r^2 + u^2} (\textrm{d} \tau + u^2 \, \textrm{d} \psi - \mathcal{A})^2 + \frac{r^2 + u^2}{R} \, \textrm{d} r^2 \nonumber \\
& \quad + \frac{U}{r^2 + u^2} (\textrm{d} \tau - r^2 \, \textrm{d} \psi - \mathcal{A})^2 + \frac{r^2 + u^2}{U} \, \textrm{d} u^2 ,
\end{align}
where
\begin{align}
\mathcal{A} & = \frac{r^2 + u^2}{W} \bigg( \frac{W_r - r^2}{r^2 + u^2} (\textrm{d} \tau + u^2 \, \textrm{d} \psi) \nonumber \\
& \quad + \frac{W_u - u^2}{r^2 + u^2} (\textrm{d} \tau - r^2 \, \textrm{d} \psi)\bigg) .
\end{align}
The torsion $\widetilde{T}_+$ can also be written as
\begin{align}
\widetilde{T}_+ & = - \bigg[ \partial_r \log \bigg( \frac{W}{r^2 + u^2} \bigg) \frac{\sqrt{U}}{\sqrt{r^2 + u^2}} \widetilde{e}^1 \nonumber \\
& \quad + \partial_u \log \bigg( \frac{W}{r^2 + u^2} \bigg) \frac{\sqrt{R}}{\sqrt{r^2 + u^2}} \widetilde{e}^3 \bigg] \wedge \widetilde{e}^0 \wedge \widetilde{e}^2 .
\end{align}
These forms of the metric and torsion $\widetilde{T}_+$ also manifestly fit into the classification of metrics admitting a KYT tensor \cite{Houri:2012eq}, again even-dimensional of type A, after analytically continuing to Riemannian signature.
The string frame KS tensor $\widetilde{K}_{\mu \nu}$ induces a CKS tensor $Q_{\mu \nu}$ for the Einstein frame metric, with components $Q^{\mu \nu} = \widetilde{K}^{\mu \nu}$. Similarly, the Einstein frame KS tensor $K_{\mu \nu}$ induces a CKS tensor $\widetilde{Q}_{\mu \nu}$ for the string frame metric, with components $\widetilde{Q}^{\mu \nu} = K^{\mu \nu}$. In fact,
\begin{align}
Q_{\mu \nu} & = K_{\mu \nu} + q g_{\mu \nu} , & \widetilde{Q}_{\mu \nu} & = \widetilde{K}_{\mu \nu} + \widetilde{q} \widetilde{g}_{\mu \nu} ,
\end{align}
where
\begin{align}
q & = \frac{u^2 W_r - r^2 W_u}{r^2 + u^2} , & \widetilde{q} & = \frac{r^2 W_u - u^2 W_r}{W} .
\end{align}
Therefore, $\nabla_{(\mu} Q_{\nu \rho)} = q_{(\mu} g_{\nu \rho)}$ and $\nabla_{(\mu} \widetilde{Q}_{\nu \rho)} = \widetilde{q}_{(\mu} \widetilde{g}_{\nu \rho)}$, where $q_\mu = \partial_\mu q$ and $\widetilde{q}_\mu = \partial_\mu \widetilde{q}$.
Despite the interesting geometrical structures, the physical interpretations of the torsions are generally unclear. While the torsions $T_+$ and $\widetilde{T}_+$ vanish for the uncharged Kerr--Taub--NUT--AdS solution (for which the string and Einstein frames coincide), recovering the known KY tensor, in this limit the torsions $T_-$ and $\widetilde{T}_-$ do not vanish. In special cases, the string frame torsion $\widetilde{T}_+$, but not the Einstein frame torsion $T_+$, is physically motivated by dualizing the axion $\chi$ to give a 3-form field strength $H$, which is identified as the torsion \cite{Houri:2010fr}. Analogously in 5-dimensional minimal gauged supergravity, the vector can be dualized \cite{Kubiznak:2009qi}. However, the dualization cannot be performed for solutions of gauged supergravity, since the potential \eq{gauged} involves the bare axion potential, not its derivative. Furthermore, although \cite{Houri:2010fr} found the correct torsion by dualizing the axion for the ``Kerr--Sen'' black hole of ungauged supergravity (the $\delta_2 = \gamma_1 = \gamma_2 = \widehat{n} = g = 0$ solution), for which there is a single electric gauge field, the procedure fails for our more general dyonic solution. More specifically, in the Kerr--Sen case $\textrm{d} \widetilde{T}_+ + F^1 \wedge F^1 = 0$, which is consistent with identifying the torsion with $H$, but $\textrm{d} \widetilde{T}_+ + F^1 \wedge F^1 \neq 0$ when we generalize to $\gamma_1 \neq 0$.
\subsubsection{Separability}
The KS tensors in Einstein frame and string frame guarantee the complete integrability of geodesic motion in both these frames, which we now demonstrate explicitly. The Einstein and string frame metrics have respective inverses
\begin{align}
\bigg( \frac{\partial}{\partial s} \bigg) ^2 & = \frac{1}{W} \bigg( - \frac{(W_r \, \partial_\tau + \partial_\psi)^2}{R} + R \, \partial_r^2 \nonumber \\
& \quad + \frac{(W_u \, \partial_\tau - \partial_\psi)^2}{U} + U \, \partial_u^2 \bigg) , \nonumber \\
\bigg( \frac{\partial}{\partial \widetilde{s}} \bigg) ^2 & = \frac{1}{r^2 + u^2} \bigg( - \frac{(W_r \, \partial_\tau + \partial_\psi)^2}{R} + R \, \partial_r^2 \nonumber \\
& \quad + \frac{(W_u \, \partial_\tau - \partial_\psi)^2}{U} + U \, \partial_u^2 \bigg) ,
\end{align}
and metric determinants given by
\begin{align}
\sqrt{-g} & = W , & \sqrt{- \widetilde{g}} & = \frac{(r^2 + u^2)^2}{W} .
\end{align}
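As an illustrative consistency check (not part of the original derivation), these expressions can be verified symbolically. The following Python sketch, assuming the \texttt{sympy} library, builds the general Einstein frame metric above in coordinates $(\tau, r, u, \psi)$ and confirms the determinant and the $g^{rr}$, $g^{uu}$ components of the inverse.
\begin{verbatim}
import sympy as sp

tau, r, u, psi = sp.symbols('tau r u psi', real=True)
R = sp.Function('R')(r);  Wr = sp.Function('W_r')(r)
U = sp.Function('U')(u);  Wu = sp.Function('W_u')(u)
W = Wr + Wu

# Einstein frame metric in coordinate order (tau, r, u, psi):
# ds^2 = -(R/W)(dtau + Wu dpsi)^2 + (W/R) dr^2
#        +(U/W)(dtau - Wr dpsi)^2 + (W/U) du^2
g = sp.zeros(4, 4)
g[0, 0] = (U - R)/W
g[0, 3] = g[3, 0] = -(R*Wu + U*Wr)/W
g[3, 3] = (U*Wr**2 - R*Wu**2)/W
g[1, 1] = W/R
g[2, 2] = W/U

assert sp.simplify(g.det() + W**2) == 0    # det g = -W^2, so sqrt(-g) = W

ginv = g.inv()
assert sp.simplify(ginv[1, 1] - R/W) == 0  # g^{rr} = R/W
assert sp.simplify(ginv[2, 2] - U/W) == 0  # g^{uu} = U/W
\end{verbatim}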
In Einstein frame, the Hamilton--Jacobi equation for geodesic motion is
\begin{equation}
\frac{\partial S}{\partial \lambda} + \frac{1}{2} g^{\mu \nu} \, \partial_\mu S \, \partial_\nu S = 0 ,
\end{equation}
where $S$ is Hamilton's principal function, $\partial_\mu S = p_\mu = \textrm{d} x_\mu / \textrm{d} \lambda$, $p_\mu$ are the momenta conjugate to $x^\mu$, and $\lambda$ is an affine parameter. Consider the ansatz
\begin{equation}
S = \tfrac{1}{2} \mu^2 \lambda - E \tau + L \psi + S_r (r) + S_u (u) .
\end{equation}
The constants $p_\tau = - E$ and $p_\psi = L$ are momenta conjugate to the ignorable coordinates $\tau$ and $\psi$, related to energy and angular momentum. The particle mass is $\mu$, so that $p^\mu p_\mu = - \mu^2$. The components $W g^{\mu \nu}$ are additively separable into functions of $r$ and of $u$, and so the Hamilton--Jacobi equation is additively separable. Explicitly, we have
\begin{align}
& - \frac{(W_r E - L)^2}{R} + \frac{(W_u E + L)^2}{U} + R \bigg( \frac{\textrm{d} S_r}{\textrm{d} r} \bigg) ^2 \nonumber \\
& + U \bigg( \frac{\textrm{d} S_u}{\textrm{d} u} \bigg) ^2 + \mu^2 (W_r + W_u) = 0 ,
\label{KleinGordon}
\end{align}
and so
\begin{align}
\frac{\textrm{d} S_r}{\textrm{d} r} & = \frac{1}{R} \sqrt{(W_r E - L)^2 - (C + \mu^2 W_r) R} , \nonumber \\
\frac{\textrm{d} S_u}{\textrm{d} u} & = \frac{1}{U} \sqrt{- (W_u E + L)^2 + (C- \mu^2 W_u) U} ,
\end{align}
where $C$ is a separation constant. We then determine $r(\lambda)$ and $u (\lambda)$ by integrating
\begin{align}
\frac{\textrm{d} r}{\textrm{d} \lambda} & = g^{r r} p_r = \frac{R}{W} \frac{\textrm{d} S_r}{\textrm{d} r} , & \frac{\textrm{d} u}{\textrm{d} \lambda} & = g^{u u} p_u = \frac{U}{W} \frac{\textrm{d} S_u}{\textrm{d} u} .
\end{align}
Finally, we determine $\tau (\lambda)$ and $\psi (\lambda)$ by integrating
\begin{align}
\frac{\textrm{d} \tau}{\textrm{d} \lambda} & = g^{\tau \tau} p_\tau + g^{\tau \psi} p_\psi \nonumber \\
& = \frac{E}{W} \bigg( \frac{W_r^2}{R} - \frac{W_u^2}{U} \bigg) - \frac{L}{W} \bigg( \frac{W_r}{R} + \frac{W_u}{U} \bigg) , \nonumber \\
\frac{\textrm{d} \psi}{\textrm{d} \lambda} & = g^{\tau \psi} p_\tau + g^{\psi \psi} p_\psi \nonumber \\
& = \frac{E}{W} \bigg( \frac{W_r}{R} + \frac{W_u}{U} \bigg) + \frac{L}{W} \bigg( \frac{1}{U} - \frac{1}{R} \bigg) .
\end{align}
The separation can analogously be explicitly demonstrated in string frame, by replacing $\mu^2 (W_r + W_u)$ in \eq{KleinGordon} by $\mu^2 (r^2 + u^2)$.
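To make the integration scheme concrete, the following Python sketch (assuming \texttt{numpy} and \texttt{scipy}) integrates the separated first-order system numerically. The functions $R$, $U$, $W_r$, $W_u$ and all parameter values are placeholder Kerr-like choices, not the charged solutions above, and only the outgoing branch of each square root is followed.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder Kerr-like choices (W_r = r^2, W_u = u^2, u = a cos(theta));
# all functions and parameter values here are illustrative only.
m, a = 1.0, 0.9
R  = lambda r: r**2 - 2.0*m*r + a**2
U  = lambda u: a**2 - u**2
Wr = lambda r: r**2
Wu = lambda u: u**2

E, L, C, mu2 = 1.0, 0.1, 1.0, 0.0   # conserved quantities (placeholders)

def rhs(lam, y):
    r, u, tau, psi = y
    W = Wr(r) + Wu(u)
    # dS_r/dr and dS_u/du; max(., 0) clips at turning points, and only
    # the outgoing branch of each square root is followed in this sketch
    dSr = np.sqrt(max((Wr(r)*E - L)**2 - (C + mu2*Wr(r))*R(r), 0.0))/R(r)
    dSu = np.sqrt(max(-(Wu(u)*E + L)**2 + (C - mu2*Wu(u))*U(u), 0.0))/U(u)
    dtau = E/W*(Wr(r)**2/R(r) - Wu(u)**2/U(u)) \
           - L/W*(Wr(r)/R(r) + Wu(u)/U(u))
    dpsi = E/W*(Wr(r)/R(r) + Wu(u)/U(u)) + L/W*(1.0/U(u) - 1.0/R(r))
    return [R(r)/W*dSr, U(u)/W*dSu, dtau, dpsi]

sol = solve_ivp(rhs, (0.0, 20.0), [10.0, 0.1, 0.0, 0.0], rtol=1e-8)
r_of_lambda, u_of_lambda = sol.y[0], sol.y[1]
\end{verbatim}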
The massive Klein--Gordon equation for the Einstein frame metric is
\begin{equation}
\square \Phi = \frac{1}{\sqrt{-g}} \partial_\mu (\sqrt{-g} g^{\mu \nu} \partial_\nu \Phi) = \mu^2 \Phi .
\end{equation}
Consider the ansatz
\begin{equation}
\Phi = \Phi_r (r) \Phi_u (u) \expe{\textrm{i} (k \psi - \omega \tau)} .
\end{equation}
Then the Klein--Gordon equation gives
\begin{align}
\mu^2 W & = \frac{(\omega W_r - k)^2}{R} - \frac{(\omega W_u + k)^2}{U} + \frac{1}{\Phi_r} \frac{\textrm{d}}{\textrm{d} r} \bigg( R \frac{\textrm{d} \Phi_r}{\textrm{d} r} \bigg) \nonumber \\
& \quad + \frac{1}{\Phi_u} \frac{\textrm{d}}{\textrm{d} u} \bigg( U \frac{\textrm{d} \Phi_u}{\textrm{d} u} \bigg) ,
\end{align}
which separates to give
\begin{align}
& \frac{\textrm{d}}{\textrm{d} r} \bigg( R \frac{\textrm{d} \Phi_r}{\textrm{d} r} \bigg) + \bigg( \frac{(\omega W_r - k)^2}{R} - \mu^2 W_r + C \bigg) \Phi_r = 0 , \nonumber \\
& \frac{\textrm{d}}{\textrm{d} u} \bigg( U \frac{\textrm{d} \Phi_u}{\textrm{d} u} \bigg) - \bigg( \frac{(\omega W_u + k)^2}{U} + \mu^2 W_u + C \bigg) \Phi_u = 0 ,
\end{align}
where $C$ is a separation constant. For the black hole solutions we found, these are Fuchsian second-order ordinary differential equations.
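As a quick symbolic check (assuming \texttt{sympy}), substituting the two separated equations back into the Klein--Gordon equation confirms that it is identically satisfied:
\begin{verbatim}
import sympy as sp

r, u, w, k, mu, C = sp.symbols('r u omega k mu C', real=True)
R  = sp.Function('R')(r);  Wr = sp.Function('W_r')(r)
U  = sp.Function('U')(u);  Wu = sp.Function('W_u')(u)

# values of (1/Phi_r) d/dr (R dPhi_r/dr) and (1/Phi_u) d/du (U dPhi_u/du)
# imposed by the separated ordinary differential equations
rad = -((w*Wr - k)**2/R - mu**2*Wr + C)
ang = (w*Wu + k)**2/U + mu**2*Wu + C

kg = ((w*Wr - k)**2/R - (w*Wu + k)**2/U + rad + ang
      - mu**2*(Wr + Wu))
assert sp.simplify(kg) == 0   # the Klein-Gordon equation is recovered
\end{verbatim}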
For other examples \cite{Houri:2010fr, Wu:2009cn, Wu:2009ug}, the existence of KYT tensors is related to the separability of Dirac equations that are modified by terms involving the torsion. We expect the same separability properties for the class of metrics that we have discussed.
\section{Conclusion}
We extended the known classes of 4-dimensional dyonic AdS black holes in maximal gauged supergravity along two fronts. We obtained a class of static AdS black holes with 4 independent electric charges and 4 independent magnetic charges, and a class of rotating AdS black holes with 2 independent electric and 2 independent magnetic charges (and also an independent NUT charge). In particular, the class of planar dyonic black holes that we derived might be of interest for the AdS/CFT correspondence as models for large $N$ gauge theories with finite charge density and background magnetic field. It is remarkable that such solutions can be obtained from their asymptotically flat cousins by simply modifying one or two functions in the metric while leaving the matter fields unchanged. A natural continuation of this work would be to construct a rotating solution with 8 independent electromagnetic charges, which would require an ansatz that has yet to be found. It would be interesting to investigate further the supersymmetry of these solutions.
We showed that varying independently the electric and magnetic charges of dyonic black holes in AdS is in general inconsistent with the existence of a Hamiltonian. Several exceptions exist, including the dyonic Kerr--Newman--AdS family, where the variation of both electromagnetic charges can be performed at the expense of imposing Lorentz-violating boundary conditions for the gauge field. Non-relativistic holographic theories corresponding to such boundary conditions, if they exist, remain to be constructed. More generally, we found several distinct classes of boundary conditions for gauge fields in AdS$_4$. It would be interesting to generalize them to include propagating fields, compute their asymptotic symmetry group and classify their supersymmetric extensions.
Like the Kerr solution, the rotating solutions that we constructed have special algebraic properties, in particular various types of Killing tensors. We were led to consider a wider class of metrics and found, in two different conformal frames, Killing--Yano tensors for connections with torsion. Although the physical significance of these torsions is unclear, they underlie the separability of the Hamilton--Jacobi equation for geodesic motion.
\vspace*{10pt}
\begin{center}
\small\textbf{ACKNOWLEDGEMENTS}
\end{center}
\vspace*{10pt}
We thank the Centro de Ciencias de Benasque Pedro Pascual for its warm hospitality. The work of D.C. was partially supported by the ERC Advanced Grant ``SyDuGraM'', by IISN-Belgium (convention 4.4514.08) and by the ``Communaut\'e Fran\c{c}aise de Belgique'' through the ARC program. G.C. is a Research Associate of the Fonds de la Recherche Scientifique F.R.S.-FNRS (Belgium) and is supported by NSF grant 1205550.
\section{Introduction}
\textbf{\emph{Introduction.---}} The wave-particle duality is one of the core aspects of quantum mechanics. It expresses a profound and, by classical intuition, paradoxical feature of nature: a quantum system exhibits the properties of both a wave and a particle.
The wave nature of a quantum entity is distinctly revealed in the double-slit experiment.
The wave aspect of a quantum system makes it pass through both slits at the same time, resulting in interference. This phenomenon can certainly be explained if the corresponding quantum system is considered to be a classical wave, but not if it were a classical particle.
On the other hand, the photoelectric effect is an example of a phenomenon where a quantum system exhibits particle-like characteristics.
Contrary to the double-slit experiment, the photoelectric effect would not be explainable if we consider the quantum system as a classical wave.
It is intriguing that there are quantum systems -- photons -- that exhibit both wave phenomenon via interference in double-slit experiment and particle aspect in the photoelectric effect.
The wave nature of a quantum system has been observed since the beginnings of quantum mechanics, and the quantum formalism
incorporates it naturally. However, a careful conceptual foundation and quantification of the wave nature, independent of any particular experiment, was presented only a few years ago,
when ``quantum coherence'' was quantified using a resource-theoretic framework
\cite{eibar-bachhadhan-chat-bagale, nijer-Dhak}.
On the other hand, the particle nature of a quantum system, while having been observed and incorporated into the quantum formalism since its beginnings, has, to the best of our knowledge, not yet been conceptually formalized independently of the
detailed aspects of the photoelectric effect. We hope to provide a way to bridge this gap to a certain extent, and indicate directions towards a resource theory of ``particleness'' of a quantum system.
Below we begin
by providing a toy model for detection of particle nature, inspired by the photoelectric effect. This provides the basis for conceptualizing the particle aspect of a quantum system, and the corresponding resource theory.
We subsequently identify the ``free states'' of the resource theory, and follow this by
considering the possible ``free operations''.
Next,
we consider the particular cases of two- and three-level systems respectively. We then discuss measures of particleness.
Moreover, we comment on the complementarity between coherence and particleness for arbitrary quantum states,
providing numerical evidence using Haar-uniformly generated arbitrary three-dimensional pure and mixed quantum states.
\textbf{\emph{Model for conceptualizing particle aspect.---}}Let us introduce here a toy model for detecting the particle aspect of a quantum system, taking inspiration from the photoelectric effect. This will help in quantifying the particle aspect of an arbitrary quantum system, wherein a \(d\)-level incoming quantum system impinges on an effectively two-level ``solid state system''. The Hamiltonian of the incoming system is given by $H=\sum _{n=0}^{d-1}\hbar \omega_n |n\rangle \langle n| $.
Here,
$\hbar=h/(2\pi)$, with $h$ being Planck's constant.
We consider the effective Hamiltonian of the solid state system to be $H_{SS}=\hbar \omega|e\rangle \langle e|$, where $ |e\rangle$ is the excited state of the effective two-level solid state system. In a more realistic situation, there can be a band of levels near the zero level energy of our effective solid state system, which is being approximated here by a single energy level with zero energy. Similarly, a possible band of energies near the excited state energy is being approximated here by a single excited state with energy \(\hbar \omega\).
More generally, there can be metastable states between the two bands, and these are not considered in this toy model. We do not explicitly write the interaction Hamiltonian. Instead, similar to what happens in case of the photoelectric effect, we assume that if there is an incoming state $\rho_{in}$ (on \(\mathbb{C}^d\)) such that $ \mbox{Tr}( \rho_{in} H)> \hbar \omega$, then the incoming state has a nonzero \emph{``particleness''}.
For simplicity, we assume the ``zero detuning'' scenario, so that $\omega_n=n\omega$ for $n=0,1, \ldots, d-1$.
Quantum coherence of a quantum state depends not only on the state but also on the basis. Indeed, the slits in an interference experiment define such a basis, and the interference pattern changes depending on the
character of the slits.
Similarly, the particleness of a quantum state depends not only on the state but also on the Hamiltonians of the incoming system and the solid state. We have ignored the transfer mechanism of the energy from the impinging system to the solid state.
\emph{Free states.---} In any resource theory,
an
important aspect is to characterize the states which will not act as a resource --
the so-called \emph{``free states''}, which we denote as $\rho_f$. If we consider particleness to be a resource of any given quantum system, the free states will be those states which cannot exhibit particleness of the system. This depends on the triplet consisting of
\begin{itemize}
\item the state of the incoming system (\(\rho_{in}\)),
\item the Hamiltonian of the incoming system (\(H\)), and
\item the ``threshold energy'' (\(\hbar \omega\)) of the solid state system.
\end{itemize}
The free states will be
those
for which the energy content of the state
is less than or equal to $ \hbar \omega$. Denoting the set of free states as \(F_S\), we have
\begin{equation}F_S= \{\rho_f| \mbox{Tr}( \rho_f H) \leq \hbar \omega \}.
\end{equation}
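For concreteness, the membership test defining $F_S$ is straightforward to implement numerically. The following Python sketch (assuming \texttt{numpy}, with $\hbar\omega$ set to $1$) checks whether a given density matrix on \(\mathbb{C}^d\) is free for the zero-detuning Hamiltonian:
\begin{verbatim}
import numpy as np

def is_free(rho, d, hbar_omega=1.0, tol=1e-12):
    """Check Tr(rho H) <= hbar*omega for the zero-detuning Hamiltonian
    H = sum_n n * hbar*omega |n><n| on C^d."""
    H = hbar_omega*np.diag(np.arange(d, dtype=float))
    return np.real(np.trace(rho @ H)) <= hbar_omega + tol

d = 3
ket = lambda n: np.eye(d)[:, n]
rho_edge = np.eye(d)/3                       # Tr(rho H) = hbar*omega: edge
rho_res  = np.outer(ket(2), ket(2))          # energy 2*hbar*omega: resource
print(is_free(rho_edge, d), is_free(rho_res, d))   # True False
\end{verbatim}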
It is interesting to note that the set $F_S$ is a convex set and the corresponding ``\emph{edge states}''
are those for which the energy of the system is exactly equal to $ \hbar \omega$.
The states $|0\rangle$ and $|1\rangle$ are free states for any dimension, \(d\), of the incoming quantum system, and $|1\rangle$ is an edge state therein.
For $n=2, \ldots, d-1$, the states $|n\rangle$ are resource states, i.e., have nonzero particleness.
Recall that the states \(\{|n\rangle\}\) form the eigenbasis of the Hamiltonian \(H\).
If we consider the mixed state $\rho^p_f=p|0\rangle \langle 0|+(1-p)|1\rangle \langle 1|$ ($p \in [0,1]$), we find that $\mbox{Tr} ( \rho^p_f H)=\hbar \omega (1-p)$, so that \(\rho^p_f\) lies in the interior of
the set of free states for all $d$, unless $p=0$.
Interestingly, the state $\frac{1}{3}I_3=\frac{1}{3}(|0\rangle \langle 0|+|1\rangle \langle 1|+|2\rangle \langle 2|)$ is an edge state for any input system, since $\mbox{Tr} ( \frac{1}{3}I_3 H)=\hbar \omega $. A convex combination of this state and the state $|1\rangle$, i.e., a state of the form $\frac{p}{3}I_3 +(1-p)|1\rangle \langle 1|$ ($p \in [0,1]$), is an edge state for any \(p\) and any input dimension, \(d\). The facts that \(\frac{1}{3}I_3\) and \(|1\rangle\) are edge states depend
on our choice of equally spaced energy levels and the zero detuning. Changing the Hamiltonians will change the status of the free as well as edge states.
\textit{Existence of witness operators.} It is easy to see that the set of free states is convex, as already alluded to above. It is also possible to show that the set is compact. To prove the compactness, first note that \(F_S\) is bounded. We now need to show that it is also closed. Since
\(\mbox{Tr}(\cdot H)\) is a continuous function of its argument, the pre-image \(F_S\) of the closed set
\([0, \hbar \omega]\) is closed.
Since \(F_S\) is convex and compact, it is possible to use the Hahn-Banach theorem \cite{nishiddhya-istehar} to provide the concept of witness operators for detecting states with nonzero particleness, similar to, e.g., the concept of entanglement witnesses \cite{pinjar}.
\textit{Free operations.---} Along with free states, it is also important in any resource theory to characterize the set of quantum operations which will keep free states as free states, and these operations are usually referred to
as ``\emph{free operations}''.
Let us now
define our set of
free operations in this resource theory as those collections of Kraus operators, $K_n$, for which \(\sum_n K_n^\dagger K_n\) is the identity on \(\mathbb{C}^d\) (``completeness condition''), and the energy of any free state is bounded above by \(\hbar \omega\) even after the application of individual Kraus operators.
In other words, the set of free operations can be given by
\begin{eqnarray}
F_O= \left\{\left\{K_n\right\}| \mbox{Tr}(\sum_{\tilde{n}} K_{\tilde{n}} \rho_f K_{\tilde{n}}^{\dagger} H) \leq \hbar \omega ~\forall \rho_f \in F_S\right\},\quad
\end{eqnarray}
where the completeness condition is implicitly assumed. The summation over \(\tilde{n}\)
is over a subset, possibly proper, of elements of the entire set \(\{K_n\}\); a normalization factor \(\mbox{Tr}(\sum_{\tilde{n}} K_{\tilde{n}} \rho_f K_{\tilde{n}}^{\dagger} )\) is assumed but kept implicit in the notation.
It is evident from the above definition
that energy-invariant quantum operations form a class of free operations. Precisely, these are the Kraus operator sets for which
\(\mbox{Tr}(\sum_{\tilde{n}} K_{\tilde{n}} \rho_f K_{\tilde{n}}^{\dagger} H) = \mbox{Tr}(\rho_f H)\) for all \(\rho_f \in F_S\).
A subset of the class of energy-invariant free operations are ones for which the Kraus operators commute with the Hamiltonian of the incoming system.
\textit{Qubit is almost never particle-like.---}
If the incoming quantum system is a two-level system, let us write the corresponding state as
$|\psi \rangle= a |0\rangle +b |1\rangle$ ($a$ and $b$ are the amplitudes, $|a|^2 +|b|^2 =1$). In this case, the Hamiltonian operator determining the energy of the input is given by $H=\hbar \omega |1\rangle \langle 1|$. In this scenario, for all pure states $|\psi \rangle$, we have $\langle \psi |H|\psi \rangle= |b|^2\hbar \omega$, which
is always $\leq \hbar \omega $ (since $|b|^2 \leq 1$). This implies all the pure states are free states, and the state $|1\rangle$ lies at the edge. Since mixed states are convex combinations of pure states, they will also lie in the set of free states, and since there is only a single pure edge state, there are no non-pure edge states.
All two-level quantum systems are therefore devoid of any particleness. The scenario changes slightly if we change our definition of free states to one in which the edge states are resourceful; in that case, exactly one state, viz. \(|1\rangle\),
becomes resourceful. A qubit will still be almost never particle-like.
\textit{Qutrits: first signature of particleness.---}
Next we consider a physical situation where a solid state system is exposed to a quantum source which is a three-level quantum system (e.g., a ladder-type three-level photonic system). This is the lowest dimension where we obtain a finite volume of quantum states having a nonzero particleness. The Hamiltonian
of the quantum system is
given by $H=\hbar \omega|1\rangle \langle 1|+2\hbar \omega |2\rangle \langle 2|$. A pure state of the incoming system can be expressed as
$|\psi \rangle= a |0\rangle +b |1\rangle+c |2\rangle $, where $a$, $b$, and $c$ are the amplitudes, with $|a|^2 +|b|^2 +|c|^2 =1$.
The solid state system on which the incoming particles are incident is still described by the Hamiltonian $H_{SS}=\hbar \omega|e\rangle \langle e| $.
There is no emission of particles (e.g., electrons) from the zero-energy band of the solid state system unless the energy content of the incoming state is more than $\hbar\omega$, and hence there is no signature of the particle aspect of the system below that threshold. It is easy to see that the pure state $|\psi \rangle$
is free if and only if $|c| \leq |a|$. For a general three-level state \(\rho\), it is free if
\(\rho_{11}
+ 2 \rho_{22}
\leq 1\), where \(\rho_{nn}\) is the \(n\)\textsuperscript{th} diagonal element of \(\rho\), when written in the energy eigenbasis.
\textit{\textbf{Measure of Particleness.---}} In the next step, we look for possible ways to quantify particleness
of the incoming quantum system $\rho_{in}$.
One way to do so can be to use a concept akin to the definition of distillable entanglement \cite{boRda}. In that case, one begins by identifying a state that is the most resourceful, which in our case can be the highest eigenstate of the incoming system Hamiltonian.
``Distillable particleness'' of an input state can then be defined as the asymptotic fraction of the most resourceful state, per input state, that can be obtained by free operations. In this paper, we follow a different track, viz. the distance-based approach.
\begin{figure}
\includegraphics[width=10cm,height=6cm,keepaspectratio]{schematic.pdf}
\caption{Particleness of a qutrit: Free and resourceful states. We depict here the free and resourceful states in the space of density matrices of a three-level quantum system. Free states form a convex and compact set. The states \(\frac{1}{3}I_3\) and \(|1\rangle\) are edge states. The state \(|0\rangle\) is free but is not an edge state. The state \(|2\rangle\) is a resourceful state.}
\label{tumi-ki-kebali}
\end{figure}
\textit{Distance-based measure of particleness.---}
We study here a distance-based measure of particleness, $P_D(\rho_{in})$, of a quantum state, \(\rho_{in}\) on \(\mathbb{C}^d\),
given by $P_D(\rho_{in})= \min_{\rho_f \in F_S} D(\rho_{in},\rho_f)$, where \(D(\rho,\sigma)\) is a distance function on the space of density operators.
Since we know that the state $\frac{1}{3}I_3=\frac{1}{3}(|0\rangle \langle 0|+|1\rangle \langle 1|+|2\rangle \langle 2|)$ is an edge state for any \(d\), we obtain the following result.
\textit{Lemma.---} The
particleness of an arbitrary quantum state \(\rho_{in}\) is bounded above by $D(\rho_{in},\frac{1}{3}I_3)$.
Next we examine whether we can obtain a stronger
bound by considering the distance of $\rho_{in}$ from a free state lying on the line joining $\rho_{in}$ with a free state in the interior of $F_S$. See Fig.~\ref{tumi-ki-kebali} for a
representation.
As noted before, the state $\rho_f^p= p|0\rangle \langle 0| +(1-p) |1\rangle \langle 1|$ is a free state for any \(p\) and any \(d\).
And the state $|\psi \rangle= a |0\rangle +b |1\rangle+c |2\rangle $ ($a$, $b$, and $c$ are the amplitudes, with $|a|^2 +|b|^2 +|c|^2 =1$), is free if and only if \(|c| \leq |a|\). Consider now the state
$\rho (p,q)= q \rho_f^p + (1-q)|\psi \rangle \langle \psi| $, for \(q \in [0,1]\).
Considering a resourceful \(|\psi\rangle\), for any given \(p\),
$\rho (p,q)$ is free if $q\geq \frac{|c|^2-|a|^2}{p+|c|^2-|a|^2}$. The equality sign holds for a $\rho (p,q)$ on the edge.
Therefore, the particleness $P_D(|\psi \rangle)$ of a three-level pure state $|\psi \rangle$ is bounded above by
\(\min_p D(|\psi \rangle \langle \psi |, \rho (p,q_p))\), where \(q_p = \frac{|c|^2-|a|^2}{p+|c|^2-|a|^2}\).
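As a numerical illustration (a sketch assuming \texttt{numpy}, with $D$ taken to be the trace distance), this bound can be evaluated by a direct scan over $p$:
\begin{verbatim}
import numpy as np

def tr_dist(A, B):
    # trace distance (1/2)||A - B||_1 for Hermitian A, B
    return 0.5*np.sum(np.abs(np.linalg.eigvalsh(A - B)))

a, c = 0.3, 0.7                              # |c| > |a|: resourceful
b = np.sqrt(1 - a**2 - c**2)
psi = np.array([a, b, c]); P = np.outer(psi, psi)
k0 = np.diag([1.0, 0, 0]); k1 = np.diag([0, 1.0, 0])

best = np.inf
for p in np.linspace(1e-6, 1.0, 2001):
    q = (c**2 - a**2)/(p + c**2 - a**2)      # q_p: rho(p, q_p) on the edge
    rho = q*(p*k0 + (1 - p)*k1) + (1 - q)*P
    best = min(best, tr_dist(P, rho))
print(best)   # upper bound on the particleness of |psi>
\end{verbatim}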
\begin{figure}
\includegraphics[width=11cm,height=6cm,keepaspectratio]{complementarity.pdf}
\caption{Complementarity between particleness and quantum coherence.
The points in the scatter diagram correspond to states of different ranks, and represent the coherence and particleness of those states. The measures for coherence and particleness used are the trace-norm coherence ($C_{tr}$) and trace-norm particleness ($P_{tr}$).
The horizontal axis represents particleness, while the vertical one represents quantum coherence. All quantities are dimensionless. For
each rank (\(=\) 1 (red pluses), 2 (blue crosses), 3 (magenta asterisks)), a low multiple of \(10^3\) states are generated Haar uniformly. For non-rank 1 states, the induced metric is used for the Haar-uniform generation \cite{zyc}.}
\label{complementarity}
\end{figure}
\noindent
\textbf{\emph{Complementarity between coherence and particleness.---}} The
wave-particle duality is an important aspect of experiments that formed
the very basis of the enunciation of the quantum theory of nature. And
while coherence has been
regarded
as the measure of the wave nature of
quantum systems, we have argued that particleness is a measure of the
particle nature of the same. It is therefore conceivable that there will
appear a complementary relation between coherence and particleness for
arbitrary quantum states. In Fig. \ref{complementarity}, we exhibit a
(numerically) generated planar scatter diagram for Haar-uniformly generated arbitrary
three-dimensional quantum states that shows that such a complementarity
is indeed valid. We separately generate states of ranks 1, 2, and 3, Haar-uniformly in each case.
The measures used to plot the diagram are respectively the trace-norm coherence ($C_{tr}$) and trace-norm particleness ($P_{tr}$),
which are given by
\begin{equation}
A_{tr}(\rho) = \min_{\sigma \in \mathcal{F}_{S}} |\rho-\sigma|, \hspace{1cm} A=C,P,
\end{equation}
and $|\rho-\sigma|=\mbox{Tr} \sqrt{(\rho-\sigma)^{\dagger}(\rho-\sigma)}$. Here, $\mathcal{F}_{S}$ is the set of free states of the corresponding measure, so that $\mathcal{F}_{S}=F_S$ for \(A=P\), and $\mathcal{F}_{S}$ is the set of incoherent states (i.e., the states diagonal in the basis \(\{|0\rangle, |1\rangle, |2\rangle\}\)) for \(A=C\) \cite{eibar-bachhadhan-chat-bagale, nijer-Dhak}. The numerically generated scatter diagram points lie below the line
\begin{equation}
P_{tr} + 1.3 C_{tr} \leq 1.8,
\end{equation}
where saturation is attained for certain pure states in \(\mathbb{C}^3\), with
rank 2 states lying somewhat below the bounding line in comparison to pure states, and rank 3 states lying even farther below it.
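A minimal sketch of how such data can be generated (assuming the \texttt{numpy} and \texttt{cvxpy} libraries; both minimizations are cast as semidefinite programs using the standard trace-norm characterization $\Vert X\Vert_1 = \min\{\mbox{Tr}(P+Q): X = P - Q,\ P, Q \geq 0\}$ for Hermitian $X$):
\begin{verbatim}
import numpy as np
import cvxpy as cp

d = 3
H = np.diag([0.0, 1.0, 2.0])                 # hbar*omega = 1

def haar_state(d, rank):
    # induced-measure (Ginibre) construction; rank 1 gives Haar pure states
    G = (np.random.randn(d, rank) + 1j*np.random.randn(d, rank))/np.sqrt(2)
    rho = G @ G.conj().T
    return rho/np.real(np.trace(rho))

def tr_norm_min(rho, extra):
    sigma = cp.Variable((d, d), hermitian=True)
    P = cp.Variable((d, d), hermitian=True)   # rho - sigma = P - Q
    Q = cp.Variable((d, d), hermitian=True)
    cons = [sigma >> 0, cp.real(cp.trace(sigma)) == 1,
            P >> 0, Q >> 0, rho - sigma == P - Q] + extra(sigma)
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(P + Q))), cons)
    prob.solve()
    return prob.value

P_tr = lambda rho: tr_norm_min(rho, lambda s: [cp.real(cp.trace(s @ H)) <= 1])
C_tr = lambda rho: tr_norm_min(rho, lambda s: [s[i, j] == 0
                       for i in range(d) for j in range(d) if i != j])

rho = haar_state(d, rank=2)
print(P_tr(rho), C_tr(rho))                  # one point of the scatter plot
\end{verbatim}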
This complementarity could, we believe, be yet another
face of the wave-particle duality of quantum physics \cite{gajanan, nijer-Dhak}.
\noindent \textit{\textbf{Summary.---}} The wave-particle duality is one of the crucial aspects of the edifice of quantum mechanics. While both the wave and particle aspects were well-known from the beginnings of quantum mechanics, their conceptual foundations were not formalized except in particular quantum systems until recently, when the wave aspect was quantified, and called ``quantum coherence''. We have proposed
a general framework for the particle aspect of an arbitrary quantum system, and hinted towards a resource theory of particleness.
We also indicate the existence of a complementarity between
the concepts of particleness
and quantum coherence.
Since the notions of quantum coherence and particleness depend on the choice of basis in the former and that of the incoming state and solid state Hamiltonians in the latter, one can associate a myriad of waves and particles with a single quantum system. This is in sharp contrast to the notion of wave and particle that we have in the classical world.
\textit{Acknowledgment.} We acknowledge discussions with Nirman Ganguly, Chiranjib Mukhopadhyay, Rounak Mundra, and
Kornikar Sen. The research of SD was supported in part by the INFOSYS scholarship for senior students.
\section{Introduction}
Heisenberg's uncertainty principle is often considered
one of the most important features of quantum theory.
In every textbook on quantum theory one can find
its explanation proposed by
Heisenberg himself \cite{Heisenberg}
and its ``derivation'' by Robertson \cite{Robertson}.
However, as recently claimed by several researchers,
the above explanation and the derivation have
a certain gap between them. On the one hand
Heisenberg is concerned with a simultaneous measurement
of position and momentum; on the other hand, Robertson's
formulation treats two distinct measurements, one of position and
one of momentum.
The latter formulation has been actively investigated
since then, and there are several different inequalities
for general observables
depending on quantities that characterize the
uncertainty of probability distributions
\cite{LP,Deutsch,Maassen,KP,Miyadera}.
By contrast, it is only recently that
the former formulation has begun
to be examined extensively. In this formulation
one must deal with a simultaneous (or joint) measurement of two
or more observables and somehow estimate its
accuracy compared with
individual measurements of the observables.
Appleby, in his pioneering work \cite{Appleby}
investigating the simultaneous measurement of
position and momentum of quantum mechanical particles,
introduced error operators
and disturbance operators and derived
simple inequalities between them.
Ozawa \cite{Ozawa} treated a pair of general
self-adjoint observables and
considered a tradeoff relation between his
error operator and disturbance operator that have
an interpretation in the context of his extended notion of
simultaneous measurement.
Werner \cite{Werner} formulated the problem from an operational viewpoint
and derived a beautiful inequality between
position and momentum operators of quantum mechanical particles.
Janssens' work \cite{Janssens} is related to it and
shows, in a simple manner, a nice inequality on added variances
between arbitrary
self-adjoint observables.
Busch and Pearson \cite{errorbar} introduced a notion of
error bar that represents a resolution of the measurements, and
discuss a tradeoff relation related with position and
momentum. Busch and Heinosaari \cite{Heinosaari} estimated a tradeoff
between observables in a single qubit in detail.
Busch, Heinonen and Lahti gave a review on this topic in \cite{BHL}.
\par
In this paper, we deal with simultaneous measurements
of two arbitrary observables (positive operator valued measures).
In general, a pair of noncommutative observables
is not simultaneously measurable.
We derive an inequality that relates the limitation on
the simultaneous measurements with the noncommutativity
of observables.
The paper is organized as follows:
In section \ref{sec:formulation}, we
explain our formulation of the problem.
For that purpose we introduce
notions of distance between
observables and simultaneous measurability of
observables.
In section \ref{sec:main}, we derive our main
result. Tradeoff relations related with
two types of distances are explained.
As a byproduct we give a necessary condition for
two positive operator valued measures to be
simultaneously measurable.
A simple example is also discussed.
\section{Formulation}\label{sec:formulation}
In this paper, we follow a formulation introduced by Werner \cite{Werner}
which has a clear operational meaning.
Suppose that we have a quantum system described by a Hilbert space
${\cal H}$ and
a pair of observables, $A$ and $B$, on it.
$A$ and $B$ are
represented as positive operator
valued measures (POVMs) $A=\{A_a\}_{a \in \Omega_A}$ and
$B=\{B_b\}_{b \in \Omega_B}$.
That is,
for each $a$ and $b$, $0\leq A_a, B_b\leq {\bf 1}$ holds and
$\sum_a A_a =\sum_b B_b ={\bf 1}$ is satisfied.
(For simplicity, we hereafter assume the sets of outcomes (say $\Omega_A$)
finite.)
Given a state $\omega$, we can compute a probability distribution
$\{\omega(A_a)\}_a$, where $P^{\omega}_{A}(a):=\omega(A_a)$
is interpreted as the probability of
obtaining an outcome $a$ when we measure $A$ in
the state $\omega$.
If $A_a$ and $B_b$ commute with each other for all $a$ and $b$,
$A\circ B:=\{A_a B_b\}_{(a,b) \in \Omega_A \times \Omega_B}$
again defines a POVM.
This newly defined POVM gives a probability distribution
$P^{\omega}_{A\circ B}(a,b)=\omega(A_aB_b)$ whose marginal distributions
coincide exactly with
$P^{\omega}_{A}$ and $P^{\omega}_B$ for any state $\omega$.
That is, in this case one can achieve a simultaneous measurement
of $A$ and $B$ perfectly by using $A\circ B$.
This is not always the case in general for noncommutative pairs.
What we are interested in is that impossibility.
What is the quantitative limitation on simultaneous measurements
of general pairs of observables?
\par
To formulate this problem quantitatively, one has to
introduce a proper distance between two probability distributions.
Suppose that we have a space ${\cal P}(\Omega)$
of probability distributions
over a (finite) sample space $\Omega$.
(i.e., ${\cal P}(\Omega):=\{p:\Omega \to [0,1]|\ \sum_x p(x)=1\}$)
\par
A function $d: {\cal P}(\Omega) \times {\cal P}(\Omega)
\to {\bf R}_{+}$ is called a distance if the following conditions are
satisfied:
\begin{itemize}
\item[(i)] $d(p,q)=d(q,p)$,
\item[(ii)] $d(p,q)=0$ $\Leftrightarrow$ $p = q$,
\item[(iii)] $d(q,p)+d(p,r)\geq d(q,r)$ (triangle inequality).
\end{itemize}
For instance,
$l_{\infty}$ distance (uniform distance) is defined as
\begin{eqnarray*}
d_{\infty}(p,q):=\max_{x\in \Omega} |p(x)-q(x)|,
\end{eqnarray*}
and $l_{1}$ distance is defined as
\begin{eqnarray*}
d_1(p,q):=\frac{1}{2}
\sum_{x \in \Omega} |p(x)-q(x)|.
\end{eqnarray*}
(For later convenience
we put a coefficient $\frac{1}{2}$.)
Once we fix a distance $d$ over the probability spaces,
we can define a distance between two observables.
Suppose that POVMs $A=\{A_a\}_{a \in \Omega}$ and $B=\{B_b\}_{b \in \Omega}$
have an identical set of outcomes. We define a distance between
$A$ and $B$ with respect to $d$ as
\begin{eqnarray*}
D_d(A,B):=\sup_{\omega} d(P^{\omega}_A,P^{\omega}_B),
\end{eqnarray*}
where the supremum is taken over all the states.
One can easily show that $D_d(A,B)=0$ if and only if $A=B$.
All other conditions for distance also follow easily.
Thus $D_d$ becomes a distance between a pair of POVMs
which have an identical set of outcomes.
\par
Let us recall the previous example in which
all the elements of $A=\{A_a\}_{a \in \Omega_A}$
and $B=\{B_b\}_{b \in \Omega_B}$ commute with each other.
In such a case one can define $A\circ B$
that has $\Omega_A \times \Omega_B$ as the set of
outcomes.
If we define $f_A: \Omega_A \times \Omega_B \to \Omega_A$ as
$f_A(a,b)=a$ and $f_B: \Omega_A \times \Omega_B \to \Omega_B$ as
$f_B(a,b)=b$ for all $a\in \Omega_A$ and
$b \in \Omega_B$, $A_a =\sum_{(a',b')}^{f_A(a',b')=a}A_{a'} B_{b'}$
and $B_b =\sum_{(a',b')}^{f_B(a',b')=b}A_{a'} B_{b'}$ hold.
That is, truncating outcomes of $A\circ B$ with some functions
reproduces both $A$ and $B$.
This can be generalized to the following procedure.
Let us consider a POVM $F:=\{F_x\}_{x \in \Omega}$.
$f_A: \Omega \to \Omega_A$ defines a new POVM
$f_A(F):=\{f_A(F)_a\}_{a \in \Omega_A}$ as
\begin{eqnarray*}
f_A(F)_a :=\sum_{x: f_A(x)=a} F_x.
\end{eqnarray*}
Also, another function $f_B: \Omega
\to \Omega_B$ defines a POVM $f_B(F)$.
Here it is appropriate to say that
POVMs $f_A(F)$ and $f_B(F)$ are simultaneously
measurable (through a POVM $F$).
Actually, the following definition of the simultaneously
measurable observables is a standard one (See e.g. \cite{BHL,Araki}).
\begin{definition}
A pair of POVMs $A=\{A_a\}_{a\in \Omega_A}$ and $B=\{B_b\}_{b \in \Omega_B}$
is called simultaneously measurable if and only if
there exists a POVM $F=\{F_x\}_{x \in \Omega}$ and a pair of
functions $f_A: \Omega \to \Omega_A$ and $f_B: \Omega \to \Omega_B$
such that $A=f_A(F)$ and $B=f_B(F)$ hold.
\end{definition}
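For illustration, the truncation $F \mapsto f_A(F)$ and the construction in this definition are easy to realize for finite-dimensional POVMs. The following Python sketch (assuming \texttt{numpy}; the unsharp joint qubit measurement used as an example is a standard one from the literature, not taken from the text above) builds simultaneously measurable marginals from a joint POVM:
\begin{verbatim}
import numpy as np

def marginal(F, f):
    """POVM truncation: f(F)_a = sum over x with f(x) = a of F_x."""
    out = {}
    for x, Fx in F.items():
        out[f(x)] = out.get(f(x), 0) + Fx
    return out

# Example: unsharp joint measurement of sigma_x and sigma_z on a qubit,
# F_{(a,b)} = (1 + a*mu*sx + b*nu*sz)/4, a POVM when mu^2 + nu^2 <= 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
mu = nu = 1/np.sqrt(2)
F = {(a, b): (I2 + a*mu*sx + b*nu*sz)/4 for a in (1, -1) for b in (1, -1)}

A = marginal(F, lambda ab: ab[0])   # f_A(a, b) = a
B = marginal(F, lambda ab: ab[1])   # f_B(a, b) = b
# A and B are simultaneously measurable (through F) by construction,
# even though their elements do not commute.
\end{verbatim}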
\par
Having introduced the notions of distance and simultaneous
measurement, we can formulate our problem as follows.
For a given arbitrary pair of POVMs $A=\{A_a\}$ and $B=\{B_b\}$,
we choose a proper POVM $F=\{F_x\}$ and functions $f_A$ and $f_B$
that approximately
reconstruct them. How close can we make
$f_A(F)$ and $f_B(F)$
to $A$ and $B$?
Once we fix $F$, $f_A$ and $f_B$, we can define
their closeness to $A$ and $B$ as $D_d(f_A(F),A)$ and
$D_d(f_B(F), B)$.
One may expect that if one chooses $F$ to make
$D_d(f_A(F),A)$ small, $D_d(f_B(F),B)$ becomes
large. This tradeoff relation is what we are interested in.
\section{Results}\label{sec:main}
\subsection{Uncertainty principle in $l_{\infty}$ distance}
To proceed with the analysis, we have to fix a distance $d$ first.
The simplest $l_{\infty}$ distance $d_{\infty}$ has a clear
meaning and in addition makes
our problem tractable. Indeed, in this case the corresponding
distance between two observables $A=\{A_a\}_{a\in \Omega}$
and $A'=\{A'_a\}_{a\in \Omega}$
can be written as \cite{Heinosaari}
\begin{eqnarray*}
D_{d_{\infty}}(A,A')
=\sup_{\omega} \max_{a} |P^{\omega}_A(a)-
P^{\omega}_{A'}(a)|
=\max_a \Vert A_a -A'_a\Vert,
\end{eqnarray*}
where $\Vert\cdot \Vert$ represents an operator norm.
Suppose that we fix a POVM $F$ and $f_A$ and $f_B$.
$D_{d_{\infty}}(A,f_A(F))
=\max_a \Vert A_a-f_A(F)_a\Vert$ and
$D_{d_{\infty}}(B,f_B(F))
=\max_b \Vert B_b-f_B(F)_b\Vert$ naturally follow.
\par
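This expression makes $D_{d_{\infty}}$ straightforward to evaluate numerically for finite-dimensional POVMs; a small Python sketch (assuming \texttt{numpy}; the sharp/smeared $\sigma_z$ pair is an illustrative example of our own choosing):
\begin{verbatim}
import numpy as np

def D_inf(A, Ap):
    """D_{d_infty}(A, A') = max_a ||A_a - A'_a|| (operator norm)."""
    return max(np.linalg.norm(A[a] - Ap[a], 2) for a in A)

# Illustrative example: a sharp sigma_z measurement vs a smeared version
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sharp   = {+1: (I2 + sz)/2, -1: (I2 - sz)/2}
nu = 0.8
smeared = {+1: (I2 + nu*sz)/2, -1: (I2 - nu*sz)/2}

print(D_inf(sharp, smeared))   # equals (1 - nu)/2 = 0.1
\end{verbatim}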
The second observation that makes our analysis easier is a
representation of POVMs by completely positive maps (CP maps)
\cite{JanssensMaster}.
When we have an Abelian von Neumann algebra ${\cal M}$ and a
unital (completely \cite{completely})
positive linear map $T:{\cal M} \to {\bf B}({\cal H})$,
a decomposition of unity (POVM)
in ${\cal M}$, ${\bf 1}=\sum_{x \in \Omega}m_x$
with $m_x \geq 0$, defines a POVM $\{T(m_x)\}_{x\in \Omega}$ in
${\bf B}({\cal H})$.
The converse is also true.
When we have a POVM $F=\{F_x\}_{x\in \Omega}$,
we define ${\cal M}_{\Omega}$ as the set of all diagonal
matrices acting on ${\bf C}^{|\Omega|}$.
For each $x\in \Omega$,
a projection onto the canonical basis vector
$e_x:=|x\rangle \langle x|$
is an
element of ${\cal M}_{\Omega}$ and $e:=\{e_x\}_{x\in \Omega}$
is a decomposition of unity in ${\cal M}_{\Omega}$.
One can define $T:{\cal M}_{\Omega} \to {\bf B}({\cal H})$
as
\begin{eqnarray*}
T(\sum_{x\in \Omega}c_x e_x)
=\sum_{x\in \Omega}c_x F_x.
\end{eqnarray*}
It is easy to see that the $T$ thus defined is indeed a CP map.
\par
Let us begin the analysis.
Suppose we have a pair of POVMs $A=\{A_a\}_{a\in \Omega_A}$
and $B=\{B_b\}_{b \in \Omega_B}$. Take an
arbitrary POVM $F=\{F_x\}_{x\in \Omega}$ and functions
$f_A: \Omega \to \Omega_A$ and $f_B: \Omega \to \Omega_B$.
We represent the POVM $F$ in terms of a CP map
$T: {\cal M}_{\Omega} \to {\bf B}({\cal H})$.
That is, $F_x=T(e_x)$ holds for each $x \in \Omega$.
Since a CP map is linear, if we define
$
E^A_a:=\sum_{x}^{f_A(x)=a}e_x=f_A(e)
$ and
$E^B_b:=\sum_{x}^{f_B(x)=b}e_x=f_B(e)$
for each $a\in \Omega_A$ and $b \in \Omega_B$,
we have
\begin{eqnarray*}
f_A(F)_a&=& T(E^A_a) \\
f_B(F)_b&=& T(E^B_b).
\end{eqnarray*}
Since what we are interested in is
$D_{d_{\infty}}(f_A(F),A)$ and $D_{d_{\infty}}(f_B(F),B)$,
we shall
estimate $\Vert T(E^A_a)-A_a\Vert$ and $\Vert T(E^B_b)-B_b \Vert$.
They are represented in simple forms
if one defines {\it error operators},
\begin{eqnarray*}
\epsilon^A_a&:=&T(E^A_a)-A_a \\
\epsilon^B_b&:=&T(E^B_b)-B_b.
\end{eqnarray*}
That is, $D_{d_{\infty}}(f_A(F),A)=\max_a \Vert \epsilon^A_a\Vert$
and $D_{d_{\infty}}(f_B(F),B)=\max_b \Vert \epsilon^B_b\Vert$ hold.
We consider a commutator
$[T(E^A_a),T(E^B_b)]=[\epsilon^A_a+A_a,\epsilon^B_b+B_b]
=[\epsilon^A_a, \epsilon^B_b]
+[\epsilon^A_a, B_b]+[A_a,\epsilon^B_b]+[A_a,B_b]$.
Taking the norm on both sides of the equation,
$[B_b,A_a]=[\epsilon^A_a,\epsilon^B_b]
+[\epsilon^A_a,B_b]+[A_a,\epsilon^B_b]+[T(E^B_b),T(E^A_a)]$,
we have
\begin{eqnarray}
\Vert [B_b,A_a]\Vert
&\leq & \Vert [\epsilon^A_a, \epsilon^B_b ]\Vert
+\Vert [\epsilon^A, B_b]\Vert +\Vert [\epsilon^B_b, A_a]\Vert
+\Vert[T(E^B_b), T(E^A_a)]\Vert,
\label{eqnorm}
\end{eqnarray}
where we have used the triangle inequality for norm.
Each term in the right hand side is bounded as follows.
The first term is $\Vert[\epsilon^A_a, \epsilon^B_b]\Vert \leq 2
\Vert \epsilon^A_a\Vert \Vert \epsilon^B_b\Vert$.
The second term is, by use of a relation $\Vert X\Vert =\sup_{\psi;
\Vert \psi\Vert =1} |\langle \psi|X|\psi \rangle|$
for self-adjoint operator $X$,
\begin{eqnarray*}
\Vert [\epsilon^A_a, B_b]\Vert &=&
\Vert i[\epsilon^A_a, B_b]\Vert \\
&=&\sup_{\psi}|\langle \psi|
[\epsilon^A_a, B_b]\psi\rangle|
\\
&\leq&
2 \sup_{\psi} \langle \psi|(\epsilon^A_a)^2 |\psi \rangle^{1/2}
\langle \psi|(\Delta B_b)^2 |\psi\rangle^{1/2}
\\
&\leq&
2 \Vert \epsilon^A_a\Vert \sup_{\psi}\langle
\psi|(\Delta B_b)^2|\psi\rangle^{1/2},
\end{eqnarray*}
where $\Delta X:=X-\langle \psi|X|\psi \rangle$ and we used the
Robertson uncertainty relation.
Due to $0\leq B_b \leq {\bf 1}$,
$\langle \psi|(\Delta B_b)^2 |\psi\rangle^{1/2}\leq 1/2$ holds and
we have
\begin{eqnarray*}
\Vert [\epsilon^A_a, B_b]\Vert\leq \Vert \epsilon^A_a\Vert.
\end{eqnarray*}
For the third term, we have in the similar manner,
\begin{eqnarray*}
\Vert [\epsilon^B_b, A_a]\Vert \leq \Vert \epsilon^B_b\Vert.
\end{eqnarray*}
Thus (\ref{eqnorm}) is bounded as
\begin{eqnarray}
\Vert [A_a,B_b]\Vert
\leq 2\Vert \epsilon^A_a \Vert \Vert \epsilon^B_b \Vert
+\Vert \epsilon^A_a \Vert +\Vert \epsilon^B_b \Vert
+\Vert [T(E^A_a), T(E^B_b)]\Vert.
\label{mouchoi}
\end{eqnarray}
To estimate the last term in the right hand side of the above
inequality we use the following lemma proved by Janssens \cite{Janssens}.
\begin{lemma}
Let ${\cal A}$ and ${\cal B}$ be von Neumann algebras and
$T: {\cal B} \to {\cal A}$ a CP map.
Let $B, \tilde{B}$ be commuting Hermitian operators in ${\cal B}$,
then,
\begin{eqnarray*}
\Vert T(B^2)-T(B)^2\Vert^{1/2}
\Vert T(\tilde{B}^2)-T(\tilde{B})^2\Vert^{1/2}
\geq \frac{1}{2}\Vert [T(B),T(\tilde{B})]\Vert
\end{eqnarray*}
holds.
\end{lemma}
The proof is done by a direct application of the
Cauchy-Schwarz inequality for operators.
Since our $E^A_a$ and $E^B_b$ commute with each other,
we can apply the above lemma to our inequality to obtain
\begin{eqnarray}
\Vert [T(E^A_a),T(E^B_b)]\Vert
\leq 2 \Vert T(E^A_a)-T(E^A_a)^2\Vert^{1/2}
\Vert T(E^B_b)-T(E^B_b)^2\Vert^{1/2},
\label{Tcommutator}
\end{eqnarray}
where we utilized $(E^A_a)^2=E^A_a$ and $(E^B_b)^2 =E^B_b$.
To eliminate $T(E^A_a)$ and $T(E^B_b)$ from the above inequality, we use
$T(E^A_a)=\epsilon^A_a +A_a$ and $T(E^B_b)=\epsilon^B_b +B_b$.
This yields
\begin{eqnarray*}
\Vert T(E^A_a)-T(E^A_a)^2 \Vert
&=&\Vert \epsilon^A_a +A_a -(\epsilon^A_a)^2 -A_a^2
-\epsilon^A_a A_a -A_a \epsilon^A_a
\Vert
\\
&\leq&
\Vert \epsilon^A_a -(\epsilon^A_a)^2 -\epsilon^A_a A_a -A_a \epsilon^A_a
\Vert
+ \Vert A_a -A_a^2\Vert,
\end{eqnarray*}
whose first term in the right hand side
is further bounded as follows:
\begin{eqnarray*}
\Vert \epsilon^A_a -(\epsilon^A_a)^2 -\epsilon^A_a A_a -A_a \epsilon^A_a
\Vert
&=&
\Vert (1-A_a)\epsilon^A_a -\epsilon^A_a A_a -(\epsilon^A_a)^2\Vert
\\
&=&
\Vert [A_a, \epsilon^A_a] -A_a \epsilon^A_a + (1-A_a)\epsilon^A_a -
(\epsilon^A_a)^2\Vert
\\
&\leq &
\Vert [A_a,\epsilon^A_a]\Vert
+\Vert (1-2A_a -\epsilon^A_a) \epsilon^A_a\Vert.
\end{eqnarray*}
Here we use the relation $\Vert [A_a, \epsilon^A_a]\Vert
\leq \Vert \epsilon^A_a\Vert$ and $1-2A_a -\epsilon^A_a
=1-(A_a +T(E^A_a))$ to derive
\begin{eqnarray*}
\Vert [\epsilon^A_a, A_a]\Vert
+\Vert (1-2A_a -\epsilon^A_a) \epsilon^A_a\Vert
&\leq& \Vert \epsilon^A_a \Vert
+\Vert 1-(A_a +T(E^A_a))\Vert \Vert \epsilon^A_a\Vert
\leq
2 \Vert \epsilon^A_a\Vert,
\end{eqnarray*}
where we used a relation $\Vert 1- (A_a +T(E^A_a))\Vert \leq 1$
that is derived from $0\leq A_a +T(E^A_a)\leq 2 {\bf 1}$.
Finally we obtain
\begin{eqnarray}
\Vert T(E^A_a)-T(E^A_a)^2 \Vert
\leq 2\Vert \epsilon^A_a\Vert +\Vert A_a-A_a^2\Vert.
\label{TAbound}
\end{eqnarray}
We are ready to state the following theorem.
\begin{theorem}\label{maintheorem}
Suppose that we have a pair of POVMs $A=\{A_a\}_{a\in \Omega_A}$
and $B=\{B_b\}_{b\in \Omega_B}$. For any choice of a POVM
$F=\{F_x\}_{x\in \Omega}$ and a pair of functions
$f_A: \Omega \to \Omega_A$ and $f_B: \Omega \to \Omega_B$,
\begin{eqnarray*}
&&2 D_{d_{\infty}}(A, f_A(F))D_{d_{\infty}}(B, f_B(F))
+D_{d_{\infty}}(A,f_A(F))+D_{d_{\infty}}(B,f_B(F))
\\
&&
+2 (2D_{d_{\infty}}(A,f_A(F))+ V(A))^{1/2}(2D_{d_{\infty}}(B,f_B(F))
+V(B))^{1/2}\geq \max_{a,b}\Vert[A_a,B_b]\Vert
\end{eqnarray*}
holds, where $V(A):=\max_a\Vert A_a-A_a^2\Vert$ represents
an intrinsic uncertainty of a POVM $A$ (and similarly for $V(B):=
\max_b \Vert B_b -B_b^2\Vert$).
\end{theorem}
{\bf Proof:}
Combining (\ref{mouchoi}), (\ref{Tcommutator}) and
(\ref{TAbound}), we obtain
\begin{eqnarray}
\Vert [A_a,B_b]
\Vert &&
\leq 2\Vert \epsilon^A_a \Vert \Vert \epsilon^B_b \Vert
+\Vert \epsilon^A_a \Vert +\Vert \epsilon^B_b \Vert
\nonumber
\\
&&
+2(2\Vert \epsilon^A_a\Vert +\Vert A_a-A_a^2\Vert
)^{1/2}
(2\Vert \epsilon^B_b\Vert +\Vert B_b-B_b^2\Vert
)^{1/2}.
\label{proof}
\end{eqnarray}
We take its maximum over
$a$ and $b$ to obtain the theorem.
\hfill Q.E.D.
\par
The intrinsic uncertainty of a POVM $A$ satisfies
$0\leq V(A)\leq \frac{1}{4}$.
As corollaries, we obtain some observations.
\begin{corollary}\label{forPVM}
Suppose we have a pair of projection valued measures (PVMs)
$A=\{A_a\}_{a\in \Omega_A}$ and $B=\{B_b\}_{b \in \Omega_B}$.
For any choice of a POVM
$F=\{F_x\}_{x\in \Omega}$ and a pair of functions
$f_A: \Omega \to \Omega_A$ and $f_B: \Omega \to \Omega_B$,
\begin{eqnarray*}
&&2 D_{d_{\infty}}(A, f_A(F))D_{d_{\infty}}(B, f_B(F))
+D_{d_{\infty}}(A,f_A(F))+D_{d_{\infty}}(B,f_B(F))
\\
&&
+4 D_{d_{\infty}}(A,f_A(F))^{1/2}D_{d_{\infty}}(B,f_B(F))^{1/2}
\geq \max_{a,b}\Vert[A_a,B_b]\Vert
\end{eqnarray*}
holds.
\end{corollary}
{\bf Proof:}
For a projection $P$, $P = P^2$ holds, and thus $V(A)=V(B)=0$.
\hfill Q.E.D.
\par
From this corollary one can see that
a pair of PVMs that has noncommutative elements
is not simultaneously measurable.
This tradeoff relation is what we may call
Heisenberg's uncertainty principle.
\par
On the other hand, it is possible for a pair of POVMs
to be simultaneously measurable even if they are
noncommutative with each other.
\begin{corollary}
Suppose that we have a pair of POVMs $A=\{A_a\}_{a\in \Omega_A}$
and $B=\{B_b\}_{b\in \Omega_B}$.
If they are simultaneously measurable, their
intrinsic uncertainties $V(A)$ and $V(B)$ satisfy
\begin{eqnarray*}
V(A)^{1/2}V(B)^{1/2}\geq \frac{1}{2} \max_{a,b}\Vert [A_a, B_b]\Vert.
\end{eqnarray*}
\end{corollary}
{\bf Proof:}
Put $D_{d_{\infty}}(f_A(F),A)=D_{d_{\infty}}(f_B(F),B)=0$ in
Theorem \ref{maintheorem}.
\hfill Q.E.D.
\par
The simultaneous measurability of noncommutative POVMs
is not surprising at all. In fact, if we have
a doubly indexed POVM $F=\{F_{(a,b)}\}_{(a,b)\in \Omega_A \times \Omega_B}$,
then we can construct a pair of
simultaneously measurable
POVMs $f_A(F)$ and $f_B(F)$
by functions $f_A$ and $f_B$ with $f_A(a,b)=a$ and $f_B(a,b)=b$.
The above corollary says that any such pair must have
sufficiently large intrinsic uncertainties.
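To illustrate the corollary numerically, consider again the standard unsharp joint qubit measurement $F_{(a,b)}=\frac{1}{4}({\bf 1}+a\mu\sigma_x+b\nu\sigma_z)$, $a,b\in\{\pm 1\}$, which is a POVM for $\mu^2+\nu^2\leq 1$. The following Python sketch (assuming \texttt{numpy}) checks the inequality for its marginals; for $\mu^2+\nu^2=1$ it is saturated:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
onorm = lambda X: np.linalg.norm(X, 2)     # operator norm

mu, nu = 0.6, 0.8                          # mu^2 + nu^2 = 1 (boundary case)
A = {a: (I2 + a*mu*sx)/2 for a in (+1, -1)}    # marginals of F
B = {b: (I2 + b*nu*sz)/2 for b in (+1, -1)}

VA = max(onorm(A[a] - A[a] @ A[a]) for a in A)     # V(A) = (1 - mu^2)/4
VB = max(onorm(B[b] - B[b] @ B[b]) for b in B)     # V(B) = (1 - nu^2)/4
comm = max(onorm(A[a] @ B[b] - B[b] @ A[a]) for a in A for b in B)

assert np.sqrt(VA*VB) >= comm/2 - 1e-12    # corollary, saturated here
\end{verbatim}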
\par
The following result is also easy to obtain.
We restrict the observable employed for the simultaneous
measurement to be a PVM.
\begin{corollary}
Suppose that we have a pair of POVMs $A=\{A_a\}_{a\in \Omega_A}$
and $B=\{B_b\}_{b\in \Omega_B}$.
For any choice of a PVM
$F=\{F_x\}_{x\in \Omega}$ and a pair of functions
$f_A: \Omega \to \Omega_A$ and $f_B: \Omega \to \Omega_B$,
\begin{eqnarray*}
2 D_{d_{\infty}}(A, f_A(F))D_{d_{\infty}}(B, f_B(F))
+D_{d_{\infty}}(A,f_A(F))+D_{d_{\infty}}(B,f_B(F))
\geq \max_{a,b}\Vert[A_a,B_b]\Vert
\end{eqnarray*}
holds.
\end{corollary}
{\bf Proof:}
In (\ref{mouchoi}), we can put $[T(E^A_a),T(E^B_b)]=0$.
\hfill Q.E.D.
\subsection{Uncertainty principle in $l_1$ distance}
Next we consider another distance $D_{d_1}$ induced by
the $l_1$ distance.
The following observation \cite{Werner} plays a crucial role
in the analysis. For a pair of POVMs $A=\{A_a\}_{a\in \Omega}$
and $A'=\{A'_a\}_{a\in \Omega}$,
\begin{eqnarray*}
D_{d_1}(A,A')=\sup_{\omega}\frac{1}{2}
\sum_{a\in \Omega}
|P^\omega_A(a)-P^\omega_{A'}(a)|
=\max_{\Delta \subset \Omega}
\Vert \sum_{a\in \Delta}A_a
-\sum_{a\in \Delta}A'_a\Vert
\end{eqnarray*}
holds. Thus if we define for each $\Delta_A \subset \Omega_A$
and $\Delta_B \subset \Omega_B$,
$A_{\Delta_A}:=\sum_{a\in \Delta_A}A_a$,
$E^A_{\Delta_A}:=\sum_{a\in \Delta_A}E^A_a$
and
$B_{\Delta_B}:=\sum_{b\in \Delta_B}B_b$,
$E^B_{\Delta_B}:=\sum_{b\in \Delta_B}E^B_b$,
error operators should be introduced as
$\epsilon^A_{\Delta_A}:=T(E^A_{\Delta_A})-A_{\Delta_A}$
and $\epsilon^B_{\Delta_B}:=T(E^B_{\Delta_B})-
B_{\Delta_B}$, and the analysis to obtain
equation (\ref{proof})
works
just by replacing as follows:
\begin{eqnarray*}
A_a &\to& A_{\Delta_A} \\
B_b &\to& B_{\Delta_B} \\
E^A_a &\to& E^A_{\Delta_A}\\
E^B_b &\to& E^B_{\Delta_B} \\
\epsilon^A_a &\to& \epsilon^A_{\Delta_A}\\
\epsilon^B_b &\to& \epsilon^B_{\Delta_B}.
\end{eqnarray*}
That is, it holds
\begin{eqnarray*}
\Vert [A_{\Delta_A},B_{\Delta_B}]
\Vert &&
\leq 2\Vert \epsilon^A_{\Delta_A}
\Vert \Vert \epsilon^B_{\Delta_B} \Vert
+\Vert \epsilon^A_{\Delta_A} \Vert +\Vert \epsilon^B_{\Delta_B}
\Vert
\\
&&
+2(2\Vert \epsilon^A_{\Delta_A}
\Vert +\Vert A_{\Delta_A}-A_{\Delta_A}^2\Vert
)^{1/2}
(2\Vert \epsilon^B_{\Delta_B}\Vert +\Vert B_{\Delta_B}
-B_{\Delta_B}^2\Vert
)^{1/2}.
\end{eqnarray*}
Taking the maximum over the subsets $\Delta_A$ and $\Delta_B$,
we obtain the following theorem.
\begin{theorem}
Suppose that we have a pair of POVMs $A=\{A_a\}_{a\in \Omega_A}$
and $B=\{B_b\}_{b\in \Omega_B}$. For any choice of a POVM
$F=\{F_x\}_{x\in \Omega}$ and a pair of functions
$f_A: \Omega \to \Omega_A$ and $f_B: \Omega \to \Omega_B$,
\begin{eqnarray*}
&&2 D_{d_{1}}(A, f_A(F))D_{d_{1}}(B, f_B(F))
+D_{d_{1}}(A,f_A(F))+D_{d_{1}}(B,f_B(F))
\\
&&
+2 (2D_{d_{1}}(A,f_A(F))+ V_1(A))^{1/2}(2D_{d_{1}}(B,f_B(F))
+V_1(B))^{1/2}
\\
&&
\geq \max_{\Delta_A \subset
\Omega_A, \Delta_B\subset \Omega_B}\Vert
\sum_{a\in \Delta_A}
\sum_{b \in \Delta_B} [A_a,B_b]\Vert
\end{eqnarray*}
holds, where $V_1(A):=\max_{\Delta_A\subset
\Omega_A}\Vert \sum_{a\in \Delta_A}
A_a ({\bf 1}-\sum_{a\in \Delta_A} A_a)\Vert$ represents
an intrinsic uncertainty of a POVM $A$ (and similarly for $V_1(B)$).
\end{theorem}
The corresponding corollaries can be derived easily.
For instance, the following statement holds.
\begin{corollary}
Suppose we have a pair of projection valued measures (PVMs)
$A=\{A_a\}_{a\in \Omega_A}$ and $B=\{B_b\}_{b \in \Omega_B}$.
For any choice of a POVM
$F=\{F_x\}_{x\in \Omega}$ and a pair of functions
$f_A: \Omega \to \Omega_A$ and $f_B: \Omega \to \Omega_B$,
\begin{eqnarray*}
&&2 D_{d_{1}}(A, f_A(F))D_{d_{1}}(B, f_B(F))
+D_{d_{1}}(A,f_A(F))+D_{d_{1}}(B,f_B(F))
\\
&&
+4 D_{d_{1}}(A,f_A(F))^{1/2}D_{d_{1}}(B,f_B(F))^{1/2}
\geq
\max_{\Delta_A \subset
\Omega_A, \Delta_B\subset \Omega_B}\Vert
\sum_{a\in \Delta_A}
\sum_{b \in \Delta_B} [A_a,B_b]\Vert
\end{eqnarray*}
holds.
\end{corollary}
\subsection{Example: A qubit}
As the simplest example, we study a pair of
PVMs for a single qubit.
Each projection operator is parameterized by a unit vector on the Bloch sphere as
$E({\bf n}):=\frac{1}{2}({\bf 1}+{\bf n}\cdot {\bf \sigma})$
for ${\bf n}\in {\bf R}^3$ with $|{\bf n}|=1$.
Let us consider two PVMs,
$A_{\bf n}=\{E({\bf n}),E(-{\bf n})\}$ and
$A_{\bf m}=\{E({\bf m}), E(-{\bf m})\}$ with
$\angle({\bf n},{\bf m})=\theta \ (0\leq \theta \leq \frac{\pi}{2})$.
They satisfy the following inequality.
\begin{corollary}
For any POVM $F$ and functions $f_{A_{\bf n}}$ and $f_{A_{\bf m}}$,
\begin{eqnarray*}
&&2 D_{d_{\infty}}(A_{\bf n}, f_{A_{\bf n}}(F))
D_{d_{\infty}}(A_{\bf m}, f_{A_{\bf m}}(F))
+D_{d_{\infty}}(A_{\bf n},f_{A_{\bf n}}(F))
+D_{d_{\infty}}(A_{\bf m},f_{A_{\bf m}}(F))
\\
&&
+4 D_{d_{\infty}}(A_{\bf n},f_{A_{\bf n}}(F))^{1/2}
D_{d_{\infty}}(A_{\bf m},f_{A_{\bf m}}(F))^{1/2}
\geq \frac{\sin \theta}{2}
\end{eqnarray*}
holds.
\end{corollary}
{\bf Proof:}
With $[E({\bf n}), E({\bf m})]=\frac{i}{2}{\bf \sigma}\cdot
({\bf n} \times {\bf m})$, it is immediate from Corollary \ref{forPVM}.
\hfill Q.E.D.
\\
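The commutator norm entering this corollary is easy to check numerically; a minimal sketch (ours, not from the text):
\begin{verbatim}
# Check ||[E(n), E(m)]|| = |n x m|/2 = sin(theta)/2 for qubit projectors.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def E(n):  # E(n) = (1 + n.sigma)/2
    return 0.5 * (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz)

theta = 0.7
n = np.array([0.0, 0.0, 1.0])
m = np.array([np.sin(theta), 0.0, np.cos(theta)])
C = E(n) @ E(m) - E(m) @ E(n)
print(np.linalg.norm(C, ord=2), np.sin(theta) / 2)  # both ~ 0.3221
\end{verbatim}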
This bound should be compared with one obtained in
\cite{Heinosaari},
where the following inequality was derived:
\begin{eqnarray}
D_{d_{\infty}}(A_{\bf n},f_{A_{\bf n}}(F))
+D_{d_{\infty}}(A_{\bf m},f_{A_{\bf m}}(F))
\geq \sqrt{\frac{1}{2}}
\left(\cos \frac{\theta}{2}+\sin \frac{\theta}{2}
-1
\right).
\label{eqHeino}
\end{eqnarray}
Figure \ref{fig:one} shows the contours
of the corresponding admissible regions.
One can see that our bound is tighter in some
regions and weaker in others. It is noteworthy
that our method gives a nonlinear bound, in contrast with
the linear one of \cite{Heinosaari}.
\begin{figure}[thbp]
\begin{center}
\includegraphics[width=40mm]{uncertain-1}
\end{center}
\caption{ \label{fig:one}
Admissible region of $X:=D_{d_0}(f_A(F),A)$ ($x$-axis) and
$Y:=D_{d_0}(f_B(F),B)$ ($y$-axis) for $\theta=\frac{\pi}{2}$.
The curved line is the contour of the admissible region
obtained by our method; the straight line is that
obtained from (\ref{eqHeino}).
}
\end{figure}
\section{Summary}
In this paper,
we investigated a limitation on the simultaneous
measurement of two arbitrary (discrete)
positive operator valued measures.
Following Werner's work \cite{Werner},
we introduced a distance between observables
based on a distance between probability distributions.
Introduction of the error operators simplified
the analysis.
We derived a novel inequality (Theorem \ref{maintheorem}), a possible
representation of Heisenberg's uncertainty
principle, which relates this limitation to
noncommutativity.
As a byproduct, we obtained a
corollary indicating a necessary condition for
a pair of POVMs to be simultaneously measurable.
Compared with previous works on this subject,
our result has the advantage of applying to an arbitrary
pair of discrete POVMs.
Extending our result to other distances remains for future work.
\\
{\bf Acknowledgments}
\par
The authors thank an anonymous referee for fruitful comments.
|
1,108,101,564,167 | arxiv |
\section{Memory Profile for models}
Word Embedding Layers: 69MB\\BiLSTM Layers: 9.5MB\\
Classification and tagging layers: 0.94MB
\section{Data Relabeling Strategies} \label{app:data_label}
In order to train models suitable for on-device NLU, we prepare different datasets to assist in training, evaluation and testing of candidate models. We start with the same dataset that the cloud system uses for training its models. This is a curated list of annotated utterances that were either mined from live production traffic and annotated, or generated synthetically (using rules and grammars). An utterance annotation in this dataset contains its domain label, intent label and the slot labels for all the words in the utterance. However, since the on-device NLU supports only a subset of cloud domains and intents, we apply multiple re-labeling strategies to the utterance annotations. These re-labeling strategies revolve around the basic idea of how we treat annotated labels for utterances that belong to an unsupported domain or intent.
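The following Python sketch (ours; the label names and supported sets are hypothetical placeholders, not the actual production schema) illustrates the basic idea behind one such strategy.
\begin{verbatim}
# Illustrative relabeling: out-of-scope domains/intents collapse to
# OOD / OODIntent, and their slot labels collapse to Other.
SUPPORTED_DOMAINS = {"Music", "HomeAutomation"}          # hypothetical
SUPPORTED_INTENTS = {"PlayMusicIntent", "TurnOnIntent"}  # hypothetical

def relabel(domain, intent, slots):
    if domain not in SUPPORTED_DOMAINS:
        return "OOD", "OODIntent", ["Other"] * len(slots)
    if intent not in SUPPORTED_INTENTS:
        return domain, "OODIntent", ["Other"] * len(slots)
    return domain, intent, slots
\end{verbatim}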
\iffalse
Table \ref{tab:lab_strat} lists the 5 relabeling strategies that we applied and their resultant dataset names.
\begin{table*}[ht]
\centering
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{@{}|>{\centering\arraybackslash}m{1.5in}|>{\centering\arraybackslash}m{2.5in}|>{\centering\arraybackslash}m{2.5in}|@{}}
\hline
\textbf{Dataset Name} & \textbf{Utterance in Supported Domain} & \textbf{Utterance in Unsupported Domain} \\ \hline
domain\_filtered &
\begin{itemize}[leftmargin=*]
\item Domain, Intent and slot labels untouched.
\end{itemize} &
\begin{itemize}[leftmargin=*]
\item Domain, Intent and slot labels mapped to OOD, OODIntent and Other respectively
\end{itemize} \\ \hline
intent\_filtered\_1.0 &
\begin{itemize}[leftmargin=*]
\item Domain Label untouched.
\item Unsupported Intents and Slots mapped to OODIntent and Other
\end{itemize} &
\begin{itemize}[leftmargin=*]
\item Domain, Intent and slot labels mapped to OOD, OODIntent and Other respectively
\end{itemize} \\ \hline
intent\_filtered\_1.1 &
\begin{itemize}[leftmargin=*]
\item Domain Label untouched.
\item Unsupported Intents Mapped to OODIntent.
\item Supported Slots in all Intents untouched.
\end{itemize} &
\begin{itemize}[leftmargin=*]
\item Domain, Intent mapped to OOD and OODIntent.
\item Supported slots untouched
\end{itemize} \\ \hline
intent\_filtered\_2.0 &
\begin{itemize}[leftmargin=*]
\item Domain and Intent labels untouched
\item Unsupported Slots mapped to Other
\end{itemize} &
\begin{itemize}[leftmargin=*]
\item Domain, Intent and slot labels mapped to OOD, OODIntent and Other respectively
\end{itemize} \\ \hline
intent\_filtered\_2.1 &
\begin{itemize}[leftmargin=*]
\item Domain and Intent labels untouched
\item Unsupported Slots mapped to Other
\end{itemize} &
\begin{itemize}[leftmargin=*]
\item Domain, Intent mapped to OOD and OODIntent.
\item Supported slots untouched
\end{itemize} \\ \hline
\end{tabular}
\caption{Data Relabeling Strategies} \label{tab:lab_strat}
\end{table*}
\fi
\section{Dataset Selection Experiments} \label{app:data_sel}
The model configuration used for this stage is: Bi-LSTM Hidden Size: \textbf{512}; Tagging Architecture: \textbf{CRF}; Number of Bi-LSTM layers: \textbf{1}; Pre-trained Embeddings: \textbf{Alexa300}.
\iffalse
\begin{table*}[tb]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{@{}|p{0.2\textwidth}|p{0.1\textwidth}|p{0.1\textwidth}|p{0.1\textwidth}|p{0.1\textwidth}|p{0.1\textwidth}|p{0.1\textwidth}|@{}}
\hline
\textbf{Dataset} & \textbf{Supported IRER} & \textbf{Overall IRER} & \textbf{Overall ICER} & \textbf{Overall DCER} & \textbf{Overall SER} & \textbf{Model Size (MB)} \\ \hline
domain\_filtered & 11.21\% & 3.24\% & 0.80\% & 0.67\% & 0.92\% & 222.41 \\ \hline
\textbf{intent\_filtered\_1.0} & \textbf{11.19\%} & \textbf{3.21\%} & \textbf{0.78\% }&\textbf{ 0.65\%} & \textbf{0.92\% }& \textbf{221.33}\\ \hline
intent\_filtered\_1.1 & 11.27\% & 3.22\% & 0.79\% & 0.66\% & 0.93\% &221.33 \\ \hline
intent\_filtered\_2.0 & 11.21\% & 3.23\% & 0.79\% & 0.66\% & 0.92\% & 221.85 \\ \hline
intent\_filtered\_2.1 & 11.26\% & 3.25\% & 0.81\% & 0.68\% & 0.92\% & 221.85 \\ \hline
\end{tabular}
\caption{Dataset Selection - Best performing dataset on `TEST-LIVE' testset} \label{tab:data_sel}
\end{table*}
We created five different datasets using the data relabeling strategies. In order to find the best-performing dataset, we keep the model configuration fixed and train models with these datasets. The dataset used to train the model with the best performance (lowest IRER), against the TEST-LIVE testset, represents the best dataset. Table \ref{tab:data_sel} shows the model performances. We select the dataset \textbf{intent\_filtered\_1.0} for all our further experiments. We fixed the model configuration as shown below:
\begin{itemize}
\item Bi-LSTM hidden size: 512
\item Embedding Dimension: 300
\item Tagging Architecture: CRF
\item Number of Bi-LSTM layers: 1
\item Pre-trained Embeddings: Alexa300
\end{itemize}
\fi
\section{SVD Based Compression - Analysis}
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\linewidth]{singular_val_vs_component_100_dim.png}
\caption{Singular Values vs \# SVD components retained}
\label{fig:svd1}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\linewidth]{SVD_reconstruction_loss_vs_num_components_100.png}
\caption{SVD Reconstruction loss vs \# SVD components retained}
\label{fig:svd2}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\linewidth]{irer_vs_svd.png}
\caption{IRER vs \# SVD components retained}
\label{fig:svd3}
\end{figure}
\section{Introduction}
\blfootnote{
\hspace{-0.65cm}
This work is licensed under a Creative Commons
Attribution 4.0 International License.
License details:
\url{http://creativecommons.org/licenses/by/4.0/}.
}
\input{introduction.tex}
\section{Related Work} \label{related work}
\input{related_work.tex}
\pagebreak
\section{Method}
\input{model_compression.tex}
\section{Experiments}
\input{experiments.tex}
\section{Results and Analysis}
\input{results.tex}
\section{Conclusion}
In this paper, we present approaches for extreme model compression for performing natural language understanding on resource-constrained devices. We use a unified multi-domain, multi-task neural model that performs DC, IC and NER for all supported domains. We discuss model compression approaches to compress the bulkiest components of our models - the word embeddings, and propose a task-aware end-to-end compression method based on deep compositional code learning where we jointly train the compression layers with the downstream task. This approach reduces word-embedding sizes to just a few MB, achieving a word-embedding compression rate of 98.4\%, and outperforms all other task-agnostic and task-aware embedding compression baselines. We further apply post-training 8-bit linear quantization to compress the recurrent layers of the model. These approaches together result in a net model compression rate of 97.5\%, with a minimal performance degradation of 3.64\% when compared to the uncompressed model baseline.
DCCL approaches are complementary to other compression approaches such as knowledge distillation and model pruning. While our work demonstrates the effectiveness of task-aware DCCL on the classification and tagging tasks in NLU, the approach itself is generic and can be applied to other NLP tasks that rely on large word-embeddings. As part of future work, we would like to explore the effectiveness of task-aware DCCL on NLP tasks such as machine translation and language modeling. We would also like to explore compression of models with advanced architectures using contextual embeddings.
\bibliographystyle{coling}
\subsection{NLU Task Model Architecture}
\label{sec:arch}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.95\linewidth]{mt_dnn.png}
\caption{Multi-Domain, Multi-Task Recurrent Architecture for on-device NLU.}
\label{fig:mt_model}
\end{figure*}
\noindent \textbf{Model Architectural Constraints:} Our choice of a suitable on-device NLU architecture is largely driven by hardware resource constraints. First, on-device systems come with a strict memory budget, restricting our choices to architectures with fewer parameters. Second, the architectures chosen should not only be amenable to model compression, but should result in \textit{minimal} degradation in performance on compression. Third, on-device models have rigorous latency targets, requiring fast inference. This restricts our choices to simpler, seasoned architectures, like LSTMs and GRUs, that require fewer layers and FLOPs as opposed to the newer computationally intensive transformer-based architectures like BERT. Moreover, on-device inference engines often lack support for sophisticated layers such as self-attention layers. Driven by these constraints and relying on the considerable effectiveness of recurrent architectures \cite{hakkani-tr2016multi-domain,Liu_2016,zhang_2016}, we use a multi-domain, multi-task RNN model (MT-RNN), built using bi-directional LSTMs (Figure \ref{fig:mt_model}) for performing NLU. We train a single neural model that can jointly perform DC, IC and NER for a given input utterance. Furthermore, in order to reduce inference latency, we use word-level LSTMs as opposed to character or sub-word based models. \\\vspace{-8pt}
\noindent \textbf{Architecture Details} - Our task model, which we call the MT-RNN model, is shown in Figure~\ref{fig:mt_model}. It consists of a \textit{shared} bi-directional LSTM (Bi-LSTM) to extract features shared by all tasks, and \textit{task-specific} layers for the classification and tagging tasks. The inputs to the recurrent layers are pretrained embeddings, which are fine-tuned during training. The input to each of the classification components is a sentence representation, obtained by concatenating the final states of the forward- and the backward-LSTM. This is passed on to a fully-connected dense layer with a softmax to predict the domain and intent for the utterance. The tagging layer produces a slot tag for each word in the utterance. The input at each time step consists of the forward- and backward-LSTM states for each word and the output is the slot tag. We choose the widely used \textit{Conditional Random Fields (CRF)} layer for NER. The network is trained to minimize a joint NLU loss defined as the sum of the cross-entropy losses for IC and DC and the CRF loss for NER:
$$ \mathcal{L}_{NLU} = \mathcal{L}_{DC} + \mathcal{L}_{IC} + \mathcal{L}_{NER} $$
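A minimal PyTorch sketch of this joint objective is given below (ours; layer sizes are illustrative, and the CRF loss is replaced by per-token cross-entropy purely for brevity).
\begin{verbatim}
# Sketch of the MT-RNN joint loss (illustrative, not the production model).
import torch
import torch.nn as nn

class MTRNN(nn.Module):
    def __init__(self, emb_dim, hidden, n_domains, n_intents, n_tags):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                            batch_first=True)
        self.dc = nn.Linear(2 * hidden, n_domains)
        self.ic = nn.Linear(2 * hidden, n_intents)
        self.ner = nn.Linear(2 * hidden, n_tags)   # CRF omitted for brevity

    def forward(self, x):                       # x: (batch, seq, emb_dim)
        h, _ = self.lstm(x)                     # h: (batch, seq, 2*hidden)
        H = h.size(-1) // 2
        # sentence representation: final forward and backward states
        sent = torch.cat([h[:, -1, :H], h[:, 0, H:]], dim=-1)
        return self.dc(sent), self.ic(sent), self.ner(h)

ce = nn.CrossEntropyLoss()
def nlu_loss(model, x, y_dom, y_int, y_tags):
    d, i, t = model(x)
    return ce(d, y_dom) + ce(i, y_int) + ce(t.flatten(0, 1), y_tags.flatten())
\end{verbatim}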
In the following sections, we describe our approach for compressing the word embeddings and the recurrent components of our MT-RNN model.
\subsection{Word Embedding Compression}
\label{sec:we_compression}
Word embeddings have been shown to be the largest components in an NLP model, owing to large vocabulary sizes and floating point parameters, accounting for \textgreater 90\% of the model sizes \cite{shu2017compressing}. Hence, compressing embeddings is crucial for reducing NLP model sizes.
Our approach is based on additive quantization~\cite{babenko2014additive}, which has shown great success in compressing word embeddings, achieving high compression rates \cite{shu2017compressing}.
\subsubsection{Additive Quantization using Deep Compositional Code Learning} Additive quantization \cite{babenko2014additive} aims to approximate vectors by representing them as a sum of basis vectors, called codewords. Originally proposed for image compression and approximate nearest neighbor search, this method has recently been used for post-processing word embedding compression~\cite{chen2018learning,shu2017compressing} achieving high compression rates, upwards of 90\%, on modest vocabulary sizes.
Let $W \in \mathcal{R}^{V \times D}$ be the original word embedding matrix, where $V$ denotes the vocabulary size and $D$ denotes the embedding size. Using additive quantization, the original word embedding matrix is compressed into a matrix of integer codes as $W_c \in \mathcal{Z_K}^{V \times M}$, where $\mathcal{Z_K}$ denotes the set of integers from $1$ to $K$, $\mathcal{Z_{K}} = \{1,2,\ldots,K\}$. This is achieved using a set of $M$ codebooks, $C_1$ through $C_M$, $C_m \in \mathcal{R}^{K \times D}$, each containing $K$ codewords of size $D$. $C_m^k$ is the $k^{\text{th}}$ codeword in the $m^{\text{th}}$ codebook.
For each word embedding $w_i$ in $W$, the compressed code is $w_{ci}$, where
$$w_{ci} = [z_1^i, z_2^i, \ldots, z_M^i] \qquad \text{where } z_m^i \in \mathcal{Z_K}, \forall m \in \{1,2,\ldots,M\} $$
The original word embedding $w_i$ is approximated from the codes and codebooks as $w_i'$ by summing the $(z_m^i)^{\text{th}}$ codeword in the $m^{\text{th}}$ codebook over all codebooks: $$ w_i' = \sum_{m=1}^M C_m^{z_m^i}$$
\begin{figure*}[tb]
\centering
\includegraphics[width=1\linewidth]{Autoencoder.pdf}
\caption{Deep Compositional Code Learning Architecture.}
\label{fig:code_learning}
\end{figure*}
\newcite{shu2017compressing} propose the deep compositional code learning (DCCL) architecture to learn discrete codes and codebooks for a given word embedding matrix through an unsupervised autoencoding task. In this model, a continuous word vector input, $w_i \in \mathcal{R}^{D}$ is first projected into a lower dimensional space using a linear transformation. This is projected through a second linear layer into $M$ different $K$-dimensional vectors. Each of these $M$ vectors is passed through a gumbel-softmax activation to get $M$ one-hot vectors, $r_m^i \in \mathcal{R}^{1 \times K}$:
$$ r_m^i = \sigma_G(f_L({w_i})) \qquad \forall m \in \{1,2,\ldots,M\}$$
where $f_L$ denotes the linear transformations and $\sigma_G$ denotes the gumbel-softmax activation. The gumbel-softmax activation allows the network to learn discrete codes via gumbel-sampling, while also making the network differentiable, enabling the backpropagation of gradients \cite{jang2016categorical}.
These one-hot vectors are converted to integer codes corresponding to the input word embedding. In order to reconstruct the word embedding, the following operations are performed:
\begin{equation}
\label{reconstruct}
w_i' = \sum_{m=1}^M r_m^i*C_m \qquad \text{where } r_m^i \in \mathcal{R}^{1 \times K}, C_m \in \mathcal{R}^{K \times D}, w_i' \in \mathcal{R}^{1 \times D}
\end{equation}
Figure~\ref{fig:code_learning} provides an overview of the DCCL model. Since the word embedding matrix $W$ can be reconstructed using just the codes $W_c$ and the codebooks $C=[C_1,\ldots,C_M]$, the original embedding matrix $W$ with $V \times D$ floating point values need not be stored on-device, thus achieving the required compression. Furthermore, $W_c$ is an integer matrix requiring only $M\log_2 K$ bits per embedding, and the codebook $C$ requires just $M \times K \times D \times 32$ bits on disk, where each floating point element takes 32 bits. By choosing $M$ and $K \ll V$, the size of the codes and codebooks can be greatly reduced when compared to the original embedding matrix.
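The following PyTorch sketch (our own rendering, with illustrative hyperparameters) summarizes the encoder/decoder just described.
\begin{verbatim}
# Sketch of DCCL: encode a word vector into M soft one-hot codes of size K,
# then reconstruct it as a sum of selected codewords.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCCL(nn.Module):
    def __init__(self, D, M=32, K=16, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(D, hidden), nn.Tanh(),
                                 nn.Linear(hidden, M * K))
        self.codebooks = nn.Parameter(0.1 * torch.randn(M, K, D))
        self.M, self.K = M, K

    def forward(self, w, tau=1.0):              # w: (batch, D)
        logits = self.enc(w).view(-1, self.M, self.K)
        r = F.gumbel_softmax(logits, tau=tau)   # (batch, M, K), soft one-hot
        return torch.einsum('bmk,mkd->bd', r, self.codebooks)

# After training, only argmax(logits) (M log2 K bits per word) and the
# codebooks need to be stored on disk.
\end{verbatim}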
\subsubsection{Task-agnostic Post-Processing Compression}
\newcite{shu2017compressing} propose to use the DCCL architecture to perform post-processing embedding compression, where embeddings are compressed after the downstream task model has been trained. The task model is first initialized with pretrained word embeddings that are fine-tuned during task model training to obtain task-specific embeddings. These are compressed using the DCCL architecture trained on an unsupervised autoencoding task. The input to the autoencoder is the embedding matrix $W \in \mathcal{R}^{V \times D}$ and the model is trained to minimize the average embedding reconstruction loss (denoted by $l(W, W')$) for words in the embedding matrix:
$$l(W, W') = \frac{1}{V}\sum_{i=1}^V (w_i - w_i')^2$$
DCCL is shown to outperform other approaches such as parameter pruning and product quantization on sentence classification and machine translation tasks.
Since compression is performed as a post-processing step after the task model is trained, the compression algorithm has no information about the downstream task, making the compression task-agnostic, which results in several drawbacks. First, unsupervised post-processing compression treats all words equally for compression. However, in practice, some words may be more important than others for the downstream task. Hence, better reconstructions of more important words may benefit the downstream task. Second, post-processing compression is typically lossy, resulting in a degradation in downstream performance since the task model is not adapted to the compression error. We propose a task-aware end-to-end compression approach which aims to address these issues.
\pagebreak
\subsubsection{Task-aware End-to-End Compression}
\label{sec:ta_comp}
\begin{figure*}[tb]
\centering
\includegraphics[width=1\linewidth]{E2E_Compression.pdf}
\caption{Task-aware end-to-end compression with the MT-RNN model.}
\label{fig:e2e}
\end{figure*}
Our algorithm improves on the aforementioned approach by training the DCCL model, i.e., the compression model, jointly with the downstream task model (Figure \ref{fig:e2e}). End-to-end training allows the compression model to receive signals about the downstream task, thus adapting the compression to the downstream task. Intuitively, since the compression model now has the information about how the words are used in the downstream task (via the downstream loss), it can spend more network capacity in achieving better reconstructions for more important words. At the same time, the downstream task model also adapts to the lossy reconstructions learned by the compression model, thus improving on the downstream performance. We call this \textit{task-aware end-to-end compression}, where the compression algorithm takes the downstream task loss into account during embedding compression.
In order to perform task-aware compression with a DCCL model, we replace the original embedding lookup operations in the task model with layers from the DCCL model, i.e., the compression layers. The input to our model is now a sequence of $L$ word embeddings corresponding to words from the input text utterances. These are passed through the compression layers and are reconstructed, as shown in equation \ref{reconstruct}, to obtain a sequence of $D$ dimensional word representations corresponding to each word in the input. The word representations are then fed to the recurrent layers in the task model and the remaining network is unchanged. The entire setup is trained end-to-end to minimize the downstream task loss and the gradients are back-propagated through the entire network, including the compression layers. Further, the compression layers can be initialized with pretrained model parameters from the task-agnostic DCCL model, and the NLU layers can be initialized from a trained NLU model.
Training an end-to-end DCCL model is tricky, especially when the number and size of the codebooks are large. The stochasticity introduced by gumbel-sampling can easily destabilize the training, leading to sub-optimal convergence. For these cases, we ground the training by adding
the word-embedding reconstruction loss to the downstream task loss as follows:
\begin{equation*}
\mathcal{L} = \mathcal{L}_{NLU} + \mathcal{L}_{e} \text{ where } \mathcal{L}_{e} = \frac{1}{N}\sum_{i=1}^N \Big(w_i - w_i'\Big)^2
\vspace{-5pt}
\end{equation*}
Adding the embedding reconstruction loss not only stabilizes the training, but also provides stronger gradients to the compression layers. Note that unlike task-agnostic compression where all words are treated equally for compression, the embedding reconstruction loss term in task-aware compression considers only the words appearing in the input batch. This ensures that the words that are more frequent in the training data have better reconstructions, resulting in better downstream performance.
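Schematically, one training step of the task-aware setup looks as follows (a sketch; the \texttt{task\_model.loss} interface and tensor shapes are our assumptions for illustration).
\begin{verbatim}
# Task-aware step (sketch): compression layers replace the embedding lookup;
# the reconstruction term is restricted to words in the current batch.
def task_aware_loss(dccl, task_model, pretrained_emb, token_ids, targets):
    w = pretrained_emb[token_ids]               # (batch, seq, D)
    w_rec = dccl(w.flatten(0, 1)).view_as(w)    # reconstructed embeddings
    l_nlu = task_model.loss(w_rec, targets)     # downstream NLU loss
    l_e = ((w - w_rec) ** 2).mean()             # batch-restricted recon. loss
    return l_nlu + l_e
\end{verbatim}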
\subsection{Recurrent Layer Compression} Quantization~\cite{hubara2017quantized} is a simple and effective technique for model compression. Quantization maps each floating point model parameter to its closest representative from a pre-chosen set of floating-point values. More concretely, the model parameter range is divided into $B$ equally spaced bins (or buckets), and each parameter is assigned its closest bin. The bins can be represented by integer indices and require at most $\log_2 B$ bits. For instance, with 256 bins, a 32-bit floating point parameter can be represented by an integer bin index occupying just 8 bits.
We apply post-training 8-bit linear quantization to quantize the recurrent layers of the model. Since 32-bit floating point model parameters are now represented by 8-bit integers, this results in an instant 4$\times$ compression. Furthermore, quantization improves model latency, as all the floating point operations are performed using integers. While more sophisticated compression techniques exist for compressing recurrent layers, we found that quantization was extremely effective and resulted in no degradation in performance.
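A minimal sketch of such a per-tensor linear quantizer (ours; the production engine may differ in details such as range handling) is:
\begin{verbatim}
# Post-training 8-bit linear quantization of a weight tensor (sketch).
import numpy as np

def quantize_8bit(w, bins=256):
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (bins - 1) or 1e-12  # guard against constant tensors
    q = np.round((w - lo) / scale).astype(np.uint8)  # bin index, 8 bits each
    return q, lo, scale

def dequantize(q, lo, scale):
    return lo + scale * q.astype(np.float32)
\end{verbatim}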
|
1,108,101,564,168 | arxiv | \section{intrduction}
The study of the properties of interacting Dirac (or Weyl) fermions in (topological) semimetals under a magnetic field is a fundamental subject of condensed matter physics \cite{Castro,Goerbig}. One of the central themes is to investigate the orbital magnetization (OM) of the Dirac fermions (DFs) with Coulomb interactions. The OM of an electron system is usually defined as \cite{Shi}
\begin{equation}
M=-(\partial \Omega/\partial B)_{T,\mu} \label{mgn}
\end{equation}
where $\Omega = \Omega(T,\mu,B)$, as a function of the temperature $T$, the chemical potential $\mu$, and the magnetic field $B$, is the thermodynamic potential. Equation (\ref{mgn}) is equivalent to a statistical average of the OM operator \cite{Hirst}. However, for Dirac (or Weyl) fermions, Eq. (\ref{mgn}) is ill defined because the occupation of the Landau levels in the lower band leads to divergence of $\Omega$ and thereby $M$. For noninteracting DFs in graphene, $\Omega$ can be evaluated with a special method \cite{Gusynin,Hesse,SGB,Slizovskiy} by which the field $B$ dependent part of $\Omega$ is separated out. The effects of finite-temperature occupations and the impurity broadening of the Landau levels on the OM of the noninteracting DFs have been studied \cite{SGB,Slizovskiy,Koshino}. Nonetheless, for interacting DFs, it is not easy to separate the $B$-dependent part of $\Omega$ from the $B$-independent part. A study of the OM of Dirac fermions with Coulomb interactions is lacking. How to calculate the OM of interacting DFs is still an open question. In this paper, we develop a general approach for solving this problem and use it to calculate the OM of interacting Dirac fermions in graphene.
\section{formalism}
The electrons in graphene are moving on a honeycomb lattice of carbon atoms. The Hamiltonian of the electrons with a neutralizing background is
\begin{equation}
H=-t\sum_{\langle ij\rangle s}c^{\dagger}_{is}c_{js}+U\sum_{j}\delta n_{j\uparrow}\delta n_{j\downarrow} +\frac{1}{2}\sum_{i\neq j}v_{ij}\delta n_{i}\delta n_{j} \nonumber\\
\end{equation}
where $c^{\dagger}_{is}$ ($c_{is}$) creates (annihilates) an electron of spin $s$ in site $i$, $\langle ij\rangle$ sums over the nearest-neighbor (NN) sites, $t \approx$ 3 eV is the NN hopping energy, $\delta n_{is}=n_{is}-n_s$ is the number deviation of electrons of spin $s$ at site $i$ from the average occupation $n_s$, and $U$ and $v_{ij}$ are the Coulomb interactions between electrons. In real space, $v_{ij} = v(r_{ij})$ with $r_{ij}$ the distance between sites $i$ and $j$ is given by
\begin{equation}
v(r) = \frac{e^2}{r}[1-\exp(-q_0r)], \nonumber\\
\end{equation}
where $q_0$ is a parameter taking into account the wavefunction spreading effect in the short-range interactions between electrons. Here we take $q_0 = 0.5/a_0$ with $a_0 \approx 2.46$ \AA~ as the lattice constant of graphene. For carrier concentration close to the charge neutrality point (CNP), one usually adopts the simplified continuum model. With the continuum model and using the mean-field theory (MFT, or the self-consistent Hartree-Fock approximation), we have recently studied the Landau quantization of the interacting electrons taking into account the charge and spin orderings and the exchange interactions between all the levels \cite{Yan}.
According to the many-particle theory \cite{Luttinger}, the thermodynamical potential $\Omega$ per unit volume of an electron system under a magnetic field $B$ is given by
\begin{eqnarray}
\Omega &=& k_BT\{\Phi-\frac{B}{2\pi}\sum_{k\omega}\exp(i\omega\eta){\rm Tr}[\Sigma(k,i\omega)G(k,i\omega) \nonumber\\
&&-\ln(-G(k,i\omega))]\} \label{tmp}
\end{eqnarray}
where $\Phi$ is the `free energy' functional of the Green's function $G$, $\Sigma$ is the self-energy, $k$ is the state index, $\omega$ is the fermionic Matsubara frequency, and $\eta$ is an infinitesimal small positive quantity. For Dirac fermions in graphene, $G$ and $\Sigma$ are $2\times 2$ matrices in the space of sublattices $a$ and $b$, and $k$ stands for $(n,v,s)$ with $n,v,s$ respectively the indexes of the Landau level (LL) and valley and spin \cite{Yan}. The self-energy matrix element $\Sigma_{ll'}(k,i\omega)$ with $l (l') = a$ or $b$ is related with $\Phi$ by
\begin{equation}
\Sigma_{ll'}(k,i\omega) = \delta\Phi/\delta G_{l'l}(k,i\omega), \label{sfe}
\end{equation}
which ensures the microscopic conservation law being satisfied \cite{Baym}. The point here is, after the summation over the Matsubara frequency, $\Omega$ can be expressed as the sum over the LLs from $n = 0$ to $\infty$. We will use the units in which $\hbar = e = c = a_0 = 1$, the energy unit $\epsilon_0 = \hbar v_0/a_0 = 1$ (with $v_0$ the Fermi velocity of electrons in graphene), and the unit of magnetic field $B_0 = \hbar c/ea_0^2 = 1$.
\begin{figure}[t]
\centerline{\epsfig{file=fig1.ps,height=8.5cm}}
\caption{(color online) Sketch of Landau levels in momentum space. Under a magnetic field, the states in momentum space are quantized onto the circles. The red dashed circle between the $N$th and $(N+1)$th Landau levels is the cutoff.}
\end{figure}
To get rid of the divergence difficulty, we consider a system in momentum space containing a finite number of LLs as shown in Fig. 1. The cutoff momentum is given by $k_c = \sqrt{(2N+1)B}$ where $N$ is the highest Landau index at the field $B$. The thermodynamic potential of this finite system is then given by $\Omega_N(B)$ (suppressing the $T$ and $\mu$ dependence for brevity). For fixed $k_c$, the number $N$ changes as $B$ varies. When the magnetic field $B$ varies from $B = k_c^2/(2N+1)$ to $B + \Delta B$ with $\Delta B = 2B/(2N-1)$, the index of the highest LL changes to $N-1$. We then define the OM of the finite system as
\begin{equation}
M=-\frac{\Omega_{N-1}(B+\Delta B)-\Omega_N(B)}{\Delta B}. \label {mdf}
\end{equation}
The ratio given by Eq. (\ref{mdf}) with $k_c \to \infty$ can be considered as the special limit of the derivative in Eq. (\ref{mgn}). For a sufficiently large cutoff $k_c$, this definition should give rise to the result of the entire system. For low carrier concentration close to the CNP, the cutoff can be taken as $k_c = 1$. Here, we should remark that our finite system of $N$ LLs is part of the whole system of infinite LLs. It does not mean we can consider an isolated system of only $N$ LLs from the beginning. For the Dirac fermions, the cutoff for such an isolated system leads to unphysical results. Considering such an isolated Dirac system would be equivalent to assuming that only the top $N$ LLs of the lower band are occupied while the remaining lower LLs are empty. This is apparently unphysical.
Now that $\Omega_N(B)$ contains $N$ terms, we define
\begin{equation}
\Omega_N = BS_N(B)
\end{equation}
and suppose that each term in $S_N(B)$ is an analytic function of $B$. Write $S_{N-1}(B+\Delta B) = S_{N}(B+\Delta B)-y_{N}(B+\Delta B)$ with $y_N$ the $N$th term in the sum $S_N$. Then, by expanding $S_{N}(B+\Delta B)$ to order $(\Delta B)^2$ and $y_{N}(B+\Delta B)$ to order $\Delta B$, the OM can be expressed as
\begin{eqnarray}
M &=&-S_N(B)-\frac{2N+1}{2N-1}[BS'_N(B)+\frac{B^2S_N''(B)}{2N-1 }]\nonumber\\
~~&&+(N+1/2)[y_N(B) + By_N'(B)/(N-1/2)]
\label{mgnr}
\end{eqnarray}
where the primes mean the derivatives with respect to $B$.
\section {OM of noninteracting Dirac fermions}
As an example, here, we consider the free Dirac fermions in graphene at zero temperature. The Hamiltonian of a single Dirac fermion is
\begin{eqnarray}
H_v(p) = s_vp_x\sigma_1+p_y\sigma_2, \label{hfdf}
\end{eqnarray}
where $s_v = 1$ (-1) for particle in valley $v = K$ ($K'$), the momentum $\vec p$ in each valley is measured from the Dirac point, and the $\sigma$'s are the Pauli matrices operating in the sublattice ($a, b$) space.
\begin{eqnarray}
H_{vn} = \sqrt{2Bn}\sigma_1, \label{ldq}
\end{eqnarray}
where $n$ is the LL index. The LLs are obtained as $\epsilon_{\lambda}(n) = \lambda\sqrt{2Bn}$ with $\lambda = \pm$ for $n \ne 0$, and $\epsilon_0 = 0$ for $n=0$. At CNP and $T = 0$, the LLs in the lower band are fully occupied while the LLs in the upper band are completely empty. The thermodynamic potential reads (see Appendix)
\begin{eqnarray}
\Omega &=& \frac{k_BTB}{2\pi}\sum_{k\omega}\exp(i\omega\eta){\rm Tr}\ln(-G(k,i\omega)) \nonumber\\
&=& \frac{2B}{\pi}\sum_{n}\epsilon_{-}(n), \label{tmpf}
\end{eqnarray}
where the $k$ sum in the first line is understood over the Landau index $n$ and the valley $v$ and the spin $s$. The sum $S_N(B)$ is then obtained as \cite{ram}
\begin{eqnarray}
S_N(B) &=& -c_0\sum_{n=1}^N\sqrt{n} \nonumber\\
&=& -c_0[\frac{2}{3}(N+1/2)^{3/2}+\zeta(-1/2)+O(\frac{1}{\sqrt{N}})]\nonumber
\end{eqnarray}
with $c_0 = 2\sqrt{2B}/\pi$ and $\zeta(-1/2) = -0.207886225$. We then have $2BS_N'(B)=-4B^2S_N''(B) = S_N(B)$ and $By_N'(B) = y_N/2 = -c_0\sqrt{N}/2$.
$M$ is calculated as
\begin{eqnarray}
M &=&-S_N(B)\{1+\frac{1}{2}\frac{2N+1}{2N-1}[1-\frac{1}{2(2N-1)}]\}\nonumber\\
~~&&-c_0\sqrt{N}(N+1/2)(1 + \frac{1}{2N-1}) \nonumber\\
&=&\frac{3c_0}{2}\zeta(-1/2)+O(\frac{1}{\sqrt{N}}). \label{fdf}
\end{eqnarray}
Due to the expansion of $S_N(B+\Delta B)$ to second order in $\Delta B$ and of $y_N(B+\Delta B)$ to linear order in $\Delta B$, there are precise cancellations from $O(N^{3/2})$ to $O(1)$ in Eq. (\ref{mgnr}). The result given by Eq. (\ref{fdf}) is consistent with existing results \cite{SGB,Slizovskiy,Ghosal}.
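This cancellation can be checked numerically; the following sketch (ours) evaluates Eq. (\ref{mgnr}) for the free Dirac gas at CNP and compares the result with Eq. (\ref{fdf}), with $\zeta(-1/2)$ hard-coded to the value quoted above.
\begin{verbatim}
# Numerical check of Eq. (fdf) at CNP (units hbar = e = c = a0 = 1).
import numpy as np

B = 1e-4
N = int(1.0 / (2 * B) - 0.5)            # cutoff k_c = 1
c0 = 2 * np.sqrt(2 * B) / np.pi
S = -c0 * np.sum(np.sqrt(np.arange(1, N + 1)))
BSp, B2Spp = S / 2, -S / 4              # S ~ sqrt(B): B S' = S/2, B^2 S'' = -S/4
y, Byp = -c0 * np.sqrt(N), -0.5 * c0 * np.sqrt(N)
M = (-S - (2 * N + 1) / (2 * N - 1) * (BSp + B2Spp / (2 * N - 1))
     + (N + 0.5) * (y + Byp / (N - 0.5)))
zeta_half = -0.207886225                # zeta(-1/2), value quoted in the text
print(M, 1.5 * c0 * zeta_half)          # agree up to O(1/sqrt(N))
\end{verbatim}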
At finite doping with chemical potential $\mu > 0$ and $T = 0$, the sum $S_N(B)$ is given by
\begin{eqnarray}
S_N(B) = \frac{2}{\pi}[\sum_{n=1}^N(-\sqrt{2Bn}-\mu)+\sum_{n=1}^{N_F}(\sqrt{2Bn}-\mu)-\mu] \nonumber
\end{eqnarray}
where the sums in the square brackets are, respectively, from the lower and upper bands with $N_F$ the index of the highest LL below the chemical potential, and the last term $-\mu$ comes from the zero LL. Supposing $\mu \ll k_c$, we then have $N_F \ll N$. In the limit $B \to 0$, because $N_F \gg 1$, we obtain
\begin{eqnarray}
S_N(B) &=& \frac{2c_0}{3}[(N_F+1/2)^{3/2}-(N+1/2)^{3/2}+O(\frac{1}{\sqrt{N_F}})]\nonumber\\
&& -\frac{2}{\pi}(N+N_F+1)\mu. \nonumber
\end{eqnarray}
The OM is given by
\begin{eqnarray}
M = \frac{2}{\pi}(N_F+1/2)\{\mu-\sqrt{2B(N_F+1/2)}[1+O(N_F^{-2})]\}. \nonumber
\end{eqnarray}
As $B \to 0$, $M$ oscillates rapidly between $-\mu/2\pi$ and $\mu/2\pi$ with period $\Delta B = 2B^2/\mu^2$. This is the de Haas-van Alphen oscillation. The average of $M$ vanishes, which leads to the vanishing orbital magnetic susceptibility $\chi = 0$ (defined as the derivative of $M$ with respect to $B$ at $B = 0$) at finite doping. On the other hand, at the CNP, because $M \propto \sqrt{B}$ as given by Eq. (\ref{fdf}), $\chi$ diverges at $B = 0$. This is consistent with the existing result \cite{Koshino,JWM,Safran}, $\chi = -(2/3\pi)\delta(\mu)$, which is obtained by the response of uniform Dirac fermions to the magnetic field without considering the Landau quantization \cite{Safran,Fukuyama}.
\section{MFT for interacting Dirac fermions in graphene}
As in our previous work, we use the MFT to deal with the interactions between the electrons \cite{Yan,Cote}. By the MFT, the `free energy' functional $\Phi$ is approximated as shown in Fig. 2(a). The self-energy is then obtained as in Fig. 2(b), which is independent of the Matsubara frequency. In terms of $G$, $\Phi$ is given by
\begin{eqnarray}
\Phi&=&\frac{B}{4\pi\beta}\sum_{kk',\omega\omega',ll'}e^{i\omega\eta}G_{ll'}(k,i\omega)
v_{l'l}(0)\nonumber\\
&&~~\times e^{i\omega'\eta'}G_{l'l}(k',i\omega') \nonumber\\
&&-\frac{B}{4\pi\beta}\sum_{kk',\omega\omega',ll'}e^{i\omega\eta}G_{ll'}(k,i\omega)
v^x_{l'l}(k,k')\nonumber\\
&&~~\times[e^{i\omega'\eta'}G_{l'l}(k',i\omega')-\beta\delta_{\omega\omega'}\delta_{ll'}] \nonumber\\
&=&\frac{B\beta}{4\pi}\sum_k{\rm Tr}\{[\Sigma(k)+V^x/2]F(k)\}, \label{phi}
\end{eqnarray}
with $\Sigma(k)=\Sigma_H+\Sigma_X(k)$ and
\begin{eqnarray}
F(k)&=&\frac{1}{\beta}\sum_{\omega}e^{i\omega\eta}G(k,i\omega),\nonumber\\
\Sigma_{H,ll'}&=&\sum_{k'}v_{ll'}(0)F_{ll'}(k') \nonumber\\
&=&(v_c\rho_{l}-sUm_{l})\delta_{ll'}, \nonumber\\
\Sigma_{X,ll'}(k)&=&-\sum_{n'}v^x_{ll'}(k,k')[F_{ll'}(k')-\delta_{ll'}/2], \label{sf1}\\
V^x_{ll'}&=&\delta_{ll'}v^x(r)|_{r=0}, \nonumber
\end{eqnarray}
where $\beta = 1/k_BT$, and $\Sigma_{H,ll'}$ has been written in terms of the charge $\rho_{l}$ and the spin $m_{l}$ order parameters with $v_c$ and $U$ the corresponding interaction parameters. The first sum in the first equality of Eq. (\ref{phi}) is due to the direct Coulomb interaction, while the second sum comes from the exchange interaction. Here, $v_{\mu\nu}(0)$ and $v^x_{\mu\nu}(k,k')$ are the interaction elements in the LL representation; they are dependent on the magnetic field $B$ \cite{Yan}. The appearance of the extra term $-1/2$ in addition to the diagonal distribution function $F_{ll}(k')$ in Eq. (\ref{sf1}) originates from the interaction form of the system of DFs with a neutralizing background given in terms of the density-density product instead of the normal-ordered fermion operators. Corresponding to this term, there is a shift $V^x/2$ from the self-energy as shown in Eq. (\ref{phi}); this shift is not drawn in the diagrams in Fig. 2. Because of this shift, the particle-hole symmetry of the system is reflected by the invariance under the transform $\mu \to -\mu$ with $\mu = 0$ at the CNP \cite{Yan3}.
\begin{figure}[t]
\centerline{\epsfig{file=fig2.ps,height=8.cm}}
\caption{(color online) (a) `Free energy' functional $\Phi$ under the MFT. (b) Self-energy. The solid line with an arrow denotes the Green's function. The wave line is the interaction. The thick wave line is the exchange interaction including the electron screening effect.}
\end{figure}
In the LL's picture, the Green's function is given by
\begin{equation}
G(k,i\omega) = \sum_{\lambda}\frac{\psi_{\lambda}(k)\psi^{\dagger}_{\lambda}(k)}{i\omega+\mu-\epsilon_{\lambda}(k)} \label{grn}
\end{equation}
where $\psi_{\lambda}(k)$ is the $\lambda$th eigen-wave function with eigen-energy $\epsilon_{\lambda}(k)$. The LLs $\epsilon_{\lambda}(k)$ and the wavefunctions $\psi_{\lambda}(k)$ are determined by \cite{Yan}
\begin{equation}
[\sqrt{2Bn}\sigma_1+\Sigma(k)]\psi_{\lambda}(k)=\epsilon_{\lambda}(k)\psi_{\lambda}(k). \label{ll}
\end{equation}
Express the self-energy matrix as $\Sigma(k) = \Sigma_0(k)\sigma_0+\Sigma_1(k)\sigma_1+\Sigma_3(k)\sigma_3$. The energy levels for $n\ne 0$ are obtained as
\begin{eqnarray}
\epsilon_{\lambda}(k)&=&\Sigma_0(k)+\lambda\{[\sqrt{2Bn}+\Sigma_1(k)]^2+\Sigma^2_3(k)\}^{1/2} \nonumber\\
&\equiv&\Sigma_0(k)+\lambda E(k), \label{engy}
\end{eqnarray}
and the corresponding wavefunctions are
\begin{eqnarray}
\psi_{+}(k) &=&
\left[\begin{array}{c}R_+(k)\\
R_-(k)
\end{array}\right],\nonumber\\
\psi_{-}(k) &=&
\left[\begin{array}{c}-R_-(k)\\
R_+(k)
\end{array}\right], \label{wvf}
\end{eqnarray}
where $R_{\pm}(k) = \sqrt{1\pm\Sigma_3(k)/E(k)}/\sqrt{2}$. For $n = 0$, the eigenstates are given by
\begin{eqnarray}
\epsilon_0(0Ks)&=&\Sigma_{bb}(0Ks), ~~~~ \psi(0Ks) =
\left[\begin{array}{c}0\\
1\end{array}\right], \nonumber\\
\epsilon_0(0K's)&=&\Sigma_{aa}(0K's), ~~~~ \psi(0K's) =
\left[\begin{array}{c}1\\
0\end{array}\right],
\end{eqnarray}
in valleys $v = K$ and $v = K'$, respectively. The charge and spin orders are calculated by
\begin{eqnarray}
\rho_a&=&\frac{s_0B}{4\pi}\sum_{l\lambda k}s_lf_{\lambda}(k)|\psi_{l\lambda}(k)|^2, \label{rho}\\
m_l&=&\frac{s_0B}{4\pi}\sum_{\lambda k}sf_{\lambda}(k)|\psi_{l\lambda}(k)|^2, \label{ml}
\end{eqnarray}
where $s_0 = \sqrt{3}/2$ is the area of the unit cell, $B/2\pi$ is the spatial degeneracy of the Landau state, $\psi_{l\lambda}(k)$ is the $l$th component of $\psi_{\lambda}(k)$ and $s_l$ = 1 (-1) for $l = a$ ($b$), $s = 1$ (-1) for spin-up (down), and $f_{\lambda}(k) = f(\xi_{\lambda}) = 1/[\exp(\beta\xi_{\lambda})+1]$, with $\xi_{\lambda}(k)=\epsilon_{\lambda}(k)-\mu$, is the Fermi distribution function.
Here, we need to pay special attention to the equation for the self-energy element $\Sigma_{ab}(k)~[=\Sigma_{ba}(k)]$ or $\Sigma_1(k)$ given by Eq. (\ref{sf1}). Using the wavefunctions given by Eq. (\ref{wvf}), we have
\begin{eqnarray}
F_{ab}(k) = [f_+(k)-f_-(k)]\frac{\epsilon_1(k)+\Sigma_1(k)}{2E(k)}, \label{f1}
\end{eqnarray}
with $\epsilon_1(k) = \sqrt{2Bn}$. Note that $F_{ab}(k)$ goes to $-1/2$ in the limit $n \to \infty$. The equation for $\Sigma_1(k)$ can then be written as
\begin{eqnarray}
\Sigma_1(k) &=& -\sum_{n'\ne 0}v_{ab}^{xv}(n,n')[F_{ab}(k')+1/2]+V^x_1(n)/2, \nonumber\\
\label{s1}
\end{eqnarray}
with $v_{ab}^{xv}(n,n') = v_{ab}^{x}(k,k')$ and
\begin{eqnarray}
V^x_1(n) &=& \sum_{n'\ne 0}v_{ab}^{xv}(n,n').
\end{eqnarray}
By so doing, the sum over $n'$ in Eq. (\ref{s1}) converges fast. For $\Sigma_{X,ll}(k)$, Eq. (\ref{sf1}) is the proper form since $F_{ll}(k)-1/2$ goes to zero in the limit $n \to \infty$ and therefore the sum over $n'$ converges quickly. Usually, the self-energy given by Eq. (\ref{sf1}) is evaluated with a cutoff $k_c = 1$ \cite{Barlas,Hwang,Kusminskiy}. By a similar treatment, we solved Eq. (\ref{sf1}) with cutoff $k_c = 1$ for DFs in a magnetic field in our previous work \cite{Yan}. This cutoff has little effect on the low energy levels close to zero. However, it substantially influences the high levels. In particular, the LLs at the cutoff are strongly modified. As indicated in Sec. II, we should solve the equations of the self-energy for the LLs in the whole range $0 \le n < \infty$. Therefore, the revision given by Eq. (\ref{s1}) is necessary. The main task now is to calculate $V^x_1(n)$.
To calculate $V^x_1(n)$, we first consider the case of $B = 0$ and look for an approximation scheme from the result. By the transform $T(\phi_v) = Diag[1,\exp(i\phi_v)]$ with $\phi_v$ the angle of momentum $(s_v k_x,k_y)$, the effective mean-field Hamiltonian reads
\begin{eqnarray}
T^{\dagger}(\phi_v)H^{vs}(\vec k)T(\phi_v) = k\sigma_1 + \Sigma^{vs}(k),
\end{eqnarray}
which is independent of the angle $\phi_v$. Here, $k$ is understood as the momentum. The self-energy element $\Sigma^{vs}_{ab}(k)$ reads
\begin{eqnarray}
\Sigma^{vs}_{ab}(k) = -\frac{1}{V}\sum_{k'}v^{x}(|\vec k-\vec k'|)\cos\theta F^{vs}_{ab}(k'),
\label{sk1}
\end{eqnarray}
where $V$ is the volume (area) of the two-dimensional system, $\theta$ is the angle between $\vec k$ and $\vec k'$, and $F^{vs}_{ab}(k)$ has the same form as given by Eq. (\ref{f1}) provided the Landau energy $\epsilon_1(k)$ is replaced with the energy $k$. We can revise Eq. (\ref{sk1}) to obtain a form similar to Eq. (\ref{s1}) and the corresponding $V_1(k)$ as
\begin{eqnarray}
V_1(k) &=& \frac{1}{V}\sum_{k'}v^{x}(|\vec k-\vec k'|)\cos\theta \nonumber\\
&=&\int^{\infty}_0\frac{dq}{2\pi}v^{x}(q)f(k,q), \label{vk1}
\end{eqnarray}
where $f(k,q) = [(k-q)K(\alpha)+(k+q)E(\alpha)]/\pi k$ with $K(\alpha)$ and $E(\alpha)$ the elliptic integrals and $\alpha = 2\sqrt{kq}/(k+q)$. Now, for the quantized interaction $V^x_1(n)$, a reasonable approximation is to replace the continuous momentum $k$ with the quantized one $k_n =\sqrt{2Bn}$ in $V_1(k)$,
\begin{eqnarray}
V^x_1(n) \approx V_1(k_n).
\end{eqnarray}
\section{thermodynamic potential}
Using the result (see Appendix)
\begin{eqnarray}
\sum_{\omega}\exp(i\omega\eta){\rm Tr}\ln[-G(k,i\omega)]
&=& \sum_{\lambda}\ln[e^{-\beta\xi_{\lambda}(k)}+1],\nonumber
\end{eqnarray}
we obtain $\Omega(B)$ under the MFT as
\begin{eqnarray}
\Omega(B)&=& -\frac{B}{4\pi}\sum_{k}\{{\rm Tr}[(\Sigma-V^x/2)F(k)] \nonumber\\
&&+\sum_{\lambda}\frac{2}{\beta}\ln[e^{-\beta\xi_{\lambda}(k)}+1]\}. \label{omg}
\end{eqnarray}
From Eq. (\ref{omg}), we may obtain $\Omega_N(B)$. However, to maintain the particle-hole symmetry in $\Omega_N(B)$, we must revise its form.
We need to write the equations for the self-energy $\Sigma_{0,3}(k)$ more clearly
\begin{widetext}
\begin{eqnarray}
\Sigma_0(k) &=& -sUm_0-{\sum_{n'}}'\{[v_{11}^v(n,n')+v_{22}^v(n,n')][g_+(k')+g_-(k')]/4 \nonumber\\
&&+[v_{11}^v(n,n')-v_{22}^v(n,n')][g_+(k')-g_-(k')]\Sigma_3(k')/4E(k')\}-g_0(0vs)v^K_{22}(n,0)/2, \nonumber\\
\Sigma_3(k) &=& v_c\rho-sUm_3-{\sum_{n'}}'\{[v_{11}^v(n,n')-v_{22}^v(n,n')][g_+(k')+g_-(k')]/4 \nonumber\\
&&+[v_{11}^v(n,n')+v_{22}^v(n,n')][g_+(k')-g_-(k')]\Sigma_3(k')/4E(k')\}+s_vg_0(0vs)v^K_{22}(n,0)/2, \label{s0}
\end{eqnarray}
where $\rho = \rho_a$, $m_{0,3} = (m_a \pm m_b)/2$, and $g_{\lambda}(k)=f_{\lambda}(k)-1/2$. Noting that $v^K(n,n') = \sigma_1v^{K'}(n,n')\sigma_1$, we obtain
\begin{eqnarray}
{\sum_k}'\Sigma_0(k)+\sum_{vs}[\Sigma_0(0vs)-s_v\Sigma_3(0vs)]/2 = -V^x{\sum_{k\lambda}}'g_{\lambda}(k)/2-V^x\sum_{vs}g_0(0vs)/2, \label{rlt}
\end{eqnarray}
where we have used the relation
\begin{eqnarray}
\sum_{n'}v^K_{bb}(n,n') = V^x
\end{eqnarray}
which is independent of $n$. Using Eq. (\ref{rlt}), we rewrite Eq. (\ref{omg}) in the form
\begin{eqnarray}
\Omega(B) &=&-\frac{B}{2\pi}{\sum_{k}}'\{\sum_{\lambda}\frac{1}{\beta}\ln(e^{\beta\xi_{\lambda}/2}+e^{-\beta\xi_{\lambda}/2})
+\Sigma_0(F_0-1/2)+\Sigma_1F_1+\Sigma_3F_3+\mu-V^x/4\}\nonumber\\
&& -\frac{B}{2\pi}\sum_{vs}\{\frac{1}{\beta}\ln(e^{\beta\xi_0/2}+e^{-\beta\xi_0/2})
+(\Sigma_0-s_v\Sigma_3)(f_0-1/2)/2+\mu/2-V^x/8\}, \label{omg1}
\end{eqnarray}
\end{widetext}
where $F_{0,1,3}$ are distribution functions defined as
\begin{eqnarray}
F_0&=&[f_+(k)+f_-(k)]/2, \nonumber\\
F_1&=& \frac{\epsilon_0(k)+\Sigma_1(k)}{E(k)}[f_+(k)-f_-(k)]/2, \nonumber\\
F_3&=& \frac{\Sigma_3(k)}{E(k)}[f_+(k)-f_-(k)]/2, \nonumber
\end{eqnarray}
$f_0 = f_0(0vs)$ is the Fermi distribution function of level $n=0$, and ${\sum_k}'$ means $n\ne 0$.
Under the transform $\mu \to -\mu$, the self-energy components change as $\Sigma_{0}(nvs) \to -\Sigma_{0}(nvs)$, $\Sigma_1(nvs) \to \Sigma_1(nvs)$, and $\Sigma_3(nvs) \to -\Sigma_3(n\bar vs)$, or $\Sigma_{1,3}(nvs) \to \Sigma_{1,3}(n\bar vs)$ (where $\bar v$ denotes the opposite valley, $\bar K = K'$ and $\bar K' = K$). Note that the constant terms $\mu-V^x/4$ and $\mu/2-V^x/8$ in Eq. (\ref{omg1}) will disappear in the final formula for $M$ because of a cancellation between the terms $-S_N(B)$ and $(N+1/2)y_N(B)$, as indicated by Eq. (\ref{mgnr}). We can then conclude that $M$ is symmetric under the particle-hole transform. The function $S_N(B)$ can now be extracted from Eq. (\ref{omg1}).
For calculating $S_N'(B)$ and $S_N''(B)$, we need to derive the equations of the self-energy elements with respect to $B$ and solve them. The derivation is elementary but tedious; for brevity, we do not present these equations here.
\section{orbital magnetization}
We have numerically solved the equations for the self-energy $\Sigma(k), \partial\Sigma(k)/\partial B$, and $\partial^2\Sigma(k)/\partial B^2$. In the present calculation, the on-site interaction is set as $U/\epsilon_0 = 2$. The coupling constant of the interaction is $e^2/a_0\epsilon_0 = 2.2$. With the results for the self-energy and its derivatives, we calculate the OM at CNP and at finite carrier concentration.
Shown in Fig. 3 are the numerical results for the interacting and free DFs at CNP and at $T = 0$. It is seen that the magnitude of the OM of interacting DFs (blue solid circles) is smaller than that of the free DFs (red circles). At $T = 0$, there exists antiferromagnetic spin ordering in the interacting DFs catalyzed by the magnetic field as investigated in many works \cite{Yan,Gusynin1,Herbut,Kharitonov,Lado,Roy,Khveshchenko,Alicea,Gusynin2,Jung,Lukose}. This spin ordering results in the splitting of the zero Landau levels. In the low energy zero-LL states, the spin-up and down electrons move in the sublattices $a$ and $b$, respectively. The spin ordering also modifies the electron distributions in the two sublattices at other LLs. Overall, in the presence of the spin ordering, the electrons cannot move freely in the whole lattice. Since the antiferromagnetic spin ordering acts as an obstacle to the orbital circulation of the electrons, the OM is weakened.
\begin{figure}[t]
\centerline{\epsfig{file=fig3.ps,width=8.cm,height=8.cm,angle=0}}
\caption{(color online) Orbital magnetization of interacting Dirac fermions (blue solid circles with line) compared with the result for free Dirac fermions (red circles) at CNP and at $T=0$. The black line represents the analytical result given by Eq. (\ref{fdf}) with $N \to \infty$ for the free Dirac fermions.}
\end{figure}
The black line in Fig. 3 represents the analytical formula Eq. (\ref{fdf}) for $N \to \infty$. In the numerical calculation, the cutoff $N$ is finite, given by $N = k_c^2/2B - 1/2$ with $k_c = 1$ (and $B$ in units of $B_0 = 1.1\times 10^4$ T). At small $B$ close to zero, since $N$ is sufficiently large, the numerical result (red circles) for the free DFs is in very good agreement with the analytical formula. The difference between them increases with increasing $B$. For $B \sim 8$ T, the numerical result still remains accurate.
In Fig. 4, we present the results at $T/\epsilon_0 = 0.01$ and at CNP. Since $T$ is high, there are many LLs within the thermal energy window. As a result, the OM of free DFs varies linearly with $B$, consistent with the existing result \cite{Slizovskiy}. For the interacting DFs, by contrast, the OM is not linear in $B$ and its magnitude is larger than that of the free DFs. At this high temperature, the spin ordering vanishes but the LLs of the DFs are strongly changed by the interactions through the self-energy $\Sigma_1$. $\Sigma_1$ gives rise to an enhancement of the velocity \cite{Yan2,Sarma,Menezes}, leading to faster orbital circulation. The nonlinear behavior of $M$ with $B$ implies that the renormalized velocity varies with momentum. Because of the vanishing of spin ordering and the enhancement of the velocity, the OM of the interacting DFs is stronger than that of the free DFs.
\begin{figure}[t]
\centerline{\epsfig{file=fig4.ps,width=8.5cm,height=8.5cm,angle=-90}}
\caption{(color online) Orbital magnetization of interacting Dirac fermions (blue solid circles with line) compared with the result for free Dirac fermions (red circles) at CNP and at $T=0.01$.}
\end{figure}
Figure 5 exhibits the result for the OM of the interacting DFs at finite carrier concentration with $\mu/\epsilon_0 = 0.02$ and at $T/\epsilon_0 = 0.001$. The chemical potential for the free DFs is set as $\mu_0/\epsilon_0 = 0.00167$ so that the first LL in the upper band appears at almost the same position in $B$ for both the interacting and the free DFs. At finite carrier concentration and temperature, the charge and spin orderings disappear and all the LLs are degenerate with degeneracy 4. The index of the first LL in the upper band is $n = 1$. As the field $B$ varies, when the LLs cross the Fermi level, the OM shows the de Haas-van Alphen oscillations. The LL of $n = 1$ is at about $B \approx 0.97$ T. Above this field, there are no LLs below the Fermi level in the upper band and the OM decreases monotonically with $B$.
For finite doping at very small $B$ and $T = 0$, there are rapid de Haas-van Alphen oscillations similar to those indicated in Sec. III for noninteracting DFs. For interacting DFs, however, the average of the oscillations should not vanish at small $B$. According to perturbation theory, the system shows orbital paramagnetism at very small $B$ \cite{Principi}; with a first-order perturbation calculation for Thomas-Fermi screened Coulomb interactions, it has been shown that the orbital magnetic susceptibility $\chi$ is positive for DFs in doped graphene. Therefore, the average $M$ should increase from $M = 0$ with increasing field $B$. At finite $T$, the oscillations are smeared by temperature. The average $M$ should be weakened by the thermal fluctuations. (We did not perform the calculation at very small $B$ because the cutoff number $N$ is then so large that the accuracy required for the numerical calculation exceeds the capability of our computer.)
\begin{figure}[t]
\centerline{\epsfig{file=fig5.ps,width=8.5cm,height=8.5cm,angle=-90}}
\caption{(color online) Orbital magnetization of interacting Dirac fermions (blue solid line) at $T/\epsilon_0=0.001$ and $\mu/\epsilon_0 = 0.02$ compared with the result for free Dirac fermions (red line) with $\mu_0/\epsilon_0 = 0.00167$. }
\end{figure}
\section{Remark}
In the present approach, the OM is calculated by expanding the sum $S_N(B+\Delta B)$ [and the term $y_N(B+\Delta B)$] to second (first) order in $\Delta B$ as shown in Eq. (\ref{mgnr}). This formalism works only for systems whose eigen-energy is linear in momentum $k$. For a Dirac or Weyl system with $\epsilon_{\lambda}(k) \to \lambda k^{\nu}$ as $k \to \infty$, the sum $S_N$ is of order $N^{\nu/2+1}$. We need to expand $S_N(B+\Delta B)$ [$y_N(B+\Delta B)$] to $m$th [$(m-1)$th] order in $\Delta B$ with $m = [\nu/2]+2$. Here $[\nu/2]$ denotes the integer part of $\nu/2$. For example, for $L$-layered graphene, since it has $[L/2]$ bilayer bands and $L$ mod 2 monolayer bands \cite{Koshino1}, we need to expand $S_N(B+\Delta B)$ to $(\Delta B)^3$ and $y_N(B+\Delta B)$ to $(\Delta B)^2$. By so doing, the unphysical part will be eliminated due to the precise cancellations between these expanded terms.
Though the system of infinitely many LLs is considered, the contribution to the total OM comes mostly from the LLs below $k_c = 1$, as reflected by the result for free DFs shown in Fig. 3. In graphene, the Dirac cone approximation to the energy bands of electrons is valid within the circle of radius $k_c=1$ in momentum space. Therefore, the present result for the OM is a fairly good measure of that of electrons in graphene. However, as already stressed, we cannot isolate the LLs below $k_c$ from the entire system. The reason is that the high LLs (especially the LLs close to the cutoff) are strongly modified by the isolation. We have performed the numerical calculation for the isolated system. The consequence of the isolation is that the magnitude of the OM is several orders of magnitude larger than the result presented here; it keeps growing as $B \to 0$ and does not even vanish at $B = 0$.
\section{conclusion}
We have developed an approach for calculating the orbital magnetization of Dirac fermions. The main points of the formalism are: (1) To overcome the divergence difficulty due to the occupation of the lower band, the orbital magnetization is defined as a special limit of the derivative of the thermodynamic potential with respect to the magnetic field. (2) The equations for the self-energy and its derivatives with respect to the magnetic field need to be solved. (3) The particle-hole symmetry should be ensured in the partial sum of the thermodynamic potential. (4) The system with finite LLs is part of the entire system, not isolated from the rest of it.
With the formalism, we have calculated the OM for interacting DFs in graphene and compared the results with those of the free DFs. At very low carrier concentration close to CNP, when the antiferromagnetic spin ordering catalyzed by the magnetic field exists, the OM is weakened. Without the spin and charge orderings, the OM is enhanced due to the velocity renormalization by interactions. At low temperature and finite carrier concentration, the de Haas-van Alphen oscillation appears in the OM as a function of magnetic field.
The present approach may be extended to study the OM of Weyl fermions in topological semimetals as well.
\acknowledgments
This work was supported by the National Basic Research 973 Program of China under Grant No. 2016YFA0202300 and the Robert A. Welch Foundation under Grant No. E-1146.
|
1,108,101,564,169 | arxiv | \section{Introduction}
Deep neural networks (NNs) display a rich and often-perplexing spectrum of generalization behaviors. Highly overparameterized NNs may possess the expressivity to fit random noise, yet in practice can still generalize well to unseen data \cite{zhang2021understanding,belkin2019reconciling}. The ability of NNs to flexibly learn features from data is widely believed to be a critical contributor to their practical success \cite{yang2021feature,aitchison2020bigger,zhang2021understanding,belkin2019reconciling}, but the precise contributions of feature learning to their generalization behavior remain incompletely understood \cite{nakkiran2021deep,aitchison2020bigger,yang2019scaling,zhang2021understanding,belkin2019reconciling,yang2021feature,refinetti21fail,woodworth2020kernel,geiger2020lazy}.
In recent years, intensive theoretical work has begun to elucidate the properties of deep networks in the limit of infinite hidden layer width. In this limit, a dramatic simplification occurs, and inference in deep networks is equivalent to kernel regression or classification \cite{neal1996priors,williams1997computing,lee2018deep,matthews2018gaussian,jacot2018neural,lee2018deep,hron2020exact,yang2019scaling}. This correspondence has enabled detailed characterizations of inference at infinite width in both maximum-likelihood and fully Bayesian settings, providing new insights into the inductive biases that allow deep networks to overfit benignly \cite{mei2019generalization,hu2020universality,spigler2020asymptotic,canatar2021spectral,barbier2021performance,dascoli2021triple,adlam2020understanding,dascoli2020double,jin2022learning,loureiro2021learning}. Yet, understanding inference in the kernel limit is not sufficient, because kernel descriptions cannot capture feature learning \cite{lee2020finite,yang2021feature,refinetti21fail,woodworth2020kernel,geiger2020lazy}.
As a result, a growing number of recent works have aimed to study the behavior of networks near the kernel limit, with the hope that leading-order corrections to the large-width behavior might elucidate how width and depth affect inference \cite{antognini2019finite,yaida2020,zv2021exact,roberts2022principles,grosvenor2021edge,halverson2021neural,zv2021asymptotics,zv2021scale,li2021statistical,naveh2021predicting,naveh2021self,aitchison2020bigger,dyer2020asymptotics,aitken2020asymptotics}. Some of these works focus on the properties of the function-space prior distribution \cite{antognini2019finite,yaida2020,zv2021exact,grosvenor2021edge,roberts2022principles,halverson2021neural}, some consider maximum-likelihood inference with gradient descent \cite{dyer2020asymptotics,aitken2020asymptotics,roberts2022principles}, and some consider properties of the full Bayes posterior \cite{yaida2020,zv2021asymptotics,li2021statistical,naveh2021predicting,naveh2021self,roberts2022principles,aitchison2020bigger,zv2021scale}. This body of research has resulted in several conjectural conditions under which narrower and deeper networks might perform better than their infinitely-wide cousins in the Bayesian setting, as measured by generalization for fixed data \cite{zv2021asymptotics,li2021statistical,naveh2021predicting} or by some alternative criterion based on entropic considerations \cite{roberts2022principles}.
However, previous studies of Bayesian neural network generalization near the kernel limit have not clearly differentiated the effect of width on feature learning from its other potential effects on inference. Concretely, it is not clear whether potential improvements in generalization afforded by the leading finite-width correction reflect the benefits of feature learning, or if a similar gain would be observed in random feature models, where only the readout layer is trained. Here, we explore how random and learned features affect generalization in the simplest class of Bayesian NNs---deep linear models---when trained on unstructured, noisy data. By developing a detailed understanding of this simple setting, one might hope to gain intuition that may prove useful in studying more complex networks \cite{saxe2013exact,fukumizu1998effect,nakkiran2019more,hastie2019surprises,advani2020high,zv2021exact}.
In this work, we study the asymptotic generalization performance of deep linear Bayesian regression for data generated with an isotropic Gaussian covariate model. Using the replica trick \cite{mezard1987spin,engel2001statistical}, we compute learning curves for simple linear regression, deep linear Gaussian random feature (RF) models, and deep linear NNs. Our results are obtained using an isotropic Gaussian likelihood in the limit of small likelihood variance, which renders this analysis analytically tractable \cite{zv2021asymptotics,zv2021scale}. Using alternative replica-free methods and numerical simulation, we show that the predictions obtained under a replica-symmetric (RS) \emph{Ansatz} are accurate for all three model classes. In particular, the RS result for learning curves of NNs with hidden layers of equal widths is consistent with results obtained by Li and Sompolinsky \cite{li2021statistical} using a different approximation method.
In the presence of label noise, both RF and NN models display sample-wise non-monotonicity in their learning curves. As we work in a high-dimensional limit, this non-monotonicity is of a particularly extreme form: the generalization error diverges at a particular data density. In keeping with modern deep learning parlance, we refer to this behavior as ``double-descent'' \cite{belkin2019reconciling,krogh1992generalization,nakkiran2019more,nakkiran2021deep,dascoli2020double,dascoli2021triple,adlam2020understanding,canatar2021spectral}. If one introduces a bottleneck layer that is narrower than the input dimension, an RF model will display model-wise double-descent behavior at fixed data density---or equivalently sample-wise double-descent at fixed width---even in the absence of label noise, while an NN model will not show this divergence. This distinct small-width behavior shows one advantage afforded by the flexibility to learn features. For both models, we analyze how optimal network architecture depends on data density and prior mismatch. We show that, at a given data density, RF models have a particular optimal width for fixed depth and optimal depth for fixed width that minimizes the generalization error. In contrast, it is always optimal to take an NN to be as wide or as narrow as possible, depending on the regime.
We further analyze models of arbitrary depth perturbatively in the limit in which the network depth and dataset size are small relative to the hidden layer widths, connecting these results to those of previous work on fixed-dataset perturbation theory \cite{zv2021asymptotics}. We find that the leading order correction to the large-width behavior of RF and NN models is identical, hence first-order perturbation theory for the generalization error cannot distinguish between random and learned features. To distinguish between training only the readout layer and training all layers, one must go to second order in perturbation theory. Therefore, at large widths, the ability to perform representation learning provides only a small advantage in generalization performance in these simple models relative to random features, which is invisible in first-order perturbation theory. In total, our results provide new insight into how the generalization behavior of deep Bayesian linear regression in high dimensions depends on architectural details. Moreover, they shed light onto which qualitative features of generalization behavior can or cannot be captured by low-order perturbative corrections \cite{zv2021exact}.
\section{Problem setting}
In this section, we introduce the three classes of regression models we consider in this work, as well as our generative data model. Our notation throughout is standard; we use $\Vert \cdot \Vert$ to denote the Euclidean norm, $\mathbf{I}_{d}$ to denote the $d \times d$ identity matrix, and $\mathbf{1}$ to denote the vector with all elements equal to one.
\subsection{Regression models and parameter priors}
In this work, we consider three classes of scalar Bayesian linear regression models for a scalar-valued function of $d$-dimensional inputs. All three of these model classes are of the form
\begin{align}
\begin{split}
g_{\mathbf{w}}(\mathbf{x}) = \frac{1}{\sqrt{d}} \mathbf{w}^{\top} \mathbf{x},
\end{split}
\end{align}
and differ in the parameterization of the `end-to-end' weight vector $\mathbf{w} \in \mathbb{R}^{d}$. We will choose parameter priors such that $\mathbb{E} \Vert \mathbf{w} \Vert^{2} = \sigma^{2} d$ for each model, where $\sigma > 0$ is a hyperparameter which sets the prior variance of the network outputs. We remark that each model is positive-homogeneous in its parameters, hence this choice is made without loss of generality.
Below, we list the three classes of models we consider, and introduce a two-letter abbreviation for each:
\begin{itemize}
\item[(LR)] Simple Bayesian linear regression. For this model, the end-to-end weight vector is directly parameterized as
\begin{align}
\mathbf{w}_{\textrm{LR}} = \sigma \mathbf{v}
\end{align}
for a trainable parameter vector $\mathbf{v} \in\mathbb{R}^{d}$ with isotropic Gaussian prior distribution
\begin{align}
\mathbf{v} \sim \mathcal{N}(\mathbf{0},\mathbf{I}_{d}).
\end{align}
Previous works have extensively studied this model in both maximum-likelihood and fully Bayesian settings \cite{krogh1992generalization,nakkiran2019more,hastie2019surprises,canatar2021spectral,advani2016statistical,barbier2021performance}, hence we include it as a baseline against which we will compare our results for more complicated models.
\item[(RF)] Deep Bayesian random feature models. For these models, the weight vector is parameterized as
\begin{align}
\mathbf{w}_{\textrm{RF}} = \frac{\sigma}{\sqrt{n_{1} \cdots n_{\ell}}} \mathbf{U}_{1} \cdots \mathbf{U}_{\ell} \mathbf{v}
\end{align}
for matrices $\mathbf{U}_{1} \in \mathbb{R}^{d \times n_{1}}$, $\mathbf{U}_{2} \in \mathbb{R}^{n_{1} \times n_{2}}$,\ldots, $\mathbf{U}_{\ell} \in \mathbb{R}^{n_{\ell-1} \times n_{\ell}}$ and a vector $\mathbf{v} \in \mathbb{R}^{n_{\ell}}$. Here, $\ell \in \mathbb{N}_{>0}$ is the network depth, while $n_{1},\ldots,n_{\ell} \in \mathbb{N}_{>0}$ are the hidden layer widths. For the RF model, only the readout weight vector $\mathbf{v}$ is trainable, while the hidden layer weights $\mathbf{U}_{l}$ are fixed and random. We choose an isotropic Gaussian prior for the readout weights
\begin{align}
\mathbf{v} \sim \mathcal{N}(\mathbf{0},\mathbf{I}_{n_{\ell}}),
\end{align}
while the hidden layer weights are drawn from a fixed isotropic Gaussian distribution
\begin{align}
(\mathbf{U}_{l})_{ij} &\sim \mathcal{N}(0,1) \qquad (l = 1, \ldots, \ell).
\end{align}
\item[(NN)] Deep Bayesian linear neural networks. For these models, the weight vector is parameterized as
\begin{align}
\mathbf{w}_{\textrm{NN}} = \frac{\sigma}{\sqrt{n_{1} \cdots n_{\ell}}} \mathbf{U}_{1} \cdots \mathbf{U}_{\ell} \mathbf{v} .
\end{align}
Though NNs are parameterized identically to the RF models above, they differ in that all of the weights are trainable, not only the readout. We again choose isotropic Gaussian prior distributions
\begin{align}
(\mathbf{U}_{l})_{ij} &\sim \mathcal{N}(0,1) \qquad (l = 1, \ldots, \ell),
\\
\mathbf{v} &\sim \mathcal{N}(\mathbf{0},\mathbf{I}_{n_{\ell}}).
\end{align}
From a physical perspective, the hidden layer weights in the RF model are `quenched' disorder, whereas they are `annealed' disorder in NNs \cite{mezard1987spin,engel2001statistical}.
\end{itemize}
For all models, we denote expectation with respect to the prior distribution of the trainable parameters by $\mathbb{E}_{\mathcal{W}}$.
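For concreteness, the following minimal Python sketch (with illustrative helper names of our own devising) draws end-to-end weight vectors from the priors above and checks the normalization $\mathbb{E}_{\mathcal{W}} \Vert \mathbf{w} \Vert^{2} = \sigma^{2} d$, which holds identically for the RF and NN parameterizations since they share the same prior:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_w_lr(d, sigma):
    # LR model: w = sigma * v with v ~ N(0, I_d).
    return sigma * rng.standard_normal(d)

def sample_w_deep(d, widths, sigma):
    # RF/NN models: w = sigma / sqrt(n_1 ... n_ell) U_1 ... U_ell v.
    # The two model classes share this prior; they differ only in
    # which weights are treated as trainable.
    dims = [d] + list(widths)
    mat = np.eye(d)
    for n_in, n_out in zip(dims[:-1], dims[1:]):
        mat = mat @ rng.standard_normal((n_in, n_out))
    v = rng.standard_normal(dims[-1])
    return sigma / np.sqrt(np.prod(widths)) * (mat @ v)

# Sanity check: E ||w||^2 / d should be close to sigma^2.
d, sigma = 100, 1.5
w = np.array([sample_w_deep(d, [80, 120], sigma) for _ in range(2000)])
print(np.mean(np.sum(w**2, axis=1)) / d)  # approximately sigma^2 = 2.25
\end{verbatim}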
\subsection{Data model and the Bayes posterior}
We train all models on a dataset $\{(\mathbf{x}_{\mu},y_{\mu})\}_{\mu=1}^{p}$ of $p$ examples, generated according to a standard isotropic Gaussian covariate model \cite{hastie2019surprises,barbier2021performance,advani2016statistical,krogh1992generalization,nakkiran2019more,advani2020high}. In this model, the example inputs are independent and identically distributed samples from a standard Gaussian distribution:
\begin{align}
\mathbf{x}_{\mu} \sim \mathcal{N}(\mathbf{0},\mathbf{I}_{d}),
\end{align}
while the labels are generated by a ground truth linear model, possibly corrupted by additive Gaussian noise:
\begin{align}
y_{\mu} = \frac{1}{\sqrt{d}} \mathbf{w}_{\ast}^{\top} \mathbf{x}_{\mu} + \eta \xi_{\mu},
\end{align}
where $\eta \geq 0$ sets the noise variance. The noise variables are independent and identically distributed as
\begin{align}
\xi_{\mu} \sim \mathcal{N}(0,1),
\end{align}
and are independent of the inputs. We take the `teacher' weight vector $\mathbf{w}_{\ast}$ to have fixed norm $\Vert \mathbf{w}_{\ast} \Vert^2 = d$. In some places, we will average over teacher weights distributed uniformly on the sphere (i.e., $\mathbf{w}_{\ast} \sim \mathcal{U}[\mathbb{S}^{d-1}(\sqrt{d})]$), though our main results will hold pointwise for any $\mathbf{w}_{\ast}$ on the sphere. We will collect the training inputs and outputs into a matrix $(\mathbf{X})_{\mu j} = (\mathbf{x}_{\mu})_{j}$ and a vector $(\mathbf{y})_{\mu} = y_{\mu}$, respectively.
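A minimal Python sketch of this generative model, assuming for illustration a teacher drawn uniformly from the sphere (our main results hold pointwise in $\mathbf{w}_{\ast}$, so this choice is inessential), is:
\begin{verbatim}
import numpy as np

def make_dataset(p, d, eta, rng):
    # Teacher on the sphere of radius sqrt(d).
    w_star = rng.standard_normal(d)
    w_star *= np.sqrt(d) / np.linalg.norm(w_star)
    # Isotropic Gaussian inputs and noisy linear labels.
    X = rng.standard_normal((p, d))
    y = X @ w_star / np.sqrt(d) + eta * rng.standard_normal(p)
    return X, y, w_star
\end{verbatim}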
For a dataset generated in this way, we introduce an isotropic Gaussian likelihood of variance $1/\beta$:
\begin{align}
p(\{(\mathbf{x}_{\mu},y_{\mu})\}_{\mu=1}^{p} \,|\,\mathcal{W}) \propto \exp\left(-\frac{\beta}{2} \sum_{\mu=1}^{p} [g_{\mathbf{w}}(\mathbf{x}_{\mu}) - y_{\mu}]^2 \right),
\end{align}
where $\mathcal{W}$ denotes the set of trainable parameters for a given model, and the normalization constant is implied. We will refer to $\beta$ as the `inverse temperature' by standard analogy with statistical mechanics \cite{zv2021asymptotics,krogh1992generalization,engel2001statistical,biehl1998phase,solla1992learning,levin1990statistical}. Then, the partition function of the resulting Bayes posterior is given as
\begin{align}
Z = \mathbb{E}_{\mathcal{W}} \exp\left(-\frac{\beta}{2} \sum_{\mu=1}^{p} [g_{\mathbf{w}}(\mathbf{x}_{\mu}) - y_{\mu}]^2 \right).
\end{align}
We denote expectations with respect to this Bayes posterior by $\langle \cdot \rangle$.
\subsection{Generalization error in the thermodynamic limit}
With the initial setup of the previous sections, we can now introduce our concrete objective. We consider a proportional asymptotic limit in which the input dimension $d$, the dataset size $p$, and (for NN and RF models) the hidden layer widths $n_{1}, \ldots, n_{\ell}$ tend to infinity for fixed depth $\ell$ and fixed ratios
\begin{align}
\alpha &\equiv p/d = \mathcal{O}(1),
\\
\gamma_{l} &\equiv n_{l}/d = \mathcal{O}(1) \qquad (l = 1, \ldots, \ell).
\end{align}
Moreover, we focus on the zero-temperature limit $\beta \to \infty$, in which the likelihood tends to a constraint that the network interpolates the training set with probability one. In the noise-free case, this limiting likelihood is matched to the true generative model of the data, but it is clearly mismatched in the presence of label noise. This limit has been considered in several recent studies of deep linear Bayesian neural networks \cite{zv2021asymptotics,zv2021scale,li2021statistical,aitchison2020bigger,roberts2022principles}.
Our goal is to study the average-case generalization error $\epsilon$ of the resulting model, as measured by the deviation of its end-to-end weight vector $\mathbf{w}$ from the true teacher weight vector $\mathbf{w}_{\ast}$:
\begin{align} \label{eqn:generalization_error}
\epsilon = \lim_{\beta \to \infty} \lim_{d,p,n_{1},\ldots,n_{\ell} \to \infty} \mathbb{E}_{\mathcal{D}} \left\langle \frac{1}{d} \Vert \mathbf{w} - \mathbf{w}_{\ast} \Vert^{2} \right\rangle.
\end{align}
Here, $\mathbb{E}_{\mathcal{D}}$ denotes expectation with respect to all quenched disorder for a given model. For all models, this includes the training inputs and label noise; for the RF model, it also includes the hidden layer weights. We remark that \eqref{eqn:generalization_error} is the average-case error of the Gibbs estimator (i.e., a single sample from the posterior); one could instead consider the error of the mean estimator $\langle \mathbf{w} \rangle$. As one has the thermal bias-variance decomposition $\langle \Vert \mathbf{w} - \mathbf{w}_{\ast} \Vert^2 \rangle = \Vert \langle \mathbf{w} \rangle - \mathbf{w}_{\ast} \Vert^2 + \tr(\langle \mathbf{w} \mathbf{w}^{\top} \rangle - \langle \mathbf{w} \rangle \langle \mathbf{w} \rangle^{\top})$, the error of the mean estimator is bounded from above by the average error of the Gibbs estimator.
We compute the limiting average generalization error using the replica method, a non-rigorous but powerful heuristic that has seen broad use in statistical mechanical studies of inference \cite{engel2001statistical,mezard1987spin,canatar2021spectral,dascoli2020double,loureiro2021learning}. As our main results can be understood independently of calculation through which they were obtained, we relegate the details to Appendices \ref{app:replica_framework} and \ref{app:rs_saddle_point}. We note the important caveat that our main results are obtained under a replica-symmetric \emph{Ansatz}. We expect this assumption to hold exactly for the LR and RF models by virtue of the concavity of their log-posteriors, but replica symmetry may be broken in deep linear NNs \cite{barbier2021strong,mezard1987spin}. We will not address this possibility analytically by considering \emph{Ans\"atze} with broken replica symmetry \cite{mezard1987spin}, but will instead simply compare the RS predictions against results obtained through a combination of alternative analytical methods and numerics.
\section{Learning curves for the LR model}
We begin by briefly describing the learning curve of the simple LR model. Our result extends the classic result of Krogh and Hertz \cite{krogh1992generalization} for ridge regression in the ridgeless limit to the Bayesian setting:
\begin{align}\label{eqn:lr_learning_curve}
\epsilon_{\textrm{LR}} =
\begin{dcases}
(1 + \sigma^{2}) (1-\alpha) + \frac{\alpha}{1-\alpha} \eta^{2}, &\textrm{if}\ \alpha < 1
\\
\frac{1}{\alpha - 1} \eta^{2}, &\textrm{if}\ \alpha > 1.
\end{dcases}
\end{align}
For this simple model, the learning curve can also be computed directly by first evaluating the posterior average defining $\epsilon_{\textrm{LR}}$ for a fixed realization of the disorder, and then averaging the result over the disorder in the zero-temperature limit (see Appendix \ref{app:rf_posterior_expectations} for details). The result of \cite{krogh1992generalization} can be recovered from \eqref{eqn:lr_learning_curve} by setting $\sigma = 0$. We provide further discussion of the relationship between the Bayesian LR model in the zero-temperature limit and ridge regression in the ridgeless limit in Appendix \ref{app:rf_posterior_expectations}.
Therefore, as \cite{krogh1992generalization} found in the ridge regression setting, the LR model exhibits sample-wise double-descent behavior---i.e., non-monotonicity in $\epsilon_{\textrm{LR}}$ as a function of $\alpha$ \cite{belkin2019reconciling,nakkiran2021deep}---in the presence of label noise. In the thermodynamic limit, the double-descent behavior is particularly striking: $\epsilon_{\textrm{LR}}$ diverges as $\alpha \to 1$. In the absence of noise, $\epsilon_{\textrm{LR}}$ decreases monotonically from $1+\sigma^2$ to $0$ as $\alpha \uparrow 1$, and then remains at zero for all $\alpha > 1$. We remark that, for this and subsequent models, we will not conduct a detailed analysis of what happens precisely at exceptional points, e.g., $\alpha = 1$. In the ridge regression setting, the phase transition at $\alpha = 1$ has recently been analyzed in detail by Canatar \emph{et al.} \cite{canatar2021spectral}. We also direct the interested reader to an expository note by Nakkiran \cite{nakkiran2019more} for further intuitions on double-descent in ridge regression, and to work by Hastie \emph{et al.} \cite{hastie2019surprises} for a detailed rigorous analysis. We will take the model-wise double-descent behavior of the LR model as a benchmark for our subsequent analyses of the more complex RF and NN models.
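This fixed-data computation admits a simple numerical illustration. At zero temperature and $\alpha < 1$, conditioning the prior $\mathbf{w}_{\textrm{LR}} \sim \mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_{d})$ on exact interpolation yields a Gaussian posterior whose mean is the minimum-norm interpolant and whose covariance is $\sigma^{2}$ times the projector onto the null space of $\mathbf{X}$. A Monte Carlo sketch of the resulting Gibbs error at finite size (a single dataset draw; helper names ours) is:
\begin{verbatim}
import numpy as np

def gibbs_error_lr(p, d, sigma, eta, n_samples, rng):
    # Data from the isotropic Gaussian covariate model.
    w_star = rng.standard_normal(d)
    w_star *= np.sqrt(d) / np.linalg.norm(w_star)
    X = rng.standard_normal((p, d))
    y = X @ w_star / np.sqrt(d) + eta * rng.standard_normal(p)
    # Zero-temperature posterior: mean is the min-norm interpolant,
    # covariance is sigma^2 times the projector onto ker(X).
    pinv = X.T @ np.linalg.inv(X @ X.T)
    w_mean = np.sqrt(d) * pinv @ y
    P_perp = np.eye(d) - pinv @ X
    errs = [np.sum((w_mean + sigma * P_perp @ rng.standard_normal(d)
                    - w_star) ** 2) / d for _ in range(n_samples)]
    return np.mean(errs)

rng = np.random.default_rng(1)
alpha, d, sigma, eta = 0.5, 400, 1.0, 0.5
print(gibbs_error_lr(int(alpha * d), d, sigma, eta, 200, rng))
# Theory: (1 + sigma^2)(1 - alpha) + alpha / (1 - alpha) * eta^2 = 1.25
\end{verbatim}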
\section{Learning curves for the RF model}
\subsection{Learning curve and double-descent behavior }
\begin{figure*}
\centering
\includegraphics[width=4in]{fig/rf_noisy_labels_v1.pdf}
\caption{Sample- and model-wise double-descent in deep Bayesian random feature models.
\textbf{(a).} Contour plot in $(\alpha,\gamma)$-space of the theoretical error surface $\epsilon_{\textrm{RF}}$ \eqref{eqn:rf_learning_curve} for a single-hidden-layer RF model in the absence of label noise ($\eta = 0$). For all panels, we set the input dimensionality $d = 100$ and prior variance $\sigma^2 = 1$. For details of our numerical methods, see Appendix \ref{app:numerical_methods}.
\textbf{(b).} As in (a)., but in the presence of label noise ($\eta = 0.5$).
\textbf{(c).} Horizontal cross sections of above (a). Theory curves are overlaid with experiment points, plotted with $\pm 2$ SE bars.
\textbf{(d).} Horizontal cross sections of above (b).
\textbf{(e).} Vertical cross sections of above (a).
\textbf{(f).} Vertical cross sections of above (b).
}
\label{fig:rf_learning_curve}
\end{figure*}
For RF models, we obtain a closed-form expression for the learning curve at any depth. Let $\gamma_{\textrm{min}} = \min\{\gamma_{1},\ldots,\gamma_{\ell}\}$ be the minimum hidden layer width. Then, we find that
\begin{widetext}
\begin{align} \label{eqn:rf_learning_curve}
\epsilon_{\textrm{RF}} =
\begin{dcases}
(1 - \alpha) \left(1 + \sigma^{2} \prod_{l=1}^{\ell} \frac{\gamma_{l}-\alpha}{\gamma_{l}} + \sum_{l=1}^{\ell} \frac{\alpha}{\gamma_{l}-\alpha} \right) + \left(\frac{\alpha}{1 - \alpha} + \sum_{l=1}^{\ell} \frac{\alpha}{\gamma_{l}-\alpha} \right) \eta^{2} , & \textrm{if}\ \alpha < \min\{1, \gamma_{\textrm{min}}\}
\\
\alpha \frac{1 - \gamma_{\textrm{min}}}{\alpha - \gamma_{\textrm{min}}} + \frac{\gamma_{\textrm{min}}}{\alpha - \gamma_{\textrm{min}}} \eta^{2}, & \textrm{if}\ \alpha > \gamma_{\textrm{min}}\ \textrm{and}\ \gamma_{\textrm{min}} < 1
\\
\frac{1}{\alpha - 1} \eta^{2}, & \textrm{if}\ \alpha > 1 \ \textrm{and}\ \gamma_{\textrm{min}} > 1.
\end{dcases}
\end{align}
\end{widetext}
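For numerical exploration, the closed form \eqref{eqn:rf_learning_curve} is straightforward to evaluate away from the phase boundaries; a minimal Python sketch (helper name ours) is:
\begin{verbatim}
import numpy as np

def eps_rf(alpha, gammas, sigma2, eta2):
    # RS learning curve of the deep RF model, eq. (rf_learning_curve).
    g = np.asarray(gammas, dtype=float)
    g_min = g.min()
    if alpha < min(1.0, g_min):
        prior_term = sigma2 * np.prod((g - alpha) / g)
        pole_sum = np.sum(alpha / (g - alpha))
        return ((1 - alpha) * (1 + prior_term + pole_sum)
                + (alpha / (1 - alpha) + pole_sum) * eta2)
    if g_min < 1 and alpha > g_min:
        return (alpha * (1 - g_min) / (alpha - g_min)
                + g_min / (alpha - g_min) * eta2)
    if g_min > 1 and alpha > 1:
        return eta2 / (alpha - 1)
    raise ValueError("alpha lies on a phase boundary")
\end{verbatim}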
We validate the accuracy of this RS result by comparing it against the result of an alternative semi-analytical approach. As shown in Appendix \ref{app:rf_posterior_expectations}, the zero-temperature posterior average in \eqref{eqn:generalization_error} can be computed for a fixed realization of the disorder. Even without explicitly evaluating the disorder average, this shows that the RF model should display the three phases indicated by the RS result \eqref{eqn:rf_learning_curve}, and confirms the prediction of the phases in which the learning curve should depend on the prior variance $\sigma^2$ (see Appendix \ref{app:rf_posterior_expectations}). To quantitatively test the accuracy of the RS result, the disorder average can be evaluated numerically using sampling (see Appendix \ref{app:numerical_methods}). As shown in Figures \ref{fig:rf_learning_curve} and \ref{fig:rf_narrow}, we observe excellent agreement over a broad range of parameter values. These results are consistent with our expectation that the RS \emph{Ansatz} should yield accurate results for the RF models \cite{mezard1987spin,barbier2021strong,advani2020high,canatar2021spectral}.
While the LR model only exhibits double-descent behavior in the presence of label noise \eqref{eqn:lr_learning_curve}, the RF model can also exhibit double-descent behavior in the absence of label noise if any one of the hidden layers is narrower than the input dimension, i.e., $\gamma_{\textrm{min}} < 1$. This phenomenon occurs in a model-wise fashion at fixed data density: if one considers a decreasing sequence of widths $\gamma_{\textrm{min}}$ at fixed $\alpha$, $\epsilon_{\textrm{RF}}$ will diverge as $\gamma_{\textrm{min}} \downarrow \alpha$ (Figure \ref{fig:rf_learning_curve}a,c,e). Equivalently, this divergence can be observed in a sample-wise fashion at fixed width, with $\epsilon_{\textrm{RF}} \to \infty$ as $\alpha \to \gamma_{\textrm{min}}$. Moreover, as illustrated in Figure \ref{fig:rf_narrow}, this divergence is determined by the width of the narrowest hidden layer. If one adds more bottleneck layers, then the expression for the generalization error in the regime $\alpha < \gamma_{\textrm{min}}$ will formally include more poles \eqref{eqn:rf_learning_curve}, but these poles will not be visible as one varies the size of the training set or the width of the narrowest bottleneck.
Like the LR model, the RF model exhibits sample-wise double-descent behavior in the presence of label noise (Figure \ref{fig:rf_learning_curve}). However, if there is a bottleneck layer with width $\gamma_{\textrm{min}} < 1$, then the addition of label noise does not introduce additional divergences in $\epsilon_{\textrm{RF}}$ beyond that at $\alpha \to \gamma_{\textrm{min}}$; the pole at $\alpha = 1$ is visible only if $\gamma_{\textrm{min}} > 1$ (Figure \ref{fig:rf_learning_curve}b,d,f). This is clearly illustrated by comparing learning curves of two-layer ($\ell=1$) RF models with $\gamma_{\textrm{min}} = 1/2$ in the absence (Figure \ref{fig:rf_learning_curve}c) and presence (Figure \ref{fig:rf_learning_curve}d) of label noise: $\epsilon_{\textrm{RF}}$ increases with the addition of noise, but the only visible divergence is at $\alpha \to \gamma_{\textrm{min}}$. The presence of only a single divergence in $\epsilon_{\textrm{RF}}$ for two-layer models is consistent with work by d'Ascoli and colleagues on the phenomenology of double-descent in two-layer RF models trained with ridge regression \cite{dascoli2021triple}.
\begin{figure*}
\centering
\includegraphics[width=4in]{fig/rf_narrowest_hidden_layer_v1.pdf}
\caption{Double-descent in deep random feature models depends on the narrowest hidden layer.
\textbf{(a).} Contour plot in $(\gamma_1,\gamma_2)$-space of the theoretical error surface $\epsilon_{\textrm{RF}}$ \eqref{eqn:rf_learning_curve} for a deep RF model with two hidden layers and $\alpha=0.5$. For all panels, we set the input dimensionality $d = 100$, prior variance $\sigma^2 = 1$, and no label noise ($\eta = 0$). For details of our numerical methods, see Appendix \ref{app:numerical_methods}.
\textbf{(b).} As in (a)., but with $\alpha=1.5$.
\textbf{(c).} Horizontal cross sections of above (a). Theory curves are overlaid with experiment points, plotted with $\pm 2$ SE bars.
\textbf{(d).} Horizontal cross sections of above (b).
}
\label{fig:rf_narrow}
\end{figure*}
\subsection{Large-width behavior }
We now analyze the behavior of RF models at large widths. As $\gamma_{1},\ldots,\gamma_{\ell} \to \infty$, $\epsilon_{\textrm{RF}} \to \epsilon_{\textrm{LR}}$ for any fixed $\alpha$, $\sigma$, and $\eta$. We will refer to this simplification---the reduction of the learning curve of a deep linear model to that of simple linear regression---as the \emph{kernel limit} \cite{neal1996priors,matthews2018gaussian,lee2018deep,hron2020exact,jacot2018neural}. To obtain a more precise understanding of the behavior of the RF model near the kernel limit, we expand \eqref{eqn:rf_learning_curve} in the regime $\gamma_{1},\ldots,\gamma_{\ell} \gg 1$. If $\alpha > 1$, then we have $\epsilon_{\textrm{RF}} = \epsilon_{\textrm{LR}}$ in this regime. If $\alpha < 1$, we have
\begin{align} \label{eqn:rf_perturbation}
\epsilon_{\textrm{RF}} &= \epsilon_{\textrm{LR}} +
[(1 - \alpha) (1 - \sigma^{2}) + \eta^{2}] \sum_{l=1}^{\ell} \frac{\alpha}{\gamma_{l}} + \mathcal{O}\left(\frac{\alpha^{2}}{\gamma^2}\right),
\end{align}
where $\mathcal{O}(\alpha^2/\gamma^2)$ denotes terms that include two or more factors of any combination of the layer widths.
For an RF model of equal hidden layer widths $\gamma_{1} = \cdots = \gamma_{\ell} = \gamma$, the leading correction scales as $\ell \alpha/\gamma$. For this simple architecture, we can also study the scaling of higher-order corrections relatively easily. In the regime $\alpha < \min\{1,\gamma\}$, the learning curve \eqref{eqn:rf_learning_curve} can be written compactly as
\begin{align} \label{eqn:rf_equal_width_generalization}
\frac{\epsilon_{\textrm{RF}} - \epsilon_{\textrm{LR}}}{1 - \alpha + \eta^2} &= \tilde{\sigma}^{2} \left[ \left(\frac{\gamma-\alpha}{\gamma}\right)^{\ell} - 1 \right] + \ell \frac{\alpha}{\gamma-\alpha} ,
\end{align}
where we have defined the re-scaled prior variance
\begin{align} \label{eqn:sigma_tilde}
\tilde{\sigma}^2 \equiv \frac{ \sigma^{2} }{1 + \eta^{2}/(1-\alpha)}.
\end{align}
Then, for $\alpha/\gamma < 1$, we can read off the full series expansion using the binomial theorem and the geometric series:
\begin{align} \label{eqn:rf_all_order_series}
\frac{\epsilon_{\textrm{RF}} - \epsilon_{\textrm{LR}}}{1 - \alpha + \eta^2} &= \sum_{j=1}^{\infty} \left[ (-1)^{j} \tilde{\sigma}^{2} \binom{\ell}{j} + \ell \right] \frac{\alpha^{j}}{\gamma^{j}},
\end{align}
where we note that $\binom{\ell}{j} = 0$ if $j > \ell$. Noting that $\binom{\ell}{j} = \mathcal{O}(\ell^{j})$ for $\ell \gg j$, we can see that the dominant term for large $\ell$ at each order in $\alpha / \gamma$ will scale with $\ell \alpha / \gamma$, up to around $\mathcal{O}(\alpha^{\ell}/\gamma^{\ell})$. Therefore, depth will have an important effect on how quickly the kernel limit is approached with varying width. At small $\ell$, the $j$-th order term will simply scale as $\ell (\alpha/\gamma)^{j}$ for all $j > \ell$, hence the effect of depth on the approach to the kernel limit can be neglected in this regime.
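As a simple consistency check, the partial sums of \eqref{eqn:rf_all_order_series} can be compared numerically against the closed form \eqref{eqn:rf_equal_width_generalization}, reusing the eps_rf helper sketched above:
\begin{verbatim}
import numpy as np
from math import comb

def rf_series(alpha, gamma, ell, s2t, order):
    # Partial sums of eq. (rf_all_order_series); this equals
    # (eps_RF - eps_LR) / (1 - alpha + eta^2) as order grows.
    return sum(((-1) ** j * s2t * comb(ell, j) + ell)
               * (alpha / gamma) ** j for j in range(1, order + 1))

alpha, gamma, ell, sigma2, eta2 = 0.3, 3.0, 4, 2.0, 0.0
s2t = sigma2 / (1 + eta2 / (1 - alpha))  # tilde sigma^2
exact = (eps_rf(alpha, [gamma] * ell, sigma2, eta2)
         - (1 + sigma2) * (1 - alpha)) / (1 - alpha + eta2)
for order in [1, 2, 4, 8]:
    print(order, rf_series(alpha, gamma, ell, s2t, order), exact)
\end{verbatim}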
\subsection{Optimal width and depth}
With the formula \eqref{eqn:rf_learning_curve} for the generalization error in hand, we can determine the optimal hidden layer width for fixed depth, noise variance, and prior variance. We focus on the regime $\alpha < \min\{1,\gamma_{\textrm{min}}\}$, in which the generalization error always depends non-trivially on width. In Appendix \ref{app:rf_optimal_width}, we show that the optimal architecture for an RF model depends on the rescaled prior variance $\tilde{\sigma}^{2}$ defined in \eqref{eqn:sigma_tilde}. If $\tilde{\sigma} \leq 1$, then $\partial \epsilon_{\textrm{RF}}/\partial \gamma_{l} < 0$ for all $l$ and all widths in this regime, hence increasing width always improves generalization. Thus, in this regime, the best RF model is one that behaves identically to an LR model (Figure \ref{fig:rf_optimal}). If $\tilde{\sigma}^{2} > 1$, then $\epsilon_{\textrm{RF}}$ is minimized by taking all $\gamma_{1} = \cdots = \gamma_{\ell} = \gamma_{\star}$ for
\begin{align} \label{eqn:rf_optimal_width}
\gamma_{\star} = \frac{\tilde{\sigma}^{2/(\ell+1)}}{\tilde{\sigma}^{2/(\ell+1)} - 1} \alpha .
\end{align}
We note that the leading term in the perturbative expansion \eqref{eqn:rf_perturbation} would predict that generalization performance improves with increasing width if $\tilde{\sigma} < 1$, is invariant under changes of width if $\tilde{\sigma} = 1$, and degrades with increasing width if $\tilde{\sigma} > 1$. Thus, in this case the leading-order perturbative correction captures some, but not all, of the effect of width.
\begin{figure*}
\centering
\includegraphics[width=4in]{fig/rf_target_prior_mismatch_v2.pdf}
\caption{Optimal RF model architecture depends on target-prior mismatch.
\textbf{(a).} Contour plot in $(\alpha,\gamma)$-space of the theoretical error surface $\epsilon_{\textrm{RF}}$ \eqref{eqn:rf_learning_curve} for a single-hidden-layer RF model with prior variance $\sigma^2 = 1$. For all panels, we have no label noise ($\eta = 0$) and set the input dimensionality $d = 100$. For details of our numerical methods, see Appendix \ref{app:numerical_methods}.
\textbf{(b).} As in (a)., but for a single-hidden-layer RF model with higher prior variance ($\sigma^2 = 4$).
\textbf{(c).} As in (a)., but for a deep RF model ($\ell = 5$) and prior variance $\sigma^2 = 1$.
\textbf{(d).} As in (a)., but for a deep RF model ($\ell = 5$) and with higher prior variance ($\sigma^2 = 4$).
\textbf{(e).} Vertical cross sections of above (a). Theory curves are overlaid with experiment points, plotted with $\pm 2$ SE bars.
\textbf{(f).} Vertical cross sections of above (b). Optimal widths computed from equation \ref{eqn:rf_optimal_width} are marked with dashed vertical lines for each respective setting of $\alpha$.
\textbf{(g).} Error across different depths for prior variance $\sigma^2 = 1$ and fixed width $\gamma = 1.5$.
\textbf{(h).} Error across different depths for prior variance $\sigma^2 = 4$ and fixed width $\gamma = 1.5$. Optimal depths computed from equation \ref{eq:optimal_depth} are marked with dashed vertical lines for each respective setting of $\alpha$.
}
\label{fig:rf_optimal}
\end{figure*}
In the absence of noise, this yields a simple qualitative picture in which the optimal width is related to the mismatch between the scale of the prior and the target weight vector: if $\mathbb{E}_{\mathcal{W}} \Vert \mathbf{w} \Vert^2 = \sigma^{2} d \leq d = \Vert \mathbf{w}_{\ast} \Vert^2$, then wider networks are always better, while otherwise one can obtain improved generalization performance by using an RF model rather than an LR model. This occurs because of the trade-off between the terms with linear and exponential dependence on depth in \eqref{eqn:rf_learning_curve}. Label noise has the effect of increasing the effective scale of the target in a way that depends on the data density $\alpha$: as $\alpha \uparrow 1$, wider models are always better in the presence of label noise. This behavior is illustrated in Figure \ref{fig:rf_optimal}.
Similarly, one can also optimize the depth for fixed width, noise variance, and prior variance. To do so, it is convenient to assume that all layers are of the same width $\gamma_{1} = \cdots = \gamma_{\ell} = \gamma$, which allows us to analytically continue $\epsilon_{\textrm{RF}}$ as a function of the depth $\ell$. Again specializing to the regime $\alpha < \min\{1,\gamma\}$, the generalization error is given by \eqref{eqn:rf_equal_width_generalization}. It is then easy to see that the LR model learning curve \eqref{eqn:lr_learning_curve} is recovered upon $\ell \downarrow 0$. In Appendix \ref{app:rf_optimal_depth}, we show that, if $\tilde{\sigma} \leq 1$, $\epsilon_{\textrm{RF}}$ is a monotonically increasing function of $\ell$, hence shallower RF models always generalize better. This is consistent with our result above for optimal width, because taking $\gamma_{l} \to \infty$ for some $l$ effectively reduces the depth of the RF model by one, by eliminating that layer's contribution to \eqref{eqn:rf_equal_width_generalization}. If $\tilde{\sigma} > 1$, then the optimal depth is given by
\begin{align} \label{eq:optimal_depth}
\ell_{\star} = \begin{dcases}
j\ \textrm{or}\ j - 1, & \textrm{if}\ \frac{\log(\tilde{\sigma}^2)}{\log[\gamma/(\gamma-\alpha)]} = j \in \mathbb{N}_{>0}
\\
\left\lfloor \frac{\log(\tilde{\sigma}^2)}{\log[\gamma/(\gamma-\alpha)]} \right\rfloor, & \textrm{otherwise}.
\end{dcases}
\end{align}
In the former condition, taking $\ell = j$ or $\ell = j-1$ will yield identical generalization error. Moreover, for the condition $\log(\tilde{\sigma}^{2}) / \log[\gamma/(\gamma-\alpha)] \in \mathbb{N}_{>0}$ to hold, the network width must be of the form
\begin{align}
\gamma = \frac{\tilde{\sigma}^{2/j}}{\tilde{\sigma}^{2/j} - 1} \alpha ,
\end{align}
which is consistent with the result for optimal width at fixed depth given in \eqref{eqn:rf_optimal_width}. Therefore, much like we found in our analysis of optimal width, the optimal depth of an RF model is related to the match between the scale of the prior and of the target. This behavior is illustrated in Figure \ref{fig:rf_optimal}.
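As a numerical illustration of these optimality conditions, \eqref{eqn:rf_optimal_width} and \eqref{eq:optimal_depth} can be checked against a direct grid scan of the closed-form learning curve (reusing the eps_rf helper sketched above; the formulas apply when $\tilde{\sigma} > 1$):
\begin{verbatim}
import numpy as np

def sigma_tilde2(sigma2, eta2, alpha):
    return sigma2 / (1 + eta2 / (1 - alpha))

def optimal_width(alpha, ell, sigma2, eta2):
    # Eq. (rf_optimal_width); valid for tilde sigma > 1.
    s = sigma_tilde2(sigma2, eta2, alpha) ** (1.0 / (ell + 1))
    return s / (s - 1.0) * alpha

def optimal_depth(alpha, gamma, sigma2, eta2):
    # Generic case of eq. (optimal_depth); at the degenerate
    # boundary, depths j and j - 1 tie.
    s2 = sigma_tilde2(sigma2, eta2, alpha)
    return int(np.floor(np.log(s2) / np.log(gamma / (gamma - alpha))))

alpha, ell, sigma2, eta2 = 0.5, 3, 4.0, 0.0
grid = np.linspace(alpha + 1e-3, 10.0, 4000)
g_scan = grid[np.argmin([eps_rf(alpha, [g] * ell, sigma2, eta2)
                         for g in grid])]
print(optimal_width(alpha, ell, sigma2, eta2), g_scan)  # both ~1.71
\end{verbatim}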
\section{Learning curves for the NN model}
\subsection{Learning curve and double-descent behavior}
For the NN model, we do not obtain a simple closed-form solution for the RS learning curve at general depth. As shown in Appendix \ref{app:nn_rs_saddle_point}, we find that the solution is of the form
\begin{align} \label{eqn:nn_learning_curve}
\epsilon_{\textrm{NN}} = \epsilon_{\textrm{LR}} +
\begin{dcases}
z, &\textrm{if}\ \alpha < 1
\\
0, &\textrm{if}\ \alpha > 1,
\end{dcases}
\end{align}
where $z = z(\alpha, \sigma^{2}, \eta^{2},\gamma_{1},\ldots,\gamma_{\ell})$ is a non-negative real root of the polynomial
\begin{align} \label{eqn:z_polynomial}
z^{\ell+1} = \sigma^{2} (1 - \alpha) \prod_{l=1}^{\ell} \left[\frac{\gamma_{l} - \alpha}{\gamma_{l}} z + \frac{\alpha (1 - \alpha + \eta^{2})}{\gamma_{l}} \right].
\end{align}
We defer more detailed discussion of which root should be selected to Appendix \ref{app:nn_rs_saddle_point}, where we show that one required condition on the solution is that
\begin{align}
(\gamma_{l} - \alpha) z + \alpha (1 - \alpha + \eta^{2}) > 0
\end{align}
for all $l$. For a network with a single hidden layer ($\ell = 1$), \eqref{eqn:z_polynomial} is quadratic, and we can easily obtain
\begin{align}
\frac{z}{1 - \alpha + \eta^2 } &= \frac{\tilde{\sigma}^{2} (\gamma_{1} - \alpha) + \sqrt{\tilde{\sigma}^{4} (\gamma_{1} - \alpha)^{2} + 4 \alpha \gamma_{1} \tilde{\sigma}^{2} }}{2 \gamma_{1}},
\end{align}
where $\tilde{\sigma}^{2}$ is defined as in \eqref{eqn:sigma_tilde}.
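For general depth, the root condition \eqref{eqn:z_polynomial} is easy to solve numerically. The sketch below constructs the polynomial and, as a working assumption consistent with the $\ell = 1$ closed form above, selects the largest real non-negative root satisfying the positivity condition; the full root-selection criteria are discussed in Appendix \ref{app:nn_rs_saddle_point}:
\begin{verbatim}
import numpy as np

def eps_nn(alpha, gammas, sigma2, eta2):
    # RS learning curve of the deep NN model via eq. (z_polynomial).
    if alpha > 1:
        return eta2 / (alpha - 1)
    eps_lr = (1 + sigma2) * (1 - alpha) + alpha / (1 - alpha) * eta2
    c = alpha * (1 - alpha + eta2)
    # rhs(z) = sigma2 (1 - alpha) prod_l [(g - alpha)/g z + c/g]
    rhs = np.array([sigma2 * (1 - alpha)])
    for g in gammas:
        rhs = np.polymul(rhs, [(g - alpha) / g, c / g])
    poly = np.zeros(len(gammas) + 2)
    poly[0] = 1.0                 # z^{ell + 1}
    poly[-len(rhs):] -= rhs
    roots = np.roots(poly)
    real = roots[np.abs(roots.imag) < 1e-9].real
    # Assumed selection rule: largest admissible root.
    z = max(r for r in real
            if r >= 0 and all((g - alpha) * r + c > 0 for g in gammas))
    return eps_lr + z

# ell = 1 check against the closed form above:
alpha, g, sigma2, eta2 = 0.5, 2.0, 1.5, 0.25
s2t = sigma2 / (1 + eta2 / (1 - alpha))
z_closed = (1 - alpha + eta2) * (s2t * (g - alpha)
            + np.sqrt(s2t**2 * (g - alpha)**2
                      + 4 * alpha * g * s2t)) / (2 * g)
print(eps_nn(alpha, [g], sigma2, eta2),
      (1 + sigma2) * (1 - alpha) + alpha / (1 - alpha) * eta2 + z_closed)
\end{verbatim}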
\begin{figure*}[t]
\centering
\includegraphics[width=4in]{fig/nn_noisy_labels_v1.pdf}
\caption{Sample-wise double-descent in deep Bayesian neural networks.
\textbf{(a).} Contour plot in $(\alpha,\gamma)$-space of the theoretical error surface $\epsilon_{\textrm{NN}}$ \eqref{eqn:nn_learning_curve} for a two-layer NN model in the absence of label noise ($\eta = 0$). For all panels, we set the input dimensionality $d = 100$ and prior variance $\sigma^2 = 1$. For details of our numerical methods, see Appendix \ref{app:numerical_methods}.
\textbf{(b).} As in (a)., but in the presence of label noise ($\eta = 0.5$).
\textbf{(c).} Horizontal cross sections of above (a). Theory curves are overlaid with experiment points, plotted with $\pm 2$ SE bars.
\textbf{(d).} Horizontal cross sections of above (b).
\textbf{(e).} Vertical cross sections of above (a).
\textbf{(f).} Vertical cross sections of above (b).
}
\label{fig:nn_learning_curve}
\end{figure*}
The special case of \eqref{eqn:nn_learning_curve} for networks with hidden layers of equal widths follows from results obtained through a rather different approach in a recent study by Li and Sompolinsky \cite{li2021statistical}. Concretely, they use an iterative saddle-point argument to approximate the posterior expectation in \eqref{eqn:generalization_error} for fixed data, and then apply that result to a random Gaussian covariate model under what amounts to the assumption that the quantity $\mathbf{y}^{\top} (\mathbf{X} \mathbf{X}^{\top})^{-1} \mathbf{y}$ concentrates rapidly. In Appendix \ref{app:nn_posterior_expectations}, we provide a detailed discussion of the mapping between the polynomial condition in terms of which their result is expressed and the RS condition \eqref{eqn:z_polynomial}. In the same appendix, we also use a finite-size fixed-data approach derived from our previous work \cite{zv2021scale} to show that the learning curve should be of the form \eqref{eqn:nn_learning_curve}. Concretely, this approach gives an expression for $z$ as the thermodynamic limit of a dataset average of a ratio of prior averages, with the remaining components of the learning curve exactly matching the RS prediction. Taken together, these results suggest that the RS prediction for the learning curve correctly captures at least the coarse behavior of generalization in NNs.
To further probe whether the RS prediction is quantitatively accurate, we evaluate the finite-size data average numerically. As shown in Figures \ref{fig:nn_learning_curve} and \ref{fig:nn_narrow}, and in supplemental figures provided in Appendix \ref{app:numerical_methods}, we observe good agreement for two-layer networks. To probe the accuracy of the RS prediction for deeper networks, we solve the polynomial \eqref{eqn:z_polynomial} numerically. As shown in Figures \ref{fig:nn_learning_curve} and \ref{fig:nn_narrow}, we again observe good agreement. Therefore, both alternative heuristic analytical approaches and numerical results are consistent with the RS learning curve, suggesting that it provides a reasonably accurate picture of generalization in deep NNs.
\begin{figure*}
\centering
\includegraphics[width=4in]{fig/nn_narrowest_hidden_layer_v2.pdf}
\caption{Bottleneck layers do not induce model-wise double-descent in deep NNs.
\textbf{(a).} Contour plot in $(\gamma_1,\gamma_2)$-space of the theoretical error surface $\epsilon_{\textrm{NN}}$ \eqref{eqn:nn_learning_curve} for a deep Bayesian NN model with two hidden layers and $\alpha=0.5$. For all panels, we set the input dimensionality $d = 100$, prior variance $\sigma^2 = 1$, and no label noise ($\eta = 0$). For details of our numerical methods, see Appendix \ref{app:numerical_methods}.
\textbf{(b).} As in (a)., but with $\alpha = 1.5$.
\textbf{(c).} Horizontal cross sections of above (a). Theory curves are overlaid with experiment points, plotted with $\pm 2$ SE bars.
\textbf{(d).} Horizontal cross sections of above (b).
}
\label{fig:nn_narrow}
\end{figure*}
Like the previously-studied models, we see that label noise can induce sample-wise double-descent, with $\epsilon_{\textrm{NN}} \to \infty$ as $\alpha \to 1$ (Figure \ref{fig:nn_learning_curve}). However, unlike for the RF model, having relatively narrow hidden layers does not introduce the possibility of divergences other than at $\alpha = 1$, as $z$ should remain bounded. This is illustrated in Figure \ref{fig:nn_narrow}, where we repeat the analysis of Figure \ref{fig:rf_narrow}, but do not observe similar model-wise divergences. Moreover, this means that the NN model does not display sample-wise divergences in the absence of label noise. Therefore, training the hidden layers affords the advantage of avoiding the possible model- and sample-wise divergences that can arise in RF models with narrow bottlenecks. This sharp contrast makes sense, since in the RF model the presence of layers of width $\gamma_{l} < 1$ introduces a true bottleneck, while in the NN model one could in principle find a solution where, in all layers except the first, exactly one weight is nonzero, and the model essentially reduces to shallow linear regression. The existence of this solution reflects the fact that, from the standpoint of expressivity, NN models should be able to perform as well as LR models, and differences in performance reflect the behavior of the inference algorithm \cite{saxe2013exact}. Indeed, if $\sigma = 1$ and $\eta = 0$, we have the solution $z = 1-\alpha$, and $\epsilon_{\textrm{NN}} = \epsilon_{\textrm{LR}}$. Therefore, in this special case, the RS result predicts that depth has no effect on generalization performance. This behavior is clearly illustrated by Figure \ref{fig:nn_narrow}, where the generalization error of a three-layer NN remains constant as the widths of the two hidden layers are varied. Even at non-zero noise levels, Figure \ref{fig:nn_learning_curve} illustrates that width has a relatively minimal effect on generalization performance when $\sigma = 1$.
\subsection{Large-width behavior}
Beyond the special cases mentioned above, we observe that, in the limit $\gamma_{1},\ldots,\gamma_{\ell} \to \infty$, we have the solution $z=\sigma^{2}(1-\alpha)$ for any fixed $\alpha$, $\sigma$, and $\eta$. Therefore, as we found for the RF model, the NN model's generalization performance reduces to that of the shallow LR model in this large-width limit: $\epsilon_{\textrm{NN}} \to \epsilon_{\textrm{LR}}$. In the regime $\alpha < 1$, $\gamma_{1},\ldots,\gamma_{\ell} \gg 1$, we can obtain a perturbative solution for the learning curve (see Appendix \ref{app:rs_saddle_point}), which is given as
\begin{align} \label{eqn:nn_perturbation}
\epsilon_{\textrm{NN}} &= \epsilon_{\textrm{LR}}
+ [(1 - \alpha) (1-\sigma^{2}) + \eta^2 ] \sum_{l=1}^{\ell} \frac{\alpha}{\gamma_{l}} + \mathcal{O}\left(\frac{\alpha^{2}}{\gamma^2}\right)
\end{align}
to leading order. This result can be compared to the leading-order perturbative computation of the zero-temperature learning curve for fixed data in our previous work \cite{zv2021asymptotics}. As shown in Appendix \ref{app:perturbative_comparison}, averaging the result of \cite{zv2021asymptotics} over data recovers the $\mathcal{O}(\alpha/\gamma)$ term resulting from the replica method computation. This suggests that the RS prediction for the NN model learning curve is accurate at large widths. Heuristically, this makes sense because the concavity of the log-posterior is restored in the limit $\gamma_{1},\ldots,\gamma_{\ell} \to \infty$.
This limiting result has several interesting features. First, paralleling our analysis of the RF model at large widths, the closeness of the NN model's learning curve to that of simple linear regression is determined by a combination of depth, dataset size and width. Second, not only do the RS learning curves for NN and RF models agree at infinite width, but the leading order corrections agree (i.e., the term that is linear in $\alpha/\gamma_{l}$; see \eqref{eqn:rf_perturbation}). Thus, if one tracked only the generalization error, one could not differentiate between training only the readout layer and training all of the layers simply by considering the leading order perturbative correction. One could of course distinguish between these two models by considering leading-order corrections to observables that explicitly measure task-relevant feature learning in early hidden layers, such as the kernels considered in our previous work \cite{zv2021asymptotics}.
\subsection{Generalization gap between RF and NN models}
To distinguish between RF and NN models based on generalization performance, one must therefore go to higher order in perturbation theory. For convenience and clarity, we specialize to the case of networks with equal hidden layer widths $\gamma_{1}=\gamma_{2}=\cdots=\gamma_{\ell} = \gamma$. Then, we find that
\begin{align}
\frac{\epsilon_{\textrm{NN}} - \epsilon_{\textrm{LR}}}{1-\alpha+\eta^2} &= (1-\tilde{\sigma}^2) \frac{\ell \alpha}{\gamma} \nonumber\\&\quad + \left(\frac{\ell(\ell-1) \tilde{\sigma}^2}{2} - \frac{\ell(\ell+1) }{2 \tilde{\sigma}^2} + \ell \right) \frac{\alpha^2}{\gamma^2} \nonumber\\&\quad + \mathcal{O}\left(\frac{\alpha^3}{\gamma^3}\right),
\end{align}
where $\tilde{\sigma}$ is defined as in \eqref{eqn:sigma_tilde}. In contrast, by truncating \eqref{eqn:rf_all_order_series} to this order, we can see the corresponding RF model has generalization error
\begin{align}
\frac{\epsilon_{\textrm{RF}} - \epsilon_{\textrm{LR}}}{1-\alpha+\eta^2} &= (1-\tilde{\sigma}^2) \frac{\ell \alpha}{\gamma} \nonumber\\&\quad + \left(\frac{\ell (\ell-1) \tilde{\sigma}^2}{2} + \ell\right) \frac{\alpha^2}{\gamma^2} \nonumber\\&\quad + \mathcal{O}\left(\frac{\alpha^{3}}{\gamma^3}\right).
\end{align}
Therefore, the next-to-leading order correction can distinguish between RF and NN models. Moreover, the gap in the generalization performance of the two models is, to the given order,
\begin{align}
\frac{\epsilon_{\textrm{RF}} - \epsilon_{\textrm{NN}}}{1-\alpha+\eta^2} = \frac{\ell(\ell+1) }{2 \tilde{\sigma}^2} \frac{\alpha^2}{\gamma^2} + \mathcal{O}\left(\frac{\alpha^{3}}{\gamma^3}\right).
\end{align}
The coefficient of the leading term is always positive, hence at very large widths training all layers should produce a small benefit relative to simply training the readout. In the two-layer case, one can use the closed-form solution for the RS generalization error to show that the generalization gap $\epsilon_{\textrm{RF}} - \epsilon_{\textrm{NN}}$ is strictly positive, except at vanishing load or in the limit $\gamma_{1} \to \infty$ (see Appendix \ref{app:generalization_gap}). These results suggest that training all layers of a deep linear network can yield improved generalization relative to training only the last layer, even if the widths are large enough such that the RF model does not display double-descent in the absence of noise. See Figure \ref{fig:rf_nn_gap} for an illustration of this behavior.
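This gap is easy to examine numerically; the short check below (reusing the eps_rf and eps_nn helpers sketched above) compares the exact RS gap with the leading-order prediction:
\begin{verbatim}
alpha, ell, sigma2, eta2 = 0.3, 2, 1.0, 0.0
s2 = sigma2 / (1 + eta2 / (1 - alpha))  # tilde sigma^2
for gamma in [5.0, 10.0, 20.0, 40.0]:
    gap = (eps_rf(alpha, [gamma] * ell, sigma2, eta2)
           - eps_nn(alpha, [gamma] * ell, sigma2, eta2))
    pred = (ell * (ell + 1) / (2 * s2)
            * (alpha / gamma) ** 2 * (1 - alpha + eta2))
    print(gamma, gap, pred)  # gap approaches pred as gamma grows
\end{verbatim}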
\subsection{Optimal width and depth}
The leading correction term in \eqref{eqn:nn_perturbation} predicts that generalization error always decreases with increasing width (respectively increases with increasing depth) if $\tilde{\sigma} < 1$, is constant if $\tilde{\sigma} = 1$, and always increases (respectively decreases) if $\tilde{\sigma} > 1$. As shown in Appendix \ref{app:perturbative_comparison}, this condition is the dataset-averaged version of the fixed-data condition noted by Li and Sompolinsky \cite{li2021statistical} and in our previous perturbative work \cite{zv2021asymptotics}. In Appendix \ref{app:nn_optimal_width}, we show in detail that this condition captures the behavior of the full RS generalization error of an NN with one hidden layer; for other depths it follows from an argument based on implicit differentiation given by Li and Sompolinsky \cite{li2021statistical}. Therefore, like in our study of the RF model, the optimal width of an NN depends on the match between the scales of the prior and of the target. However, unlike for an RF model, the RS result suggests that the optimal generalization performance for an NN is obtained either by taking $\gamma_{l} \to \infty$ or by taking $\gamma_{l} \downarrow 0$, behavior which is fully predicted by the leading perturbative correction. This behavior is illustrated in Figure \ref{fig:nn_optimal}.
\begin{figure}[tb]
\centering
\includegraphics[width=3in]{fig/bnn_rf_gap.pdf}
\caption{The generalization gap between RF and NN models approaches zero with increasing width. The difference $\epsilon_{\textrm{RF}} - \epsilon_{\textrm{NN}}$ remains positive or consistent with zero within standard error throughout, highlighting the advantage in training all layers. See Appendix \ref{app:numerical_methods} for details of our numerical methods.}
\label{fig:rf_nn_gap}
\end{figure}
\section{Discussion and conclusions}
\subsection{Summary of results}
In this work, we studied the statistical mechanics of inference in deep Bayesian linear models. We characterized the learning curves of deep linear random feature models and deep linear neural networks for isotropic Gaussian covariates, using a combination of the replica trick and replica-free methods. Our primary results on how deep Bayesian linear models with random and learned features resemble and differ from one another may be summarized as follows:
\begin{itemize}
\item In the presence of label noise, both RF and NN models display sample-wise double-descent (Figures \ref{fig:rf_learning_curve} and \ref{fig:nn_learning_curve}). For RF models, the presence of a bottleneck layer with width less than the input dimension induces model-wise double-descent at fixed dataset size and sample-wise double-descent at fixed width (Figures \ref{fig:rf_learning_curve} and \ref{fig:rf_narrow}), while bottlenecks do not affect the double-descent behavior of NN models (Figures \ref{fig:nn_learning_curve} and \ref{fig:nn_narrow}). In particular, NN models do not display model-wise double-descent, and do not display sample-wise double-descent in the absence of label noise.
\item
For both RF and NN models, the effect of width on generalization depends on the match between the prior variance and the true scale of the targets, with wider networks yielding better generalization when the prior variance is less than the average target scale (Figures \ref{fig:rf_optimal} and \ref{fig:nn_optimal}). For NN models, taking the network to be as wide or as narrow as possible is always optimal. In contrast, when the prior variance is greater than the average target scale, there is a particular width that yields optimal generalization in RF models.
\item
Similarly, the optimal depth for both models depends on prior-target mismatch. Paralleling the case of optimal width, deeper models always perform worse when the prior variance is less than the average target scale (Figures \ref{fig:rf_optimal} and \ref{fig:nn_optimal}). When prior variance is greater than the average target scale, shallower models perform better. In this regime, as in the case of optimal width, there is a particular depth that yields optimal RF model generalization for fixed width, prior variance, and data density.
\item
Both RF and NN models display kernel-limit behavior---i.e., their learning curves reduce to those of shallow linear regression---when the depth and dataset size are small relative to the hidden layer width. Moreover, for both classes of deep models, the $\mathcal{O}(\ell\alpha/\gamma)$ perturbative correction captures much of the gross qualitative behavior of the learning curve as a function of prior variance, width, and depth.
\item
The learning curves of wide RF and NN models coincide not only in the limit $\ell\alpha/\gamma \downarrow 0$, but have identical leading-order corrections in $\ell\alpha/\gamma$. Training all layers improves generalization relative to training only the readout, but this gap is an $\mathcal{O}(\ell^2\alpha^2/\gamma^2)$ effect (Figure \ref{fig:rf_nn_gap}).
\end{itemize}
\begin{figure*}
\centering
\includegraphics[width=4in]{fig/nn_target_prior_mismatch_v2.pdf}
\caption{Optimal NN model architecture depends on target-prior mismatch.
\textbf{(a).} Contour plot in $(\alpha,\gamma)$-space of the theoretical error surface $\epsilon_{\textrm{NN}}$ \eqref{eqn:nn_learning_curve} for a single-hidden-layer NN model with prior variance $\sigma^2 = 1/4$. For all panels, we have no label noise ($\eta = 0$) and set the input dimensionality $d = 100$. For details of our numerical methods, see Appendix \ref{app:numerical_methods}.
\textbf{(b).} As in (a)., but for a single-hidden-layer NN model with higher prior variance ($\sigma^2 = 4$).
\textbf{(c).} As in (a)., but for a deep NN model ($\ell = 5$) and prior variance $\sigma^2 = 1/4$.
\textbf{(d).} As in (a)., but for a deep NN model ($\ell = 5$) and with higher prior variance ($\sigma^2 = 4$).
\textbf{(e).} Vertical cross sections of above (a). Theory curves are overlaid with experiment points, plotted with $\pm 2$ SE bars.
\textbf{(f).} Vertical cross sections of above (b).
\textbf{(g).} Error across different depths for prior variance $\sigma^2 = 1/4$ and fixed width $\gamma = 1.5$.
\textbf{(h).} Error across different depths for prior variance $\sigma^2 = 4$ and fixed width $\gamma = 1.5$.
}
\label{fig:nn_optimal}
\end{figure*}
\subsection{Prior work}
As noted above, our results for deep linear neural networks partially overlap with those obtained previously by Li and Sompolinsky \cite{li2021statistical}. Specifically, the RS learning curve for networks of equal hidden layer width agrees with the result they obtained through an alternative heuristic, and, as a result, their criteria for when generalization improves or degrades with width and depth coincide with those obtained here. However, they did not analyze in detail how the kernel limit is approached as $\ell \alpha/\gamma \downarrow 0$, and did not consider random feature models. Our results therefore complement their study by providing a more granular picture of how generalization performance for random datasets depends on model architecture. Moreover, the agreement between their approximations, our RS results, and our numerical simulations is consistent with the conjecture that the RS learning curve is reasonably accurate.
The statistical mechanics of inference in shallow linear models with more general priors and likelihoods was investigated in detail by Advani and Ganguli \cite{advani2016statistical}, who showed a correspondence between the performance of Bayesian minimum mean-squared error (MMSE) inference and a class of algorithms known as M-estimators. The effect of prior mismatch on the performance of the shallow MMSE estimator has been considered in recent rigorous work by Barbier \emph{et al.} \cite{barbier2021performance}. The generalization error of the MMSE estimator is given by the error of the posterior mean---i.e., $\lim_{p,d \to \infty} \mathbb{E}_{\mathcal{D}} \Vert \langle \mathbf{w} \rangle - \mathbf{w}_{\ast} \Vert^2/d$---rather than the posterior mean of the error \eqref{eqn:generalization_error}, which gives the generalization error of the Gibbs estimators considered here. As discussed briefly above and in detail in Appendix \ref{app:rf_posterior_expectations} and Appendix \ref{app:nn_posterior_expectations}, our results therefore include an additional contribution to the generalization error from the posterior covariance $\langle \mathbf{w} \mathbf{w}^{\top} \rangle - \langle \mathbf{w} \rangle \langle \mathbf{w} \rangle^{\top}$ of the end-to-end weight vector, which is not identically zero. If one considered an alternative low-temperature limit in which the prior variance is proportional to $1/\beta$, then this additional contribution would vanish in the low-temperature limit. Our choice of scaling is motivated by the considerations described in our previous work \cite{zv2021asymptotics}, and is the one classically used in studies of the statistical mechanics of Bayesian inference \cite{biehl1998phase,solla1992learning,levin1990statistical,li2021statistical}. This choice is important as it affects the relationship of our results to those in the setting of ridge regression. As discussed in Appendices \ref{app:rf_posterior_expectations} and \ref{app:nn_posterior_expectations}, in our previous work \cite{zv2021asymptotics}, and in the works of Advani and Ganguli \cite{advani2016statistical} and Barbier \emph{et al.} \cite{barbier2021performance}, the zero-temperature limit of the MMSE estimator will in this case coincide with the ridge regression estimator.
Here, we considered a proportional asymptotic limit in which the input dimension $d$, dataset size $p$, and hidden layer widths $n_{1},\ldots,n_{\ell}$ tend jointly to infinity with fixed limiting ratios $\alpha = p/d = \mathcal{O}(1)$ and $\gamma_{l} = n_{l}/d = \mathcal{O}(1)$ and fixed depth $\ell$. Moreover, we have only considered networks with scalar output. In this setting, kernel-machine behavior---i.e., approximate reduction of the learning curves of deep models to those of simple linear regression \cite{neal1996priors,williams1997computing,lee2018deep,matthews2018gaussian,hron2020exact}---emerges when the ratio $\ell \alpha/\gamma_{l}$ is vanishingly small. This is consistent with our observations in prior work \cite{zv2021asymptotics,zv2021scale}, and with those of works that considered large depths but fixed dataset size \cite{roberts2022principles} or large dataset size but fixed depth \cite{li2021statistical,naveh2021self}. Given the large scale of contemporary regression and classification datasets (e.g., \cite{russakovsky2015imagenet}), careful consideration of limits in which the output dimension and dataset size are not vanishingly small relative to hidden layer width warrants further study.
This regime has thus far proven challenging to access perturbatively, as large deviations from the kernel limit may emerge \cite{zv2021asymptotics,li2021statistical,aitchison2020bigger,zv2021scale,naveh2021self}. Existing fixed-data approaches to regimes in which either the dataset size \cite{naveh2021self,li2021statistical,zv2021scale} or the output dimension \cite{aitchison2020bigger,zv2021scale} is not negligible relative to hidden layer width rely on saddle-point approximations that may break down when both of these parameters are large. New approaches will therefore be required to study networks in this limit non-perturbatively. With such results in hand, it will be interesting to test whether existing perturbative predictions do in fact capture qualitative features of how generalization depends on network architecture and other hyperparameters. For the simple models considered here, we found that small-sample-size perturbation theory does in fact yield largely correct predictions for when wider networks generalize better, even at large sample size.
\subsection{Outlook}
We conclude by noting that our work has several important limitations, which will be interesting to address in future work. First, our approach is highly specialized to deep linear networks, and would not extend easily to nonlinear models. Though the utility of linear networks as a model system for studying the effect of depth on inference has been clearly established \cite{saxe2013exact,fukumizu1998effect,advani2020high,zv2021asymptotics,li2021statistical}, rigorous characterization of the effect of nonlinearity on inference in deep Bayesian neural networks remains a largely open problem \cite{zv2021asymptotics,zv2021exact,zv2021activation,naveh2021predicting,naveh2021self,li2021statistical,roberts2022principles,aitchison2020bigger}. Second, we have assumed that the covariates are drawn from an isotropic Gaussian distribution. Though this is a standard generative model in theoretical studies of inference \cite{krogh1992generalization,hastie2019surprises,barbier2021performance,advani2016statistical,nakkiran2019more,advani2020high}, it is undoubtedly not reflective of real-world data. Extending results of this form to more realistic generative models will be an interesting objective for future work \cite{canatar2021spectral,loureiro2021learning,gold2020manifold}. We remark that some of the fixed-data results of Appendices \ref{app:rf_posterior_expectations} and \ref{app:nn_posterior_expectations} would extend immediately to anisotropic and non-Gaussian data provided that the requisite invertibility conditions hold. Finally, our replica theory approach is of course non-rigorous. For the RF model, we do not expect replica symmetry to be broken, and conjecture that our results might be rigorously justifiable \cite{mezard1987spin,barbier2021strong,canatar2021spectral,engel2001statistical,akemann2013products}. Moreover, our replica-free analytical approaches and numerical experiments suggest that our RS results for NNs are at the very least a reasonable approximation for their true generalization performance. With that in mind, careful exploration of the possibility of replica symmetry breaking will be an interesting topic for further investigation.
\begin{acknowledgments}
We thank A. Atanasov and B. Bordelon for helpful comments on our manuscript. This work was supported by a Google Faculty Research Award and NSF Award \#2134157. A subset of the computations in this paper were performed using the Harvard University FAS Division of Science Research Computing Group's Cannon HPC cluster.
\end{acknowledgments}
\section{INTRODUCTION}
\label{sec:Intro}
The level of stellar activity is typically characterized by the radiation emitted in the X-ray and EUV bands, which is a measure of the temperature of the stellar atmosphere \citep[the stellar corona, see review by][]{gudel07}. The source of this radiation is the plasma, at temperatures of over a million degrees, confined in the closed coronal magnetic loops.
While the complete suite of mechanisms for coronal heating is still under debate, it is largely accepted that the magnetic field is the main source of energy for such intense heating. Thus, stellar X-ray emission serves as a proxy for both the stellar coronal field structure and strength. The ambient X-ray luminosity can be enhanced by stellar flaring activity, during which particles are impulsively accelerated and coronal plasma is heated during transient events \citep[see e.g., review by][]{Schrijver09}. These flaring events are also driven by the stellar magnetic field. The total X-ray luminosity, $L_X$, of the Sun is one of the best indirect indicators of the solar magnetic cycle. Observations have shown that solar $L_X$ oscillates in coherence with the solar magnetic cycle, with $L_X$ being much larger at times of high magnetic activity. The change in $L_X$ over the solar cycle is also rather large compared to the variability at other wavelengths, varying by orders of magnitude in the hard X-ray and by about a factor of 6 in the soft X-ray \citep[e.g.,][]{Judge03,Cohen11}.
Stellar activity has been related to the stellar rotation and age, which are known to correlate with each other through the well-known Skumanich law \citep{Skumanich72} for late-type stars. Recently, the rotation-age relation has also been investigated for earlier stellar ages \citep[e.g.,][]{GalletBouvier13}. While the rotation-age relation is fairly well understood, and is attributed, in part, to stellar spindown by the magnetized stellar wind \citep[e.g.,][]{weberdavis67, Matt12, Vidotto14a, Garraffo15}, a robust understanding of the relationship between activity and rotation remains elusive. Observations have shown that the stellar activity, represented by the ratio of the X-ray luminosity to the bolometric luminosity, $R_X=L_X/L_{bol}$, increases with rotation and saturates below a certain rotation period \citep{Pallavicini81,Wright11}. The saturation level is more notable when $R_X$ is displayed as a function of the Rossby number, $Ro=P_{rot}/\tau$, where $P_{rot}$ is the rotation period and $\tau$ is the stellar convective turnover time \citep{Pizzolato03}. Rapidly rotating stars with $Ro$ smaller than about 0.1 have a saturated $R_X$ of about $10^{-3}$, while those with higher $Ro$ show a decline in $R_X$ as $Ro$ increases. This observational law is known as the ``activity-rotation'' relationship.
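For orientation, this behavior can be encoded as a simple broken power law. The sketch below is our own illustration (not a fit from the works cited above), assuming a saturation level of $10^{-3}$ below $Ro\approx0.1$ and a representative power-law index of $-2$ above it:
\begin{verbatim}
# Illustrative broken power law for the activity-rotation relation.
# The saturation level, break point, and slope are assumed,
# representative values, not fits from the cited works.
RX_SAT = 1.0e-3   # assumed saturated value of R_X = L_X / L_bol
RO_SAT = 0.1      # assumed Rossby number where saturation sets in
BETA   = -2.0     # assumed power-law index in the unsaturated regime

def r_x(ro):
    """Return R_X for a given Rossby number ro = P_rot / tau."""
    if ro <= RO_SAT:
        return RX_SAT                       # saturated regime
    return RX_SAT * (ro / RO_SAT) ** BETA   # power-law decline

for ro in (0.02, 0.05, 0.1, 0.3, 1.0):
    print(f"Ro = {ro:5.2f}  ->  R_X ~ {r_x(ro):.1e}")
\end{verbatim}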
Stars in the saturated regime of the activity-rotation diagram are known to exhibit rather intense magnetic fields \citep[e.g.,][]{ReinersBasri07, Vidotto14b}. In particular, rapidly rotating fully convective M-stars stand out and are known to produce magnetic fields that can reach kilo-Gauss (kG) levels on scales comparable to the size of the star \citep{Morin10}. It is also expected that most of the magnetic flux in M-stars is present on small scales \cite[][]{Saar94,Saar96,ReinersBasri09}, implying that numerous small-scale active regions with typical fields reaching several kG are also present. This is a rather exotic scenario, since the solar magnetic field reaches kG levels only in small active regions. Due to the unprecedented nature of the magnetism in low Rossby number fully convective M-stars, the resulting coronal properties are not yet understood.
Several tentative explanations for the activity saturation in low $Ro$ stars have been proposed. First, it is possible that the percentage of the stellar surface which is covered in hot coronal loops (the ``filling factor'') is so high that any additional loops do not substantially contribute to $L_X$ \citep{Vilhu84}. Alternatively, it is possible that in low $Ro$ stars, the flaring rate is so high that the cumulative $L_X$ is dominated by the transient flares \citep{gudel07}. It has also been suggested that the saturation could be the result of centrifugal stripping of the corona \citep{JardineUnruh99}. In this paper, we use the magnetic field from an M-star dynamo simulation in a model for the stellar corona to better understand the X-ray activity of stars in the saturated activity regime.
In the next section we describe the models used here, we present the results in Section~\ref{sec:Results}, and discuss them in Section~\ref{sec:Discussion}. We finish with our conclusions in Section~\ref{sec:Conclusions}
\section{Description of Models}
\label{sec:Models}
The dynamo model simulates self-consistently the convection and magnetic field generation in the convection zone of a fully convective star \citep{Yadav15b}. The second model simulates the stellar corona and stellar wind, and is driven by the photospheric field provided by the aforementioned dynamo model. Here we describe the models briefly.
\subsection{Dynamo Model}
\label{sec:Dynamo}
For the dynamo simulation of a nearly fully-convective M-star, we use the open-source MagIC code\footnote{{\tt https://github.com/magic-sph}} \citep{Gastine12a}. The simulation solves the anelastic fully-nonlinear magnetohydrodynamic (MHD) equations in a rotating spherical shell with a tiny inner core of radius 0.1$r_o$, where $r_o$ is the outer radius of the simulated domain. The simulated convection zone contains 5 density scale heights, enough to model about 95\% of the stellar convection zone. The magnetic field is self-consistently generated from a seed magnetic field. The field morphology is dipole-dominated on large scales with strength reaching several kG. However, most of the magnetic flux is present in much smaller magnetic field regions. The area-averaged total mean field strength on the simulation surface is about 2 kG. The mean Rossby number is about 0.05. Further details can be found in \cite{Yadav15b}. It should be noted that the outer surface in this simulation is actually a level below the photosphere of the star being modeled. We believe that, at least on the length scales resolved by this simulation, the photospheric turbulent convection (not simulated) will not affect the strong magnetic field features. Therefore, we assume that the magnetic field on the simulation surface largely represents the stellar photospheric field.
\subsection{Coronal Model}
\label{sec:Corona}
The stellar corona is simulated using the Alfv\'en Wave Solar Wind Model (AWSOM) \citep{vanderholst14}. This model solves the MHD equations including additional momentum and energy terms, which assume that the coronal heating and the wind acceleration are the result of Alfv\'en wave turbulence. These terms are derived from first-principles, physics-based theoretical models. In addition, the model includes thermodynamics and radiative transfer terms. The model has been extensively validated with solar data, and scaling it to other stars is reliable due to the fact that 1) the Poynting flux in the model assumes the observed linear relation between the magnetic flux, $\Phi_m$, and $L_X$ \citep{Pevtsov03}; and 2) the dissipation term, $L_\perp$, scales with the square root of the average surface field magnitude \citep{Hollweg86,vanderholst14}. For a given photospheric radial field provided by the dynamo model, AWSOM provides a three-dimensional, quiescent solution with a hot corona and an accelerated stellar wind up to a typical distance of $25-40R_\star$. To apply AWSOM, we use the magnetograms produced by the dynamo simulation of a nearly fully convective M star with radius and mass of $R_\star=0.3R_\odot$, $M_\star=0.3\,M_\odot$, and rotation period of $P_{rot}=20$ days. We use the dynamo magnetogram in two forms. In the first one, we mostly preserve the dynamo data resolution and call it high-resolution or `HR'. In the second, we apply a low-pass filter to the dynamo data and artificially smooth it. We refer to this magnetogram as `LR'. The resolution of the LR type magnetogram is similar to the magnetic field maps of the fully convective stars inferred using the Zeeman-Doppler imaging technique \cite[][]{Morin10}. Figure~\ref{fig:f1} shows the input magnetograms used here.
\subsection{Synthetic X-ray Emissions}
\label{sec:SyntheticXray}
In order to compare our results with X-ray observations, the coronal model enables us to produce synthetic X-ray images (it was originally designed to reproduce solar X-ray images). Here we produce the images by performing the line-of-sight (LOS) integration:
\begin{equation}
I_{pix}=\int n^2_e\Lambda(T)ds,
\end{equation}
where $I_{pix}$ is the pixel's flux, $n_e$ is the electron density, $ds$ is the differential path along the LOS, and $\Lambda$ is the temperature response function. The response functions are taken from an external table, which lists the emissivity of iso-density and isothermal plasma for various densities and temperatures, computed using {\it CHIANTI} line and continuum emissivities \citep[e.g.,][]{Dare97} and \cite{Grevesse92} abundances. The emissivities are in units of $[10^{-23}\;erg\;cm^3\;s^{-1}]$. For each solution, we generate synthetic images for a series of LOS, incremented by 10 degrees along the stellar viewing phase (with zero inclination). This enables us to generate synthetic light curves, where we multiply the image flux by the stellar surface area to obtain the total simulated $L_X$ in $ergs\;s^{-1}$.
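For illustration, the LOS integration above can be approximated by a discrete sum along each pixel's ray. The Python sketch below assumes toy density and temperature profiles and a placeholder response table (not the {\it CHIANTI} emissivities used in this work):
\begin{verbatim}
import numpy as np

# Placeholder response table (NOT the CHIANTI table used in the text):
# log10(T/K) versus emissivity in 1e-23 erg cm^3 s^-1, illustration only.
logT_tab = np.array([5.5, 6.0, 6.5, 7.0, 7.5])
lam_tab  = np.array([0.1, 1.0, 2.0, 1.5, 0.8])

def pixel_flux(n_e, T, ds):
    """Approximate I_pix = int n_e^2 Lambda(T) ds by a Riemann sum.
    n_e [cm^-3], T [K], and ds [cm] are sampled along the LOS."""
    lam = np.interp(np.log10(T), logT_tab, lam_tab) * 1e-23
    return np.sum(n_e**2 * lam * ds)

# Toy LOS: 100 steps through a loop-like density profile.
s   = np.linspace(0.0, 1e10, 100)            # path samples [cm]
ds  = np.gradient(s)                         # step sizes [cm]
n_e = 1e9 * np.exp(-((s - 5e9) / 2e9)**2)    # toy density [cm^-3]
T   = 3e6 * np.ones_like(s)                  # toy isothermal plasma [K]
print(f"I_pix ~ {pixel_flux(n_e, T, ds):.3e} (toy units)")
\end{verbatim}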
\section{RESULTS}
\label{sec:Results}
Figure~\ref{fig:f2} shows the structure of the coronal loops in the AWSOM solutions for the two input magnetograms. The stellar surface is colored by the surface magnetic field, while the field lines are colored by their temperature.
The first notable feature is that despite the difference in resolution and the much more detailed surface field in the HR map, the two coronal solutions are quite similar. Both solutions include a strong field concentration at high latitudes, which leads to a dipole-like structure of the coronal field, dominated by large loops that extend from one pole to the other. The structure of the smaller, underlying loops is different between the two solutions, where some of the loops fragment more in the HR solution, as this map enables the magnetic field to find closer pairs of opposite field polarity at its footpoints.
The most notable feature in both solutions is that the large, overlying coronal loops are hotter than the underlying smaller loops. This is consistent with the Rosner-Tucker-Vaiana (RTV) loop scaling \citep{RTV78}, which predicts that the loop's maximum temperature, $T_{max}$, scales as a positive power of the loop's length if all other parameters are roughly the same, in particular for the same or weaker footpoint field strength \citep[e.g.,][]{Aschwanden08,Cranmer09,Martens10,Bourdin16}.
Figure~\ref{fig:f3} shows synthetic X-ray images of the two solutions. For reference, we also show a comparison between simulated and real X-ray images of the Sun, which display the contrast between the hotter active regions and the quiescent corona on November 25, 1996. These images are created assuming response functions for the {\it YOHKOH SXT AlMg} line.
Similarly to Figure~\ref{fig:f2}, the X-ray images for the M star are similar for both solutions, with slight differences in the shape and size of the darker areas representing the coronal holes. It is clear that the images for the M star are saturated in the X-ray compared to the solar images, and the smaller-scale structure of the coronal loops in the M star cases is not quite visible. We point out that a similar saturation was obtained for the X-ray images of the M star solutions using the {\it YOHKOH SXT AlMg} response function table.
Figure~\ref{fig:f4} shows the total $L_X$ as a function of phase for the two solutions of our generic fully convective M star. The total $L_X$ is about $2-3\cdot 10^{28}\;ergs\;s^{-1}$, which translates to $R_X=10^{-4}-10^{-3}$. In our case, $Ro\approx 0.05$, so the obtained $L_X$ is roughly within the spread of the saturated X-ray activity seen in observations \cite[][]{Wright11}. Table~\ref{tab:table1} shows $T_{max}$ and the average $\log{(L_X)}$ for the two solutions. The simulated $\log{(L_X)}\approx \;28.2-28.4$ matches observations of stars with $Ro$ around 0.05, e.g., AD Leo ($\log{(L_X)}=28.3$) and YZ CMi ($\log{(L_X)}=28.33$) \citep{Vidotto14b}. The simulated $T_{max}\approx 6-7$~MK is within the range observed for mid-M stars by \cite{Preibisch05}. While $T_{max}$ is quite different between the solutions, $L_X$ is not very different, which means that $T_{max}$ is probably very local, while the overall dominant temperatures are more similar between the solutions. The variability in the light curves is due to longitudinal variations in the magnetic field at high latitudes. This means that while the overall field structure is dipolar, some star-size loops may be rooted in a stronger field than others, leading to slightly higher loop temperatures at preferred longitudes.
\section{DISCUSSION}
\label{sec:Discussion}
We perform our study here under the following assumptions: 1) The simulation results represent a static, quiescent solution to the coronal structure and X-ray emission; 2) the coronal heating in Sun-like stars is due to the Alfv\'en wave turbulence that can be extended to M-stars (see Section~\ref{sec:Corona}); and 3) in the Alfv\'en wave turbulence model, the temperature of the coronal loops scales with the magnitude of the footpoint magnetic field and the size of the loops.
Keeping these in mind, the results of our simulations show that the hot corona is dominated by the large, dipolar loops in fully convective M-stars. In fact, this should apply to any star that can maintain strong magnetic fields on star-size length scales. This situation is dramatically different from what we see on the Sun, where the hottest loops are those of the small active regions. This is due to the fact that the high-latitude surface field in rapidly rotating M-stars is very strong (reaching kG levels), while in the Sun, the polar field is much weaker (about 10G). {\it Thus, the location and the scale of the strong field concentration dictates the dominating loop scale in the stellar hot corona and its emission}.
It follows that a special dynamo mechanism is probably at work in rapidly rotating stars that sustains such large-scale magnetic fields. Our recent anelastic simulations of K-type \citep{Yadav15a} and M-type \citep{Yadav15b} stars show that under rapid rotation an $\alpha^2$-type dynamo mechanism, which does not require a tachocline or strong differential rotation, can sustain large-scale strong magnetic fields, in line with previous suggestions \citep{Christensen09, Gastine12b, Yadav13,WrightDrake16}. In rapidly rotating solar-type stars, observations indicate large-scale fields that are substantially stronger than those on the Sun \citep{Folsom16}. There is also some evidence that fast rotation induces field concentration at higher latitudes \citep{Donaticameron97, Strassmeier01}, perhaps producing a strong large-scale dipolar field configuration. This has been attributed to the poleward deflection of flux tubes by Coriolis forces or strong meridional circulations \citep{Schuesslersolanki92,Solanki97,Schrijvertitle01}. Therefore, observations and theoretical models support the existence of strong large-scale fields in rapidly rotating stars of different spectral types.
In principle, the model presented here could be tested against RS CVn stars and other eclipsing binaries examined in the literature. A limitation of this approach is that most studied systems have unknown or relatively high-mass secondaries, so the direct applicability is unclear. One system with a mid-M secondary is EI Eri, which \cite{PandeySingh12} fitted as a three-component plasma with temperatures ranging from 5 to 30~MK and the emission measure peaking around 10~MK. This is similar to the results for systems with higher-mass secondaries \citep[e.g., $\sigma^2$ Cor Bor, AR Lac, V711 Tau,][]{Osten03,PandeySingh12,Drake14}. The scale height of the corona of the G star in AR Lac is about 1.3 solar radii \citep{Drake14}. Those authors caution that symmetrical coronal eclipses, which can easily be interpreted in terms of a spherical emitting geometry, are fairly rare. Another approach is to look at flares, but their height is typically smaller than the scale height; in the case of EI Eri, a flare was observed and measured to be about 0.23$R_*$ \citep{PandeySingh12}. These sizes and temperature scales are consistent with the model posited here.
We propose that the "saturated activity" state contains a basal level of quiescent emission due to large hot loops, which is an alternative to the full coverage of the stellar surface by small loops \cite[as experimented by][]{Lang14}. The transition to the un-saturated regime occurs when the strong fields begin to appear predominantly in small active regions. The base level of the saturated regime could of course be enhanced with high flaring rate that can dominate the ambient, quiescent X-ray luminosity \cite[e.g.,][]{Gudel03}. This can increase the level of $R_x$ from $10^{-5}-10^{-4}$ to the saturated level around $10^{-3}$, especially when taking into account the wide spread around that level in \cite{Wright11}.
\section{Conclusions}
\label{sec:Conclusions}
We perform a combined simulation of the stellar dynamo and the stellar corona of a generic fully convective M star. The photospheric magnetic field is extracted from the dynamo model and is used to drive a physics-based coronal model. The magnetic field is dominated by a high-latitude concentration of a few kG. This leads to a quiescent coronal structure, which is dominated by large, dipolar, hot loops that extend from one pole to the other. As the small, underlying coronal loops are cooler, the coronal X-ray emission is dominated by the large hot loops and appears saturated. This is an alternative view to one where the stellar surface is covered with small, hot loops. We propose that the observed saturation in the activity-rotation relation, in its cooler, quiescent component, is due to the large hot loops, and that the transition to the un-saturated regime occurs when the stellar strong field begins to appear only in small active regions. Overlying this basal saturated level is a high rate of flares, which provides a near-continuous additional, hotter emission component that typically dominates the overall emission.
\acknowledgments
We thank the anonymous referee for her/his comments. The work presented here was funded by NASA Living with a Star grants NNX16AC11G and NNX16AB79G, and a {\em Chandra} grant GO4-15011X. Simulation results were obtained using the Space Weather Modeling Framework, developed by the Center for Space Environment Modeling at the University of Michigan with funding support from NASA ESS, NASA ESTO-CT, NSF KDI, and DoD MURI. The simulations were performed on NASA's PLEIADES cluster under the SMD-16-6857 allocation.
\bibliographystyle{apj}
\subsubsection{Determining the signal fraction in data\label{sub:sfrac}}
The signal fraction \ensuremath{f}\xspace is determined independently for the \ensuremath{e\!+\!{\rm jets}}\xspace and \ensuremath{\mu\!+\!{\rm jets}}\xspace channels directly from the selected data sample.
The likelihood depends explicitly on three parameters: \ensuremath{\Delta m}\xspace, \ensuremath{m_{\rm top}}\xspace, and $\ensuremath{f}\xspace$, as defined in Eq.~(\ref{eq:lh}). The uncalibrated signal fraction $\ensuremath{f}\xspace^{\rm uncal}$ is calculated in data as an average of $\ensuremath{f}\xspace^{\rm best}$ determined at each point in the $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ grid and weighted by the value of the likelihood at that point.
To calibrate $\ensuremath{f}\xspace^{\rm uncal}$, we form 1000 pseudo-experiments for each input signal fraction $\ensuremath{f}\xspace^{\rm true}$ in the interval $[0,1]$ in increments of 0.1, and extract $\ensuremath{f}\xspace^{\rm uncal}$ for each one, following the same procedure as in data. Signal MC events with $\ensuremath{m_t}\xspace=\ensuremath{m_{\bar t}}\xspace=172.5~\ensuremath{\textnormal{GeV}}\xspace$ are used for this calibration. A linear dependence is observed between $\ensuremath{f}\xspace^{\rm extr}$ and $\ensuremath{f}\xspace^{\rm true}$, where $\ensuremath{f}\xspace^{\rm extr}$ is the average of $\ensuremath{f}\xspace^{\rm uncal}$ values extracted in 1000 pseudo-experiments for a given $\ensuremath{f}\xspace^{\rm true}$. We use the results of a linear fit of $\ensuremath{f}\xspace^{\rm extr}$ to $\ensuremath{f}\xspace^{\rm true}$ to calibrate the fraction of signal events in data. The results are summarized in Table~\ref{tab:sfrac}. Possible systematic biases on the measured value of $\ensuremath{\Delta m}\xspace$ from the uncertainty on $\ensuremath{f}\xspace$ are discussed in Sec.~\ref{sec:syst}.
\begin{table}[h]
\caption{\label{tab:sfrac}
Signal fractions determined from data for the assumption that $\ensuremath{m_t}\xspace=\ensuremath{m_{\bar t}}\xspace=172.5~\ensuremath{\textnormal{GeV}}\xspace$. The uncertainties are statistical only.
}
\begin{centering}
\begin{tabular}{rc}
\hline\hline
Channel~~~&~Measured signal fraction \\
\hline
\ensuremath{e\!+\!{\rm jets}}\xspace~~~& $0.71~\pm~0.05$ \\
\ensuremath{\mu\!+\!{\rm jets}}\xspace~~~& $0.75~\pm~0.04$ \\
\hline\hline
\end{tabular}
\end{centering}
\end{table}
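A minimal sketch of this calibration step, using stand-in pseudo-experiment averages in place of the actual ensembles, is:
\begin{verbatim}
import numpy as np

# Stand-ins for the ensemble results: the mean extracted signal
# fraction f_extr from 1000 pseudo-experiments at each input f_true.
f_true = np.arange(0.0, 1.01, 0.1)
f_extr = 0.05 + 0.92 * f_true     # toy linear response (assumed numbers)

# Fit f_extr = c0 + c1 * f_true and invert it to calibrate a
# measured (uncalibrated) signal fraction.
c1, c0 = np.polyfit(f_true, f_extr, 1)

def calibrate(f_uncal):
    """Map an uncalibrated signal fraction back to the true scale."""
    return (f_uncal - c0) / c1

print(f"f_uncal = 0.70  ->  f_cal = {calibrate(0.70):.2f}")
\end{verbatim}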
\subsubsection{Calibration of $\ensuremath{\Delta m}\xspace$\label{sssec:calibdm}}
The dependence of the extracted \ensuremath{\Delta m}\xspace
on the generated \ensuremath{\Delta m}\xspace is determined from the extracted values ${\ensuremath{\Delta m}\xspace}^{\rm extr}(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$, again obtained from averaging $\langle\ensuremath{\Delta m}\xspace\rangle$ over 1000 pseudo-experiments for each $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ combination. The resulting distribution and fit to the 14~$(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ points is shown in Fig.~\ref{fig:calibmdel} (a) and (b) for the \ensuremath{e\!+\!{\rm jets}}\xspace and \ensuremath{\mu\!+\!{\rm jets}}\xspace channels, respectively. This provides the calibration of the extracted \ensuremath{\Delta m}\xspace value:
\begin{equation}\label{eq:calib}
\ensuremath{\Delta m}\xspace^{\rm extr}=\xi_0^{\ensuremath{\Delta m}\xspace}+\xi_1^{\ensuremath{\Delta m}\xspace}\cdot\ensuremath{\Delta m}\xspace^{\rm gen}\,.
\end{equation}
The fit parameters $\xi_i^{\ensuremath{\Delta m}\xspace}$ are summarized in Table~\ref{tab:calib}.
\begin{figure}[b]
\includegraphics[width=0.24\textwidth]{plots_prd/results/plotd_em_prd}
\hspace{-2mm}
\includegraphics[width=0.24\textwidth]{plots_prd/results/plotd_mu_prd}\\
\includegraphics[width=0.24\textwidth]{plots_prd/results/plotdp_em_prd}
\hspace{-2mm}
\includegraphics[width=0.24\textwidth]{plots_prd/results/plotdp_mu_prd}
\caption{\label{fig:calibmdel}
The calibration of the extracted $\ensuremath{\Delta m}\xspace$ value as a function of generated $\ensuremath{\Delta m}\xspace$ is shown for the (a)~\ensuremath{e\!+\!{\rm jets}}\xspace and (b)~\ensuremath{\mu\!+\!{\rm jets}}\xspace channels. The points are fitted to a linear function. Each point represents a set of 1000 pseudo-experiments for one of the fourteen $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ combinations. The circle, square, triangle, rhombus, cross, star, and ``$\times$'' symbols stand for $\ensuremath{m_{\rm top}}\xspace=165,167.5,170,172.5,175,177.5,{\rm~and~}180~\ensuremath{\textnormal{GeV}}\xspace$, respectively.
Similarly, the pull widths, as defined in the text, are given for the (c)~\ensuremath{e\!+\!{\rm jets}}\xspace and (d)~\ensuremath{\mu\!+\!{\rm jets}}\xspace channels.}
\end{figure}
For an unbiased estimate of \ensuremath{\Delta m}\xspace and of the uncertainty \ensuremath{\delta_{\Delta m}}\xspace on the measured $\langle\ensuremath{\Delta m}\xspace\rangle$ value, the distribution of the pulls should be described by a Gaussian function with a standard deviation~(SD) of unity and centered at zero. An~SD of the pulls larger than unity would indicate an underestimation of \ensuremath{\delta_{\Delta m}}\xspace, which could be caused by the simplifying assumptions of the ME technique discussed in Sec.~\ref{sec:method}. For a given pseudo-experiment at $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$, we define the pull in \ensuremath{\Delta m}\xspace as
\begin{equation}\label{eq:pull}
\pi_{\ensuremath{\Delta m}\xspace} = \frac{\langle\ensuremath{\Delta m}\xspace\rangle - {\ensuremath{\Delta m}\xspace}^{\rm extr}(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)}{\ensuremath{\delta_{\Delta m}}\xspace}\,.
\end{equation}
The pull widths $\pullw{\ensuremath{\Delta m}\xspace}$, defined by the SD~in Gaussian fits to the pull distributions, are also shown for all 14 $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ points in Fig.~\ref{fig:calibmdel}~(c) and~(d) for the \ensuremath{e\!+\!{\rm jets}}\xspace and \ensuremath{\mu\!+\!{\rm jets}}\xspace channels, respectively. The average pull widths $\langle\pullw{\ensuremath{\Delta m}\xspace}\rangle$ are taken from fits of the 14 pull widths in each channel to constant offsets and are summarized in Table~\ref{tab:calib}. We calibrate the estimated uncertainty according to $\ensuremath{\delta_{\Delta m}}\xspace^{\rm cal}\equiv\langle\pullw{\ensuremath{\Delta m}\xspace}\rangle\times\ensuremath{\delta_{\Delta m}}\xspace$.
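A minimal sketch of this pull-based calibration of the uncertainty, with toy pseudo-experiment outputs in place of the actual ensembles and the sample standard deviation standing in for the Gaussian-fit SD, is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble at one (m_t, m_tbar) point: assume the true spread is
# 25% larger than the likelihood-based estimate delta (pull width 1.25).
dm_extr   = 0.0                  # calibrated central value at this point
delta     = 1.0                  # per-experiment likelihood uncertainty
dm_values = rng.normal(dm_extr, 1.25 * delta, size=1000)

pulls      = (dm_values - dm_extr) / delta   # pull per pseudo-experiment
pull_width = np.std(pulls, ddof=1)           # stand-in for Gaussian-fit SD

delta_cal = pull_width * delta               # calibrated uncertainty
print(f"pull width ~ {pull_width:.2f} -> delta_cal ~ {delta_cal:.2f}")
\end{verbatim}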
\begin{table}
\caption{\label{tab:calib}
Fit parameters for the calibration of \ensuremath{\Delta m}\xspace and \ensuremath{m_{\rm top}}\xspace, defined by Eq.~(\ref{eq:calib}), and average pull-widths $\langle\pullw{}\rangle$ for pulls in \ensuremath{\Delta m}\xspace and \ensuremath{m_{\rm top}}\xspace, defined in Eq.~(\ref{eq:pull}).
}
\begin{center}
\begin{tabular}{llr@{ $\pm$ }r r@{ $\pm$ }r r@{ $\pm$ }r }
\hline
\hline
& Channel~ & \multicolumn{2}{c}{~~$\xi_0$~(GeV)}
& \multicolumn{2}{c}{$\xi_1$}
& \multicolumn{2}{c}{$\langle\pullw{}\rangle$} \\
\hline
\multirow{2}{*}{\ensuremath{\Delta m}\xspace}
& \ensuremath{e\!+\!{\rm jets}}\xspace &~$ 0.28$ & 0.14~&~1.10 & 0.02~&~1.25 & 0.01 \\
& \ensuremath{\mu\!+\!{\rm jets}}\xspace & $-0.08$ & 0.13 & 0.99 & 0.02 & 1.22 & 0.01 \\
\multirow{2}{*}{\ensuremath{m_{\rm top}}\xspace}
& \ensuremath{e\!+\!{\rm jets}}\xspace &~$ 0.53$ & 0.08~&~0.99 & 0.02~&~1.17 & 0.01 \\
& \ensuremath{\mu\!+\!{\rm jets}}\xspace & $ 0.24$ & 0.07 & 1.02 & 0.02 & 1.16 & 0.01 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Calibration of $\ensuremath{m_{\rm top}}\xspace$\label{sssec:calibmavg}}
Results from an analogous calibration of $\ensuremath{m_{\rm top}}\xspace$ are displayed in Fig.~\ref{fig:calibmsum}~(a) and~(b) for the \ensuremath{e\!+\!{\rm jets}}\xspace and \ensuremath{\mu\!+\!{\rm jets}}\xspace channel, respectively. The distributions in pull widths are given in parts (c) and~(d) of the same figure. The corresponding fit parameters and average pull widths are also summarized in Table~\ref{tab:calib}.
\begin{figure}[h]
\includegraphics[width=0.24\textwidth]{plots_prd/results/plota_em_prd}
\hspace{-2mm}
\includegraphics[width=0.24\textwidth]{plots_prd/results/plota_mu_prd}\\
\includegraphics[width=0.24\textwidth]{plots_prd/results/plotap_em_prd}
\hspace{-2mm}
\includegraphics[width=0.24\textwidth]{plots_prd/results/plotap_mu_prd}
\caption{\label{fig:calibmsum}
The calibration of the extracted $\ensuremath{m_{\rm top}}\xspace$ value as a function of generated $\ensuremath{m_{\rm top}}\xspace$ is shown for the (a)~\ensuremath{e\!+\!{\rm jets}}\xspace and (b)~\ensuremath{\mu\!+\!{\rm jets}}\xspace channels. The dependence is fitted to a linear function. Each point represents a set of 1000 pseudo-experiments for one of the fourteen $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ combinations.
Similarly, the pull widths, as defined in the text, are given for the (c)~\ensuremath{e\!+\!{\rm jets}}\xspace and (d)~\ensuremath{\mu\!+\!{\rm jets}}\xspace channels.
}
\end{figure}
\subsubsection{Modeling of physical processes\label{ssec:physics}}
\begin{enumerate}
\item
{\em Higher-order corrections:~}
To check the effect of higher-order corrections on~\ensuremath{\Delta m}\xspace, we perform ensemble studies using \ensuremath{t\bar{t}}\xspace~events generated with $(i)$~the NLO MC generator {\sc mc@{}nlo}~\cite{bib:mcnlo}, and $(ii)$~the LO MC generator {\sc alpgen}\xspace, with {\sc Herwig}\xspace~\cite{bib:herwig} for hadronization and shower evolution.
\item
{\em Initial and final-state radiation:~}
The modeling of extra jets from ISR/FSR is checked by comparing {\sc pythia}\xspace samples with modified input parameters, such as the $\pm1$\,SD changes found in a study of Drell-Yan processes~\cite{bib:cdfisr}.
\item
{\em Hadronization and underlying event:~}
To check a possible effect of \ensuremath{\Delta m}\xspace from the underlying event as well as the hadronization models, we compare samples hadronized using {\sc pythia}\xspace with those hadronized using {\sc Herwig}\xspace.
\item
{\em Color reconnection:~}
The default {\sc pythia}\xspace tune used at D0\xspace (tune {\tt A}) does not include explicit color reconnection.
For our check, we quantify the difference between \ensuremath{\Delta m}\xspace values found in ensemble studies for \ensuremath{t\bar{t}}\xspace MC samples generated using tunes {\tt Apro} and {\tt ACRpro}, where the latter includes an explicit model of color reconnection~\cite{bib:color,bib:acrpro}.
\item
{\em $b$-fragmentation\label{ssub:bfrag}:~}
Uncertainties in the simulation of \mbox{$b$-quark} fragmentation can affect the measurement of $\ensuremath{m_{\rm top}}\xspace$ in several phases of the analysis, such as in $b$-tagging and in the $b$-quark transfer functions used in the ME calculations. Such effects are studied in the context of \ensuremath{\Delta m}\xspace by reweighting the simulated $t\bar{t}$ events used in the calibration of the method from the default Bowler scheme~\cite{bowler}, which is tuned to LEP (ALEPH, OPAL, and DELPHI) data, to a tune that accounts for differences between SLD and LEP data~\cite{bfragyvonne}.
\item
{\em Uncertainty on PDF:~}
The CTEQ6M~\cite{bib:cteq} PDFs provide a set of possible excursions in parameters from their central values. To check the effect on \ensuremath{\Delta m}\xspace from PDFs, we change the default \ensuremath{t\bar{t}}\xspace MC sample (generated using CTEQ6L1) by reweighting it to CTEQ6M, repeat the ensemble studies for each of the parameter variations, and evaluate the uncertainty using the prescribed formula~\cite{bib:cteq} (a schematic implementation of this sum is sketched after this list):
\[
\qquad\delta_{\ensuremath{\Delta m}\xspace,\rm PDF}=\frac{1}{2}\bigg\{\sum_{i=1}^{20}\left[\ensuremath{\Delta m}\xspace(S_{i}^{+})-\ensuremath{\Delta m}\xspace(S_{i}^{-})\right]^{2}\bigg\}^{\frac{1}{2}},
\]
where the sum runs over PDF uncertainties for positive ($S_{i}^{+})$ and negative ($S_{i}^{-}$) excursions.
\item
{\em Multiple hadron interactions:~}
When calibrating the ME method, we reweight the luminosity profiles of our MC samples to the instantaneous luminosity profile for that data-taking period. For our check, we re-derive the calibration ignoring luminosity-dependent weights.
\item
{\em Modeling of background:~}
We check the effect of inadequate modeling of background processes on our \ensuremath{\Delta m}\xspace measurement by identifying distributions in the background-dominated $\ell+3$\,jets events that display only limited agreement between data and predictions from the sum of our signal and background models, as determined through a Kolmogorov-Smirnov test~\cite{bib:kstest}. The calibration of the method is then re-done using \ensuremath{W\!+\!{\rm jets}}\xspace events that are reweighted to bring the identified distributions of predicted signal and background events into better agreement with data.
\item
{\em Heavy-flavor scale-factor:~}
As discussed in Sec.~\ref{sec:samples}, a heavy-flavor scale-factor of $1.47\pm0.22$ is applied to the $W\!\!+\!b\bar{b}\!+\!\ensuremath{\rm jets}\xspace$ and $W\!\!+\!c\bar{c}\!+\!\ensuremath{\rm jets}\xspace$ production cross sections to increase the heavy-flavor content in the {\sc alpgen}\xspace $W$+jets MC samples. Moreover, a scale factor of $1.27\pm0.15$ for the $W\!\!+\!c\!+\!\ensuremath{\rm jets}\xspace$ production cross section is obtained using {\sc mcfm}.
We re-derive the calibration with the heavy-flavor scale-factor changed by $\pm30\%$ to check the magnitude of the effect on \ensuremath{\Delta m}\xspace.
\end{enumerate}
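The quadrature sum over the 20 CTEQ6M eigenvector pairs referenced in item 6 above can be implemented schematically as follows; the shifts below are placeholders for the values extracted from the reweighted ensembles:
\begin{verbatim}
import numpy as np

# Placeholder shifts Delta m(S_i^+) and Delta m(S_i^-) in GeV for the
# 20 eigenvector pairs; real values come from reweighted ensembles.
dm_plus  = np.full(20,  0.02)
dm_minus = np.full(20, -0.02)

# delta = (1/2) * sqrt( sum_i [ dm(S_i^+) - dm(S_i^-) ]^2 )
delta_pdf = 0.5 * np.sqrt(np.sum((dm_plus - dm_minus) ** 2))
print(f"delta_(dm,PDF) ~ {delta_pdf:.3f} GeV")
\end{verbatim}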
\subsubsection{Modeling of detector}
\begin{enumerate}
\item
{\em Trigger selection:~}
To check the magnitude of the effect from differential trigger efficiencies on $\ensuremath{\Delta m}\xspace$, we derive a new $\ensuremath{\Delta m}\xspace$ calibration ignoring the trigger weights.
\item
{\em $b$-tagging efficiency:~}
We check the possibility of a bias in our \ensuremath{\Delta m}\xspace measurement from discrepancies in the $b$-tagging efficiency between data and MC events by using {\em absolute} uncertainties on the $b$-tagging efficiencies, and account independently for possible discrepancies that are {\em differential} in $\eta$ and \ensuremath{p_T}\xspace of the jet by reweighting the $b$-tagging rate in simulated \ensuremath{t\bar{t}}\xspace MC events to that observed in data. The total magnitude of a possible effect is determined by combining in quadrature excursions of \ensuremath{\Delta m}\xspace values obtained with the modified calibrations for both absolute and differential changes.
\item
{\em Momentum scale for electrons:~}
D0\xspace calibrates the energy of electrons based on studies of the $Z\to ee$ mass for data and MC events.
We rescale the electron energies in the default signal MC sample according to the uncertainties on the electron energy calibration to check the magnitude of the effect in the context of \ensuremath{\Delta m}\xspace.
\item
{\em Momentum scale for muons:~}
The absolute momentum scale for muons is obtained from $J/\psi\to\mu\mu$ and $Z\to\mu\mu$ data. However, both linear and quadratic interpolation between these two points can be employed for the calibration.
We check the effect of each extrapolation on \ensuremath{\Delta m}\xspace by applying the respective corrections to simulated \ensuremath{t\bar{t}}\xspace MC events in the default sample, and
find a larger shift in \ensuremath{\Delta m}\xspace for the linear parametrization.
\end{enumerate}
\section{Introduction}
\input{intro}
\section{The D0\xspace\ detector\label{sec:detector}}
\input{detector}
\section{Event selection\label{sec:selection}}
\input{selection}
\section{Monte carlo simulation\label{sec:samples}}
\input{samples}
\section{General description of the method\label{sec:method}}
\input{method}
\section{Measurement of the top-antitop quark mass difference\label{sec:measurement}}
\subsection{Fit to the top-antitop quark mass difference\label{ssec:massfit}}
\input{massfit}
\subsection{Calibration of the method\label{ssec:calib}}
\input{calib}
\subsection{Results\label{ssec:results}}
\input{results}
\section{Systematic uncertainties\label{sec:syst}}
\input{syst}
\subsection{Additional checks\label{sec:cross}}
\input{cross}
\section{Combining the 2.6\,${\bf fb}^{\boldsymbol{-1}}$ and 1\,${\bf fb}^{\boldsymbol{-1}}$ analyses\label{sec:combi}}
\input{combination}
\section{Conclusion\label{sec:conclusion}}
\input{conclusion}
\setcounter{section}{0}
\section{Appendix: generation of $\boldsymbol{t\bar t}$ events with $\boldsymbol{M_t\neq M_{\boldsymbol{\bar t}}}$\label{sec:app}}
\input{appendix}
\section*{Acknowledgments}
\input{acknowledgement}
\subsection{Probability densities for events\label{ssec:prob}}
To optimize the use of kinematic and topological information, each event is assigned a probability \ensuremath{P_{\rm evt}}\xspace to observe it as a function of the assumed top and antitop quark masses: $\ensuremath{P_{\rm evt}}\xspace=\ensuremath{P_{\rm evt}}\xspace(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$. The individual probabilities for all events in a given sample are combined to form a likelihood, from which the \ensuremath{\Delta m}\xspace and \ensuremath{m_{\rm top}}\xspace parameters are extracted. Simplifying assumptions about, e.g., the detector response or the sample composition are made in the expression of the likelihood to render the problem numerically solvable. It is therefore necessary to calibrate the method using fully simulated MC events, as detailed in Sec.~\ref{ssec:calib}. Systematic uncertainties are estimated to account for possible effects of these assumptions on the extracted value of \ensuremath{\Delta m}\xspace.
Assuming that the signal and background physics processes do not interfere, the contribution to the overall probability from a single event can be formulated as
\begin{eqnarray}
\ensuremath{P_{\rm evt}}\xspace(x;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{f}\xspace) &=& \ensuremath{A}\xspace(x)\{~\ensuremath{f}\xspace\cdot\ensuremath{P_{\rm sig}}\xspace(x;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)\nonumber\\
&&+~(1-\ensuremath{f}\xspace)\cdot\ensuremath{P_{\rm bkg}}\xspace(x)~ \}\,,\label{eq:pevt}
\end{eqnarray}
where $x$ denotes the set of measured kinematic variables for the event observed in the detector, \ensuremath{f}\xspace is the fraction of signal events in the sample, $\ensuremath{A}\xspace(x)$ reflects the detector acceptance and efficiencies for a given $x$, and $\ensuremath{P_{\rm sig}}\xspace$ and $\ensuremath{P_{\rm bkg}}\xspace$ are the probabilities for the event to arise from \ensuremath{t\bar{t}}\xspace or \ensuremath{W\!+\!{\rm jets}}\xspace production, respectively. The production of $W$ bosons in association with jets is the dominant background, and we neglect all other contributions to $\ensuremath{P_{\rm bkg}}\xspace$.
Kinematically similar contributions from other background processes like MJ production are accounted for in the analysis implicitly (cf.~Sec.~\ref{sec:syst}).
Both signal and background probabilities depend on the JES, which is defined as the ratio of the calibrated energy of a jet over its uncalibrated energy. The standard calibration of jet energies accounts for the energy response of the calorimeters, the energy that crosses the cone boundary due to the transverse shower size, and the additional energy from pileup of events and from multiple $p\bar p$ interactions in a single beam crossing. Although the $\ensuremath{\Delta m}\xspace$ observable is not expected to show a strong dependence on JES by construction, we apply an additional absolute calibration to the JES using a matrix element which is a function of \ensuremath{m_{\rm top}}\xspace and JES from Refs.~\cite{Aba06,bib:me26fb}.
The potential systematic bias on \ensuremath{\Delta m}\xspace from the uncertainty on the absolute value of the JES is estimated in Sec.~\ref{sec:syst}.
To extract the masses \ensuremath{m_t}\xspace and \ensuremath{m_{\bar t}}\xspace from a set of $n$ selected events, with sets of measured kinematic quantities $x_{1},...,x_{n}$, a likelihood function is defined from the individual event probabilities according to Eq.~(\ref{eq:pevt}):
\begin{equation}
L(x_{1},...,x_{n};\, \ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{f}\xspace) = \prod_{i=1}^{n} \ensuremath{P_{\rm evt}}\xspace(x_{i};\, \ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{f}\xspace)\,.\label{eq:lhmtmtb}
\end{equation}
For every assumed $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ pair, we first determine the value of $\ensuremath{f}\xspace\equiv\ensuremath{f}\xspace^{\rm best}$ that maximizes this likelihood.
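Schematically, this maximization over \ensuremath{f}\xspace at a single $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ grid point can be sketched as below, assuming the per-event signal and background probabilities have already been evaluated (the arrays are stand-ins) and scanning \ensuremath{f}\xspace on a fine grid; the logarithm of the likelihood is used to avoid numerical underflow:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for per-event probabilities at one (m_t, m_tbar) point;
# real values come from the ME integration described in the text.
p_sig = rng.uniform(0.2, 1.0, size=500)
p_bkg = rng.uniform(0.2, 1.0, size=500)

def neg_log_l(f):
    """-ln L(f) = -sum_i ln[ f*P_sig(x_i) + (1-f)*P_bkg(x_i) ]."""
    return -np.sum(np.log(f * p_sig + (1.0 - f) * p_bkg))

f_grid = np.linspace(0.0, 1.0, 1001)
f_best = f_grid[np.argmin([neg_log_l(f) for f in f_grid])]
print(f"f_best = {f_best:.3f} at this grid point")
\end{verbatim}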
\subsection{Calculation of signal probability $\boldsymbol P_{\bf sig}$\label{ssec:psig}}
The probability density for the signal to yield a given set of partonic final state four-momenta $y$ in $p\bar p$ collisions is proportional to the differential cross section $\ensuremath{{\rm d}}\sigma$ for $\ensuremath{t\bar{t}}\xspace$ production:
\begin{eqnarray}
\ensuremath{{\rm d}}\sigma&&\!\!\!\!\!\!\!\!\!\!(p\bar p\to\ensuremath{t\bar{t}}\xspace\to y;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)
= \int\limits_{q_{1},\,q_{2}}\sum_{{}^{\rm quark}_{\rm flavors}}\ensuremath{{\rm d}} q_{1}\ensuremath{{\rm d}} q_{2} f(q_{1})f(q_{2}) \nonumber\\
&&\qquad\qquad\quad\times \frac{(2\pi)^{4}\left|\mathcal{M}(q\bar q\to \ensuremath{t\bar{t}}\xspace\to y)\right|^{2}}{2q_{1}q_{2}s}\ensuremath{{\rm d}}\Phi_{6}\,,\label{eq:dsigma}
\end{eqnarray}
where $\mathcal{M}$ denotes the matrix element for the $q\bar{q}\to t\bar{t}\to b(l\nu)\bar b(q\bar q')$ process, $s$ is the square of the center-of-mass energy, $q_{i}$ is the momentum fraction of the colliding parton $i$ (assumed to be massless), and ${\rm d}\Phi_{6}$ is an infinitesimal element of six-body phase space. The $f(q_i)$ denote the probability densities for finding a parton of given flavor and momentum fraction $q_i$ in the proton or antiproton, and the sum runs over all possible flavor configurations of the colliding quark and antiquark. In our definition of $\mathcal M$, and therefore the \ensuremath{t\bar{t}}\xspace signal probability, only quark-antiquark annihilation at LO is taken into account; in this sense, Eq.~(\ref{eq:dsigma}) does not represent the full differential cross section for $\ensuremath{t\bar{t}}\xspace$ production in $p\bar p$ collisions. Effects from gluon-gluon and quark-gluon induced \ensuremath{t\bar{t}}\xspace production are accounted for in the calibration procedure described in Sec.~\ref{ssec:calib}. We further test for an effect on \ensuremath{\Delta m}\xspace from higher-order corrections in Sec.~\ref{sec:cross}.
The differential cross section for observing a \ensuremath{t\bar{t}}\xspace event with a set of kinematic quantities $x$ measured in the detector can be written as
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\ensuremath{{\rm d}}\sigma(p\bar p\to\ensuremath{t\bar{t}}\xspace\to x;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{k_{\rm JES}}\xspace)\nonumber\\
&&\!\!\!=\ensuremath{A}\xspace(x)\!\!\int_{y}\!\!\ensuremath{{\rm d}} y\,\ensuremath{{\rm d}}\sigma(p\bar p\to\ensuremath{t\bar{t}}\xspace\to y;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)W(x,y;\ensuremath{k_{\rm JES}}\xspace)\,,
\end{eqnarray}
where finite detector resolution and offline selections are taken explicitly into account through the convolution over a transfer function $W(x,y;\ensuremath{k_{\rm JES}}\xspace)$ that defines the probability for a partonic final state $y$ to appear as $x$ in the detector given an absolute JES correction \ensuremath{k_{\rm JES}}\xspace.
With the above definitions, the differential probability to observe a $\ensuremath{t\bar{t}}\xspace$ event with a set of kinematic quantities $x$ measured in the detector is given by
\begin{eqnarray}
\!\!\!\!\!\!\!\!\ensuremath{P_{\rm sig}}\xspace(x;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{k_{\rm JES}}\xspace) &=& \frac{\ensuremath{{\rm d}}\sigma(p\bar p\to\ensuremath{t\bar{t}}\xspace\to x;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{k_{\rm JES}}\xspace)}
{\sigma_{\rm obs}(p\bar p\to\ensuremath{t\bar{t}}\xspace;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{k_{\rm JES}}\xspace)}\,,\label{eq:psig}
\end{eqnarray}
where $\sigma_{\rm obs}$ is the cross section for observing $\ensuremath{t\bar{t}}\xspace$ events in the detector for the specific ME $\mathcal M$ defined in Eq.~(\ref{eq:dsigma}):
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\sigma_{\rm obs}(p\bar p\to\ensuremath{t\bar{t}}\xspace;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{k_{\rm JES}}\xspace) \nonumber\\
&&\!\!\!\! = \int_{x,y}\!\!\!\!\!\!{\ensuremath{{\rm d}} x\,\ensuremath{{\rm d}} y~\ensuremath{{\rm d}}\sigma(p\bar p\to\ensuremath{t\bar{t}}\xspace\to y;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)}W(x,y;\ensuremath{k_{\rm JES}}\xspace)\ensuremath{A}\xspace(x)\,\nonumber\\
&&\!\!\!\! = \int_{y}\!\!{\ensuremath{{\rm d}} y~\ensuremath{{\rm d}}\sigma(p\bar p\to\ensuremath{t\bar{t}}\xspace\to y;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)}\!\!\int_{x}\!\!{\ensuremath{{\rm d}} x~W(x,y;\ensuremath{k_{\rm JES}}\xspace)\ensuremath{A}\xspace(x)}\,.\nonumber
\end{eqnarray}
The normalization factor $\sigma_{\rm obs}$ is calculated using MC integration techniques:
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\sigma_{\rm obs}(p\bar p\to\ensuremath{t\bar{t}}\xspace;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace,\ensuremath{k_{\rm JES}}\xspace)
\!\!&\simeq&\!\! \sigma_{\rm tot}(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)\!\times\!\langle\ensuremath{A}\xspace|\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace\rangle,\label{eq:accapprox}
\end{eqnarray}
where
\begin{equation}
\sigma_{\rm tot}(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace) = \int_{y}\!\!{\ensuremath{{\rm d}} y~\ensuremath{{\rm d}}\sigma(p\bar p\to\ensuremath{t\bar{t}}\xspace\to y;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)}\,, \label{eq:acc}\nonumber
\end{equation}
and
\begin{equation}
\langle\ensuremath{A}\xspace|\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace\rangle\equiv\frac1{N_{\rm gen}}\sum_{\rm acc}\omega\,. \label{eq:acccalc}\nonumber
\end{equation}
To calculate the $\langle\ensuremath{A}\xspace|\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace\rangle$ term, events are generated according to $\ensuremath{{\rm d}}\sigma(p\bar p\to\ensuremath{t\bar{t}}\xspace;\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$ using {\sc pythia} and passed through the full simulation of the detector. Here, $N_{\rm gen}$ is the total number of generated events, $\omega$ are the MC event weights that account for trigger and identification efficiencies, and the sum runs over all accepted events.
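A minimal sketch of this weighted acceptance estimate, with toy selection decisions and event weights in place of the fully simulated events, is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

n_gen    = 100_000                            # generated ttbar events
accepted = rng.random(n_gen) < 0.15           # toy selection decision
weights  = rng.uniform(0.8, 1.0, size=n_gen)  # trigger/ID weights omega

# <A | m_t, m_tbar> = (1/N_gen) * sum of omega over accepted events
acc = np.sum(weights[accepted]) / n_gen
print(f"<A> ~ {acc:.4f}")

# sigma_obs ~ sigma_tot(m_t, m_tbar) * <A>
sigma_tot = 7.0   # placeholder total cross section in pb
print(f"sigma_obs ~ {sigma_tot * acc:.3f} pb (toy numbers)")
\end{verbatim}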
The formulae used to calculate the total cross section $\sigma_{\rm tot}$ and the matrix element $\mathcal M$ are described below in Secs.~\ref{sssec:xsec} and~\ref{sssec:me}. In all other respects, the calculation of the signal probability proceeds identically to that in Refs.~\cite{Aba06,bib:me26fb}, with the following exceptions: $(i)$~CTEQ6L1 PDFs are used throughout, and
$(ii)$~the event probabilities are calculated on a grid in \ensuremath{m_t}\xspace and \ensuremath{m_{\bar t}}\xspace spaced at 1~GeV intervals along each axis. As described in Sec.~\ref{ssec:massfit}, a transformation of variables to $\ensuremath{\Delta m}\xspace$ and \ensuremath{m_{\rm top}}\xspace is performed when defining the likelihood.
\subsubsection{Calculation of the total cross section $\sigma_{\rm tot}$\label{sssec:xsec}}
Without the assumption of equal top and antitop quark masses, the total LO cross section for the $q\bar q\to\ensuremath{t\bar{t}}\xspace$ process in the center of mass frame is given by
\begin{equation}\label{eq:sigmatot}
\sigma=\frac{16\pi\alpha_{s}^{2}}{27s^{\frac{5}{2}}}|\vec p|\left[3E_{t}E_{\bar t}+|\vec p|^2+3\ensuremath{m_t}\xspace\ensuremath{m_{\bar t}}\xspace\right],
\end{equation}
where $E_{t}$ and $E_{\bar t}$ are the energies of the top and antitop quarks, and $\vec p$ is the three-momentum of the top quark. This reduces to the familiar form for $\ensuremath{m_t}\xspace=\ensuremath{m_{\bar t}}\xspace$:
\[
\sigma=\frac{4\pi\alpha_{s}^{2}}{9s}\beta\left(1-\frac{\beta^{2}}{3}\right),
\]
where $\beta=|\vec p_{t}|/E_{t}=|\vec p_{\bar t}|/E_{\bar t}$ represents the velocity of the $t$ (or $\bar{t}$) quark in the $q\bar q$ rest frame.
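As a numerical cross-check (not part of the analysis itself), the general expression in Eq.~(\ref{eq:sigmatot}) can be evaluated for $\ensuremath{m_t}\xspace=\ensuremath{m_{\bar t}}\xspace$ and compared with the familiar equal-mass form. The sketch below uses an arbitrary fixed $\alpha_s$ and standard two-body center-of-mass kinematics:
\begin{verbatim}
import math

ALPHA_S = 0.1   # arbitrary fixed coupling for this numerical check

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a*a + b*b + c*c - 2*(a*b + a*c + b*c)

def sigma_general(s, mt, mtb):
    """General LO cross section with E_t = (s + mt^2 - mtb^2)/(2 sqrt(s)),
    E_tbar analogous, and |p| from the Kallen function."""
    rs  = math.sqrt(s)
    p   = math.sqrt(kallen(s, mt*mt, mtb*mtb)) / (2.0 * rs)
    Et  = (s + mt*mt - mtb*mtb) / (2.0 * rs)
    Etb = (s + mtb*mtb - mt*mt) / (2.0 * rs)
    return (16 * math.pi * ALPHA_S**2 / (27 * s**2.5)) \
        * p * (3*Et*Etb + p*p + 3*mt*mtb)

def sigma_equal(s, m):
    """Equal-mass form (4 pi alpha_s^2 / 9 s) * beta * (1 - beta^2/3)."""
    beta = math.sqrt(1.0 - 4.0*m*m/s)
    return (4 * math.pi * ALPHA_S**2 / (9 * s)) * beta * (1 - beta**2/3)

s, m = 500.0**2, 172.5
print(sigma_general(s, m, m), sigma_equal(s, m))  # agree for mt = mtb
\end{verbatim}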
Integrating Eq.~(\ref{eq:sigmatot}) over all incoming $q\bar{q}$ momenta and using the appropriate PDF yields $\sigma_{\rm tot}(p\bar{p}\to t\bar{t};\,\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$, as defined for any values of \ensuremath{m_t}\xspace and \ensuremath{m_{\bar t}}\xspace in Eq.~(\ref{eq:acc}). Figure~\ref{fig:xsec} displays the dependence of $\sigma_{\rm tot}$ on \ensuremath{\Delta m}\xspace for a given \ensuremath{m_{\rm top}}\xspace. The corresponding average acceptance term $\langle\ensuremath{A}\xspace|\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace\rangle$, as defined in Eq.~(\ref{eq:acccalc}), is shown in Fig.~\ref{fig:acc} for the \ensuremath{e\!+\!{\rm jets}}\xspace and \ensuremath{\mu\!+\!{\rm jets}}\xspace channels.
\begin{figure}
\begin{centering}
\includegraphics[width=0.49\textwidth]{plots_prd/me/sigtot}
\par\end{centering}
\caption{\label{fig:xsec}
The total $p\bar{p}\to t\bar{t}$ production cross section $\sigma_{\rm tot}$ defined in Eq.~(\ref{eq:acc}) as a function of $\ensuremath{\Delta m}\xspace$ and $\ensuremath{m_{\rm top}}\xspace$.
Each line shows $\sigma_{\rm tot}$ as a function of $\ensuremath{\Delta m}\xspace$ for a given value of $\ensuremath{m_{\rm top}}\xspace$ displayed above the curve. The range from $152$~GeV to $188$~GeV is shown in $6$~GeV increments, the broken line corresponds to 170~GeV.
}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/acc_em_1d}
\hspace{-1.5mm}
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/acc_mu_1d}
\par\end{centering}
\caption{\label{fig:acc}
The dependence of the overall average acceptance $\langle\ensuremath{A}\xspace|\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace\rangle$ on $\ensuremath{\Delta m}\xspace$ and $\ensuremath{m_{\rm top}}\xspace$, as defined in Eq.~(\ref{eq:acccalc}), for the (a)~\ensuremath{e\!+\!{\rm jets}}\xspace and (b)~\ensuremath{\mu\!+\!{\rm jets}}\xspace signal MC samples. Each line shows $\langle\ensuremath{A}\xspace|\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace\rangle$ as a function of $\ensuremath{\Delta m}\xspace$ for a given value of $\ensuremath{m_{\rm top}}\xspace$ displayed above the curve. The range from $152$~GeV to $188$~GeV is shown in $6$~GeV increments, the broken lines correspond to 170~GeV.
}
\end{figure}
\subsubsection{Calculation of the matrix element $\mathcal M$\label{sssec:me}}
The LO matrix element for the $q\bar q\to\ensuremath{t\bar{t}}\xspace$ process we use in our analysis is
\begin{eqnarray}
\left|\mathcal{M}\right|^{2} &=& \frac{g_{s}^{4}}{9}F\bar{F}\cdot\frac{2}{s}\nonumber\\
&\times&\!\!\!\!\!\left\{(E_{t}-|\vec p_{t}|c_{qt})^{2}+(E_{\bar t}+|\vec p_{\bar t}|c_{qt})^{2}+2\ensuremath{m_t}\xspace \ensuremath{m_{\bar t}}\xspace\right\}.
\label{eq:me}
\end{eqnarray}
The form factors $F\bar F$ are identical to those given in Eqs.~(24) and~(25) of Ref.~\cite{Aba06}. For the special case of $\ensuremath{m_t}\xspace=\ensuremath{m_{\bar t}}\xspace$, the expression in Eq.~(\ref{eq:me}) reduces to
\[
\left|\mathcal{M}\right|^{2}=\frac{g_{s}^{4}}{9}F\bar{F}\cdot\left(2-\beta^{2}s_{qt}^{2}\right)
\]
which is identical to Refs.~\cite{Aba06,bib:mecalc}, where $s_{qt}$ is the sine of the angle between the incoming parton and the outgoing top quark in the $q\bar q$ rest frame.
\subsection{Calculation of the background probability $\boldsymbol P_{\bf bkg}$\label{ssec:pbkg}}
The expression for the background probability $\ensuremath{P_{\rm bkg}}\xspace$ is similar to that for $\ensuremath{P_{\rm sig}}\xspace$ in Eq.~(\ref{eq:psig}), except that the ME $\mathcal{M}_{\ensuremath{W\!+\!{\rm jets}}\xspace}$ is for \ensuremath{W\!+\!{\rm jets}}\xspace production, and all jets are assumed to be light quark or gluon jets. Clearly, $\mathcal{M}_{\ensuremath{W\!+\!{\rm jets}}\xspace}$ does not depend on $\ensuremath{m_t}\xspace$ or $\ensuremath{m_{\bar t}}\xspace$, and $\ensuremath{P_{\rm bkg}}\xspace$ is therefore independent of either. We use a LO parameterization of $\mathcal{M}$ from the {\sc vecbos}\xspace~\cite{bib:VECBOS} program. More details on the calculation of the background probability can be found in Ref.~\cite{Aba06}.
\subsection{Description of detector response\label{ssec:tf}}
The transfer function $W(x,y,\ensuremath{k_{\rm JES}}\xspace)$, which relates the set of variables $x$ characterizing the reconstructed final-state objects to their partonic quantities $y$, is crucial for the calculation of the signal probability according to Eq.~(\ref{eq:psig}), and the corresponding expression for \ensuremath{P_{\rm bkg}}\xspace. A full simulation of the detector would not be feasible for calculating event probabilities because of the overwhelming requirements for computing resources. Therefore, we parametrize the detector response and resolution through a transfer function.
In constructing the transfer function, we assume that the functions for individual final-state particles are not correlated. We therefore factorize the transfer function into contributions from each measured final-state object used in calculating \ensuremath{P_{\rm sig}}\xspace, that is the isolated lepton and four jets. The poorly measured imbalance in transverse momentum \ensuremath{p\!\!\!\!/_T}\xspace, and consequently the transverse momentum of the neutrino, is not used in defining event probabilities. We assume that the directions of $e$, $\mu$, and jets in $(\eta,\phi)$ space are well-measured, and therefore define the transfer functions for these quantities as $\delta$ functions: $\delta^2(\eta,\phi)\equiv\delta(\eta_y-\eta_x)\delta(\phi_y-\phi_x)$. This reduces the number of integrations over the 6-particle phase space $\ensuremath{{\rm d}}\Phi_6$ by $5\times2=10$ dimensions. The magnitudes of particle momenta $|\vec p|$ display significant variations in resolution for leptons and jets and are therefore parameterized by their corresponding resolutions.
There is an inherent ambiguity in assigning jets reconstructed in the detector to specific partons from $\ensuremath{t\bar{t}}\xspace$ decay. Consequently, all 24 permutations of jet-quark assignments are considered in the analysis. The inclusion of $b$-tagging information provides improved identification of the correct permutation. This additional information enters the probability calculation through a weight $w_i$ on a given permutation $i$ of jet-parton assignments.
The $w_i$ are larger for those permutations that assign the $b$-tagged jets to $b$ quarks and untagged jets to light quarks. The sum of weights is normalized to unity: $\sum_{i=1}^{24}\!w_i = 1$.
Based on the above, we define the transfer function as
\begin{eqnarray}
\label{eq:tfdefinition-btag}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
W(x,y;\,\ensuremath{k_{\rm JES}}\xspace) = W_{\ell}(E_x,E_y)\delta_\ell^2(\eta,\phi) \nonumber\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\times \sum_{i=1}^{24}\!w_i
\left\{
\prod_{j=1}^{4}\,\delta_{ij}^2(\eta,\phi) W_{\rm jet}(E^i_x,E^j_y;\ensuremath{k_{\rm JES}}\xspace)
\!\!\right\}\!,
\end{eqnarray}
where $\ell$ denotes the lepton flavor, with a term $W_e$ describing the energy resolution for electrons and $W_\mu$ the resolution in the transverse momentum for muons. Similarly, $W_{\rm jet}$ describes the energy resolution for jets. The sum in $i$ is taken over the 24 possible permutations of assigning jets to quarks in a given event. More details on $W_{\ell}$ and $W_{\rm jet}$ can be found in Ref.~\cite{bib:me26fb}.
The weight $w_i$ for a given permutation $i$ is defined by a product of individual weights $w_i^j$ for each jet $j$.
For $b$-tag\-ged jets, $w_i^j$ is equal to the per-jet tagging efficiency $\epsilon_{\rm tag}(\alpha_k;\ \ensuremath{E_T^j}\xspace,\,\eta^j)$, where $\alpha_k$ labels the three possible parton-flavor assignments of the jet: $(i)$~$b$~quark, $(ii)$~$c$~quark, and $(iii)$~light ($u,d,s$) quark or gluon. For untagged jets, the $w_i^j$ factors are equal to $1-\epsilon_{\rm tag}(\alpha_k;\ \ensuremath{E_T^j}\xspace,\,\eta^j)$.
Because the contributions to \ensuremath{W\!+\!{\rm jets}}\xspace are parameterized by $\mathcal{M}_{\ensuremath{W\!+\!{\rm jets}}\xspace}$ without regard to heavy-flavor content, the weights $w_i$ for each permutation in the background probability are all set equal.
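To illustrate how the normalized weights $w_i$ of Eq.~(\ref{eq:tfdefinition-btag}) are assembled, the following Python sketch enumerates the 24 jet--parton assignments and forms each $w_i$ as a product of per-jet factors. The function \texttt{eps\_tag} is only a placeholder for the parameterized efficiency $\epsilon_{\rm tag}(\alpha_k;\,E_T^j,\eta^j)$; its values are illustrative, not the measured ones.
\begin{verbatim}
import itertools

def eps_tag(flavor, et, eta):
    # Placeholder for epsilon_tag(alpha_k; E_T, eta);
    # illustrative values only.
    return {"b": 0.55, "c": 0.12, "light": 0.01}[flavor]

def permutation_weights(jets, partons):
    """jets: dicts with keys 'tagged', 'et', 'eta';
    partons: four flavors, e.g. ['b','b','light','light'].
    Returns all 24 assignments and normalized weights w_i."""
    perms = list(itertools.permutations(range(4)))  # 4! = 24
    weights = []
    for perm in perms:
        w = 1.0
        for j, k in enumerate(perm):
            e = eps_tag(partons[k], jets[j]["et"], jets[j]["eta"])
            # tagged jets weigh by eps_tag, untagged by 1 - eps_tag
            w *= e if jets[j]["tagged"] else 1.0 - e
        weights.append(w)
    total = sum(weights)
    return perms, [w / total for w in weights]  # sum of w_i = 1

jets = [{"tagged": True,  "et": 60.0, "eta":  0.3},
        {"tagged": False, "et": 45.0, "eta":  1.1},
        {"tagged": True,  "et": 80.0, "eta": -0.5},
        {"tagged": False, "et": 30.0, "eta":  2.0}]
perms, w = permutation_weights(jets, ["b", "b", "light", "light"])
print(max(w), min(w))  # assignments matching tags to b quarks dominate
\end{verbatim}
As expected, permutations that assign the $b$-tagged jets to the two $b$ quarks receive the largest weights.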
\begin{comment}
\subsubsection{Parameterization of the jet energy resolution\label{ssec:tf_jet}}
The transfer function for jets, $W_{\rm jet}(E_{x},E_{y};\ensuremath{k_{\rm JES}}\xspace)$, yields the probability for measuring the jet energy $E_{x}$ in the detector if the true quark energy is $E_{y}$. For the case $\ensuremath{k_{\rm JES}}\xspace=1$, it is parameterized as a double Gaussian:
\begin{eqnarray}
W_{\rm jet}&&\!\!\!\!\!\!\!\!\!\!(E_{x},E_{y};\ensuremath{k_{\rm JES}}\xspace\!\!=\!\!1)
= \frac{1}{\sqrt{2\pi}(\kappa_{2}+\kappa_{3}\kappa_{5})} \nonumber \\
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \times \left\{ e^{-{(E_{x}-E_{y}-\kappa_{1})^{2}}/{2\kappa_{2}^{2}}}
+ \kappa_{3}\cdot e^{-{(E_{x}-E_{y}-\kappa_{4})^{2}}/{2\kappa_{5}^{2}}} \right\}, \label{eq:tf_jet}
\end{eqnarray}
where the parameters $\kappa_{i}$ for $i=1,2,4,5$ are in units of energy and are parameterized linearly as
\[
\kappa_{i}=a_{i}+E_{y}\cdot b_{i}\ ,
\]
while $\kappa_3$ is a dimensionless constant. The parameters $a_{i}$ and $b_{i}$ are determined from fully simulated \ensuremath{t\bar{t}}\xspace MC events after applying all standard jet energy corrections. The parton and jet energies are used in an unbinned likelihood fit that minimizes the product of the $W_{\rm jet}$ contributions in Eq.~(\ref{eq:tf_jet}) with respect to the $a_{i}$ and $b_{i}$, where $a_3\equiv0$. Four sets of parameters are derived independently for four $\eta$ regions: $|\eta|\leq0.5$, $0.5<|\eta|\leq1.0$, $1.0<|\eta|\leq1.5$,
and $1.5<|\eta|\leq2.5$. This is done separately for three categories: $(i)$~light
quarks ($u$, $d$, $s$, $c$), $(ii)$~$b$ quarks with a muon within the cone of the associated jet, and $(iii)$~all other $b$ quarks, thereby specifying 108 parameters. Their values are given in Tables~\ref{tab:params1} and~\ref{tab:params2}. As an example, the transfer functions for light quark jets are shown in Fig.~\ref{fig:tf_jet} as a function of $E_{x}$ for different values of $E_{y}$.
Figure~\ref{fig:MEtf4} shows the 2-jet and 3-jet invariant masses in simulated \ensuremath{t\bar{t}}\xspace MC samples, where hatched and open histograms are obtained from full detector simulation and by applying the transfer functions to generated partons, respectively. The agreement between the hatched and open histograms indicates that the extracted jet transfer functions provide a good description of detector response.
\begin{figure}
\centering
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/plot_tf_eta0}
\hspace{-1.5mm}
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/plot_tf_eta1}
\\
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/plot_tf_eta2}
\hspace{-1.5mm}
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/plot_tf_eta3}
\caption{\label{fig:tf_jet}
Transfer functions for light quark jets for different parton energies, for the reference point $\ensuremath{k_{\rm JES}}\xspace=1$, for (a)~$|\eta|\leq0.5$, (b)~$0.5<|\eta|\leq1.0$, (c)~$1.0<|\eta|\leq1.5$, and (d)~$1.5<|\eta|\leq2.5$.
}
\end{figure}
\begin{table}[htbp]
\caption{\label{tab:params1}
Parameters ($a_i$ in GeV) for the light quark transfer function.}
\begin{center}
\begin{tabular}{lr@{$\times$}lr@{$\times$}lr@{$\times$}lr@{$\times$}l}
\hline
\hline
& \multicolumn{8}{c}{Light quark jets} \\
& \multicolumn{2}{c}{$|\eta|\leq0.5$} & \multicolumn{2}{c}{$0.5<|\eta|\leq1.0$} & \multicolumn{2}{c}{$1.0<|\eta|\leq1.5$} & \multicolumn{2}{c}{$1.5<|\eta|\leq2.5$} \\
\hline\vspace{-3mm}\\
$a_1$ &$-2.74$ & $10^{ 0}$ &$-8.02$ & $10^{-1}$ &$ 1.69$ & $10^{-1}$ &$ 1.52$ & $10^{ 1}$ \\
$b_1$ &$ 1.67$ & $10^{-2}$ &$-3.59$ & $10^{-3}$ &$ 1.32$ & $10^{ 1}$ &$-2.17$ & $10^{-1}$ \\
$a_2$ &$ 5.44$ & $10^{ 0}$ &$ 5.40$ & $10^{ 0}$ &$-3.26$ & $10^{-1}$ &$ 3.34$ & $10^{ 0}$ \\
$b_2$ &$ 6.29$ & $10^{-2}$ &$ 8.46$ & $10^{-2}$ &$ 6.97$ & $10^{ 0}$ &$ 1.45$ & $10^{-1}$ \\
$b_3$ &$ 4.30$ & $10^{-4}$ &$ 4.80$ & $10^{-4}$ &$ 2.52$ & $10^{-2}$ &$ 4.06$ & $10^{-3}$ \\
$a_4$ &$ 1.54$ & $10^{ 1}$ &$ 2.00$ & $10^{ 1}$ &$ 4.71$ & $10^{ 0}$ &$ 1.72$ & $10^{ 1}$ \\
$b_4$ &$-2.12$ & $10^{-1}$ &$-2.38$ & $10^{-1}$ &$-8.37$ & $10^{-3}$ &$-3.69$ & $10^{-2}$ \\
$a_5$ &$ 1.77$ & $10^{ 1}$ &$-2.38$ & $10^{-1}$ &$ 1.03$ & $10^{ 1}$ &$ 1.75$ & $10^{ 1}$ \\
$b_5$ &$ 1.96$ & $10^{-1}$ &$ 1.89$ & $10^{ 1}$ &$ 6.42$ & $10^{-2}$ &$ 5.34$ & $10^{-2}$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table*}[htbp]
\caption{\label{tab:params2}
Parameters for the $b$-quark transfer functions for jets without and with a muon within the jet cone ($a_i$ in GeV).
}
\begin{center}
\begin{tabular}{lr@{$\times$}lr@{$\times$}lr@{$\times$}lr@{$\times$}l|lr@{$\times$}lr@{$\times$}lr@{$\times$}lr@{$\times$}l}
\hline
\hline
& \multicolumn{8}{c|}{$b$ quark jets without a muon within the cone} & & \multicolumn{8}{c}{$b$ quark jets with a muon within the cone} \\
Par. & \multicolumn{2}{c}{$|\eta|\leq0.5$} & \multicolumn{2}{c}{$0.5<|\eta|\leq1.0$} & \multicolumn{2}{c}{$1.0<|\eta|\leq1.5$} & \multicolumn{2}{c|}{$1.5<|\eta|\leq2.5$} &
~Par. & \multicolumn{2}{c}{$|\eta|\leq0.5$} & \multicolumn{2}{c}{$0.5<|\eta|\leq1.0$} & \multicolumn{2}{c}{$1.0<|\eta|\leq1.5$} & \multicolumn{2}{c}{$1.5<|\eta|\leq2.5$} \\
\hline\vspace{-3mm}\\
$a_1$ &$ 3.30$ & $10^{ 0}$ &$ 5.38$ & $10^{ 0}$ &$ 2.85$ & $10^{ 0}$ &$ 1.38$ & $10^{ 1}$~ & ~$a_1$ &$ 6.37$ & $10^{ 0}$ &$ 6.31$ & $10^{ 0}$ &$ 8.00$ & $10^{ 0}$ &$ 1.65$ & $10^{ 1}$ \\
$b_1$ &$-2.13$ & $10^{-1}$ &$-2.26$ & $10^{-1}$ &$-1.85$ & $10^{-1}$ &$-2.90$ & $10^{-1}$ & ~$b_1$ &$-1.46$ & $10^{-1}$ &$-1.40$ & $10^{-1}$ &$-1.39$ & $10^{-1}$ &$-1.91$ & $10^{-1}$ \\
$a_2$ &$ 5.02$ & $10^{ 0}$ &$ 5.08$ & $10^{ 0}$ &$ 9.78$ & $10^{-1}$ &$ 3.86$ & $10^{ 0}$ & ~$a_2$ &$ 2.53$ & $10^{ 0}$ &$ 3.89$ & $10^{ 0}$ &$ 8.54$ & $10^{ 0}$ &$ 4.88$ & $10^{ 0}$ \\
$b_2$ &$ 1.73$ & $10^{-1}$ &$ 1.77$ & $10^{-1}$ &$ 1.83$ & $10^{-1}$ &$ 1.36$ & $10^{-1}$ & ~$b_2$ &$ 1.43$ & $10^{-1}$ &$ 1.37$ & $10^{-1}$ &$ 1.28$ & $10^{-1}$ &$ 1.43$ & $10^{-1}$ \\
$b_3$ &$ 3.48$ & $10^{-2}$ &$ 2.49$ & $10^{-2}$ &$ 6.69$ & $10^{-3}$ &$ 7.52$ & $10^{-3}$ & ~$b_3$ &$ 3.90$ & $10^{-4}$ &$ 3.40$ & $10^{-4}$ &$ 1.90$ & $10^{-4}$ &$ 1.20$ & $10^{-4}$ \\
$a_4$ &$-6.68$ & $10^{ 0}$ &$-6.56$ & $10^{ 0}$ &$ 8.54$ & $10^{-1}$ &$ 5.59$ & $10^{ 0}$ & ~$a_4$ &$ 2.80$ & $10^{ 1}$ &$ 1.52$ & $10^{ 1}$ &$ 7.89$ & $10^{ 1}$ &$ 4.73$ & $10^{ 1}$ \\
$b_4$ &$ 2.38$ & $10^{-2}$ &$ 1.91$ & $10^{-2}$ &$-2.83$ & $10^{-2}$ &$-4.54$ & $10^{-2}$ & ~$b_4$ &$-3.87$ & $10^{-1}$ &$-9.74$ & $10^{-2}$ &$ 2.22$ & $10^{-1}$ &$ 5.21$ & $10^{-2}$ \\
$a_5$ &$ 5.06$ & $10^{ 0}$ &$ 4.36$ & $10^{ 0}$ &$ 1.38$ & $10^{ 1}$ &$ 1.50$ & $10^{ 1}$ & ~$a_5$ &$ 1.80$ & $10^{ 1}$ &$ 2.32$ & $10^{ 1}$ &$ 2.80$ & $10^{ 1}$ &$ 2.83$ & $10^{ 1}$ \\
$b_5$ &$ 4.71$ & $10^{-2}$ &$ 6.99$ & $10^{-2}$ &$ 6.04$ & $10^{-2}$ &$ 7.60$ & $10^{-2}$ & ~$b_5$ &$ 1.30$ & $10^{-1}$ &$ 2.91$ & $10^{-2}$ &$-2.87$ & $10^{-1}$ &$-8.55$ & $10^{-2}$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
For $\ensuremath{k_{\rm JES}}\xspace\neq1$, the jet transfer function is modified as follows:
\[
W_{{\rm jet}}(E_{x},E_{y};\ensuremath{k_{\rm JES}}\xspace)=\frac{W_{{\rm jet}}(\frac{E_{x}}{\ensuremath{k_{\rm JES}}\xspace},E_{y};1)}{\ensuremath{k_{\rm JES}}\xspace}\ .\label{eq:MEtfjes}
\]
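As a concrete illustration, the double Gaussian of Eq.~(\ref{eq:tf_jet}) and the \ensuremath{k_{\rm JES}}\xspace scaling above can be coded compactly. The Python sketch below uses the light-quark parameters for $|\eta|\leq0.5$ from Table~\ref{tab:params1}, taking $a_3\equiv0$ so that $\kappa_3=b_3 E_y$; it is meant only as an illustration, not as the analysis implementation.
\begin{verbatim}
import math

# Light-quark parameters for |eta| <= 0.5 (a_i, b_i pairs);
# kappa_i = a_i + b_i * E_y for i = 1,2,4,5, kappa_3 = b_3 * E_y.
PARS = {1: (-2.74, 1.67e-2), 2: (5.44, 6.29e-2),
        4: (15.4, -2.12e-1), 5: (17.7, 1.96e-1)}
B3 = 4.30e-4

def w_jet_unit(e_x, e_y):
    """Double-Gaussian transfer function at k_JES = 1."""
    k = {i: a + e_y * b for i, (a, b) in PARS.items()}
    k3 = B3 * e_y
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * (k[2] + k3 * k[5]))
    g1 = math.exp(-(e_x - e_y - k[1]) ** 2 / (2.0 * k[2] ** 2))
    g2 = k3 * math.exp(-(e_x - e_y - k[4]) ** 2 / (2.0 * k[5] ** 2))
    return norm * (g1 + g2)

def w_jet(e_x, e_y, k_jes=1.0):
    """W(E_x, E_y; k_JES) = W(E_x / k_JES, E_y; 1) / k_JES."""
    return w_jet_unit(e_x / k_jes, e_y) / k_jes

print(w_jet(95.0, 100.0), w_jet(95.0, 100.0, k_jes=1.018))
\end{verbatim}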
\begin{figure}
\centering
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/tfcheckmass_2j}
\hspace{-2mm}
\includegraphics[width=0.24\textwidth,clip=]{plots_prd/me/tfcheckmass_3j}
\caption{\label{fig:MEtf4}
The (a)~2-jet and (b)~3-jet invariant masses. The hatched histograms use fully-simulated jet-parton matched events that pass selection criteria. The open histograms use the same generated partons, but have their energies smeared with the jet transfer functions and are then required to pass the same selections as used to produce the hatched histograms.
}
\end{figure}
\subsubsection{Parameterization of the electron energy resolution\label{ssec:tf_em}}
The electron energy resolution is parameterized by the transfer function
\begin{equation}
W_{e}\left(E_{x},E_{y}\right) = \frac{1}{\sqrt{2\pi}\rho_e}\cdot \exp\left[-\frac{1}{2}\left(\frac{E_{x}- E_{y}'}{\rho_e}\right)^{2}\right]\,,\label{eq:MEtfem}
\end{equation}
where $E_{x}$ is the reconstructed energy of the electron, $E_y$ its partonic energy, and other variables are defined as:
\begin{eqnarray*}
E_y' & = & a_e\cdot E_y+b_e\quad{\rm with}\quad a_e = 1.0002,~b_e=0.324~\ensuremath{\textnormal{GeV}}\xspace,\\
\rho_e & = & \sqrt{(0.028\cdot E_{y}')^{2}+(S\cdot E_{y}')^{2}+(0.4~\ensuremath{\textnormal{GeV}}\xspace)^{2}},\\
S & = & \left( \frac{0.164~\ensuremath{\textnormal{GeV}}\xspace^{\frac12}}{\sqrt{E_{y}'}}+\,\frac{0.122~\ensuremath{\textnormal{GeV}}\xspace}{E_{y}'} \right)\times\exp \left( \frac{\kappa_e}{\sin\theta_{e}} - \kappa_e \right),\\
\kappa_e & = & 1.35193-\frac{2.09564~\ensuremath{\textnormal{GeV}}\xspace}{E_{y}'}-\frac{6.98578~\ensuremath{\textnormal{GeV}}\xspace^2}{{E_{y}'}^{2}}.
\end{eqnarray*}
These parameters are derived from a dedicated MC simulation of electromagnetic calorimetry used in the measurement of the $W$ boson mass~\cite{bib:wmass}.
\subsubsection{Parameterization of the muon momentum resolution\label{ssec:tf_mu}}
The resolution of the central tracking chamber is parametrized as a function of pseudorapidity in terms of the track curvature $\ensuremath{p_T}\xspace^{-1}$ signed by its electric charge~$q$. The muon transfer function can be written as
\begin{eqnarray}
W_{\mu}&&\!\!\!\!\!\!\!\!\!\!\left[(q/\ensuremath{p_T}\xspace)_{x},(q/\ensuremath{p_T}\xspace)_{y}\right] \nonumber\\
&&\!\!\!\!\!\!\!\!\!\!= \frac{1}{\sqrt{2\pi}\rho_\mu}\exp\left\{
-\frac{1}{2}\left[\frac{\left(q/\ensuremath{p_T}\xspace\right)_x-\left(q/\ensuremath{p_T}\xspace\right)_y}{\rho_\mu}\right]^{2}
\right\},
\label{eq:MEtfmu}
\end{eqnarray}
where $x$ generically denotes reconstructed quantities, $y$ denotes partonic quantities, and $\rho_\mu$ is given in inverse momentum units. The resolution is obtained from muon tracks in simulated events, and the parameters in Eq.~(\ref{eq:MEtfmu}) are defined as
\begin{equation}
\rho_\mu =
\left\{\begin{array}{cc}
\lambda & {\rm for}\ |\eta|\le\ensuremath{\tilde\eta}\xspace\\
\sqrt{\lambda^{2}+\left\{\kappa_\mu\cdot\left(|\eta|-\ensuremath{\tilde\eta}\xspace\right)\right\}^{2}} & {\rm for}\ |\eta|>\ensuremath{\tilde\eta}\xspace
\end{array}\right.
\label{eq:MEsigmamu}
\end{equation}
where $\ensuremath{\tilde\eta}\xspace = 1.4$ is a constant, and the parameters $\lambda$ and $\kappa_\mu$ are linear functions of $1/\ensuremath{p_T}\xspace$:
\begin{eqnarray}
\lambda & = & \lambda_0+\lambda_1\cdot1/\ensuremath{p_T}\xspace\nonumber \\
\kappa_\mu & = & a_\mu+b_\mu\cdot1/\ensuremath{p_T}\xspace\label{eq:mu_tf_par}\,.
\end{eqnarray}
The fitted values of the coefficients are given in Table~\ref{tab:mutfpar} for muon tracks with and without hits in the SMT.
\begin{table}
\caption{Parameters for muon transfer functions, as defined in Eqs.~(\ref{eq:MEsigmamu}) and~(\ref{eq:mu_tf_par}), for muon tracks with and without hits in the SMT.}
\label{tab:mutfpar}
\centering
\begin{tabular}{lcc}
\hline
\hline
\multirow{2}{*}{Parameter~}
& With hits & No hits \\
& in the SMT & in the SMT \\
\hline\vspace{-3mm}\\
$\lambda_0~(\ensuremath{\textnormal{GeV}}\xspace^{-1})$ &~$2.082\times10^{-3}$~&~$3.620\times10^{-3}$ \\
$\lambda_1$ & $1.125\times10^{-2}$ & $1.388\times10^{-2}$ \\
$a_\mu~(\ensuremath{\textnormal{GeV}}\xspace^{-1})$ & $7.668\times10^{-3}$ & $2.070\times10^{-2}$ \\
$b_\mu$ & $7.851\times10^{-2}$ & $7.042\times10^{-2}$ \\
\hline
\hline
\end{tabular}
\end{table}
\end{comment}
\subsection{Monte Carlo samples for signal\label{ssec:samplessig}}
Simulated \ensuremath{t\bar{t}}\xspace events with different \ensuremath{m_t}\xspace and \ensuremath{m_{\bar t}}\xspace are required to calibrate the \ensuremath{\Delta m}\xspace measurement. We use the {\sc pythia}\xspace generator~\cite{bib:pythia}, version 6.413, to model the \ensuremath{t\bar{t}}\xspace signal. This generator models the Breit-Wigner shape of the invariant mass distribution of $t$ and $\bar t$ quarks, whose correct description is important for the \ensuremath{\Delta m}\xspace measurement.
In the standard {\sc pythia}\xspace, it is not possible to generate \ensuremath{t\bar{t}}\xspace events with different masses $\ensuremath{m_t}\xspace$ and $\ensuremath{m_{\bar t}}\xspace$. Therefore, we modify the {\sc pythia}\xspace program to provide signal events with $\ensuremath{m_t}\xspace\neq\ensuremath{m_{\bar t}}\xspace$. In applying these modifications, we adjust the description of all quantities that depend on the two masses, for example, the respective decay widths $\Gamma_t$ and $\Gamma_{\bar t}$. Technical details of this implementation can be found in Appendix~\ref{sec:app}.
We generate \ensuremath{t\bar{t}}\xspace events using the CTEQ6L1 parton distribution function set (PDF)~\cite{bib:cteq} at the momentum transfer scale $Q^2 = (\ensuremath{p_T}\xspace^{\rm scat})^2 + \frac{1}{2}\left\{ P_{1}^2 + P_{2}^2 + \ensuremath{m_t}\xspace^{2} + \ensuremath{m_{\bar t}}\xspace^{2} \right\}$, where $\ensuremath{p_T}\xspace^{\rm scat}$ is the transverse momentum for the hard scattering process, and $P_{i}$ is the four-momentum of the incoming parton $i$. For $\ensuremath{m_t}\xspace=\ensuremath{m_{\bar t}}\xspace$, the expression used for $Q^2$ is identical to that in the standard {\sc pythia}\xspace. All other steps in the event simulation process aside from the generation of the hard-scattering process, e.g., the modeling of the detector response, are unchanged from the standard {\sc pythia}\xspace.
We check our modified {\sc pythia}\xspace version against the original by comparing large samples of simulated $\ensuremath{t\bar{t}}\xspace$ events for $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)=(170~\ensuremath{\textnormal{GeV}}\xspace,\,170~\ensuremath{\textnormal{GeV}}\xspace)$, at both the parton and reconstruction levels, and find full consistency.
The $\ensuremath{t\bar{t}}\xspace$ samples are generated at fourteen combinations of top and antitop quark masses $(\ensuremath{m_t}\xspace,\ensuremath{m_{\bar t}}\xspace)$, which form a grid spaced at 5~\ensuremath{\textnormal{GeV}}\xspace intervals between (165~\ensuremath{\textnormal{GeV}}\xspace,\,165~\ensuremath{\textnormal{GeV}}\xspace) and (180~\ensuremath{\textnormal{GeV}}\xspace,\,180~\ensuremath{\textnormal{GeV}}\xspace), excluding the two extreme points at (165~\ensuremath{\textnormal{GeV}}\xspace,\,180~\ensuremath{\textnormal{GeV}}\xspace) and (180~\ensuremath{\textnormal{GeV}}\xspace,\,165~\ensuremath{\textnormal{GeV}}\xspace).
The four points with $\ensuremath{m_t}\xspace=\ensuremath{m_{\bar t}}\xspace$ are generated with the standard {\sc pythia}\xspace, whereas all others use our modified version of the generator.
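The grid can be checked with a trivial enumeration (pure bookkeeping, no assumptions beyond the text):
\begin{verbatim}
masses = [165, 170, 175, 180]  # GeV
grid = [(mt, mtb) for mt in masses for mtb in masses
        if (mt, mtb) not in [(165, 180), (180, 165)]]
print(len(grid))  # 14 points; the 4 with mt == mtb use standard PYTHIA
\end{verbatim}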
\subsection{Monte Carlo and other simulations of background}
The dominant background to \ensuremath{t\bar{t}}\xspace decays into $\ensuremath{\ell\!+\!{\rm jets}}\xspace$ final states is from the electroweak production of a~$W$~boson in association with jets from gluon radiation. We simulate the hard scattering part of this process
using the {\sc alpgen}\xspace MC program~\cite{bib:alpgen}, which is capable of simulating up to five additional particles in the final state at leading order (LO) in $\alpha_s$.
{\sc alpgen}\xspace is coupled to {\sc pythia}\xspace, which is used to model the hadronization of the partons and the evolution of the shower. The MLM matching scheme is applied to avoid double-counting of partonic event configurations~\cite{bib:matching}. The \ensuremath{W\!+\!{\rm jets}}\xspace contribution is divided into two categories according to parton flavor:
$(i)$~$W\!+\!b\bar b\!+\!\ensuremath{\rm jets}\xspace$ and $W\!\!+\!c\bar c\!+\!\ensuremath{\rm jets}\xspace$, and $(ii)$~all other contributions, where ``jets'' generically denotes jets from $u,d,s$ quarks and gluons. The second category also includes the $W\!\!+\!c\!+\!\ensuremath{\rm jets}\xspace$ final states.
While the individual processes are generated with {\sc alpgen}\xspace, the relative contributions of the two categories are determined using next-to-LO (NLO) calculations, with next-to-leading logarithmic (NLL) corrections based on the {\sc mcfm} MC generator~\cite{bib:mcfm}. These NLO corrections increase the LO cross section of category $(i)$ by a factor of $k=1.47\pm0.22$, while $k=1$ is used for category $(ii)$. The resulting combined $\ensuremath{W\!+\!{\rm jets}}\xspace$ background contribution is then determined from a fit to data and predictions for other signal and background contributions, as described in Sec.~\ref{sec:method}. Thus, the NLO $k$-factors only change the relative balance between $(i)$ and $(ii)$.
Additional background contributions arise from $WW$, $WZ$, $ZZ$, single top quark electroweak production, $Z\rightarrow\tau\tau$, and $Z\rightarrow ee$ ($Z\rightarrow \mu\mu$) production in the \ensuremath{e\!+\!{\rm jets}}\xspace (\ensuremath{\mu\!+\!{\rm jets}}\xspace) channel. The predictions for these backgrounds are taken from MC simulations, and, with the exception of single top quark electroweak production, their production cross sections are normalized to NLO$+$NLL calculations with {\sc mcfm}. Diboson processes are simulated with {\sc pythia}\xspace. The hard-scattering part of single top quark production is simulated with {\sc CompHEP}~\cite{bib:comphep}, while {\sc alpgen}\xspace is used for $Z\!+\!\ensuremath{\rm jets}\xspace$ boson production. For both backgrounds, {\sc pythia}\xspace is employed to model hadronization and shower evolution. The CTEQ6L1 PDFs and the D0~Tune~A underlying event model~\cite{bib:d0tunea} are used in the generation of all MC samples.
Events from MJ production can pass our selection criteria, which typically happens when a jet mimics an electron, or when a muon that arises from a semileptonic decay of a $b$ or $c$ quark appears to be isolated. The kinematic distributions of the MJ background are modeled using events in data that fail only the electron identification (muon isolation) criteria, but pass loosened versions of these criteria defined in Ref.~\cite{bib:matrixmethod}. The absolute contribution of this background to each of the channels is estimated using the method described in Ref.~\cite{bib:matrixmethod}. This method uses the absolute numbers of events with prompt leptons $N^{\ensuremath{t\bar{t}}\xspace+W}_{\rm loose}$ and events from MJ production $N^{\rm MJ}_{\rm loose}$ in the sample with loosened lepton identification criteria, and relates them to the absolute contributions to the sample with standard lepton identification criteria via $N=\varepsilon^{\ensuremath{t\bar{t}}\xspace+W}N^{\ensuremath{t\bar{t}}\xspace+W}_{\rm loose}+\varepsilon^{\rm MJ}N^{\rm MJ}_{\rm loose}$. Here, $\varepsilon^{\ensuremath{t\bar{t}}\xspace+W}$ and $\varepsilon^{\rm MJ}$ represent the efficiencies for events that pass the loosened lepton identification criteria to also pass the standard identification criteria, and are measured in control regions dominated by prompt leptons and MJ events, respectively.
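Together with the decomposition $N_{\rm loose}=N^{\ensuremath{t\bar{t}}\xspace+W}_{\rm loose}+N^{\rm MJ}_{\rm loose}$ of the loose sample, this relation defines a $2\times2$ linear system that can be inverted directly. The Python sketch below illustrates the inversion; the efficiencies and event counts are placeholders, not the measured values.
\begin{verbatim}
def matrix_method(n_loose, n_tight, eff_sig, eff_mj):
    """Solve  n_loose = n_sig + n_mj,
              n_tight = eff_sig * n_sig + eff_mj * n_mj
    for the loose-sample components; return the prompt-lepton and
    MJ yields in the standard (tight) sample."""
    det = eff_sig - eff_mj
    n_sig = (n_tight - eff_mj * n_loose) / det
    n_mj = (eff_sig * n_loose - n_tight) / det
    return eff_sig * n_sig, eff_mj * n_mj

# Illustrative inputs: 500 loose events, 300 of which pass the
# standard criteria; eff_sig = 0.85, eff_mj = 0.15 (placeholders).
print(matrix_method(500.0, 300.0, 0.85, 0.15))
\end{verbatim}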
\subsection{Event yields\label{ssec:yields}}
We split the selected \ensuremath{\ell\!+\!{\rm jets}}\xspace\ events into subsamples according to lepton flavor ($e$ or $\mu$), jet multiplicity, and the number of $b$-tagged jets in the event to verify an adequate description of the data with our signal and background model. In general, we observe good agreement between data and simulations, and systematic uncertainties on the final result explicitly account for the moderate agreement observed in some kinematic distributions~(cf.~Sec.~\ref{sec:syst}).
The numbers of events surviving the final stage of selection with at least one $b$-tag are summarized in Table~\ref{tab:yields}.
Here, for ease of comparison, the contributions from $\ensuremath{t\bar{t}}\xspace$ events are scaled to $7.45^{+0.5}_{-0.7}$\,pb, the NLO cross section including NNLO approximations~\cite{bib:ttxsec}. The total $\ensuremath{W\!+\!{\rm jets}}\xspace$ cross section is adjusted to bring the absolute yield from our signal and background model into agreement with the number of events selected in data before applying $b$-jet identification criteria. The distributions in the transverse mass of the $W$ boson, $M_T^W$~\cite{bib:mwtrans}, and in $\ensuremath{p\!\!\!\!/_T}\xspace$ are shown in Fig.~\ref{fig:yield} for data with at least one $b$-tag, together with the predictions from our signal and background models.
\begin{table}[ht]
\caption{\label{tab:yields}
Numbers of events selected in data, compared to yield predictions for individual processes using simulations, in the \ensuremath{e\!+\!{\rm jets}}\xspace and \ensuremath{\mu\!+\!{\rm jets}}\xspace channels with exactly 4 jets and at least one $b$-tagged jet, split according to \mbox{$b$-tag} multiplicity. Uncertainties are purely statistical. See text for details.
}
\begin{center}
\begin{tabular}{l|r@{ $\pm$ }r r@{ $\pm$ }r }
\hline
\hline
& \multicolumn{2}{c}{1 $b$-tag} &
\multicolumn{2}{c}{$>$1 $b$-tags } \\
\hline
\ensuremath{e\!+\!{\rm jets}}\xspace\\
\quad $t\bar{t}$ &~139.2 & 3.0&~~~~91.8 & 2.5 \\
\quad \ensuremath{W\!+\!{\rm jets}}\xspace & 39.9 & 1.2 & 4.7 & 0.3 \\
\quad MJ & 23.5 & 2.1 & 5.7 & 1.0 \\
\quad \ensuremath{Z\!+\!{\rm jets}}\xspace & 7.6 & 0.7 & 0.9 & 0.1 \\
\quad Other & 6.6 & 0.4 & 1.9 & 0.1 \\
\quad Total & 216.7 & 3.9 & 105.1 & 2.7 \\
\quad Observed~ & \multicolumn{2}{c}{223} & \multicolumn{2}{c}{89} \\
\hline
\ensuremath{\mu\!+\!{\rm jets}}\xspace\\
\quad $t\bar{t}$ & 105.9 & 2.4 & 70.9 & 2.0 \\
\quad \ensuremath{W\!+\!{\rm jets}}\xspace & 59.9 & 1.8 & 7.2 & 0.5 \\
\quad MJ & 5.2 & 0.9 & 2.0 & 0.6 \\
\quad \ensuremath{Z\!+\!{\rm jets}}\xspace & 5.3 & 0.5 & 1.2 & 0.2 \\
\quad Other & 5.0 & 0.3 & 1.3 & 0.1 \\
\quad Total & 181.3 & 3.2 & 82.6 & 2.2 \\
\quad Observed & \multicolumn{2}{c}{191} & \multicolumn{2}{c}{112} \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\includegraphics[width=0.235\textwidth]{plots_prd/samples/WmtNJet4BTag2_ejets}
\hspace{-1mm}
\includegraphics[width=0.235\textwidth]{plots_prd/samples/WmtNJet4BTag2_mujets}\\
\includegraphics[width=0.235\textwidth]{plots_prd/samples/myMETNJet4BTag2_ejets}
\hspace{-1mm}
\includegraphics[width=0.235\textwidth]{plots_prd/samples/myMETNJet4BTag2_mujets}
\caption{\label{fig:yield}
The transverse mass of the $W$ boson $M_T^W$ for events with at least one $b$-tag is shown for the (a)~\ensuremath{e\!+\!{\rm jets}}\xspace and (b)~\ensuremath{\mu\!+\!{\rm jets}}\xspace channels.
Similarly, $\ensuremath{p\!\!\!\!/_T}\xspace$ is shown for the (c)~\ensuremath{e\!+\!{\rm jets}}\xspace and (d)~\ensuremath{\mu\!+\!{\rm jets}}\xspace channels.
The statistical uncertainties on the prediction from the \ensuremath{t\bar{t}}\xspace signal and background models are indicated by the hatched area.
}
\end{figure}
\subsection{Modeling of detector\label{ssec:syst_detector}}
\begin{enumerate}
\item
{\em Jet energy scale:~}
As indicated in Sec.~\ref{ssec:results}, we use the absolute JES calibration of $\ensuremath{k_{\rm JES}}\xspace=1.018\pm0.008$ determined from data. To propagate this uncertainty to \ensuremath{\Delta m}\xspace, we scale the jet energies in the selected data sample by $\ensuremath{k_{\rm JES}}\xspace\pm1$~SD.
\item
{\em Remaining jet energy scale:~}
The systematic uncertainty on the absolute JES discussed above does not account for possible effects from uncertainties on jet energy corrections that depend on $E_{\rm jet}$ and $\eta_{\rm jet}$. To estimate this effect on \ensuremath{\Delta m}\xspace, we rescale the energies of jets in the default \ensuremath{t\bar{t}}\xspace MC sample by a differential scale factor $S(E_{\rm jet},\eta_{\rm jet})$ that is a function of the JES uncertainties, but conserves the magnitude of the absolute JES correction.
\item
{\em Response to $b$ and light quarks:~}
The difference in the hadronic/electromagnetic response of the calorimeter leads to differences in the response to $b$ and light quarks between data and simulation.
This uncertainty is evaluated by re-scaling the energies of jets matched to $b$ quarks in the default \ensuremath{t\bar{t}}\xspace MC sample.
\item
{\em Response to $b$ and $\bar b$ quarks:~}
The measurement of \ensuremath{\Delta m}\xspace can be affected by differences in the reconstruction of the transverse momenta of particles and antiparticles. A~difference could in principle be caused by different \ensuremath{p_T}\xspace scales for $\mu^+$ and $\mu^-$. However, the data consist of an almost equal mix of events with opposite magnet polarities, thereby minimizing such biases. We do not observe any difference in calorimeter response to $e^+$ and $e^-$.
\\
{\phantom{123}}A systematic bias to $\ensuremath{\Delta m}\xspace$ can also be caused by differences in calorimeter response to quarks and antiquarks. In the case of \ensuremath{t\bar{t}}\xspace events, this bias could arise especially from a different response to $b$ and $\bar b$-quarks.
Several mechanisms could contribute to this, most notably a different content of $K^+/K^-$ mesons, which have different interaction cross sections.
In our evaluation of this systematic uncertainty, we assume that, although differences in response to $b/\bar b$ quarks are present in data, they are not modeled in MC events.
We measure the difference between the calorimeter response to $b$ quarks and that to $\bar b$ quarks, $\ensuremath{\mathcal{R}_{b,\bar b}}\xspace\equiv\mathcal{R}_{b}-\mathcal{R}_{\bar b}$, using a ``tag-and-probe'' method in data. Namely, we select back-to-back dijet events, and enhance the $b\bar b$ content by requiring $b$-tags for both jets. The tag jet is defined by the presence of a muon within the jet cone, whose charge serves as an indication of whether the probe jet is more likely to be a $b$ or a $\bar b$-quark jet. By evaluating the $|\vec \ensuremath{p_T}\xspace|$ imbalance between tag and probe jets for positively and negatively charged muon tags, we find an upper bound $|\ensuremath{\mathcal{R}_{b,\bar b}}\xspace|<0.0042$. Based on this result, we modify the default \ensuremath{t\bar{t}}\xspace MC sample by re-scaling the momenta $|\vec p|$ of $b$ ($\bar b$)-quark jets by $1\mp\frac12\cdot\ensuremath{\mathcal{R}_{b,\bar b}}\xspace=0.9979$~($1.0021$), and adjusting their 4-vectors accordingly (see the sketch following this list). We repeat the ensemble studies after recalculating the probabilities for the modified sample and quote the difference relative to the default sample as a systematic uncertainty.
\item
{\em Response to $c$ and $\bar c$ quarks:~}
A difference in calorimeter response to $c$ and $\bar c$ quarks can potentially bias \ensuremath{\Delta m}\xspace, since~$c$ quarks appear in decays of $W^+$ bosons from $t$ quark decays, and vice versa for $\bar c$ and $\bar t$. It is experimentally difficult to isolate a sufficiently clean sample of $c\bar c$ dijet events, since it would suffer from considerable contributions from $b\bar b$ dijet events. However, the major underlying mechanisms that could cause a response asymmetry, e.g., the different content of $K^+/K^-$ mesons, are the same, but of roughly opposite magnitude between $c$ and $b$ quark jets, which would result in an anticorrelation. Based on the above, we assume the same upper bound $|\mathcal{R}_{c,\bar c}|\leq|\mathcal{R}_{b,\bar b}|<0.0042$, and treat $\mathcal{R}_{c,\bar c}$ and $\mathcal{R}_{b,\bar b}$ as uncorrelated. To propagate the systematic uncertainty from $\mathcal{R}_{c,\bar c}$ to \ensuremath{\Delta m}\xspace, we apply a similar technique to that used for the estimation of the systematic uncertainty due to different response to $b$ and $\bar b$ quarks.
\item
{\em Jet identification efficiency:~}
D0\xspace uses scale factors to achieve data/MC agreement in jet identification efficiencies. To propagate to the \ensuremath{\Delta m}\xspace measurement the effect of uncertainties on these scale factors, we decrease the jet identification efficiencies in the default \ensuremath{t\bar{t}}\xspace sample according to their uncertainties.
\item
{\em Jet energy resolution:~}
An additional smearing of jet energies derived by comparison of the $\ensuremath{p_T}\xspace$ balance in $(Z\rightarrow ee)+1\,{\rm jet}$ events~\cite{bib:jer} is applied to all MC samples in this analysis in order to achieve better data/MC agreement. To evaluate any effect from data/MC disagreement in jet energy resolutions on~$\ensuremath{\Delta m}\xspace$, we modify the default \ensuremath{t\bar{t}}\xspace MC sample by varying the jet energy resolution within its uncertainty.
\item
{\em Determination of lepton charge:~}
This analysis uses the charge of the lepton in \ensuremath{t\bar{t}}\xspace candidate events to distinguish the top quark from the antitop quark. Incorrectly reconstructed lepton charges can result in a systematic shift in the measurement. The charge misidentification rate is found to be less than 1\% in studies of $Z\to ee$ data events. To estimate the contribution of this uncertainty, we assume a charge misidentification rate of 1\% for both \ensuremath{e\!+\!{\rm jets}}\xspace and \ensuremath{\mu\!+\!{\rm jets}}\xspace final states and evaluate the effects on \ensuremath{\Delta m}\xspace resulting from a change in the mean values of the extracted $\ensuremath{m_t}\xspace^{\rm cal}$ and $\ensuremath{m_{\bar t}}\xspace^{\rm cal}$.
\end{enumerate}
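As referenced in the discussion of the response to $b$ and $\bar b$ quarks above, the momentum rescaling admits a compact sketch. The four-vector handling below (recomputing the energy from the scaled momentum and a fixed mass) is our illustrative assumption of what ``adjusting their 4-vectors'' entails.
\begin{verbatim}
import math

R_BBBAR = 0.0042  # measured upper bound on |R_b - R_bbar|

def rescale_bjet(px, py, pz, mass, is_b_quark):
    """Scale |p| of a b (bbar) jet by 1 -/+ R/2 = 0.9979 (1.0021)
    and rebuild the 4-vector with the energy recomputed."""
    scale = 1.0 - 0.5 * R_BBBAR if is_b_quark else 1.0 + 0.5 * R_BBBAR
    px, py, pz = scale * px, scale * py, scale * pz
    energy = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return energy, px, py, pz

print(rescale_bjet(40.0, 30.0, 20.0, 4.7, is_b_quark=True))
\end{verbatim}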
\subsection{ME method}
\begin{enumerate}
\item
{\em Signal fraction:~}
The signal fractions \ensuremath{f}\xspace presented in Table \ref{tab:sfrac} are changed by their respective uncertainties for each decay channel, and ensemble studies are repeated for all MC samples to re-derive the calibration for $\ensuremath{\Delta m}\xspace$. The new calibrations are applied to data and the results compared with those obtained using the default calibration.
\item
{\em Background from multijet events\label{sub:sysqcd}:~}
In the calibration of this analysis, the background contribution to pseudo-experiments is formed using only $W$+jets events, as they are also assumed to model the small MJ background from QCD processes and smaller contributions from other background processes present in the data. To estimate the systematic uncertainty from this assumption, we define a dedicated MJ-enriched sample of events from data.
The calibration is re-derived with this background sample included in forming pseudo-experiments.
\item
{\em Calibration of the ME method:~}
The statistical uncertainties associated with the offset~($\xi_0$) and slope~($\xi_1$) parameters that define the mass calibration in Sec.~\ref{ssec:calib} contribute to the uncertainty on \ensuremath{\Delta m}\xspace. To quantify this, we calculate the uncertainty \ensuremath{\delta_{\Delta m}}\xspace due to $\delta_{\xi_0}$ and $\delta_{\xi_1}$ for each channel according to the error propagation formula
\[
~~~~~~~\ensuremath{\delta_{\Delta m}}\xspace = \left\{
\left(\frac{\ensuremath{\Delta m}\xspace-\xi_0}{\xi_1^2}\cdot\delta_{\xi_1}\right)^2
+ \left(\frac{\delta_{\xi_0}}{\xi_1}\right)^2
\right\}^{\frac12}
\]
and then combine the resulting uncertainties for the \mbox{\ensuremath{e\!+\!{\rm jets}}\xspace} and \ensuremath{\mu\!+\!{\rm jets}}\xspace channels in quadrature, as illustrated in the numerical sketch following this list.
\end{enumerate}
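A numerical sketch of this propagation and the quadrature combination follows; the $\xi_0$, $\xi_1$ values and their uncertainties below are placeholders, not the fitted calibration constants.
\begin{verbatim}
import math

def calib_uncertainty(delta_m, xi0, xi1, d_xi0, d_xi1):
    """Propagate the calibration uncertainties to Delta m."""
    t1 = (delta_m - xi0) / xi1 ** 2 * d_xi1
    t2 = d_xi0 / xi1
    return math.hypot(t1, t2)

# Placeholder inputs for the e+jets and mu+jets channels.
per_channel = [calib_uncertainty(0.8, 0.1, 1.0, 0.05, 0.02),
               calib_uncertainty(0.8, 0.1, 1.0, 0.06, 0.03)]
print(math.hypot(*per_channel))  # combined in quadrature
\end{verbatim}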
\section{Introduction}
\label{Introduction}
A two--channel Kondo impurity in a metal is, perhaps, the most promising
impurity system to exhibit low temperature non--Fermi--liquid behavior
experimentally. This hope rests on the relatively few degrees of
freedom involved (a local spin doublet coupled to two degenerate
``flavors'' of spin--$\frac{1}{2}$ fermions) with the symmetry group
under which they transform being not very complicated
[$SU(2)_{flavor}\times SU(2)_{spin}\times U(1)_{charge}$]. As a result
one might hope that it would not be difficult to find a system
whose low temperature behavior would be described by the two--channel
Kondo system.
Yet, for more than a decade since it was first
introduced by Nozi\`{e}res and Blandin\cite{Nozieres1}, no
experimental realization of this model has been conclusively
demonstrated. The difficulty lies in the fact that the
non--Fermi--liquid fixed point is unstable to various
symmetry--breaking processes which turn out to be present in real
experimental situations. For example, Nozi\`{e}res
and Blandin\cite{Nozieres1} pointed out that anisotropy between
the two flavor channels, caused by lattice effects, would destroy the
non--Fermi--liquid ground state. Since this anisotropy could not be
made to vanish for the known cases, the search for
non--Fermi--liquid behavior with conventional spin--Kondo systems was
suspended. Cox\cite{Cox} pointed out, however, that under certain
symmetry conditions, local quadrupolar degrees of freedom could
result in two channel Kondo-like coupling; unfortunately, dilute
impurity systems of this type have proven hard to make.
Later, Vlad\'{a}r and Zawadowski\cite{Zawadowski1} suggested that a
non--magnetic impurity tunnelling between two sites in a metal could
be modeled as a two--channel Kondo system in which the roles of the
channels and the spins in the original formulation are
interchanged. In this system the spin of the electron plays the role
of the ``flavor channels'', so that the anisotropy between
``channels'' is no longer an issue
since in zero external magnetic field the spin--up and the
spin--down electrons are degenerate. This led Vlad\'{a}r and
Zawadowski\cite{Zawadowski1} to predict
non--Fermi--liquid behavior in such a system. Recently Ralph
{\em et al}\cite{Ralph1,Ralph2} have interpreted low
temperature tunnelling data in very small metallic contacts in terms
of two--channel Kondo--like physics. Their measurements are
claimed to be consistent with certain exact results obtained by Affleck and
Ludwig\cite{Affleck1}, but at this point, the interpretation is
still controversial\cite{Wingreen}.
Unfortunately, the mapping of the two--site impurity to the
two--channel Kondo (2CK) system is far from exact, and, even with
no anisotropy between ``channels'' (the spin--up
and spin--down electrons) there are other processes present in the tunnelling
impurity system which are relevant and hence generically destroy the
non--Fermi--liquid ground state. These processes, which cannot be
neglected, have not been treated adequately in the
literature\cite{Zawadowski1,Zimanyi1,Murumatsu}.
In this paper we carefully consider the mapping between the tunnelling
impurity system and the two--channel Kondo (2CK) problem. We first
analyze the behavior when the impurity tunnelling starts to become
important as the temperature is lowered; this is analogous to the
weak coupling regime of the Kondo problem and is needed in order to
understand which tunnelling processes will dominate at low
temperatures. We then analyze the intermediate coupling behavior to
investigate whether there are parameter regimes in which the system
will be governed by the 2CK fixed point over a reasonable range of
temperatures -- as would be needed to observe the non--Fermi liquid
behavior experimentally.
Unfortunately, even in the optimal case of an impurity tunnelling
between two identical sites so that the system has an extra $Z_2$
symmetry, we find that the 2CK fixed point is only accessible if two
tunnelling processes, which are very different physically, nearly
exactly cancel. Generically, the system will exhibit Fermi liquid
behavior at low temperatures.
We analyze the behavior near the 2CK point in detail,
focusing on the connection between the physical operators and those
that appear as ``natural'' operators in the 2CK language. It is found
that, on the critical manifold which can be obtained in the symmetric
impurity problem by adjusting one parameter, there are {\em four}
leading irrelevant directions in contrast to the behavior for the pure
2CK problem analyzed by Sengupta and Georges\cite{Georges}.
Various symmetry breaking terms are also studied and our results
recover those derived by Affleck {\em et al}\cite{Affleck2,Affleck3} with
conformal field theory techniques.
A somewhat surprising feature emerges in the fuller phase diagram of
the symmetric impurity model: a {\em second} fixed point which
exhibits non-Fermi liquid behavior, albeit one with two
relevant directions in the $Z_2$ symmetric case.
\subsection{Outline}
In the remainder of the Introduction we motivate the form of the
Hamiltonian in which we focus and interpret the various terms that
should appear. In Section II we derive an effective Hamiltonian that
we will study and analyze its symmetries, while in Section III the
weak coupling analysis is outlined. In Section IV the behavior near
the intermediate coupling 2CK fixed point and, for completeness, the
various symmetry breaking operators near the 2CK fixed point are
studied. In Section V we discuss the existence of an extra novel fixed
point at intermediate coupling. In Section VI, we discuss the
accessibility of the 2CK fixed point and draw our conclusions.
Finally, in the Appendices, the details of the weak--coupling
(Appendix A) and the intermediate--coupling analysis (Appendix B) are
presented; the comparison of our results to those obtained by
conformal field theory is made in Appendix C.
\subsection{Physical Picture}
The system we wish to describe is an impurity or heavy particle which
can hop back and forth between two sites coupled to a bath of
electrons. The two sites may or may not be equivalent but we will
primarily focus on the symmetric (equivalent) case. The asymmetric
case can readily be treated in a similar manner. The effects of the
interaction of this impurity with the electrons can be manifested in a
number of ways. First, the electrons will tend to screen the charge of
the impurity. Thus the impurity will hop between the two sites
carrying with it tightly bound electrons which can move fast enough to
adjust to the position of the impurity; it is convenient to consider
these to simply be part of the ``dressed'' impurity particle. However,
in addition, as the impurity moves it may also redistribute the low
energy electronic excitations near the Fermi surface. Since we are
interested in the low energy physics, these processes must be treated
directly. For simplicity we consider only $s$-wave scattering off the
impurity.
If an impurity of charge $Ze$ hops between two well separated sites,
then the Friedel sum rule relates the ($s$-wave) scattering phase
shift off a static impurity, $\delta$, to the electronic charge that
will be moved to screen the impurity as it moves adiabatically from
one site to the other, via $Z=2\delta/\pi$, with the factor of two due
to the two spin species. Conversely, if the two sites are close
together, one can still usefully speak of an effective charge $Q$ (per
spin) which plays an analogous role to $Q=\delta/\pi$ in the well
separated case, but is no longer simply related either to scattering
phase shifts or to the impurity charge. It will instead turn out to be
exactly the ``orthogonality catastrophe'' exponent that determines the
system size dependence of the overlap between the electronic ground
states with the impurity at the two sites.\cite{Yamada} For
simplicity, we will focus on $Q$ in the range $0\leq Q \leq 1$,
corresponding to repulsive interactions. (As shown in reference
\onlinecite{MF}, other ranges of the effective charge can be reduced
to this case via a set of more complicated combined impurity-electron
processes related to those we consider here.)
The important processes, in addition to hopping of the (dressed)
impurity by itself, will be those in which one or two low energy
electrons move in the {\em opposite} direction to that in which the impurity
hops. These processes can, for $Q>1/2$, reduce the effective charge
that must relax to the new impurity position, thereby decreasing the
orthogonality between the pre- and post-hop configurations; this
results in a larger amplitude for the combined impurity-electron
process at low temperatures, relative to the simple impurity
hop process. [Note that in reference \onlinecite{MF}, a
sign error in the definition of $Q$ resulted in the incorrect
interpretation being given to these processes; this error does not affect
the conclusions, just the interpretation of the combined hopping
processes].
In order to proceed with the analysis of the low temperature behavior
of interest it would appear to be important to assess the relative
magnitudes of the amplitudes, at some
high energy scale, of the processes discussed above. However, it has
been claimed\cite{Kagan} that this is
essentially impossible for strong electron-impurity
interactions. Indeed Kagan and Prokof'ev\cite{Kagan} have claimed that
a sensible Hamiltonian cannot be written in terms of a simple two
level system since the high energy electronic degrees of freedom
cannot be properly taken into account. Although there are indeed real
difficulties here, it should nevertheless be possible to introduce
{\em effective} amplitudes at some intermediate energy scale and then
analyze the behavior of the system in the phase space of these
effective parameters. In general, unless there are specific reasons to
prevent it, one would expect that most combinations of parameters
could, in principle, be realized. We thus approach the problem via
this route and start with an effective Hamiltonian at an intermediate
energy scale, at, say, some fraction of the conduction bandwidth. As
we shall see, the extra hopping processes will in any case be
generated at low energies.
\section{Effective Hamiltonian}
\label{Effective Hamiltonian}
The important electronic degrees of freedom at low energies are those
that interact with the impurity in one of its two positions ``1'' and
``2''. These are just the $s$-wave conduction electrons around the
positions of the two sites. Thus at each energy, there will be two
important electronic degrees of freedom per spin. However,
unless the two sites are very far apart (in which case the impurity
tunnelling rates will be negligible and hence not of interest), the
two sets of $s$-wave electrons will {\em not} be orthogonal; this will
play an important role in what follows. If we label the two sets of
$s$-wave electrons by their energy, $\epsilon$, measured from the
Fermi surface, then for each $\epsilon$ there is an (essentially)
unique pair of linear combinations of the two $s$-wave states, that
are orthonormal and transform into each other under interchange of the
two sites. We label the annihilation operators of this orthonormal
pair $c_{1\epsilon}$ and
$c_{2\epsilon}$ with anticommutation relations
\begin{equation}
\label{anticommutationscie}
\left\{ c_{i\epsilon},c^\dagger_{j\epsilon'}\right\} = 2\pi
\delta\left(\epsilon-\epsilon'\right) \delta_{ij}
\end{equation}
with the ``1'' and ``2'' denoting the sites near which the
wavefunction is larger. The impurity interacts with, simply, the
operators
\begin{equation}
\label{defcilocal}
c_{1,2}= \int \frac{d\epsilon}{2\pi} c_{1,2\epsilon},
\end{equation}
although in each position the impurity will couple to {\em both}
$c_1$ and $c_2$ due to the non-orthogonality of the two electronic
$s$-wave states. With time reversal invariance, which we assume
henceforth, the most general interaction with the impurity becomes
(ignoring spin for now)
\begin{eqnarray}
\label{Uinit}
U= d^\dagger_1d_1 \left[V_1 \left(c^\dagger_1c_1
+c^\dagger_2c_2\right)\right. &+& V_2
\left(c^\dagger_1c_2+c^\dagger_2c_1\right) \\ \nonumber
&+&V_3\left.\left(c^\dagger_1c_1
-c^\dagger_2c_2\right)\right] \\ \nonumber
+ d^\dagger_2d_2 \left[V_1 \left(c^\dagger_2c_2
+c^\dagger_1c_1\right) \right.&+& V_2
\left(c^\dagger_2c_1+c^\dagger_1c_2\right) \\ \nonumber
&+&V_3\left.\left(c^\dagger_2c_2
-c^\dagger_1c_1\right)\right]
\end{eqnarray}
where $d_{1,2}$ are the annihilation operators of the impurity at the two sites.
Using the obvious identity $d^\dagger_1d_1+d^\dagger_2d_2=1$, Eq(\ref{Uinit})
can be written as
\begin{eqnarray}
\label{Ufinal}
U&=& V_1 \left(c^\dagger_1c_1 + c^\dagger_2c_2\right)
+ V_2 \left(c^\dagger_1c_2 + c^\dagger_2c_1\right) \\ \nonumber
&+& V_3 \left(d^\dagger_1d_1 -d^\dagger_2d_2\right) \left(c^\dagger_1c_1
-c^\dagger_2c_2\right).
\end{eqnarray}
Note the appearance of an effective electronic hopping term $V_2$,
caused by the scattering by the impurity; this will vanish if the
sites are far apart, but in general will be comparable to the other
terms.
The first term in Eq(\ref{Ufinal}), which is the average of the
interaction over the two impurity positions, merely produces a constant
phase shift for both ``1'' and ``2'' electrons at the Fermi level and,
combined with other operators, gives rise only to irrelevant
terms. Therefore we will ignore it at this point, although an
irrelevant operator it gives rise to will play a role in Section IV in
our discussion of the intermediate coupling behavior.
With the effective hopping charge $Q$ in the range
$\left[0,1\right]$, there are three
hopping processes that must be considered\cite{MF}:
\begin{eqnarray}
\label{Hhop1}
{\cal H}_{hop} = d^\dagger_2 d_1 \biggl[\Delta_0 +
\frac{\Delta_1}{2}
\left(c^\dagger_{1\uparrow}c_{2\uparrow} +
c^\dagger_{1\downarrow}c_{2\downarrow} \right)\biggr. \\ \nonumber
+\Delta_2\biggl.
c^\dagger_{1\uparrow}c_{2\uparrow}c^\dagger_{1\downarrow}c_{2\downarrow}
\biggr] + h.c.
\end{eqnarray}
representing hopping of the (dressed) impurity, jointly with,
respectively, 0, 1 and 2 electrons moving the opposite way. Although we
might start at a high energy scale with negligible $\Delta_1$ and
$\Delta_2$, these will be generated under renormalization and hence
must be included.
In order to analyze the renormalization group flows, it is convenient
to approximate the conduction band $s$-wave electrons $c_{1\epsilon}$
and $c_{2\epsilon}$ by a linear dispersion with a cutoff, at short
times, $\tau_c$, roughly the inverse bandwidth. Then the interactions
with the impurity will essentially be replaced by corresponding phase
shifts, specifically $V_3$ replaced by an effective phase shift that
we denote $\pi Q_0$; $Q_0$ will have the interpretation of an
effective ``charge''. We then have, after
inserting powers of $\tau_c$ to make the couplings dimensionless and
factors of $\pi$ for convenience,
\begin{equation}
\label{calH}
{\cal H}= {\cal H}_0 + {\cal H}_{int} + {\cal H}_{hop}
\end{equation}
with
\begin{equation}
\label{calHo}
{\cal H}_0= \sum_\sigma \int \frac{d\epsilon}{2\pi} \epsilon
\left[c^\dagger_{1\sigma\epsilon}c_{1\sigma\epsilon} +
c^\dagger_{2\sigma\epsilon}c_{2\sigma\epsilon}\right],
\end{equation}
\begin{eqnarray}
\label{calHint}
{\cal H}_{int} &=& \pi Q_0 \left(d^\dagger_2d_2 -d^\dagger_1d_1\right)
\sum_\sigma \left(c^\dagger_{2\sigma}c_{2\sigma}
-c^\dagger_{1\sigma}c_{1\sigma}\right) \\ \nonumber
&+& \pi y \sum_\sigma
\left(c^\dagger_{1\sigma}c_{2\sigma}
+c^\dagger_{2\sigma}c_{1\sigma}\right)
\end{eqnarray}
and
\begin{eqnarray}
\label{calHhop}
{\cal H}_{hop} = d^\dagger_2 d_1 \biggl[\frac{\Delta_0}{2\pi\tau_c}+
\frac{\Delta_1}{2}
\sum_\sigma c^\dagger_{1\sigma}c_{2\sigma} \biggr.\\ \nonumber
+\biggl. \Delta_2 2\pi\tau_c
c^\dagger_{1\uparrow}c_{2\uparrow}c^\dagger_{1\downarrow}c_{2\downarrow}
\biggr] + h.c.
\end{eqnarray}
We have rescaled the electronic term $V_2$ to a coefficient $y$ which
will play the role of a ``fugacity'' for electronic hops.
\subsection{Symmetries}
It is important at this stage to examine the symmetries of
Eq(\ref{calH}). In addition to time reversal, conservation of the
electrons $\left(c\rightarrow e^{i\phi} c\right)$, conservation of the
impurity $\left(d\rightarrow e^{i\phi} d\right)$, and $SU(2)$ spin
symmetry, the only other symmetry is interchange of the two sites and
the corresponding electronic states ($1\leftrightarrow 2$). Note,
however, that if the only hopping term had been $\Delta_1$, and if $y$
vanished, there would be an {\em extra} artificial symmetry $d_1
\rightarrow e^{i\phi} d_1$, $c_1
\rightarrow e^{i\phi} c_1$, $d_2\rightarrow d_2$, $c_2 \rightarrow
c_2$ corresponding to conservation of $N_1=d^\dagger_1 d_1 +
n_{c_1}$ and similarly $N_2$ {\em separately}, where $n_{c_1}$
is the number of the ``one'' electrons which, in the
absence of the channel mixing term $y$, are independent of the ``two''
electrons. As shown in reference \onlinecite{MF}, even if the
electronic states had
been optimally chosen so that there was no mixing of ``one'' and
``two'' electrons at the Fermi energy, the energy dependence of
scattering off the impurity would generate extra mixing terms in
${\cal H}_{int}$, that cannot simply be expressed in terms of $c_1$ and
$c_2$. These would break the artificial symmetry and under
renormalization generate a $y\left(c^\dagger_1 c_2 + h.c.\right)$
mixing term even in the absence of impurity motion. Thus it is best to
include $y$ from the beginning. (The neglected energy dependent
scattering terms will then not play an important role).
In order to understand the difficulties of reaching the 2CK-like fixed
point, this step is {\em crucial}.
The artificial symmetry in the absence of the channel mixing and
$\Delta_0$, $\Delta_2$ terms, corresponds to a conserved pseudo-spin
$N_1-N_2$ which is the sum of the ``$z$-components'' of the impurity
pseudo-spin $d^\dagger_2d_2-d^\dagger_1d_1$ and an electronic
pseudo-spin $n_{c_2} - n_{c_1}$. This pseudo-spin can
play the role of spin for the two-channel Kondo effect and, indeed,
under renormalization the system will flow to this intermediate
coupling 2CK fixed point if $Q>0$.\cite{Footnote1}
Unfortunately, there is no natural small parameter which keeps the
pseudo-spin symmetry breaking terms small.\cite{Footnote2}
\subsection{Bosonization}
In order to carry out the renormalization group analysis for small
bare impurity hopping rates, it is useful, as is standard, to bosonize
the electronic degrees of freedom, treating the electronic states
$c_{1,2 \epsilon}$ as those of a one-dimensional system with two sets
of right moving electrons with ``wavevectors'' $v_F (k-k_F)\propto
\epsilon$. It is simplest to set the Fermi velocity, $v_F=1$, and
treat $\epsilon$ like a
wavevector index, defining
\begin{equation}
\label{Psij(x)}
c_{j\sigma}\left(x\right) \equiv \int \frac{d\epsilon}{2\pi}
e^{i\epsilon x} c_{j\sigma\epsilon}=\frac{1}{\sqrt{2\pi\tau_c}}
e^{i\Phi_{j\sigma}\left(x\right)}
\end{equation}
so that
\begin{equation}
\label{dPhidx}
c^\dagger_{j\sigma}\left(x\right) c_{j\sigma}\left(x\right)=
\frac{1}{2\pi} \frac{\partial \Phi_{j\sigma}\left(x\right)}{\partial x}
\end{equation}
with $j=1,2$ $\sigma=\uparrow,\downarrow$ and $\Phi_{j\sigma}$ being
the corresponding bosonic degrees of freedom, where we have followed
Emery and Kivelson's notation.\cite{EK} Only $c_{j\sigma}\equiv
c_{j\sigma}\left(x=0\right)$ couples to the impurity. Note that in the
standard expression Eq(\ref{dPhidx}) the left hand side is normal
ordered and therefore the (infinite) uniform charge density does not
appear. Also corrections that vanish as $\tau_c\rightarrow 0$ are
neglected; we will be careful to include the effects of extra terms
when they play an important role.
Since we will later need to be careful to have the proper
anticommutation relations, we must insert extra factors of the form
$\exp\left(i\pi N_\mu\right)$, with $N_\mu\equiv \int dx
\Psi^\dagger_\mu\left(x\right) \Psi_\mu\left(x\right)$, into some of
the bosonized expressions to ensure anticommutations of the different
Fermi fields. These will not play a role as long as no spin-flip
processes occur, and, for the time being, we ignore them; the needed
modifications are spelled out in Appendix B.
It is useful to define even and odd components of the Bose fields
\begin{equation}
\label{defPhieo}
\Phi_{e,o \sigma}= \frac{1}{\sqrt{2}}\left(\Phi_{2\sigma} \pm
\Phi_{1\sigma}\right),
\end{equation}
the Hamiltonian then becomes
\begin{eqnarray}
\label{defbosonizedH}
{\cal H} &=& {\cal H}_0 + \frac{Q_0}{\sqrt{2}} \sigma_z \sum_\sigma
\frac{\partial\Phi_{o\sigma}}{\partial x}
+y\sum_\sigma \cos \Phi_{o\sigma} \\ \nonumber
&+&\frac{\Delta_0}{2\pi\tau_c} \sigma_x
+ \frac{\Delta_1}{4\pi\tau_c}
\sum_\sigma \left(\sigma_+
\exp\left[i\sqrt{2}\Phi_{o\sigma}\right] +
h.c.\right) \\ \nonumber
&+&\frac{\Delta_2}{2\pi\tau_c} \left(\sigma_+
\exp\left[i\sqrt{2}\left(\Phi_{o\uparrow}+\Phi_{o\downarrow}\right)
\right] + h.c.\right)
\end{eqnarray}
where in all the coupling terms the bosonic fields are evaluated at
$x=0$ and we use the impurity
pseudo-spin operators
\begin{equation}
\label{defsigmaz}
\sigma_z=d^\dagger_2d_2-d^\dagger_1d_1
\end{equation}
and
\begin{equation}
\label{defsigma+-}
\sigma_+=d^\dagger_2d_1\; ,\; \sigma_-=d^\dagger_1d_2.
\end{equation}
The conduction electron part of the Hamiltonian can be written in
terms of boson creation and annihilation operators
$\phi_\mu\left(\epsilon\right)$ with canonical commutation relation
\begin{equation}
\label{phiphi+comrelation}
\left[ \phi_\mu\left(\epsilon\right),
\phi^\dagger_\nu\left(\epsilon'\right) \right] = 2\pi
\delta_{\mu\nu} \delta\left(\epsilon-\epsilon'\right)
\end{equation}
via
\begin{equation}
\label{Phimuxasafnofphi}
\Phi_\mu\left(x\right)=\int_0^\infty
\frac{d\epsilon}{\sqrt{2\pi\epsilon}} \left[
\phi_\mu\left(\epsilon\right) e^{i\epsilon
x}+\phi^\dagger_\mu\left(\epsilon\right) e^{-i\epsilon x}\right]
e^{-\frac{\epsilon\tau_c}{2}}
\end{equation}
and
\begin{equation}
\label{Hoasafnofphi}
{\cal H}_0 = \sum_\mu \int_0^\infty \frac{d\epsilon}{2\pi} \epsilon
\phi^\dagger_\mu\left(\epsilon\right)\phi_\mu\left(\epsilon\right)
e^{-\epsilon\tau_c}
\end{equation}
which involves positive energy parts only. Here $\tau_c^{-1}$ is the
energy cutoff and $\mu$ represents the various Bose fields, i.e. for
Eq(\ref{defbosonizedH}), $\mu=\left(e\uparrow, e\downarrow, o\uparrow,
o\downarrow\right)$. We see that Eq(\ref{defbosonizedH}) does not
include any terms with
$\Phi_{e\downarrow}$ or $\Phi_{e\uparrow}$. Thus, up to operators that
are irrelevant for weak coupling, the impurity is decoupled from the
even boson fields.
It is convenient, following Emery and Kivelson\cite{EK}, to decompose
the odd field $\Phi_o$ which couples to the impurity, into a spin and
charge part, by
\begin{equation}
\label{Phiosigma}
\Phi_{o\sigma}=\frac{1}{\sqrt{2}} \left(\Phi_c + \sigma \Phi_s\right)
\end{equation}
with $\sigma=\pm$ for spin $\uparrow$, $\downarrow$, respectively. The
second term in Eq(\ref{defbosonizedH}) becomes simply
$Q_0 \sigma_z \frac{\partial\Phi_c}{\partial x}\left(0\right)$. This
term, which represents the difference in phase shifts for electron
scattering off the two positions of the impurity, can be shifted away
by a unitary transformation which changes the naive weak coupling scaling of
the hopping terms; this is the conventional approach
used\cite{Murumatsu,Georges,MF,EK} to derive the weak
coupling flows discussed in the next section.
Although the even fields will not play much role for the time being,
for later purposes we also introduce even fields $\Phi_{ec}$,
$\Phi_{es}$ by
\begin{equation}
\label{Phiec,s}
\Phi_{e\sigma}=\frac{1}{\sqrt{2}} \left(\Phi_{ec} + \sigma \Phi_{es}
\right).
\end{equation}
Note that any sum or difference of any two of the $\Phi_{j\sigma}$,
i.e. those that appear from operators bilinear in electron operators,
can be written as a sum or difference of two of the fields $\Phi_s$,
$\Phi_c$, $\Phi_{es}$, $\Phi_{ec}$ with coefficients of {\em unity};
this enables the method of Emery and Kivelson\cite{EK} to work.
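As a concrete check of this bookkeeping, combining Eq(\ref{defPhieo})
with Eq(\ref{Phiosigma}) and Eq(\ref{Phiec,s}) gives
\begin{equation}
\Phi_{j\sigma}=\frac{1}{2}\left[\Phi_{ec}+\sigma\Phi_{es}
\pm\left(\Phi_c+\sigma\Phi_s\right)\right],
\end{equation}
with the upper (lower) sign for $j=2$ ($j=1$); for example,
$\Phi_{2\uparrow}-\Phi_{1\downarrow}=\Phi_c+\Phi_{es}$, with
coefficients of unity as claimed.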
\section{Weak hopping analysis}
\label{sec:Analysisoftheweakcouplingpoint}
In order to connect the various amplitudes at the relatively high energy
scale of the effective Hamiltonian
(Eq(\ref{defbosonizedH})) to their renormalized values at low energies
we must
analyze the weak coupling renormalization group (RG) flow
equations for the amplitudes in Eq(\ref{defbosonizedH}). The
magnitudes of the various terms at the crossover scale to intermediate
coupling will determine which regions of the initial parameter space
can flow near to the 2CK fixed point.
Following the procedure in reference \onlinecite{MF}, we transform ${\cal H}$
to $U{\cal H}U^\dagger$ using the unitary operator
\begin{equation}
\label{UtransQo}
U=\exp\left[-i\sigma_z Q_0 \Phi_c\right]
\end{equation}
Subsequently we follow the RG approach described there and obtain
the following flow equations for the various amplitudes, where for
later convenience we introduce
\begin{equation}
\label{defq}
q=\frac{1}{2} -Q_0,
\end{equation}
which lies in the range $(-1/2,1/2)$:
\begin{eqnarray}
\label{weakrgeqns}
\frac{d\Delta_0}{dl}&=&\left(\frac{1}{2}+2q-2q^2\right)\Delta_0 +
y\Delta_1 +O\left(\Delta^3\right) \\ \nonumber
\frac{d\Delta_2}{dl}&=&\left(\frac{1}{2}-2q-2q^2\right)\Delta_2 +
y\Delta_1 +O\left(\Delta^3\right) \\ \nonumber
\frac{d\Delta_1}{dl}&=&\left(\frac{1}{2} -2q^2\right)\Delta_1 +
2y\left(\Delta_0+\Delta_2\right) +O\left(\Delta^3\right) \\
\nonumber
\frac{dq}{dl}&=&-2q\left(\Delta_0^2 +\Delta_2^2-\frac{1}{2} \Delta_1^2
\right)+ \left(\Delta_0^2 -\Delta_2^2\right) \\ \nonumber
\frac{dy}{dl}&=&\Delta_1\left(\Delta_0+\Delta_2\right)
\end{eqnarray}
The important cross-terms in the first three equations in
Eq(\ref{weakrgeqns}) that are proportional to the electronic mixing
term $y$, have a simple physical interpretation: they represent the
effects of an impurity and an electronic hop both occurring within a
short time interval so that, at lower energies, this appears as simply
the corresponding combined process. Note that we have not included
$O\left(y^2\right)$ terms in the above equations; the definition of
these will depend on the RG procedure, and they will not qualitatively
change the behavior. Thus, in the spirit of focusing on the important
processes and terms, we ignore them\cite{Footnote12A}.
Noting that $q$ and $y$ are constant to order
$O\left(\Delta^2\right)$, to analyze the flow for weak hopping, we can
safely set them to their initial values, $q_0$ and $y_0$. Then the
first three equations can be diagonalized exactly; the details are
discussed in Appendix A. The RG eigenvalues for the
hopping terms, about the zero hopping fixed line are
\begin{eqnarray}
\label{rgivalues}
\lambda_\pm&=&\frac{1}{2} -2q^2_0 \pm 2\sqrt{q_0^2+y_0^2} \\ \nonumber
\lambda_0&=&\frac{1}{2} -2q^2_0.
\end{eqnarray}
We now note that for $0\leq Q_0\leq 1$, corresponding to
$\left|q_0\right|\leq \frac{1}{2}$, at least two eigenvalues are
positive so that impurity hopping is always relevant (leading to the
conclusion of {\em absence} of impurity localization\cite{MF});
likewise for other ranges of $Q_0$, there will always be at least two
relevant hopping processes if there is only $s$-wave scattering off
the impurity.
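This counting is easy to verify directly. The following numerical
sketch (ours, purely illustrative and not part of the original
analysis; Python with NumPy) diagonalizes the linearization of
Eq(\ref{weakrgeqns}) about the zero-hopping fixed line and checks the
spectrum against Eq(\ref{rgivalues}):
\begin{verbatim}
# Illustrative check (not from the original analysis) of
# Eq. (rgivalues): eigenvalues of Eq. (weakrgeqns) linearized
# in (Delta_0, Delta_2, Delta_1) about zero hopping.
import numpy as np

def linearized(q, y):
    return np.array([
        [0.5 + 2*q - 2*q**2, 0.0,                y],
        [0.0,                0.5 - 2*q - 2*q**2, y],
        [2*y,                2*y,                0.5 - 2*q**2]])

for q0 in np.linspace(-0.49, 0.49, 25):
    for y0 in (0.0, 0.05, 0.2):
        lam = np.sort(np.linalg.eigvals(
            linearized(q0, y0)).real)
        lam0 = 0.5 - 2*q0**2
        root = 2*np.hypot(q0, y0)
        expect = np.sort([lam0 - root, lam0, lam0 + root])
        assert np.allclose(lam, expect)
        assert (lam > 0).sum() >= 2  # hopping always relevant
\end{verbatim}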
The Kondo temperature, $T_K$, is the energy scale at which the first
of the impurity hopping processes becomes of order unity --- i.e., of
order the renormalized bandwidth.
The system considered by Vlad\'{a}r and
Zawadowski\cite{Zawadowski1} and Vlad\'{a}r {\em et
al}\cite{Zimanyi1} essentially
amounts to neglecting $\Delta_2$ and $y$; for small
$\left|q\right|$, which will turn out to be the most interesting case,
this misses part of the physics. The reason is simply that they
neglect one relevant operator, $\Delta_2$, which mixes with the other
two to give the correct eigenvalues
(Eq(\ref{rgivalues})). Furthermore, as will be seen later,
non-vanishing values of $\Delta_2$ and $y$ are crucial to give the
correct renormalization flows close to the intermediate coupling fixed
point.
If we only kept $Q$ and $\Delta_1$ non-zero, their weak coupling flows
would be (up to
coefficients) like those for $J_z$ and $J_\perp$ for the conventional
Kondo problem. The Kondo scale is then simply
\begin{equation}
\label{tkondo}
\frac{T_K}{W} \sim \left(\frac{\Delta_1}{W}\right)^\frac{1}{\lambda_0},
\end{equation}
after reinserting factors of the bandwidth, $W$.
(For the special value $\Delta_1=2Q$, $T_K\sim e^{-\frac{1}{\Delta_1}}$,
as in the well-known anti-ferromagnetic Heisenberg Kondo
problem.) For this artificial case, at scales below $T_K$ the novel two
channel Kondo physics will indeed appear, as we will discuss later. This
results from the system approaching, at low energies, the
intermediate coupling 2CK fixed point.
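As a rough numerical illustration of Eq(\ref{tkondo}) (ours, with
hypothetical bare values), one can integrate the linearized flow of
$\Delta_1$ until it reaches order unity:
\begin{verbatim}
# Rough illustration (ours; hypothetical bare values) of the
# Kondo scale Eq. (tkondo): integrate d(Delta_1)/dl =
# lambda_0 Delta_1, with l = ln(W/E), until Delta_1 ~ 1.
import numpy as np

W, Delta1 = 1.0, 1e-3          # bandwidth and bare hopping
for q0 in (0.0, 0.3):
    lam0 = 0.5 - 2*q0**2
    l_K = np.log(1.0/Delta1)/lam0     # Delta_1(l_K) = 1
    T_K = W*np.exp(-l_K)              # = W (Delta1/W)^(1/lam0)
    assert np.isclose(T_K, W*(Delta1/W)**(1.0/lam0))
    print(f"q0={q0}: T_K/W = {T_K:.1e}")
\end{verbatim}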
Unfortunately, the breaking of the artificial pseudo-spin symmetry
leads to the appearance of terms which generically drive the flow {\em
away} from the 2CK fixed point. In order to analyze whether the
system can get near to the 2CK fixed point --- the prerequisite for
observation of non-Fermi liquid behavior --- we must be able to identify
the operators near the 2CK fixed point in terms of the original terms
in the
Hamiltonian; the magnitude of the operators, in particular the relevant
ones, can then be
determined, roughly, by ``matching'' the coefficients at the
crossover scale,
$T_K$, between the weak and intermediate coupling regimes.
{}From Appendix A, we see that the crossover temperature, $T_K$, will
generally be a complicated function of the original parameters. Before
examining the magnitude of the various important terms at $T_K$, we
turn to the
behavior near the 2CK fixed point; this will tell us which terms need
to be small at scales of order $T_K$ for 2CK behavior to obtain.
\section{Intermediate coupling analysis and two channel Kondo fixed point}
\label{sec:Intermediatecouplingfixedpoint}
{}From the weak coupling flow equations in Eq(\ref{weakrgeqns}), it is
apparent that, in the absence of electronic mixing ($y$=0) there is a
special value of the effective impurity charge $Q$: for
$Q=\frac{1}{2}$, corresponding to $q=0$, $\Delta_0$, $\Delta_1$ and
$\Delta_2$ all scale in the same way with eigenvalue
$\lambda=\lambda_0=\lambda_\pm=\frac{1}{2}$. By analogy to the
Toulouse limit of the conventional one channel Kondo problem, it is
thus natural to look for a solvable point that corresponds to $y=q=0$,
inspired by the observation that free Fermi fields have scaling
dimension of $\frac{1}{2}$ and thus might be used to represent all of
the hopping terms that appear for $q=0$. This has been carried out
recently by Emery and Kivelson\cite{EK} who ``refermionize'' the
bosonized operators that appear in the Hamiltonian enabling the
computation of the scaling of the various important operators.
By examining the weak coupling flows, it is apparent that
special behavior might occur when $\Delta_0+\Delta_2=0$.
Defining
\begin{equation}
\label{defDelta+}
\Delta_+\equiv \Delta_0+\Delta_2,
\end{equation}
we see that, at least to the order included in Eq(\ref{weakrgeqns}),
for $y=0$ and $\Delta_+=0$, $q$ flows to zero, $y$ is {\em not}
generated and one might hope that the flow would be towards the 2CK
fixed point. Indeed, the intermediate coupling analysis shows that
this can occur, even if
\begin{equation}
\label{defDelta-}
\Delta_- \equiv
\Delta_0-\Delta_2
\end{equation}
is non-zero so that the artificial pseudo-spin symmetry is broken.
Physically the role of $\Delta_+$ is very surprising. The processes
represented by $\Delta_0$ and $\Delta_2$ are very different and the
definitions which make them dimensionless are clearly cutoff
dependent. Thus the special critical behavior will not, in general,
occur exactly
at $\Delta_+=0$, since the location --- but not the existence --- of
the critical manifold will be affected by irrelevant operators. In
particular, from the weak coupling flows we can see that a non-zero
$q$ combined with $\Delta_- \neq 0$ {\em will} generate $\Delta_+$,
thus a ``bare'' $\Delta_+$ that is non-zero will be needed for the
flow to go to the 2CK fixed point asymptotically.
In order to understand the intermediate coupling behavior and the
special role of $Q=\frac{1}{2}$, following Emery and Kivelson\cite{EK}
and the analogous Toulouse limit\cite{Toulouse} of the conventional
Kondo problem, we perform a unitary transformation with
\begin{equation}
\label{Utrans1/2}
U=\exp\left[-\frac{i}{2}\sigma_z\Phi_c\right]
\end{equation}
which transforms the Hamiltonian to $\tilde{\cal H}=U{\cal H}U^\dagger$
\begin{eqnarray}
\label{Hbosrotated}
\tilde{\cal H} = {\cal H}_0 &+&
\frac{\Delta_1}{2\pi\tau_c}\sigma_x\cos\Phi_s +
\frac{\Delta_-}{2\pi\tau_c}\sigma_y\sin\Phi_c \\ \nonumber
&+&\frac{\Delta_+}{2\pi\tau_c}\sigma_x\cos\Phi_c
+ 2y\cos\Phi_c\cos\Phi_s -
q\sigma_z \frac{\partial\Phi_c}{\partial x} \\ \nonumber
&+&\frac{u}{\pi}\frac{\partial\Phi_{ec}}{\partial x}
-\frac{w}{2\pi}\sigma_x\cos\Phi_c\frac{\partial\Phi_{ec}}{\partial x}
\end{eqnarray}
where we have combined the terms
\begin{eqnarray}
\label{explanatoryforHbosrotated}
\frac{\Delta_2}{2\pi\tau_c}\left(\sigma_+ e^{i\Phi_c} + \sigma_-
e^{-i\Phi_c}\right) &+&
\frac{\Delta_0}{2\pi\tau_c}\left(\sigma_+ e^{-i\Phi_c} + \sigma_-
e^{i\Phi_c}\right) \nonumber \\
= \frac{\Delta_-}{2\pi\tau_c}\sigma_y\sin\Phi_c &+&
\frac{\Delta_+}{2\pi\tau_c}\sigma_x\cos\Phi_c,
\end{eqnarray}
and abbreviated $\Phi_\mu\left(x=0\right)$ simply by
$\Phi_\mu$. We have also reintroduced some of the coupling terms to
the even fields that will play roles in our analysis: in particular,
the marginal term
$u\frac{\partial\Phi_{ec}}{\partial x}$, which arises from the
impurity-position independent part of the electron--impurity
scattering ($V_1$ in Eq(\ref{Ufinal})), and the term
$w\sigma_x\cos\Phi_c\frac{\partial\Phi_{ec}}{\partial x}$, which arises
from the combination of $u$ and the impurity hopping term $\Delta_+$
and is irrelevant for weak hopping. An
additional irrelevant term, $\sigma_y \sin\Phi_s
\frac{\partial\Phi_{es}}{\partial x}$ couples the impurity to the electronic
{\em spin} degrees of freedom, but does not feed back to the other
operators and thus we ignore it for the time being. Its effect will be
discussed further in Appendix C.
\subsection{Symmetries}
Since we expect that the symmetries will play an important role, we
should examine what the original symmetries correspond to in
$\tilde{\cal H}$ and ensure that there are no extra symmetries which
might have arisen from the discarding of operators which were naively
irrelevant, since, as shown in reference \onlinecite{MF},
such procedures, especially
when combined with ``large'' transformations, such as
Eq(\ref{Utrans1/2}), can be dangerous.
\subsubsection{Gauge invariance and spin conservation}
Since there are no spin-flip processes, separate gauge transformations
can be made for each spin
$\Phi_{j\sigma}\left(x\right)\rightarrow\Phi_{j\sigma}\left(x\right) +
\theta_\sigma$, corresponding to $\Phi_{e\sigma}\rightarrow\Phi_{e\sigma} +
\sqrt{2} \theta_\sigma$. This does not play much role as the
$\Phi_{ec}$-field enters only as a derivative
$\frac{\partial\Phi_{ec}}{\partial x}$ and the
$\Phi_{es}$-field decouples from the impurity at the level at
which we work (there is feedback under renormalization from other
operators involving $\Phi_e$, but these only modify pre-existing terms
in $\tilde{\cal H}$).
However, if the $z$-component of electron spin is {\em not} conserved,
then only the symmetry
$\Phi_{ec}\rightarrow\Phi_{ec} + \theta_{ec}$ remains.
\subsubsection{Periodicity}
The definition of the Fermi fields, Eq(\ref{Psij(x)}), implies that
shifting any $\Phi_{j\sigma}$ by $2\pi$ should leave $\tilde{\cal H}$
unchanged. Depending on whether one or both spin components are
shifted, this implies that (ignoring shifts in $\Phi_{ec}$)
\begin{equation}
\label{Phicchange}
\Phi_c\rightarrow\Phi_c +2\pi
\end{equation}
and
\begin{equation}
\label{Phischange}
\Phi_s\rightarrow\Phi_s +2\pi
\end{equation}
are independent symmetries, as is the combination
\begin{eqnarray}
\label{combination1}
\Phi_c\rightarrow\Phi_c +\pi \;&,&\; \Phi_s\rightarrow\Phi_s +\pi\\
\nonumber
\sigma_x\rightarrow-\sigma_x \;&,&\; \sigma_y\rightarrow-\sigma_y ;
\end{eqnarray}
with the necessity for the simultaneous transformation of
$\sigma_{x,y}$ resulting from the unitary transformation, U, which
involves $\Phi_c$ so that $\Phi_c\rightarrow\Phi_c +\pi$ introduces an
extra \mbox{$\exp\left(-\frac{\pi i}{2}\sigma_z\right)=-i\sigma_z$} factor
into $U$ yielding $\sigma_{x,y}\rightarrow -\sigma_{x,y}$ in
$\tilde{\cal H}$.
\subsubsection{Interchange}
Interchanging sites one and two is equivalent to
\begin{eqnarray}
\label{combination2}
\Phi_c\rightarrow-\Phi_c \;&,&\; \Phi_s\rightarrow-\Phi_s, \\
\nonumber
\sigma_y\rightarrow-\sigma_y \;&,&\; \sigma_z\rightarrow-\sigma_z
\end{eqnarray}
with $\Phi_{ec,es}$ unchanged.
\subsubsection{Spin reversal}
Flipping electron spins is simply $\Phi_s\rightarrow-\Phi_s$ and
$\Phi_{es}\rightarrow -\Phi_{es}$.
\subsubsection{Time reversal}
Time reversal transformations change ingoing to outgoing waves,
thereby yielding
$x\rightarrow-x$, $i\rightarrow -i$, all \mbox{$\Phi_\mu\left(x\right)
\rightarrow-\Phi_\mu\left(-x\right)$} and
$\sigma_y\rightarrow-\sigma_y$. Note that here we are {\em not} time
reversing the spins.
\subsubsection{Artificial extra symmetries}
We now see that, indeed, $\tilde{\cal H}$ in Eq(\ref{Hbosrotated})
with all coefficients non-zero, does {\em not} have any artificial
symmetries.
But as seen earlier, an artificial extra pseudo-spin symmetry is possible:
pseudo-spin conservation mod 2 corresponds to $\Phi_c\rightarrow\Phi_c
+\pi$, and more generally full pseudo-spin symmetry corresponds to
independence of the ``one'' and ``two'' electrons,
i.e. \mbox{$\Phi_c\rightarrow \Phi_c +\theta_c$} with any
$\theta_c$. The terms $w$,
$\Delta_+$, $\Delta_-$ and $y$ all violate this, and it can readily be
seen in the representation of $\tilde{\cal H}$ of
Eq(\ref{Hbosrotated}) that $y$ combined with
$\Delta_1$ generates $\Delta_+$, and $q$ combined with $\Delta_-$
generates $\Delta_+$, as expected.
In the representation of Eq(\ref{Hbosrotated}) we see that there is
another possible artificial symmetry: If $w$, $\Delta_+$, $y$ and $q$ are
all zero, then
\begin{equation}
\label{Phictrans}
\Phi_c\rightarrow\pi-\Phi_c
\end{equation}
becomes a symmetry. This, as we shall see, restricts the system
automatically to the stable critical manifold of the 2CK fixed point.
But note that because of the unitary transformation of
Eq(\ref{Utrans1/2}) $\Phi_c\rightarrow\pi-\Phi_c$ does {\em not}
correspond to a realizable symmetry in terms of the original variables
since it mixes hops involving the impurity alone and those involving
the impurity together with two electrons. (Indeed many other
irrelevant terms neglected in $\tilde{\cal H}$ will also violate this
artificial symmetry.)
\subsection{Refermionization}
If $\Delta_+$, $q$, $y$ and the other operators neglected in
$\tilde{\cal H}$ all vanish, then the system will still exhibit the
novel intermediate coupling 2CK behavior. As we shall see, the only
relevant operator near the 2CK fixed point, which is consistent with
the true symmetries of the impurity hopping between two equivalent
sites, is $\Delta_+$, thus only one combination of physical quantities
needs to be adjusted to obtain the 2CK behavior. Unfortunately, this
is a combination which is not naturally small.
The behavior near the 2CK fixed point can most easily be found
following Emery and Kivelson\cite{EK} by ``refermionizing'' the Bose
fields $\Phi_c$, $\Phi_s$ and $\Phi_{ec}$ that appear in $\tilde{\cal
H}$ in Eq(\ref{Hbosrotated}), noting that $\exp i\Phi_\mu$ (with
$\Phi_\mu$ properly normalized) behaves like a pseudo-Fermi field. The
details are discussed in Appendix B.
Crudely, for $\mu=c$, $s$, $ec$, $es$, each field $e^{i\Phi_\mu}$ is
replaced by a new Fermi field, $\Psi_\mu$, and $\sigma_-$ by a local
Fermi field $d$, with appropriate factors of $e^{i\pi N_\mu}$ and
$e^{i\pi N_d}$ to give the correct anticommutation relations. The
symmetries can most easily be seen, and the Hamiltonian simplified, by
writing the new Fermi fields in terms of a set of Majorana (hermitian)
fermions:
\begin{eqnarray}
\label{defMajoranafermions}
d&=&\frac{1}{\sqrt{2}} \left(\gamma+i\delta\right) \\ \nonumber
\Psi_\mu\left(x=0\right)&=&\frac{1}{\sqrt{2}}\left(\alpha_\mu+
i\beta_\mu\right).
\end{eqnarray}
Note that
\begin{equation}
\label{defsigmazwrtgammadelta}
\sigma_z=2i\gamma\delta.
\end{equation}
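These relations can be checked with an explicit two-dimensional
representation on the impurity pseudo-spin space (the particular
matrix choice below is illustrative, not unique):
\begin{verbatim}
# Check (ours) of the local Majorana algebra; any
# representation with gamma^2 = delta^2 = 1/2 and
# {gamma, delta} = 0 is equivalent.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma, delta = sy/np.sqrt(2), sx/np.sqrt(2)
d = (gamma + 1j*delta)/np.sqrt(2)
anti = lambda a, b: a @ b + b @ a

assert np.allclose(anti(gamma, delta), 0)  # {gamma,delta}=0
assert np.allclose(gamma @ gamma, np.eye(2)/2)  # gamma^2=1/2
assert np.allclose(anti(d, d.conj().T), np.eye(2))  # {d,d+}=1
assert np.allclose(2j*(gamma @ delta), sz)  # sigma_z
\end{verbatim}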
The symmetry restrictions can now be examined in terms of these
variables; the details are given in Appendix B. The periodicity of
the Bose fields $\Phi_c$ and $\Phi_s$ simply implies that only terms
with an even number of Fermi fields can appear in the Hamiltonian.
{\em Gauge invariance} implies that $\Psi_{ec}$ can only appear as
$\Psi_{ec}^\dagger\Psi_{ec}$ and the {\em $z$-component of spin
conservation} that $\Psi_{es}$ cannot appear in the absence of
magnetic fields in the $x$- or $y$-direction. {\em Spin reversal},
because of the role of the ordering operators, takes
$\Psi_s\rightarrow-\Psi_s^\dagger$ and
$\Psi_{es}\rightarrow-\Psi_{es}^\dagger$, implying that $\alpha_s$ and
$\alpha_{es}$ by themselves are excluded by spin reversal symmetry.
{\em Interchange symmetry} takes $\Psi_c\rightarrow - \Psi_c^\dagger$,
$\Psi_s\rightarrow -\Psi_s^\dagger$ and $d\rightarrow d^\dagger$
thereby requiring that $\alpha_c$ and $\delta$ must appear together.
Finally, {\em time reversal} takes $x\rightarrow -x$, $i\rightarrow
-i$ and $\Phi\rightarrow -\Phi$, allowing only real coefficients of
$\Psi_\mu$ operators, and hence forbidding terms like $i\gamma\alpha$.
The Hamiltonian becomes
\begin{eqnarray}
\label{Hrefermionized1}
\hat{\cal H}= {\cal H}_0 &+& \frac{i}{\sqrt{2\pi\tau_c}}
\left(\Delta_1 \gamma \beta_s + \Delta_-
\gamma \beta_c + \Delta_+ \delta\alpha_c\right) \\ \nonumber
&-& 4\pi y \gamma\delta\alpha_c\beta_s + 4\pi q
\gamma\delta\alpha_c\beta_c\\ \nonumber
&+& 2iu\alpha_{ec}\beta_{ec} +
w\sqrt{2\pi\tau_c}\delta\alpha_c\alpha_{ec}\beta_{ec}
\end{eqnarray}
with ${\cal H}_0$ the kinetic energy of the four (eight Majorana) new
Fermi fields, $\Psi_\mu$. With the full symmetries of the system, the
other five fields ($\alpha_s$ and the $ec$-, $es$-fields) cannot
appear in the couplings to the
impurity, except in relatively innocuous forms, involving the simple
potential coupling to the average position of the impurity,
$i\alpha_{ec}\beta_{ec}$ and combinations of this with other terms, as
well as the irrelevant term $\delta\alpha_s\alpha_{es}\beta_{es}$,
which will be discussed in Appendix C.
Note that other potentially important operators, like
$\delta\beta_s\alpha_c\beta_c$ and $i\delta
\frac{\partial\alpha_c\left(0\right)}{\partial x}$ are excluded by
time reversal invariance.
The original 2CK problem studied by Emery and Kivelson\cite{EK} and
Sengupta and Georges\cite{Georges} corresponds to
$\Delta_-=\Delta_+=y=w=0$. In our case non-zero $\Delta_-$ can be
important by observing that ``half'' of the impurity, $\gamma$,
couples to both $\beta_c$ and $\beta_s$, thus it is convenient to
rediagonalize and make linear combinations of these, $\beta_I$ and
$\beta_X$ yielding, with
\begin{equation}
\label{defDeltaK}
\Delta_K=\sqrt{\Delta_1^2 + \Delta_-^2},
\end{equation}
\begin{eqnarray}
\label{Hrefermionized2}
\hat{\cal H}= {\cal H}_0 &+&
\frac{i}{\sqrt{2\pi\tau_c}} \left(\Delta_K
\gamma \beta_I +
\Delta_+ \delta\alpha_c \right)\\ \nonumber
&+& 4\pi\bar{q} \gamma\delta\alpha_c\beta_X +4\pi\bar{y}
\gamma\delta\alpha_c\beta_I \\ \nonumber
&+& 2iu\alpha_{ec}\beta_{ec} +\sqrt{2\pi\tau_c}w
\delta\alpha_c\alpha_{ec}\beta_{ec},
\end{eqnarray}
where $\bar{y}$ and $\bar{q}$ are linear combinations of the original
$y$ and $q$ (see Eq(\ref{defqybar}) in Appendix B for details). The
above rediagonalization of $\beta_c$ and $\beta_s$ roughly
corresponds to a rotation of ``spin'' axes in the conventional 2CK
language.
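Explicitly, one consistent choice for this rotation (up to the sign
conventions fixed in Eq(\ref{defbetaix}) of Appendix B) is
\begin{equation}
\beta_I=\frac{\Delta_1\beta_s+\Delta_-\beta_c}{\Delta_K}\; ,\;
\beta_X=\frac{\Delta_1\beta_c-\Delta_-\beta_s}{\Delta_K},
\end{equation}
so that $\Delta_1\gamma\beta_s+\Delta_-\gamma\beta_c
=\Delta_K\gamma\beta_I$, while $\beta_X$ enters
Eq(\ref{Hrefermionized2}) only through the marginal $\bar{q}$
coupling.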
{}From the electronic kinetic energy ${\cal H}_0$, the $\alpha$'s and
$\beta$'s all scale, with time scale $\tau$, as
$\tau^{-\frac{1}{2}}$. If all the couplings are
small, then the anti-commutation relations of $\gamma$ and $\delta$
imply that they are dimensionless so that $\Delta_+$ and $\Delta_K$
scale as $\tau^{-\frac{1}{2}}$, while $\bar{q}$ and $\bar{y}$ are
marginal, as found in the weak coupling
analysis of Section III (Eq(\ref{weakrgeqns})), and $w$ is irrelevant.
\subsection{Two channel Kondo fixed point and flows}
When $\bar{y}=\bar{q}=\Delta_+=w=0$,
the Hamiltonian in Eq(\ref{Hrefermionized2}) corresponds to the Toulouse
limit analyzed by Emery and Kivelson\cite{EK}. As a free fermion
system it can be analyzed straightforwardly. In this limit ``half'' of
the impurity, $\delta$, is uncoupled from the electrons and thus has
no dynamics, while the other ``half'', $\gamma$, gets dynamics from
coupling to the electrons with correlations at large imaginary times
\begin{equation}
\label{gammacorrelation}
\left<T_\tau\gamma\left(\tau\right)\gamma\left(0\right)\right> \sim
\frac{1}{\tau}.
\end{equation}
Together these yield the non-Fermi liquid 2CK behavior
\begin{equation}
\label{sigmazcorrelation}
\left<T_\tau\sigma_z\left(\tau\right)\sigma_z\left(0\right)\right> \sim
\frac{1}{\tau}.
\end{equation}
It is important to note that\cite{NozieresPrivComm} the solvable
Hamiltonian is {\em not} generally at the 2CK fixed point. Indeed, the
correlations are readily seen to exhibit crossover from weak coupling
behavior, $\left<\sigma_z \sigma_z\right>\sim const$, to the non-Fermi
liquid behavior of Eq(\ref{sigmazcorrelation}) for
$\tau\gtrsim\Delta_K^{-2} \tau_c$.
The 2CK fixed point, formally, corresponds to
$\Delta_K\rightarrow\infty$. It is more convenient, however, to allow
instead the normalization of $\gamma$ to change, corresponding to
letting the coefficient of the $\int \gamma\partial_\tau \gamma \:
d\tau$ term in the Lagrangian vary. At the fixed point, this coefficient,
say $g_\gamma$, will be zero, while $\Delta_K$ becomes a constant; the
correlations of $\gamma$ are then simply the inverse (in frequency
space) of those of $\beta_I$, i.e. a pure power law Eq(\ref{Ggamma}).
To connect the two regimes together, one could choose to renormalize
so that, for example, $\frac{\Delta_K^2}{4\pi}+ g_\gamma=1$ [a
particularly convenient choice; see Eq(\ref{DeltaKgconstraint})], by
rescaling $\gamma$ under renormalization by a $\Delta_K$-dependent
amount. Details about this procedure are given in Appendix B. [Note,
however, that the resulting fixed point Hamiltonian (with
$\Delta_K=\sqrt{4\pi}$) will {\em not } have the pseudo-spin $SU(2)$
symmetry of the pure 2CK problem. This is because the RG approach we
have implemented here, including the unitary transformation of
Eq(\ref{Utrans1/2}), is inherently anisotropic in channels (equivalent
to spin of conventional 2CK problem). In Appendix C we will show that
this anisotropy will not affect the results.]
At the 2CK fixed point, the above scaling implies
$\gamma\sim\tau^{-\frac{1}{2}}$, while $\delta$ is still dimensionless
so that the RG eigenvalues of the other operators can be read off
immediately: +1/2 for $\Delta_+$, the unique relevant operator
consistent with the symmetries of the problem; -1/2 for $\bar{q}$,
$\bar{y}$ and $w$, which are the three leading irrelevant operators
discussed so far (the fourth will appear in Appendix C); and 0 for
$u$, which is marginal but redundant, in that it does not affect the
impurity dynamics.
The operator corresponding to $\bar{q}$ does not give rise to any
terms which couple $\delta$ linearly to the $\alpha$'s and $\beta$'s
and hence will not generate $\Delta_+$. It
is the leading irrelevant operator for the 2CK problem identified by
Sengupta and Georges\cite{Georges}. An artificial symmetry, indeed
just the one discussed earlier in Eq(\ref{Phictrans}), is responsible
for its special role:
$\Phi_c\rightarrow \pi-\Phi_c$ corresponds to the discrete symmetry
$\alpha_c\rightarrow-\alpha_c$ and $\beta_X\rightarrow-\beta_X$ which
does not have a natural representation even in terms of $\Phi_c$ and
$\Phi_s$. But if present, this artificial symmetry excludes the
generation of terms like $\Delta_+$, $\bar{y}$ and $w$, even in the
presence of $\bar{q}$. In fact, without these terms, the extra
artificial symmetry is really an $O(2)$ symmetry in the
$\{\alpha_c, \beta_X\}$ pair, consisting of a $U(1)$ of
rotations in the $\left(\alpha_c, \beta_X\right)$ plane, combined (in a
non-commutative way) with $Z_2$, the usual site interchange symmetry, which
takes $\alpha_c\rightarrow-\alpha_c$ and
$\beta_X\rightarrow\beta_X$. This is exactly analogous to the $O(2)$
symmetry present in the model treated by
Emery and Kivelson\cite{EK} and
Sengupta and Georges\cite{Georges}.
In contrast, the irrelevant operators $w$ and $\bar{y}$ break the
artificial symmetry and yield, at lowest order, the generation of the
relevant operator $\Delta_+$ (see Eq(\ref{intermediatergeqns}))
\begin{equation}
\label{Delta+rgeq}
\frac{d\Delta_+}{dl}= \frac{1}{2} \Delta_+ + 2\bar{y}\Delta_K +
\frac{wu}{\sqrt{2}\left(1+u^2\right)},
\end{equation}
consistent with expectations from weak coupling. This implies, as
stated earlier, that the critical point will not be exactly at
$\Delta_+=0$.
Of the three leading irrelevant operators, it should be noted that,
although $\bar{y}$ has the same scaling dimension as $\bar{q}$ and
$w$, it has a different role close to the 2CK fixed point. The reason
is that it couples to the term $i\gamma\beta_I$, already present at
the fixed point. But the $\Delta_K$ term at the fixed point suppresses
fluctuations of $\beta_I$, causing the leading term in the
correlations of $\beta_I$ to {\em vanish} at the fixed point (with
sub-dominant terms caused by $g_\gamma\neq 0$). As a result, unlike
$\bar{q}$ and $w$ which each yield $O\left(T\ln T\right)$
contributions to the impurity specific heat\cite{Georges} --- a key
feature of a 2CK non-Fermi liquid --- the singular part arising from
$\bar{y}$ is only of $O\left(T^3\ln T\right)$, for temperatures $T\ll
T_K$. Thus only {\em two} of the above independent leading irrelevant
operators give leading singular specific heat corrections. In fact,
there is a third one, involving only spin degrees of freedom, which is
discussed in Appendix C.
In Section VI, the conditions for accessibility of the 2CK fixed point
are analyzed. We turn here to further analysis of the behavior near
the 2CK fixed point.
\subsection{Symmetry Breaking Operators}
Up to this point we have only dealt with an electron-impurity system
which is invariant under $Z_2$ ($1\leftrightarrow 2$ interchange) and
spin $SU(2)$ symmetry. However, in a realistic situation of an
impurity in a metal, there will generally be a non-zero, although
possibly small, asymmetry between the two sites which will break the
$Z_2$ symmetry. Furthermore, in the presence of a magnetic field, the
equivalence between the two spin channels will be lost, leading to a
situation similar to the anisotropic Kondo
problem\cite{Nozieres1,Nozieres2,Coleman}. As might be expected, in
both cases, the symmetry breaking terms are relevant and in their
presence the system flows away from the 2CK fixed point. In this
section, we will briefly comment on the effects of symmetry breaking
terms close to the 2CK fixed point.
It is clear from the discussion in the previous section that for an
operator to be relevant close to the 2CK fixed point, it has to be of
the form $i\delta \chi$ where $\chi$ is a Majorana fermion of scaling
dimension 1/2. From the ten Majorana fermions (four pairs of
$\alpha_\mu$, $\beta_\mu$ and $\gamma$, $\delta$, all listed in Table
\ref{Table1}) we can make nine such operators. Excluding $i\delta
\alpha_{ec}$ and $i\delta \beta_{ec}$ due to total electron number
conservation (which only allows $\alpha_{ec}$ and $\beta_{ec}$ to
appear together as $\alpha_{ec}\beta_{ec}$) we are left with seven
possible terms.
From the transformation properties under the discrete symmetries of
the system, listed in Table \ref{Table1}, it can be seen that
$\beta_c$ and $\beta_s$ have the same symmetries; indeed this is why
the $\Delta_1$ and $\Delta_-$ terms in Eq(\ref{Hrefermionized1}) could
be combined into the $\Delta_K$ term of Eq(\ref{Hrefermionized2}). In
the presence of small $i\delta\beta_c$ and $i\delta\beta_s$ terms, a
small rotation of the ($\delta$, $\gamma$) pair as well as a small
additional rotation of the ($\beta_c$, $\beta_s$) pair can be
performed to yield just a slightly modified $i\Delta_K\gamma\beta_I$
term, and a single remaining relevant perturbation, the $i\delta\beta_X$
term. The extra operators ($i\gamma\beta_X$ and $i\delta\beta_I$) are
thus ``redundant''.\cite{Footnote22A} Therefore at the 2CK fixed
point, there are exactly {\em six relevant} operators, all with RG
eigenvalue of 1/2, like $\Delta_+$. These, along with their symmetry
properties, are listed in Table \ref{Table2}.
The first three operators in Table \ref{Table2} do not break the
spin $SU(2)$ symmetry. Among these, the first, our familiar
$i\Delta_+\delta\alpha_c$, is interchange and time-reversal symmetric,
corresponding to the relevant part of the channel pseudo-spin operator
$S_x$. Correspondingly, the second, $i\delta\gamma$, breaks
interchange symmetry but is time reversal invariant, corresponding to a
$S_z$ operator. This will result from simple asymmetry between the
impurity energies of the two sites, i.e. a $\sigma_z$ term in the
original Hamiltonian. Finally, $i\delta\beta_c$ (or, equivalently,
$i\delta\beta_s$) breaks interchange and time reversal, which makes it an
imaginary operator, i.e. it is generated by a $S_y$ operator in the
channel sector, corresponding to complex hopping matrix elements.
The relevant spin $SU(2)$ breaking operators are $i\delta\alpha_s$,
$i\delta\alpha_{es}$ and $i\delta\beta_{es}$; these correspond to
joint electron-impurity hops accompanied by a spin flip or carrying
electronic spin. They correspond to combinations of
$\sigma_+\left(c^\dagger_{1\uparrow}c_{2\uparrow} -
c^\dagger_{1\downarrow}c_{2\downarrow} \right)$, $\sigma_+
c^\dagger_{1\downarrow}c_{2\uparrow}$,
$\sigma_+c^\dagger_{1\uparrow}c_{2\downarrow}$ and their hermitian
conjugates and are discussed in Appendix B (see Eq(\ref{Hsf}) and
Eq(\ref{MajoranaHsf})). The first corresponds to the ``flavor''
anisotropy term in the conventional two channel Kondo
model\cite{Nozieres1,Nozieres2,Coleman}, and, being interchange and
time reversal symmetric but odd under spin flip, is induced by a
magnetic field in the $z$-direction. Interestingly, the remaining two
relevant spin $SU(2)$ breaking operators are {\em odd} under the $Z_2$
interchange transformation. This means that in order to adjust them to
zero in a non-zero external magnetic field, one would also have to
tune terms that break the interchange symmetry of the problem, making
any additional novel, finite-magnetic-field non-Fermi liquid fixed
points (analogous to that found in zero field for $\Delta_+\gg
\Delta_1$, $\Delta_-$) extremely hard to observe in a system that does
not have interchange symmetry.
\section{Additional intermediate coupling fixed point}
\label{Additionalfixedpoint}
In the previous section we analyzed the behavior of the system close
to the 2CK fixed point, that corresponds to the limit $\Delta_K \gg
\Delta_+$. There, we showed that $\gamma$, ``half'' of the impurity,
acquired non-trivial dynamics, which essentially gave rise to the
non-Fermi liquid behavior of the system. However, as is evident by
examining Eq(\ref{Hrefermionized2}), it should, in principle, be
possible to get the same type of non-Fermi liquid behavior if the inequality
above were reversed ($\Delta_K \ll \Delta_+$).
Indeed, following the same line of argument as above, we see that when
$\Delta_K=\bar{q}=\bar{y}=w=u=0$ and $\Delta_+ \neq 0$ we have a
critical point, which has an extra artificial $U(1)$ symmetry, namely
rotations in the $\left(\beta_I, \beta_X\right)$ plane. At this
secondary fixed point, $\delta$, the other ``half'' of the impurity,
acquires dynamics, rather than $\gamma$. The operators $\bar{q}$,
$\bar{y}$, $i\gamma\beta_I\Psi_{ec}^\dagger\Psi_{ec}$ and
$i\gamma\beta_X\Psi_{ec}^\dagger\Psi_{ec}$ have scaling dimension 3/2
and thus are irrelevant with RG eigenvalue -1/2. But of these, only
the last two will give singular specific heat corrections
($O\left(T\ln T\right)$).
However, there is an important difference from the primary 2CK fixed
point discussed earlier. About this fixed point, there are {\em two}
relevant operators, consistent with the symmetries of the model,
namely $i\gamma \beta_I$ and $i\gamma \beta_X$ (or, equivalently,
$i\gamma \beta_c$ and $i\gamma \beta_s$), both with dimension 1/2.
These correspond simply to $\Delta_1$ and $\Delta_-$ before the change
of variables (Eq(\ref{defbetaix})) leading to
Eq(\ref{Hrefermionized2}); they can be generated from nonzero $q$ and
$y$. The existence of two relevant operators is due to the fact that
$\beta_I$ and $\beta_X$ transform the {\em same} way under the
discrete symmetries; thus the artificial $U(1)$ symmetry {\em cannot}
be extended into an $O(2)$ group (as was the case for the 2CK fixed
point). As a result, this new fixed point is harder to find in the
interchange symmetric case than the primary 2CK fixed point, as it
requires the impurity-single-electron hopping term to be small and the
two-electron-plus-impurity and the simple impurity hopping terms to
be almost exactly equal at the Kondo scale. It should be noted,
however, that the {\em total} number of relevant, dimension 1/2,
operators around this fixed point is again six, including the above
mentioned ones, together with $i\gamma\delta$ and the three spin
$SU(2)$ symmetry breaking operators, $i\gamma\alpha_s$,
$i\gamma\alpha_{es}$ and $i\gamma\beta_{es}$, as discussed in Section
IV.D and listed in Table \ref{Table2}, indicating that the fixed point
symmetry is again that of the conventional 2CK model. This is
supported by the fact that, just like in the case of the 2CK fixed
point, there are four operators with scaling dimension 3/2, albeit
with completely different symmetries.
Finally, some comments are needed on the nature of this novel
fixed point. We should first stress the {\em absence} of $\Delta_1$,
the impurity-single-electron hopping term, which together with the
$Q_0$ term (see Eq(\ref{calHint})) would form the conventional
Kondo-like interaction term. Hence, the appearance of non-Fermi liquid
behavior does {\em not} originate from the competition between the
spin up and spin down electrons to form a channel-pseudo-spin singlet
ground state with the impurity, but, rather, in the presence of strong
impurity-electron repulsion ($Q_0\approx 1/2$), from the competition
between bare impurity tunnelling and two-electron-plus-impurity
tunnelling. Formally, there is an analogy with the conventional two
channel flavor-anisotropic Kondo model,\cite{Nozieres2} with
$\Delta_\pm$ playing the role of the Kondo couplings of the two
``flavor'' channels, which can be seen in the left hand side of
Eq(\ref{explanatoryforHbosrotated}) if $\Phi_c$ is substituted by
$\Phi_s$. When $\Delta_0\neq\Delta_2$, one flavor channel couples more
strongly to the impurity and therefore screens it alone at low energy,
which results in the usual Fermi liquid behavior. However, if
$\Delta_0=\Delta_2$ the flavor anisotropy disappears and the system
flows to a non-Fermi liquid fixed point (provided $\Delta_1$ is zero).
As a result, although this fixed point may be in the same universality
class as the conventional 2CK model, the mechanism that brings it
about is completely different physically.
\section{Accessibility of the two channel Kondo fixed point and Conclusions}
In the previous sections we have shown how the physical operators in
the tunnelling impurity problem behave near the two channel Kondo
fixed point. In particular, we observed that a linear combination,
$\Delta_+=\Delta_0+\Delta_2$, of the bare impurity hopping,
$\Delta_0$, and the impurity-plus-two-electron hopping, $\Delta_2$, is
relevant and drives the system away from this special critical point
resulting in conventional Fermi liquid behavior at low temperatures,
as for the usual one-channel Kondo system. If one could somehow tune
$\Delta_+$ (or one other coupling, such as the electronic hopping $y$),
then one might be able to tune through the critical point and find the
2CK non-Fermi liquid behavior at low temperatures in the vicinity of a
critical coupling. Unfortunately, such tuning over an adequate range
is probably difficult to achieve. Thus one probably has to rely on the
hope that a natural regime of couplings will lead to flow under
renormalization close to the 2CK fixed point. Vlad\'{a}r and
Zawadowski\cite{Zawadowski1} appear to suggest that this should be the
case. Unfortunately, more complete analysis implies the converse, that
only for fortuitous reasons would the impurity system --- even without
asymmetry between the sites --- exhibit 2CK behavior at low $T$.
In this last section, we use the weak coupling analysis of Section III
and Appendix A combined with the intermediate coupling analysis of
Section IV and Appendix B, to find criteria for approaching close to
the 2CK fixed point.
Since we are interested in systems in which the Kondo temperature is
much less than the bandwidth, the weak hopping behavior will control
the relative strengths of couplings at the Kondo scale, at which the
first of the hopping terms becomes of order the renormalized bandwidth.
Operators which are irrelevant for weak hopping will flow away rapidly
under renormalization, changing by finite amounts the remaining
parameters, $q$, $y$ and the $\left\{\Delta_i\right\}$, $i=0$, 1, 2.
For example, the complicated hopping-scattering term, $w$, which was
discussed in Section IV and plays a role near the intermediate
coupling fixed point, will be of order the hopping terms
$\left\{\Delta_i\right\}$ or smaller initially and flow away rapidly,
modifying, among other terms, $\Delta_+$ as in Eq(\ref{Delta+rgeq}),
in the process. This will be the main role of such a term and we can
incorporate its effects into a modified ``bare'' $\Delta_0$ and
$\Delta_2$. We thus start at an energy scale substantially below the
bandwidth at which the important parameters for $Q\in\left[0,1\right]$
are just $q$, $y$ and the $\left\{\Delta_i\right\}$ at this scale, the
irrelevant operators having become small. The relevant eigenvalues
for the hopping about the zero hopping fixed manifold will be
universal. For small $y$ they are given by Eq(\ref{rgivalues}).
If $Q$ is initially small, i.e. $q$ close to $1/2$, $\lambda_+$ will be
substantially larger than the other eigenvalues and thus a particular
linear combination of the $\left\{\Delta_i\right\}$ will grow fast.
Unfortunately, as can be seen from Appendix A, this combination
($\tilde{\Delta}_0$) is the wrong one to yield flow near the 2CK fixed
point as it {\em includes} $\Delta_+$. Only if this combination,
$\tilde{\Delta}_0$, is initially very small relative to a power of
another linear combination of the same $\left\{\Delta_i\right\}$, can
behavior near the 2CK critical point be obtained; furthermore the
criteria become more stringent the lower the Kondo temperature, as
shown in Appendix A.
Better prospects occur when $Q\approx1/2$ (i.e. $q$
small). Unfortunately, even if the one-electron plus impurity hopping
term, $\Delta_1$, were initially much bigger than $\Delta_0$ and
$\Delta_2$, the purely electronic hopping term $y$ --- determined
basically by the spatial separation of the two impurity sites\cite{MF}
--- would combine with $\Delta_1$ to generate the {\em wrong}
combination, $\Delta_+$, of $\Delta_0$ and $\Delta_2$, as in
Eq(\ref{Delta+rgeq}). Thus, again, unless $y$ is small the criteria
from Appendix A are very strict, as could be anticipated from the $y$
dependence of $\lambda_\pm$.
The best prospects are thus for $q$ and $y$ both small so that the
eigenvalues are all comparable. But this is just the condition for the
analysis of Section IV and Appendix B via refermionization to be
valid. To get near the 2CK fixed point, $\Delta_+$ must remain small,
thus we can study the RG equations Eq(\ref{intermediatergeqns}) to
leading order in $\Delta_+$ and the linear combinations of $y$ and
$q$, $\bar{y}$ and $\bar{q}$;
\begin{eqnarray}
\label{intermedrgeqnstext2}
\frac{d\Delta_K}{dl} &=&
\frac{\Delta_K}{2}\left(1-\frac{\Delta_K^2}{4\pi}\right)
\\ \nonumber
\frac{d\bar{q}}{dl} &=& -\frac{\Delta_K^2}{8\pi}\bar{q}
\\ \nonumber
\frac{d\bar{y}}{dl} &=& -\frac{\Delta_K^2}{8\pi}\bar{y}
\end{eqnarray}
and, ignoring $w$, from Eq(\ref{Delta+rgeq})
\begin{equation}
\label{Delta+intermediatergeq}
\frac{d\Delta_+}{dl} = \frac{1}{2} \Delta_+ + 2\bar{y}\Delta_K.
\end{equation}
At the intermediate coupling 2CK fixed point, the convention we have
chosen yields $\Delta_K^*=\sqrt{4\pi}$, so that $\bar{y}$ and
$\bar{q}$ have the correct eigenvalues there as well as for weak
coupling ($\Delta_K\approx 0$).
Integrating the above equations we
find that the criterion to flow to the 2CK fixed point is, to leading
order in $\Delta_+$ and $\bar{y}$
\begin{equation}
\label{critsurfacecriterionw/outw}
\Delta_+ -
\frac{2\bar{y}\Delta_K}{1-\frac{\Delta_K^2}{4\pi}}
\ln\left|\frac{\Delta_K^2}{4\pi}\right| =0.
\end{equation}
The parameters can be evaluated at any scale where
the intermediate coupling RG equations are valid, i.e. for small
$\Delta_+$, $\bar{y}$ and $\bar{q}$. If these are small at the
starting scale, then their starting values can be used. Note that
since $\Delta_K=\sqrt{\Delta_1^2+\left(\Delta_0-\Delta_2\right)^2}$,
unless $\Delta_0$ and $\Delta_2$ are almost exactly equal and
opposite, $\Delta_K$ will be at least as big as $\Delta_+$ initially
so that $\left|\bar{y}\ln\Delta_K\right|$ needs to be small rather than
just $y$, even in the best case of $q=1/2-Q$ small (recall that
$\bar{y}$ is a linear combination of $q$ and $y$ given by
Eq(\ref{defqybar})).
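As a consistency check (ours, illustrative), one can integrate
Eq(\ref{intermedrgeqnstext2}) and Eq(\ref{Delta+intermediatergeq})
numerically and verify that an initial condition chosen on the surface
Eq(\ref{critsurfacecriterionw/outw}) indeed flows to the 2CK fixed
point, while nearby initial conditions are carried away by the
relevant growth of $\Delta_+$:
\begin{verbatim}
# Illustrative integration (ours) of the intermediate-
# coupling flow equations: a start on the critical surface
# flows toward the 2CK fixed point; a nearby start is
# carried away by the relevant e^{l/2} growth of Delta_+.
import numpy as np

DK0, q0, y0 = 0.3, 0.01, 0.005  # small bare couplings
k0 = DK0**2/(4*np.pi)

def flow(s):
    DK, q, y, Dp = s
    k = DK**2/(4*np.pi)
    return np.array([0.5*DK*(1 - k),    # d(Delta_K)/dl
                     -0.5*k*q,          # d(qbar)/dl
                     -0.5*k*y,          # d(ybar)/dl
                     0.5*Dp + 2*y*DK])  # d(Delta_+)/dl

def integrate(Dp0, l_max=12.0, dl=1e-4):
    s = np.array([DK0, q0, y0, Dp0])
    for _ in range(int(l_max/dl)):
        s = s + dl*flow(s)              # Euler step
    return s

Dp_crit = 2*y0*DK0/(1 - k0)*np.log(k0)  # on the surface
on = integrate(Dp_crit)
off = integrate(Dp_crit + 0.01)
assert abs(on[0] - np.sqrt(4*np.pi)) < 0.01  # Delta_K at 2CK
assert abs(on[3]) < abs(off[3])/50  # Delta_+ stays small
\end{verbatim}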
We thus see that the flows near the 2CK
fixed point, in particular the generation via Eq(\ref{Delta+rgeq}) of
the unique relevant operator, $\Delta_+$ from other operators, yield
stringent conditions for the accessibility of the non-Fermi liquid 2CK
behavior. If there is no symmetry between the two sites, then the
presence of a second relevant operator (see Section IV) makes
prospects even worse.
This work strongly suggests that to observe two-channel-Kondo-like
non-Fermi liquid behavior of an impurity hopping between two sites in a
metal one must either be able to tune some parameters over a
substantial range, or be extremely lucky. This casts doubt on the
interpretation of Ralph {\em et al}\cite{Ralph1,Ralph2} of their narrow
constriction tunnelling data. One possibility, although perhaps
farfetched, is that these might be some kind of defect tunnelling in
an environment with higher symmetry, or at least approximate
symmetry. In another paper, we will show how an impurity hopping among
{\em three} sites with triangular symmetry can, without fine tuning,
lead to two channel Kondo behavior at low temperatures.
\acknowledgments
We would like to thank Andreas Ludwig, Jinwu Ye, Dan
Ralph, Jan von Delft, Igor Smolyarenko and especially Anirvan Sengupta
for useful discussions. This work was supported by the National
Science Foundation via grant DMR 91-06237.
\end{multicols}
\section{Artifact Appendix}
\input{extendedAppendix}
\end{document}
\section{Artifact Appendix}\label{sec:artifact}
\begin{table*}\small\centering
\begin{threeparttable}[b]
\caption{Where artifact contents are hosted.}\label{tbl:artifact}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{lll}
\toprule
\textbf{Content} & \textbf{Location} & \textbf{Branch / tag / release} \\ \midrule
\textbf{Artifact README} & \url{https://github.com/blockaid-project/artifact-eval} & \texttt{main} branch \\
\textbf{Blockaid\xspace{} source} & \url{https://github.com/blockaid-project/blockaid} & \texttt{main} branch (latest version) \\
& & \texttt{osdi22ae} branch (AE version)\tnote{a} \\
\textbf{Experiment launcher} & \url{https://hub.docker.com/repository/docker/blockaid/ae} & \texttt{latest} tag \\
\quad Launcher source & \url{https://github.com/blockaid-project/ae-launcher} & \texttt{main} branch \\
\textbf{VM image} & \url{https://github.com/blockaid-project/ae-vm-image} & \texttt{osdi22ae} release \\
\quad Experiment scripts & \url{https://github.com/blockaid-project/experiments} & \texttt{osdi22ae} branch \\
\textbf{Applications} \\
\quad diaspora*\xspace{} & \url{https://github.com/blockaid-project/diaspora} & \texttt{blockaid} branch\tnote{b} \\
\quad Spree & \url{https://github.com/blockaid-project/spree} & \texttt{bv4.3.0-orig} branch (original)\tnote{c} \\
& & \texttt{bv4.3.0} branch (modified)\tnote{d} \\
\quad Autolab & \url{https://github.com/blockaid-project/Autolab} & \texttt{bv2.7.0-orig} branch (original)\tnote{c} \\
& & \texttt{bv2.7.0} branch (modified)\tnote{d} \\
\textbf{Policies for applications} & \url{https://github.com/blockaid-project/app-policies} & \texttt{main} branch \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item [a] The ``AE version'' is the version of Blockaid\xspace{} used in artifact evaluation.
\item [b] The same diaspora*\xspace{} branch is used for both baseline and Blockaid\xspace{} measurements. The code added for Blockaid\xspace{} is gated behind conditionals that check whether Blockaid\xspace{} is in use.
\item [c] ``(original)'' denotes the original application modified only to run on top of JRuby.
\item [d] ``(modified)'' denotes the ``(original)'' code additionally modified to work with Blockaid\xspace{} (\Cref{sec:eval:change}).
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection*{Abstract}
Our artifact includes our Blockaid\xspace{} implementation, which is compatible with applications that can run atop the JVM and connect to a database via JDBC (\Cref{sec:impl}).
We also provide the three applications we used for our evaluation---modified according to \Cref{sec:eval:change}---as well as the data-access policy we wrote for each.
Finally, we provide a setup for reproducing the evaluation results from \Cref{sec:eval}.
\subsection*{Scope}
This artifact can be used to run the main experiments from this paper: the page load time (PLT) measurements (\Cref{sec:eval:plt}) and the fetch latency measurements (\Cref{sec:eval:breakdown} and \Cref{sec:eval:solvers}) on the three applications.
From these experiments, it generates \Cref{tbl:benchmark} (with URLs and descriptions omitted), \Cref{fig:fetch}, and \Cref{fig:solvers}.
Because the full experiment can be time- and resource-consuming (taking roughly 15~hours on six Amazon EC2 c4.8xlarge instances), the experiment launcher can be configured to take fewer measurement rounds at the expense of accuracy.
Our Blockaid\xspace{} implementation can also be used to enforce data-access policies on new applications, as long as they have been modified to satisfy our requirements (\Cref{sec:design:code}), run atop the JVM, and connect to the database using JDBC (\Cref{sec:impl}).
\subsection*{Contents}
This artifact consists of our Blockaid\xspace{} implementation, the three applications used in our evaluation (with modifications described in \Cref{sec:eval:change}), the data-access policy we wrote for each, and the scripts and virtual machine image for running the experiments.
\subsection*{Hosting}
See \Cref{tbl:artifact}.
\subsection*{Requirements}
The experiment launcher, which relies on Docker, launches experiments on Amazon EC2 and so requires an AWS account.
By default, it uses six c4.8xlarge instances---to run the PLT and fetch latency experiments for the three applications simultaneously.
However, it can be configured to launch fewer instances at a time (e.g., to run the experiments serially, using one instance at a time).
\subsection{Example}\label{sec:cache:ex}
Suppose a user with $\textit{UId}=1$ requests Event \#42 in the calendar application,
resulting in the application issuing a sequence of SQL queries.
Consider the third query, shown in~\Cref{fig:calendar-trace}.
As we explained in~\Cref{ex:trace-compliance}, Query~\#3 is compliant because
Query~\#2 has established that the user attends the event.
Blockaid\xspace{} aims to abstract this query (with trace) into a decision template that applies to another user viewing a different event.
\Cref{fig:template-ex} shows such a template; the notation says:
If each query-output pair above the line has a match in a trace~$\mathcal{T}$,
then any query of the form below the line is compliant given~$\mathcal{T}$.
This particular template states: after it is determined that
user~$x$ attends event~$y$, user~$x$ can view event~$y$
for any $x$ and $y$.
Compared with the concrete query and trace, this template
\begin{inparaenum}[(1)]
\item omits Query~\#1, which is immaterial to the compliance decision; and
\item replaces the concrete values with parameters.
\end{inparaenum}
Occurrences of~\prm{\texttt{?0}} here constrain the event ID fetched by the query to equal the previously checked event ID.
We use~\prm{\texttt{*}} to denote a fresh parameter, i.e., any arbitrary value is allowed.
We now dive into how Blockaid\xspace{} extracts such a decision template from a concrete query and trace.
But before we do so, let us first define what a decision template is, what it means for a template to have a match, and what makes a ``good'' template.
\subsection{Definitions and Goals}
For convenience, from now on we will denote a trace as a set of query-\emph{tuple} pairs $\{(Q_i,t_i)\}_{i=1}^{n}$,
where each $t_i$ is \emph{one} of the rows returned by~$Q_i$.
A query that returns multiple rows is represented as multiple such pairs.
\begin{mychange}%
This change of notation is permissible because under strong compliance (\Cref{def:strong-compliance}),
we no longer take into account the \emph{absence} of a returned row.
\begin{definition}
We say a trace~$\mathcal{T}=\{(Q_i,t_i)\}_{i=1}^{n}$ is \emph{feasible} if there exists a database~$D$ such that $t_i\in Q_i(D)$ for all $1\leq i\leq n$.
\end{definition}
\begin{definition}
A \emph{decision template} $\mathcal{D}[\mathbf{x},\mathbf{c}]$, where $\mathbf{c}$ denotes variables from the request context and $\mathbf{x}$ a sequence of variables disjoint from $\mathbf{c}$,
is a triple $(Q_{\mathcal{D}},\mathcal{T}_{\mathcal{D}},\Phi_{\mathcal{D}})$ where:
\begin{itemize}
\item $Q_{\mathcal{D}}$ is the \emph{parameterized query}, whose definition can refer to variables from~$\mathbf{x} \cup \mathbf{c}$;
\item $\mathcal{T}_{\mathcal{D}}$ is the \emph{parameterized trace}, whose queries and tuples can refer to variables from~$\mathbf{x}\cup\mathbf{c}$; and
\item $\Phi_{\mathcal{D}}$, the \emph{condition}, is a predicate over $\mathbf{x}\cup\mathbf{c}$.
\end{itemize}
We will often denote a template simply by $\mathcal{D}$ if the variables are either unimportant or clear from the context.
\end{definition}
As we later explain, $\Phi_{\mathcal{D}}$ represents any extra constraints that a template imposes on its variables (e.g., $\prm{?0} < \prm{?1}$).
\begin{definition}
A \emph{valuation}~$\nu$ over a collection of variables~$\mathbf{y}$ is a mapping from $\mathbf{y}$ to constants (including \texttt{NULL}),
extended to objects that contain variables in~$\mathbf{y}$.
For example, given a parameterized query~$Q$, $\nu(Q)$ denotes $Q$ with each occurrence of variable $y\in\mathbf{y}$ substituted with $\nu(y)$.
\end{definition}
\begin{definition}
Let $\mathcal{D}[\mathbf{x},\mathbf{c}]=(Q_{\mathcal{D}},\mathcal{T}_{\mathcal{D}},\Phi_{\mathcal{D}})$ be a decision template, $\mathit{ctx}$ be a request context,
$\mathcal{T}$ be a trace, and $Q$ be a query.
We say that $\mathcal{D}$ \emph{matches $(Q,\mathcal{T})$ under $\mathit{ctx}$} if there exists a valuation~$\nu$ over $\mathbf{x}\cup\mathbf{c}$ such that:
\begin{itemize}
\item $\nu(\mathbf{c})=\mathit{ctx}$,
\item $\nu(Q_{\mathcal{D}})=Q$,
\item $\left(\nu(Q_j), \nu(t_j)\right)\in \mathcal{T}$ for all $\left(Q_j,t_j\right)\in\mathcal{T}_{\mathcal{D}}$, and
\item $\nu(\Phi_{\mathcal{D}})$ holds.
\end{itemize}
\end{definition}
\end{mychange}
\begin{mychange}%
\begin{example}
\Cref{fig:template-ex} can be seen as a stylized rendition of a decision template $\mathcal{D}[\mathbf{x},\mathbf{c}]$
where $\mathbf{x}=(x_0,x_1)$---$x_0$ denoting \prm{\texttt{?0}} and $x_1$ denoting the occurrence of \prm{\texttt{*}}---and $\mathbf{c}=(\textit{MyUId})$;
$Q_{\mathcal{D}}$ and $\mathcal{T}_{\mathcal{D}}$ are as shown below and above the line;
and $\Phi_{\mathcal{D}}$ is the constant $\top$, meaning the template imposes no additional constraints on the variables.\footnote{Technically, this template requires $\textit{MyUId}\neq\texttt{NULL}\land x_0\neq \texttt{NULL}$. We omitted this condition in \Cref{fig:template-ex} because we assume the user ID parameter and the \textit{Attendances} table's \textit{EId} column are both non-NULL.}
Under the request context $\textit{MyUId}=1$, this template matches the query and trace in \Cref{fig:calendar-trace}
via the valuation $\{x_0\mapsto 42, x_1\mapsto \texttt{"05/04 1pm"}, \textit{MyUId}\mapsto 1\}$.
\end{example}
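To make the search for a valuation concrete, the following toy sketch
(ours, not the Blockaid\xspace{} implementation; queries and tuples are
crudely represented as nested tuples of constants and variables)
matches the stylized calendar template against the concrete trace by
backtracking unification:
\begin{verbatim}
# A toy matcher (ours). Variables are strings starting
# with "?"; we search for a valuation nu extending the
# request-context bindings.
def unify(pat, conc, nu):
    if isinstance(pat, tuple):
        if not isinstance(conc, tuple) or \
           len(pat) != len(conc):
            return None
        for p, c in zip(pat, conc):
            nu = unify(p, c, nu)
            if nu is None:
                return None
        return nu
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in nu:
            return nu if nu[pat] == conc else None
        return {**nu, pat: conc}
    return nu if pat == conc else None

def matches(t_query, t_trace, query, trace, nu):
    nu = unify(t_query, query, dict(nu))
    def search(entries, nu):
        if nu is None:
            return None
        if not entries:
            return nu
        for conc in trace:  # each entry needs SOME match
            found = search(entries[1:],
                           unify(entries[0], conc, nu))
            if found is not None:
                return found
        return None
    return search(list(t_trace), nu)

t_trace = [(("Attendances", "?u", "?0"), ("?u", "?0"))]
print(matches(("Events", "?0"), t_trace,
              ("Events", 42),
              [(("Attendances", 1, 42), (1, 42))],
              {"?u": 1}))  # -> {'?u': 1, '?0': 42}
\end{verbatim}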
We are interested only in templates that imply compliance.
\begin{definition}
A decision template $\mathcal{D}$ is \emph{sound} with respect to a policy~$\mathcal{V}$ if for every request context~$\mathit{ctx}$,
whenever $\mathcal{D}$ matches $(Q,\mathcal{T})$ under $\mathit{ctx}$, $Q$ is strongly $\mathit{ctx}$-compliant to~$\mathcal{V}$ given~$\mathcal{T}$.
\end{definition}
Blockaid\xspace{} can verify that a template is sound via the following theorem derived from strong compliance (\Cref{def:strong-compliance}):
\begin{theorem}
A decision template $\mathcal{D}[\mathbf{x},\mathbf{c}]=(Q_{\mathcal{D}},\mathcal{T}_{\mathcal{D}},\Phi_{\mathcal{D}})$ is sound with respect to a policy~$\mathcal{V}$ if and only if:
\begin{align*}
&\forall \mathbf{x}, \mathbf{c}, D_1, D_2\ldotp \\
&\quad\left.
\begin{aligned}
\Phi_{\mathcal{D}} \\
\forall V\in\mathcal{V}\ldotp V(D_1) \subseteq V(D_2) \\
\forall (Q_i,t_i)\in\mathcal{T}_{\mathcal{D}}\ldotp t_i \in Q_i(D_1)
\end{aligned}
\right\} \implies Q_{\mathcal{D}}(D_1)\subseteq Q_{\mathcal{D}}(D_2).
\end{align*}
\end{theorem}
For a compliant query~$Q$ (with trace~$\mathcal{T}$) that misses the cache, there often exist many sound templates that match $(Q,\mathcal{T})$.
But not all such templates are equal---we prefer the more \emph{general} ones: those that match a wider range of \emph{other} queries and traces.
\begin{definition}
A template~$\mathcal{D}_1$ is \emph{at least as general as} a template $\mathcal{D}_2$ if for every query~$Q$ and feasible trace~$\mathcal{T}$,
if $\mathcal{D}_2$ matches $(Q,\mathcal{T})$, $\mathcal{D}_1$ also matches $(Q,\mathcal{T})$.
\end{definition}
Thus, Blockaid\xspace{} aims to generate a decision template that
\begin{inparaenum}[(1)]
\item is sound,
\item matches $(Q,\mathcal{T})$, and
\item is general enough for practical purposes.
\end{inparaenum}
We now explain how this is achieved.
\end{mychange}
\subsection{Generating Decision Templates}\label{sec:cache:core}
\begin{mychange}%
Blockaid\xspace{} starts from the trivial template $D_0=(Q,\mathcal{T},\top)$, which is sound but not general,
and generalizes it in two steps:
\begin{enumerate}
\item Minimize the trace~$\mathcal{T}$ to retain only those $(Q_i,t_i)$~pairs that are required for $Q$'s compliance (\Cref{sec:cache:core:min}).
\item Replace each constant in the trace and query with a fresh variable,
and then generate a weak condition~$\Phi$ over the variables that guarantees compliance (\Cref{sec:cache:core:cond}).
\end{enumerate}%
\end{mychange}
\subsubsection{Step One: Trace Minimization}\label{sec:cache:core:min}
\begin{mychange}%
Blockaid\xspace{} begins by finding a minimal sub-trace of~$\mathcal{T}$ that preserves compliance.
It removes each $(Q_i,t_i)\in\mathcal{T}$ in turn and, if $Q$ is no longer compliant, adds the element back.
For example, for \Cref{fig:calendar-trace} this step removes Query~\#1.
Denote the resulting minimal trace by $\T_{\text{min}}$ and let decision template $\mathcal{D}_1=(Q,\T_{\text{min}},\top)$.
\begin{proposition}
$\mathcal{D}_1$ is sound, matches $(Q,\mathcal{T})$, and is at least as general as $\mathcal{D}_0$.
\end{proposition}
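This deletion loop can be sketched as follows, where \texttt{is\_compliant} stands for a hypothetical callback that runs the solver on the compliance formula for the remaining sub-trace:
\begin{minted}{python}
def minimize_trace(trace, is_compliant):
    # Greedy deletion: drop each (Q_i, t_i) and restore it if Q stops
    # being compliant. The result is minimal but not necessarily minimum.
    kept = list(trace)
    for entry in list(kept):
        kept.remove(entry)
        if not is_compliant(kept):
            kept.append(entry)
    return kept
\end{minted}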
As an optimization, Blockaid\xspace{} starts the minimization
from the sub-trace that the solver has actually used to prove compliance.
It extracts this information from a solver-generated \emph{unsat core}~\cite[\S~11.8]{barrett18:smt}---%
a subset of clauses in the formula that remains unsatisfiable even with all other clauses removed.
If we attach \emph{labels} to the clauses we care about,
a solver will identify all labels in the unsat core when it proves the formula unsatisfiable.
To get an unsat core, Blockaid\xspace{} uses the following formula:
\begin{align*}
&& V^{\mathit{ctx}}(D_1) &\subseteq V^{\mathit{ctx}}(D_2), & (\forall V\in\mathcal{V}) \\
[\textit{LQ}_i] && t_i &\in Q_i(D_1), & (\forall (Q_i,t_i)\in\mathcal{T}) \\
&& Q(D_1) &\not\subseteq Q(D_2),
\end{align*}
where the clause asserting the $i$\textsuperscript{th} trace entry is labeled $\textit{LQ}_i$.
If $Q$ is compliant, the solver returns as the unsat core a set~$S$ of labels.
Blockaid\xspace{} ignores any $(Q_i,t_i)\in\mathcal{T}$ for which $\textit{LQ}_i\not\in S$.%
\end{mychange}
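To illustrate the labeling mechanism, the toy snippet below uses Z3's Python API with Boolean stand-ins for the clauses; in Blockaid\xspace{} the labeled clauses are the trace assertions $t_i\in Q_i(D_1)$:
\begin{minted}{python}
from z3 import Solver, Bool, Not, unsat

p, q = Bool('p'), Bool('q')          # stand-ins for two trace clauses
s = Solver()
s.assert_and_track(p, Bool('LQ_1'))  # labeled clause for trace entry 1
s.assert_and_track(q, Bool('LQ_2'))  # labeled clause for trace entry 2
s.add(Not(p))                        # unlabeled "noncompliance" clause
assert s.check() == unsat
print(s.unsat_core())                # prints [LQ_1]: entry 2 can be ignored
\end{minted}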
\subsubsection{Interlude: Model Finding for Satisfiable Formulas}
A common operation in template generation is to remove parts of a formula and re-check satisfiability.
A complication arises when the formula turns satisfiable---%
while solvers are adept at proving unsatisfiability, they often fail on satisfiable formulas.%
\footnote{For example, finite model finders in CVC4~\cite{reynolds13:finite} and Vampire~\cite{reger16:finite} often time out or run out of memory on tables with only tens of columns.}
\begin{mychange}%
To solve these formulas faster,
we observe that
they are typically satisfied by databases with small tables.
We thus construct SMT formulas to directly seek such ``small models'' by representing each table not as an uninterpreted relation,
but as a conditional table~\cite{Imielinski84} whose size is bounded by a small constant.
A conditional table generalizes a regular table by
\begin{inparaenum}[(1)]
\item allowing variables in its entries, and
\item associating each row with a \emph{condition}, i.e., a Boolean predicate for whether the row exists.
\end{inparaenum}
For example, a \textit{Users} table with a bound of~2 appears as:
\begin{center}
\begin{tabular}{cccc}
\toprule
\textit{UId} & \textit{Name} & Exists? \\
\midrule
$x_{u,1}$ & $x_{n,1}$ & $b_1$ \\
$x_{u,2}$ & $x_{n,2}$ & $b_2$ \\
\bottomrule
\end{tabular}
\end{center}
where each entry and condition is a fresh variable, signifying that the table is not constrained in any way other than its size.%
Queries on conditional tables are evaluated via an extension of the relational algebra operators~\cite[\S~7]{Imielinski84}.
This allows queries to be encoded into SMT without using quantifiers or relation symbols for tables.%
\footnote{To avoid using quantifiers in these formulas, we drop the transitivity axiom for the uninterpreted less-than relation (\Cref{sec:checking:opt}).}
For example, the query
\lstinline{SELECT Name FROM Users WHERE UId = 5} can be written as:
\end{mychange}
\[
\mathsf{Q}(\mathsf{x}_n) \coloneqq
\bigvee_{i=1}^{2}
\left(
x_{u,i}=5 \land x_{n,i}=\mathsf{x}_n \land b_i
\right).
\]
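As a concrete sketch, the bounded \textit{Users} table and this query can be encoded with Z3 as follows (we use integers here for readability, whereas Blockaid\xspace{} models SQL values with uninterpreted sorts, \Cref{sec:checking:opt}):
\begin{minted}{python}
from z3 import Ints, Bools, Or, And, Int, Solver

x_u1, x_u2, x_n1, x_n2 = Ints('x_u1 x_u2 x_n1 x_n2')  # table entries
b1, b2 = Bools('b1 b2')                               # row-existence flags

def Q(x_n):  # encodes (x_n,) in Q(D) over the bounded Users table
    return Or(And(x_u1 == 5, x_n1 == x_n, b1),
              And(x_u2 == 5, x_n2 == x_n, b2))

s = Solver()
s.add(Q(Int('result')))
print(s.check())  # sat: a model assigns the two rows and their flags
\end{minted}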
We found that such formulas could be solved quickly by Z3.
After Blockaid\xspace{} generates an unsat core as described in \Cref{sec:cache:core:min},
it switches to using bounded formulas (i.e., ones that use conditional tables instead of uninterpreted relations) for the remainder of template generation.
Blockaid\xspace{} sets a table's bound to one plus the number of rows required to produce the sub-trace induced by the unsat core;%
\footnote{If the bounds are too small for a database to produce the trace, the resulting formula will be unsatisfiable regardless of compliance.}
it relies on the solvers to produce small unsat cores to keep formula sizes manageable.
Care must be taken because using bounded formulas breaks soundness---%
a query compliant on small tables might not be compliant on larger ones.
Therefore, after a decision template is produced, Blockaid\xspace{} verifies its soundness against the unbounded formula; if this check fails, it increments the table bounds and retries.
\subsubsection{Step Two: Find Value Constraints}\label{sec:cache:core:cond}
Taking the template $\mathcal{D}_1=(Q,\T_{\text{min}},\top)$ from Step~1,
Blockaid\xspace{} generalizes it further by abstracting away the constants.
To do so, Blockaid\xspace{} \emph{parameterizes} $\T_{\text{min}}$ and $Q$ by replacing each occurrence of a constant with a fresh variable.
We use a superscript ``p'' to denote the parameterized version of a query, tuple, or trace.
\Cref{fig:parameterized} shows~$\paramd{\T_{\text{min}}}$ and~$\paramd{Q}$ from our example.
As an optimization, Blockaid\xspace{} assigns the same variable (e.g., \texttt{x0}) to locations that are guaranteed by SQL semantics to be equal.
\begin{mylisting}
\caption{Parameterization and candidate atoms for~\Cref{fig:calendar-trace}.}\label{fig:base-template}%
\begin{submylisting}{\columnwidth}
\vspace{-1.5ex}
\caption{Parameterized trace~$\paramd{\T_{\text{min}}}$ and query~$\paramd{Q}$.}\label{fig:parameterized}%
\begin{mdframed}[skipabove=0em]\small
\begin{enumerate}[leftmargin=*,start=2]
\item
\begin{minted}{sql}
SELECT * FROM Attendances
WHERE UId = |\prm{x0}| AND EId = |\prm{x1}|
\end{minted}
\begin{rrlist}
\item \verb!(UId = !\prm{\texttt{x0}}\verb!, EId = !\prm{\texttt{x1}}\verb!, ConfirmedAt = !\prm{\texttt{x2}}\verb!)!
\end{rrlist}
\end{enumerate}
\rule{\textwidth}{0.4pt}
\begin{enumerate}[leftmargin=*,start=3]
\item
\begin{minted}{sql}
SELECT * FROM Events WHERE EId = |\prm{x3}|
\end{minted}
\end{enumerate}
\end{mdframed}%
\end{submylisting}
\begin{submylisting}{\columnwidth}
\vspace{1.5ex}
\caption{Candidate atoms (with symmetric duplicates removed).}\label{fig:constraints}%
\begin{mdframed}[skipabove=0em]\small
\begin{minipage}[t]{.36\columnwidth}
Form \texttt{x = v}:
\begin{itemize}
\item \texttt{MyUId = 1}
\item \texttt{x0 = 1}
\item \texttt{x1 = 42}
\item \texttt{x2 = "05/04 1pm"}
\item \texttt{x3 = 42}
\end{itemize}
\end{minipage}\hfill%
\begin{minipage}[t]{.29\columnwidth}
Form \texttt{x = x'}:
\begin{itemize}
\item \texttt{MyUId = x0}
\item \texttt{x1 = x3}
\end{itemize}
\end{minipage}\hfill%
\begin{minipage}[t]{.29\columnwidth}
Form \texttt{x < x'}:
\begin{itemize}
\item \texttt{MyUId < x1}
\item \texttt{MyUId < x3}
\item \texttt{x0 < x1}
\item \texttt{x0 < x3}
\end{itemize}
\end{minipage}
\end{mdframed}%
\end{submylisting}
\end{mylisting}
\begin{mychange}%
Blockaid\xspace{} must now generate a condition $\Phi$ such that the resulting template
$\mathcal{D}_2=(\paramd{Q}, \paramd{\T_{\text{min}}}, \Phi)$ meets our goals.
It picks as $\Phi$ a conjunction of atoms from a set of \emph{candidate atoms}.
Let $\mathbf{x}$ denote all variables generated from parameterization, and let $\nu$ map~$\mathbf{x}$ to the replaced constants
and $\mathbf{c}$ to the current context~$\mathit{ctx}$.
\begin{definition}\label{def:candidate}
The set of \emph{candidate atoms} is defined as:
\[
C = \bigcup
\begin{cases}
\{ \texttt{x = v} &\mid x\in\mathbf{x}\cup\mathbf{c}, v=\nu(x)\neq\texttt{NULL} \} \\
\{ \texttt{x IS NULL} &\mid x\in\mathbf{x}\cup\mathbf{c}, \nu(x)=\texttt{NULL} \} \\
\{ \texttt{x = x'} &\mid x,x'\in\mathbf{x}\cup\mathbf{c}, \nu(x)=\nu(x')\neq\texttt{NULL} \} \\
\{ \texttt{x < x'} &\mid x,x'\in\mathbf{x}\cup\mathbf{c}, \nu(x)<\nu(x') \}
\end{cases}.
\]
(We write atoms in \texttt{monospace font} to distinguish them from mathematical expressions.
Following SQL, the ``\texttt{=}'' in an atom implies that both sides are non-\texttt{NULL}.)
\end{definition}
Note that all candidate atoms hold on $Q$ and $\T_{\text{min}}$.
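For intuition, the set~$C$ can be enumerated directly from the valuation~$\nu$. A sketch follows, where \texttt{NULL} is modeled as Python's \texttt{None} and symmetric \texttt{x = x'} duplicates are avoided by ordering variable names; applied to the valuation behind \Cref{fig:constraints} (without \texttt{x2}), it reproduces the atoms listed there:
\begin{minted}{python}
def candidate_atoms(nu):
    xs = sorted(nu)
    atoms = [f"{x} IS NULL" if nu[x] is None else f"{x} = {nu[x]!r}"
             for x in xs]
    atoms += [f"{x} = {y}" for x in xs for y in xs
              if x < y and nu[x] is not None and nu[x] == nu[y]]
    atoms += [f"{x} < {y}" for x in xs for y in xs
              if nu[x] is not None and nu[y] is not None
              and type(nu[x]) is type(nu[y]) and nu[x] < nu[y]]
    return atoms
\end{minted}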
Blockaid\xspace{} now selects a subset that not only guarantees compliance, but also imposes relatively few restrictions on the variables.
\begin{definition}
With respect to $\paramd{Q}$ and $\paramd{\T_{\text{min}}}$, a subset of atoms $C_0\subseteq C$ is \emph{sound} if the decision template $(\paramd{Q}, \paramd{\T_{\text{min}}}, \bigwedge C_0)$ is sound.
($\bigwedge C_0$ denotes the conjunction of atoms in~$C_0$.)
\end{definition}
\begin{definition}
Let $C_1,C_2\subseteq C$.
We say that $C_2$ is \emph{at least as weak as} $C_1$ (denoted $C_1\preceq C_2$) if $\bigwedge C_1 \implies \bigwedge C_2$,
and that $C_2$ is \emph{weaker} than $C_1$ if $C_1\preceq C_2$ but $C_2\not\preceq C_1$.
\end{definition}
\begin{example}
\Cref{fig:constraints} shows all the candidate atoms from \Cref{fig:parameterized} (after omitting symmetric ones in the \texttt{x = x'} group).
Consider the following two subsets of atoms:
\begin{align*}
C_1 &=
\begin{Bmatrix}
\texttt{MyUId = x0}, & \texttt{x1 = 42}, & \texttt{x3 = 42}
\end{Bmatrix},\\
C_2 &=
\begin{Bmatrix}
\texttt{MyUId = x0}, & \texttt{x1 = x3}
\end{Bmatrix}.
\end{align*}
While both are sound, $C_2$ is preferred over $C_1$ as it is weaker and thus applies in more scenarios.
In fact, $C_2$ is \emph{maximally} weak: there exists no subset that is both sound and weaker than~$C_2$.
\end{example}
Ideally, Blockaid\xspace{} would produce a maximally weak sound subset of~$C$ to use as the template condition, but finding one can be expensive.
It thus settles for finding a subset that is weak enough for practical generalization. It does so in three steps.%
\end{mychange}
\textbf{First}, as a starting point, Blockaid\xspace{} generates a minimal unsat core of the formula:
\begin{align*}
&& V^{\mathit{ctx}}(D_1) &\subseteq V^{\mathit{ctx}}(D_2), & (\forall V\in\mathcal{V}) \\
&& \paramd{t_i} &\in \paramd{Q_i}(D_1), & (\forall (\paramd{Q_i}, \paramd{t_i})\in \paramd{\T_{\text{min}}}) \\
[\textit{LC}_i] && c_i &, & (\forall c_i\in C) \\
&& \paramd{Q}(D_1) &\not\subseteq \paramd{Q}(D_2).
\end{align*}
Let $C_{\text{core}}$ denote the atoms whose label appears in the unsat core.
For example, $C_{\text{core}}=\Set{\texttt{MyUId = x0}, \texttt{x1 = 42}, \texttt{x3 = 42} }$.
\textbf{Second}, it augments $C_{\text{core}}$ with other atoms that are implied by it:
$C_{\text{aug}}=\Set{c\in C | \bigwedge C_{\text{core}} \implies c}$.
In our example,
\begin{align*}
C_{\text{aug}} &= C_{\text{core}}\cup \Set{\texttt{x1 = x3}} \\
&=
\begin{Bmatrix}
\texttt{MyUId = x0}, & \texttt{x1 = 42}, & \texttt{x3 = 42}, & \texttt{x1 = x3}
\end{Bmatrix}.
\end{align*}
$C_{\text{aug}}$ enjoys a closure property: if $C_0\subseteq C_{\text{aug}}$ and $C_0\preceq C_1$, then $C_1\subseteq C_{\text{aug}}$.
In particular, $C_{\text{aug}}$ contains a maximally weak sound subset of~$C$.
Thus, Blockaid\xspace{} focuses its search within $C_{\text{aug}}$.
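Each membership test for $C_{\text{aug}}$ is itself a small SMT query: an atom~$c$ is implied exactly when $\bigwedge C_{\text{core}} \land \lnot c$ is unsatisfiable. A sketch, where \texttt{to\_smt} is a hypothetical translation from atoms to Z3 terms:
\begin{minted}{python}
from z3 import Solver, And, Not, unsat

def augment(core, candidates, to_smt):
    aug = set(core)
    for c in candidates:
        s = Solver()
        s.add(And([to_smt(a) for a in core]))  # premises: the core atoms
        s.add(Not(to_smt(c)))                  # negated candidate atom
        if s.check() == unsat:                 # the core implies c
            aug.add(c)
    return aug
\end{minted}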
\textbf{Finally}, as a proxy for weakness,
Blockaid\xspace{} finds a \emph{smallest} sound subset of $C_{\text{aug}}$, denoted $C_{\text{small}}$, breaking ties arbitrarily.
It does so using the \textsf{MARCO}\xspace{} algorithm~\cite{liffiton13:marco,previti13:emus,liffiton16:marco} for minimal unsatisfiable subset enumeration,
modified to enumerate from small to large and to stop after finding the first sound subset.
In our example, the algorithm returns $C_{\text{small}}=\Set{\texttt{MyUId=x0}, \texttt{x1=x3}}$ of cardinality two, which is also a maximally weak subset (even though this might not be the case in general).%
\footnote{For example, $\Set{\texttt{x < y}, \texttt{x < z}}$ is strictly weaker than $\Set{\texttt{x < y}, \texttt{y < z}}$ even though the two sets have the same cardinality.}
Nevertheless, searching for a smallest sound subset has produced templates that generalize well in practice.
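For intuition, a naive small-to-large search over $C_{\text{aug}}$ is sketched below; the \textsf{MARCO}\xspace{}-based implementation achieves the same result while pruning the search space. The loop always terminates because the full set of atoms is sound by construction.
\begin{minted}{python}
from itertools import combinations

def smallest_sound_subset(atoms, is_sound):
    # is_sound is a hypothetical solver callback for template soundness.
    atoms = list(atoms)
    for k in range(len(atoms) + 1):
        for subset in combinations(atoms, k):
            if is_sound(set(subset)):
                return set(subset)  # first hit: a smallest sound subset
\end{minted}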
\medskip
\begin{mychange}%
At the end, Blockaid\xspace{} produces the decision template:
\[
\mathcal{D}_2[\mathbf{x},\mathbf{c}] = \left(\paramd{Q}, \paramd{\T_{\text{min}}}, \bigwedge C_{\text{small}}\right).
\]
\begin{proposition}
$\mathcal{D}_2$ is sound, matches $(Q,\mathcal{T})$, and is at least as general as $\mathcal{D}_1$.
\end{proposition}
As an optimization, whenever $\bigwedge C_{\text{small}} \implies x=y$ for $x,y\in\mathbf{x}\cup\mathbf{c}$, Blockaid\xspace{} replaces $x$ with $y$ in the template.
This is how, e.g., in \Cref{fig:template-ex} \prm{?0} appears in both the trace and the query.%
\end{mychange}
\subsubsection{Optimizations}\label{sec:cache:core:opt}
We implement two optimizations that improve the performance of template generation
and the generality of templates.
\paragraph{Omit irrelevant tables.} Given trace~$\mathcal{T}$ and query~$Q$,
we call a table \emph{relevant} if (1)~it appears in~$\mathcal{T}$ or~$Q$,
or (2)~it appears on the right-hand side of a database constraint of the form~$Q_1\subseteq Q_2$
whose left-hand side mentions a relevant table.%
\footnote{Every constraint encountered in our evaluation can be written in the form
$Q_1\subseteq Q_2$, including primary-key, foreign-key, and integrity constraints.}
Blockaid\xspace{} sets the size bounds of irrelevant tables to zero, reducing formula size while preserving compliance.
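Relevance can be computed as a least fixpoint over the constraints. A sketch, abstracting each constraint as the pair of table sets appearing on its two sides:
\begin{minted}{python}
def relevant_tables(seed, constraints):
    # constraints: iterable of (lhs_tables, rhs_tables) pairs for Q1 <= Q2
    relevant = set(seed)              # tables in the trace or the query
    changed = True
    while changed:
        changed = False
        for lhs, rhs in constraints:
            if relevant & set(lhs) and not set(rhs) <= relevant:
                relevant |= set(rhs)  # propagate relevance rightward
                changed = True
    return relevant
\end{minted}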
\paragraph{Split \texttt{IN}.}
A query~$Q$ that contains ``$c~\texttt{IN}~(x_1,x_2,\ldots,x_n)$''
often produces a template with a long trace.
If $Q$ is a basic query that does not contain the \texttt{NOT}~operator,
it can be split into $q_1,\ldots,q_n$ where $q_i$ denotes~$Q$ with the \texttt{IN}-construct
substituted with~$c = x_i$, such that $Q \equiv q_1 \cup \ldots \cup q_n$.
If $q_1,\ldots,q_n$ are all compliant then so is~$Q$,
and so Blockaid\xspace{} checks the subqueries instead.
This is usually fast because $q_2,\ldots,q_n$ typically match the decision template generated from~$q_1$.
If any~$q_i$ is not compliant, Blockaid\xspace{} reverts to checking~$Q$ as a whole.
This optimization also improves generalization.
Suppose $Q'$ has structure identical to~$Q$ but a different number of \texttt{IN}~operands.
It would not match a template generated from~$Q$,
but its split subqueries~$q_i'$ could match the template from~$q_1$.
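A string-level sketch of the splitting step follows; the actual system operates on parsed query representations rather than raw SQL text:
\begin{minted}{python}
def split_in(prefix, column, operands):
    # 'SELECT ... WHERE c IN (v1, ..., vn)'  ->  n equality subqueries
    return [f"{prefix} WHERE {column} = {v}" for v in operands]

def compliant_via_split(subqueries, is_compliant):
    # Accept Q iff every subquery passes; the caller reverts to
    # checking Q as a whole when any subquery fails.
    return all(is_compliant(q) for q in subqueries)
\end{minted}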
\subsection{Decision Cache and Template Matching}\label{sec:cache:cache}
\begin{mychange}%
Blockaid\xspace{} stores decision templates in its \emph{decision cache}, indexing them by their parameterized query using a hash map.
When checking a query~$Q$, Blockaid\xspace{} lists all templates whose parameterized query matches~$Q$; for each such template, it uses recursive backtracking (with pruning optimizations) to search for a valuation that results in a match.
This simple method proves efficient in practice as the templates tend to be small.%
\end{mychange}
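The backtracking search can be sketched as follows, with rows as tuples and template variables marked by a leading \texttt{x}; this is a simplification of the actual matcher, which also checks the template condition~$\Phi$ under the candidate valuation:
\begin{minted}{python}
def unify(pat, val, binding):
    if isinstance(pat, str) and pat.startswith('x'):  # template variable
        if pat in binding:
            return binding[pat] == val
        binding[pat] = val
        return True
    return pat == val                                 # constant

def match_rows(pattern_rows, concrete_rows, binding):
    if not pattern_rows:
        return dict(binding)
    head, rest = pattern_rows[0], pattern_rows[1:]
    for row in concrete_rows:          # backtrack over candidate rows
        trial = dict(binding)
        if len(head) == len(row) and all(
                unify(p, v, trial) for p, v in zip(head, row)):
            result = match_rows(rest, concrete_rows, trial)
            if result is not None:
                return result
    return None
\end{minted}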
\subsection{Translating Noncompliance to SMT}\label{sec:checking:smt}
Blockaid\xspace{} verifies query compliance by framing \emph{noncompliance} (i.e., the negation of \Cref{def:compliance}) as an SMT formula and checking its satisfiability---%
a query is compliant if and only if the formula is \emph{unsatisfiable}.
We use a straightforward translation based on Codd's theorem~\cite{Codd72:relational-completeness},
which states, informally,
that relational algebra under set semantics is equivalent in expressiveness to first-order logic (FOL).
Relational algebra has five operators---projection, selection, cross product, union, and difference---%
and tables are interpreted as \emph{sets} of rows (i.e., no duplicates).
Under this equivalence, tables are translated to predicates in FOL,
and operators are implemented using existential quantifiers, conjunctions, disjunctions, and negations.
\begin{example}\label{ex:query-fol}
Let us translate into FOL the following query~$Q$ executed on a database~$D$:
\begin{lstlisting}
SELECT e.EId, e.Title
FROM Events e, Attendances a
WHERE e.EId = a.EId AND a.UId = 2
\end{lstlisting}
Let $E^D(\cdot,\cdot,\cdot)$ and $A^D(\cdot,\cdot,\cdot)$ be FOL predicates representing
the $\textit{Events}$ and $\textit{Attendances}$ tables in the database~$D$:
\begin{align*}
\mathsf{Q}^D(\mathsf{x}_e,\mathsf{x}_t) &\coloneqq \exists x_d, x_u, x_e', x_c\ldotp
E^D(\mathsf{x}_e, \mathsf{x}_t, x_d) \land A^D(x_u,x_e',x_c) \\
& \qquad \land \mathsf{x}_e=x_e' \land x_u = 2.
\end{align*}
$\mathsf{Q}^D(\mathsf{x}_e,\mathsf{x}_t)$ encodes the statement $(\mathsf{x}_e,\mathsf{x}_t)\in Q(D)$,
i.e., that the row~$(\mathsf{x}_e,\mathsf{x}_t)$ is returned by $Q$ on database~$D$.
Note that $\mathsf{Q}^D$~is not a logical symbol, but merely a shorthand for the right-hand side.
\end{example}
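Under Blockaid\xspace's SMT encoding (\Cref{sec:checking:opt}), the same formula can be built with Z3 using an uninterpreted sort for SQL values and uninterpreted predicates for the tables. A sketch, where the constant \texttt{two} stands for the literal~2:
\begin{minted}{python}
from z3 import DeclareSort, Function, BoolSort, Consts, Exists, And

T = DeclareSort('T')  # SQL values as a single uninterpreted sort
E = Function('Events', T, T, T, BoolSort())       # (EId, Title, Date)
A = Function('Attendances', T, T, T, BoolSort())  # (UId, EId, ConfirmedAt)
x_e, x_t, x_d, x_u, x_e2, x_c, two = Consts(
    'x_e x_t x_d x_u x_e2 x_c two', T)

# encodes (x_e, x_t) in Q(D)
Q = Exists([x_d, x_u, x_e2, x_c],
           And(E(x_e, x_t, x_d), A(x_u, x_e2, x_c),
               x_e == x_e2, x_u == two))
\end{minted}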
\begin{example}
We now present the noncompliance formula for a single query~$Q$ with respect to~$\mathcal{V}$ from \Cref{sec:spec:spec}.
Let $\mathsf{V}_1^{D_i},\ldots,\mathsf{V}_4^{D_i}$ and~$\mathsf{Q}^{D_i}$ encode the views and query on database~$D_i$ ($i=1,2$) in FOL.
The desired formula would then be the conjunction of:
\begin{align*}
\forall \mathbf{x}\ldotp \mathsf{V}_1^{D_1}(\mathbf{x}) &\leftrightarrow \mathsf{V}_1^{D_2}(\mathbf{x}), & (V_1(D_1)=V_1(D_2)) \\
&\vdots \\
\forall \mathbf{x}\ldotp \mathsf{V}_4^{D_1}(\mathbf{x}) &\leftrightarrow \mathsf{V}_4^{D_2}(\mathbf{x}), & (V_4(D_1)=V_4(D_2)) \\
\exists \mathbf{x}\ldotp \mathsf{Q}^{D_1}(\mathbf{x}) &\not\leftrightarrow \mathsf{Q}^{D_2}(\mathbf{x}), & (Q(D_1)\neq Q(D_2))
\end{align*}
where $\mathbf{x}$ denotes a sequence of fresh variables.
Database constraints and consistency with a trace can be encoded similarly.
\end{example}
\subsection{Handling Practical SQL Queries}\label{sec:checking:sql}
The encoding of relational algebra into logic, while straightforward, fails to cover real-world SQL due to two semantic gaps:
\begin{enumerate}
\item While the encoding assumes that relational algebra is evaluated under \emph{set semantics},
in practice databases use a mix of set, bag, and other semantics when evaluating queries.\footnote{
For example, a SQL \texttt{SELECT} clause can return duplicate rows,
but the \texttt{UNION} operator removes duplicates.
}
\item SQL operations like aggregation and sorting have no corresponding operators in relational algebra.
\end{enumerate}
For Blockaid\xspace to bridge these gaps, it must first assume that database tables contain no duplicate rows.
This is generally the case for web applications as object-relational mapping libraries like Active~Record~\cite{ActiveRecord} and Django~\cite{Django} add a primary key for every table.
Given this assumption, Blockaid\xspace rewrites complex SQL into \emph{basic queries} that map directly to relational algebra.
\subsubsection{Basic SQL Queries}\label{sec:checking:sql:basic}
\begin{definition}\label{def:basic}
A \emph{basic} query is either a \texttt{SELECT}-\texttt{FROM}-\texttt{WHERE} query that never returns duplicate rows,
or a \texttt{UNION} of \texttt{SELECT}-\texttt{FROM}-\texttt{WHERE}~clauses
(the \texttt{UNION} always removes duplicates).\footnote{The \texttt{MINUS}~operator is not used in our applications and is omitted.}
\end{definition}
A basic query on duplicate-free tables maps to relational algebra under set semantics, and so can be directly translated to FOL.
To ensure a \texttt{SELECT} query is basic, we check it against these sufficient conditions for returning no duplicate rows:
\begin{itemize}
\item It contains the \texttt{DISTINCT} keyword or ends in \texttt{LIMIT 1}; or
\item It projects unique key column(s) from every table in~\texttt{FROM}, e.g.,
\mintinline{sql}{SELECT UId, Name FROM Users}; or
\item It is constrained by uniqueness in its \texttt{WHERE}~clause---e.g.:
\begin{lstlisting}
SELECT e.EId
FROM Events e, Attendances a
WHERE e.EId = a.EId AND a.UId = 2
\end{lstlisting}
For this query to return multiple copies of $x$,
the database must contain multiple rows of the form $\textit{Attendances}(2,x,\texttt{?})$;
this is ruled out by the uniqueness constraint on $(\textit{UId},\textit{EId})$.
\end{itemize}
\begin{mychange}%
In our experience, policy views can typically be written as basic queries directly---%
e.g., for \Cref{listing:views} we can frame $V_3$ and $V_4$ as equivalent basic queries
by replacing subqueries with joins and using the inner join transformation from \Cref{sec:checking:sql:rewrite}.
\end{mychange}
\subsubsection{Rewriting Into Basic Queries}\label{sec:checking:sql:rewrite}
\begin{mychange}%
When the application issues a query~$Q$, Blockaid\xspace{} attempts to rewrite it into a basic query~$Q'$ and verify its compliance instead.
Ideally, $Q'$ would be equivalent to $Q$, but when this is not possible,
Blockaid\xspace{} produces an \emph{approximate} $Q'$ that reveals \emph{at least as much} information as $Q$~does.%
\footnote{It suffices to guarantee that $Q$ can be computed from the result of~$Q'$.}
Such approximation preserves soundness but may sacrifice completeness,
although it caused no false rejections in our evaluation.
We now explain how to rewrite several types of queries encountered in practice.%
\end{mychange}
\paragraph{Inner joins.}
A query of the form:
\begin{lstlisting}
SELECT ... FROM R1
INNER JOIN R2 ON C1 WHERE C2
\end{lstlisting}
is equivalently rewritten as the basic query:
\begin{lstlisting}
SELECT ... FROM R1, R2 WHERE C1 AND C2
\end{lstlisting}
\paragraph{Left joins on a foreign key.}
Consider a query of the form:
\begin{lstlisting}
SELECT ... FROM R1
LEFT JOIN R2 ON R1.A = R2.B WHERE ...
\end{lstlisting}
\begin{mychange}%
If \texttt{R1.A} is a foreign key into \texttt{R2.B},
then every row in \texttt{R1} matches at least one row in \texttt{R2}.
In this case, the left join can be equivalently written as an inner join, which is handled as above.%
\end{mychange}
\paragraph{Order-by and limit.}
Blockaid\xspace{} adds any \texttt{ORDER BY} column as an output column
and then discards the \texttt{ORDER BY} clause.
It also discards any \texttt{LIMIT} clause but, when adding the query to the trace,
uses a modified condition $O_i\subseteq Q_i(D)$ (instead of ``$=$'') to indicate that
it may have observed a partial result.
\paragraph{Aggregations.}
\begin{mychange}%
Blockaid\xspace{} turns
\mintinline{sql}{SELECT SUM(A) FROM R}
into
\mintinline{sql}{SELECT PK, A FROM R},
where \texttt{PK} is table \texttt{R}'s primary key.
By projecting the primary key in addition to \texttt{A},
the rewritten query reveals the multiplicity of the values in \texttt{A}---necessary for computing \lstinline{SUM(A)}---%
without returning duplicate rows.%
\end{mychange}
\paragraph{Left joins that project one table.}
Left joins of the form:
\begin{lstlisting}
SELECT DISTINCT A.* FROM A
LEFT JOIN B ON C1 WHERE C2
\end{lstlisting}
can be equivalently rewritten to the basic query:
\begin{lstlisting}
(SELECT A.* FROM A
INNER JOIN B ON C1 WHERE C2)
UNION
(SELECT * FROM A WHERE C3)
\end{lstlisting}
where \texttt{C3} is obtained by replacing each occurrence of \texttt{B.?}
with \texttt{NULL} in \texttt{C2} and simplifying the resulting predicate.%
\footnote{As long as \texttt{C2}~contains no negations, it is safe to treat a
\texttt{NULL} literal as \texttt{FALSE} when propagating through or short-circuiting
\texttt{AND} and \texttt{OR} operators.}
\begin{mychange}
The first subquery covers the rows in~\texttt{A} with at least one match in \texttt{B},
and the second subquery covers those with no matches.%
\end{mychange}
\paragraph{Features not supported.}
The SQL features not supported include \texttt{GROUP BY},
\texttt{ANY}, \texttt{EXISTS}, etc.,
although they, too, could be expressed or approximated using basic queries.
In the future we plan to leverage other formalisms~\cite{cheung13:qbs,chu17:hottsql,veanes09:query_exp,veanes10:qex,wang17:synth,chu18:axiom,wang18:equiv} to model complex SQL
semantics more precisely.
\subsection{Optimizations and SMT Encoding}\label{sec:checking:opt}
We end this section with several optimizations for compliance checking and some notes on the SMT encoding.
\paragraph{Strong compliance.}
\begin{mychange}
We define a stronger notion of compliance, which we found SMT solvers can verify more efficiently.
\begin{definition}\label{def:strong-compliance}
A query~$Q$ is \emph{strongly $\mathit{ctx}$-compliant} to policy~$\mathcal{V}$ given trace $\{(Q_i,O_i)\}_{i=1}^{n}$
if for each pair of databases $D_1,D_2$ that conform to the schema and satisfy:
\begin{align}
V^{\mathit{ctx}}(D_1) &\subseteq V^{\mathit{ctx}}(D_2), & (\forall V\in\mathcal{V}) \label{eqn:strongCompliance1} \\
Q_i(D_1) &\supseteq O_i, & (\forall 1\leq i\leq n) \label{eqn:strongCompliance2}
\end{align}
we have $Q(D_1)\subseteq Q(D_2)$.
\end{definition}
\begin{theorem}\label{thm:strong}
If $Q$~is strongly compliant to~$\mathcal{V}$ given trace $\mathcal{T}$, then $Q$ is also compliant to~$\mathcal{V}$ given~$\mathcal{T}$.
\end{theorem}
\begin{proof}
Let $Q$ be strongly compliant to~$\mathcal{V}$ given~$\mathcal{T}$.
To show that $Q$ is also compliant, let $D_1, D_2$ be databases that satisfy \Crefrange{eqn:compliance1}{eqn:compliance3} from the compliance definition.
These imply the strong compliance assumptions (\Cref{eqn:strongCompliance1,eqn:strongCompliance2}),
and so we have $Q(D_1)\subseteq Q(D_2)$.
By symmetry, we also have $Q(D_2)\subseteq Q(D_1)$.
Putting the two together, we conclude $Q(D_1)=Q(D_2)$, showing $Q$ to be compliant to~$\mathcal{V}$ given~$\mathcal{T}$.
\end{proof}
For faster checking, Blockaid\xspace{} verifies strong compliance rather than compliance; by \Cref{thm:strong}, soundness is preserved.
However, there are scenarios where a query is compliant but \emph{not} strongly compliant (see \refAppendix{sec:strong});
such queries will be falsely rejected by Blockaid\xspace.
This did not pose a problem in practice as we found the two notions to coincide for every query encountered in our evaluation.
\end{mychange}
\paragraph{Fast accept.}
Given a view \sql{SELECT C1, ...,} \sql{Ck FROM R}, any query that references only
columns \texttt{R.C1}, \dots, \texttt{R.Ck} must be compliant and is accepted without SMT solving.
\paragraph{Trace pruning.}
Queries that return many rows can inflate the trace and slow down the solvers.
Fortunately, oftentimes only a few of the rows matter to a later query's compliance.
We thus adopt a trace-pruning heuristic: when checking a query~$Q$, look for any previous query that has returned over ten rows,
and keep only those rows that contain the first occurrence of a primary-key value (e.g., user ID) appearing in~$Q$.
This heuristic is sound, but may need to be adapted for any application where our premise for pruning does not hold.
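A sketch of the pruning step for a single trace entry follows, where \texttt{query\_values} is a hypothetical set of the primary-key constants mentioned in~$Q$:
\begin{minted}{python}
def prune_rows(rows, pk_index, query_values, threshold=10):
    if len(rows) <= threshold:
        return rows
    seen, kept = set(), []
    for row in rows:
        pk = row[pk_index]
        if pk in query_values and pk not in seen:
            seen.add(pk)          # keep the first occurrence only
            kept.append(row)
    return kept
\end{minted}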
\paragraph{SQL types and predicates.}
To model SQL types, we use SMT's uninterpreted sorts, which we found to yield better performance than
theories of integers, strings, etc.
We support logical operators \texttt{AND} and \texttt{OR},
comparison operators \texttt{<}, \texttt{<=}, \texttt{>}, \texttt{>=},
and operators \texttt{IN}, \texttt{NOT IN},%
\footnote{We only support \texttt{IN} and \texttt{NOT IN} with a list of values, not with a subquery.}
\texttt{IS NULL}, and \texttt{IS NOT NULL}.
We model \texttt{<} as an uninterpreted relation with a transitivity axiom.
\paragraph{NULLs.}
We model \texttt{NULL} using a two-valued semantics of SQL~\cite[\S~6]{guagliardo17:sql}
by (1)~designating a constant in each sort as \texttt{NULL},
and (2)~taking \texttt{NULL} into account when implementing SQL operators.
\begin{mychange}
For example, the SQL predicate \texttt{x=y} translates into the following SMT formula:
$x=y \land x\neq \textit{null} \land y\neq\textit{null}$.
\end{mychange}
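In Z3 terms, this translation can be packaged as a helper over the designated \texttt{null} constant of each sort (a sketch):
\begin{minted}{python}
from z3 import DeclareSort, Const, And

T = DeclareSort('T')
null = Const('null', T)  # the designated NULL constant for sort T

def sql_eq(x, y):
    # SQL 'x = y' holds only when both sides are non-NULL and equal.
    return And(x == y, x != null, y != null)
\end{minted}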
\subsection{Application Assumptions and Threat Model}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/new_overview.pdf}
\vspace{-.25in}
\caption{An overview of Blockaid\xspace{} (for a single web request).}\label{fig:overview}%
\end{figure}
Blockaid\xspace targets web applications that store data in a SQL database.
We assume that a user is logged in and that the current user's identifier is stored in a \emph{request context}.
The application can access the database and the request context when serving a request;
each request is handled independently from others.
We assume that the application authenticates the user correctly,
and that the correct request context is passed to Blockaid\xspace~(\Cref{sec:design:arch}).
A \emph{data-access policy} dictates, for a given request context, what information in the database
is \emph{accessible} and what is \emph{inaccessible}.
We treat the database schema and the policy itself as public knowledge
and assume that the user cannot use side channels to circumvent policies.
We enforce policies on database \emph{reads} only, as done in prior work~\cite{agrawal05:privacy,
bender14:explainable,bender13:fine,brodsky00:secure,halder10:fine,
lefevre04:hippocratic,rizvi04:rewriting,shi09:soundness,
stonebraker74:acl,Wang07:correctness,marzoev19:multiverse}.
\begin{mychange}
Ensuring the integrity of updates, while important, is orthogonal to our goal and is left to future work.%
\end{mychange}
\subsection{System Overview}\label{sec:design:arch}
Blockaid\xspace{} is a SQL proxy that sits between the application and the database~(\Cref{fig:overview}).
It takes as input
\begin{inparaenum}[(1)]
\item a database schema (including constraints), and
\item a data-access policy specified as database views~(\Cref{sec:spec}),
\end{inparaenum}
and checks query compliance for each web request separately.
For each web request, it maintains a \emph{trace} of queries issued so far and their results;
the trace is cleared when the request ends.
\begin{mychange}
Blockaid\xspace{} assumes that the results returned by queries in the trace are not altered until the end of the request.
\end{mychange}
When a web request starts, the application sends its request context to Blockaid\xspace{}.
Then, every SQL query from the application traverses Blockaid\xspace, which attempts to verify that the query is \emph{compliant}---i.e., it can be answered using accessible information only.
To do so, Blockaid\xspace{} checks the decision cache to see whether a similar query has previously been determined compliant.
If not, it encodes noncompliance as an SMT formula~(\Cref{sec:checking}) and checks its satisfiability using several SMT solvers in parallel~(\Cref{sec:impl}).
If a query is compliant, Blockaid\xspace forwards it to the database unmodified.
In case of a cache miss, Blockaid\xspace also extracts and caches a decision template (\Cref{sec:cache}).
Finally, it appends the query and its result to the trace.
If verification fails, Blockaid\xspace blocks the query by raising an error to the application.
Although our core design assumes that all sensitive information is stored in the relational database, Blockaid\xspace supports limited compliance checking for two other common data sources:
\begin{enumerate}
\item If the application stores database-derived data in a {\bf caching layer} (e.g., Redis),
the programmer can annotate a cache key pattern with SQL queries from which the value can be derived.
Blockaid\xspace can then intercept each cache read and verify the compliance of the queries associated with the key.
\item If the application stores sensitive data in the {\bf file system},
it can generate hard-to-guess names for these files and store the file names in a database column protected by the policy.
\end{enumerate}
Blockaid\xspace's basic requirement is soundness: preventing the revelation of inaccessible information (formalized in \Cref{sec:spec:ni}).
However, it may reject certain behaviors that do not violate the policy (\Cref{sec:discussion}), although such false rejections never arose in our evaluation (\Cref{sec:eval}).
\begin{mychange}%
We end by emphasizing two aspects of Blockaid\xspace's operation:
\begin{enumerate}
\item Blockaid\xspace{} has \emph{no visibility into or control over} the application (except by blocking queries).
So it must assume that \emph{any} data fetched by the application will be shown to the user.
\item Blockaid\xspace{} has \emph{no access} to the database except by observing query results---%
it cannot issue additional queries of its own.
\end{enumerate}%
\end{mychange}
\subsection{Application Requirements}\label{sec:design:code}
For use with Blockaid\xspace, an application must:
\begin{enumerate}
\item Send the request context to Blockaid\xspace at the start of a request and signal Blockaid\xspace to clear the trace at the end;
\item Handle rejected queries cleanly (although a web server's default behavior of returning HTTP~500 often suffices); and,
\item Not query data that it does not plan on revealing to the user.
\end{enumerate}
\begin{mychange}%
Existing applications often violate the third requirement.
For example, when a user views an order on a Spree e-commerce site, the order is fetched from the database, and only then does Spree check, in application code, that the user is allowed to view it.
To avoid spurious errors from Blockaid\xspace,
such applications must be modified
to fetch only data known to be accessible.%
\end{mychange}
\subsection{Constraints, Policies, and Annotations}\label{sec:eval:app}
\Cref{tbl:policies} summarizes the constraints and policies for database tables queried in our benchmark,
including any necessary application-level constraints (e.g., a reshared post is always public in diaspora*\xspace).
Spree and Autolab use the Rails cache, and we annotate their cache key patterns with queries (\Cref{sec:design:arch}).
\begin{mychange}%
Once a policy is given, transcribing it into views was straightforward.
The more arduous task lay in divining the intended policy for an application, by studying its source code and interacting with it on sample data.
This effort was complicated by edge cases in policies---e.g., a Spree
item at an inactive location is inaccessible \emph{except} when filtering for backorderable variants.
Such edge cases had to be covered using additional views.
To give a sense of the porting effort, writing the Spree policy took one of us roughly a month.
However, this process would be easier for the developer of a new application, who has a good sense of what policies are suitable and can create policies while building the application, amortizing the effort over time.%
\end{mychange}
When writing the Autolab policy,
we uncovered two access-check bugs in the application:
\begin{inparaenum}[(1)]
\item a persistent announcement (one shown on all pages of a course) is displayed regardless of whether it is active on the current date, and
\item an unreleased handout is hidden on its course page but can be downloaded from its assignment page.
\end{inparaenum}
This experience corroborates the difficulty of making every access check airtight,
especially for code bases that enjoy fewer maintenance resources.
\subsection{Code Modifications}\label{sec:eval:change}
Our changes to application code fall into five categories:
\begin{enumerate}
\item \textbf{Boilerplate}: We add code that sends the request context to Blockaid\xspace at request start and clears the trace at request end.
\item \textbf{Fetch less data}: We modify code to not fetch potentially sensitive data unless it will be revealed to the user;
some of these changes use the \texttt{lazy\_column} gem~\cite{lazycolumn}.
\item \textbf{SQL features}: We modify some queries to avoid SQL features not supported by Blockaid\xspace (e.g., general left joins) without altering application behavior.
\item \textbf{Parameterize queries}: We make some queries parameterized so that Blockaid\xspace{} can effectively cache their parsing results.
Most changes are mechanical rewrites of queries with comparisons,
as idiomatic ways of writing comparisons~\cite{railsLt}
cause query parameters to be filled in by Rails.
\item \textbf{File system checking}: Autolab uses files to store submissions; the file names are always accessible, but the contents are inaccessible during an exam. We modify it to store the submission content under a randomly generated file name and restrict access to the file name in the database (\Cref{sec:design:arch}).
\end{enumerate}
The code changes are also summarized in \Cref{tbl:policies},
which omits configuration changes, adaptations for JRuby, and experiment code.
The changes range from 19 to 96 lines of code.
\subsection{Experiment Setup and Benchmark}\label{sec:eval:setup}
We deploy each application on an Amazon EC2 c4.8xlarge instance running Ubuntu~18.04.
Because our prototype only supports JVM applications~(\Cref{sec:impl}),
we run the applications using JRuby~\cite{jruby} (v9.3.0.0),
a Ruby implementation atop the JVM (we use OpenJDK~17).
In Rails's database configuration, we turn on \texttt{prepared\_statements}
so that Rails issues parameterized queries in the common case.%
\footnote{\begin{mychange}%
In case a Rails query is not fully parameterized (e.g., due to the use of raw SQL), it gets parameterized by Blockaid\xspace as described in \Cref{sec:cache:core:cond}.%
\end{mychange}}
The applications run atop the Puma web server
over HTTPS behind NGINX (which serves static files directly),
and store data in MySQL (and, if applicable, Redis) on the same instance.
To reduce variability, all measurements are taken from a client on the same instance.
For each application, we picked five page loads that exercise various behaviors (\Cref{tbl:benchmark}).
Each page load can fetch multiple URLs, some common among many pages (e.g., D9, which is the notifications URL).
All queries issued are compliant, and all experiments are performed with the Rails cache populated.
\subsection{Page Load Times}\label{sec:eval:plt}
We start by measuring page load times~(PLTs) using a headless Chrome browser (v96) driven by Selenium~\cite{selenium}.
PLTs are reported as the time elapsed between \texttt{navigationStart}
and \texttt{loadEventEnd} as defined by the \texttt{PerformanceTiming}
interface~\cite{navtime}.
The one exception is the Autolab ``Submission'' page, a file download,
for which we report Chrome's download time instead.
Since the client is on the same VM as the server,
these experiments reflect the best-case PLT, as clients
outside the instance or cloud are likely to experience higher network latency.
We report PLTs under four settings:
\emph{original} (unmodified application), \emph{modified} (modified {\`a} la~\Cref{sec:eval:change}),
\emph{cached} (modified application under Blockaid\xspace{} with every query hitting the decision cache),
and \emph{no cache} (decision caching disabled).
For the first three, we perform 3000~warmup loads before measuring
the PLT of another 3000~loads.
For \emph{no cache}, where each run takes longer, we use 100~warmup loads and 100~measurement loads.
\Cref{tbl:benchmark} shows that when compliance decisions are cached, Blockaid\xspace incurs
up to \SI{12}{\percent} overhead to median PLT over the modified application
(and up to \SI{17}{\percent} overhead to P95).
With caching disabled, Blockaid\xspace incurs up to $236\times$ higher median PLT.
Compared with the original applications,
the modified versions result in up to \SI{6}{\percent} overhead to median PLT for all pages but
Autolab's ``Submissions'', which suffers a \SI{19}{\percent} overhead.
(The P95 overhead is up to \SI{7}{\percent} for all but two pages with up to \SI{26}{\percent} overhead.)
We will comment on these overheads in the next subsection, where we break down the pages into URLs.
\input{figures/fetch_data.tex}
\begin{figure*}\centering
\include{figures/fetch}
\ifextended
\vspace{-.3in}
\else
\vspace{-.4in}
\fi
\caption{URL fetch latency (median). With all compliance decisions cached, Blockaid\xspace incurs up to \SI[round-mode=places,round-precision=0]{10.4772879602681}{\percent} overhead over ``modified''.}\label{fig:fetch}
\end{figure*}
\subsection{Fetch Latency}\label{sec:eval:breakdown}
To better understand page load performance,
we separate out the individual URLs fetched by each page (\Cref{tbl:benchmark}), omitting URLs for assets, and measure the latency of fetching each URL (not including rendering time).
The median latencies are shown in \Cref{fig:fetch}.
In addition to the four settings from \Cref{sec:eval:plt}, it includes performance under a ``cold cache'',
where the decision cache is enabled but cleared at the start of each load (100 warmup runs followed by 100 measurements).
When all compliance decisions are cached, Blockaid\xspace incurs up to
\SIIntPercent{10.4772879602681} of overhead
(median \SIIntPercent{6.50078573288968})
over ``modified''.
In contrast, it incurs
$\num[round-mode=places,round-precision=0]{7.30647540663289}\times$--$\num[round-mode=places,round-precision=0]{422.388023726704}\times$
overhead on a cold decision cache,
and
$\num[round-mode=places,round-precision=0]{6.70864747455904}\times$--$\num[round-mode=places,round-precision=0]{310.453168304745}\times$
overhead if the decision cache is disabled altogether.
For most URLs, ``cold cache'' is slower than ``no cache'' due to the extra template-generation step.
Two exceptions are D4 and A6, where many structurally identical queries are issued, and so
the performance gain from cache hits \emph{within each URL} offsets the performance hit from template generation.
Compared to the original, the modified diaspora*\xspace{} and Spree are up to
\SIIntPercent{4.59306183132621} slower (median \SIIntPercent{2.25749195305619}),
but Autolab is up to \SIIntPercent{21.0190676558494} slower (median \SIIntPercent{8.03726846984426}).
Autolab routinely reveals partial data on objects that are not fully accessible.
For example, a user can distinguish among the cases where
\begin{inparaenum}[(1)]
\item a course doesn't exist,
\item a course exists but the user is not enrolled, and
\item the user is enrolled but the course is disabled.
\end{inparaenum}
The original Autolab fetches the course in one SQL query
but we had to split it into multiple---checking whether the course exists,
whether it is disabled, etc.---and return an error immediately if one of these checks fails.
In one instance~(S2), the modified version is \SIIntPercent{10.736344761846} faster than the original because we were able to remove queries for potentially inaccessible data that is never used in rendering the URL.
\begin{mylisting*}
\caption{Two (abridged) decision templates generated for the same parameterized query from Spree.
\texttt{Token} is a Spree request context parameter identifying the current (possibly guest) user, and \texttt{NOW} is a built-in parameter storing the current time.}\label{fig:bad-template}
\vspace{-1.5ex}
\begin{submylisting}[b]{.48\textwidth}
\caption{This template doesn't fully generalize.}\label{fig:bad1}
\begin{mdframed}[align=center,skipabove=0em]\small
\begin{minted}{sql}
SELECT * FROM products WHERE id IN (*, *, *)
\end{minted}
\begin{rrlist}
\item \verb!(id = !\prm{\texttt{?1}}\verb!, available_on < !\prm{\texttt{?NOW}}\verb!, !\\
\verb! discontinue_on IS NULL, deleted_at IS NULL, *)!
\end{rrlist}
\begin{minted}{sql}
SELECT * FROM variants WHERE id IN (*, *, *)
\end{minted}
\begin{rrlist}
\item \verb!(id = !\prm{\texttt{?2}}\verb!, deleted_at IS NULL, !\\
\verb! discontinue_on IS NULL, product_id = !\prm{\texttt{?1}}\verb!, *)!
\end{rrlist}
\rule{\textwidth}{0.4pt}
\begin{minted}{sql}
SELECT a.* FROM assets a
JOIN variants mv ON a.viewable_id = mv.id
JOIN variants ov ON mv.product_id = ov.product_id
WHERE mv.is_master AND mv.deleted_at IS NULL
AND a.viewable_type = 'Variant' AND ov.id = |\prm{?2}|
\end{minted}
\end{mdframed}
\end{submylisting}\hfill%
\begin{submylisting}[b]{.48\textwidth}
\caption{This template does fully generalize.}\label{fig:bad-fixed}
\begin{mdframed}[align=center,skipabove=0em]\small
\begin{minted}{sql}
SELECT * FROM orders WHERE ...
\end{minted}
\begin{rrlist}
\item \verb!(id = !\prm{\texttt{?0}}\verb!, token = !\prm{\texttt{?Token}}\verb!, *)!
\end{rrlist}
\begin{minted}{sql}
SELECT * FROM line_items WHERE order_id = |\prm{\texttt{?0}}|
\end{minted}
\begin{rrlist}
\item \verb!(variant_id = !\prm{\texttt{?1}}\verb!, *)!
\end{rrlist}
\rule{\textwidth}{0.4pt}
\begin{minted}{sql}
SELECT a.* FROM assets a
JOIN variants mv ON a.viewable_id = mv.id
JOIN variants ov ON mv.product_id = ov.product_id
WHERE mv.is_master AND mv.deleted_at IS NULL
AND a.viewable_type = 'Variant' AND ov.id = |\prm{?1}|
\end{minted}
\end{mdframed}
\end{submylisting}
\end{mylisting*}
\subsection{Solver Comparison}\label{sec:eval:solvers}
\begin{figure}
\centering
\include{figures/winners}
\ifextended
\vspace{-.3in}
\else
\vspace{-.4in}
\fi
\caption{Fraction of wins by each solver. ``Vampire'' covers a portfolio of six configurations (\Cref{sec:impl}).}\label{fig:solvers}
\end{figure}
When a query arrives, Blockaid\xspace invokes an ensemble of solvers to check compliance when decision caching is disabled,
and to generate a decision template on a cache miss when caching is enabled.
The \emph{winner}, in the no-cache case, is the first solver to return a decision;
and in the cache-miss case, the first to return a small enough unsat core (\Cref{sec:impl}),
assuming the query is compliant.
\Cref{fig:solvers} shows, in the fetch latency experiments (\Cref{sec:eval:breakdown}), the fraction of wins by each solver in the two cases.
In the no-cache case, the wins are dominated by Z3 followed by \textsc{cvc5}\xspace, with none for Vampire.
In the cache-miss case, however, Vampire wins a significant portion of the time.
This is because Z3 and \textsc{cvc5}\xspace{} often finish quickly but with large unsat cores,
causing Blockaid\xspace to wait until Vampire produces a smaller core.
\subsection{Template Generalization}\label{sec:eval:gen}
We found that most of the generated decision templates generalize fully to similar requests;
the rest generalize in more restricted scenarios, and none is tied to a particular user ID, post ID, etc.
To illustrate how Blockaid\xspace{} might produce a template that fails to generalize fully,
consider a query from Spree's cache key annotations (\Cref{fig:bad-template}).
This query fetches assets for product variants in the user's order.
(Here, the asset of a variant belongs to its product's ``master variant''.)
\Cref{fig:bad1} shows a template that fails to generalize fully, for three reasons.
\textbf{First}, due to the queries with the \texttt{IN} operator in its premise (above the horizontal line),
this template applies only when an order has exactly three variants.
The \texttt{IN}-splitting optimization from \Cref{sec:cache:core:opt} only applies to the query being checked,
and we plan to handle such queries in the premise in future work.
\textbf{Second}, this template constrains the variant to be ``not discontinued'',
which is defined as \mintinline{sql}{discontinue_on IS NULL} or
\mintinline{sql}{discontinue_on >= NOW}.
But because disjunctions are not supported in decision templates,
Blockaid\xspace picked only the condition that matches the current variant (\texttt{IS NULL}).
\textbf{Third}, in this example there are multiple justifications for this query's compliance,
and Blockaid\xspace happened to pick one that does not always hold in a similar request.
The policy states that a variant's asset can be viewed if it is not discontinued,
or if it is part of the user's order.%
\footnote{This is to allow users to view past purchases that are since discontinued.}
This particular variant in the user's order happens to not be discontinued,
and the template captures the former justification for viewing the asset.
However, it does not apply to variants in the order that \emph{are} discontinued;
indeed, for such variants, Blockaid\xspace produces the template in~\Cref{fig:bad-fixed},
which generalizes fully.
We could address this issue by finding multiple decision templates for every query.
Incidentally, inspecting decision templates has helped us expose overly permissive policies.
When writing the Autolab policy, we missed a join condition in a view,
a mistake that became apparent when Blockaid\xspace{} generated a template stating that an instructor
for one course can view assignments for \emph{all} courses.
Although manual inspection of templates is not required for using Blockaid\xspace{}, doing so can help debug overly broad policies, whose undesirable consequences are often exposed by the general decision templates produced by Blockaid\xspace{}.
\section{Proof: From Query Compliance to Application Noninterference}\label{sec:proofs}
\ifextended{}
\begin{figure*}
\centering
\begin{subfigure}[b]{.3\textwidth}
\begin{align*}
V^{\mathit{ctx}}(D_1) &\diffBox{$=$} V^{\mathit{ctx}}(D_2) \tikzmark{eq3L} \\
Q_i(D_1) &\diffBox{$=$} O_i \tikzmark{eq1L} \\
Q_i(D_2) &\diffBox{$=$} O_i \tikzmark{eq2L} \\
\midrule
Q(D_1) &\diffBox{$=$} Q(D_2) \tikzmark{conclusionL}
\end{align*}
\vspace{-2em}
\caption{Compliance (\Cref{def:compliance}).}
\end{subfigure}\hspace{.2\textwidth}%
\begin{subfigure}[b]{.4\textwidth}
\begin{align*}
\tikzmark{eq3R} V^{\mathit{ctx}}(D_1) &\diffBox{\textcolor{red}{$\subseteq$}} V^{\mathit{ctx}}(D_2) & (\forall V\in\mathcal{V}) \\
\tikzmark{eq1R} Q_i(D_1) &\diffBox{\textcolor{red}{$\supseteq$}} O_i & (\forall 1\leq i\leq n) \\
\tikzmark{eq2R} Q_i(D_2) &\diffBox{\textcolor{red}{$\supseteq$}} O_i \tikzmark{eq2RR} & (\forall 1\leq i\leq n) \\
\midrule
\tikzmark{conclusionR} Q(D_1) &\diffBox{\textcolor{red}{$\subseteq$}} Q(D_2)
\end{align*}
\vspace{-2em}
\caption{Strong compliance (\Cref{def:strong-compliance}).}
\end{subfigure}
\caption{The definition of compliance is turned into that of strong compliance in five steps.
Dashed arrows for Steps~1--4 denote modifications; the solid line (strikethrough) for Step~5 denotes removal.}\label{fig:complianceMod}
\newcommand*\circled[1]{\tikz[baseline=(char.base)]{
\node[shape=circle,inner sep=1.5pt,fill=red,text=white] (char) {#1};}}
\begin{tikzpicture}[overlay,remember picture,stepArrow/.style={-Latex,red,thick,dashed}]
\draw[stepArrow] ([shift={(2ex,.5ex)}]pic cs:conclusionL) -- node {\circled{1}} ([shift={(-2ex,.5ex)}]pic cs:conclusionR);
\draw[stepArrow] ([shift={(2ex,.5ex)}]pic cs:eq1L) -- node {\circled{2}} ([shift={(-2ex,.5ex)}]pic cs:eq1R);
\draw[stepArrow] ([shift={(2ex,.5ex)}]pic cs:eq2L) -- node {\circled{3}} ([shift={(-2ex,.5ex)}]pic cs:eq2R);
\draw[stepArrow] ([shift={(2ex,.5ex)}]pic cs:eq3L) -- node {\circled{4}} ([shift={(-2ex,.5ex)}]pic cs:eq3R);
\draw[red,thick] ([shift={(-.5ex,.5ex)}]pic cs:eq2R) -- ([shift={(.5ex,.5ex)}]pic cs:eq2RR);
\node at ([shift={(3ex,.5ex)}] pic cs:eq2RR) {\circled{5}};
\end{tikzpicture}
\end{figure*}
\begin{proof}[Proof of \Cref{thm:compliance}]
\begin{mychange}%
We prove each part separately:
\paragraph{Part~1.}
Suppose $E(\mathit{ctx},Q,\mathcal{T})=\cmark$ only when $Q$ is $\mathit{ctx}$-compliant to~$\mathcal{V}$ given~$\mathcal{T}$.
Pick any $\Prog$, $\mathit{ctx}$, and $\mathit{req}$, and let $D_1$ and $D_2$ be databases such that $V^{\mathit{ctx}}(D_1)=V^{\mathit{ctx}}(D_2)$ for all $V\in\mathcal{V}$.
Consider executions $\Prog^{E}(\mathit{ctx},\mathit{req},D_1)$ and $\Prog^{E}(\mathit{ctx},\mathit{req},D_2)$.
We will show that the two executions coincide, by induction on the number of steps taken by~$\Prog$.
This will imply that $\Prog^{E}(\mathit{ctx},\mathit{req},D_1)=\Prog^{E}(\mathit{ctx},\mathit{req},D_2)$, finishing the proof.
\begin{description}
\item[Base case] Because $\Prog$ is assumed to be deterministic, so is $\Prog^{E}$, and so the two executions start off with the same program state.
\item[Inductive step] Suppose the two executions coincide after $\Prog$ has taken $i$ steps.
Consider the $(i+1)$st step taken on both sides:
\begin{itemize}
\item Suppose this step is a query $Q$ to the database.
Let $\mathcal{T}$ denote the (same) trace maintained by the two executions so far.
If $E(\mathit{ctx},Q,\mathcal{T})=\xmark$, then both executions terminate with an error.
Otherwise, $Q$ must be $\mathit{ctx}$-compliant to~$\mathcal{V}$ given~$\mathcal{T}$.
By assumption, $V^{\mathit{ctx}}(D_1)=V^{\mathit{ctx}}(D_2)$ for all $V\in\mathcal{V}$;
and by the construction of $\Prog^{E}$,
$Q_i(D_1)=Q_i(D_2)=O_i$ for every $(Q_i,O_i)\in\mathcal{T}$.
Therefore, we must have $Q(D_1)=Q(D_2)$, and so the two executions end up in the same program state after this step.
\item If this step is \emph{not} a database query, then its behavior depends only on $\Prog$'s program state and inputs $\mathit{ctx}$ and $\mathit{req}$,
all of which are the same across the two executions at this time.
\end{itemize}
\end{description}
\paragraph{Part~2.}
Suppose that $E$ correctly enforces~$\mathcal{V}$.
Pick any $\mathit{ctx}$, $Q$, and prefix $E$-allowed $\mathcal{T}=\{(Q_i,O_i)\}_{i=1}^{n}$ such that $E(\mathit{ctx},Q,\mathcal{T})=\cmark$.
Consider the following program~$\Prog$:
\begin{algorithmic}[lines=0]
\Procedure{$\Prog$}{$\mathit{ctx},\mathit{req},D$}
\For{$i\gets 1..n$}
\State issue $Q_i(D)$ and discard the result
\EndFor
\State \textbf{return} $Q(D)$
\EndProcedure
\end{algorithmic}
To show that $Q$ is $\mathit{ctx}$-compliant to~$\mathcal{V}$ given~$\mathcal{T}$, let $D_1$ and $D_2$ be databases such that:
\begin{align}
V^{\mathit{ctx}}(D_1) &= V^{\mathit{ctx}}(D_2), & (\forall V\in\mathcal{V}) \label{eqn:appendixB:V} \\
Q_i(D_1) &= O_i, & (\forall 1\leq i\leq n) \\
Q_i(D_2) &= O_i. & (\forall 1\leq i\leq n)
\end{align}
Let $\mathit{req}$ be a request, and
consider executions $\Prog^{E}(\mathit{ctx},\mathit{req},D_1)$ and $\Prog^{E}(\mathit{ctx},\mathit{req},D_2)$.
Because $\mathcal{T}$ is prefix $E$-allowed, neither execution ends in a policy violation error.
Therefore,
\begin{align*}
\Prog^{E}(\mathit{ctx},\mathit{req},D_1) &= \Prog(\mathit{ctx},\mathit{req},D_1) = Q(D_1), \\
\Prog^{E}(\mathit{ctx},\mathit{req},D_2) &= \Prog(\mathit{ctx},\mathit{req},D_2) = Q(D_2).
\end{align*}
Furthermore, because $E$ correctly enforces~$\mathcal{V}$, \Cref{eqn:appendixB:V} implies that
$\Prog^{E}(\mathit{ctx},\mathit{req},D_1)=\Prog^{E}(\mathit{ctx},\mathit{req},D_2)$.
We thus have $Q(D_1)=Q(D_2)$, concluding $Q$ to be $\mathit{ctx}$-compliant to~$\mathcal{V}$ given~$\mathcal{T}$.
\end{mychange}
\end{proof}
\fi
\section{From Compliance to Strong Compliance}\label{sec:strong}
\ifextended
\begin{mychange}%
To understand when strong compliance fails to coincide with compliance,
let us look at \Cref{fig:complianceMod}, which illustrates how we modified the definition of compliance (\Cref{def:compliance}) into that of strong compliance (\Cref{def:strong-compliance}) in five steps.
Step~1 does not affect the truth of the formula since $D_1$ and $D_2$ are symmetric.
Steps~2--3 adopt an \emph{open-world assumption} (OWA)~\cite{reiter77:closed},
treating every query as returning partial results.
Under this assumption, a trace can no longer represent the \emph{nonexistence} of a returned row;
this can cause Blockaid\xspace{} to falsely reject a query.
However, such cases never arose in our evaluation.
The OWA also proves convenient during decision template generation (\Cref{sec:cache:core:min})
when Blockaid\xspace{} computes a minimal sub-trace (which, by necessity, represents partial information) that guarantees strong compliance.
To see how Step~4 affects the definition,
suppose there are no database dependencies and the trace is empty (so Steps~2--3 are irrelevant).
In this scenario, compliance holds iff $\mathcal{V}$ \emph{determines} $Q$~\cite{Nash10:determinacy,Segoufin05:determinacy},
while strong compliance states that $Q$ has a \emph{monotonic} rewriting using $\mathcal{V}$.
There are cases where determinacy holds but no monotonic rewriting exists;
e.g., Nash~et~al.~\cite[\S~5.1]{Nash10:determinacy} present an example in terms of conjunctive queries.
Finally, in Step~5 we drop the condition that $D_2$ be consistent with the trace.
We can show by induction that this condition is redundant as long as each query in the trace is strongly compliant given the trace before it
(which is the case in Blockaid\xspace).
\end{mychange}
\fi
\section{Introduction}\label{sec:intro}
\input{intro}
\section{Related Work}\label{sec:related}
\input{related}
\section{System Design}\label{sec:design}
\input{design}
\section{View-based Policy and Compliance}\label{sec:spec}
\input{specification}
\section{Compliance Checking with SMT}\label{sec:checking}
\input{checking}
\section{Decision Generalization and Caching}\label{sec:cache}
\input{caching}
\section{Implementation}\label{sec:impl}
\input{impl}
\section{Evaluation}\label{sec:eval}
\input{eval}
\section{Additional Issues}\label{sec:discussion}
\input{discussions}
\section{Conclusion}
\input{conclusion}
\section*{Acknowledgments}
We are grateful to Alin Deutsch and Victor Vianu for the many discussions about query determinacy,
and to Nikolaj Bjørner, Alvin Cheung, Vivian Fang, and members of the Berkeley NetSys Lab for their help with the project.
We also thank the anonymous reviewers and our shepherd Malte Schwarzkopf for their helpful comments.
This research was funded in part by NSF grants 1817116 and 2145471, and gifts from Intel and VMware.
\bibliographystyle{plain}
\subsection{Specifying Policies as Views}\label{sec:spec:spec}
\begin{mychange}%
A policy is a collection of SQL queries that, together, define what information a user is allowed to access.
Each query is called a \emph{view definition} and can refer to parameters from the request context.
As an example, \Cref{listing:views} shows four view definitions, $V_1$--$V_4$;
we denote this policy as $\mathcal{V}=\{V_1,V_2,V_3,V_4\}$.
Notationally, for a view~$V$ and a request context~$\mathit{ctx}$,
we write $V^{\mathit{ctx}}$ to denote $V$ with its parameters replaced with values in~$\mathit{ctx}$.
We often drop the superscript when the context is apparent.%
\end{mychange}
\subsection{Compliance to View-based Policy}\label{sec:spec:compliance}
\begin{mychange}%
Under a policy consisting of view definitions, Blockaid\xspace can allow an application query to go through \emph{only if} it is certain
that the query's result is \emph{uniquely determined} by the views.
In other words, an allowable query must be answerable using accessible information alone.
If a query's output \emph{might} depend on information outside the views,
Blockaid\xspace must block the query.
\begin{example}
Let $\textit{MyUId} = 2$.
The following query selects the names of everyone whom the user attends an event with:
\begin{lstlisting}
SELECT DISTINCT u.Name
FROM Users u
JOIN Attendances a_other
ON a_other.UId = u.UId
JOIN Attendances a_me
ON a_me.EId = a_other.EId
WHERE a_me.UId = 2
\end{lstlisting}
Looking at \Cref{listing:views}, this query can always be answered by combining $V_4$,
which reveals the \textit{UId} of everyone whom the user attends an event with, with $V_1$,
which supplies the names associated with these \textit{UId}'s.
Hence, Blockaid\xspace allows it through.
\end{example}%
\end{mychange}
\begin{mychange}%
The query above is allowed \emph{unconditionally} because it is answerable using the views on \emph{any} database instance.
More commonly, queries are allowed \emph{conditionally} based on what Blockaid\xspace has learned about the current database state,
given the trace of prior queries and results in the same web request.
\begin{example}\label{ex:trace-compliance}
Again, let $\textit{MyUId} = 2$. Consider the following sequence of queries issued while handling one web request:
\smallskip
\begin{enumerate}
\item
\begin{compactsql}
SELECT * FROM Attendances
WHERE UId = 2 AND EId = 5
\end{compactsql}
\begin{rrlist}
\item \texttt{(UId=2, EId=5, ConfirmedAt="05/04 1pm")}
\end{rrlist}
\end{enumerate}
\begin{enumerate}[start=2]
\item
\begin{compactsql}
SELECT Title FROM Events WHERE EId = 5
\end{compactsql}
\end{enumerate}
\smallskip
The application first queries the user's attendance record for Event~\#5---an unconditionally allowed query---%
and receives one row, indicating the user is an attendee.
It then queries the title of said event.
This is allowed because $V_3$ reveals the information on all events attended by the user.
More precisely, the trace limits our scope to only databases where the user attends Event~\#5.
Because the second query is answerable using~$V_3$ \emph{on all such databases},
it is conditionally allowed given the trace.
\end{example}%
\end{mychange}
\begin{mychange}%
Context is important here: the second query cannot be safely allowed if it were issued in isolation.
\begin{example}\label{ex:noncompliant}
Suppose instead that the application issues the following query by itself:
\begin{lstlisting}
SELECT Title FROM Events WHERE EId = 5
\end{lstlisting}
Blockaid\xspace must block this query because it is not answerable using~$\mathcal{V}$ on a database where the user does not attend Event~\#5.
Whether or not the user \emph{actually} is an attendee of the event is irrelevant:
The application, not having queried the user's attendance records,
cannot be certain that the query is answerable using accessible information alone.
This differs from alternative security definitions~\cite{Guarnieri14:optimal,Koutris12:pricing,Zhang05:authorization-views}
where a policy enforcer can allow a query after inspecting additional information in the database that has not been fetched by the application.
\end{example}%
\end{mychange}
\begin{mychange}%
\begin{definition}
A \emph{trace}~$\mathcal{T}$ is a sequence $(Q_1,O_1),\ldots,(Q_n,O_n)$ where each $Q_i$ is a query and each $O_i$ is a collection of tuples.
\end{definition}%
Such a trace denotes that the application has issued queries $Q_1,\ldots,Q_n$ and received results $O_1,\ldots,O_n$ from the database.
We now motivate the formal definition of query compliance given a trace (using colors to show correspondence between text and equations).
Consider any two databases that are:
\begin{itemize}
\item \textcolor{tol1}{Equivalent in terms of accessible data} (i.e., they differ only in information outside the views), and
\item \textcolor{tol2}{Consistent with the observed trace} (i.e., we consider only databases that \emph{could} be the one the application is querying).
\end{itemize}
Blockaid\xspace{} must ensure that any two such databases are indistinguishable to the user---%
by allowing only queries that \textcolor{tol8}{produce the same result on both databases}.
\end{mychange}
\begin{definition}\label{def:compliance}
\begin{mychange}%
Let $\mathit{ctx}$ be a request context, $\mathcal{V}$ be a set of views, and $\mathcal{T}=\{(Q_i,O_i)\}_{i=1}^{n}$ be a trace.
A query~$Q$ is \emph{$\mathit{ctx}$-compliant} to~$\mathcal{V}$ given~$\mathcal{T}$ if for every pair of databases $D_1,D_2$
that conform to the database schema and constraints,\footnote{We will henceforth use ``schema'' to mean both schema and constraints, and rely on the database and/or the web framework to enforce the constraints.}
and satisfy:
\begin{align}
\textcolor{tol1}{V^{\mathit{ctx}}(D_1)} &\; \textcolor{tol1}{=V^{\mathit{ctx}}(D_2)}, & \textcolor{tol1}{(\forall V\in\mathcal{V})} \label{eqn:compliance1} \\
\textcolor{tol2}{Q_i(D_1)} &\;\textcolor{tol2}{= O_i}, & \textcolor{tol2}{(\forall 1\leq i\leq n)} \label{eqn:compliance2} \\
\textcolor{tol2}{Q_i(D_2)} &\;\textcolor{tol2}{= O_i}, & \textcolor{tol2}{(\forall 1\leq i\leq n)} \label{eqn:compliance3}
\end{align}
we have $\textcolor{tol8}{Q(D_1)=Q(D_2)}$.
We will simply say \emph{compliant} if the context is clear.%
\end{mychange}
\end{definition}
We call \Cref{def:compliance} \emph{trace determinacy} because it extends the classic notion of query determinacy~\cite{Nash10:determinacy,Segoufin05:determinacy} with the trace.
Query determinacy is undecidable even for conjunctive views and queries~\cite{Gogacz15,Gogacz16};
since compliance with an empty trace coincides with query determinacy, trace determinacy is undecidable in the same scenario.
Although several decidable cases have been discovered for query determinacy~\cite{Nash10:determinacy,Afrati11,Pasaila11}, they are not expressive enough for our use case.
A promising direction is to identify classes of views and queries that capture common web use cases and for which trace determinacy is decidable.
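To make \Cref{def:compliance} concrete, the following self-contained sketch decides compliance by brute force over a toy instance space. The Python encoding, the single \texttt{Attendances} table, and the finite domain are our own illustration, not part of Blockaid\xspace{}, which discharges the same condition symbolically with an SMT solver (\Cref{sec:checking}) rather than by enumeration.
\begin{lstlisting}
from itertools import chain, combinations

def all_databases(rows):
    # every subset of the candidate rows is one possible database D
    return [frozenset(c) for c in chain.from_iterable(
        combinations(rows, k) for k in range(len(rows) + 1))]

# Toy schema: a single table Attendances(UId, EId); let MyUId = 2.
DOMAIN = [(2, 5), (3, 5), (2, 7)]
V = lambda db: frozenset(r for r in db if r[0] == 2)  # the user's own rows

def compliant(Q, views, trace, dbs):
    for D1 in dbs:
        for D2 in dbs:
            views_agree = all(v(D1) == v(D2) for v in views)  # V(D1) = V(D2)
            trace_holds = all(Qi(D1) == Oi and Qi(D2) == Oi   # Qi(Dj) = Oi
                              for (Qi, Oi) in trace)
            if views_agree and trace_holds and Q(D1) != Q(D2):
                return False  # a distinguishing pair of databases exists
    return True

Q1 = lambda db: frozenset(r for r in db if r == (2, 5))
trace = [(Q1, frozenset({(2, 5)}))]              # the user attends event 5
Q = lambda db: any(eid == 5 for (_, eid) in db)  # "anyone attending event 5?"

dbs = all_databases(DOMAIN)
print(compliant(Q, [V], trace, dbs))  # True: the trace pins (2, 5) into D
print(compliant(Q, [V], [], dbs))     # False: without the trace, Q leaks
\end{lstlisting}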
\subsection{From Query Compliance to Noninterference}\label{sec:spec:ni}
\begin{mychange}%
Blockaid\xspace's end goal is to ensure that an application's output depends only on information accessible to the user.
In relation to this goal, query compliance (\Cref{def:compliance}) satisfies two properties, making it the right criterion for Blockaid\xspace{} to enforce:
\begin{enumerate}
\item \textbf{Sufficiency}:
As long as \emph{only compliant queries} from the application are let through,
there is no way for an execution's outcome to be influenced by inaccessible information.
\item \textbf{Necessity}:
Any enforcement system that makes per-query decisions based solely on the query and its preceding trace
\emph{cannot safely allow any non-compliant query} without the risk of the application revealing inaccessible information.
\end{enumerate}
Before stating and proving these properties formally, let us first model our target applications, enforcement systems, and goals.
We model a web request handler as a program $\Prog(\mathit{ctx},\mathit{req},D)$
that maps a request context~$\mathit{ctx}$, an HTTP request~$\mathit{req}$, and a database~$D$ to an HTTP response.%
\footnote{For simplicity, we assume $\Prog$ is a pure function---deterministic, terminating, and side-effect free---%
although this assumption can be relaxed through standard means from information-flow control~\cite[\S~2]{HedinS12:ifc}.}
A program that abides by a policy~$\mathcal{V}$ satisfies a \emph{noninterference} property~\cite{Cohen77,Goguen82}
stating that its output depends only on the inputs that the user has access to---%
namely, $\mathit{ctx}$, $\mathit{req}$, and $V^{\mathit{ctx}}(D)$ for each $V\in\mathcal{V}$.
The formal definition follows from a similar intuition as \Cref{def:compliance}.
\begin{definition}
A program~$\Prog$ satisfies \emph{noninterference} under policy~$\mathcal{V}$ if the following condition holds:
\begin{align*}
\textit{NI}_{\mathcal{V}}(\Prog) &\coloneqq \forall \mathit{ctx}, \mathit{req}, D_1, D_2\ldotp \\
& \qquad\left[ \forall V\in\mathcal{V}\ldotp V^{\mathit{ctx}}(D_1)=V^{\mathit{ctx}}(D_2) \right] \\
&\qquad\qquad \implies \Prog(\mathit{ctx},\mathit{req},D_1) = \Prog(\mathit{ctx},\mathit{req},D_2).
\end{align*}
\end{definition}
An enforcement system must ensure that any program running under it satisfies noninterference.
We now model such a system that operates under Blockaid\xspace's assumptions.
\begin{definition}
An \emph{enforcement predicate} is a mapping from a request context, a query, and a trace to an allow/block decision:
\[
E(\mathit{ctx}, Q, \mathcal{T}) \to \{\cmark, \xmark\}.
\]
\end{definition}
\begin{definition}
Let $\Prog(\mathit{ctx},\mathit{req},D)$ be a program and $E$ be an enforcement predicate.
We define the program~$\Prog$ \emph{under enforcement using~$E$} as a new program $\Prog^{E}(\mathit{ctx},\mathit{req},D)$ that simulates every step taken by the original program~$\Prog$,
except that it maintains a trace~$\mathcal{T}$ and blocks any query~$Q$ issued by~$\Prog$ where $E(\mathit{ctx},Q,\mathcal{T})=\xmark$ by immediately returning an error.
\end{definition}
Note that $\Prog^{E}$ evaluates~$E$ only on traces in which every query has been previously allowed by~$E$ given its trace prefix.
\begin{definition}
Given a request context~$\mathit{ctx}$,
we say that a trace $\mathcal{T}=\{(Q_i, O_i)\}_{i=1}^{n}$ is \emph{prefix $E$-allowed} if for all $1\leq i\leq n$,
\[
E(\mathit{ctx},Q_i,\mathcal{T}[1..i-1])=\cmark.
\]
\end{definition}
\begin{definition}
A predicate~$E$ \emph{correctly enforces} policy~$\mathcal{V}$ if:
\[
\forall\Prog\ldotp~\textit{NI}_{\mathcal{V}}(\Prog^{E}).
\]
\end{definition}
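To fix intuitions, here is a minimal sketch of a handler running under an enforcement predicate, following the definition of $\Prog^{E}$ above. The Python encoding---a handler parameterized by a query callback, with $\cmark$ rendered as the string \texttt{"allow"}---is our own illustration, not Blockaid\xspace's actual interception mechanism.
\begin{lstlisting}
class PolicyViolation(Exception):
    """Signals that the enforcement predicate blocked a query."""

def run_under_enforcement(P, E, ctx, req, db):
    """Simulates P^E: runs handler P, vetting each query through E."""
    trace = []  # the trace T, extended as queries are allowed
    def guarded_query(Q):
        if E(ctx, Q, tuple(trace)) != "allow":
            raise PolicyViolation(Q)  # immediately return an error
        O = Q(db)                     # forward the allowed query
        trace.append((Q, O))          # T stays prefix E-allowed by construction
        return O
    return P(ctx, req, guarded_query)
\end{lstlisting}
Note that \texttt{guarded\_query} only ever appends queries that $E$ has allowed, which is exactly why $E$ is only ever evaluated on prefix $E$-allowed traces.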
We are ready to state the sufficiency-and-necessity theorem, whose proof is left to \refAppendix{sec:proofs}.
Like before, we use colors to link a statement to its explanation.
\begin{theorem}\label{thm:compliance}
Let $\mathcal{V}$ be a set of views and $E$ be a predicate.
\begin{enumerate}
\item Suppose $E(\mathit{ctx},Q,\mathcal{T})=\cmark$ \textcolor{tol1}{only when $Q$ is $\mathit{ctx}$-compliant} to~$\mathcal{V}$ given~$\mathcal{T}$.
Then $E$ \textcolor{tol2}{correctly enforces $\mathcal{V}$}.
\item Suppose $E$ \textcolor{tol7}{correctly enforces $\mathcal{V}$}.
Then \textcolor{tol8}{for any} request context~$\mathit{ctx}$, query~$Q$, and prefix $E$-allowed trace~$\mathcal{T}$ \textcolor{tol8}{such that $E(\mathit{ctx},Q,\mathcal{T})=\cmark$,
$Q$ is $\mathit{ctx}$-compliant} to~$\mathcal{V}$ given~$\mathcal{T}$.
\end{enumerate}
\end{theorem}
To unpack, \Cref{thm:compliance} says:
\begin{inparaenum}[(1)]
\item as long as an enforcement predicate \textcolor{tol1}{ensures query compliance}, it \textcolor{tol2}{correctly enforces the policy} on applications (i.e., sufficiency); and
\item for a predicate to \textcolor{tol7}{correctly enforce the policy}, it must \textcolor{tol8}{ensure query compliance} (i.e., necessity).
\end{inparaenum}
Thus, query compliance can be regarded as the ``projection'' of application noninterference onto Blockaid\xspace's lens, making it the ideal criterion to enforce.%
\end{mychange}
\section{Introduction} \label{sec_intro}
A tiny drop of magnetic fluid responds to magnetic fields in many ways -- it is
a "world in a nutshell". Typically 1\,$\tcmu$l of magnetic fluid (MF) contains
more than $10^{13}$ magnetic mono-domain particles, each with a diameter of around
$10$\,nm, which are suspended in a carrier fluid like water or kerosene
\cite{rosensweig1985}.
In the absence of an external magnetic field there is no long-range order in
the MF, but when exposed to a static field the magnetic grains partially align,
which results in a net magnetization.
Application of a rotating magnetic field induces a torque on the suspended
magnetic grains. Due to the viscous coupling of the particles to their
surrounding carrier liquid, angular momentum is transferred to the whole drop,
and an abundance of phenomena is observed.
In a series of experiments pioneered by Bacri \emph{et al}~\cite{bacri1994} a
magnetic drop was levitated in a surrounding liquid and exposed to a field
rotating in the horizontal plane. For very small angular frequencies of the field
an elongated drop follows the field rotation quasi-adiabatically with small
phase lag
\cite{cebers1995,lebedev1997,morozov1997,sandre1999,cebers2002}.
In the limit of high angular frequency one observes for small magnetic fields
an oblate spheroid, for intermediate values transient shapes, and for large
fields an oblate spheroid with "spiny starfish" appearance
\cite{bacri1994,morozov2000,lebedev2003}, for a review see
Ref.\,\cite{blums1997}.
Our setup, investigated in experiment and theory in this article, differs from
the above configuration in two points fundamentally: (i) the field is rotating
in a plane \emph{oriented vertically}, (ii) the drop of ferrofluid is swimming
\emph{on top} of a layer of non-magnetic fluid. The field configuration is
borrowed from a recent experiment ("the magnetic pump") where the magnetic
torque drives a continuous flow of ferrofluid in an open duct
\cite{krauss2005,krauss2006}. By replacing the ferrofluidic layer with a
floating drop we are able to propel the drop with a constant translation
velocity $\vv_\mathrm{drop}$ with respect to the liquid surface. Moreover we
could in principle manoeuvre the drop to arbitrary positions on the whole two
dimensional liquid layer by utilizing an additional alternating field in
the $y$-direction. This is a new and promising technique for microfluidic
applications.
Our theoretical model describes the ferrofluid drop first as a solid
sphere with a Navier slip boundary condition at its surface, then as a liquid
(half-)sphere with its own inner flow field. The problem is treated within the Stokes
approximation and the assumption of certain symmetries. In both cases
an analytical expression for the drop speed $\vv_\mathrm{drop}$ in terms
of the experimentally accessible parameters is obtained. While the solution
of the Navier slip model contains an unknown parameter, the slip length,
the result of the liquid half-sphere model is completely free of fitting parameters
and is shown to represent the experimentally measured dependencies very well.
\begin{figure}[tb]
\centering
\includegraphics[width=11cm]{setupG.eps}
\caption{Sketch of the experimental setup. For details see text.}
\label{setup}
\end{figure}
The article is organized as follows. Next we present the experimental
arrangement together with some qualitative observations. This is followed by a
comprehensive theoretical analysis (section~\ref{sec_theory}). Subsequently the
results obtained by experiment and theory are compared in section~\ref{sec_comp}
and discussed in section~\ref{sec_conc}.
\section{Experiment}\label{sec_experiment}
Our experimental setup is shown in figure~\ref{setup}. We place a cylindrical
glass beaker in between a Helmholtz pair of coils that produce an externally
applied field $G_x(t)$ which is oriented horizontally. In addition another coil
is wrapped directly around the beaker providing a field $G_z(t)$ in vertical
direction. Here we denote the external magnetic far field by $\bi{G}$ and the
local one by $\bi{H}$. A sinusoidal driving current is supplied by connecting
the output of a function generator (Fluke PM 5138A) to one channel of a power
amplifier (Rotel RB-1090). The input of the second channel is supplied with a
delayed signal of the function generator. In order to allow an independent
adjustment of both currents, a variable resistor is inserted in one driving
circuit. An oscilloscope serves to control the phase difference of both
currents. When the phase difference is set to 90$^o$ the coils produce a
rotating field $\bi{G}(t)$ inside the beaker. Any motion of the drop of
magnetic liquid is observed from above by means of a video camera (not shown
here).
For a good performance of the driving by the rotating field a large imaginary
part of the susceptibility of the MF is important. Thus we have selected a
magnetic fluid based on air stable cobalt particles \cite{boennemann2003b},
which are stabilized by oleic acid in kerosene. Figure~\ref{fig:chi} reproduces
the frequency dependence of the complex susceptibility of this fluid measured
by an ac-susceptometer \cite{krauss2006}. The MF has a volume fraction of
5\,$\%$ and constitutes the interior ${\mathrm{(i)}}$ of the drop. Its
viscosity was determined to be $\eta^\mathrm{(i)}=5.4$\,mPa\,s by means of a
low shear rheometer (Contraves LS40), and the density of the MF has been found
to be $\rho^\mathrm{(i)}=1.07$\,gcm$^{-3}$.
\begin{figure}[tb]
\begin{minipage}{0.45\textwidth}
\includegraphics[width=10cm]{chi_Co.eps}
\end{minipage}\hspace{1cm}
\begin{minipage}{0.45\textwidth}
\begin{flushleft}
\caption{The magnetic susceptibility of the cobalt based magnetic liquid versus
the frequency of the external alternating magnetic field. The data points for the real and
imaginary parts of the susceptibility are marked by squares and circles,
respectively.} \label{fig:chi}
\end{flushleft}
\end{minipage}
\end{figure}
The drop of MF has to float on top of a liquid layer of a non-magnetic fluid.
Quantities of this fluid outside the drop will be marked by $\mathrm{(o)}$.
This fluid must not mix with any of the components of the MF.
Moreover it must be denser than the MF. A per-fluorinated hydrocarbon fluid
(Galden SV-90) proved to be a suitable substrate because of its higher density
$\rho^\mathrm{(o)}=1.69$\,gcm$^{-3}$, its long-term stability, and its
non-miscibility with the MF. According to the manufacturer the viscosity
amounts to $\eta^\mathrm{(o)}=1.27$\,mPa\,s, and the surface tension to
$\gamma=16$\,mN/m. This fluid is poured into a cylindrical glass beaker up to a height
of 2\,cm in order to minimize fringe effects from the bottom of the glass.
At the beginning of an experiment a definite volume $V$ of MF is put on the
surface of the per-fluorinated liquid with a pipette. According to the density
ratio of the two liquids the forming drop immerses with approximately two
thirds of its volume (corresponding to a measured maximum penetration depth of
about 60\,\% of its diameter). The rotating field generated by the coils leads
to a motion of the droplets in the direction the field is rolling. Hence the
direction of the motion can be reversed by changing the sign of the phase
difference between the ac-fields. Under the given experimental conditions we
can achieve droplet velocities up to a few cm/s. The good contrast between the
black MF and the transparent hydrocarbon liquid allows an easy observation by a
digital video camera. Two exemplary movies can be activated at
figure~\ref{fig:movie}. The velocity of the droplets was determined by
extracting the time a drop takes to travel the distance of 1\,cm in the center
of the beaker. Within this distance the magnetic field varies less than 1\,\%.
\begin{figure}[tb]
\centering
(a)
\includegraphics[width=0.4\textwidth]{5mul_a.eps}
(b)
\includegraphics[width=0.4\textwidth]{80mul_a.eps}
\caption{Drops of magnetic fluid with a volume of (\textbf{a}) $5\,\tcmu$l
(see \underline{movie1}) and (\textbf{b}) $80\,\tcmu$l (\underline{movie2})
are rolling on top of a per-fluorinated Newtonian liquid.} \label{fig:movie}
\end{figure}
\section{Theory}\label{sec_theory}
The theoretical description of the "real" setup poses a very complicated
boundary value problem which would have to be solved by numerical methods. In
order to extract the essence of the effect we make some simplifying assumptions
which even lead to an analytical solution.
The droplet is considered to be a spherical object half-way immersed into a
liquid with an otherwise perfectly flat surface. Effects of gravity are
neglected as is the inertia term in the Navier-Stokes equation which is hence
rendered linear. This Stokes approximation is in order when the Reynolds number
$\mathrm{Re}$ is sufficiently small. Here it is given by $\mathrm{Re} =\itom
R^2\varrho^\mathrm{(o)}/\eta^\mathrm{(o)}$, with $\itom$ the angular velocity
of the sphere, and ranges between one and ten. The problem is treated within
the reference frame where the sphere is rotating with its center at rest
(cf.~figure\,\ref{sphere}). In order to ensure stationarity in this frame, the
overall forces and torques acting on the sphere must cancel out. After the
velocity field of the surrounding liquid has been determined, its asymptotic
value at $r\to\infty$ will give the negative translation velocity of the sphere
in the laboratory frame.
\begin{figure}[tb]
\begin{minipage}{0.45\textwidth}
\includegraphics[width=10cm]{sphere}
\end{minipage}\hspace{1cm}
\begin{minipage}{0.45\textwidth}
\begin{flushleft}
\caption{A spherical ferrofluid drop with radius $R$ contains in its interior (i) a
fluid with density $\varrho^\mathrm{(i)}$. It is covered from above (a) by a
gas with density $\varrho^\mathrm{(a)}$. The lower part of the drop is half-way
immersed into an outer (o) Newtonian fluid with density $\varrho^\mathrm{(o)}$
and dynamic viscosity $\eta^\mathrm{(o)}$. The drop rotates with constant
angular velocity $\itom$. The center of the sphere is the origin of the
reference frame as indicated in the picture.}
\label{sphere}
\end{flushleft}
\end{minipage}
\end{figure}
The simplest approach is treating the droplet as a \emph{solid} sphere and employing
the common \emph{no-slip} boundary condition at its surface, but this would lead to a
logarithmically divergent viscous torque \cite{DA_sterr}.
It has long been known \cite{huh_scri} that
hydrodynamic problems containing a moving contact line in combination with the no-slip
condition give rise to diverging quantities due to an inherent contradiction:
on the one hand the fluid is supposed to stick to the solid surface,
and yet the line where solid, liquid, and gas meet shall advance on that very same
surface.
Several means have been proposed to relieve the singularity, e.g., taking into
account a strong curvature of the fluid surface near the solid, or describing
the contact region in terms of molecular interactions, as has been done in
Ref.\,\cite{deGen1}. A straightforward approach is to allow a certain amount of
slippage over the solid surface. As early as 1823, a linear relation between
the tangential stresses at the solid surface and the velocity of the latter was
proposed by C.-L.~Navier \cite{navier}. Although other forms of slip condition
can be successful \cite{dussan1,pis_rub} this "Navier slip" has become the most
popular one and has since been examined and applied oftentimes. Earlier works
distinguish between several regions where different expansions are made, and
only employ the slip condition in the contact region itself, finally matching
the solutions together \cite{huh_mas, hocking2,
cox}. Our treatment, however, will follow the lines of Ref.\,\cite{oneill}
who applied the Navier slip condition on the whole solid surface without
separating different regions. This is justified a posteriori by the fact that
the slippage shows most of its impact in the direct vicinity of the contact
line where the stresses are largest and leaves the flow field undisturbed
further away, as will be made clear by the results of the present paper.
Although Ref.\,\cite{oneill} considered a problem quite analogous to ours,
\emph{i.e.}, the rotation and translation of a solid spherical object which is
half-way immersed in a liquid, we will present the treatment in a more lucid
albeit less general way that will lead to a closed expression for the resulting
flow field which is lacking in Ref.\,\cite{oneill}.
The disadvantage of the Navier slip condition is that it contains a
characteristic length $\ls$ which is supposed to be small compared to the
length scales characterizing the problem (in our case the sphere radius $R$)
and essentially indicates how much the fluid molecules slip over the solid
surface. $\ls\rightarrow 0$ is equivalent to no slip, while
$\ls\rightarrow\infty$ corresponds to completely unimpeded slip or zero
tangential stress. This \emph{slip length} does not necessarily "represent true
slippage but merely recognizes the fact that the liquid consists of molecules
of finite size", as stated by Huh and Mason in Ref.\,\cite{huh_mas}. Or as Cox
puts it in Ref.\,\cite{cox}: "Slip between liquid and solid is a convenient
assumption to get rid of the non-integrable stress singularity." Although the
slip length between certain materials can be measured by now (see
e.g.~\cite{denn,jos_tab, schmatko,fetzer_etal}), this is of no use to the
present problem, as the experiment does not involve a solid sphere.
As a consequence, the result of these calculations will not be entirely satisfactory,
so that a second approach is taken in which the ferrofluid drop is treated as a
\emph{liquid half-sphere} with its own inner flow field. In this case, the
velocity fields and also the sums of viscous and magnetic stresses must be continuous
at the interface between the two liquids. Though the liquid drop cannot be described
as a whole sphere but only as a half-sphere, the resultant drop speed, which no longer
depends on any unknown parameters, represents the experimental data
extremely well. This may indicate that the true flow field in the drop
is mainly restricted to its lower part.
\subsection{Solid sphere}\label{sec_Nav}
The basic hydrodynamic equations are the continuity equation for incompressible fluids
\bq
\nabla\cdot\vv = 0 \label{cont}\,,
\eq
and the stationary Stokes equation
\bq
\mathbf{0} = -\nabla p + \eta\nabla^2\vv \label{stokes}
\eq
which, upon taking the curl (thereby eliminating the pressure, since $\nabla\times\nabla p = \mathbf{0}$), can be written as
\bq
\nabla^2 \left( \nabla\times\vv \right) = \mathbf{0}\,. \label{stokesrot}
\eq
The velocity field of the non-magnetic liquid bearing the sphere is expanded in
vector spherical harmonics according to Ref.\,\cite{sorokin,MoFe}.
\ref{flowfieldapp} gives the details of this expansion and shows how the
various coefficients occurring in it are determined from the boundary
conditions.
When only one boundary condition is left, namely the requirement that
the dissipating viscous torque compensate for the accelerating
magnetic torque, the velocity components of the flow field below the
sphere still depend on the yet unknown angular velocity $\itom$ with
which the sphere is rotating. The resulting expressions are
(cf. \ref{flowfieldapp}):
\bqa\fl
\frac{v_r}{\itom R}
= \frac{1}{2} \,\frac{\cph\sth}{1+2\frac{\ls}{R}}\left[ 1 - \frac{R^3}{r^3} \right]
\nonumber\\
+\frac{1}{2}\cph
\sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty \pleins(\cth)
\frac{R^\ell}{r^\ell}
\left[ 1 - \frac{R^2}{r^2} \right]
\frac{(-1)^{\frac{\ell-1}{2}} (2\ell+1)}{1+(2\ell+1)\frac{\ls}{R}}\cdot
\frac{(\ell-2)!!}{(\ell+1)!!}
\label{vr_Nav}
\eqa
and
\bqa\fl
\frac{1}{\itom R}
\left(
\eqalign{\vth \\ \vphi}
\right)
= \frac{1}{2}
\left(
\eqalign{ \cph\,\cth \\ -\sph}
\right)
\left[ 1 + \frac{R^3}{2r^3} \right]\frac{1}{1+2\frac{\ls}{R}}
\nonumber\\
+ \frac{1}{2}
\left(
\eqalign{ \cph \,\parth \\ -{\sph}/{\sth} }
\right) \nonumber\\
\times
\sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\frac{\pleins(\cth)\, (-1)^{\frac{\ell-1}{2}}}{1+(2\ell+1)\frac{\ls}{R}}\,
\frac{R^\ell}{r^\ell}
\left[ (2-\ell) + \ell\frac{R^2}{r^2} \right]
\frac{(2\ell+1)(\ell-2)!!}{\ell(\ell+1)(\ell+1)!!} \nonumber\\
+ 2 \left(
\eqalign{ -\cph/\sth \\ \sph\,\parth}
\right)
\sum_{\stackrel{\ell=2}{\ell\textrm{ \scriptsize even}}}^\infty
\frac{\pleins(\cth)\, (-1)^\frac{\ell}{2}}{1+(\ell+2)\frac{\ls}{R}}\,
\frac{R^{\ell+1}}{r^{\ell+1}}\,
\frac{(2\ell+1)(\ell-3)!!}{\ell(\ell+1)(\ell+2)!!}\; .
\label{vthphi_Nav}
\eqa
The flow field determines the pressure via Stokes' equation (\ref{stokes}).
Straightforward calculation yields
\bqa
\nabla^2\vv &=& \nabla\sum_{\ell,m}\frac{2(2\ell-1)}{(\ell+1)}\frac{\clm}{r^{\ell+1}}\ylm
= \frac{1}{\eta} \nabla p
\eqa
so that the pressure field is given by
\bqa\fl
p\of = \eta \sum_{\ell,m} \frac{2(2\ell-1)}{(\ell+1)}\frac{\clm}{r^{\ell+1}} \ytp
\nonumber\\
= \frac{3}{4}\,\eta\itom \,\frac{\cph \sth}{1+2\frac{\ls}{R}} \frac{R^2}{r^2}
\nonumber\\
+ \,\eta\itom \cph \sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\frac{\pleins (\cth)\,(-1)^\frac{\ell-1}{2}}{1+(2\ell+1)\frac{\ls}{R}}\,
\frac{R^{\ell+1}}{r^{\ell+1}}\,
\frac{4\ell^2-1}{\ell+1}\,\frac{(\ell-2)!!}{(\ell+1)!!} \,.
\eqa
\subsection{Viscous torque}
The viscous torque acting on the lower half-sphere is
obtained from the tangential viscous forces
\bq
\de\bi{F}_\mathrm{tang} = [\sigrt\etheta + \sigrp\ephi] R^2\dphi\,\dth\sth
\eq
according to
\bq
\de\bi{T}_\mathrm{vis} = \bi{R}\times\de\bi{F}_\mathrm{tang}(r=R)
\eq
with the tangential components of the viscous stress tensor $\sigrt$ and $\sigrp$ as
defined in \cite{LLhydro}. Integration over the lower half-sphere
$0\leq\vartheta\leq\pi/2$, $0\leq\varphi\leq 2\pi$ yields the dimensionless viscous
torque in $y$-direction
\bqa\fl
\frac{-T_\mathrm{vis}}{\pi\eta\,\itom R^3} =
\frac{3}{2}\, \frac{1}{1+2\frac{\ls}{R}}
+ \lim_{N\rightarrow\infty}
\sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^N
\frac{(2\ell+1)^2}{1+(2\ell+1)\frac{\ls}{R}} \left[ \frac{(\ell-2)!!}{(\ell+1)!!}
\right]^2 \nonumber\\
+ \lim_{N\rightarrow\infty}
\sum_{\stackrel{\ell=2}{\ell\textrm{ \scriptsize even}}}^N
\frac{4(2\ell+1)(\ell+2)}{1+(\ell+2)\frac{\ls}{R}}
\left[ \frac{(\ell-3)!!}{(\ell+2)!!} \right]^2 \,.
\label{viscous_torque_Nav}
\eqa
When the double factorials in (\ref{viscous_torque_Nav}) are transformed to single
factorials and Stirling's approximation
\bq
\ell!\approx \sqrt{2\pi\ell}\,\ell^\ell\ehoch{-\ell},\qquad \ell\gg 1
\eq
is employed, it can be shown that the terms for large $\ell$ in the infinite series
give in leading order
\bqa
\frac{2}{\pi \ell^2}\frac{R}{\ls}, &\qquad \textrm{for } \ls>0 \\
\frac{4}{\pi \ell} , &\qquad \textrm{for } \ls=0.
\eqa
While $\sum_{\ell=1}^\infty \ell^{-2}$ is a convergent series, $\sum_{\ell=1}^\infty
\ell^{-1}$ diverges logarithmically, so here the necessity of the slip condition
becomes manifest.
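This behaviour is easy to check numerically. The following sketch (our own, using the conventions $(-1)!!=0!!=1$) evaluates the truncated right-hand side of (\ref{viscous_torque_Nav}): for $\ls=0$ the partial sums keep growing with the truncation order $N$, whereas any $\ls>0$ yields a finite limit $\Sigma(\ls)$.
\begin{lstlisting}
def dfact(n):
    # double factorial, with (-1)!! = 0!! = 1
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def sigma(ls_over_R, N):
    # truncated series for -T_vis / (pi eta Omega R^3)
    s = 1.5 / (1 + 2 * ls_over_R)
    for l in range(3, N + 1, 2):   # odd terms
        s += (2*l + 1)**2 / (1 + (2*l + 1) * ls_over_R) \
             * (dfact(l - 2) / dfact(l + 1))**2
    for l in range(2, N + 1, 2):   # even terms
        s += 4 * (2*l + 1) * (l + 2) / (1 + (l + 2) * ls_over_R) \
             * (dfact(l - 3) / dfact(l + 2))**2
    return s

for N in (10, 100, 1000):
    print(N, sigma(0.0, N), sigma(1e-3, N))
# sigma(0, N) grows logarithmically with N; sigma(1e-3, N) saturates.
\end{lstlisting}
The slow approach of the partial sums to $\Sigma(\ls)$ also illustrates why extracting $\ls$ from a given torque is numerically delicate, as noted in section~\ref{sec_fluid} below.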
Looking at the solution (\ref{vr_Nav}), (\ref{vthphi_Nav}) for the velocity field, it
becomes clear that the field is only changed significantly near the contact line or,
more generally, near the sphere surface: since $\ls \ll R$, the terms with small
$\ell$ hardly deviate from those for no-slip. Only when $\ell{\ls}/{R}$ exceeds the
order unity, the factors containing $\ls$ become important. Each term is made smaller,
and the more so the greater $\ell$ becomes and, of course, the greater the slip length.
On the other hand, the terms with large $\ell$, \emph{i.e.}, those
which are influenced by the slip condition, are negligible when $r\gg R$. So the results
with and without slippage would not be distinguishable far enough from the contact line.
Figures \ref{vth_Nav} and \ref{sigrth_Nav} illustrate the influence of slippage in the
relevant region near $\vartheta=\pi/2$ for expansion orders 100 and 99, respectively.
Where there is a steep descent in the dependence of $\vth^{(100)}(r=R)$ on $\vartheta$
and therefore a large corresponding tangential viscous stress $\sigrt^{(99)}(r=R)$
for $\ls=0$, the curves are considerably smoothed out when the fluid is allowed to slip.
\begin{figure}[tb]
\begin{minipage}{0.4\textwidth}
\caption{Influence of slipping on $\vth^{(N)}(R)$ over $\vartheta$ for $N=100$.
Both the oscillations and the steep descent to zero are considerably
smoothed out when a finite slip length is taken into account.\\[1cm]}
\label{vth_Nav}
\caption{Influence of slipping on the relevant stress component $\sigrt^{(N)}(R)$ over
$\vartheta$ for $N=99$. The greater the slip length, the more are the
oscillations damped, \emph{i.e.}, the more is the stress relieved.}
\label{sigrth_Nav}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics[width=7cm]{vth_Nav}\\[0.5cm]
\includegraphics[width=7cm]{sigrth_Nav}\fl
\end{minipage}\hspace{1.5cm}
\end{figure}
\subsection{Magnetic torque}
In order to obtain an expression for the angular velocity $\itom$, we utilize
the fact that the viscous torque (\ref{viscous_torque_Nav}) must compensate for
the magnetic torque which is calculated now.
The vector of the applied magnetic field rotates within the $xz$-plane,
generating a torque in $y$-direction, so that the external magnetic field is denoted by
\bq
\bi{G} = \real\{\mathbf{\hat{\bi{G}}}\}
, \qquad \mathbf{\hat{\bi{G}}} = G \,\ehoch{\ii\omega t}(-\ii\eix+\ez)
\eq
with $\omega=2\pi f$ being the rotation frequency of the field and
\bq \chi = \chi' - \ii\chi'' = \chi(f) \eq
the frequency dependent magnetic susceptibility of the sphere. With respect to the amplitude
of the magnetic field, the susceptibility is assumed to be constant.
The sphere is supposed to be magnetized homogeneously, having the overall magnetization
(see for example \S\S\ 8 and 29 in \cite{LLmag})
\bq
\bi{M} = \real\{\mathbf{\hat{\bi{M}}}\}
, \qquad \mathbf{\hat{\bi{M}}} = \frac{G\chi}{1+\frac{\chi}{3}}
\,\ehoch{\ii\omega t}(-\ii\eix+\ez)
\eq
so that the magnetic torque acting on it in the stationary state is given by \cite{rosensweig1985}
\bq
\bi{T}_\mathrm{mag} = \mu_0 V\ey\,(M_z G_x-M_x G_z)
= \frac{4\pi}{3}R^3 \frac{ \mu_0G^2\chi''}%
{(1+\frac{\chi'}{3})^2+(\frac{\chi''}{3})^2}\,\ey \,.
\eq
This must compensate for the viscous torque
\bq
\bi{T}_\mathrm{vis} = -\pi\eta\,\itom R^3 \, \Sigma(\ls)\,\ey\,.
\label{Om_from_torque}
\eq
Here, the right-hand side of (\ref{viscous_torque_Nav}) is abbreviated
by $\Sigma(\ls)$, reminding us that it includes an infinite series
which depends on the slip length and cannot be computed analytically
in closed form. The equality of viscous and magnetic torques poses the
last boundary condition which makes sure that the rotational and,
consequently, also the translational motion of the sphere be not
accelerated, and finally gives the rotation frequency of the sphere:
\bqa
\itom = \frac{4}{3}
\frac{\mu_0 G^2 \chi''}%
{\left[(1+\frac{\chi'}{3})^2+(\frac{\chi''}{3})^2\right]
\eta\,\Sigma(\ls)}
\equiv \frac{8}{3}\frac{\mathfrak{M}}{\eta\,\Sigma(\ls)}
\eqa
The speed with which the sphere advances on the fluid surface is given by the negative
of the velocity field at $r\to\infty$. In this limit, only the $(\ell=1)$-terms remain
and the corresponding factor from the Navier slip condition
can be neglected because of $\ls\ll R$:
\bqa
\vv_{\mathrm{drop}}
= -\frac{\itom R}{2}\left(
\eqalign{\sth\cph\\ \cth\cph\\ -\sph}
\right) = -\frac{4}{3}\frac{\mathfrak{M}R}{\eta\,\Sigma(\ls)}\,\eix
\label{solid_sphere_speed}
\eqa
\subsection{Fluid (half-)sphere}\label{sec_fluid}
Although a definite result has been obtained for the speed of the magnetic sphere,
it cannot be compared to experimental data so easily.
It still depends on an unknown parameter, the slip length $\ls$, which cannot simply
be treated as a fit parameter. Due to the very weak dependence of the viscous torque
on the expansion order, it poses a formidable numerical problem to obtain the slip
length for a given torque, so it would be of advantage to obtain an expression for the
drop speed that does not depend on such a parameter.
In addition, one could expect a model containing a \textit{liquid drop} to be
more realistic than one with a {\it solid sphere}. For these reasons the ferrofluid drop
is now considered liquid, though still spherical, being also subject to the hydrodynamic
equations like the surrounding liquid. The Navier slip condition is replaced by the
condition of continuous velocities and stresses at the interface between the two liquids.
All other boundary conditions remain as before, including the addition of the mirror
image. As a consequence of the requirement of a flat "surface" ($\vth=0$ at $z=0$ for
all $r>R$), it is not possible to obtain a spherical inner (i) velocity field:
$\vth^{\mathrm{(i)}}$ is rendered zero within the whole section $z=0$ when the
corresponding outer (o) component $\vth^{\mathrm{(o)}}$ is demanded to vanish on the
whole contact circle $z=0$, $r=R$.
However, when the boundary conditions are posed in analogy to the previous
section, a flow field is obtained which proves to be very useful. As the field
becomes completely horizontal within the plane of symmetry, it is suggested
that only the lower half-sphere is identified with the ferrofluid drop,
\emph{i.e.}, after solving the mirror image set-up, the whole upper half-space
is neglected, resulting in the flow field displayed in figure \ref{velc_1}.
\begin{figure}[tb]
\centering\includegraphics[width=11cm]{velc1_frame.eps}
\caption{Flow field of the liquid half-sphere within the plane \mbox{$y=0$}.\\}
\label{velc_1}
\end{figure}
The same differential equations (\ref{cont}), (\ref{stokesrot}) and ansatz
(\ref{expan_vr}), (\ref{expan_real}) together with the requirement that the velocity be
finite at $r=0$ yield for the radial functions of the inner velocity field ($\ell>0$):
\bqa
f_{00}^{\mathrm{(i)}}(r) \equiv 0 \\
\iflm(r) = \qlm r^{\ell+1} + \Blm r^{\ell-1} \\
\iglm(r) = \frac{\ell+3}{\ell(\ell+1)}\,\qlm r^{\ell+1} + \frac{\Blm}{\ell}\, r^{\ell-1}
\\
\ihlm(r) = \pplm r^\ell
\eqa
Starting point for the velocity components of the surrounding liquid are again the
radial functions (\ref{hlm_a}) - (\ref{glm_cd}).
For simplicity it is still assumed that the drop remains spherical, \emph{i.e.},
\bq
v_r^{(\mathrm{i})}(R) = v_r^{(\mathrm{o})}(R) = 0 \qquad \forall\,\vartheta,\varphi,
\eq
instead of demanding that the normal stresses be continuous at $r=R$.
As mentioned above, the tangential components $\vth$ and $\vphi$
must be continuous. Due to the orthogonalities (\ref{orth2}) and (\ref{orth3}) this
condition reduces to the radial functions $\glm$ and $\hlm$ being continuous.
Furthermore, the tangential forces must cancel out at every point on the
spherical interface so that the tangential stresses are pointwise continuous.
The latter consist of viscous stresses
$\sigma_{r\vartheta/\varphi}^{\mathrm{(vis)}}\equiv\sigma_{r\vartheta/\varphi}$
and magnetic stresses \cite{shliomis}
\bqa
\sigma_{ij}^{(\mathrm{mag})} = \mu_0 H_i H_j - \frac{\mu_0}{2}\,H^2 \delta_{ij}
+ \frac{\mu_0}{2}\,(M_i H_j - M_j H_i) \label{mag_stress_tensor}
\eqa
where $i,j=x,y,z$ and the local magnetic field is given by
\bq
\bi{H}=\real\{\mathbf{\hat{\bi{H}}}\}, \qquad
\mathbf{\hat{\bi{H}}} = \frac{G}{1+\frac{\chi}{3}}\, \ehoch{\ii\omega t}\,(-\ii\eix+\ez),
\eq
assuming a linear magnetization law
\bq
\mathbf{\hat{\bi{M}}} = \chi\mathbf{\hat{\bi{H}}}.
\eq
The quantities $\bi{M}$, $\bi{G}$, $\chi$, and $\mathfrak{M}$ are defined as in the
previous section. For the condition of continuous tangential stresses, the symmetric
part of the magnetic stress tensor (\ref{mag_stress_tensor}) need not be considered
since it is the same on both sides of the interface due to the usual boundary
conditions for $\bi{H}$.
The antisymmetric part, on the other hand, is the crucial one which leads to the
propagation of the drop. It shall be denoted by $\sigma_{ij}^{(\mathrm{m})}$. Because
of antisymmetry in addition to $\hat{{H}}_y=\hat{M}_y=0$, only one independent
Cartesian component is left:
\bq
{\sigma_{xz}^{(\mathrm{m})}} =
-\frac{\mu_0}{2}\,\frac{G^2 \chi''}%
{\left(1+\frac{\chi'}{3}\right)^2+\left(\frac{\chi''}{3}\right)^2}
= -\mathfrak{M}
\eq
This gives the tangential magnetic stresses
\bqa
{\sigrt^{(\mathrm{m})}} = \mathfrak{M}\cph\\
{\sigrp^{(\mathrm{m})}} = -\mathfrak{M}\cth\sph.
\eqa
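Here $\sigma^{(\mathrm{m})}$ denotes the time average over one field period; the expressions above follow by the standard steps of averaging the products of the oscillating fields via $\langle\real\{\hat{A}\}\,\real\{\hat{B}\}\rangle = \frac{1}{2}\real\{\hat{A}\hat{B}^{*}\}$ and of projecting the single Cartesian component onto the spherical unit vectors:
\bqa\fl
\sigma_{xz}^{(\mathrm{m})}
= \frac{\mu_0}{4}\,\real\left\{\hat{M}_x\hat{H}_z^* - \hat{M}_z\hat{H}_x^*\right\}
\,,\qquad
\sigma_{r\vartheta/\varphi}^{(\mathrm{m})}
= \sum_{i,j}\,(\er)_i\,\sigma_{ij}^{(\mathrm{m})}\,(\bi{e}_{\vartheta/\varphi})_j\,.
\eqa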
Now the boundary condition reads
\bq
F_{\vartheta/\varphi}^{\mathrm{(m)}}(R)
= F_{\vartheta/\varphi}^{\mathrm{(vis,i)}}(R)
+ F_{\vartheta/\varphi}^{\mathrm{(vis,o)}}(R) \label{forces_compensate}
\eq
because the accelerating magnetic force must be compensated by the viscous ones.
With $F_{\vartheta/\varphi}=\sigma_{r\vartheta/\varphi}\,\mathbf{n}\!\cdot\!\er$ and
the convention that the surface normal $\mathbf{n}=+\er$ for a force that acts on the
\textit{outer} surface and $\mathbf{n}=-\er$ for a force that acts on the \emph{inner}
surface, this yields in terms of stresses
\bq
\sigma_{r\vartheta/\varphi}^{\mathrm{(m)}}
+ \sigma_{r\vartheta/\varphi}^{\mathrm{(vis,o)}}(R)
- \sigma_{r\vartheta/\varphi}^{\mathrm{(vis,i)}}(R) = 0 \label{sum_of_sigmas}
\eq
for all $\vartheta,\varphi$.
As before, the viscous force in $x$-direction must vanish.
The resulting expressions of the components of inner and outer flow field
are given explicitly in \ref{thirdapp}.
Again, the speed of the drop in the laboratory frame is obtained by evaluating the
negative of the outer velocity field at $r\rightarrow \infty$, giving
\bq
\vv_\mathrm{drop}^\mathrm{liq} =
- \frac{1}{2} \frac{\mathfrak{M}R}{2\eta^{(\mathrm{o})}+3\eta^{(\mathrm{i})}}\,\eix\,.
\label{drop_speed}
\eq
Although this result looks very similar to the one obtained in the previous section,
\bq
\vv_\mathrm{drop}^\mathrm{sol} = - \frac{4}{3}\frac{\mathfrak{M}R}{\eta^{\mathrm{(o)}}
\Sigma(\ls)}\,\eix\,, \label{drop_speed_Nav}
\eq
it clearly has two advantages. First, it purely consists of parameters that are
experimentally measurable or tunable (sphere radius $R$, viscosities $\eta$, and via
$\mathfrak{M}$ susceptibility $\chi$ and external magnetic field amplitude $G$).
Second, there is no need of calculating
numerically an infinite sum.
Since no singularity has occurred in the scope of the calculations for the
liquid sphere, it can be compared to a model where slipping is taken into
account. The stresses which diverge within the framework of the very rigid
no-slip condition are relieved both when the surrounding fluid is allowed to
slip over the solid and when the solid is replaced by an elastic or, as in our
case, viscous medium. Indeed, the crucial viscous stress component
$\sigrt^{\mathrm{liq}}\equiv\sigrt^{\mathrm{(vis,o)}}$ is essentially identical
to the one obtained from the velocity field with Navier slip, the only
differences being constant factors, at least when $\ell\gg1$:
\bqa\fl
\frac{\sigrt^{\mathrm{sol}}(R)}{\mathfrak{M}/\Sigma(\ls)} =
-\frac{4}{1+2{\ls}/{R}} \,\cph\cth \nonumber\\
- \frac{8}{3} \cph \sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\parth \pleins(\cth)\,
\frac{(-1)^\frac{\ell-1}{2}}{{1}/({2\ell+1})+{\ls}/{R}}\cdot
\frac{(2\ell+1)(\ell-2)!!}{\ell(\ell+1)(\ell+1)!!}
\nonumber\\
+ \frac{16}{3} \frac{\cph}{\sth}
\sum_{\stackrel{\ell=2}{\ell\textrm{ \scriptsize even}}}^\infty
\pleins(\cth)\,
\frac{(-1)^{\frac{\ell}{2}}}{{1}/{(\ell+2)}+{\ls}/{R}}\cdot
\frac{(2\ell+1)(\ell-3)!!}{\ell(\ell+1)(\ell+2)!!}
\eqa
\bqa\fl
\frac{\sigrt^{\mathrm{liq}}(R)}{\mathfrak{M}} =
-\frac{3}{2}\,\frac{1}{2 + 3{ \eta^{\mathrm{(i)}}}/{\eta^{\mathrm{(o)}} }}
\,\cph\cth \nonumber\\
- \frac{2\cph}{1+{ \eta^{\mathrm{(i)}}}/{\eta^{\mathrm{(o)}} }}
\sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\parth \pleins(\cth)\, (-1)^\frac{\ell-1}{2} \,
\frac{(2\ell+1)(\ell-2)!!}{\ell(\ell+1)(\ell+1)!!} \nonumber\\
+2\,\frac{\cph}{\sth}
\sum_{\stackrel{\ell=2}{\ell\textrm{ \scriptsize even}}}^\infty
\frac{\pleins(\cth)\,(-1)^\frac{\ell}{2}}%
{ 1 + ({\ell-1})/({\ell+2})\cdot{\eta^{\mathrm{(i)}}}/{\eta^{\mathrm{(o)}}} }
\cdot\frac{(2\ell+1)(\ell-3)!!}{\ell(\ell+1)(\ell+2)!!}
\eqa
\section{Comparison of experimental and theoretical results}\label{sec_comp}
The main result of the theory for the drop speed (\ref{drop_speed}) reads
explicitly
\bq
v_\mathrm{drop}^\mathrm{liq} =
- \frac{R}{4}\,\frac{\mu_0 G^2 }{2\eta^{(\mathrm{o})}+3\eta^{(\mathrm{i})}}
\cdot\frac{\chi''}{(1+\frac{\chi'}{3})^2+(\frac{\chi''}{3})^2}
. \label{drop_speed_expl}
\eq
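As a quick plausibility check (our own back-of-the-envelope script, not part of the original analysis), evaluating (\ref{drop_speed_expl}) with the material parameters quoted in this section gives a speed of a few millimetres per second, consistent with the magnitudes in figure~\ref{v_over_field}:
\begin{lstlisting}
from math import pi

mu0   = 4 * pi * 1e-7    # vacuum permeability [V s / (A m)]
G     = 844.0            # field amplitude [A/m], i.e. 0.844 kA/m
R     = 1.1e-3           # drop radius [m] for V = 5 microlitres
eta_o = 1.27e-3          # viscosity of the carrier liquid [Pa s]
eta_i = 5.4e-3           # viscosity of the ferrofluid [Pa s]
chi1, chi2 = 4.66, 3.25  # chi', chi'' at f = 0.8 kHz

v = (R / 4) * mu0 * G**2 / (2*eta_o + 3*eta_i) \
    * chi2 / ((1 + chi1/3)**2 + (chi2/3)**2)
print(v)  # about 5.5e-3 m/s, i.e. a few mm/s
\end{lstlisting}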
\begin{figure}[tb]
\begin{minipage}{0.4\textwidth}
\caption{The drop speed as a function of the magnetic field amplitude $G$
for {$f=0.8$\,kHz} and {$V=5\,\tcmu$l}. The blue circles mark the
measured data, the red line gives the theoretical curve according to
(\ref{drop_speed_expl}), taking into account the proper material values.}
\label{v_over_field}
\end{minipage}\hspace{0.8cm}
\begin{minipage}{0.6\textwidth}
\includegraphics[width=8cm]{v_over_field.eps}
\end{minipage}
\end{figure}
\begin{figure}[htb]
\begin{minipage}{0.4\textwidth}
\caption{Drop velocity versus drop radius for an alternating magnetic far field
with $G = 0.844$\,kA/m and $f=0.8$\,kHz. The blue dots mark the experimental
results, the solid line the theoretical outcome.} \label{v_over_vol}
\end{minipage}\hspace{0.8cm}
\begin{minipage}{0.6\textwidth}
\includegraphics[width=8cm]{v_over_R.eps}
\end{minipage}
\end{figure}
It can be compared directly with the data obtained in the experiments. They have
been measured following the procedure described in section~\ref{sec_experiment}.
Figure~\ref{v_over_field} presents a plot of the drop velocity versus the
magnetic field amplitude $G$ for a driving frequency of $f=0.8$\,kHz. We have
put a droplet of volume $V=5\,\tcmu$l, corresponding to a sphere of radius
$R\approx 1.1$\,mm, on top of the liquid layer. The measured velocities (marked
by full circles) show a monotonic increase with $G$. The solid line gives the
values of (\ref{drop_speed_expl}), taking into account the viscosities of
the ferrofluid, $\eta^\mathrm{(i)}=5.4$\,mPa\,s and of the liquid below, which
amounts to $\eta^\mathrm{(o)}=1.27$\,mPa\,s. The driving frequency enters into
expression (\ref{drop_speed_expl}) only via the real and imaginary part of the
magnetic susceptibility which were determined as $\chi'=4.66$ and
$\chi''=3.25$, respectively, for the given frequency
(cf.~figure~\ref{fig:chi}). As can be seen,
the values for the liquid half-drop solution represent the given experimental
data extremely well.
In a series of measurements different drops with a volume ranging from $1$ to
$50\,\tcmu$l were investigated. For comparison with theory we assume a
spherical symmetry and estimate the drop radius $R$ from the drop volume $V$.
As shown in figure~\ref{v_over_vol}, the measured drop velocity increases with
the radius of the drops. The solid line marks the result of (\ref{drop_speed_expl})
for an amplitude of $G=0.844$\,kA/m, as set in the experiment.
Again we find a quantitative agreement of the half-drop solution with the
experimental data.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{v_over_freq.eps}
\caption{Frequency dependence of the reduced drop velocity $u$ for $V=5\,\tcmu$l.
The full circles mark the experimental data, the solid line gives
the theoretical curve where the measured frequency dependence of
$\chi'(f)$ and $\chi''(f)$ has been plugged in. For all data
$G_z$ was fixed to 0.844\,kA/m, but $G_x$ was decreasing with increasing $f$.
The green open squares are indicating the actual ratio $G_x/G_z$.}
\label{v_over_freq}
\end{figure}
As a further parameter the driving frequency $f$ was varied in the experiment.
When the frequency dependence of the drop speed was determined, the vertical
field was fixed at $G_z=0.844$\,kA/m. However, the frequency dependent
inductance of the outer coils did not permit to keep $G_x$ at this value for
the whole range of frequencies (the ratio $G_x/G_z$ is indicated at the
r.h.s.~of figure~\ref{v_over_freq}). In order to obtain a magnitude which is
independent of $G$, we introduce the reduced velocity
\begin{equation}
u = v_\mathrm{drop}^\mathrm{liq} \frac{\eta^{\mathrm{(i)}}}{R \mu_0 G_x G_z},
\label{eq:pump.u}
\end{equation}
where $G_x$ denotes the horizontal and $ G_z$ the vertical field amplitude.
Within the linear regime this quantity should be independent of the choice of
the amplitudes. Figure~\ref{v_over_freq} shows an increase of the reduced drop
velocity (marked by solid circles) up to a maximum at $f=10$\,kHz. The
theoretical values (solid line) stem from (\ref{drop_speed_expl}), where the
material parameters and the measured frequency dependence of the complex
susceptibility $\chi'(f) + \ii\,\chi''(f)$, as presented in
figure~\ref{fig:chi}, have been utilized. In order to be able to compare the
predictions with the experimental results, $v_\mathrm{drop}^\mathrm{liq}$ is
scaled according to (\ref{eq:pump.u}). We observe a good agreement up to a
frequency of about $f=1.5$\,kHz. Beyond that point, the theoretical curve
deviates from the experimental results. The former shows a maximum at about
$f=3.5$\,kHz, while the measured velocity is largest at $f=10$\,kHz, and the
maximum values differ by a factor of two.
\section{Discussion and Conclusion}\label{sec_conc}
The measured propagation velocity of the droplet shows a parabolic dependence
on the magnetic field amplitude, and a linear dependence on the radius of
the droplet. Both experimental observations are quantitatively described by the
liquid half-drop solution, without any free fitting parameter. The theory just
needs the magnetic field amplitude, the complex susceptibility and the
viscosities of both fluids (\emph{i.e.}, the ferrofluid and the liquid layer).
Taking into account the over-simplifying assumption of a half-spherical drop,
the theory describes the experimental data remarkably well for driving
frequencies up to $1.5$\,kHz.
For higher driving frequencies, however, (cf.~figure~\ref{v_over_freq}) a
discrepancy between experiment and theory of up to 100\,\% is observed. This
discrepancy may have several origins. Firstly, owing to the frequency dependent
inductance of the outer coils, the rotating magnetic field becomes elliptical at higher frequencies. Following
\cite{lebedev2003}, the nonlinear effects of an elliptical field are
expected to diminish the flow within the droplet. This, however, does not
explain our experimental data, which exceed the theoretical predictions. Of
course our experimental situation differs from that of \cite{lebedev2003}
where an elliptical drop can freely rotate in the horizontal plane. In our
case the horizontal surface prevents a free rotation of an elliptical droplet
in the vertical plane.
Secondly, for higher driving frequencies the liquid-liquid interface of a fully
immersed drop develops spikes and resembles a "spiny starfish", as reported in
Refs.\,\cite{bacri1994,lebedev2003}. This may also happen for the lower part of
our half-immersed, swimming drop. A complex interface of the two liquids may
enhance the interaction between the fluids and thus increase the propulsion
-- similar to a paddle wheel of a Mississippi steam boat. This can of course
not be covered by the simplifying model ansatz. The shape and dynamics of the
liquid-liquid interface shall be studied in forthcoming experiments.
The main achievement of this article is the demonstration that rotating fields can
transport ferrofluidic drops. Our experimental results can be quantitatively explained
without any free fitting parameters.
Moreover the theory gives an explicit solution of the flow fields both for a
rotating solid magnetic sphere and a spherical ferrofluid drop which both are
half-way immersed in a liquid. The similarity of the final results of both
cases demonstrates the equivalence of Navier slip at a solid surface on the one
hand and the continuity of tangential stresses at a fluid-fluid boundary on the
other hand.
For a quantitative description of "magnetic pumping" by means of a rotating
field a droplet is more suitable than a plain ferrofluidic layer
\cite{krauss2006}. For the droplet one does not need any tracer particles (the
droplet is its own tracer), and the demagnetization factor of an elliptical
droplet is well defined.
Future experiments shall unveil whether the half-drop model works also in the
pico-liter range. Here droplets can be dosed very precisely (see e.g.
Ref.\,\cite{thorsen2001}) and their position may be detected by magnetic
sensors \cite{pekas2004}. Taking advantage of (\ref{drop_speed_expl}), where the speed
scales linearly with $R$, one may even select the generated droplets by size via their speed.
We propose that the controlled transport of small amounts of liquid to any
desired position on top of a liquid two dimensional layer is a promising
technique for microfluidic applications. There ferrofluidic drops are commonly
manipulated utilizing local field gradients, which are locally created by
embedded wires \cite{pamme2006} or planar coils \cite{nguyen2006}. In contrast,
our driving technique yields a constant drop velocity globally, i.e.~on the
complete surface.
\begin{ack}
The authors would like to thank Jens Eggers and Thomas Fischer for valuable discussions concerning the theoretical modelling. In addition they thank Norbert Buske for drawing their
attention to the per-fluorinated liquid, Nina Matoussevitch for her excellent
magnetic fluid, and Marit {\O}verland for experimental support. Moreover
R.K.~and R.R.~gratefully acknowledge financial support from the collaborative
research center SFB 481 via project B9.
\end{ack}
\begin{appendix}
\section{Explicit computation of the flow field below the solid sphere}
\label{flowfieldapp}
The velocity field is expanded in vector spherical harmonics according to
\cite{sorokin,MoFe}
\bqa\fl
\vv\of
= \mysum\Bigg\{ \er\flm(r) + \glm(r)r\nabla + \hlm(r)\vecr\times\nabla \Bigg\}\ytp
\label{expan1}
\eqa
with the normalized spherical harmonics $\ylm$ and the Legendre functions $\plm$
as defined in \cite{cohentann} for $\ell \geq 0$ and $0\leq m\leq \ell$:
\bqa
\ytp &= (-1)^m\sqrt{\frac{2\ell +1}{4\pi}\frac{(\ell-m)!}{(\ell+m)!}}
\times\,\ehoch{\ii m\varphi}\pth \\
&\equiv \klm\,\ehoch{\ii m\varphi}\pth \label{klmdef}
\eqa
\bq
\ytpmi = (-1)^m\ytpst \label{ylm_minus}
\eq
\bq
\pth = \frac{(-1)^\ell}{2^\ell\ell!}(\sth)^m \dlm(\sth)^{2\ell} \label{plm_mnotneg}
\eq
When the expansion (\ref{expan1}) is put into (\ref{cont}) and (\ref{stokesrot}),
these \emph{partial} differential equations for the \emph{vector} $\vv$
are transformed to \emph{ordinary} differential equations for the
\emph{scalar} radial functions $\flm$, $\glm$, and $\hlm$. Before this is done,
equation (\ref{expan1}) is simplified by several means.
With the Nabla operator in spherical coordinates
\bq
\nabla
=\er\parr + \frac{1}{r}\,\etheta\parth + \frac{1}{r\sin\vartheta}\,\ephi\parphi
\eq
where $\partial_j\equiv{\partial}/{\partial j}$, the velocity components read:
\bq
v_r = \mysum \flm \ylm \label{expan_vr}
\eq
\bqa
\left(
\eqalign{\vth \\ \vphi}
\right)
=\mysum \left\{ \glm
\left(
\eqalign{\parth\ylm \\ \frac{\ii m}{\sth}\ylm }
\right) + \hlm
\left(
\eqalign{-\frac{\ii m}{\sth}\ylm \\ \parth\ylm }
\right) \right\} \label{expan_vthphi}
\eqa
With (\ref{ylm_minus}) and the fact that the velocity field is real valued
it follows
\bq
\glmmi = (-1)^m\glm,\quad \hlmmi = (-1)^m\hlm \,. \label{gh_minus}
\eq
Furthermore, when the symmetry of the problem with respect to the $xz$-plane, \emph{i.e.}
\bq
\vth(-\varphi) = \vth(\varphi),\qquad \vphi(-\varphi) = -\vphi(\varphi)\label{symm1}
\eq
is taken into account, it can be shown with the aid of relations (\ref{ylm_minus})
and (\ref{gh_minus}) that
\bqa\fl
\left(\eqalign{\vth \\ \vphi }\right)
=\sum_{\ell=1}^\infty\sum_{m=0}^{\ell}\!\strut^{'} 2\klm\!\left\{
\, \glm \left( \eqalign{\cos(m\varphi)\parth\plm\\ \sin(m\varphi)\frac{-m}{\sth}\plm}
\right)
- \hlm \left(
\eqalign{\cos(m\varphi)\frac{-m}{\sth}\plm\\ \sin(m\varphi)\parth\plm}
\right) \right\}
\nonumber\\
\equiv 2\mysum \Big\{ \glm\vAlm+\hlm\vBlm \Big\}\,. \label{expan_real}
\eqa
The prime at the second sum indicates that the terms with $m=0$ are divided by two.
When the boundary conditions are applied
it will be important that the two velocity components of (\ref{expan_real})
always be considered together, because $\vAlm=\vAtp$ and $\vBlm=\vBtp$ fulfil the
orthogonality relations
\bqa
\la\vAlm,\vBlmp\ra &=& 0 \label{orth2} \\
\la\vAlm,\vAlmp\ra &=& \la\vBlm,\vBlmp\ra = \frac{1}{2}\ell(\ell+1)\delta_{\ell\ell'}
\delta_{mm'} \label{orth3}
\eqa
with the vector inner product
\bq
\la\bi{X}_1,\bi{X}_2\ra
:= \int_0^{2\pi}\dphi\int_0^\pi\dth\,\sth\,(\bi{X}_1^*)\transp\bi{X}_2\,,
\label{vecprod}
\eq
where $^*$ denotes the complex conjugate and $\transp$ the transpose of the vector.
By computing the inner product of $\bi{A}_{\ell'm'}$ or $\bi{B}_{\ell'm'}$ with
(\ref{expan_real}) one can reduce the infinite series to one function $g_{\ell'm'}(r)$
or $h_{\ell'm'}(r)$, respectively.
If $\vth$ and $\vphi$ were considered separately, it would not be possible to isolate
the radial functions, because $\parth P_{\ell'm'}$ and $\pm\frac{\ii m}{\sth}\plm$ alone
are \emph{not} orthogonal.
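As a one-line illustration of this projection (using the normalizations (\ref{orth2})
and (\ref{orth3}); the $m'=0$ terms inherit the extra factor $1/2$ of the primed sum),
the inner product of $\vAlmp$ with the expansion (\ref{expan_real}) yields for $m'\geq 1$
\bq
\la\vAlmp\,,\left(\eqalign{\vth \\ \vphi}\right)\ra
= \ell'(\ell'+1)\,g_{\ell'm'}(r)\,,
\eq
and the analogous projection onto $\vBlmp$ isolates $h_{\ell'm'}(r)$.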
Now putting the expansions (\ref{expan_vr}) and (\ref{expan_real}) into the basic
equations (\ref{cont}) and (\ref{stokesrot}) gives the following ordinary differential
equations for the radial functions with $\ell>0$ ($g_{00}(R)=h_{00}(R)\equiv 0$ can be
assumed w.l.o.g.):
\bq
f_{00}' + \frac{2}{r} f_{00} = 0 \label{difffnull}
\eq
\bqa\fl
\frac{r}{\ell(\ell+1)}\flm'''' + \frac{8}{\ell(\ell+1)}\flm'''
+\frac{2}{r} \left[ \frac{6}{\ell(\ell+1)} - 1 \right]\flm''
- \frac{4}{r^2}\flm' +\frac{1}{r^3} \Bigl[ \ell(\ell+1) - 2 \Bigr]\flm \nonumber\\= 0
\label{difff}
\eqa
\bq
\glm(r) = \frac{1}{\ell(\ell+1)} \Bigl[ r\flm' + 2\flm \Bigr] \label{diffg}
\eq
\bq
\hlm'' + \frac{2}{r}\hlm' - \frac{\ell(\ell+1)}{r^2}\hlm = 0 \label{diffh}
\eq
These equations are solved by a power-law ansatz which, together with the requirement
that the velocity be finite as $r\to\infty$, leads to
\bqa
h_{\ell m}(r) &= \frac{a_{\ell m}}{r^{\ell+1}},\qquad \ell>0 \label{hlm_a}\\
f_{00}(r) &= \frac{d_{00}}{r^2} \\
f_{1m}(r) &= b_{1m}+\frac{c_{1m}}{r}+\frac{d_{1m}}{r^3} \\
g_{1m}(r) &= b_{1m}+\frac{c_{1m}}{2r}-\frac{d_{1m}}{2r^3} \label{g1m_cd}
\eqa
and for $\ell>1$:
\bqa
f_{\ell m}(r)
&=& \frac{c_{\ell m}}{r^\ell}+\frac{d_{\ell m}}{r^{\ell+2}}\\
g_{\ell m}(r) &=& \frac{-1}{\ell(\ell+1)}\left[\frac{(\ell-2)c_{\ell m}}{r^\ell}
+\frac{\ell d_{\ell m}}{r^{\ell+2}}\right] \label{glm_cd}
\eqa
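As a quick check of the power-law ansatz (making explicit a step that is only implicit
above), inserting $\hlm = r^{s}$ into (\ref{diffh}) gives the indicial equation
\bq
s(s+1) = \ell(\ell+1)\,,\qquad\textrm{hence}\quad s=\ell \quad\textrm{or}\quad s=-(\ell+1)\,,
\eq
and the requirement of a finite velocity as $r\to\infty$ discards the growing branch
$r^{\ell}$, leaving precisely the form (\ref{hlm_a}); the fourth-order equation
(\ref{difff}) is treated in the same manner, with the divergent branches dropped.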
The coefficients $\alm$, $b_{1m}$, $\clm$, and $\ddlm$ are determined by successively
applying the remaining boundary conditions. In the following section, the ferrofluid
drop will be treated as a solid sphere. Its angular velocity $\itom$ is introduced as
a parameter that will have to be determined by the equality of the viscous and magnetic
torques generated by the surrounding liquid and the external field, respectively.
It should be noted here that the orthogonality relations (\ref{orth2}) and (\ref{orth3})
would not be valid if the $\vartheta$-integral within the scalar product (\ref{vecprod})
were only carried out up to $\vartheta=\pi/2$. On the other hand, the liquid only
occupies the lower half-space in the given problem, so we perform a little trick in
order to be able to integrate over the whole sphere, \emph{i.e.}, we take advantage of our
equations being linear and employ the superposition principle by adding
the {\it mirror image} of our problem with respect to the $xy$-plane (fluid above, void
below the sphere). The problem can be solved in this way and the resulting flow field in
the upper half space is simply neglected in the end.
Within the framework of this ``mirror image construction'' the following boundary
conditions are employed:
\begin{itemize}
\item
Navier slip at the sphere surface
\bq
\left[\parr-\frac{1}{R}\right]\left.
\left(
\eqalign{\vth\\ \vphi}
\right)\right|_{r=R}
= \frac{1}{\ls}\left[\left(
\eqalign{\vth(r=R)\\ \vphi(r=R)}
\right)-\bi{U}\right] \label{BCrot}
\eq
with the slip length $\ls\ll R$ and the velocity $\bi{U}$ of the sphere surface
\bq
\bi{U}=\left\{
\eqalign{
\phantom{-R\vecom} 0 & \qquad \textrm{for }\vartheta= {\pi}/{2} \\
\phantom{-}R\vecom\times \er & \qquad \textrm{for }\vartheta< {\pi}/{2} \\
-R\vecom\times \er & \qquad \textrm{for }\vartheta> {\pi}/{2} }
\right.
\eq
implying
\bq
v_r(r=R) = 0\,.
\eq
\item
Flat "interface":
\bq
\vth\left(\vartheta=\frac{\pi}{2}\right) = 0
\qquad \forall \,r\geq R \label{BCcontsymm}
\eq
\item
No resulting (viscous) force on the sphere:
\bqa
F_i &=& \int_0^{2\pi}\dphi\int_0^{\pi/2} \dth\sth
\sum_j\sigma_{rj}(r=R)\,\ej\cdot\ei = 0 \label{BCforce}
\eqa
with $i\in\{x,y,z\}$, $j\in\{r,\vartheta,\varphi\}$, and $\ei$, $\ej$ the unit vectors in
respective direction. The relevant components of the viscous stress tensor $\sigma_{rj}$
are taken as defined in \cite{LLhydro}.
As is obvious from the given symmetry, only $F_x$ will be different from zero and
thereby determine the last coefficient.
Since the magnetic field only creates a torque but no linear force, this
boundary condition provides the requirement of unaccelerated translational motion.
\end{itemize}
\subsection{Applying the boundary conditions}\label{bcapp}
The first coefficients are determined by the $r$-component of the Navier slip condition
\bq
v_r(R) = \mysum\flm(R)\,\ytp = 0
\eq
and the orthogonality of the scalar spherical harmonics $\ylm$ \cite{AbSt}:
\bqa
\flm(R) &=& 0\quad\forall\ell,m \\
&\Rightarrow&
\eqalign{d_{00\phantom{,}} = 0 \\
d_{1m} = -R^3b_{1m}-R^2c_{1m} \\
d_{\ell m} = -R^2c_{\ell m},\quad \ell>1}
\eqa
The coefficients $c_{\ell m}$ and $a_{\ell m}$ are obtained by applying
the appropriate vector inner product to the $\vartheta$- and $\varphi$-component of
the Navier slip condition
\bqa\fl
\left[ 1+\frac{\ls}{R}-\ls\parr \right]
\sum_{\ell=1}^\infty\sum_{m=0}^{\ell}\!\strut^{'} 2\klm\!\!
\left.\left[
\glm \left( \eqalign{\cos(m\varphi)\parth\\ \sin(m\varphi)\frac{-m}{\sth}}
\right)
- \hlm \left( \eqalign{ \cos(m\varphi)\frac{-m}{\sth}\\ \sin(m\varphi)\parth}
\right) \right]\!\plm\,\right|_R
\nonumber\\
= \left\{
\eqalign{
\phantom{-R\itom(\cph\,\etheta)} 0 & \qquad \textrm{for } \vartheta= {\pi}/{2}\\
\phantom{-}R\itom(\cph\,\etheta-\cth\sph\,\ephi) & \qquad \textrm{for }
\vartheta< {\pi}/{2} \\
-R\itom(\cph\,\etheta-\cth\sph\,\ephi) & \qquad \textrm{for } \vartheta> {\pi}/{2}}
\right.
\eqa
which is done here, by way of example, for the scalar product with $\vAlmp(\vartheta,\varphi)$
as defined in (\ref{vecprod}). The orthogonalities of the sine and cosine
functions yield
\bq
\left( 1+\frac{\ls}{R} \right)\glm(R) - \ls \,\glm'(R) = 0 \qquad \forall\;m\neq\pm1
\eq
and
\bqa\fl
\left[\left( 1+\frac{\ls}{R} \right)\glm(R) - \ls \,\glm'(R) \right] \ell(\ell+1) =
\nonumber\\
\pi\itom R\kleins\int_0^{\pi/2}\dth\sth
\Bigl[\parth + \cot\vartheta\Bigr]\pleins(\cth) \nonumber\\
-\pi\itom R\kleins\int_{\pi/2}^{\pi}\dth\sth
\Bigl[\parth + \cot\vartheta\Bigr]\pleins(\cth).
\eqa
Now from \cite{AbSt} one finds
\bq
\parth\pleins + \cot\vartheta\pleins = \ell(\ell+1)\plnull
\eq
and
\bqa
\int_0^1\du P_{\ell}(u) = \phantom{-}\int_{-1}^0\du P_{\ell}(u),&\qquad
\ell\textrm{ even}\\
\int_0^1\du P_{\ell}(u) = -\int_{-1}^0\du P_{\ell}(u)= (-1)^\frac{\ell-1}{2}
\frac{(\ell-2)!!}{(\ell+1)!!}
,&\qquad \ell\textrm{ odd}
\eqa
so that with the definition of $K_{\ell m}$ according to (\ref{klmdef}) one obtains
\bq
\left[\left( 1+\frac{\ls}{R} \right)g_{\ell 1}(R) - \ls \,g_{\ell 1}'(R) \right]= 0
\qquad \forall\;\ell\textrm{ even}
\eq
and for odd $\ell$
\bq\fl
\left[\left( 1+\frac{\ls}{R} \right)g_{\ell 1}(R) - \ls \,g_{\ell 1}'(R) \right]=
\itom R \,\sqrt{\frac{(2\ell+1)\pi}{\ell(\ell+1)}}\, (-1)^{\frac{\ell+1}{2}}
\,\frac{(\ell-2)!!}{(\ell+1)!!}\,.
\eq
With (\ref{g1m_cd}) and (\ref{glm_cd}) this gives in detail
\bqa
b_{10} \left[ \frac{3}{2}+3\frac{\ls}{R} \right] + \frac{c_{10}}{R}
\left[ 1+3\frac{\ls}{R} \right] = 0 \\
b_{1,\pm 1} \left[ \frac{3}{2}+3\frac{\ls}{R} \right] + \frac{c_{1,\pm 1}}{R}
\left[ 1+3\frac{\ls}{R} \right]
= \mp \itom R\, \sqrt{\frac{3\pi}{2}}
\eqa
\bq\eqalign{
c_{\ell m} = 0 \;\qquad\forall \;m\neq \pm 1 \\
c_{\ell, \pm 1} = 0 \qquad\forall \;\ell \textrm{ even} \\
c_{\ell,\pm 1} = \pm\frac{\itom}{2} \,\sqrt{\pi\ell(\ell+1)(2\ell+1)}\,
\frac{R^{\ell+1}\,(-1)^\frac{\ell+1}{2}}{1+(2\ell+1)\frac{\ls}{R}}\cdot
\frac{(\ell-2)!!}{(\ell+1)!!},\quad\ell\textrm{ odd}.}
\eq
The condition of a flat ``interface'' reads
\bqa\fl
g_{10}(r)K_{10}\Bigl[\parth P_{10}(\cth)\Bigr]_{\vartheta=\frac{\pi}{2}}
+ \sum_{m=\pm1} \sum_{\stackrel{\ell=1}{\ell\textrm{ \scriptsize odd}}}^\infty
\glm(r)\klm\Bigl[\parth\plm(\cth)\Bigr]_{\vartheta=\frac{\pi}{2}} \nonumber\\
+ \sum_{m=\pm1}\sum_{\stackrel{\ell=2}{\ell\textrm{ \scriptsize even}}}^\infty
m\hlm(r)\klm\plm(0)=0 .
\eqa
The sums vanish completely due to properties of the Legendre functions at zero
\cite{AbSt}, so that only the first term remains, giving
\bq
b_{10}\left[ 1+\frac{1}{2}\frac{R^3}{r^3} \right] + \frac{c_{10}}{2r}
\left[ 1+\frac{R^2}{r^2} \right] = 0\,.
\eq
Since this equation must be valid for arbitrary $r$, it follows that $b_{10}=c_{10}=0$.\\
In order to evaluate the force condition
\bqa\fl
F_x = R^2 \!\int_0^{2\pi}\dphi\int_0^{\pi/2}\dth\sth
\Bigl[ \sigrr(R)\sth\cph \nonumber\\
+\sigrt(R)\cth\cph -\sigrp(R)\sph \Bigr]
= 0
\eqa
the following integrals are needed:
\bqa\fl
\int_0^{\pi/2}\dth\,\sth\left[\frac{1}{\sth} +\cth\parth\right] \pleins(\cth)
= \int_0^{\pi/2}\dth\sqth\pleins(\cth) = \frac{4}{3}\,\delta_{\ell 1}
\eqa
Then the last coefficients are given by
\bq
b_{1,\pm 1} = \mp\,\frac{\itom R}{1+2\frac{\ls}{R}}\,\sqrt{\frac{\pi}{6}}\,.
\eq
\section{Resulting flow fields for the liquid half-sphere model}\label{thirdapp}
\bqa\fl
v_r^{(\mathrm{i})}
= \frac{3}{4}
\frac{\mathfrak{M} R}{2\eta^{(\mathrm{o})}+3\eta^{(\mathrm{i})}}
\, \sth\,\cph \left[ \frac{r^2}{R^2}-1 \right] \nonumber\\
+\frac{\mathfrak{M} R\cph}{\eta^{(\mathrm{o})}+\eta^{(\mathrm{i})}}
\sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\pleins(\cth)\,
\frac{r^{\ell-1}}{R^{\ell-1}}\, (-1)^\frac{\ell-1}{2}
\left[ \frac{r^2}{R^2}-1 \right]
\frac{(\ell-2)!!}{(\ell+1)!!}
\eqa
\bqa\fl
v_r^{(\mathrm{o})}
= \frac{1}{2}
\frac{\mathfrak{M} R}{2\eta^{(\mathrm{o})}+3\eta^{(\mathrm{i})}}
\,\sth\cph\left[ 1-\frac{R^3}{r^3} \right] \nonumber\\
+\frac{\mathfrak{M} R\cph}{\eta^{(\mathrm{o})}+\eta^{(\mathrm{i})}}
\sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\pleins(\cth)\, \frac{R^\ell}{r^\ell}
\left[ 1-\frac{R^2}{r^2} \right](-1)^\frac{\ell-1}{2}\,
\frac{(\ell-2)!!}{(\ell+1)!!}
\eqa
\bqa\fl
\left(
\eqalign{\vth^{(\mathrm{i})} \\ \vphi^{(\mathrm{i})} }
\right)
= \frac{3}{4}
\frac{\mathfrak{M} R}{2\eta^{(\mathrm{o})}+3\eta^{(\mathrm{i})}}
\left(
\eqalign{ \cph\cth\\ -\sph}
\right)
\left[ 2\frac{r^2}{R^2}-1 \right] \nonumber\\
+ \frac{\mathfrak{M} R}{\eta^{(\mathrm{o})}+\eta^{(\mathrm{i})}}
\left(
\eqalign{ \cph\,\parth \\ -\sph/\sth}
\right) \nonumber\\
\times\sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\pleins(\cth) \,
\frac{r^{\ell-1}}{R^{\ell-1}}
\left[ \frac{(\ell+3)}{(\ell+1)} \frac{r^2}{R^2}-1 \right]
\frac{(-1)^\frac{\ell-1}{2}}{\ell}\,
\frac{(\ell-2)!!}{(\ell+1)!!} \nonumber\\
+ 2\mathfrak{M} R \left(
\eqalign{ -\cph/\sth \\ \sph \,\parth}
\right) \nonumber\\
\times \sum_{\stackrel{\ell=2}{\ell\textrm{ \scriptsize even}}}^\infty
\frac{\pleins(\cth)\,(-1)^\frac{\ell}{2}}{(\ell+2)\eta^{(\mathrm{o})}+(\ell-1)%
\eta^{(\mathrm{i})}}\,
\frac{r^\ell}{R^\ell}\,
\frac{(2\ell+1)(\ell-3)!!}{\ell(\ell+1)(\ell+2)!!}
\eqa
\bqa\fl
\left(
\eqalign{\vth^{(\mathrm{o})} \\ \vphi^{(\mathrm{o})}}
\right)
= \frac{1}{2}
\frac{\mathfrak{M} R}{2\eta^{(\mathrm{o})}+3\eta^{(\mathrm{i})}}
\left(
\eqalign{ \cph\cth \\ -\sph}
\right)
\left[ 1+\frac{1}{2}\frac{R^3}{r^3} \right] \nonumber\\
+\frac{\mathfrak{M} R}{\eta^{(\mathrm{o})}+\eta^{(\mathrm{i})}}
\left(
\eqalign{ \cph \,\parth \\-\sph/\sth}
\right) \nonumber\\
\times \sum_{\stackrel{\ell=3}{\ell\textrm{ \scriptsize odd}}}^\infty
\pleins(\cth)\,
\frac{R^\ell}{r^\ell}
\left[ (2-\ell)+\ell\frac{R^2}{r^2}\right]
\frac{(-1)^\frac{\ell-1}{2}}{\ell(\ell+1)}\,
\frac{(\ell-2)!!}{(\ell+1)!!} \nonumber\\
+ 2\mathfrak{M} R
\left(
\eqalign{ -\cph/\sth \\ \sph \,\parth}
\right) \nonumber\\
\times \sum_{\stackrel{\ell=2}{\ell\textrm{ \scriptsize even}}}^\infty
\frac{\pleins(\cth)\,(-1)^\frac{\ell}{2}}%
{(\ell+2)\eta^{(\mathrm{o})}+(\ell-1)\eta^{(\mathrm{i})}}\,
\frac{R^{\ell+1}}{r^{\ell+1}}\,
\frac{(2\ell+1)(\ell-3)!!}{\ell(\ell+1)(\ell+2)!!}
\eqa
\end{appendix}
\section*{References}
Relativistic hadrodynamics (RHD) of interacting nucleons and mesons provides
a simple and successful tool for the theoretical description of different
nuclear systems such as nuclear matter, finite nuclei, heavy-ion collisions
and compact neutron stars~\cite{Serot:1984ey}. Starting from the pioneering work of
Duerr~\cite{Duerr:1956zz},
simple RHD Lagrangians have been introduced~\cite{Walecka:1974qa,Serot:1997xg} and since
then many different extensions of RHD approach, which
rely on relativistic mean-field (RMF) approximation, have been developed. They describe
the saturation mechanism in nuclear matter and generate a natural
mechanism for the strong spin-orbit force in nuclei.
An energy dependence of the Schr\"{o}dinger-equivalent optical
potential~\cite{Cooper:1993nx,Hama:1990vr} is thereby included
as a consequence of a relativistic description.
However, when using the standard RHD Lagrangian in RMF
approximation, the nucleon selfenergies become simple functions
of density only, and do not depend explicitly on the momentum
of the nucleon. As a consequence, a linear energy dependence of the
Schr\"{o}dinger-equivalent optical potential with a divergent behavior
at high energies arises~\cite{Weber:1992qc}. This well-known feature contradicts Dirac
phenomenology~\cite{Cooper:1993nx,Hama:1990vr,Typel2002299}.
To solve this issue one may go beyond the
mean-field approximation in a quantum field theoretical framework
by a systematic diagrammatic expansion of nucleon selfenergies. For instance,
in Dirac-Brueckner-Hartree-Fock
(DBHF)~\cite{Haar:1986ii,Brockmann:1996xy,Muther2000243} calculations
the nucleon selfenergies indeed depend on both the density and single
particle momentum. They reproduce the empirical saturation point of
nuclear matter as well as the energy dependence of the optical potential
at low energies. However, the DBHF approach has its apparent limitations at
high energies and densities relevant, for instance, in heavy-ion collisions where its
application within transport theory turns out to be
intricate~\cite{Botermans1990115,Buss:2011mx}. Also the thermodynamic
consistency of the DBHF calculations is not obvious~\cite{Hugenholtz:1958}.
As an alternative approach to {\it ab-initio} DBHF calculations for the nuclear
many-body systems a phenomenological treatment of the problem in the spirit of
the RMF approximation is still considered as a powerful tool. However, the simple Lagrangian
of RHD~\cite{Walecka:1974qa,Serot:1997xg} has to be further modified for a
quantitative description of static nuclear systems such as nuclear matter and/or finite
nuclei. Therefore, it is mandatory to introduce new terms, {\it e.g.}, including
non-linear self interactions of the scalar~\cite{Boguta:1977xi} and/or
vector~\cite{Sugahara1994557} meson fields, or to modify existing contributions
in the Lagrangian, {\it e.g.}, introducing density dependent meson-nucleon
couplings~\cite{PhysRevLett.68.3408,Fuchs:1995as,Typel:1999yq}.
The model parameters then have to be fitted to properties
of nuclear matter and/or atomic nuclei, since they cannot be derived in a simple manner
from a microscopic description.
The momentum dependence of in-medium interactions becomes particularly important in the description of
nuclear collision dynamics such as heavy-ion collisions. Indeed, analyses of proton-nucleus scattering
data \cite{Cooper:1993nx,Hama:1990vr} show that the proton-nucleus optical potential starts to
level off already at incident energies of about $300$ MeV.
Thus, other RMF approaches have been developed by including additional
non-local contributions, {\it i.e.}, by introducing Fock-terms, on the level of
the RMF selfenergies, leading to density- and momentum-dependent
interactions~\cite{Weber:1992qc}. However, such a
treatment is not covariant and also its numerical
realization in actual transport calculations is rather difficult~\cite{Weber:1992qc}.
Another approach has been proposed in~\cite{Zimanyi:1990np}
and more recently in~\cite{Typel:2002ck,Typel:2005ba} by
introducing higher order derivative couplings in the Lagrangian of RHD.
In Ref.~\cite{Zimanyi:1990np} such gradient terms have been studied with the conclusion
of a softening of the nuclear EoS. In another study of Ref.~\cite{Typel:2005ba}
both the density dependence of the
nuclear EoS and the energy dependence of the optical potential have been investigated. While
the modified interactions of meson fields with nucleons explain the
empirical energy dependence of the optical potential, a stiff EoS at high
densities results from the introduction of an explicit density dependence of the nucleon-meson
couplings with additional parameters. The impact of momentum
dependent RMF models on nuclear matter bulk properties and
particularly on the high density domain of the EoS relevant for neutron stars is
presently less understood.
The purpose of the present work is to develop a relativistic and
thermodynamically consistent RMF model, which provides the correct
momentum dependence of the nucleon selfenergies and agrees well with available
empirical information on the nuclear matter ground state,
in a self-consistent Lagrangian framework.
Some steps in this direction have already been taken in
Refs.~\cite{Gaitanos:2011yb,Gaitanos:2011ej,Gaitanos:2009nt}
where the concept of non-linear derivative meson-nucleon Lagrangian
has been introduced. However, the calculations of
Refs.~\cite{Gaitanos:2011yb,Gaitanos:2011ej,Gaitanos:2009nt} were based
on a particular exponential form of the regulators in the RHD Lagrangian and a
detailed study of nuclear matter ground state properties has not been done.
In the present work the generalized form of the energy-momentum tensor in the
NLD model is derived, which allows one to consider different regulator functions in the
Lagrangian. The thermodynamic consistency of the NLD model is demonstrated for
arbitrary choice of the regulators. A thorough study of the
properties of nuclear matter around saturation density is further
performed. The model describes
the bulk properties of nuclear matter and
compares well with microscopic calculations and Dirac phenomenology.
We also investigate the high density region of the
NLD EoS relevant for neutron stars.
It is found that the low-density constraints imposed
on the nuclear matter EoS, together with the momentum dependence of the
Schr\"odinger-equivalent optical potential, lead to a maximum neutron star
mass of around $M \simeq 2 M_{\odot}$. It is demonstrated
that the high density pressure-density diagram as extracted from
astrophysical measurements~\cite{Ozel:2010fw,Steiner:2010fz}
can be well described with nucleonic degrees of freedom only.
\section{\label{sec2}Field theory with higher derivatives}
The non-linear derivative (NLD) model is based on a field-theoretical
formalism which accounts for the higher-order derivative interactions in the RHD Lagrangian.
As a consequence, the conventional RHD mean-field theory based on minimal interaction Lagrangians
has to be extended to the case of higher-order non-linear derivative functionals.
For that purpose
we consider the most general structure of a Lagrangian density $\mathcal{L}$
with higher-order field derivatives, {\it i.e.}
\begin{align}
{\cal L}\left(
\varphi_{r}(x), \, \partial_{\alpha_{1}}\varphi_{r}(x),
\, \partial_{\alpha_{1}\alpha_{2}}\varphi_{r}(x),
\cdots\!,
\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r}(x)
\right)
\,,
\label{EL_0}
\end{align}
where it is supposed that $\mathcal{L}$ has continuous derivatives up to
order $n$ with respect to all its arguments, that is
\begin{equation}
\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r}(x) \equiv \frac{\partial}{\partial
x^{\alpha_1}}\cdots \frac{\partial}{\partial
x^{\alpha_n}} \varphi_{r}(x) \equiv
\partial_{\alpha_{1}}\cdots\partial_{\alpha_{n}}\varphi_{r}(x)\,, \nonumber
\end{equation}
where $\alpha_i$ is a four index and $x$ denotes the coordinates
in Minkowski space.
The order $n$ can be a finite number or $n\rightarrow \infty$.
The subscript $r$ denotes different fields, for instance,
in the case of the spinor fields one would have $\varphi_{1}=\Psi$
and $\varphi_{2}=\overline{\Psi}$.
The derivation of the generalized Euler-Lagrange equations of motion follows from the
variation principle for the action $S=\int d^4 x {\cal L}(x)$ with the Lagrangian of
Eq.~(\ref{EL_0}), where one considers $\varphi_{r}$,
$\partial_{\alpha_{1}}\varphi_{r}$,
$\partial_{\alpha_{1}}\partial_{\alpha_{2}}\varphi_{r}$, $\cdots$,
$\partial_{\alpha_{1}}\!\!\cdots\partial_{\alpha_{n}}\varphi_{r}$
as independent generalized coordinates.
The Euler-Lagrange equations are obtained from the principle of least action
\begin{align}
\delta S=0
\,,
\end{align}
where $\delta S$ is given by
\begin{align}
\delta S = \int d^{4}x \,
\delta{\cal L}\left(
\varphi_{r}, \, \partial_{\alpha_{1}}\varphi_{r},
\, \partial_{\alpha_{1}\alpha_{2}}\varphi_{r},
\cdots\!,
\, \partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r}
\right)
\label{ELa}
\end{align}
and is obtained by the variation of the generalized coordinates
\begin{align}
\varphi_{r} & \longrightarrow \varphi_{r} + \delta\,\varphi_{r} ,
\nonumber\\
\partial_{\alpha_{1}}\varphi_{r} & \longrightarrow
\partial_{\alpha_{1}}\varphi_{r} + \delta\,\partial_{\alpha_{1}}\varphi_{r} ,
\nonumber\\
\partial_{\alpha_{1}\alpha_{2}}\varphi_{r} & \longrightarrow
\partial_{\alpha_{1}\alpha_{2}}\varphi_{r} +
\delta\,\partial_{\alpha_{1}\alpha_{2}}\varphi_{r} ,
\nonumber\\
& ,\cdots ,
\nonumber\\
\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r} & \longrightarrow
\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r} +
\delta \, \partial_{\alpha_{1}\cdots\alpha_{n}} \varphi_{r} ,
\label{ELc}
\end{align}
with vanishing contributions on the surface of the integration volume as the
boundary condition.
The variation of the Lagrangian density with respect to all degrees of freedom reads
\begin{align}
\delta{\cal L} = & \!\!
\left[
\frac{\partial {\cal L}}{\partial\varphi_{r}}\delta\varphi_{r}
+ \!
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}}\varphi_{r})}
\partial_{\alpha_{1}}\delta\varphi_{r}
+\!
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}
\partial_{\alpha_{1}\alpha_{2}}\delta\varphi_{r}
\right.
\nonumber\\
&
\left.
+ \cdots
+\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\partial_{\alpha_{1}\cdots\alpha_{n}}\delta\varphi_{r}
\right]
\,.
\label{ELd}
\end{align}
As a next step one inserts Eq.~(\ref{ELd}) into Eq.~(\ref{ELa}) and then
successively performs partial integrations, {\it e.g.}, one partial integration for the
second term in Eq.~(\ref{ELd}), two partial integrations for the third term
in Eq.~(\ref{ELd}), and $n$ partial integrations for the last term. This
procedure results in the following integrand in Eq.~(\ref{ELa})
\begin{align}
\delta{\cal L} = & \!\!
\left[
\frac{\partial {\cal L}}{\partial\varphi_{r}}
- \partial_{\alpha_{1}}
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}}\varphi_{r})}
+\partial_{\alpha_{1}\alpha_{2}}
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}
\right.
\nonumber\\
&
\left.
+ \cdots
+(-)^{n}\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right] \delta\varphi_{r}
\label{ELd2}
\end{align}
up to $4$-divergence terms, which by Gauss' law do not contribute to the action
in Eq.~(\ref{ELa}). Thus, one
arrives at the following generalized Euler-Lagrange equation
\begin{align}
\frac{\partial{\cal L}}{\partial\varphi_{r}}
+
\sum_{i=1}^{n}
(-)^{i}
\partial_{\alpha_{1}\cdots\alpha_{i}}
\frac{\partial{\cal L}}
{\partial(\partial_{\alpha_{1}\cdots\alpha_{i}}\varphi_{r})}
= 0
\;.
\label{Euler0}
\end{align}
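To illustrate how Eq.~(\ref{Euler0}) works in practice, consider (purely as a toy
example, not part of the model developed below) a real scalar field with a single
higher-derivative term,
\begin{align}
{\cal L}_{\rm toy} = \frac{1}{2}\partial_{\mu}\varphi\,\partial^{\mu}\varphi
- \frac{1}{2}m^{2}\varphi^{2}
- \frac{1}{2\Lambda^{2}}\left(\partial_{\alpha}\partial^{\alpha}\varphi\right)^{2}
\,.
\end{align}
The series in Eq.~(\ref{Euler0}) then terminates at $i=2$ and yields
\begin{align}
\left(\partial_{\alpha}\partial^{\alpha} + m^{2}
+ \frac{1}{\Lambda^{2}}\,(\partial_{\alpha}\partial^{\alpha})^{2}\right)\varphi = 0
\,,
\end{align}
{\it i.e.}, the Klein-Gordon equation supplemented by a fourth-order derivative
correction suppressed by the scale $\Lambda$.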
The Noether theorem follows from invariance
principles of the Lagrangian density, Eq.~(\ref{EL_0}), with respect to
infinitesimal variations of the generalized coordinates
and their argument $x^{\mu}$ (see Appendix~\ref{app1} for notations). As further shown in
Appendix~\ref{app2}, the requirement of invariance of the Lagrangian density,
Eq.~(\ref{EL_0}), with respect to global phase transformations
\begin{align}
\varphi_{r}(x) \longrightarrow
\varphi^{\prime}_{r}(x)=e^{-i\epsilon}\varphi_{r}(x)
\label{phaseTrafo}
\end{align}
leads to a continuity equation $\partial_{\mu}J^{\mu}=0$ for a conserved
Noether current $J^{\mu}$. The latter is given by the following expression
\begin{widetext}
\begin{align}
J^{\mu} = -i\left[
{\cal K}^{\mu}_{r}\varphi_{r}
+ {\cal K}^{\mu\sigma_{1}}_{r}\partial_{\sigma_{1}}\varphi_{r}
+ {\cal K}^{\mu\sigma_{1}\sigma_{2}}_{r}
\partial_{\sigma_{1}\sigma_{2}}\varphi_{r}
+ \cdots +
{\cal K}^{\mu\sigma_{1}\cdots\sigma_{n}}_{r}
\partial_{\sigma_{1}\cdots\sigma_{n}}\varphi_{r}
\right]
\label{current}
\,.
\end{align}
In fact, for $n\to\infty$ the Noether current consists of an infinite sequence of tensors of
increasing rank. Furthermore, each of the different tensors
${\cal K}^{\mu\sigma_{1}\sigma_{2}\cdots}_{r}$ in Eq.~(\ref{current})
again contains an infinite series of higher-order derivatives
of the Lagrangian density.
They are given by the following expressions
\begin{align}
{\cal K}^{\mu}_{r} & = \sum_{i=1}^{n}\;
(-)^{i+1}\;
\prod_{j=1}^{i-1}\partial_{\alpha_{j}}\;
\frac{\partial {\cal L}}
{\partial (\partial_{\mu\alpha_{j}}\varphi_{r})}\,,
\label{tensors}\\
{\cal K}^{\mu\sigma_{1}}_{r} & = \sum_{i=1}^{n}\;
(-)^{i+1}\;
\prod_{j=1}^{i-1}\partial_{\alpha_{j}}\;
\frac{\partial {\cal L}}
{\partial (\partial_{\mu\alpha_{j}\sigma_{1}}\varphi_{r})}\,,
\nonumber\\
{\cal K}^{\mu\sigma_{1}\sigma_{2}}_{r} & = \sum_{i=1}^{n}\;
(-)^{i+1}\;
\prod_{j=1}^{i-1}\partial_{\alpha_{j}}\;
\frac{\partial {\cal L}}{\partial (\partial_{\mu\alpha_{j}\sigma_{1}\sigma_{2}}\varphi_{r})}\,,
\nonumber\\
\vdots &
\nonumber\\
{\cal K}^{\mu\sigma_{1}\cdots\sigma_{n}}_{r} & = \sum_{i=1}^{n}\;
(-)^{i+1}\;
\prod_{j=1}^{i-1}\partial_{\alpha_{j}}\;
\frac{\partial {\cal L}}{\partial (\partial_{\mu\alpha_{j}\sigma_{1}\cdots\sigma_{n}}\varphi_{r})}
\;. \nonumber
\end{align}
The derivation of the energy-momentum tensor proceeds in a similar way,
see Appendix~\ref{app2}. Now the field arguments
are transformed, but not the fields themselves. In particular, invariance of
the Lagrangian density~(\ref{EL_0}) with respect to a constant displacement
$\delta_{\mu}$ of the coordinates $x_{\mu}$
\begin{align}
x_{\mu}\longrightarrow x_{\mu}^{\prime} = x_{\mu} + \delta_{\mu}\,,
\label{phaseTrafo2}
\end{align}
implies a continuity equation $\partial_{\mu}T^{\mu\nu} = 0$ for the
energy-momentum tensor $T^{\mu\nu}$ which takes the following form
\begin{align}
T^{\mu\nu} =
{\cal K}^{\mu}_{r}\partial^{\nu}\varphi_{r}
+ {\cal K}^{\mu\sigma_{1}}_{r}\partial_{\sigma_{1}}^{\nu}\varphi_{r}
+ {\cal K}^{\mu\sigma_{1}\sigma_{2}}_{r}
\partial_{\sigma_{1}\sigma_{2}}^{\nu}\varphi_{r}
+ \cdots
+ {\cal K}^{\mu\sigma_{1}\cdots\sigma_{n}}_{r}
\partial_{\sigma_{1}\cdots\sigma_{n}}^{\nu}\varphi_{r}
- g^{\mu\nu}{\cal L}
\;.
\label{tensor}
\end{align}
The $00$-component of the energy-momentum tensor describes the energy density and
the spatial diagonal components are related to the pressure
density. These equations form a background for the construction and
application of the NLD formalism presented in the following sections. They will
further provide a thermodynamically consistent framework for the calculation
of the EoS in mean field approximation in terms of energy and pressure densities.
\section{\label{sec3}The non-linear derivative model}
In this section we introduce the non-linear derivative (NLD) model and derive the
equations of motion for the relevant degrees of freedom. The NLD approach is
essentially based on the Lagrangian density of
RHD~\cite{Duerr:1956zz,Walecka:1974qa,Serot:1997xg}, which is given by
\begin{align}
{\cal L} = & \frac{1}{2}
\left[
\overline{\Psi}\gamma_{\mu} i\overrightarrow{\partial}^{\mu}\Psi
-
\overline{\Psi} i\overleftarrow{\partial}^{\mu} \gamma_{\mu} \Psi
\right]
- m\overline{\Psi}\Psi
-\frac{1}{2}m^{2}_{\sigma}\sigma^{2}
+\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma
-U(\sigma)
\nonumber\\
+ & \frac{1}{2}m^{2}_{\omega}\omega_{\mu} \omega^{\mu}
-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
+\frac{1}{2}m^{2}_{\rho}\vec{\rho}\,_{\mu}\vec{\rho}\,^{\mu}
-\frac{1}{4}\vec{G}\,_{\mu\nu}\vec{G}\,^{\mu\nu}
-\frac{1}{2}m^{2}_{\delta}\vec{\delta}\,^{2}
+\frac{1}{2}\partial_{\mu}\vec{\delta}\, \, \partial^{\mu}\vec{\delta}\,
+{\cal L}_{int}
\label{NDC-free}
\end{align}
\end{widetext}
where $\Psi=(\Psi_{p},\Psi_{n})^{T}$ denotes the nucleon spinor field
in the Lagrangian density of a Dirac-type. In a spirit of RHD, the
interactions between the nucleon fields are described by the exchange of
meson fields. These are the scalar $\sigma$ and vector $\omega^{\mu}$ mesons
in the isoscalar channel, as well as the scalar $\vec{\delta}\,$ and vector
$\vec{\rho}\,^{\mu}$ mesons in the isovector channel. Their corresponding
Lagrangian densities are of the Klein-Gordon and Proca types, respectively.
The term $U(\sigma)=\frac{1}{3}b\sigma^{3}+\frac{1}{4}c\sigma^{4}$
contains the usual selfinteractions of the $\sigma$ meson.
The notations
for the masses of fields in Eq.~(\ref{NDC-free}) are obvious. The field
strength tensors are defined as
$F^{\mu\nu}=\partial^{\mu}\omega^{\nu}-\partial^{\nu}\omega^{\mu}$,
$\vec{G}\,^{\mu\nu}=\partial^{\mu}\vec{\rho}\,^{\nu}-\partial^{\nu}\vec{\rho}\,^{\mu}$
for the isoscalar and isovector fields, respectively.
In conventional RHD
approaches the interaction Lagrangian $\mathcal{L}_{int}$ is given
by~\cite{Walecka:1974qa,Serot:1997xg}
\begin{align}
{\cal L}_{int} = {\cal L}_{int}^{\sigma}+{\cal L}_{int}^{\omega}
+{\cal L}_{int}^{\rho}+{\cal L}_{int}^{\delta}
\,,
\end{align}
where
\begin{equation}
\label{LintRHD}
{\cal L}_{int}^{\sigma}=
g_{\sigma}\overline{\Psi}\Psi\sigma \,,
\end{equation}
\begin{equation}
\label{LintRHD2}
{\cal L}_{int}^{\omega} = -g_{\omega}\overline{\Psi}\gamma^{\mu}\Psi\omega_{\mu}\,,
\end{equation}
\begin{equation}
\label{LintRHD3}
{\cal L}_{int}^{\rho} = -g_{\rho}\overline{\Psi}\gamma^{\mu}\vec{\tau}\,\Psi\vec{\rho}\,_{\mu}\,,
\end{equation}
\begin{equation}
\label{LintRHD4}
{\cal L}_{int}^{\delta} = g_{\delta}\overline{\Psi}\vec{\tau}\,\Psi\vec{\delta}\, \,,
\end{equation}
and ${\cal L}_{int}^{\sigma,\omega,\rho,\delta}$ contains the meson-nucleon
interactions with coupling strengths $g_{\sigma,\omega,\rho,\delta}$ and
$\vec{\tau}\,$ denotes the isospin Pauli operator.
In the NLD model the momentum dependence of fields is realized by the introduction
of non-linear derivative operators in the interaction Lagrangian of
conventional RHD. These additional
operators regulate the high momentum components of the RMF fields in the interaction
vertices and can be interpreted as cut-off form factors. This is in spirit of
boson-exchange models
where the phenomenological cut-off is an indispensable part of any
microscopic description of meson-nucleon interaction~\cite{Machleidt:1987hj,Erkelenz:1974uj}.
In the RMF (Hartree) approximation to RHD
only bare Lorentz structures corresponding to the point-like
meson-nucleon interactions are taken into account and the high momentum
components of fields are not suppressed due to the
missing nucleon finite-size effects.
The NLD model attempts to account for the suppression
of the high momentum part of the nucleon field in the meson-nucleon interaction on a
field-theoretical level.
The NLD interaction Lagrangians contain the conventional meson-nucleon
RHD structures, however, they are extended by the inclusion of
non-linear derivative operators into the meson-nucleon vertices.
The NLD interaction Lagrangians followed here read
\begin{equation}
{\cal L}_{int}^{\sigma} = \frac{g_{\sigma}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}}
\Psi\sigma
+\sigma\overline{\Psi}
\, \overrightarrow{{\cal D}}
\Psi
\right]\,,
\end{equation}
\begin{equation}
{\cal L}_{int}^{\omega} = -\frac{g_{\omega}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}}
\gamma^{\mu}\Psi\omega_{\mu}
+\omega_{\mu}\overline{\Psi}\gamma^{\mu}
\, \overrightarrow{{\cal D}}
\Psi
\right]\,,
\end{equation}
\begin{equation}
{\cal L}_{int}^{\rho} =
- \frac{g_{\rho}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}}
\gamma^{\mu}\vec{\tau}\,\Psi\vec{\rho}\,_{\mu}
+\vec{\rho}\,_{\mu}\overline{\Psi}\vec{\tau}\,\gamma^{\mu}
\, \overrightarrow{{\cal D}}
\Psi
\right]\,,
\end{equation}
\begin{equation}
{\cal L}_{int}^{\delta} =
\frac{g_{\delta}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}} \vec{\tau}\,
\Psi\vec{\delta}\,\,
+\vec{\delta}\,\overline{\Psi}\vec{\tau}\,
\, \overrightarrow{{\cal D}}
\Psi
\right] \,.
\label{NDCrd}
\end{equation}
As one can see, the only difference with respect to
the conventional RHD interaction Lagrangian is the presence of additional
operators $\overrightarrow{{\cal D}},~\overleftarrow{{\cal D}}$ which serve to regulate the high momentum
component of the nucleon field.
The hermiticity of the Lagrangian demands $\overleftarrow{{\cal D}}=\overrightarrow{{\cal D}}^{\dagger}$.
The operator functions (regulators) $\overrightarrow{{\cal D}},~\overleftarrow{{\cal D}}$ are assumed to be generic
functions of partial derivative operator and supposed to
act on the nucleon spinors $\Psi$ and $\overline{\Psi}$, respectively.
Furthermore, these regulators are assumed to be smooth functions.
Therefore, the formal Taylor expansion of the operator functions in terms of partial derivatives
generates an infinite series of higher-order derivative terms
\begin{align}
\overrightarrow{{\cal D}} := {\cal D}\left( \overrightarrow{\xi} \right) = &
\sum_{j=0}^{n\to\infty}\,
\frac{\partial^{j}}{\partial\overrightarrow{\xi}^{j}}{\cal D}\vert_{\overrightarrow{\xi}\to 0}
\,\frac{\overrightarrow{\xi}^{j}}{j!} \,, \\
\overleftarrow{{\cal D}} := {\cal D} \left( \overleftarrow{\xi} \right) = &
\sum_{j=0}^{n\to\infty}\, \frac{\overleftarrow{\xi}^{j}}{j!}\,
\frac{\partial^{j}}{\partial\overrightarrow{\xi}^{j}}{\cal D}\vert_{\overleftarrow{\xi}\to 0}
\,. \label{ope1}
\end{align}
The expansion coefficients
are given by the partial derivatives of ${\cal D}$ with respect
to the operator arguments $\overrightarrow{\xi}$ and $\overleftarrow{\xi}$ around the origin. The operators are defined as
$\overrightarrow{\xi} = -\zeta^{\alpha}i\overrightarrow{\partial}_{\alpha},~
\overleftarrow{\xi} = i\overleftarrow{\partial}_{\alpha}\zeta^{\alpha}$
where the four vector $\zeta^{\mu}=v^{\mu}/\Lambda$ contains the cut-off $\Lambda$
and $v^{\mu}$ is an auxiliary vector. The functional form of the regulators is constructed such
that in the limit $\Lambda\to\infty$ the following limit holds $\overrightarrow{{\cal D}}(\overleftarrow{{\cal D}}) \to
1$. Therefore, in the limit $\Lambda\to\infty$ the original RHD Lagrangians
are recovered.
In the most general case the NLD formalism can be extended to the case of
multi-variable regulators. In
particular, we can assume the non-linear operator to be a multi-variable non-linear
function of higher-order partial derivatives, which are given by the following
Taylor expansion
\begin{widetext}
\begin{align}
\label{ope0}
\overrightarrow{{\cal D}} := {\cal D}(\overrightarrow{\xi}_{1},\overrightarrow{\xi}_{2},\overrightarrow{\xi}_{3},\overrightarrow{\xi}_{4}) = &
\sum_{i_{1}=0}^{n\to\infty}\sum_{i_{2}=0}^{n\to\infty}
\sum_{i_{3}=0}^{n\to\infty}\sum_{i_{4}=0}^{n\to\infty} \,
\frac{\partial^{i_{1}+i_{2}+i_{3}+i_{4}}}{\partial\overrightarrow{\xi}^{i_{1}}_{1}\partial\overrightarrow{\xi}^{i_{2}}_{2}\partial\overrightarrow{\xi}^{i_{3}}_{3}\partial\overrightarrow{\xi}^{i_{4}}_{4}}
{\cal D}\vert_{\{\overrightarrow{\xi}_{1},\overrightarrow{\xi}_{2},\overrightarrow{\xi}_{3},\overrightarrow{\xi}_{4}\}\to 0}\,
\frac{\overrightarrow{\xi}_{1}^{i_{1}}\overrightarrow{\xi}_{2}^{i_{2}}\overrightarrow{\xi}_{3}^{i_{3}}\overrightarrow{\xi}_{4}^{i_{4}}}{i_{1}!i_{2}!i_{3}!i_{4}!}
\,, \\
\overleftarrow{{\cal D}} := {\cal D}(\overleftarrow{\xi}_{1},\overleftarrow{\xi}_{2},\overleftarrow{\xi}_{3},\overleftarrow{\xi}_{4}) = &
\sum_{i_{1}=0}^{n\to\infty}\sum_{i_{2}=0}^{n\to\infty}\sum_{i_{3}=0}^{n\to\infty}
\sum_{i_{4}=0}^{n\to\infty} \,
\frac{\overleftarrow{\xi}_{1}^{i_{1}}\overleftarrow{\xi}_{2}^{i_{2}}\overleftarrow{\xi}_{3}^{i_{3}}\overleftarrow{\xi}_{4}^{i_{4}}}{i_{1}!i_{2}!i_{3}!i_{4}!}\,
\frac{\partial^{i_{1}+i_{2}+i_{3}+i_{4}}}{\partial\overleftarrow{\xi}^{i_{1}}_{1}\partial\overleftarrow{\xi}^{i_{2}}_{2}\partial\overleftarrow{\xi}^{i_{3}}_{3}\partial\overleftarrow{\xi}^{i_{4}}_{4}}
{\cal D}\vert_{\{\overleftarrow{\xi}_{1},\overleftarrow{\xi}_{2},\overleftarrow{\xi}_{3},\overleftarrow{\xi}_{4}\}\to 0}
\,. \label{ope}
\end{align}
\end{widetext}
Then Eqs.~(\ref{ope0}) and~(\ref{ope}) can be rearranged into
terms of increasing order in the partial derivatives, see for details
Appendix~\ref{app3}. The operators $\xi_{i}$ are defined in a similar way as before
\begin{align}
\overrightarrow{\xi}_{i} =
-\zeta^{\alpha}_{i}i\overrightarrow{\partial}_{\alpha}
~,~
\overleftarrow{\xi}_{i} =
i\overleftarrow{\partial}_{\alpha}\zeta^{\alpha}_{i}\,,
\label{opee}
\end{align}
with $\zeta^{\mu}_{i}=v^{\mu}_{i}/\Lambda$ ($i=1,2,3,4$) in this case.
As we will show later on, this representation allows one to generate any desired form of
the regulator function, {\it i.e.}, momentum and/or energy dependent monopole,
dipole {\it etc.} functions.
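For orientation, a typical member of this class is a monopole form which in nuclear
matter (see Sec.~\ref{sec4}) reduces to
\begin{align}
{\cal D} = \frac{\Lambda^{2}}{\Lambda^{2}+\vec{p}^{\,2}}
\,,
\end{align}
quoted here purely as an illustration (the specific choice adopted in the actual
calculations is a matter of the fit and is not fixed by this example); it manifestly
satisfies ${\cal D}\to 1$ for $\Lambda\to\infty$ and suppresses the meson-nucleon
vertices for momenta $|\vec{p}\,|\gg\Lambda$.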
The equation of motion for the Dirac field
is derived by applying the generalized Euler-Lagrange equations,
Eq.~(\ref{Euler0}), to the NLD Lagrangian density using the Taylor form of
the regulators. This obviously generates an infinite number of partial
derivative terms in the equations of motion. However,
as shown in detail in Appendix~\ref{app4} these infinite series can be resummed
(up to terms containing the derivatives of the meson fields) to the following
Dirac equation
\begin{equation}
\left[
\gamma_{\mu}(i\partial^{\mu}-\Sigma^{\mu}) -
(m-\Sigma_{s})
\right]\Psi = 0
\;,
\label{Dirac_nld}
\end{equation}
where the selfenergies $\Sigma^{\mu}$ and $\Sigma_{s}$ are given by
\begin{eqnarray}
\Sigma^{\mu} & = & g_{\omega}\omega^{\mu}\overrightarrow{{\cal D}} +
g_{\rho}\vec{\tau}\, \cdot \vec{\rho}\,^{\mu}\overrightarrow{{\cal D}}+ \cdots~,
\label{Sigmav}\\
\Sigma_{s} & = & g_{\sigma}\sigma\overrightarrow{{\cal D}} +
g_{\delta}\vec{\tau}\, \cdot \vec{\delta}\, \, \overrightarrow{{\cal D}}+ \cdots
\;. \label{Sigmas}
\end{eqnarray}
Here both Lorentz-components of the selfenergy, $\Sigma^{\mu}$ and $\Sigma_{s}$,
show an explicit linear behavior with respect to the meson fields $\sigma$,
$\omega^{\mu}$, $\vec{\rho\,}^{\mu}$ and $\vec{\delta}\,$ as in the standard RMF.
However, they contain an additional dependence on regulator functions.
The additional terms in Eqs.~(\ref{Sigmav}) and~(\ref{Sigmas})
containing the meson field derivatives are denoted by multiple dots.
All these contributions can be also resummed.
However, in the mean-field approximation to infinite
nuclear matter, which will be discussed in the next section, these terms vanish.
On the other hand, they will be needed in the description of finite systems, such as
finite nuclei and heavy-ion collisions.
Therefore, for simplicity we do not consider these terms here, and postpone
their investigation to future studies.
The derivation of the meson field equations of motion is straightforward, since here one
has to use the standard Euler-Lagrange equations
\begin{align}
\frac{\partial{\cal L}}{\partial\varphi_{r}}
-
\partial_{\alpha}
\frac{\partial{\cal L}}
{\partial(\partial_{\alpha}\varphi_{r})}
= 0
\;,
\label{EulerMeson}
\end{align}
where now $r=\sigma,\omega,\rho$ and $\delta$.
The following Proca and
Klein-Gordon equations are obtained
\begin{align}
&
\partial_{\alpha}\partial^{\alpha}\sigma + m_{\sigma}^{2}\sigma
+ \frac{\partial U}{\partial\sigma} =
\frac{1}{2}g_{\sigma}
\left[
\overline{\Psi} \, \overleftarrow{{\cal D}} \Psi + \overline{\Psi}\overrightarrow{{\cal D}} \Psi
\right] \,,
\label{sigma_meson}\\
&
\partial_{\mu}F^{\mu\nu} + m_{\omega}^{2}\omega^{\nu} =
\frac{1}{2}g_{\omega}
\left[
\overline{\Psi}\, \overleftarrow{{\cal D}} \gamma^{\nu}\Psi + \overline{\Psi}\gamma^{\nu}\overrightarrow{{\cal D}} \Psi
\right] \,,
\label{omega_meson}\\
&
\partial_{\mu}\vec{G}\,^{\mu\nu} + m_{\rho}^{2}\vec{\rho}\,^{\nu} =
\frac{1}{2}g_{\rho}
\left[
\overline{\Psi} \, \overleftarrow{{\cal D}} \gamma^{\nu}\vec{\tau}\, \, \Psi +
\overline{\Psi}\vec{\tau}\, \, \gamma^{\nu}\overrightarrow{{\cal D}} \Psi
\right] \,,
\label{rho_meson}\\
&
\partial_{\alpha}\partial^{\alpha}\vec{\delta}\, + m_{\delta}^{2}\vec{\delta}\, =
\frac{1}{2}g_{\delta}
\left[
\overline{\Psi} \, \overleftarrow{{\cal D}} \vec{\tau}\, \Psi + \overline{\Psi}\vec{\tau}\,\overrightarrow{{\cal D}} \Psi
\right]
\label{delta_meson}
\,.
\end{align}
Finally, we provide the general expressions for the Noether theorems within the NLD
formalism. The evaluation of the conserved baryon current results from
the application of the generalized expression for $J^{\mu}$,
Eq.~(\ref{current}), to the Lagrangian density of the NLD model.
As shown in detail in Appendix~\ref{app5}, a systematic evaluation
of the higher-order field derivatives of the NLD Lagrangian and the resummation
procedure result in
\begin{widetext}
\begin{align}
J^{\mu} = \overline{\Psi}\gamma^{\mu}\Psi
- & \frac{1}{2}\, g_{\sigma} \,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \Psi -
\overline{\Psi}\, \overrightarrow{\varOmega}^{\mu}\Psi
\right]\sigma
+ \frac{1}{2}\, g_{\omega}\,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \gamma^{\alpha}\Psi -
\overline{\Psi}\gamma^{\alpha}\, \overrightarrow{\varOmega}^{\mu}\Psi
\right] \omega_{\alpha}
\label{stromNLD} \nonumber\\
+ & \frac{1}{2}\, g_{\rho}\,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \gamma^{\alpha}\vec{\tau}\,\Psi -
\overline{\Psi}\gamma^{\alpha}\, \overrightarrow{\varOmega}^{\mu}\vec{\tau}\,\Psi
\right] \vec{\rho}\,_{\alpha}
- \frac{1}{2}\, g_{\delta}\,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \vec{\tau}\, \Psi -
\overline{\Psi}\, \overrightarrow{\varOmega}^{\mu}\vec{\tau}\,\Psi
\right] \vec{\delta}\, + \cdots
\,.
\end{align}
\end{widetext}
The new non-linear derivative operators in Eq.~(\ref{stromNLD}),
$\overleftarrow{\varOmega}^{\mu} := \partial\overleftarrow{{\cal D}}/\partial(i\overleftarrow{\partial}_{\mu})$ and
$\overrightarrow{\varOmega}^{\mu} := \partial\overrightarrow{{\cal D}}/\partial(i\overrightarrow{\partial}_{\mu})$,
denote the derivatives of $\overleftarrow{{\cal D}}$ and $\overrightarrow{{\cal D}}$ with respect to their
operator argument $i\overleftarrow{\partial}_{\mu}$ and $i\overrightarrow{\partial}_{\mu}$
(see Appendix~\ref{app5}).
The first term in Eq.~(\ref{stromNLD}) corresponds to the standard
expression of the RHD models and the additional contributions arise due to
the additional higher-order field derivatives in the Noether theorem,
Eq.~(\ref{current}).
The energy-momentum tensor, $T^{\mu\nu}$, is determined
according to Eq.~(\ref{tensor}).
The evaluation procedure, which is similar to that for the Noether current,
results in the following NLD expression for $T^{\mu\nu}$
\begin{widetext}
\begin{align}
T^{\mu\nu} = &
\frac{1}{2}\,
\overline{\Psi}\gamma^{\mu} \, i\overrightarrow{\partial}^{\nu} \Psi -
\frac{1}{2}\,
\overline{\Psi} \, i\overleftarrow{\partial}^{\nu}\gamma^{\mu} \Psi
\label{tensorNLD}\nonumber\\
+ & \frac{1}{2}\, g_{\sigma}\,
\left[
\overline{\Psi}\, \overrightarrow{\varOmega}^{\mu}\, i\overrightarrow{\partial}^{\nu}\, \Psi +
\overline{\Psi}\, i\overleftarrow{\partial}^{\nu}\overleftarrow{\varOmega}^{\mu} \, \Psi
\right] \sigma
- \frac{1}{2}\, g_{\omega}\,
\left[
\overline{\Psi}\gamma^{\alpha}\, \overrightarrow{\varOmega}^{\mu}\, i\overrightarrow{\partial}^{\nu}\,\Psi+
\overline{\Psi}\, i\overleftarrow{\partial}^{\nu}\, \overleftarrow{\varOmega}^{\mu} \gamma^{\alpha}\Psi
\right]\omega_{\alpha} \nonumber \\
- & \frac{1}{2}\, g_{\rho}\,
\left[
\overline{\Psi}\vec{\tau}\,\gamma^{\alpha}\, \overrightarrow{\varOmega}^{\mu} \,
i\overrightarrow{\partial}^{\nu}\, \Psi +
\overline{\Psi}\, i\overleftarrow{\partial}^{\nu}\, \overleftarrow{\varOmega}^{\mu} \gamma^{\alpha}\vec{\tau}\,\Psi
\right] \vec{\rho}\,_{\alpha}
+ \frac{1}{2}\, g_{\delta}\,
\left[
\overline{\Psi}\, \vec{\tau}\,\overrightarrow{\varOmega}^{\mu}\, i\overrightarrow{\partial}^{\nu}\, \Psi +
\overline{\Psi}\, i\overleftarrow{\partial}^{\nu}\overleftarrow{\varOmega}^{\mu} \, \vec{\tau}\,\Psi
\right] \vec{\delta}\,
\nonumber\\
-& g^{\mu\nu}\, {\cal L} + \cdots
\,.
\end{align}
\end{widetext}
The first line in Eq.~(\ref{tensorNLD}) is just the usual kinetic RHD
contribution to $T^{\mu\nu}$, while the additional kinetic terms originate
from the evaluation of the higher-order derivatives in Eq.~(\ref{tensor}).
These terms will be important for the thermodynamic consistency of the
model and the validation of the Hugenholtz-Van Hove
theorem~\cite{Weisskopf:1957,Hugenholtz:1958}. Again the terms not shown
in Eqs.~(\ref{stromNLD}) and~(\ref{tensorNLD}) describe the contribution
of terms containing the derivatives of the meson fields.
\section{\label{sec4}RMF approach to infinite nuclear matter}
In the mean-field approximation the mesons are treated as classical fields.
Infinite nuclear matter is described by a static, homogeneous, isotropic, spin- and
isospin-saturated system of protons and neutrons. In this case, the spatial components
of the Lorentz-vector meson fields vanish with $\omega^{\mu}\to (\omega^0,~\vec{0}\,)$,
and in isospin space only the neutral
component of the isovector fields survives, {\it i.e.}, $\vec{\rho}\,^{\mu} \to
(\rho^0_3,~\vec{0}\,)$ and $\vec{\delta}\, \to \delta_3$. For simplicity, we denote
in the following the third
isospin components of the isovector fields as $\rho$ and $\delta$.
The derivation of the RMF equations starts with the usual plane wave \textit{ansatz}
\begin{align}
\Psi_i(s,\vec{p}\,) =
u_i(s,\vec{p}\,)e^{-ip^{\mu}x_{\mu}}
\,,
\label{plane_wave}
\end{align}
where $i$ stands for protons ($i=p$) or neutrons ($i=n$) and $p^{\mu}=(E,\vec{p}\,)$
is a single nucleon 4-momentum.
The application of the non-linear derivative operator ${\cal D}$ to the
plane wave \textit{ansatz} of the spinor fields results in
\begin{equation}
{\cal D}(\overrightarrow{\xi})\Psi_{i} = {\cal D}(\xi) u_i(s,\vec{p}\,)e^{-ip^{\mu}x_{\mu}} \,,
\label{ope_nm1}
\end{equation}
\begin{equation}
\overline{\Psi}_{i}{\cal D}(\overleftarrow{\xi}) =
{\cal D}(\xi) \overline{u}_i(s,\vec{p}\,)e^{+ip^{\mu}x_{\mu}} \,,
\label{ope_nm}
\end{equation}
where the regulators on the r.h.s. of the above equations are now functions of
the scalar argument $\xi=-\frac{v_{\alpha}p^{\alpha}}{\Lambda}$.
With the help of Eqs.~(\ref{plane_wave}) and~(\ref{ope_nm1}) one gets the Dirac
equation similar to Eq.~(\ref{Dirac_nld}) with selfenergies given by
\begin{equation}
\Sigma^{\mu}_{vi} = g_{\omega}\omega^{\mu}{\cal D}
+g_{\rho} \tau_{i} \rho^{\mu}{\cal D}~,
\label{Sigmav_nm}
\end{equation}
\begin{equation}
\Sigma_{si} = g_{\sigma}\sigma{\cal D}
+g_{\delta} \tau_{i} \delta{\cal D}~,
\label{Sigmas_nm}
\end{equation}
where now $\tau_{i}=+1$ for protons ($i=p$) and $\tau_{i}=-1$ for neutrons ($i=n$).
We note again that in the RMF approximation to infinite matter the additional terms including
the meson field derivatives vanish. This largely simplifies the formalism,
since these terms which show up in the original Dirac equation,
see Eq.~(\ref{Dirac_nld}), do not appear any more.
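To make the momentum dependence explicit, for the illustrative monopole regulator
quoted in Sec.~\ref{sec3} (again a sketch, not the fitted form) the time-like vector
selfenergy would read
\begin{equation}
\Sigma^{0}_{vi}(\vec{p}\,) = \left[\, g_{\omega}\omega
+ g_{\rho}\tau_{i}\rho \,\right]
\frac{\Lambda^{2}}{\Lambda^{2}+\vec{p}^{\,2}}~,
\end{equation}
which saturates as a function of the single-particle momentum, in line with the
behavior suggested by Dirac phenomenology.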
The solutions of the Dirac equation take the form
\begin{equation}
u_{i}(s,\vec{p}\,) = N_{i}
\left(
\begin{array}{c}
\varphi_{s} \\ \\
\displaystyle \frac{ \vec{\sigma}\,\cdot\vec{p}\,}{E^{*}_{i}+m^{*}_{i}}\varphi_{s}\\
\end{array}
\right)
\; , \label{Spinor}
\end{equation}
with spin eigenfunctions $\varphi_{s}$, the in-medium energy
\begin{equation}
E^{*}_{i} := E - \Sigma^{0}_{vi}~,
\end{equation}
and the Dirac mass
\begin{equation}
m^{*}_{i} := m - \Sigma_{si}~.
\end{equation}
For a given momentum the single-particle energy $E$ is
obtained from the in-medium on-shell relation
\begin{equation}
E^{*2}_{i} - \vec{p\,}^{2} = m^{*2}_{i}~.
\label{onshell}
\end{equation}
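Since the selfenergies themselves depend on the nucleon $4$-momentum through the
regulator, Eq.~(\ref{onshell}) is in general an implicit equation for the
single-particle energy. Its positive-energy branch can be written as
\begin{equation}
E(\vec{p}\,) = \Sigma^{0}_{vi} + \sqrt{\vec{p}^{\,2} + m^{*2}_{i}}~,
\end{equation}
which, for energy-dependent regulators, has to be solved self-consistently at each
momentum together with the meson-field equations.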
The factor $N_{i}$ is determined from the normalization of the
probability distribution, that is $\int d^{3}x \,J^{0}=1$.
In the conventional RMF models the baryon density is given by the familiar
expression $J^{0} = \Psi^{\dagger}\Psi$ and the normalization condition
$\int d^{3}x \,\Psi^{\dagger}\Psi=1$ would result in
$N_i=\sqrt{\frac{E^{*}_{i}+m^{*}_{i}}{2E^{*}_{i}}}$.
In the NLD model one has to use Eq.~(\ref{current}) for the Noether current
by keeping in mind that the infinite series of meson field derivatives vanish
in the RMF approach to nuclear matter. In this case, see again
Appendix~\ref{app5} for details, the conserved baryon current $J^{\mu}$ is
resummed up to infinity and the result reads
\begin{align}
J^{\mu} = & \!\!
\sum_{i=p,n} \Big[
\Big<
\overline{\Psi}_{i}\gamma^{\mu}\Psi_{i}
\Big>
\label{current_NLD}\\
&
\left.
+ g_{\sigma}
\Big<\overline{\Psi}_{i} [\partial^{\mu}_{p}{\cal D}]\Psi_{i}\Big>
\sigma
- g_{\omega}
\Big<
\overline{\Psi}_{i} [\partial^{\mu}_{p}{\cal D}]
\gamma^{\alpha}\Psi_{i}
\Big>
\omega_{\alpha}
\right.
\nonumber\\
&
- g_{\rho}
\tau_{i}
\Big<
\overline{\Psi}_{i} [\partial^{\mu}_{p}{\cal D}]\gamma^{\alpha}\Psi_{i}
\Big>
\rho_{\alpha}
+ g_{\delta}
\tau_{i}
\Big<
\overline{\Psi}_{i} [\partial^{\mu}_{p}{\cal D}]\Psi_{i}
\Big>
\delta
\Big]
\nonumber
\; .
\end{align}
The $0$-component of the Noether current describes the conserved nucleon
density $\rho_{B}=J^{0}$, from which also the relation between the Fermi
momentum $p_{F}$ and $\rho_{B}$ is uniquely determined. In particular,
using the Gordon identity and Eqs.~(\ref{Sigmav_nm}) and~(\ref{Sigmas_nm})
for the RMF selfenergies, one obtains
\begin{align}
J^{\mu} & =
\frac{\kappa}{(2\pi)^{3}} \; \sum_{i=p,n} \; \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \, N_{i}^{2}
\nonumber\\
& \times \left[
\frac{p^{*\mu}_{i}}{E^{*}_{i}}
+
\Big( \partial_{p}^{\mu}\Sigma_{si} \Big) \frac{m^{*}_{i}}{E^{*}_{i}}
-
\left( \partial_{p}^{\mu}\Sigma^{\beta}_{vi} \right)
\frac{p^{*}_{i\beta}}{E^{*}_{i}}
\right]
\nonumber
\,,
\end{align}
where $\kappa=2$ is a spin degeneracy factor, $p_{F_{i}}$ stands for the
proton or neutron Fermi-momentum and the effective momentum is given by
\begin{equation}
p^{*\mu}_{i}=p^{\mu}-\Sigma^{\mu}_{vi}.
\end{equation}
One now defines a new in-medium
$4$-momentum $\Pi^{\mu}_{i}$ as
\begin{align}
\Pi^{\mu}_{i} = p^{*\mu}_{i}+ m^{*}_{i}\Big(\partial_{p}^{\mu}\Sigma_{si} \Big)
- \Big(\partial_{p}^{\mu}\Sigma^{\beta}_{vi} \Big) p^{*}_{i\beta}
\label{bigPi}
\,,
\end{align}
and arrives at the following expression
\begin{align}
J^{\mu} =
\frac{\kappa}{(2\pi)^{3}} \, \sum_{i=p,n} \, \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \, N_{i}^{2} \,
\frac{\Pi^{\mu}_{i}}{E^{*}_{i}}
\label{nldeq7}
\,.
\end{align}
On the other hand, the general definition of the baryon current results from the
covariant superposition of all the occupied in-medium on-shell nucleon
positive energy states up to the proton or neutron Fermi momentum~\cite{Weber:1992qc}
\begin{align}
J^{\mu} = &
\frac{\kappa}{(2\pi)^{3}} \, \sum_{i=p,n} \, \int\limits_{|\vec{p}\,|\leq p_{F_{i}}}\!\!\!\!\!\! d^4p
\label{nldeq8}\\
\times & \Pi^{\mu}_{i} \,
\delta\left( p^{*\mu}_{i}p_{i\mu}^{*}-m^{*2}_{i}\right) \, 2\Theta(p^{0})
\nonumber
\,.
\end{align}
In the NLD approach the mean-field selfenergies depend explicitly on the
single-particle momentum $p^{\mu}$. Therefore, using the properties of the
$\delta$-function the time-like $dp^{0}$ component can be integrated out
explicitly. The result reads
\begin{align}
J^{\mu} = \frac{\kappa}{(2\pi)^{3}} \, \sum_{i=p,n} \, \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \, \frac{\Pi^{\mu}_{i}}{\Pi^{0}_{i}}
\label{nldeq12}
\,.
\end{align}
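The factor $1/\Pi^{0}_{i}$ in Eq.~(\ref{nldeq12}) is just the Jacobian of the
$p^{0}$-integration. Indeed, a short calculation using Eq.~(\ref{bigPi}) for $\mu=0$
gives
\begin{align}
\frac{\partial}{\partial p^{0}}
\left( p^{*}_{i\mu}p^{*\mu}_{i} - m^{*2}_{i} \right)
= & \; 2\left[\, p^{*0}_{i}
+ m^{*}_{i}\left(\partial^{0}_{p}\Sigma_{si}\right)
- p^{*}_{i\beta}\left(\partial^{0}_{p}\Sigma^{\beta}_{vi}\right) \right]
\nonumber\\
= & \; 2\,\Pi^{0}_{i}
\,,
\end{align}
so that the $p^{0}$-integration of the $\delta$-function in Eq.~(\ref{nldeq8})
produces precisely $1/\Pi^{0}_{i}$ at the positive-energy root.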
Comparing Eq.~(\ref{nldeq12}) with the equation for the NLD current,
Eq.~(\ref{nldeq7}), one gets the following result for the normalization
\begin{align}
N_{i}=\sqrt{\frac{E^{*}_{i}+m^{*}_{i}}{2E^{*}_{i}}}
\sqrt{\frac{E^{*}_{i}}{\Pi^{0}_{i}}}
\label{SpinorNorm}
\,,
\end{align}
and the bilinear products between the in-medium spinors of protons and
neutrons are given by
\begin{align}
\overline{u}_i(p)u_i(p) = & \frac{m^{*}_i}{\Pi^{0}_i},
\label{norms}\\
\overline{u}_i(p)\gamma^{0}u_i(p) = & 1
\label{normv}
\,.
\end{align}
Eq.~(\ref{normv}) ensures also the proper normalization of the
probability distribution, {\it i.e.}, $\int d^{3}x\,\Psi^{\dagger}\Psi=1$.
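In particular, the $0$-component of Eq.~(\ref{nldeq12}) reproduces the free Fermi-gas
relation between the baryon density and the Fermi momenta, since
$\Pi^{\mu}_{i}/\Pi^{0}_{i}\to 1$ for $\mu=0$ under the integral:
\begin{align}
\rho_{B} = J^{0} =
\frac{\kappa}{(2\pi)^{3}} \, \sum_{i=p,n} \, \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p
= \sum_{i=p,n}\frac{p^{3}_{F_{i}}}{3\pi^{2}}
\,,
\end{align}
{\it i.e.}, $\rho_{B}=2p^{3}_{F}/(3\pi^{2})$ in isospin-symmetric matter.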
In our first work~\cite{Gaitanos:2009nt}, where the non-linear derivative model
was proposed,
the correction terms proportional to the partial derivatives of the
selfenergies with respect to the single-particle momentum $p^{\mu}$ in
Eq.~(\ref{bigPi}) were not taken into account.
Even if
their contributions are small at low densities, these
terms will be included in the present calculations which attempt to consider
also the high density domain of the EoS in neutron stars. On the other hand,
the inclusion of these terms is crucial for a fully thermodynamically consistent
formalism and is independent of the particular form of the
cut-off functions. Note that the additional cut-off-dependent terms in the baryon
and energy densities
of Ref.~\cite{Gaitanos:2009nt} are now canceled by the proper normalization
conditions.
The energy-momentum tensor in NLD is obtained by applying the Noether theorem
for translational invariance. In nuclear matter the
resummation procedure results in the following expression
\begin{widetext}
\begin{align}
T^{\mu\nu} = &
\sum_{i=p,n}\bigg[
\Big< \overline{\Psi}_{i} \gamma^{\mu} p^{\nu} \Psi_{i} \Big>
\bigg.
\label{tensornm} \nonumber \\
&
\bigg.
+ g_{\sigma}
\Big<
\overline{\Psi}_{i} [\partial^{\mu}_{p}{\cal D}] p^{\nu} \Psi_{i}
\Big>
\sigma
- g_{\omega}
\Big<
\overline{\Psi}_{i} [\partial^{\mu}_{p}{\cal D}]\gamma^{\alpha}p^{\nu}\Psi_{i}
\Big> \omega_{\alpha}
- g_{\rho}
\tau_{i}
\Big<
\overline{\Psi}_{i}[\partial^{\mu}_{p}{\cal D}]\gamma^{\alpha}p^{\nu}\Psi_{i}
\Big>
\rho_{\alpha}
+ g_{\delta}
\tau_{i}
\Big<
\overline{\Psi}_{i}[\partial^{\mu}_{p}{\cal D}]p^{\nu}\Psi_{i}
\Big>
\delta
\bigg]
\nonumber\\
&
- g^{\mu\nu}\langle{\cal L}\rangle
\,.
\end{align}
The evaluation of the expectation values in Eq.~(\ref{tensornm}) can be done
in a similar way as for the current with the result
\begin{align}
T^{\mu\nu} =
\sum_{i=p,n} \frac{\kappa}{(2\pi)^{3}} \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \, \frac{p^{\nu}}{\Pi^{0}_{i}} \left[
p^{*\mu}_{i} + m^{*}_{i} \Big( \partial_{p}^{\mu}\Sigma_{si} \Big)
- p^{*\alpha}_{i} \Big( \partial_{p}^{\mu}\Sigma_{\alpha i} \Big)
\, \right]
- g^{\mu\nu}\langle{\cal L}\rangle
\label{nldeq14}
\,.
\end{align}
\end{widetext}
Using Eq.~(\ref{bigPi}) one arrives at the final expression for the energy-momentum
tensor in the NLD formalism, which can be written in the following form
\begin{align}
T^{\mu\nu} =
\sum_{i=p,n} \frac{\kappa}{(2\pi)^{3}} \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \,
\frac{\Pi^{\mu}_{i} p^{\nu}}{\Pi^{0}_{i}} - g^{\mu\nu}\langle{\cal L}\rangle
\label{nldeq17}
\,,
\end{align}
from which the energy density $\varepsilon\equiv T^{00}$ and the pressure $P$ can
be calculated, {\it i.e.},
\begin{align}
\varepsilon = &
\sum_{i=p,n} \frac{\kappa}{(2\pi)^{3}} \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \, E(\vec{p}\,) - \langle{\cal L}\rangle ~,
\label{nldeq18a}\\
P = & \frac{1}{3}\sum_{i=p,n} \frac{\kappa}{(2\pi)^{3}} \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \,
\frac{\vec{\Pi}\,_i \cdot \vec{p}\,}{\Pi^{0}_{i}} + \langle{\cal L}\rangle
\label{nldeq18b}
\,.
\end{align}
Eqs.~(\ref{nldeq18a}) and~(\ref{nldeq18b}) look similar to the familiar expressions of
the usual RMF models. However, the non-linear effects induced by the
regulators show up through the generalized
momentum $\Pi^{\mu}_i$ and through the dispersion relation for the single-particle
energy $E(\vec{p}\,)$. Note the different form of the generalized momentum, $\Pi^{\mu}_i$,
when one chooses energy or momentum dependent cut-off functions. Indeed, in the latter
case the spatial derivatives in $\vec{\Pi}_i$ contribute in the pressure, while for
energy-dependent cut-off functions they vanish and $\vec{\Pi}\,_i=\vec{p}$
holds. In any case, the expressions for the energy-density and pressure within the
conventional RMF models are recovered by the simple replacement
$\Pi^{\mu}_i\rightarrow p^{*\mu}_i$, which is just equivalent to taking the limiting case
$\Lambda\rightarrow\infty$ in the NLD expressions.
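For later reference, the thermodynamic consistency mentioned above amounts to
verifying (not carried out explicitly here) that Eqs.~(\ref{nldeq18a})
and~(\ref{nldeq18b}) fulfill the canonical zero-temperature identity
\begin{align}
P = \rho_{B}^{2}\,\frac{\partial}{\partial\rho_{B}}
\left(\frac{\varepsilon}{\rho_{B}}\right)
= \mu\,\rho_{B} - \varepsilon
\,,\qquad
\mu = \frac{\partial\varepsilon}{\partial\rho_{B}}
\,,
\end{align}
with $\mu$ the baryon chemical potential; the Hugenholtz-Van Hove theorem then
requires that $\mu$ coincide with the single-particle energy at the Fermi surface.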
Finally, the NLD meson-field equations in the RMF approach to nuclear matter
read
\begin{align}
m_{\sigma}^{2}\sigma + \frac{\partial U}{\partial\sigma} = & g_{\sigma}
\sum_{i=p,n}\,\Big< \overline{\Psi}_{i}{\cal D}\Psi_{i}\Big>
= g_{\sigma}\rho_{s} ~, \\
m_{\omega}^{2}\omega = &
g_{\omega}
\sum_{i=p,n}\,\Big< \overline{\Psi}_{i} \gamma^{0}{\cal D}\Psi_{i}\Big>
= g_{\omega}\rho_{0} ~,\\
m_{\rho}^{2}\rho = &
g_{\rho}
\sum_{i=p,n}\,\tau_{i}\Big< \overline{\Psi}_{i} \gamma^{0} {\cal D}\Psi_{i}\Big>
= g_{\rho}\rho_{I} ~,\\
m_{\delta}^{2}\delta = &
g_{\delta}
\sum_{i=p,n}\,\tau_{i}\Big< \overline{\Psi}_{i} {\cal D}\Psi_{i}\Big>
= g_{\delta}\rho_{IS} ~.
\label{mesonsNM}
\end{align}
Using Eqs.~(\ref{norms}) and (\ref{normv}),
the evaluation of the source terms of the
meson-field equations is straightforward. In particular, the scalar-isoscalar density
$\rho_{s}$, the vector-isoscalar density $\rho_{0}$, the vector-isovector density
$\rho_{I}$ and the scalar-isovector density $\rho_{IS}$ are given by
\begin{equation}
\rho_{s} =
\frac{\kappa}{(2\pi)^{3}}\sum_{i=p,n} \; \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \,
\frac{m^{*}_{i}}{\Pi^{0}_{i}} \, {\cal D}(p)
~,
\label{dens_s}
\end{equation}
\begin{equation}
\rho_{0} =
\frac{\kappa}{(2\pi)^{3}}\sum_{i=p,n} \; \int\limits_{|\pvec|\leq p_{F_{i}}}\!\!\!\!\!\! d^{3}p \, \frac{E^{*}_{i}}{\Pi^{0}_{i}} \, {\cal D}(p)
\,,
\label{dens_0}
\end{equation}
\begin{equation}
\rho_{I} = \rho_{0p} - \rho_{0n} \,,
\label{dens_i}
\end{equation}
\begin{equation}
\rho_{IS} = \rho_{sp} - \rho_{sn} \,.
\end{equation}
The meson-field equations of motion show a similar structure to
those of the standard RMF approximation. For example, the scalar-isoscalar
density $\rho_{s}$ is suppressed with respect to the vector density $\rho_{0}$
by the factor $m^{*}_i/\Pi^{0}_{i}$, in a similar way as in the conventional Walecka
models~\cite{Walecka:1974qa}. However, the substantial difference between NLD and
conventional RMF appears in the source terms which now contain in addition the
momentum-dependent regulator ${\cal D}$.
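As an illustration of the numerical evaluation of the source terms, Eqs.~(\ref{dens_s})
and (\ref{dens_0}), the following sketch computes $\rho_{s}$ and $\rho_{0}$ for one nucleon
species; it assumes, for definiteness, the momentum-dependent monopole regulator introduced
in the next section, and $m^{*}$ is a plain input rather than the self-consistent NLD
solution.
\begin{verbatim}
# Sketch of Eqs. (dens_s)/(dens_0); isotropic matter, units of GeV.
import numpy as np
from scipy.integrate import quad

def D_mono(p, Lam):                  # monopole regulator of Sec. V.A
    return Lam**2 / (Lam**2 + p**2)

def sources(pF, m_star, Lam, kappa=2.0):
    pref = kappa / (2.0 * np.pi**2)  # = 4*pi*kappa/(2*pi)^3
    E_star = lambda p: np.sqrt(m_star**2 + p**2)
    # Eq. (dens_s), with Pi0 = E* for momentum-dependent cut-offs
    rho_s = pref * quad(lambda p: p**2 * m_star / E_star(p)
                        * D_mono(p, Lam), 0.0, pF)[0]
    # Eq. (dens_0): here E*/Pi0 = 1, only the regulator remains
    rho_0 = pref * quad(lambda p: p**2 * D_mono(p, Lam), 0.0, pF)[0]
    return rho_s, rho_0

rho_s, rho_0 = sources(pF=0.26, m_star=0.6 * 0.939, Lam=1.125)
\end{verbatim}
The suppression factor $m^{*}/\Pi^{0}$ and the regulator ${\cal D}(p)$ are clearly visible
in the integrands, which is the numerical counterpart of the discussion above.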
\section{\label{sec5}Results}
\subsection{\label{sec5a}Model parameters}
The non-linear derivative operators, ${\cal D}$, in the NLD Lagrangian are not
constrained by first principles, which allows us to consider different functional
forms of ${\cal D}$. In nuclear matter, these regulators can be chosen as functions of
the single-particle energy or momentum, depending on the choice of the auxiliary
$4$-vector parameters $\zeta^{\mu}_{j}$. The available constraints are the
bulk properties of nuclear matter and the empirically known energy dependence of the
in-medium optical potential. It is also well
known~\cite{Haar:1986ii,TerHaar:1987ce,Plohl:2005fn} that the selfenergies
should decrease or saturate as function of baryon density and single-particle
4-momentum. In the NLD model all these features of the relativistic
mean-fields can be realized using energy or momentum dependent form
factors which regulate the high energy (momentum) behavior of the nucleon
4-momentum.
\begin{table*}
\begin{tabular}{c|cccccccc}
\hline\hline \\
Model & $\rho_{sat}$ & $E_{b}$ & $K$ & $a_{sym}$ & $L$ & $K_{sym}$ & $K_{asy}$ &
Ref. \\
&&&&&&&& \\
& $[\UNIT{fm}^{-3}]$ & $[\UNIT{MeV}/A]$ & $[\UNIT{MeV}]$ & $[\UNIT{MeV}]$ & $[\UNIT{MeV}]$ & $[\UNIT{MeV}]$ & $[\UNIT{MeV}]$ & \\
&&&&&&&& \\
\hline\hline \\
NLD & $0.156$ & $-15.30$ & $251$ & $30$ & $81$ & $-28$ & $-514$ & this work
\\ &&&&&&&& \\
\hline
\\
NL3* & $0.150$ & $-16.31$ & $258$ & $38.68$ & $125.7$ & $104.08$ & $-650.12$ & \cite{Lalazissis:2009zz}
\\ &&&&&&&& \\
DD & $0.149$ & $-16.02$ & $240$ & $31.60$ & $56$ & $-95.30$ & $-431.30$ & \cite{Typel:1999yq}
\\ &&&&&&&& \\
D$^3$C & $0.151$ & $-15.98$ & $232.5$ & $31.90$ & $59.30$ & $-74.7$ & $-430.50$ & \cite{Typel:2005ba}
\\ &&&&&&&& \\
\hline \\
DBHF & $0.185$ & $-15.60$ & $290$ & $33.35$ & $71.10$ & $-27.1$ & $-453.70$ & \cite{Li:1992zza,Brockmann:1990cn}
\\ &&&&&&&& \\
& $0.181$ & $-16.15$ & $230$ & $34.20$ & $71$ & $87.36$ & $-340$ & \cite{GrossBoelting:1998jg}
\\ &&&&&&&& \\
\hline
\\
empirical & $~~0.167\pm 0.019$ & $-16\pm 1$ & $230\pm~10$ &
$31.1\pm~1.9$ & $88\pm~25$ & -- & $-550\pm~100$ & \\
& $\mbox{\small{\cite{Myers:1969zz,Myers:1984zz,Blaizot:1980tw}}}$ & $\mbox{\cite{Myers:1969zz,Myers:1984zz,Blaizot:1980tw}}$ & $\mbox{\cite{Blaizot:1980tw,PhysRevLett.82.691,PhysRevC.56.2518}}$ &
$\mbox{\cite{Li:2008gp}}$ & $\mbox{\cite{Li:2008gp}}$ & -- &
$\mbox{\cite{Li:2007bp}}$ &
\\ &&&&&&&& \\
\hline\hline
\end{tabular}
\caption{Bulk saturation properties of nuclear matter, i.e., saturation density
$\rho_{sat}$, binding energy per nucleon $E_{b}$, compression modulus $K$, asymmetry
parameter $a_{sym}$, slope and curvature parameters $L$ and $K_{sym}$, respectively,
and the observable $K_{asy}$ in the NLD model. Our results are compared with the
non-linear Walecka parametrization NL$3^{*}$, the density dependent DD and the
derivative coupling D$^{3}$C models as well as with two versions of the
microscopic DBHF approach. The empirical values are shown too.}
\label{tab2}
\end{table*}
\begin{table*}
\begin{tabular}{c|c|c|cc|ccccc|ccc}
\hline\hline &&&&&&&&&&&& \\
& $\overrightarrow{{\cal D}}$ & cut-off & $~~\Lambda_s$ & $\Lambda_v~~$ & $~~g_{\sigma}$ & $g_{\omega}$ & $g_{\rho}$ & $b$ & $c$ & $~~m_{\sigma}$ & $m_{\omega}$ & $m_{\rho}$ \\
& & & $[\UNIT{GeV}]$ & $[\UNIT{GeV}]$ & & & & $[\UNIT{fm}^{-1}]$ & & $[\UNIT{GeV}]$ & $[\UNIT{GeV}]$ & $[\UNIT{GeV}]$
\\&&&&&&&&&&&&\\ \hline\hline &&&&&&&&&&&&\\
NLD & $\displaystyle~~\frac{1}{1+\sum_{j=1}^{4}\left(\zeta_{j}^{\alpha} \, i\partialr_{\alpha}\right)^{2}}~~$ & $\displaystyle ~~\frac{\Lambda^2}{ \Lambda^2+\vec{p}^{\,2}}~~$ & $0.95$ & $1.125~~$ & $~~10.08$ & $10.13$ & $3.50$ & $15.341$ & $-14.735~~$ & $~~0.592$ & $0.782$ & $0.763$
\\&&&&&&&&&&&& \\
\hline\hline
\end{tabular}
\caption{The parameters of the NLD model. First and second columns show the form of the
non-linear operator and the regulator in nuclear matter, respectively. The other columns
show the values of the parameters, as obtained from the fit to nuclear matter bulk properties.}
\label{tab1}
\end{table*}
Various choices of the regulator functions are possible. We have done
calculations for different forms of ${\cal D}$ and found that the simplest
momentum dependent monopole form factor provides the best description of the
low and high density nuclear matter properties and agrees very well with the
empirical momentum dependence of the in-medium Schr\"odinger-equivalent
optical potential. A momentum-dependent cut-off of a monopole form
can be obtained by the following choice
\begin{align}
\overrightarrow{{\cal D}} =
\left[
\frac{1}{1+\sum_{j=1}^{4}\left(\zeta_{j}^{\alpha} \, i\overrightarrow{\partial}_{\alpha}\right)^{2}}
\right] \,,
\end{align}
with $\zeta_{j}^{\alpha}=v_{j}^{\alpha}/\Lambda$, where $v_{1}^{\alpha}=(0,0,0,0)$,
$v_{2}^{\alpha}=(0,1,0,0)$, $v_{3}^{\alpha}=(0,0,1,0)$ and
$v_{4}^{\alpha}=(0,0,0,1)$. In nuclear matter this results in
\begin{equation}
{\cal D} = \frac{\Lambda^2}{\Lambda^2 + \vec{p}^{\,2}}\,.
\end{equation}
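That the operator indeed reduces to this monopole form can be checked in one line: for a
plane-wave state one has $i\overrightarrow{\partial}_{\alpha}\,\Psi = p_{\alpha}\Psi$, and
with the above assignment of the $\zeta_{j}^{\alpha}$ only the three spatial unit vectors
contribute,
\begin{align}
\sum_{j=1}^{4}\left(\zeta_{j}^{\alpha}\,i\overrightarrow{\partial}_{\alpha}\right)^{2}\Psi
= \frac{1}{\Lambda^{2}}\left( p_{1}^{2}+p_{2}^{2}+p_{3}^{2} \right)\Psi
= \frac{\vec{p}^{\,2}}{\Lambda^{2}}\,\Psi
\nonumber
\,,
\end{align}
so that $\overrightarrow{{\cal D}}\,\Psi = \Lambda^{2}/(\Lambda^{2}+\vec{p}^{\,2})\,\Psi$.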
Furthermore, in our fit to bulk properties of nuclear matter we use different cut-off
parameters $\Lambda_{s}\equiv\Lambda_{\sigma}$ and
$\Lambda_{v}\equiv\Lambda_{\omega}=\Lambda_{\rho}$ for the scalar and vector
meson-nucleon vertices, respectively, and we neglect in the following the contribution
of the $\delta$-meson.
The NLD model contains in total eight parameters. These are the meson-nucleon couplings
$g_{\sigma}$, $g_{\omega}$ and $g_{\rho}$, the parameters $b$ and $c$ of the
selfinteractions of the $\sigma$-meson, the mass $m_\sigma$ of the $\sigma$-meson,
and the cut-offs $\Lambda_{s}$ and $\Lambda_{v}$.
In principle, the masses of the $\omega$- and $\rho$-mesons should also be included in
the fit. However, in all fit calculations the results for these
two masses turned out to be always close to their free values. Therefore,
we keep the bare masses for $m_\omega$ and $m_\rho$.
Seven parameters, {\it i.e.}, $g_{\sigma}$, $g_{\omega}$, $b$ and $c$,
$\Lambda_{s}$, $\Lambda_{v}$ and $m_\sigma$ are adjusted to the bulk properties of
symmetric nuclear matter and to the empirical energy dependence of the in-medium optical
potential. The remaining parameter $g_{\rho}$ is determined by the experimentally known symmetry
energy value at saturation density.
The constraints in symmetric nuclear matter are the binding energy per nucleon
$E_{b}\equiv\varepsilon/\rho_{B}-m$ and the compression modulus
$K=9\rho_{sat}^{2}\frac{\partial^{2}E_{b}}{\partial\rho_{B}^{2}}\vert_{\rho_{B}=\rho_{sat}}$
at the ground-state or saturation density $\rho_{sat}$, and the saturation density
$\rho_{sat}$ itself.
Furthermore, the momentum dependence is fixed by
the optical potential $U_{opt}$ at ground-state density and at two kinetic energies,
namely at $E_{kin}=205~\UNIT{MeV}$ (where $U_{opt}=0$)
and at $E_{kin}=1000~\UNIT{MeV}$.
The numerical calculations for the fit procedure have been performed using the
Nelder-Mead minimization
algorithm NELMIN~\cite{Nelder:1965zz}. Furthermore, an adaptive non-linear least-square
package NL2SOL~\cite{Nl2sol:ref1,Nl2sol:ref2} has been supplemented in order to test robustness
of the fit procedure.
The experimental data used in the fit are supplemented by the corresponding errors
as provided by the empirical information in Table~\ref{tab2}.
At each iteration step of the minimization routines, the coupled set
of the NLD equations has been solved with a Powell-hybrid method as provided by the
HYBRD routine~\cite{HYBRD:ref1}. The integrals which appear in the source terms of the meson-field
equations have been treated numerically using an adaptive Gauss algorithm.
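The overall structure of this procedure can be summarized in a few lines of schematic
Python; this is an illustration only, with trivial stand-ins for the field equations and
observables (note that \texttt{scipy.optimize.fsolve} wraps the same MINPACK \texttt{hybrd}
routine cited above):
\begin{verbatim}
# Schematic fit loop: outer Nelder-Mead search over the parameters,
# inner self-consistent solution of the meson-field equations.
import numpy as np
from scipy.optimize import minimize, fsolve

def solve_fields(params, rho_B):
    g_sigma, g_omega = params
    def residual(fields):
        sigma, omega = fields
        # stand-in for m_sigma^2 sigma + dU/dsigma - g_sigma rho_s = 0
        return [sigma - g_sigma * rho_B, omega - g_omega * rho_B]
    return fsolve(residual, x0=[0.0, 0.0])

def chi2(params, data, errors):
    model = solve_fields(params, rho_B=0.156)   # stand-in observables
    return np.sum(((model - data) / errors) ** 2)

data, errors = np.array([0.03, 0.02]), np.array([0.01, 0.01])
res = minimize(chi2, x0=[0.2, 0.2], args=(data, errors),
               method="Nelder-Mead")
\end{verbatim}
In the actual fit the inner step solves the coupled NLD meson-field equations, and the
$\chi^{2}$ runs over the saturation properties and the optical-potential points listed
above.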
Note that, in the NLD model with momentum-dependent regulator functions, the solution
of the dispersion relation for the single-particle energy $E$, Eq.~(\ref{onshell}),
does not involve additional complex root-finding algorithms, because the RMF
selfenergies depend on the nucleon momentum $p$ only, and not on the particle energy
as in our previous works~\cite{Gaitanos:2011yb,Gaitanos:2011ej,Gaitanos:2009nt}.
The functional form of the non-linear operator, its regulator (cut-off) in nuclear matter
and the fit parameters of the NLD model are shown in Table~\ref{tab1}. Table~\ref{tab2}
shows the extracted bulk properties of nuclear matter,
{\it i.e.}, the binding energy, the compression modulus,
the asymmetry parameter $a_{sym}=E_{sym}(\rho_{sat})$ ($E_{sym}$ is the symmetry
energy), the slope and curvature parameters,
$L=3\rho_{sat}\frac{\partial E_{sym}}{\partial\rho_{B}}\vert_{\rho_{B}=\rho_{sat}}$ and
$K_{asy}=K_{sym}-6L$ (with
$K_{sym}=9\rho_{sat}^{2}\frac{\partial^{2}
E_{sym}}{\partial\rho_{B}^{2}}\vert_{\rho_{B}=\rho_{sat}}$
being the compressibility of the symmetry energy), respectively, in the
NLD model, and in comparison with other RMF models widely used in the literature. These are
the NL$3^{*}$ parametrization~\cite{Lalazissis:2009zz}, the density dependent (DD) and derivative
coupling (D$^{3}$C) models~\cite{Typel:1999yq,Typel:2005ba}. The bulk nuclear matter properties
in the NLD model are comparable with those results in the NL$3^{*}$, DD and
D$^{3}$C parameterizations, while for the saturation density and the slope and
curvature parameters of the symmetry energy the NLD calculation is closer to the empirical
data. It is interesting that the NLD results are also comparable with, and in some
cases even better than, the DBHF calculations.
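The characteristics quoted in Table~\ref{tab2} follow from the definitions above by
straightforward differentiation; as a minimal illustration (with a toy symmetry energy,
not the NLD one), they can be extracted from any tabulated $E_{sym}(\rho_{B})$ by finite
differences:
\begin{verbatim}
# Sketch: slope L, curvature K_sym and K_asy = K_sym - 6L from a given
# E_sym(rho), using central finite differences at rho_sat.
def sym_parameters(E_sym, rho_sat, h=1e-4):
    d1 = (E_sym(rho_sat + h) - E_sym(rho_sat - h)) / (2 * h)
    d2 = (E_sym(rho_sat + h) - 2 * E_sym(rho_sat)
          + E_sym(rho_sat - h)) / h**2
    L = 3 * rho_sat * d1
    K_sym = 9 * rho_sat**2 * d2
    return L, K_sym, K_sym - 6 * L

rho_sat = 0.156
toy = lambda r: 31.0 * (r / rho_sat) ** (2.0 / 3.0)  # toy E_sym in MeV
L, K_sym, K_asy = sym_parameters(toy, rho_sat)
\end{verbatim}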
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure1.eps}
\caption{\label{fig1}
Vector (upper panel) and scalar (lower panel) selfenergies
as function of the baryon density $\rho_{B}$ in the NLD model (solid curves)
and in the DBHF approach (filled squares)~\cite{Brockmann:1990cn}.
The calculations refer to isospin-symmetric ($\alpha=0$) nuclear matter.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
As we will discuss later on in more detail, the NLD model with the same parameters
of Table~\ref{tab1} describes remarkably well the empirical energy dependence of the
Schr\"odinger-equivalent optical potential. This is not the case in standard RMF approaches,
such as the NL$3^{*}$ and DD models, except if one uses supplementary derivative interactions
(D$^{3}$C), however, at the cost of many additional parameters~\cite{Typel:2005ba}.
\subsection{\label{sec5b}The NLD equation of state}
We start the discussion of the NLD calculations with the density dependence of the
relativistic mean-fields. Fig.~\ref{fig1} shows the density dependence of the
nucleon selfenergies for isospin-symmetric nuclear matter in the NLD model.
Our results are compared also with DBHF calculations, widely used in the
literature~\cite{Brockmann:1990cn}. It is remarkable that a simple folding
of the meson-nucleon couplings $g_i$ ($i=\sigma,\omega,\rho$) with the cut-off
function ${\cal D}$ results in a very similar density dependence of the NLD
selfenergies as compared to the DBHF selfenergies,
in particular, for densities $\rho_{B}>0.3{-}0.4~\UNIT{fm}^{-3}$.
Note again that for momentum-dependent cut-off functions the standard normalization
of the spinors, {\it i.e.}, $N_i=\sqrt{\frac{E^{*}_{i}+m^{*}_{i}}{2E^{*}_{i}}}$,
is not affected,
since $\Pi^{0}_{i}=E^{*}_{i}$ holds, and according to Eq.~(\ref{bigPi}) only the pressure
is modified, as shown in Eq.~(\ref{nldeq18b}).
Fig.~\ref{fig2} shows the equation of state in terms of the binding energy per nucleon
as function of baryon density for isospin-symmetric ($\alpha=0$) and pure neutron
matter ($\alpha=-1$). The isospin-asymmetry parameter $\alpha$ is defined in the
usual way as $\alpha = \frac{J^{0}_{p}-J^{0}_{n}}{J^{0}_{p}+J^{0}_{n}}$, where
$J^{0}_{p,n}$ denote the proton and neutron densities, respectively.
The momentum-dependent monopole form-factor of the NLD model regulates the
high-density dependence of the fields such that the NLD EoS fits very well the
DBHF calculations, for both symmetric nuclear matter and pure neutron matter. Note that
the NLD parameters are not fitted to the calculations of DBHF models, but to the
empirical information at ground state density only.
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure2.eps}
\caption{\label{fig2}
Equation of state (EoS) in terms of the binding energy per nucleon as function of the
baryon density $\rho_{B}$ for isospin-symmetric ($\alpha=0$, lower curve and symbols) and
pure neutron matter ($\alpha=-1$, upper curve and upper symbols). The NLD results (solid curves)
are compared with DBHF calculations (filled squares)~\cite{Li:1992zza}.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
In Fig.~\ref{fig3} the density dependence of the EoS is displayed again, but now
we compare the NLD calculation with other RMF models, which have been widely used
in studies of finite nuclei and of nuclear matter. Even though all models give similar results
for the saturation point of nuclear matter, the differences between the NLD model and
the other RMF approaches at densities beyond the
ground state are large. In particular, the NL3$^{*}$ parametrization of
Lalazissis {\it et. al.}~\cite{Lalazissis:2009zz}, which is based on the Walecka model
with non-linear
selfinteractions predicts the stiffest EoS, while the RMF approaches DD and D$^{3}$C
with density-dependent meson-nucleon couplings give a softer density behavior at high
densities. However, none of these RMF approaches can reproduce the microscopic
calculations of the DBHF model at such high densities. In general, the non-linear
density dependence of the NLD model, induced by the cut-off form factor, results in a
much softer EoS at high densities, in agreement with the DBHF
calculations. Note that heavy-ion studies at energies just below the kaon production
threshold ($E_{beam}\leq 1~\UNIT{GeV}/A$) on collective nucleon
flows~\cite{Danielewicz:2002pu,Sahu:1998vz,Gaitanos:2001hv}
and on produced mesons (positive charged
kaons)~\cite{Fuchs:2005zg,Fuchs:2000kp,Hartnack:2005tr,Hartnack:2011cn}
support a rather soft EoS at densities around $\rho_{B}\simeq (2-3)\rho_{sat}$.
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure3.eps}
\caption{\label{fig3}
Same as in Fig.~\ref{fig2}, but now the NLD model (solid curve)
is compared with various RMF calculations. These are the NL3$^{*}$
parametrization (dashed-dotted-dotted curve)~\cite{Lalazissis:2009zz},
the density-dependent (DD, dashed curve) and derivative-coupling
(D$^{3}$C, dashed-dotted curve) models~\cite{Typel:1999yq,Typel:2005ba}
and the linear Walecka model (RHD, dotted curve)~\cite{Walecka:1974qa}.
The DBHF calculations (filled squares) are shown too. All the
calculations refer to isospin-symmetric ($\alpha=0$) nuclear matter.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
Furthermore, an interesting issue concerns the inclusion of the non-linear
selfinteractions of the $\sigma$-field in the RMF descriptions. It is well known
that such terms may cause divergences at very high densities, where one intends
to study compact neutron stars. Indeed, many RMF approaches such as the
NL3$^{*}$~\cite{Lalazissis:2009zz} or the NL$\rho$ and
NL$\rho\delta$~\cite{Gaitanos:2003zg}
parameterizations, lead to a non-physical behavior of the $\sigma$-field at
supra-normal densities. This is not the case in the NLD model, where the non-linear
selfinteraction terms of the $\sigma$-meson do not cause any divergences of the
$\sigma$-field even at supra-normal densities. This novel NLD feature arises from
the suppression of the scalar sources by the cut-off function ${\cal D}$, which in the
limiting case of large densities $\rho_{B}$ tends to vanish.
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure4.eps}
\caption{\label{fig4}
Same as in Fig.~\ref{fig3}, but now for the
symmetry energy $E_{sym}$ as function of baryon density $\rho_B$.
The additional symbols (diamonds, triangle and circles) around and below
the saturation density refer to
empirical data~\cite{Shetty:2010ib,Shetty:2007zg}.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
Another quantity of interest is the symmetry energy. This quantity is important
for astrophysical applications, since it directly influences the density dependence
of the proton-fraction in the neutron star interior. It is
extracted as the difference between the EoS of pure neutron matter and that of
symmetric matter. Fig.~\ref{fig4} shows the symmetry energy as function of the
baryon density in the NLD model, in other RMF models and in the DBHF approach.
Note that the NLD results differ from the previous
works~\cite{Gaitanos:2011yb}. This is because now the NLD model consistently includes the
renormalization of the spinors. This is essential for the thermodynamic consistency of the
formalism, which was not taken into account in Ref.~\cite{Gaitanos:2011yb}. Furthermore,
a different form of the regulator is used here and the parameters
of the NLD model are fitted using low-density observables at saturation.
At first, all models describe fairly well the empirically known region around the
saturation density, as was already shown in Table~\ref{tab2}. However, the
differences between the models become more pronounced with increasing density. The
stiffest symmetry energy is obtained again in the NL3$^{*}$ and the RHD calculations,
because in these cases $E_{sym}$ is just proportional to the $\rho$-meson field, which
increases linearly with density. Furthermore, the symmetry energy is considerably
softened in the DD and D$^{3}$C approaches, due to the exponentially decreasing
density dependence of the $\rho$-nucleon vertices.
On the other hand, the NLD model leads to a softer density behavior of the
symmetry energy relative to the standard NL and RHD parametrization but to a
stiffer dependence than the density-dependent approaches. It is again
an interesting feature that the NLD calculations reproduce almost perfectly the results
of the microscopic DBHF approach~\cite{Li:1992zza}.
\subsection{\label{sec5c}In-medium nucleon optical potentials}
Apart from the density dependence, which arises from both the source terms of the
meson-field equations and the cut-off functions, the NLD selfenergies
depend explicitly on momentum.
The momentum (or energy) behavior of the RMF fields is described by the
in-medium Schr\"odinger-equivalent optical potential. This quantity results from the
reduction of the Dirac equation and reads~\cite{Jaminon:1989wj}
\begin{align}
U_{opt} = -S + \frac{E}{m}V +
\frac{1}{2m}\left( S^{2}-V^{2}\right)
\label{Uopt}
\,,
\end{align}
where the selfenergies $S=\Sigma_{si}(\rho_{B},p)$,~$V=\Sigma_{vi}^{0}(\rho_{B},p)$
refer to a proton ($i=p$) or a neutron ($i=n$) with a particular momentum $|\vec{p}\,|=p$
relative to nuclear matter at rest at a given density $\rho_{B}$ and isospin-asymmetry
$\alpha$.
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure5.eps}
\caption{\label{fig5}
(Color online)
The in-medium proton optical potential $U_{opt}$ according to Eq.~(\ref{Uopt})
as function of the in-medium single-particle kinetic energy, Eq.~(\ref{Ekin}),
within the NLD model (solid curve) and the RHD~\cite{Walecka:1974qa}
approach at saturation density. The symbols (filled circles) refer to
the results of the Dirac phenomenology~\cite{Cooper:1993nx,Hama:1990vr}.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
The single-particle
energy $E$ is calculated from the in-medium dispersion relation
\begin{align}
E = \sqrt{m^{*2}+p^{2}} + V
\label{Ekin}
\end{align}
and the kinetic energy reads $E_{kin}=E-m$~\cite{Typel:2005ba,Haar:1986ii,TerHaar:1987ce}.
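For orientation, Eqs.~(\ref{Uopt}) and (\ref{Ekin}) are trivial to evaluate once $S(p)$
and $V(p)$ are known; the short sketch below does this with the sign convention
$m^{*}=m-S$ and with a monopole momentum dependence inserted by hand, so the numbers it
produces are illustrative and not the NLD results of Fig.~\ref{fig5}:
\begin{verbatim}
# Sketch of Eqs. (Uopt)/(Ekin): Schroedinger-equivalent optical
# potential for given scalar/vector selfenergies; units of MeV.
import numpy as np

m = 939.0

def ekin_and_uopt(p, S0, V0, Lam):
    D = Lam**2 / (Lam**2 + p**2)        # illustrative monopole factor
    S, V = S0 * D, V0 * D
    E = np.sqrt((m - S)**2 + p**2) + V  # Eq. (Ekin) with m* = m - S
    U = -S + (E / m) * V + (S**2 - V**2) / (2.0 * m)
    return E - m, U                     # (E_kin, U_opt)

# e.g. S0 = 350, V0 = 280, Lam = 1000 (MeV) at saturation density
\end{verbatim}
The Lane potential of Eq.~(\ref{Uiso}) below follows in the same way from the proton and
neutron potentials.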
Here we consider the real part of $U_{opt}$ only, which enters the momentum dependent
mean-field dynamics of a nucleon in nuclear matter. The imaginary part, which is beyond
the scope of the present work, can be further accounted for in the collision integral
within the relativistic transport equation, see for instance Ref.~\cite{Botermans1990115},
when applying the NLD approach to the description of heavy-ion collisions.
Alternatively, one can use a dispersion theoretical framework to calculate the
imaginary part of the optical potential using as an input the real part of the
optical potential in the dispersion integral, see Ref.~\cite{Gaitanos:2011ej}
as an example of such approach to the imaginary part of $U_{opt}$.
In this context we would like to note that the imaginary contributions of the
relativistic mean-fields do not influence essentially the real part of the
optical potential, see for example Refs.~\cite{Weber:1992qc,Typel:2005ba}.
The reason is that the
imaginary contributions of the selfenergies, as obtained from empirical
studies, are of similar magnitude. They enter only through the terms quadratic
in the selfenergies in the expression for the real part of the
Schr\"odinger-equivalent optical potential, Eq.~(\ref{Uopt}),
and therefore they almost cancel each other.
Fig.~\ref{fig5} shows the results of our calculations in comparison with the empirical
data extracted from Dirac phenomenology~\cite{Cooper:1993nx,Hama:1990vr} for the in-medium
proton optical potential in symmetric ($\alpha=0$) nuclear matter at saturation density
$\rho_{B}=\rho_{sat}$. The optical
potential saturates with increasing single-particle energy in the NLD model and it
is in agreement with the experimental data. The saturation mechanism is
induced by the explicit momentum dependence of the cut-off functions, which drop
with increasing momentum.
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure6.eps}
\caption{\label{fig6}
Energy dependence of the Lane-type optical potential $U_{iso}$, Eq.~(\ref{Uiso}),
for asymmetric ($\alpha=0.4$) nuclear matter at saturation density.
Calculations in the RHD~\cite{Walecka:1974qa} (dotted curve) and NLD (solid curve) models
are shown.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
It is a novel feature of the NLD model that the regulators with a simple
momentum-dependent monopole form are sufficient to describe accurately the bulk
properties of nuclear matter and at the same time the empirical energy dependence of the
optical potential. In fact, this issue has been a long-standing problem in nuclear
matter studies, when one attempted to describe heavy-ion reactions within RMF
models~\cite{Blaettel:1993uz}. As also shown in Fig.~\ref{fig5}, in standard RMF models,
such as the widely used NL-parameterizations~\cite{Lalazissis:2009zz,Lalazissis:1996rd},
which describe excellent the
saturation properties and also the properties of finite nuclei, cannot reproduce the
correct energy dependence of the optical potential. Moreover, they strongly diverge with increasing
energy. Similar conclusions are obtained in the RMF approaches with density-dependent
vertex functions~\cite{Typel:1999yq}, except if one includes additional
energy-dependent terms~\cite{Typel:2005ba}.
Note that the microscopic DBHF models~\cite{Haar:1986ii,TerHaar:1987ce,Plohl:2005fn}
describe the empirical data satisfactorily at low energies below the pion production threshold.
For isospin-asymmetric nuclear matter the key quantity is the so-called Lane
potential~\cite{Li:2008gp,Baran:2004ih}, which is defined as the difference between
the neutron and proton optical potentials
\begin{align}
U_{iso} = \frac{U_{n}-U_{p}}{2|\alpha|}
\label{Uiso}
\,.
\end{align}
Fig.~\ref{fig6} displays the energy dependence of the Lane potential. In contrast
to the case of isospin-symmetric matter, less empirical information is available here.
The studies of Ref.~\cite{Li:2008gp} predict a decrease of the Lane potential with
increasing momentum, but other analyses~\cite{Baran:2004ih} conclude the opposite
trend, {\it i.e.}, an increasing Lane potential. The microscopic DBHF calculations
also predict a decreasing optical potential, in agreement with results of
other BHF calculations~\cite{Zuo:2005hw,vanDalen:2005sk}.
Furthermore, the standard RHD model leads to an
almost quadratic (or linear in energy) dependence in momentum, just because of the missing
momentum dependence in the RHD selfenergies (similar results are obtained in the
NL3$^{*}$ and DD parameterizations). The NLD calculations predict a weakly
decreasing Lane potential, which is in qualitative agreement with the (D)BHF results.
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure6_a.eps}
\caption{\label{fig6a}
Energy (momentum) dependence of the optical $U_{opt}$ (Lane) potentials
in the main (inserted) panel. The curves have the same meaning as in Figs.~\ref{fig3}
and~\ref{fig4}.
Here the D$^{3}$C results are taken from~\cite{typelpc}.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
We compare now the NLD results separately in Fig.~\ref{fig6a}
for both the in-medium proton optical potential for symmetric nuclear matter and the
Lane potential, with the same RMF approaches as in Figs.~\ref{fig3} and~\ref{fig4}. Not
only the original linear Walecka model (RHD), but also more modern RMF approaches, such as
the non-linear NL3 model in its updated version (NL3$^{*}$) and the density-dependent RMF
model (DD) predict an energy dependence, which is not consistent with phenomenology
(filled circles). This situation was improved in a modified DD model (D$^{3}$C) by the
inclusion of terms linearly proportional to the single-particle energy. The NLD model
provides here a smoother energy dependence.
The situation for the Lane potential is shown in the insert of Fig.~\ref{fig6a}. The interesting
issue here is that not only the standard RMF models (RHD, NL3$^{*}$ and DD), but also the
energy-dependent RMF approach (D$^{3}$C), show a common behavior in momentum. This is due
to the fact that the isovector channel in the D$^{3}$C RMF model does not include any
explicit momentum dependence~\cite{typelpc}. On the other hand, in the NLD model
also the isovector mean-field depends on momentum and predicts a decrease of the Lane potential
with increasing momentum. This NLD trend seems to be supported by microscopic DBHF (filled
symbols in the insert of Fig.~\ref{fig6a}), at least on a qualitative level.
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure7.eps}
\caption{\label{fig7}
Thermodynamic consistency of the NLD model
for symmetric ($\alpha=0$, left panel) and asymmetric
($\alpha=-0.4$, right panel) nuclear matter.
The pressure densities $P$ as function of the baryon density $\rho_{B}$ are shown, as
calculated within the perfect fluid formula (solid curves) using the l.h.s.
of Eq.~(\ref{press}) and from the thermodynamic definition (filled diamonds) using
the r.h.s. of Eq.~(\ref{press}).
\vspace{-0.3cm}
}
\end{center}
\end{figure}
\subsection{\label{sec5d}Thermodynamic consistency of NLD model}
As shown by Weisskopf~\cite{Weisskopf:1957} in an independent-particle model, the single
particle energy at the Fermi surface must equal the average energy per particle
at saturation density. Hugenholtz and Van~Hove~\cite{Hugenholtz:1958} proved this also
for an interacting Fermi gas at zero temperature.
For the thermodynamic consistency of the RMF model it is sufficient to prove Euler's theorem
\begin{align}
\varepsilon = -P + \sum_{i=p,n}\mu_{i}\rho_{B_{i}}
\label{euler}
\,,
\end{align}
with the chemical potentials $\mu_{i}=\frac{\partial\varepsilon}{\partial\rho_{B_{i}}}$
and the thermodynamic definition of the pressure
\begin{align}
P=\rho_{B}^{2}\frac{\partial(\varepsilon/\rho_{B})}{\partial\rho_{B}}\,.
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure8.eps}
\caption{\label{fig8}
Equation of state in terms of the pressure densities as function of the baryon density
(normalized to the corresponding saturation values $\rho_{sat}$).
The shaded areas denote possible experimental regions, as extracted from studies on
heavy-ion collisions~\cite{Danielewicz:2002pu}. The different curves have the same meaning
as in Figs.~\ref{fig3} and~\ref{fig4}.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
The expression in Eq.~(\ref{euler}) allows one to examine
the internal consistency of the model. For this purpose it is sufficient
to check the equality between the pressure obtained from the assumption of nuclear
matter as a perfect-fluid system and the pressure obtained from the thermodynamic
definition. That is
\begin{align}
P=\frac{1}{3}\; (T^{xx}+T^{yy}+T^{zz}) = \rho_{B}^{2}
\frac{\partial(\varepsilon/\rho_{B})}{\partial\rho_{B}}
\label{press}
\,,
\end{align}
where $\varepsilon$ and $T^{ii}$ ($i=x,y,z$) denote the energy density and
the spatial diagonal components of the energy-momentum
tensor $T^{\mu\nu}$, respectively.
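In practice this check can be done numerically: given the tabulated energy density and the
perfect-fluid pressure from the energy-momentum tensor, the r.h.s. of Eq.~(\ref{press}) is
obtained by a finite difference of $\varepsilon/\rho_{B}$. A minimal sketch of such a test
(the input arrays standing in for the actual NLD tables) reads
\begin{verbatim}
# Numerical check of Eq. (press): compare the perfect-fluid pressure
# P_fluid(rho) with rho^2 d(eps/rho)/drho from the tabulated eps(rho).
import numpy as np

def check_consistency(rho, eps, P_fluid, tol=1e-3):
    P_thermo = rho**2 * np.gradient(eps / rho, rho)
    rel_dev = np.abs(P_thermo - P_fluid) \
              / np.maximum(np.abs(P_fluid), 1e-12)
    return P_thermo, rel_dev.max() < tol
\end{verbatim}
For a thermodynamically consistent model the two pressures must agree within the accuracy
of the finite difference, which is what we find for the NLD model below.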
Conventional RMF approaches with bare meson-nucleon couplings or density-dependent
meson-nucleon vertices are thermodynamically
consistent~\cite{Lalazissis:1996rd,Serot:1997xg,Typel:2002ck,Boguta:1981yn}.
However, in the
case of explicit energy or momentum-dependent mean-fields the situation is more
complex. In fact, as we have examined here, one has to take care of the proper
renormalization of the Dirac fields. This issue was not taken into
account in previous studies~\cite{Gaitanos:2009nt} resulting in a deviation between the l.h.s.
and r.h.s. of Eq.~(\ref{press}) by $\simeq~5-10\%$ at high baryon densities.
We have checked that
the NLD model presented here satisfies thermodynamic consistency exactly for
any kind of energy- or momentum-dependent cut-off form factors. This is demonstrated in
Fig.~\ref{fig7} for the monopole form factor, where the pressure as function of density
for symmetric nuclear matter (left panel) and pure neutron matter (right panel) is shown.
Indeed, the pressures obtained from both definitions in Eq.~(\ref{press}) agree exactly.
Furthermore, the chemical potentials at zero temperature are equal to the corresponding
Fermi energies of protons and neutrons (not shown here) and therefore Euler's theorem is
evidently fulfilled.
We can now consider the high-density behavior of the pressure by
comparing our results with available empirical information. Fig.~\ref{fig8} shows
the density dependence of the pressure densities in symmetric matter and pure neutron
matter. The conventional RHD approach leads again to a stiff behavior for all the densities.
Similar conclusions are drawn for the NL3$^*$, DD and D${}^{3}$C approaches
for symmetric matter, while for pure neutron matter the density-dependent
models come closer to the empirical HIC data. A more detailed discussion on these
approaches can be found in Ref.~\cite{Klahn:2006ir}.
The pressure in the NLD model shows generally a softer density dependence and agrees better with
the estimated experimental regions, in particular, at densities up to
$\rho_{B}\simeq 4\rho_{sat}$ for the symmetric case and for all densities for pure neutron
matter. Note that for larger densities conclusions on the nuclear
matter EoS from heavy-ion studies are more ambiguous, because at such high densities (or
corresponding beam energies larger than 4 $\UNIT{GeV}$ per nucleon) a large fraction of the initial
energy is converted into new degrees of freedom~\cite{Danielewicz:2002pu,Klahn:2006ir}.
\subsection{\label{sec5e}High density observables and Neutron Stars}
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure9.eps}
\caption{\label{fig9}
Neutron star mass $M$ (in units of the solar mass $M_{\odot}$)
versus radius $R$ in the NLD model (dashed and solid curves).
The three horizontal shaded bands refer to astrophysical measurements
from double neutron star (NS) systems~\cite{Lattimer:2004pg,Lattimer:2006xb}
and from the pulsars PSR J1903+0327~\cite{Freire:2010tf} and
PSR J1614-2230~\cite{Demorest:2010bx}, as indicated. The other shaded areas
bordered by thick curves indicate parameter space excluded by general
relativity, causality (shaded area on the top-left) and rotational
constraints (shaded area on the bottom-right)~\cite{Lattimer:2004pg,Lattimer:2006xb}.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
We test now the high-density domain of the NLD equation of state. Compact neutron
stars offer the opportunity to gain deeper insight into compressed baryonic
matter. Of particular interest are recent measurements on the binary millisecond
pulsar J1614-2230 with a mass of $(1.97\pm 0.04)~M_{\odot}$~\cite{Demorest:2010bx}.
The latter is much heavier than the average mass
of the binary radio pulsars $M=1.35\pm 0.04~M_{\odot}$~\cite{Thorsett:1998uc}
and provides a strong constraint on the high-density EoS.
Therefore, we apply the NLD model to spherical, non-rotating stars in $\beta$-equilibrium
between protons, neutrons and electrons, including crustal effects on the
EoS~\cite{Baym:1971pw}. The star structure is calculated by solving the
Tolman-Oppenheimer-Volkoff (TOV) equation~\cite{Glendenning:2005ix,Sagert:2005fw,Weber:2004kj}.
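For completeness we sketch the integration of the TOV equations; the illustration below
uses units with $G=c=1$ absorbed into the input EoS table $\varepsilon(P)$ and omits the
crust matching, so it shows the procedure rather than the actual solver behind
Fig.~\ref{fig9}:
\begin{verbatim}
# Minimal TOV sketch: integrate dP/dr and dm/dr from the center
# outward until the pressure drops to (numerically) zero.
import numpy as np
from scipy.integrate import solve_ivp

def tov_rhs(r, y, eps_of_P):
    P, m_enc = y
    eps = eps_of_P(max(P, 0.0))
    dPdr = -(eps + P) * (m_enc + 4.0 * np.pi * r**3 * P) \
           / (r * (r - 2.0 * m_enc))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dPdr, dmdr]

def star(P_c, eps_of_P, r_max=30.0):
    surface = lambda r, y, *a: y[0] - 1e-12 * P_c
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (1e-6, r_max), [P_c, 0.0],
                    args=(eps_of_P,), events=surface, rtol=1e-8)
    return sol.t[-1], sol.y[1][-1]      # radius R, enclosed mass M
\end{verbatim}
Scanning the central pressure $P_{c}$ then traces out the mass-radius relation shown
below.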
The results for neutron stars are shown in Fig.~\ref{fig9} in terms
of the mass-radius relation. The various
astrophysical measurements of NS masses~\cite{Lattimer:2004pg,Lattimer:2006xb}
can be arranged in the three
horizontal shaded areas as displayed in Fig.~\ref{fig9}. The lowest band
around an average value of $1.44~M_{\odot}$ refers to the well established
measurements on double neutron star systems and the middle one around
$1.67~M_{\odot}$ to the extracted mass of the pulsar PSR J1903+0327. The band
on the top represents by far the highest precisely observed neutron star mass
$1.97\pm 0.04~M_{\odot}$
of the pulsar PSR J1614-2230~\cite{Demorest:2010bx}. There are two regions in
Fig.~\ref{fig9} excluded~\cite{Lattimer:2004pg,Lattimer:2006xb} by
general relativity, causality (shaded area on the top-left) and rotational
constraints (shaded area on the bottom-right).
\begin{figure}[t]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth,angle=0.]
{Figure10.eps}
\caption{\label{fig10} Pressure densities as function
of the baryon density for nuclear matter in $\beta$-equilibrium within
the NLD (solid curve) model. The shaded areas together with the three
error bars indicate the pressure-density region extracted directly from
neutron-star measurements~\cite{Ozel:2010fw}.
\vspace{-0.3cm}
}
\end{center}
\end{figure}
The neutron star mass-radius relation in the NLD model is shown by the solid/dotted curve
in~Fig.~\ref{fig9}. The dotted part of the NLD curve is
excluded by rotational constraints. The solid curve crosses the low-mass
constraints, and arrives at a maximum neutron star mass of $M=2.03~M_{\odot}$ at
a radius of $R=11.07$ km and a corresponding central density of
$\rho_{c}\simeq 7~\rho_{sat}$. The NLD prediction for the maximum value
of neutron star masses crosses also the constraint provided by the pulsar
PSR J1614-2230, and therefore this recent mass measurement is accommodated by the
NLD model.
Other possible constraints on the high-density EoS are obtained by statistical
Bayesian analyses, which rely on neutron star
measurements~\cite{Ozel:2010fw,Steiner:2010fz}.
They provide the most probable distribution of the equation of state,
as shown in Fig.~\ref{fig10} (shaded area)
for highly compressed matter in $\beta$-equilibrium.
The RHD model (not shown in this figure) leads to a too stiff density
dependence and overestimates this empirical region, particularly at high densities.
The NLD calculations, where only nucleonic degrees
of freedom are accounted for, describe fairly well the most probable region
of the pressure at high densities.
Note again that the parameters of the NLD model have been adjusted just to the saturation
properties of nuclear matter and to the energy dependence of the in-medium proton optical
potential at saturation, without the consideration of any other high density
observables in the fit procedure. We conclude here that the NLD model describes well the available
low- and also high-density constraints on EoS of nuclear matter and
neutron stars.
\section{\label{sec6}Summary}
In summary, in the present work
the generalized form of the energy-momentum tensor in the NLD model was
derived, which allowed us to consider different forms of the regulator
functions in the NLD Lagrangian. The thermodynamic consistency of the NLD
model was further demonstrated for
an arbitrary choice of the regulator functions. A thorough study of the
properties of nuclear matter around saturation density has been
performed. We have shown that the NLD approach describes well
the saturation properties of nuclear matter and
compares remarkably well with microscopic calculations and Dirac phenomenology.
We have investigated the high density part of the
NLD EoS. This is relevant for neutron stars in $\beta$-equilibrium.
We found that the low-density constraints imposed
on the nuclear matter EoS and the momentum dependence of the
Schr\"odinger-equivalent optical potential lead to a maximum mass of the
neutron stars around $M \simeq 2 M_{\odot}$. The latter mass accommodates
the observed mass in the J1614-2230 millisecond radio pulsar system.
We further studied the EoS of matter in $\beta$-equilibrium and
found that the high-density pressure-density diagram as extracted from
astrophysical measurements can be well described in the NLD model,
which relies on nucleonic degrees of freedom only.
The EoS proposed here can be used in transport theoretical studies of nuclear
collisions, since it describes very well both the low-energy (density) and
the high-energy (density) regions of the nuclear phase diagram. The model
predicts a saturation of the optical potential at high energies and results
in a saturation of the symmetry energy.
An interesting finding is that the momentum dependent
interaction makes the EoS softer at low densities; however, it is still stiff
enough at supra-normal densities to account for the recent measurements of
the neutron star masses. Furthermore, the model can be applied to the transport
description of the anti-nucleon optical potential as well as to the study of
dynamics of compressed matter in reactions induced by heavy-ions and
anti-proton beams at the future FAIR facility.
\begin{acknowledgments}
This work was supported by DFG and by DFG through TR16. We are grateful to
Dr. Thomas von Chossy for discussions concerning the numerical implementation of
the minimization routines. We also acknowledge the correspondence with
Prof. Dr. Lie-Wen Chen and Bao-Jun Cai on the thermodynamic consistency of the
NLD model.
\end{acknowledgments}
\begin{appendix}
\section{\label{app1}Notations and infinitesimal variations}
We use the following abbreviations for higher-order
partial derivatives
\begin{align}
\partial_{\alpha_{1}\cdots\alpha_{n}} := &
\partial_{\alpha_{1}}\partial_{\alpha_{2}}\cdots\partial_{\alpha_{n}} \,,
\nonumber\\
\partial^{\alpha_{1}\cdots\alpha_{n}} := &
\partial^{\alpha_{1}}\partial^{\alpha_{2}}\cdots\partial^{\alpha_{n}} \,,
\nonumber\\
\partial_{\alpha_{1}\cdots\alpha_{n}}^{\beta_{1}\cdots\beta_{m}} := &
\partial_{\alpha_{1}}\cdots\partial_{\alpha_{n}}
\partial^{\beta_{1}}\cdots\partial^{\beta_{m}}
\nonumber
\,.
\end{align}
In the following appendices we will need various definitions of infinitesimal
variations, which are specified here. The total variation of a field
$\varphi_{r}(x)$ with $x^{\mu} = (x^{0},\vec{x}\,)$ is defined as
\begin{align}
\delta_{T}\varphi_{r}(x) := \varphi^{\prime}_{r}(x^{\prime})-\varphi_{r}(x)
\, ,
\end{align}
with the variation with respect to the $4$-coordinate given by
\begin{align}
x^{\prime\,\mu} := x^{\mu} + \delta x^{\mu} =
x^{\mu} + \Delta\vartheta^{\mu\nu} x_{\nu} + \epsilon^{\mu}
\,,
\end{align}
where $\delta x^{\mu}$ denotes an infinitesimal transformation,
\textit{e.g.}, a constant
translation, $\epsilon^{\mu}$, and/or a rotation,
$\Delta\vartheta^{\mu\nu}x_{\nu}$, where $\Delta\vartheta^{\mu\nu}$
is an infinitesimal antisymmetric tensor,
$\Delta\vartheta^{\mu\nu}=-\Delta\vartheta^{\nu\mu}$.
We define the infinitesimal transformation of the field at a fixed
argument as
\begin{align}
\delta\varphi_{r}=\varphi^{\prime}_{r}-\varphi_{r}
\,.
\end{align}
For the derivation of the Noether theorem we will need not only the infinitesimal
variation of a field at fixed argument, but also that of its higher-order
derivatives, {\it i.e.},
$\delta(\partial_{\alpha_{1}}\varphi_{r})$,
$\delta(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})$, $\cdots$,
$\delta(\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r})$. For such variations
the commutation between the symbols $\delta$ and $\partial$ holds, that is
\begin{align}
\delta\left( \partial_{\alpha_{1}}\varphi_{r} \right) = &
\partial_{\alpha_{1}}\varphi^{\prime\,}_{r}(x) - \partial_{\alpha_{1}}\varphi_{r}(x)
\nonumber\\
= &
\partial_{\alpha_{1}}\left( \varphi^{\prime\,}_{r}(x) - \varphi_{r}(x) \right) =
\partial_{\alpha_{1}}\delta\varphi_{r}(x) \,,
\nonumber\\
\delta\left( \partial_{\alpha_{1}\alpha_{2}}\varphi_{r} \right) = &
\partial_{\alpha_{1}\alpha_{2}}\varphi^{\prime\,}_{r}(x) -
\partial_{\alpha_{1}\alpha_{2}}\varphi_{r}(x)
\nonumber\\
= & \partial_{\alpha_{1}\alpha_{2}}\left( \varphi^{\prime\,}_{r}(x) -
\varphi_{r}(x) \right) =
\partial_{\alpha_{1}\alpha_{2}}\left(\delta\varphi_{r}(x)\right)
\,, \nonumber
\end{align}
and similarly for higher-order derivatives.
The total variation of a field $\varphi_{r}(x)$ can be written
in a more convenient way as
\begin{align}
\delta_{T}\varphi_{r}(x) := & \varphi^{\prime}_{r}(x^{\prime})-\varphi_{r}(x)
\nonumber\\
= & \left[
\varphi^{\prime}_{r}(x^{\prime}) - \varphi_{r}(x^{\prime})
\right]
+
\left[
\varphi_{r}(x^{\prime}) - \varphi_{r}(x)
\right]
\,,
\label{var1}
\end{align}
where the first term is just the variation $\delta\varphi_{r}$ at fixed argument.
The second term in Eq.~(\ref{var1}) is the variation with respect to the argument.
In first order, Eq.~(\ref{var1}) reduces to
\begin{equation}
\delta_{T}\varphi_{r}(x) =
\delta\varphi_{r}(x) + \partial_{\alpha}\varphi_{r}(x)\delta x^{\alpha}
\quad .
\label{var2}
\end{equation}
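As a simple illustration of Eq.~(\ref{var2}), consider a constant translation,
$\delta x^{\alpha}=\epsilon^{\alpha}$, under which the fields are form-invariant,
$\delta_{T}\varphi_{r}=0$; then
\begin{align}
\delta\varphi_{r}(x) = -\,\epsilon^{\alpha}\partial_{\alpha}\varphi_{r}(x)
\nonumber
\,,
\end{align}
which is precisely the variation used in appendix~\ref{app2} for the derivation of the
energy-momentum tensor.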
\begin{widetext}
\section{\label{app2}Derivation of the Noether theorem for higher-order Lagrangians}
For the derivation of the Noether theorem we start with the following
Lagrangian density
\begin{align}
{\cal L} =
{\cal L}\left[ \varphi_{r}(x), \partial_{\alpha_{1}}\varphi_{r}(x), \cdots,
\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r}(x)
\right]
\label{La}
\,.
\end{align}
Invariance of the Lagrangian density, Eq.~(\ref{La}), with respect to an infinitesimal
transformation of all the fields and their coordinates implies
\begin{align}
{\cal L}\left[ \varphi^{\prime}_{r}(x^{\prime\,}),
\partial_{\alpha_{1}}^{\prime}\varphi^{\prime}_{r}(x^{\prime\,}), \cdots,
\partial_{\alpha_{1}\cdots\alpha_{n}}^{\prime}\varphi^{\prime}_{r}(x^{\prime\,})
\right] =
{\cal L}\left[ \varphi_{r}(x), \partial_{\alpha_{1}}\varphi_{r}(x), \cdots,
\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r}(x)
\right]
\label{Lvar0}
\,.
\end{align}
In terms of the total variation Eq.~(\ref{Lvar0}) results in
\begin{align}
\delta_{T}{\cal L} =
{\cal L}\left[ \varphi^{\prime}_{r}(x^{\prime\,}),
\partial_{\alpha_{1}}^{\prime}\varphi^{\prime}_{r}(x^{\prime\,}), \cdots,
\partial_{\alpha_{1}\cdots\alpha_{n}}^{\prime}\varphi^{\prime}_{r}(x^{\prime\,})
\right] -
{\cal L}\left[ \varphi_{r}(x), \partial_{\alpha_{1}}\varphi_{r}(x), \cdots,
\partial_{\alpha_{1}\cdots\alpha_{n}}\varphi_{r}(x)
\right] = 0
\label{Lvar}
\,,
\end{align}
where $\displaystyle \partial_{\mu}\equiv \frac{\partial}{\partial x^{\mu}}$
and $\displaystyle \partial_{\mu}^{\prime}\equiv \frac{\partial}{\partial x^{'\mu}}$.
Our goal is to arrive from Eq.~(\ref{Lvar}) at a continuity equation of the form
\begin{equation}
\partial_{\mu}f^{\mu} = 0
\,,
\label{cont}
\end{equation}
with $f^{\mu}$ being a conserved current to be determined in the following.
We will work out here all the analytical evaluations up to third order in the
partial derivatives of the Lagrangian density, and we will give the terms to all
orders in the final equations.
At first, the total variation of the Lagrangian density, Eq.~(\ref{Lvar}),
can be written as
\begin{align}
\delta_{T}{\cal L} = &
{\cal L}[\varphi^{\prime}_{r}(x^{\prime}),
\partial_{\alpha}\varphi^{\prime}_{r}(x^{\prime}),
\partial_{\alpha\beta}\varphi^{\prime}_{r}(x^{\prime}),\ldots ]
-
{\cal L}[\varphi_{r}(x^{\prime}),
\partial_{\alpha}\varphi_{r}(x^{\prime}),
\partial_{\alpha\beta}\varphi_{r}(x^{\prime}),\ldots ]
\label{var3}\\
+ &
{\cal L}[\varphi_{r}(x^{\prime}),
\partial_{\alpha}\varphi_{r}(x^{\prime}),
\partial_{\alpha\beta}\varphi_{r}(x^{\prime}),\ldots ]
-
{\cal L}[\varphi_{r}(x),
\partial_{\alpha}\varphi_{r}(x),
\partial_{\alpha\beta}\varphi_{r}(x),\ldots ] = 0
\,, \nonumber
\end{align}
where we used
$\partial_{\mu}^{\prime}\varphi^{\prime}(x^{\prime\,}) =
\partial_{\nu}\varphi^{\prime}(x^{\prime\,})\partial_{\mu}^{\prime}x^{\nu} =
\partial_{\nu}\varphi^{\prime}(x^{\prime\,})\, \delta_{\mu}^{\nu} =
\partial_{\mu}\varphi^{\prime}(x^{\prime\,})$, with $\delta_{\mu}^{\nu}$ being the
Kronecker symbol.
The first line in Eq.~(\ref{var3}) is just the variation of ${\cal L}$
with respect to the fields
$\varphi_{r}(x^{\prime}),\partial_{\alpha}\varphi_{r}(x^{\prime}),\cdots$
at fixed argument, whereas the terms in the second line
give the variation of the Lagrangian with respect to the argument $x^{\mu}$.
Since we consider infinitesimal transformations only, it is sufficient to
evaluate the latter quantity up to first order with respect to the argument.
Therefore, Eq.~(\ref{var3}) can be written as
\begin{align}
\delta_{T}{\cal L} =
\delta{\cal L} + \partial_{\alpha}{\cal L}\delta x^{\alpha} = 0
\,.
\label{var4}
\end{align}
The variation at fixed argument, $\delta{\cal L}$, can be evaluated
in the usual way according
\begin{align}
\delta{\cal L} =
\sum_{r} \left[
\frac{\partial{\cal L}}{\partial\varphi_{r}} \delta\varphi_{r}
+
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}}\varphi_{r})}
\delta(\partial_{\alpha_{1}}\varphi_{r})
+
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}
\delta (\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})
+
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})}
\delta (\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})
+ \cdots
\right]
\,.
\label{var5}
\end{align}
Replacing the first term in Eq.~(\ref{var5}) with the help of the Euler-Lagrange
equations of motion, Eq.~(\ref{Euler0}), results in
\begin{align}
\delta{\cal L} =
\sum_{r} & \left[
\partial_{\alpha_{1}}\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}}\varphi_{r})}\delta\varphi_{r}
-
\partial_{\alpha_{1}\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}\delta\varphi_{r}
+
\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})}\delta\varphi_{r}
- \cdots
\right.
\label{deltaL_a}\\
&
\left.
+ \frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}}\varphi_{r})}
\delta(\partial_{\alpha_{1}}\varphi_{r})
+
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}
\delta (\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})
+
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})}
\delta (\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})
+
\cdots
\right]
\,. \nonumber
\end{align}
We use the commutation between the variation at fixed argument and
the partial derivative (see appendix~\ref{app1}) and apply the product rule once,
as follows, for the term proportional to $\partial_{\alpha_{1}\alpha_{2}}\delta\varphi$
\begin{align}
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}
\partial_{\alpha_{1}\alpha_{2}}\delta\varphi_{r} =
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}
\partial_{\alpha_{2}}\delta\varphi_{r}
+ \partial_{\alpha_{1}}
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\varphi_{r})}
\partial_{\alpha_{2}}\delta\varphi_{r}
\right]
\,.
\label{product1}
\end{align}
For the term proportional to $\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\delta\varphi$
we apply the product rule twice
\begin{align}
\frac{\partial{\cal L}}{\partial ( \partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r} ) }
\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\delta\varphi_{r} = &
\partial_{\alpha_{1}\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})}
\partial_{\alpha_{3}}\delta\varphi_{r}
\nonumber\\
& -
\partial_{\alpha_{2}}
\left[
\partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})}
\partial_{\alpha_{3}}\delta\varphi_{r}
\right]
+
\partial_{\alpha_{1}}
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}\alpha_{3}}\varphi_{r})}
\partial_{\alpha_{2}\alpha_{3}}\delta\varphi_{r}
\right]
\,,
\label{product2}
\end{align}
and so forth for the terms proportional to higher-order derivatives.
In total, this procedure leads to a series of terms
proportional to $4$-divergences only.
It is more convenient to arrange these terms in such a way as to obtain one
infinite series for each derivative field
$\delta\varphi$, $\partial_{\alpha_{1}}\delta\varphi$,
$\partial_{\alpha_{1}\alpha_{2}}\delta\varphi$,
$\partial_{\alpha_{1}\cdots\alpha_{n}}\delta\varphi$. After
insertion of Eqs.~(\ref{product1}) and~(\ref{product2}) into the
Eq.~(\ref{deltaL_a}) and after their rearrangement we obtain as an
intermediate result
\begin{align}
\delta{\cal L} = \!
\partial_{\mu} \!
\Big \{ & \!
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\alpha_{2}}\varphi_{r})}
-
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\delta\varphi_{r}
\Big .
\label{delta_L}\\
\Big .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}}\varphi_{r})}
-
\phantom{{}_{\alpha_{2}}}
\partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\alpha_{1}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}}\delta\varphi_{r}
\Big .
\nonumber\\
\Big .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\sigma_{2}}\delta\varphi_{r}
\Big .
\nonumber\\
\Big .
& \vdots
\Big .
\nonumber\\
\Big . \!
+ &
\left[
\phantom{~~~~~~~~~}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}}\varphi_{r})}
+
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\cdots\sigma_{n}}\delta\varphi_{r}
\Big \}
\nonumber
\,.
\end{align}
As the next step we replace $\delta\varphi_{r}$ by the total variation,
$\delta_{T}\varphi$, and insert the resulting equation into the total
variation for the Lagrangian in Eq.~(\ref{var4}). Furthermore, we use
\begin{align}
\partial_{\alpha}{\cal L}\delta x^{\alpha}
= \partial_{\mu}\left( g^{\mu}_{\alpha}{\cal L}\delta x^{\alpha}\right)
\label{trick}
\,,
\end{align}
where $g^{\mu}_{\alpha}$ is the metric tensor. Eq.~(\ref{trick}) obviously
holds, when the infinitesimal transformation for
the coordinates, $\delta x^{\mu}$, concerns a constant displacement of the
$4$-vector $x^{\mu}$. In case of rotations, where $\delta x^{\mu}$ depends
on the coordinate $x^{\mu}$ itself, Eq.~(\ref{trick}) still applies due to
the antisymmetry of the tensor $\vartheta^{\mu\nu}$.
These steps give us the final and general expression for the Noether theorem
with respect to variations of the different fields and their coordinates
\begin{align}
\delta{\cal L} = \!
\partial_{\mu} \!
\Bigg \{ & \!
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\alpha_{2}}\varphi_{r})}
-
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]
\Bigg .
\nonumber\\
\Bigg.
&
\times
\left( \delta_{T}\varphi_{r} - \partial_{\alpha}\varphi_{r}\delta x^{\alpha} \right)
\Bigg .
\nonumber\\
\Bigg .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}}\varphi_{r})}
-
\phantom{{}_{\alpha_{2}}}
\partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\alpha_{1}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]
\Bigg .
\nonumber\\
\Bigg.
&
\times
\partial_{\sigma_{1}}
\left( \delta_{T}\varphi_{r} - \partial_{\alpha}\varphi_{r}\delta x^{\alpha} \right)
\Bigg .
\nonumber\\
\Bigg .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]
\Bigg .
\nonumber\\
\Bigg.
&
\times
\partial_{\sigma_{1}\sigma_{2}}
\left( \delta_{T}\varphi_{r} - \partial_{\alpha}\varphi_{r}\delta x^{\alpha} \right)
\Bigg .
\nonumber\\
\Bigg .
& \vdots
\Bigg .
\nonumber\\
\Bigg . \!
+ &
\left[
\phantom{~~~~~~~~~}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}}\varphi_{r})}
+
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]
\Bigg .
\nonumber\\
\Bigg.
&
\times
\partial_{\sigma_{1}\cdots\sigma_{n}}
\left( \delta_{T}\varphi_{r} - \partial_{\alpha}\varphi_{r}\delta x^{\alpha} \right)
\Bigg .
\nonumber\\
\Bigg .
& - g^{\mu\alpha}{\cal L}\delta x_{\alpha}
\Bigg \}
\label{delta22_L}
\,.
\end{align}
We consider now global phase transformations ($\epsilon \ll 1$)
\begin{align}
\delta x^{\mu}=0~~,~~\varphi^{\prime}_{r}(x^{\prime})=e^{-i\epsilon}\varphi_{r}(x)
\Rightarrow \delta\varphi_{rT} = \delta\varphi_{r} = -i\epsilon \varphi_{r}
\label{disymm}
\end{align}
and obtain the following relations for global phase transformations
\begin{align}
\delta\varphi_{r} = & -i\epsilon \varphi_{r}\,,
\nonumber\\
\partial_{\alpha_{1}}\delta\varphi_{r} = & -i\epsilon \partial_{\alpha_{1}}\varphi_{r}\,,
\nonumber\\
\partial_{\alpha_{1}\alpha_{2}}\delta\varphi_{r}
= &
-i\epsilon \partial_{\alpha_{1}\alpha_{2}}\varphi_{r}\,,
\nonumber\\
\cdots & \,,
\nonumber\\
\partial_{\alpha_{1}\alpha_{2}\cdots\alpha_{n}}\delta\varphi_{r}
= &
-i\epsilon \partial_{\alpha_{1}\alpha_{2}\cdots\alpha_{n}}\varphi_{r}
\,.
\end{align}
Therefore,
the invariance of the Lagrangian under global phase transformations leads
to the continuity equation $\partial_{\mu}J^{\mu} = 0$ with the current given by
\begin{align}
J^{\mu} = \! -i
\Bigg \{ & \!
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\alpha_{2}}\varphi_{r})}
-
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\varphi_{r}
\Bigg .
\label{Noether-Current}\\
\Bigg .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}}\varphi_{r})}
-
\phantom{{}_{\alpha_{2}}}
\partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\alpha_{1}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\sigma_{2}}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
& \vdots
\Bigg .
\nonumber\\
\Bigg . \!
+ &
\left[
\phantom{~~~~~~~~~}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}}\varphi_{r})}
+
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\cdots\sigma_{n}}\varphi_{r}
\Bigg \}
\nonumber
\,.
\end{align}
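As a simple illustration of Eq.~(\ref{Noether-Current}), consider the
symmetrized free Dirac Lagrangian
${\cal L}_{0}=\frac{1}{2}\left[\overline{\Psi}i\gamma_{\mu}\partial^{\mu}\Psi
-(i\partial^{\mu}\overline{\Psi})\gamma_{\mu}\Psi\right]-\overline{\Psi}\Psi m$,
which contains first-order derivatives only. Taking into account the opposite
phase of the conjugate field $\overline{\Psi}$ in Eq.~(\ref{disymm}), the
current reduces to
\begin{align}
J^{\mu} = -i\left[
\frac{\partial{\cal L}_{0}}{\partial(\partial_{\mu}\Psi)}\,\Psi
- \overline{\Psi}\,
\frac{\partial{\cal L}_{0}}{\partial(\partial_{\mu}\overline{\Psi})}
\right]
= -i\left[
\frac{i}{2}\,\overline{\Psi}\gamma^{\mu}\Psi
+ \frac{i}{2}\,\overline{\Psi}\gamma^{\mu}\Psi
\right]
= \overline{\Psi}\gamma^{\mu}\Psi
\,,
\end{align}
{\it i.e.}, the standard Dirac current.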
Using Eqs.~(\ref{tensors}) for the tensors
${\cal K}^{\mu\sigma_{1}\sigma_{2}\cdots}_{r}$ one arrives at the expression in
Eq.~(\ref{current}).
The energy-momentum tensor is derived again with the help of Eq.~(\ref{delta22_L}),
now for the case of constant $4$-translations
$x^{\prime\,\mu} := x^{\mu} + \delta x^{\mu}$. This means
$\delta_{T}\varphi_{r}(x)=0$, and the following expression is obtained
\begin{align}
\partial_{\mu}
\Bigg \{ &
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\alpha_{2}}\varphi_{r})}
-
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial^{\nu}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
+ &
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}}\varphi_{r})}
-
\phantom{{}_{\alpha_{2}}}
\partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\alpha_{1}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]
\partial_{\sigma_{1}}^{\nu}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
+ &
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\sigma_{2}}^{\nu}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
&
\vdots
\Bigg.
\nonumber\\
\Bigg . \!
+ &
\left[
\phantom{~~~~~~~~~}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}}\varphi_{r})}
+
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\cdots\sigma_{n}}^{\nu}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
- & g^{\mu\nu}{\cal L}
\Bigg \}\delta x_{\nu} = 0
\,. \label{currapp}
\end{align}
This leads to the continuity equation
\begin{align}
\partial_{\mu}\, T^{\mu\nu} = 0
\end{align}
with the energy-momentum tensor $T^{\mu\nu}$ given by
\begin{align}
T^{\mu\nu} = \!
\Bigg \{ & \!
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\alpha_{2}}\varphi_{r})}
-
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial^{\nu}\varphi_{r}
\Bigg .
\label{tensorapp}\\
\Bigg .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}}\varphi_{r})}
-
\phantom{{}_{\alpha_{2}}}
\partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\alpha_{1}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}}^{\nu}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
+ & \!
\left[
\phantom{
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\alpha_{1}}\varphi_{r})}
+ \partial_{\alpha_{1}\alpha_{2}}
}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}}\varphi_{r})}
+\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\sigma_{2}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\sigma_{2}}^{\nu}\varphi_{r}
\Bigg .
\nonumber\\
\Bigg .
& \vdots
\Bigg .
\nonumber\\
\Bigg . \!
+ &
\left[
\phantom{~~~~~~~~~}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}}\varphi_{r})}
- \partial_{\alpha_{1}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}}\varphi_{r})}
+
\cdots
+ (-)^{n}
\partial_{\alpha_{1}\cdots\alpha_{n}}
\frac{\partial{\cal L}}{\partial(\partial_{\mu\,\sigma_{1}\cdots\sigma_{n}\,\alpha_{1}\cdots\alpha_{n}}\varphi_{r})}
\right]\partial_{\sigma_{1}\cdots\sigma_{n}}^{\nu}\varphi_{r}
\Bigg \}
\nonumber\\
- & g^{\mu\nu}{\cal L}
\,.
\nonumber
\end{align}
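As a further consistency check, for a Lagrangian with first-order derivatives
only all $\sigma$-blocks in Eq.~(\ref{tensorapp}) vanish and the expression
reduces to the canonical energy-momentum tensor,
\begin{align}
T^{\mu\nu} =
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\varphi_{r})}\,
\partial^{\nu}\varphi_{r}
- g^{\mu\nu}{\cal L}
\,.
\end{align}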
The above expression for the energy-momentum tensor can also be written in a more
compact form, resulting in Eq.~(\ref{tensor}).
\section{\label{app3}Preliminaries for the NLD formalism}
For the derivation of the Dirac equation, the current and the
energy-momentum tensor in the NLD model we need the
derivatives of the NLD Lagrangian with respect to the spinor fields
and their higher-order derivatives. We evaluate them here up to second order
and provide all terms of the infinite series in the final expressions.
Since we are interested in the derivatives with respect to the spinor
fields only, we start with the NLD Lagrangian without
the meson-field contributions. That is
\begin{align}
{\cal L} = & \frac{1}{2}
\left[
\overline{\Psi}i\gamma_{\mu}\partial^{\mu}\Psi
-
(i\partial^{\mu}\overline{\Psi}) \gamma_{\mu}\Psi
\right]
- \overline{\Psi}\Psi m
+
\frac{g_{\sigma}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}}
\Psi\sigma
+\sigma\overline{\Psi}
\, \overrightarrow{{\cal D}}
\Psi
\right]
- \frac{g_{\omega}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}}
\gamma^{\mu}\Psi\omega_{\mu}
+\omega_{\mu}\overline{\Psi}\gamma^{\mu}
\, \overrightarrow{{\cal D}}
\Psi
\right]
\nonumber\\
- & \frac{g_{\rho}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}}
\gamma^{\mu}\vec{\tau}\Psi\vec{\rho}_{\mu}
+\vec{\rho}_{\mu}\overline{\Psi}\vec{\tau}\gamma^{\mu}
\, \overrightarrow{{\cal D}}
\Psi
\right]
+ \frac{g_{\delta}}{2}
\left[
\overline{\Psi}
\, \overleftarrow{{\cal D}}
\vec{\tau}\Psi\vec{\delta}
+\vec{\delta}\,\overline{\Psi}\vec{\tau}
\, \overrightarrow{{\cal D}}
\Psi
\right]
\label{app4-1}
\,.
\end{align}
The application of the various higher-order partial derivatives with respect to
the spinor fields $\overline{\Psi}$ and $\Psi$ to the Lagrangian density
in Eq.~(\ref{app4-1}) proceeds with the help of the multiple Taylor expansions, see
Eqs.~(\ref{ope}). It is convenient to rearrange these series in
ascending order with respect to the partial derivatives.
With $\overrightarrow{\xi}_{j} = -\zeta^{\alpha}_{j}\, i\overrightarrow{\partial}_{\alpha}$ and
$\overleftarrow{\xi}_{j} = i\overleftarrow{\partial}_{\alpha}\,\zeta^{\alpha}_{j}~(j=1,2,3,4)$
where $\zeta^{\mu}_{j}=v^{\mu}_{j}/\Lambda$ one obtains for the expansion up to order $n$
\begin{align}
\overrightarrow{{\cal D}} = &
d^{(0)} -
\frac{1}{1!}\, d^{(1)}_{i_{1}} \, \zeta^{\alpha_{1}}_{i_{1}}\, i\overrightarrow{\partial}_{\alpha_{1}} +
\frac{1}{2!}\, d^{(2)}_{i_{1}i_{2}} \,
\zeta^{\alpha_{1}}_{i_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \,
\zeta^{\alpha_{2}}_{i_{2}}i\overrightarrow{\partial}_{\alpha_{2}} - \cdots +
(-)^{n}\frac{1}{n!}\, d^{(n)}_{i_{1}\cdots i_{4}} \,
\left( \zeta^{\alpha_{1}}_{i_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \right)^{n_{1}} \,
\cdots
\left( \zeta^{\alpha_{4}}_{i_{4}}i\overrightarrow{\partial}_{\alpha_{4}} \right)^{n_{4}} \,,
\label{expr}\\
\overleftarrow{{\cal D}} = &
d^{(0)} +
\frac{1}{1!}\, i\overleftarrow{\partial}_{\alpha_{1}}\zeta^{\alpha_{1}}_{i_{1}} \, d^{(1)}_{i_{1}} +
\frac{1}{2!} \,
i\overleftarrow{\partial}_{\alpha_{1}}\zeta^{\alpha_{1}}_{i_{1}} \,
i\overleftarrow{\partial}_{\alpha_{2}}\zeta^{\alpha_{2}}_{i_{2}} \,
d^{(2)}_{i_{1}i_{2}}
+ \cdots + \frac{1}{n!}\,
\left( i\overleftarrow{\partial}_{\alpha_{1}}\zeta^{\alpha_{1}}_{i_{1}} \right)^{n_{1}} \,
\cdots
\left( i\overleftarrow{\partial}_{\alpha_{4}}\zeta^{\alpha_{4}}_{i_{4}} \right)^{n_{4}} \,
d^{(n)}_{i_{1}\cdots i_{4}}
\label{expl}
\,,
\end{align}
with the condition $n_{1}+\cdots +n_{4}=n$ and
\begin{align}
d^{(0)} := & {\cal D}\vert_{\{\xi_{i_{1}},\xi_{i_{2}},\cdots, \xi_{i_{4}}\}\to 0}
\,,\\
d^{(1)}_{i_{1}} := & \frac{\partial}{\partial\xi_{i_{1}}}
{\cal D}\vert_{\{\xi_{i_{1}},\xi_{i_{2}},\cdots, \xi_{i_{4}}\}\to 0}
\,,\\
\cdots
\,, \nonumber \\
d^{(n)}_{i_{1}i_{2} i_{3} i_{4}} := &
\frac{\partial^{n}}{\partial\xi_{i_{1}}^{n_{1}}\partial\xi_{i_{2}}^{n_{2}}
\cdots \partial\xi_{i_{4}}^{n_{4}}}
{\cal D}\vert_{\{\xi_{i_{1}},\xi_{i_{2}},\cdots, \xi_{i_{4}}\}\to 0}
\,.
\end{align}
The pairs of Latin and of Greek indices in the above equations denote
the summation over the
multiple variables $\xi_{i}~(i=1,2,3,4)$ and over the $4$-coordinates,
respectively. In order to simplify the derivations in the following appendices,
we suppress the summation over the multiple variables.
For the partial derivative of the Lagrangian density with
respect to the spinor field $\overline{\Psi}$ only the zero-order terms in
Eqs.~(\ref{expr},\ref{expl}) contribute. Therefore we obtain in detail
\begin{align}
\frac{\partial {\cal L}}{\partial\overline{\Psi}} =
\frac{1}{2}\gamma_{\mu}\,i\partial^{\mu}\Psi - m\Psi
+ & \frac{1}{2}g_{\sigma} \sigma
\left[
d^{(0)}\Psi+\overrightarrow{{\cal D}}\Psi
\right]
- \frac{1}{2}g_{\omega} \omega^{\mu}
\left[
\gamma_{\mu}d^{(0)}\Psi+\gamma_{\mu}\overrightarrow{{\cal D}}\Psi
\right]
- \frac{1}{2}g_{\rho} \vec{\rho}\,^{\mu}
\left[
\gamma_{\mu}\vec{\tau}\,d^{(0)}\Psi+
\gamma_{\mu}\vec{\tau}\,\overrightarrow{{\cal D}}\Psi
\right]
\nonumber\\
+ & \frac{1}{2}g_{\delta} \vec{\delta}\,
\left[
\vec{\tau}\,\, d^{(0)} \Psi+\overrightarrow{{\cal D}}\vec{\tau}\,\Psi
\right]
\label{deriv0bar}
\,,
\end{align}
and similarly for the first-order derivative with respect to the
spinor field $\Psi$
\begin{align}
\frac{\partial {\cal L}}{\partial\Psi} =
-\frac{1}{2}\gamma_{\mu}\,i\partial^{\mu}\overline{\Psi} - m\overline{\Psi}
+ & \frac{1}{2}g_{\sigma}
\left[
\overline{\Psi}\overleftarrow{{\cal D}}\sigma+\overline{\Psi} d^{(0)} \sigma
\right]
- \frac{1}{2}g_{\omega} \omega^{\mu}
\left[
\overline{\Psi}\overleftarrow{{\cal D}}\gamma_{\mu}+
\overline{\Psi}\gamma_{\mu} d^{(0)}
\right]
- \frac{1}{2}g_{\rho} \vec{\rho}\,^{\mu}
\left[
\overline{\Psi}\overleftarrow{{\cal D}}\gamma_{\mu}\vec{\tau}+
\overline{\Psi}\gamma_{\mu}\vec{\tau}\,d^{(0)}
\right]
\nonumber\\
+ &
\frac{1}{2}g_{\delta} \vec{\delta}\,
\left[
\vec{\tau}\,\overline{\Psi}\overleftarrow{{\cal D}}+\vec{\tau}\,\overline{\Psi} d^{(0)}
\right]
\label{deriv0}
\,.
\end{align}
Concerning the partial derivatives with respect to the first-order derivatives
of the spinor fields, $\partial_{\alpha}\overline{\Psi}$ and $\partial_{\alpha}\Psi$, only the
first-order terms in Eqs.~(\ref{expr},\ref{expl}) contribute,
and we get
\begin{align}
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}}\overline{\Psi})} = &
-\frac{1}{2}\gamma^{\alpha_{1}}\,i\Psi
+
\frac{1}{2}g_{\sigma}
d^{(1)} \, i\zeta^{\alpha_{1}}\, \Psi\sigma
- \frac{1}{2}g_{\omega}
d^{(1)} \, i\zeta^{\alpha_{1}} \, \gamma^{\mu}\Psi\omega_{\mu}
- \frac{1}{2}g_{\rho}
d^{(1)} \, i\zeta^{\alpha_{1}} \, \gamma^{\mu}\vec{\tau}\,\Psi\vec{\rho}\,_{\mu}
+
\frac{1}{2}g_{\delta}
d^{(1)} \, i\zeta^{\alpha_{1}}\, \vec{\tau}\,\Psi\vec{\delta}\, \,,
\label{deriv1bar}\\
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_1}\Psi)} = &
\frac{1}{2}i\overline{\Psi}\gamma^{\alpha_{1}}
-
\frac{1}{2}g_{\sigma}
\sigma\overline{\Psi}\, i\zeta^{\alpha_{1}} \, d^{(1)}
+ \frac{1}{2}g_{\omega}
\omega_{\mu}\overline{\Psi}\gamma^{\mu} \, i\zeta^{\alpha_{1}} \, d^{(1)}
+ \frac{1}{2}g_{\rho}
\vec{\rho}\,_{\mu}\overline{\Psi}\vec{\tau}\,\gamma^{\mu} \, i\zeta^{\alpha_{1}} \, d^{(1)}
-
\frac{1}{2}g_{\delta}
\vec{\delta}\,\vec{\tau}\,\overline{\Psi} \, i\zeta^{\alpha_{1}} \, d^{(1)}
\label{deriv1}
\,.
\end{align}
In a similar way as above one evaluates the derivatives of the Lagrangian density
with respect to the second-order partial derivatives of the Dirac spinors. In
this case only the second-order terms in
Eqs.~(\ref{expr},\ref{expl}) are of relevance, and the result reads
\begin{align}
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\overline{\Psi})} = &
\frac{1}{2}g_{\sigma} \,
d^{(2)}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \,i\zeta^{\alpha_{2}} \, \Psi\sigma
- \frac{1}{2}g_{\omega} \,
d^{(2)}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \, i\zeta^{\alpha_{2}} \,
\gamma^{\mu}\Psi\omega_{\mu}
- \frac{1}{2}g_{\rho} \,
d^{(2)}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \, i\zeta^{\alpha_{2}} \,
\gamma^{\mu}\vec{\tau}\,\Psi\vec{\rho}\,_{\mu}
\nonumber\\
+ & \frac{1}{2}g_{\delta} \,
d^{(2)}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \,i\zeta^{\alpha_{2}} \, \vec{\tau}\,\Psi\vec{\delta}\, \,,
\label{deriv2bar}\\
\frac{\partial {\cal L}}{\partial(\partial_{\alpha_{1}\alpha_{2}}\Psi)} = &
\frac{1}{2}g_{\sigma} \,
\sigma\overline{\Psi}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \, i\zeta^{\alpha_{2}} \, d^{(2)}
- \frac{1}{2}g_{\omega} \,
\omega_{\mu}\overline{\Psi}\gamma^{\mu}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \, i\zeta^{\alpha_{2}} \, d^{(2)}
- \frac{1}{2}g_{\rho} \,
\vec{\rho}\,_{\mu}\overline{\Psi}\vec{\tau}\,\gamma^{\mu}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \, i\zeta^{\alpha_{2}} \, d^{(2)}
\nonumber\\
+ & \frac{1}{2}g_{\delta} \,
\vec{\delta}\,\vec{\tau}\,\overline{\Psi}
\frac{1}{2!} \,
i\zeta^{\alpha_{1}} \, i\zeta^{\alpha_{2}} \, d^{(2)}
\label{deriv2}
\,.
\end{align}
With the intermediate results of this appendix we can now derive the relevant
equations of the NLD model, {\it i.e.}, the Dirac equation for the spinor field
$\Psi$ in Appendix~\ref{app4} as well as the conserved Noether current
and the energy-momentum tensor in Appendix~\ref{app5}.
Furthermore, we will perform these derivations
up to second order in the higher-order derivatives and for the isoscalar meson-nucleon
interaction Lagrangians only, since the evaluation of the higher-order terms and
of the other meson-nucleon vertices proceeds in a similar way. We will then insert
the remaining terms, \textit{i.e.}, the higher-order derivatives as well as all
meson-nucleon contributions, in the final expressions.
In the following the
terms containing the derivatives of the meson fields are not shown, since
they do not contribute at the mean-field level.
\section{\label{app4}Derivation of the Dirac equation in the NLD model}
For the derivation of the Dirac equation we start with the Euler-Lagrange
equations of motion, Eq.~(\ref{Euler0}), which read
\begin{align}
\frac{\partial{\cal L}}{\partial\varphi_{r}}
+
\sum_{i=1}^{n}
(-)^{i}
\partial_{\alpha_{1}\cdots\alpha_{i}}
\frac{\partial{\cal L}}
{\partial(\partial_{\alpha_{1}\cdots\alpha_{i}}\varphi_{r})}
= 0
\;.
\end{align}
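Truncating the sum at $n=1$ recovers the familiar first-order Euler-Lagrange
equation,
$\partial{\cal L}/\partial\varphi_{r}
-\partial_{\alpha_{1}}\,
\partial{\cal L}/\partial(\partial_{\alpha_{1}}\varphi_{r})=0$.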
Up to second order in the partial derivatives of the spinor field $\overline{\Psi}$
they reduce to
\begin{align}
\frac{\partial{\cal L}}{\partial\overline{\Psi}}
-
\partial_{\alpha_{1}}\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}}\overline{\Psi})}
+
\partial_{\alpha_{1}}\partial_{\alpha_{2}}
\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}}\partial_{\alpha_{2}}\overline{\Psi})} = 0
\label{Euler2}
\,.
\end{align}
We insert the various partial field derivatives,
Eqs.~(\ref{deriv0bar}),~(\ref{deriv1bar}) and~(\ref{deriv2bar}), into the second-order
Euler-Lagrange equations, Eq.~(\ref{Euler2}), and obtain
\begin{align}
\gamma_{\mu}\,i\overrightarrow{\partial}^{\mu}\Psi - m\Psi
+ & \frac{1}{2}g_{\sigma} \sigma
\left[
d^{(0)}\Psi+\overrightarrow{{\cal D}}\Psi
\right]
- \frac{1}{2}g_{\omega} \omega^{\mu}
\left[
d^{(0)}\gamma_{\mu}\Psi+\gamma_{\mu}\overrightarrow{{\cal D}}\Psi
\right]
\nonumber\\
- &
\frac{1}{2}g_{\sigma} \,
\sigma \, d^{(1)} \, \zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \, \Psi
+ \frac{1}{2}g_{\omega} \,
\omega_{\mu} \, d^{(1)} \, \zeta^{\alpha_{1}} \, \gamma^{\mu}i\overrightarrow{\partial}_{\alpha_{1}}\Psi
\nonumber\\
+ &
\frac{1}{2}g_{\sigma} \, \sigma \,
d^{(2)}\,
\frac{1}{2!}\,
\zeta^{\alpha_{1}} \, \zeta^{\alpha_{2}} \,
i\overrightarrow{\partial}_{\alpha_{1}}\, i\overrightarrow{\partial}_{\alpha_{2}}
\Psi
- \frac{1}{2}g_{\omega} \,
d^{(2)} \, \omega_{\mu} \,
\frac{1}{2!} \,
\zeta^{\alpha_{1}} \, \zeta^{\alpha_{2}} \,
\gamma^{\mu}
i\overrightarrow{\partial}_{\alpha_{1}}\, i\overrightarrow{\partial}_{\alpha_{2}}
\Psi
= 0
\label{dirac1_app}
\,.
\end{align}
We rewrite Eq.~(\ref{dirac1_app}) so as to separate the series contributions
from the standard terms and obtain the following expression
\begin{align}
\left\lbrace
\gamma_{\mu}\,i\overrightarrow{\partial}^{\mu} - m
+
\right. & \left.
\frac{1}{2}g_{\sigma}\, \sigma\, \overrightarrow{{\cal D}}
- \frac{1}{2}g_{\omega}\, \gamma^{\mu}\omega_{\mu }\overrightarrow{{\cal D}}
\right.
\label{dirac2_app}\\
+ & \left.
\frac{1}{2}g_{\sigma} \, \sigma
\left[
d^{(0)} - d^{(1)}\, \zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} +
\frac{1}{2!}\, d^{(2)}\,
\zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \,
\zeta^{\alpha_{2}}i\overrightarrow{\partial}_{\alpha_{2}}
\right]
\right.
\nonumber\\
- & \left.
\frac{1}{2}g_{\omega} \, \omega_{\mu}
\left[
d^{(0)} - d^{(1)}\, \zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} +
\frac{1}{2!}\, d^{(2)}\,
\zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \,
\zeta^{\alpha_{2}}i\overrightarrow{\partial}_{\alpha_{2}}
\right]\gamma^{\mu}
\right\rbrace \Psi = 0
\nonumber
\,.
\end{align}
In fact, performing the above procedure for all higher-order terms,
one would obtain
\begin{align}
&
\left\lbrace
\gamma_{\mu}\,i\overrightarrow{\partial}^{\mu} - m
+
\frac{1}{2}g_{\sigma}\, \sigma\, \overrightarrow{{\cal D}}
- \frac{1}{2}g_{\omega}\, \gamma^{\mu}\omega_{\mu }\overrightarrow{{\cal D}}
\right.
\label{resu1}\\
+ & \left.
\frac{1}{2}g_{\sigma} \, \sigma
\left[
d^{(0)} - d^{(1)}\, \zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} +
\frac{1}{2!}\, d^{(2)}\,
\zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \,
\zeta^{\alpha_{2}}i\overrightarrow{\partial}_{\alpha_{2}}
+ \cdots +
(-)^{n} \, \frac{1}{n!} \, d^{(n)}\,
\zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \,
\cdots \,
\zeta^{\alpha_{n}}i\overrightarrow{\partial}_{\alpha_{n}}
\right]
\right.
\nonumber\\
- & \left.
\frac{1}{2}g_{\omega} \, \omega_{\mu}
\left[
d^{(0)} - d^{(1)}\, \zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} +
\frac{1}{2!}\, d^{(2)}\,
\zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \,
\zeta^{\alpha_{2}}i\overrightarrow{\partial}_{\alpha_{2}}
+ \cdots +
(-)^{n} \, \frac{1}{n!} \, d^{(n)}\,
\zeta^{\alpha_{1}}i\overrightarrow{\partial}_{\alpha_{1}} \,
\cdots \,
\zeta^{\alpha_{n}}i\overrightarrow{\partial}_{\alpha_{n}}
\right]\gamma^{\mu}
\right\rbrace \Psi = 0 \,.
\nonumber
\end{align}
The infinite series inside the brackets in Eq.~(\ref{resu1})
add up with the non-linear terms
in the first line of Eq.~(\ref{resu1}) for each meson-nucleon vertex. One arrives
at the following Dirac equation for the spinor field $\Psi$ in the NLD model
\begin{align}
\left[
\gamma_{\mu}\,i\partial^{\mu}
- g_{\omega}\, \gamma^{\mu}\omega_{\mu }\overrightarrow{{\cal D}}
- g_{\rho}\, \gamma^{\mu}\vec{\tau}\,\vec{\rho}\,_{\mu }\overrightarrow{{\cal D}}
- m + g_{\sigma}\, \sigma\, \overrightarrow{{\cal D}} + g_{\delta}\, \vec{\tau}\,\vec{\delta}\,\, \overrightarrow{{\cal D}}
\right]\Psi = 0
\,.
\end{align}
This is the desired result, Eq.~(\ref{Dirac_nld}). Again, the terms containing
the derivatives of the meson fields are not shown in the above equation, since
they will not contribute to the final RMF expressions.
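For later reference we note that, without any approximation, this equation has
the structure of a Dirac equation with scalar and vector self-energies; in an
obvious (operator-valued) notation,
\begin{align}
\left[
\gamma_{\mu}\left(i\partial^{\mu}-\Sigma^{\mu}\right)
-\left(m-\Sigma_{s}\right)
\right]\Psi = 0
\,,\quad
\Sigma^{\mu} = g_{\omega}\,\omega^{\mu}\,\overrightarrow{{\cal D}}
+ g_{\rho}\,\vec{\tau}\,\vec{\rho}\,^{\mu}\,\overrightarrow{{\cal D}}
\,,\quad
\Sigma_{s} = g_{\sigma}\,\sigma\,\overrightarrow{{\cal D}}
+ g_{\delta}\,\vec{\tau}\,\vec{\delta}\,\,\overrightarrow{{\cal D}}
\,.
\end{align}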
\section{\label{app5}Derivation of the Noether current in the NLD model}
As in the derivation of the Dirac equation, we consider in the following only
terms up to second order in the field derivatives, and
use Eqs.~(\ref{deriv1bar}),~(\ref{deriv1}),~(\ref{deriv2bar}) and~(\ref{deriv2}).
We start from the general expression in Eq.~(\ref{current}) for the nucleonic
degrees of freedom which up to second order takes the following form
\begin{align}
J^{\mu} = -i \left\lbrace
\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\Psi)}
-
\partial_{\beta}
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\partial_{\beta}\Psi)}
\right]\Psi
- \overline{\Psi}\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\overline{\Psi})}
-
\partial_{\beta}
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\partial_{\beta}\overline{\Psi})}
\right]
+ \left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\partial_{\beta}\Psi)}
\right]\partial_{\beta}\Psi
- \partial_{\beta}\overline{\Psi}\left[
\frac{\partial{\cal L}}{\partial(\partial_{\mu}\partial_{\beta}\overline{\Psi})}
\right] \right\rbrace
\,.
\label{strom-1}
\end{align}
We now rewrite Eq.~(\ref{strom-1}) by separating the contributions of the different orders
in the partial derivatives (the order is labeled by a subscript)
\begin{align}
J_{\mu} = {\cal O}^{(1)}_{\mu} + {\cal O}^{(2)}_{\mu}
\label{sepapp}
\,.
\end{align}
The first-order contribution to Eq.~(\ref{sepapp}) reads
\begin{eqnarray}
{\cal O}^{(1)}_{\mu} = -i
\left(
\frac{\partial{\cal L}}{\partial(\partial^{\mu}\Psi)}\Psi
-
\bar{\Psi}\frac{\partial{\cal L}}{\partial(\partial^{\mu}\bar{\Psi})}
\right)
\;, \label{1-order}
\end{eqnarray}
while the second-order contribution to Eq.~(\ref{sepapp}) takes the
following form
\begin{align}
{\cal O}^{(2)}_{\mu} = -i\left[ -
\left(
\partial_{\beta}
\frac{\partial{\cal L}}{\partial(\partial^{\mu}\partial_{\beta}\Psi)}
\Psi
-
\bar{\Psi}
\partial_{\beta}
\frac{\partial{\cal L}}{\partial(\partial^{\mu}\partial_{\beta}\bar{\Psi})}
\right)
+
\left(
\frac{\partial{\cal L}}{\partial(\partial^{\mu}\partial_{\beta}\Psi)}
\partial_{\beta}\Psi
-
\partial_{\beta}\bar{\Psi}
\frac{\partial{\cal L}}{\partial(\partial^{\mu}\partial_{\beta}\bar{\Psi})}
\right) \right]
\;. \label{2-order}
\end{align}
For the evaluation of the first-order contribution ${\cal O}^{(1)}_{\mu}$,
Eq.~(\ref{1-order}), we insert Eq.~(\ref{deriv1}) and its adjoint
form, Eq.~(\ref{deriv1bar}), into Eq.~(\ref{1-order}), and obtain
\begin{align}
{\cal O}^{(1)}_{\mu} = \overline{\Psi}\gamma_{\mu}\Psi
-
\frac{1}{2} \, g_{\sigma}
\left[
\sigma\overline{\Psi} \, \zeta_{\mu} \, d^{(1)} \, \Psi
+
\overline{\Psi} \, d^{(1)} \, \zeta_{\mu} \, \Psi \, \sigma
\right]
+
\frac{1}{2} g_{\omega}
\left[
\omega_{\alpha}\overline{\Psi}\gamma^{\alpha}\zeta_{\mu} \, d^{(1)} \, \Psi
+
\overline{\Psi} \, d^{(1)}\,\zeta_{\mu} \, \gamma^{\alpha}\Psi\omega_{\alpha}
\right]
\label{O-Eins}
\,.
\end{align}
The derivation of the second-order contribution ${\cal O}^{(2)}_{\mu}$,
Eq.~(\ref{2-order}), proceeds in a similar way. We get
\begin{align}
{\cal O}^{(2)}_{\mu} = &
- \frac{1}{2}\,g_{\sigma}\,
\sigma\,\overline{\Psi}\,
i\!\stackrel{\leftarrow}{\partial}_{\beta}\! \frac{1}{2!} \, \zeta_{\mu}\,\zeta^{\beta}\,
d^{(2)}\,
\Psi
+ \frac{1}{2}\,g_{\sigma}\,
\overline{\Psi}\,d^{(2)}
\! \frac{1}{2!} \, \zeta_{\mu}\,\zeta^{\beta}\,
i\!\stackrel{\rightarrow}{\partial}_{\beta}\,
\Psi\,\sigma
\nonumber\\
& +
\frac{1}{2}\,g_{\sigma}\,
\sigma \,\overline{\Psi}
\,\frac{1}{2!} \, \zeta_{\mu}\,\zeta^{\beta}\,
d^{(2)}\, i\!\stackrel{\rightarrow}{\partial}_{\beta}\,
\Psi
- \frac{1}{2}\,g_{\sigma}\,
\overline{\Psi}\,d^{(2)}\,
i\!\stackrel{\leftarrow}{\partial}_{\beta}\! \frac{1}{2!} \, \zeta_{\mu}\,\zeta^{\beta}\,
\Psi\,\sigma
\nonumber\\
& + \frac{1}{2}\,g_{\omega}\,
\omega_{\delta}\,\overline{\Psi}\,
i\!\stackrel{\leftarrow}{\partial}_{\beta}\! \frac{1}{2!} \, \zeta_{\mu}\,\zeta^{\beta}\,
\gamma^{\delta} \, d^{(2)}\,
\Psi
- \frac{1}{2}\,g_{\omega}\,
\overline{\Psi}\, d^{(2)}
\! \frac{1}{2!} \, \zeta_{\mu} \,\zeta^{\beta} \,
\gamma^{\delta}\,
i\!\stackrel{\rightarrow}{\partial}_{\beta}\,
\Psi\,\omega_{\delta}
\nonumber\\
& -
\frac{1}{2}\,g_{\omega}\,
\omega_{\delta} \,\overline{\Psi}
\, \gamma^{\delta} \,
\frac{1}{2!} \, \zeta_{\mu} \, \zeta^{\beta} \,
d^{(2)}\,i\!\stackrel{\rightarrow}{\partial}_{\beta}\,
\Psi
+ \frac{1}{2}\,g_{\omega}\,
\overline{\Psi}\, d^{(2)}\,
i\!\stackrel{\leftarrow}{\partial}_{\beta}\! \frac{1}{2!} \, \zeta_{\mu} \, \zeta^{\beta} \,
\gamma^{\delta}\,\Psi\,\omega_{\delta}
\,.
\label{O-Zwei-step1}
\end{align}
For each isoscalar meson-nucleon interaction we now obtain four terms, which differ
from each other in the direction in which the partial derivative operators act;
the two terms in which the derivative acts to the left combine, as do the two in
which it acts to the right, giving the factors $2/2!$ below. We arrive at the
following expression
\begin{align}
{\cal O}^{(2)}_{\mu} = &
+ \frac{1}{2}\,g_{\sigma}\,
\sigma \,\overline{\Psi}
\,\frac{2}{2!} \, \zeta_{\mu} \, \zeta^{\beta} \,
d^{(2)}\, i\!\stackrel{\rightarrow}{\partial}_{\beta}\,
\Psi
- \frac{1}{2}\,g_{\sigma}\,
\overline{\Psi}\, d^{(2)}\,
i\!\stackrel{\leftarrow}{\partial}_{\beta}\! \frac{2}{2!} \, \zeta_{\mu} \, \zeta^{\beta} \,
\Psi\,\sigma
\nonumber\\
& - \frac{1}{2}\,g_{\omega}\,
\omega_{\delta} \,\overline{\Psi}
\, \gamma^{\delta} \,
\frac{2}{2!} \, \zeta_{\mu} \, \zeta^{\beta} \,
d^{(2)}\,i\!\stackrel{\rightarrow}{\partial}_{\beta}\,
\Psi
+ \frac{1}{2}\,g_{\omega}\,
\overline{\Psi}\, d^{(2)}\,
i\!\stackrel{\leftarrow}{\partial}_{\beta}\! \frac{2}{2!} \, \zeta_{\mu} \, \zeta^{\beta} \,
\gamma^{\delta}\,\Psi\,\omega_{\delta}
\label{O-Zwei-step2} \,.
\end{align}
The procedure is similar for the remaining higher-order derivative contributions.
The evaluation procedure according to
Eqs.~(\ref{O-Zwei-step1}) and~(\ref{O-Zwei-step2}) for the third-order
contribution, ${\cal O}^{(3)}_{\mu}$, would result in three terms for each vertex,
for the fourth-order term, ${\cal O}^{(4)}_{\mu}$, in four terms for
each vertex, and so forth.
Therefore, the resummation of all higher-order terms according to
\begin{align}
J_{\mu} = {\cal O}^{(1)}_{\mu} + {\cal O}^{(2)}_{\mu} + {\cal O}^{(3)}_{\mu}
+ \cdots + {\cal O}^{(n)}_{\mu}
\label{sepinf}
\end{align}
leads to infinite series for each meson-nucleon interaction. For instance, for
the scalar-isoscalar meson-nucleon interaction we get
\begin{align}
-& \frac{1}{2}\, g_{\sigma}\,
\overline{\Psi}\left[
\frac{1}{1!}\, d^{(1)}\, \zeta^{\mu} +
\frac{2}{2!}\, d^{(2)}\,
i\overleftarrow{\partial}_{\alpha_{1}} \, \zeta^{\alpha_{1}}\, \zeta^{\mu}
+ \cdots +
\frac{n}{n!}\, d^{(n)}\,
i\overleftarrow{\partial}_{\alpha_{1}}\, \zeta^{\alpha_{1}}
i\overleftarrow{\partial}_{\alpha_{2}}\, \zeta^{\alpha_{2}} \cdots
i\overleftarrow{\partial}_{\alpha_{n-1}}\, \zeta^{\alpha_{n-1}}
\, \zeta^{\mu}
\right]\Psi
\nonumber\\
+& \frac{1}{2}\, g_{\sigma}\,
\overline{\Psi}\left[
-\frac{1}{1!}\, \zeta^{\mu} \, d^{(1)} +
\frac{2}{2!}\, \zeta^{\mu}
\, d^{(2)}\, \zeta^{\alpha_{1}} \, i\overrightarrow{\partial}_{\alpha_{1}}
+ \cdots +
(-)^{n}\, \frac{n}{n!}\, \zeta^{\mu} \,
d^{(n)}\,
\zeta^{\alpha_{1}} \, i\overrightarrow{\partial}_{\alpha_{1}}\,
\zeta^{\alpha_{2}} \, i\overrightarrow{\partial}_{\alpha_{2}}\, \cdots
\zeta^{\alpha_{n-1}} \, i\overrightarrow{\partial}_{\alpha_{n-1}}
\right]\Psi
\label{exa1}
\,.
\end{align}
Note that the $n$-th term, ${\cal O}^{(n)}_{\mu}$, contains $(n-1)$ partial
derivatives and that it appears $n$ times. These terms can also be
resummed into infinite series. Indeed, by considering, for instance,
the Taylor expansion of the operator $\overrightarrow{{\cal D}}$
\begin{align}
\overrightarrow{{\cal D}} = 1 - d^{(1)} \,
\zeta_{\alpha}\, i\overrightarrow{\partial}^{\alpha} +
\frac{1}{2!} \,
d^{(2)}
\, \zeta_{\alpha}\, i\overrightarrow{\partial}^{\alpha} \,
\zeta_{\beta}\, i\overrightarrow{\partial}^{\beta}
+ \cdots
\label{taylor}
\,,
\end{align}
we obtain for the derivative of $\overrightarrow{{\cal D}}$ with respect to the operator-like
argument $i\overrightarrow{\partial}^{\mu}$
\begin{align}
\overrightarrow{\varOmega}^{\mu} :=
\frac{\partial\overrightarrow{{\cal D}}}{\partial(i\overrightarrow{\partial}_{\mu})} =
-d^{(1)} \, \zeta^{\mu} + \frac{2}{2!}\, d^{(2)}\, \zeta^{\mu} \,
\zeta_{\beta}\, i\overrightarrow{\partial}^{\beta}
+ \cdots
\label{dtaylor}
\,.
\end{align}
In a similar way we obtain $\overleftarrow{\varOmega}^{\mu}=(\overrightarrow{\varOmega}^{\mu})^{\dagger}$. As in the case
of the scalar operators $\overrightarrow{{\cal D}}$ and $\overleftarrow{{\cal D}}$, the new vector operators
$\overrightarrow{\varOmega}^{\mu}$ and $\overleftarrow{\varOmega}^{\mu}$ act to the right on the spinor
field $\Psi$ and to the left
on $\overline{\Psi}$, respectively. Below we will see that in the RMF
approximation to nuclear matter both vector operators reduce to the derivative of
the scalar operator ${\cal D}$ with respect to the single-particle $4$-momentum $p^{\mu}$.
Collecting now all the contributions from Eqs.~(\ref{O-Eins})
and~(\ref{O-Zwei-step2}), and taking Eq.~(\ref{dtaylor}) into account, we arrive at
compact forms; \textit{e.g.}, the scalar-isoscalar meson-nucleon vertex from
Eq.~(\ref{exa1}) can be resummed as follows
\begin{align}
-& \frac{1}{2}\, g_{\sigma}\,
\overline{\Psi}\left[
\frac{1}{1!}\, d^{(1)}\, \zeta^{\mu} +
\frac{2}{2!}\, d^{(2)}\,
i\overleftarrow{\partial}_{\alpha_{1}} \, \zeta^{\alpha_{1}}\, \zeta^{\mu}
+ \cdots +
\frac{n}{n!}\, d^{(n)}\,
i\overleftarrow{\partial}_{\alpha_{1}}\, \zeta^{\alpha_{1}}
i\overleftarrow{\partial}_{\alpha_{2}}\, \zeta^{\alpha_{2}} \cdots
i\overleftarrow{\partial}_{\alpha_{n-1}}\, \zeta^{\alpha_{n-1}}
\, \zeta^{\mu}
\right]\Psi
\nonumber\\
+& \frac{1}{2}\, g_{\sigma}\,
\overline{\Psi}\left[
-\frac{1}{1!}\, \zeta^{\mu} \, d^{(1)} +
\frac{2}{2!}\, \zeta^{\mu}
\, d^{(2)}\, \zeta^{\alpha_{1}} \, i\overrightarrow{\partial}_{\alpha_{1}}
+ \cdots +
(-)^{n}\, \frac{n}{n!}\, \zeta^{\mu} \,
d^{(n)}\,
\zeta^{\alpha_{1}} \, i\overrightarrow{\partial}_{\alpha_{1}}\,
\zeta^{\alpha_{2}} \, i\overrightarrow{\partial}_{\alpha_{2}}\, \cdots
\zeta^{\alpha_{n-1}} \, i\overrightarrow{\partial}_{\alpha_{n-1}}
\right]\Psi
\nonumber\\
= &
\frac{1}{2}\, g_{\sigma}\, \sigma\,
\overline{\Psi}\, \overrightarrow{\varOmega}^{\mu}\, \Psi
-\frac{1}{2}\, g_{\sigma}\,
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \, \Psi\sigma
\label{exa}
\,.
\end{align}
The evaluation method for the isovector channels of the NLD interaction Lagrangian proceeds
in the same way. In total, including all degrees of freedom, we obtain the following compact
expression for the conserved baryon current within the NLD formalism
\begin{align}
J^{\mu} = \overline{\Psi}\gamma^{\mu}\Psi
- & \frac{1}{2}\, g_{\sigma}\,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \Psi\sigma - \sigma\overline{\Psi}\, \overrightarrow{\varOmega}^{\mu}\Psi
\right]
+ \frac{1}{2}\, g_{\omega}\,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \gamma^{\alpha}\Psi\omega_{\alpha} -
\omega_{\alpha}\overline{\Psi}\gamma^{\alpha}\, \overrightarrow{\varOmega}^{\mu}\Psi
\right]
\nonumber\\
+ & \frac{1}{2}\, g_{\rho}\,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \gamma^{\alpha}\vec{\tau}\,\Psi\vec{\rho}\,_{\alpha} -
\vec{\rho}\,_{\alpha}
\overline{\Psi}\vec{\tau}\,\gamma^{\alpha}\, \overrightarrow{\varOmega}^{\mu}\Psi
\right]
- \frac{1}{2}\, g_{\delta}\,
\left[
\overline{\Psi}\, \overleftarrow{\varOmega}^{\mu} \vec{\tau}\, \Psi\vec{\delta}\,
- \vec{\delta}\,\,\overline{\Psi}\, \vec{\tau}\,\, \overrightarrow{\varOmega}^{\mu}\Psi
\right]
\label{strom}
\,,
\end{align}
which obeys the continuity equation $\partial_{\mu}\, J^{\mu}=0$.
Now we apply the RMF approximation to the general expression
for the Noether current, Eq.~(\ref{strom}). All bilinear operator-like terms
are replaced by their expectation values relative to the nuclear-matter ground state.
Furthermore, we use for the spinor field the plane-wave \textit{ansatz},
Eq.~(\ref{plane_wave}), in order to evaluate the operators $\overrightarrow{{\cal D}},~\overleftarrow{{\cal D}}$,
$\overrightarrow{\varOmega}^{\mu}$ and $\overleftarrow{\varOmega}^{\mu}$. Taking also into account the relations $i\overrightarrow{\partial}^{\mu}\Psi=p^{\mu}\Psi$ and
$\overline{\Psi}i\overleftarrow{\partial}^{\mu}=-\overline{\Psi}p^{\mu}$, the
current $J^{\mu}$, Eq.~(\ref{strom}), takes the following form in the RMF approximation
\begin{align}
J^{\mu} = \langle \overline{\Psi}\gamma^{\mu}\Psi \rangle
+ g_{\sigma}\,
\langle\overline{\Psi}\, \big( \partial_{p}^{\mu}{\cal D}\big) \Psi\rangle\sigma
- g_{\omega}\,
\langle
\overline{\Psi}\,
\big( \partial_{p}^{\mu}{\cal D}\big) \gamma^{\alpha}\Psi\rangle\omega_{\alpha}
- g_{\rho}\,
\langle\overline{\Psi}\,
\big( \partial_{p}^{\mu}{\cal D}\big) \gamma^{\alpha}\vec{\tau}\,\Psi\rangle
\vec{\rho}\,_{\alpha}
+ g_{\delta}\,
\langle\overline{\Psi}\, \big( \partial_{p}^{\mu}{\cal D}\big)\vec{\tau}\, \Psi\rangle\vec{\delta}\,
\label{strom_nm}
\,,
\end{align}
with $\partial_{p}^{\mu}=\frac{\partial}{\partial p_{\mu}}$. This is
the desired result, see Eq.~(\ref{current_NLD}).
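As a simple illustration of the structure of $\partial_{p}^{\mu}{\cal D}$,
suppose that in the RMF limit the regulator depends on the momentum only
through the invariant $p_{\nu}p^{\nu}$; this functional form is assumed here
purely for illustration and is not meant to single out a particular NLD
cut-off. Writing ${\cal D}=f(p^{2})$ one obtains
\begin{align}
\partial_{p}^{\mu}{\cal D} = 2p^{\mu}f^{\prime}(p^{2})
\,,
\end{align}
so that, {\it e.g.}, a hypothetical monopole form
$f(p^{2})=\Lambda^{2}/(\Lambda^{2}+p^{2})$ would give
$\partial_{p}^{\mu}{\cal D}=-2p^{\mu}\Lambda^{2}/(\Lambda^{2}+p^{2})^{2}$.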
The derivation of the energy-momentum tensor proceeds in the same way as for
the current; we therefore skip further derivations.
\end{widetext}
\end{appendix}
\section{Introduction}
\addtocounter{footnote}{1}
It has been established for more than a decade that there is a well
defined B luminosity-metallicity (L-Z) relation for dwarf irregular galaxies,
in the sense that the higher the metallicity, the higher the luminosity
of the dwarf galaxy \citep[e.g. ][]{skh89,rm95}, although \citet{hgo98}
claimed that the relationship was much weaker than previously thought.
More recently the relationship was confirmed by \citet{lee03} with
a root mean square of $\sigma$ = 0.175 dex in $\log$(O/H) for the
same sample of dwarf irregular galaxies originally examined by \citet{rm95}, but
with updated distance determinations and metallicity measurements.
The L-Z relation is also valid for
giant galaxies, with less scatter and with a steeper slope than for
dwarf systems \citep{s05}.
In the studies of dwarf systems associated with strongly interacting
galaxies, this luminosity-metallicity relation has been used to select
possible candidate tidal dwarf galaxies (TDGs) -- newly born galaxies
formed out of recycled material expelled from the parent galaxies during
interaction. TDGs stand out from this relation, i.e., they, in general,
do not follow the L-Z relation of dwarf irregular galaxies, { but
instead have an almost constant metallicity between 1/4 and 1/3 of the
solar value.} Examples of B-band L-Z diagrams showing the location of
tidal dwarf galaxies in a few interacting systems can be found in Fig. 6
of \citet{dm98}, in Fig. 3 of \citet{wdf03} and Fig. 17 of \citet{d00}.
Nevertheless, in systems where star formation is widespread and the B
luminosities of the parent galaxies and of the TDGs may be altered, the
B-band L-Z relation (used in the studies above) may no longer be a useful
tool to select possible TDGs, { mainly because the B band is not a good
tracer of the stellar mass when it is highly affected by starbursts.}
This is the case for the Hickson compact group 31, a gas-rich group
with intense star-forming activity \citep[e.g. ][]{ls04}, { dominated
by a pair of interacting dwarf galaxies, A and C,} whose B
luminosities are clearly affected by the light and dust associated with
newly born stars \citep[in fact HCG 31C has Wolf-Rayet features in its
spectrum; ][]{c91}. In the course of studying the possible nature of
the various sub-components of this interesting group, we thus felt the
need to compile from the literature a K-band L-Z relation. Such a
relation is more useful than the B-band relation because { it is less
affected by starbursts}, it suffers significantly less from absorption
effects and it better characterizes the bulk of the stars (the old component)
in the group member galaxies.
This paper is divided as follows. In Section~\ref{near}, the K-band
L-Z diagram for ``normal'' dwarf irregular galaxies $-$20.5 $<$ M$_{K}$
$<$ $-$13.5 is determined from literature data. Section~\ref{obs}
describes our new Gemini data, g$^\prime$ and r$^\prime$
photometry and medium-resolution spectroscopy of HCG 31, which are
then used to determine the radial velocities, ages and metallicities of
the regions and to plot the K-band L-Z relation for the objects of this
interacting group { (the K photometry for the HCG 31 members
come mostly from \citet{ls04})}. Finally, in Section~\ref{Discussion}
we discuss the fate of the TDG candidates of HCG 31.
Distance-dependent measurements assume a distance to the group of 54.8 Mpc
(derived by \citet{ls04} from the Hubble law and
H$_0$=75 km~s$^{-1}$~Mpc$^{-1}$). We
use the identifications for the HCG 31
objects suggested by \citet{ls04} with
one exception: their region E corresponds to blob E2 in this paper.
We identify objects with single letters (e.g. objects E, F, R) and we
refer to blobs which form an object with letters followed by numbers
(e.g. blobs E1 and E2 compose object E).
\section{\label{near}The luminosity-metallicity relation}
\subsection{\label{previouswork}Previous works}
A few authors have questioned the existence of the luminosity-metallicity
relation \citep[e.g. ][]{ca93}, especially for gas-rich objects.
The $B$ magnitude, classically used to investigate the
L-Z relation, is known to be highly affected by
strong and young starbursts with ages below 10 Myr, as well as by dust.
Therefore, for the
study of objects like those in HCG 31, which are known to contain young
and strong starbursts, the use of near infrared (NIR) magnitudes should be much more
robust and less affected by the onset of starbursts or dust absorption than the
$B$ magnitude. In the recent study of \citet{s05} the L-Z relation is
actually explored also in the NIR regime for a sample of emission-line galaxies
from the KISS survey. However, they present only global fits to all their sample
galaxies, which are mostly concentrated in the high luminosity part of
the diagrams (M$_K \, < \, -21$), whereas the small numbers of dwarf galaxies
in their sample appear to follow a relation with a shallower slope.
Recent efforts to test the existence of an L-Z relation
for dwarf irregular galaxies in the H band, following an approach
conceived to minimize the effects of uncertainties in distance determinations \citep{sav05},
have led to encouraging results, although still
based on a small number of galaxies.
Here we investigate the L-Z relation in the $K$ band by compiling from the
literature a sample of nearby dwarf irregular galaxies selected for spanning a wide
range in luminosity, having oxygen abundances obtained following an
homogeneous method (as far as possible), and having reliable distance determinations.
\subsection {\label{dataused}Data used in our compilation}
In Table~\ref{dIrr}
we list absolute $K_s$ magnitudes and oxygen abundances for a number
of nearby irregular galaxies. The sample of 29 galaxies was taken
either from the recent work of \citet{v05}, where new NIR measurements
of dwarf irregular galaxies are presented, or from the classic work of
\citet{rm95} and includes two additional galaxies NGC 1705 and NGC 1156
\citep[taken from ][]{he04}. It contains only nearby (distance $<$ 8 Mpc)
irregular galaxies, for which distances can either be determined from the
brightest-stars (bs) method, tip-of-the-RGB techniques (rgb) or Cepheids
(cep). For one of the galaxies, the available distance was determined
from the Tully-Fisher relation (tf). All distances were obtained from
the compilation of \citet{kkhm04}, except for the distance to IC 4662,
which was taken from \citet{lee03}.
\footnote {We did not include the following
galaxies from the list of \citet{v05}, because we were not able to locate
the corresponding measurements for their metallicities in the literature:
Cas 1, Mb1, Orion Dwarf, UGC 4115, UGC 4998, UGC 5692, UGC 5848, UGC
8508, UGC 5979, NGC 3741, NGC 4163, NGC 4190, and Holmberg IV. On the
other hand, the dwarf irregular galaxies from the list of \citet{rm95}
Sextans A, Sextans B, LMC, SMC, WLM, Leo A, IC 1613, and NGC 2366 were
not included because no NIR total magnitudes are available for them.}
NIR magnitudes for the galaxies not included in the sample of \citet{v05}
are from the 2MASS extended source catalog \citep{jar00}, except for
NGC 5408 whose $K_s$ magnitude is taken from \citet{npcf03}.
The values of foreground Galactic extinction in the $K$-band
(A$_K$) for the sample galaxies are listed in column 3 of
Table~\ref{dIrr}. These are taken from NED and were determined
following \citet{sch98}. Exceptions are galaxies NGC 1569 and
IC 10 that are located at low Galactic latitudes, where extinction
values derived from the dust maps of \citet{sch98} are not accurate.
For these galaxies we adopted the Galactic extinction values determined
by \citet{ola01} and \citet{rbb01}, respectively.
The oxygen abundances, compiled from the literature, were determined in two ways.
Column 6 of Table~\ref{dIrr}
shows determinations based on electron temperatures measured from
the [\ion{O}{3}] $\lambda$ 4363 line (T$_{\rm e}$-method), and column 7 shows
determinations from the empirical calibration of \citet[][ P-method]{pil01a,pil01b}.
An exception is NGC 3738 for which only a metallicity value determined
with the method of \citet{ep84} was found.
In a few cases, P-method abundances were not available in the literature, but we
could use published values of emission-line intensity ratios (corrected
for extinction) to calculate
oxygen abundances following \citet{pil01a,pil01b}. In particular
we used line intensities from \citet{lee03} for IC 10 and NGC 1560,
from \citet{ti05} for Mrk 209, from \citet{ks01} for NGC 4789A,
from \citet{mmo91} and \citet{mho96} for IC 2574 and DDO 50,
and from \citet{mmc94} for NGC 5408.
\subsection {\label{lzdiagram}The K-band L-Z diagram for dwarf irregulars}
Abundances against extinction-corrected K$_s$ magnitudes are plotted in
Fig.~\ref{LZ} (Table~\ref{dIrr}, values of column 6 plotted against those of
column 2, after correction
with extinction values of column 3). Data for all sample galaxies are
used with T$_{\rm e}$-method abundances, when available (i.e. for all
but two galaxies, see Table~\ref{dIrr}, column 6).
A non-weighted least-squares fit to the data gives the following result:
\begin{equation}
\rm 12+\log(O/H) = (-0.14 \pm 0.02)\times M_{Ks} + (5.55 \pm 0.26),
\end{equation}
with a $\sigma$ of 0.15 dex in $\log$(O/H).
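For orientation, over the magnitude range covered by the sample this relation
spans roughly one dex in abundance: relation (1) predicts
12+$\log$(O/H) $\simeq$ 7.5 at M$_{Ks}$ = $-$14 and $\simeq$ 8.4 at
M$_{Ks}$ = $-$20.5.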
{ To establish whether there is a correlation in the data, we calculated
Spearman's rank
correlation coefficient, a non-parametric measure of correlation, and found
a value of
$-$0.88. This indicates the presence of a good anti-correlation, whose
level of significance can be
inferred by comparison with published tables \citep{wj03}. Taking into
account the number of L-Z pairs in the sample,
the above correlation coefficient implies that the hypothesis that the
variables are unrelated is rejected at
the 0.1 per cent level of significance.}
{ Residuals of the fit are
shown in the lower panel of Fig.~\ref{LZ}}.
A good correlation is also present for
the subsample of 25 dwarf galaxies that have
abundances determined through the P-method
(not plotted here).
Also in this case the correlation is
good\footnote{We excluded from the fit galaxy
NGC 6822, whose metallicity was determined with the P-method, and which
deviates considerably
from the L-Z relation \citep{pil01b}.},
with a
Spearman rank correlation coefficient of $-$0.81, { and the
same level of significance as above.}
The result of the non-weighted least-squares fit
to the data is not significantly different from the one above:
\begin{equation}
\rm 12+\log(O/H) = (-0.13 \pm 0.02)\times M_{Ks} + (5.73 \pm 0.34),
\end{equation}
with a $\sigma$ of 0.16 dex.
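As a numerical cross-check, the two calibrations agree closely over the
sampled range: at M$_{Ks}$ = $-$18, for example, relation (1) gives
12+$\log$(O/H) $\simeq$ 8.07 and relation (2) gives $\simeq$ 8.07 as well.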
\section{\label{obs} Observations and results for HCG 31}
We have used relation (1) above, combined with
{ available K-band photometry, from \citet{ls04}, and our}
new medium-resolution spectroscopy of the objects in HCG 31, to
study the nature of the members of this
group, as deduced from their location in the K-band L-Z relation for
normal dwarf galaxies.
Our new data
are described in the following.
\subsection {\label{newdata}New data -- Observations}
New imaging and multi-slit spectroscopic observations of HCG 31 were
done with the GMOS instrument, mounted on the Gemini North telescope, on
August 29 and September 21 of 2003, respectively.
The imaging consisted of $5\times180$ s exposures in the r$^\prime$ band,
and $5\times240$ s exposures in the g$^\prime$ band. The filters are
from the SDSS system \citep{fuk96}. The
typical FWHM for point sources was $\sim$ 0\farcs75 in all images.
The observations
were performed in photometric conditions. Fig.~\ref{image1}
displays the r$^\prime$ image of HCG 31. Fig.~\ref{zoomedimages}
displays zoomed panels of selected regions, { with 20 surface
brightness contour levels
logarithmically spaced from 17.8 to 24.0 mag arcsec$^{-2}$. The positions
of the spectroscopic slits are also indicated.}
Standard reduction steps were performed with the Gemini package GMOS.
After flat-fielding and cleaning from cosmic-ray events, the final frames were
analyzed with the program SExtractor \citep{ber96}.
The calibration to the standard SDSS system was made
with the general zero points and
extinction coefficients provided by the Gemini
observatory\footnote{\texttt{www.gemini.edu/sciops/instruments/gmos/gmosPhotStandards.html}}.
The accuracy of the calibration is claimed to be within 5\% to 8\%.
Three multi-slit exposures of 960 seconds each were obtained through
a mask with 1.0\arcsec\ slits, using the R400 grating, for a final
resolution of 6.0--6.5 \AA, covering approximately the range 4000 --
8000 \AA. Three additional multi-slit exposures of 1200 seconds each
were obtained through a mask with 1.0\arcsec\ slits, using the B600
grating, for a final resolution of 4.5 \AA, covering approximately the
range 3750 -- 6600 \AA. { The typical FWHM for point sources, measured
on images taken for identification of the field, was $\sim$ 0\farcs6.
Only a few of the observations were performed in photometric conditions.}
Standard procedures were used to reduce the multi-slit spectra using
tasks within the Gemini {\sc IRAF}\footnote{IRAF is distributed by
the National Optical Astronomy Observatories, which are operated
by the Association of Universities for Research in Astronomy, Inc.,
under cooperative agreement with the National Science Foundation.}
package. Wavelength calibration was done using Cu-Ar comparison-lamp
exposures before and after the exposure on the target. Flux calibration
was done using spectroscopic standard stars obtained in the same night
of the observations. The blue and red spectra were glued together,
after flux calibration, and are shown in Fig.~\ref{spectra}.
\subsection {\label{measuredproperties}Measured
properties of the HCG 31 objects}
In columns 2 and 3 of Table \ref{properties}
we list the coordinates of all objects in
the HCG 31 group identified in the r' image.
We include
a new object which we named ``R'', at
RA = 05$^{\rm h}$ 01$^{\rm m}$ 34$^{\rm s}$ and DEC = $-$04${}^{\circ}$ 12${}^{\prime}$ 57${}^{\prime\prime}$\ (J2000),
located 2.5 arcmin north of the group central object C,
about 1 arcmin northwest
of object Q
(see Figs. \ref{image1} and \ref{zoomedimages}).
As discussed in Section~\ref{Discussion}, this may be one of the best candidates
for a tidal dwarf galaxy in HCG 31. We only obtained a spectrum for one
of the blobs which constitute region R, namely R1.
The remaining columns of Table~\ref{properties} list
measurements made by us and other authors
on the properties of the objects in the group, such as magnitudes,
H$\alpha$-luminosities, velocities, metallicities and colours
of the group members.
Column 4 lists the K
absolute magnitudes, computed from the K apparent magnitudes
measured in \citet{ls04}, for an adopted distance
to the group of 54.8 Mpc (values for regions H, Q, and R were measured by us,
see details below),
column 5 lists the
logarithm of the luminosity in H$\alpha$, when available, from \citet{ls04}
or from this work (in the latter case they are not corrected for light
loss from the slit, and are therefore lower limits), columns 6 and 7
list respectively the heliocentric radial velocities of the objects
(with errors) from this work and from the \ion{H}{1} velocities obtained
by \citet{vm05}, when available.
The next four columns of Table~\ref{properties}
list four different determinations of 12 + log (O/H): using the T$_e$
method and the N2 estimator \citep[results from][]{ls04}, in columns 8
and 9, and using the N2 and O3N2 estimators, derived from our own data,
in columns 10 and 11 { (see the definition of these metallicity
estimators in the following subsection). Columns 12 and 13 list the
measured line ratios used in the determination of the metallicities
of columns 10 and 11.
Equivalent widths of H$\alpha$ are listed in column 14
and g'-r' colours (measured within an aperture of 2")
are in the last column of Table~\ref{properties}. }
Details are presented below.
{\it Radial velocities:}
The spectra of all objects marked in Fig.~\ref{zoomedimages} (with
exception of A, A1 and A2) are shown in
Fig.~\ref{spectra}. All spectra have
emission lines.
The heliocentric velocities of the observed objects,
derived from the redshifts of the brightest lines, are listed in column
6 of Table~\ref{properties} (the errors are the rms of the individual
line measurements). The \ion{H}{1} velocities at the location of
each corresponding object \citep[from][]{vm05} are given in column 7.
We note that the velocities of all optical regions studied here coincide
with the \ion{H}{1} velocities from the channel maps within the errors
(except for region C), suggesting a physical association of the objects
with the \ion{H}{1} clouds. The velocities are in agreement with those
derived from long-slit spectroscopy by \citet{rhf90} and \citet{h92},
for the objects measured in common.
{\it Metallicities:}
Besides the metallicity estimates obtained by \citet{ls04},
not available for all objects (see columns 8 and 9 of Table~\ref{properties}),
we have
computed values for 12 + log (O/H) using two
metallicity indicators, from our own measurements of the
line ratios. The results are listed in
columns 10 and 11 of Table~\ref{properties}.
Our first
estimate was obtained with the N2 calibrator, following \citet{pp04},
which is defined as the logarithm of the
[NII]$\lambda$6584/H$\alpha$ ratio. The resulting
metallicities and the measured ratios are listed in columns 10 and 13 of
Table~\ref{properties} respectively.
Our values in column 10 can be directly compared with those
of column 9, derived by \citet{ls04}. As can be noted, these
completely independent measurements are very similar, for the
objects measured in common.
Our second estimate was made using the O3N2 index
\citep{pp04}, based on the logarithm of the
([OIII]$\lambda$5007/H$\beta$)/([NII]$\lambda$6584/H$\alpha$) ratio.
The resulting metallicities and measured ratios
are listed in columns 11 to 13 of Table~\ref{properties}.
The rms scatter in the calibration of these estimators is
$\sim$ 0.25 - 0.4, which is larger than the internal errors;
we then assume that 0.3 is the one-sigma error
of our measurements. The metallicities inferred for the
objects are very similar within the errors, the mean value being
12+log(O/H) = 8.3 for our estimates (with either method).
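For reference, inverting relation (1) shows that a normal dwarf irregular with
12+log(O/H) = 8.3 would be expected at M$_{Ks}$ $\simeq$ $-$19.6, a useful
benchmark when the HCG 31 objects are placed on the K-band L-Z diagram below.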
We point out that the metallicity obtained for object Q is very
uncertain given that the weak emission lines are superposed onto
strong absorption lines, hampering a reliable determination of the
metallicity for this object. We include this object in the tables
but do not plot its magnitude/metallicity in the K-Z relation
of Fig.~\ref{LZ}.
{\it Equivalent widths and ages of objects E, F, H and R:}
For objects E1, F1, F2, F3, H1, H2 and R1, where hardly any continuum is
detected (see Fig.~\ref{spectra}),
we assume that the objects are excited by young stars formed in an
instantaneous starburst and use Starburst99 \citep{leit99},
with solar metallicity and Salpeter IMF, to estimate ages
(obtained from the
equivalent widths of H$\alpha$, as given in column 14 of
Table~\ref{properties}).
From a comparison between the
observed H$\alpha$ equivalent widths
and those produced
by Starburst99, we find ages around 3 Myr for all regions but F3 (the
latter has an approximate age of 6 Myr, while the youngest one, R1,
has an age of 2.6 Myr).
{\it Spectroscopic properties of Q and G:}
The two galaxies Q and G, besides exhibiting \ion{H}{2}-region-like
emission lines, also have strong Balmer absorption lines, typical
of spectra dominated by A- and early F-type stars, with H$\delta$
equivalent widths significantly larger than that of normal spiral galaxies.
The equivalent widths (EW) of the [OII]$\lambda$3727 emission line
and the H$\delta$ absorption line are often used to classify galaxy
spectra on the basis of their current/past star formation episodes
\citep[e.g.][]{p99,pw00,ms04}. Galaxy G, with its EW([\ion{O}{2}]) =
$\sim$ 20 \AA\ and EW(H$\delta$) $\sim$ 8\AA, falls in the category of the
so-called e(a) galaxies \citep{p99}, or A+em galaxies, according to
the notation of \citet{bal99}, with still considerable ongoing star formation.
The spectrum of galaxy Q does not include the [OII]$\lambda$3727 line,
however its weak H$\alpha$ emission line indicates that a low level of
current star formation is still present. Its spectrum is dominated by
A-type stars, as indicated by the particularly strong Balmer absorption
lines, EW(H$\delta$) $\sim$ 13 \AA, a rather extreme and unusual value.
This galaxy can then also be considered to be of
e(a) type.
{\it Aperture (g'--r') for all objects and K magnitudes of H, Q and R:}
Aperture magnitudes in the g' and r' bands were
obtained for all studied objects, within an aperture of 2 arcsec,
using the task phot in IRAF.
K magnitudes for objects H, Q and R, which were not given in \citet{ls04}
were derived by us using the program Sextractor (\citet{ber96},
parameter {\it magbest}). We measured these three objects in unpublished
J images of HCG 31 and assumed a J-K colour of 0.345 for all the objects.
We used J-band images from the archive of the New Technology Telescope of
the European Southern Observatory, obtained with the instrument SOFI.
Nine images of HCG 31 obtained on Nov 3rd/2001 were retrieved,
of which only five contained
objects Q and R (object H was visible in all nine). Images in the K
band were also available in the archive but they did not go deep enough
to allow measurements of the objects. Calibration of the instrumental
photometry was done by using stars common in our fields and in the
2MASS images of the group. Our final {\it magbest} J magnitudes were
18.6$\pm$0.1, 14.80$\pm$0.04 and 19.7$\pm$0.5 for objects H, Q and R respectively. These values were
corrected for the foreground Galactic extinction given in NED, A$_J$
= 0.046 mag and transformed into K magnitudes using a J-K colour of 0.345
(which is an average of the J-K colour of the other members of the group).
The corresponding K absolute magnitudes for the three objects,
assuming a distance to the
group of 54.8 Mpc, are listed in column 4 of Table~\ref{properties}.
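As an illustrative check of this conversion for object H (the numbers are
rounded; we use the standard distance modulus and neglect any K-band
extinction and K-correction):
\begin{eqnarray}
m_K & \simeq & m_J - A_J - (J-K) = 18.6 - 0.046 - 0.345 \simeq 18.2, \nonumber \\
M_K & \simeq & m_K - 5\log_{10}\left(\frac{54.8\,{\rm Mpc}}{10\,{\rm pc}}\right)
\simeq 18.2 - 33.7 \simeq -15.5.
\end{eqnarray}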
\subsection {\label{kzrelationh31}The K-Z relation for the TDG candidates of HCG 31}
We overplot in Fig.~\ref{LZ} the values of the metallicity and
K magnitude (values in columns 8, 9 or 10 against those of
column 4 of Table~\ref{properties})
for the components of HCG 31. The metallicities
come either from \citet{ls04} or from
this work. Those from this work (for objects H and R only)
were derived through the N2 calibrator (column 10 of Table~\ref{properties}).
The only value plotted from column 9 is that for region A1, for
which no other measurement of metallicity is available.
In Fig. 14 of \citet{ls04}, which plots the B-metallicity relation for
the HCG 31 members, galaxies B, C and G are more than 2 magnitudes
off the B-Z relation for normal dwarf galaxies. In our
corresponding Fig.~\ref{LZ}, using K magnitudes instead, these same
objects are closer to the best-fit line of the relation for normal
dwarfs, while still on the high-luminosity side. For the
fainter objects, on the other hand, only H and R stand off the
correlation, although A1, E2 and F1+F2 also show a larger metallicity and/or
a fainter magnitude than the best-fit line. We suggest that A, B,
C, G and Q are galaxy members of HCG 31, while the fainter objects are
either tidal debris or tidal dwarf galaxies of the group, formed due to
the interaction. This is further discussed in section~\ref{Discussion}.
\section {\label{Discussion}Discussion}
\subsection{\label{generalproperties}General Properties of HCG 31 members}
Table~\ref{history} summarizes all of the properties of the HCG 31 members,
either from this work or gathered from the literature.
This table may be important for future modelling of the group,
allowing comparison between simulations and observations. As can be noted,
HCG 31 has been observed in almost all wavelengths. The optical morphology
of the galaxies in the group is very disturbed, as are the optical
and HI kinematics.
A point worth noting is the large difference between the
H$\alpha$ equivalent width of object F3 (EW = 74 \AA) and those of
F1 and F2 (1508 \AA\ and 1010 \AA, respectively). This, in
turn, leads to a large difference in the derived ages of F1+F2 (3 Myr)
and F3 (6 Myr). The g'-r' colours of the three blobs are very similar
and are the bluest observed for any object in this group (see
last column of Table~\ref{properties}). \citet{a04} showed that
the rotation curve is flat through F1 and F2 and that the velocity of F3 is
$\sim$ 40 km s$^{-1}$ higher than that of F1 and F2. The
metallicities are similar for the three blobs (within the errors),
although F3 tends to have a higher metallicity than the other two. F3 is
also the lowest surface brightness object of the three. One might think
that the different equivalent widths and the (small) discontinuity in
the kinematics could be hints that F1+F2 is a distinct object from F3,
but we do not believe this is the case, given the detailed analysis of
the kinematics of the region: we have revisited the Fabry-Perot data
cube of HCG 31 (see \citet{a04} for a description of the data)
and have seen that there clearly exists a continuity in the velocity
field between F1+F2 and F3. The situation for object E is similar:
although the H$\alpha$ equivalent width of E1 (740 \AA) is much
larger than that of E2 (34 \AA), the analysis of the kinematics in
\citet{a04} shows that E1+E2 form one single object.
One new idea put forward in section \ref{measuredproperties} and also
listed in Table~\ref{history} may deserve some discussion. Although
the optical spectra taken in this study show mainly HII-region-like
spectra for the HCG 31 regions (see Fig.~\ref{spectra}), for two of
the galaxies, G and Q, the spectra are typical of e(a)-type galaxies.
The properties of the e(a) class of galaxies have been interpreted by
\citet{p99} as a possible indication of dusty starbursts. However,
in the spectra of Q and G there is no indication of particularly
high internal extinction affecting the emission lines,
since the observed H$\alpha$/H$\beta$ ratios are moderate ($\sim$ 3.4
for Q and 4.0 for G). Hence, the interpretation of these two spectra
remains uncertain. Nevertheless, we suggest that galaxies Q and G have
started reducing their star formation rate and are evolving towards a
post-starburst phase.
\subsection {The tidal tails and the TDG candidates of HCG 31}
HCG 31 is completely embedded within an \ion{H}{1} envelope and it has two
main tails: 1) the southern tail, which contains objects E, H, F and G,
is a narrow and linear optical and H$\alpha$ tidal tail, starting from
galaxy C towards the southeast and ending with galaxy F, or
perhaps going even further (to the region of diffuse emission situated between
F and G); this tail has been the subject of several previous studies
\citep[e.g.][]{a04}; and 2) the northern tail, including objects A1, Q and
R (see Fig.~\ref{image1}). The base of this tail, close to galaxy A,
shows an open configuration, suggesting that the material has moved from
its original plane. Except for the \ion{H}{1} study of \citet{vm05},
this northern tail has never been studied before.
N-body simulations for compact groups show that stellar tidal tails
are transient features that can be easily destroyed due to multiple
interactions \citep[e.g. ][]{a97,b85}. Gaseous tails may also have
similar fates. The frequency of occurrence and length of the tidal tails
in galaxy mergers are strong functions of the encounter geometry and the
merger phase. Galaxies in the pre-merger phase, where the two galaxies
are still distinct but have gone through the first encounter, are expected
to have well developed tidal features, as seen for HCG 31. As the final
merger takes place, the tidal tails gradually disappear, with the material
in the tails being accreted back onto the remnant or escaping the system
altogether, eventually forming TDGs \citep[e.g. ][]{mdh98,hvg96}.
In Fig. \ref{LZ} we plotted the location of the components of HCG 31
in the K-Z diagram in an attempt to identify good TDG candidates
in this group. The objects display a range of luminosities of at least 6
magnitudes and a range of oxygen abundances of $\sim$ 0.5 dex.
While in the B-Z relation plotted in Fig. 14 of \citet{ls04} objects B, C
and G are more than two magnitudes off the relation, in the corresponding
K-Z relation these objects are closer to the best-fit line for normal dwarfs,
although still on its high-luminosity side. Objects A1, E2 and F1/F2
are located on the low-luminosity side of the relation but
do not stand out, perhaps because
even their K magnitudes could be brightened by the strong starbursts present
in these objects \citep[it is well known that at least F has a low old-stellar
population content,][]{jc00}. On the other hand, the low-luminosity
objects H and R are completely off the K-Z relation.
We note that for galaxy Q the
metallicity determination has a very large error, given the weak emission
lines in its spectrum; it is therefore not plotted in Fig.~\ref{LZ}.
We suggest, based on the velocities, the positions in the L-Z relation,
the morphologies (our Figs. 2 and 3) and the internal kinematics
\citep{a04}, that A, B, C, G and Q are galaxy members of HCG 31. On
the other hand, based on the arguments below, we suggest that the
lower-luminosity objects are either tidal debris or tidal dwarf galaxies
of the group, formed as a consequence of the interaction. The more
difficult task is then to decide which of the lower-luminosity objects
A1, E, F, H, R are tidal dwarf candidates and which are merely tidal debris.
\citet{ipv01} devised a scheme to pick out tidal dwarf galaxy candidates
in compact groups, based on their projected distances to the nuclei
of the parent galaxies (at least 2 R$_{25}$) and their H$\alpha$
luminosities (which should be greater than 10$^{38}$ erg s$^{-1}$).
We note that HCG 31 was present in the sample of \citet{ipv01} and our
regions E2, H1, H2, F1, F2, and F3 correspond to their regions c, d+e,
f, g, h, and i. The last three, composing object F, were among their
final list of good candidates for TDGs.
Following the same criteria,
we classify one additional region as a tidal dwarf galaxy candidate,
namely, region R (not known at the time of that study).
\citet{vm05} also noted that object F is a good candidate for a tidal
dwarf in formation. HCG 31 F1 coincides with a peak \ion{H}{1} column
density of $3\times10^{21}$ atoms cm$^{-2}$. In addition, the three blobs,
F1, F2 and F3, are the bluest objects of the group (see last column of
Table~\ref{properties}). No underlying old stellar population has been
detected for this object \citep{jc00} and it has a metallicity similar to
that of the central galaxies of HCG 31 (A+C and B). Although \citet{a04}
measured no rotation for F1+F2, which could suggest that it may not become
an independent object, recent simulations of tidal dwarf galaxy formation
\citep{wnb05} conclude that these objects are expected to be non-rotators
or very slow rotators.
Object R is a second excellent tidal dwarf galaxy candidate in
the HCG 31 group. It coincides with an \ion{H}{1} cloud with a column
density of 10$^{21}$ atoms cm$^{-2}$, one of the highest seen
among all the tidal filaments of HCG 31. Optically it
consists of an approximately
round distribution of faint H$\alpha$ knots. We obtained a spectrum
for one of these knots (R1) and confirm that it is at the redshift of the
group. Unlike the intergalactic HII regions observed by \citet{ihii},
these knots are located at the peak of the HI distribution and seem to
be a chain of linked star-forming regions.
Objects F and R are then
good examples of tidal fragments, well separated from their
progenitors and coincident with peaks of the HI distribution,
that could evolve to become tidal dwarf galaxies.
Objects A1, E2 and H
are likely also tidal fragments, but much closer to their
progenitors, and may be falling back onto the main central
object of the group. In fact, E has two components in counter-rotation
\citep{a04}, suggesting that some of the material may
already be returning to the parent galaxy.
Our hypothesis that regions F and R are good TDG candidates
is also supported by the comparison
of their properties with those of tidal dwarf galaxies that have been
identified in other systems, for example those studied by \citet{ipv01},
\citet{wdf03}, and by \citet{twg03}. The TDGs identified by \citet{ipv01}
in a subsample of HCGs have H$\alpha$ luminosities comparable to those
of the TDG candidates studied in this work.
Similar results concerning the H$\alpha$ luminosities
of TDG candidates were obtained by \citet{wdf03}, who found an average
H$\alpha$ luminosity of 2.2 $\times$ 10$^{39}$ erg s$^{-1}$ for TDGs
identified in a sample of ten interacting systems, with the most luminous
knots having values of order 10$^{40}$ erg s$^{-1}$. Other candidate
TDGs were identified by \citet{twg03} in the tidal tail of the compact
group CG J1720-67.8, whose properties seem to indicate an evolutionary
stage similar to that of HCG 31 \citep{tfva05}. The estimated masses
of the TDGs in this system are of order 2 $\times$ $10^7~M_\odot$
and burst ages for most objects range from $\sim$ 6 to 8 Myr. H$\alpha$
luminosities from integral field spectroscopy are of order 10$^{40}$
erg s$^{-1}$, with the brightest TDG candidates having L(H$\alpha$)
= 4 $\times$ 10$^{40}$ erg s$^{-1}$, without correction for internal
extinction \citep{tsk05}. These ages are comparable to that obtained
for object F3 (cf. Section 3.2), and the H$\alpha$ luminosities are
of the same order of magnitude as those of our brightest components,
listed in column 5 of Table~\ref{properties}.
Additionally, we note that TDGs are
usually associated with \ion{H}{1} density peaks \citep[e.g. TDGs in
M81 and in NGC 5291, ][respectively]{mak02,dm98}, as is the case
for objects F and R \citep[see fig. 3 of][]{vm05}.
Regarding the metallicities, when we place the HCG 31 objects in the
diagram of Fig.~\ref{LZ}, we note that for the lowest luminosity
objects (H and R) the metallicities we measure are much higher than those of
normal dwarf irregulars of similar luminosities, although F is not
far from the best-fit line for dwarf galaxies. We suggest that most of
the gas, and some of the stars, which today form regions F and R were earlier
in galaxies A+C or B and were torn out by the interaction. Then, in more
recent times (a few million years ago), the young star complexes in F and
R were formed through compression of the intergalactic \ion{H}{1}
gas by galaxy collisions. In fact, the young star complexes in R and F
must have formed in situ rather than having been ejected from the central
galaxy, because the time needed for them to move even from the outskirts
of the central interacting pair to their current location, at a typical
ejection velocity (due to dynamical interactions) of 200-300 km s$^{-1}$,
would be much longer than the age of their massive stars (as determined
in Section~\ref{measuredproperties}).
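To make this timescale argument explicit (the projected distance of 30 kpc
adopted here is purely illustrative, not a measured value), a fragment ejected
at $v \simeq 250$ km s$^{-1}$ would need
\begin{equation}
t \simeq \frac{d}{v} \simeq \frac{30\,{\rm kpc}}{250\,{\rm km\,s^{-1}}}
\approx 1.2\times10^{8}\,{\rm yr}
\end{equation}
to reach its current location, i.e.\ much longer than the $\sim$3-6 Myr
stellar ages derived in Section~\ref{measuredproperties}.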
N-body simulations of galaxy collisions have shown that tidal dwarf
galaxies often form in the tails of major mergers. The timing
of each star-forming burst along the tails is strongly determined by
the orbital orientation (prograde or retrograde) and the internal
structure of the merging galaxies (with or without a bulge). Recently,
through high-resolution N-body/SPH simulations, \citet{wnb05} have
shown that bound stellar objects can only form in the tidal arms of
interacting disk galaxies if these have a sufficiently massive and/or
extended gas component. To our knowledge, no simulation including more
than three galaxies, as is the case for compact groups, has ever been
performed to reproduce a specific observational configuration.
Simulations of HCG 31 are much needed to understand the evolution of
this complex interacting system and the formation of new objects.
\acknowledgments
We would like to thank the Gemini staff for obtaining the observations.
The authors would like to acknowledge support from the Brazilian agencies
FAPESP (projeto tem\'atico 01/07342-7), CNPq, DAAD/CAPES (projeto 173/04)
and the Alexander von Humboldt Foundation. C.M.d.O. and S.T. would like
to thank the hospitality of the Universitaets-Sternwarte, in Munich and
the Max-Planck-Institut f\"ur Extraterrestrische Physik, in Garching,
where part of this work was developed. S.T. acknowledges support by the
Austrian Science Fund (FWF) under project P17772. We made use of the
Hyperleda database and the NASA/IPAC Extragalactic Database (NED). The
latter is operated by the Jet Propulsion Laboratory, California Institute
of Technology, under contract with NASA.
\section{Introduction}
The discovery of an astrophysical flux of high-energy neutrinos by IceCube~\cite{HESE2} is a major step
forward in the ongoing search for the origin of cosmic rays, since the neutrino emission may be produced
by hadronic interactions in astrophysical accelerators. Of particular interest is the identification
of $\nu_{\tau}$, which are expected to be produced only in negligible amounts in astrophysical accelerators, but should appear in
the flux detected by IceCube due to neutrino flavor change. Up to now there has been no clear identification of $\nu_{\tau}$ at high
energies, so their detection would be very important from both the astrophysical and the particle-physics points of view.
It would give new information about the astrophysical flux and serve
as an additional confirmation of the astrophysical origin of the IceCube high-energy diffuse neutrino signal. It
would also shed light on the emission mechanisms at the source, test the fundamental properties of neutrinos over extremely long baselines and
better constrain new-physics models which predict significant deviations from equal fractions of all flavors.
The existing Imaging Air Cherenkov Telescopes (IACTs) such as MAGIC~\cite{magic}, VERITAS~\cite{veritas} and H.E.S.S.~\cite{hess} could have the capability to detect PeV tau neutrinos by searching for very inclined showers~\cite{fargion}. To do so, the Cherenkov telescopes need to be pointed in the direction of the taus escaping from the Earth crust, i.e.\ at or a few degrees below the horizon. In \cite{upgoing_magic}, the effective area for up-going tau neutrino observations with the MAGIC telescopes was calculated analytically and found to be maximal in the range from 100\,TeV to $\sim$1\,EeV. However, the sensitivity for diffuse neutrinos was found to be very low because of the limited field of view (FOV) (the topographic conditions allow the telescopes to be pointed downhill only within a small window of about 1 degree in zenith and azimuth), the limited observation time and the low expected neutrino flux.
On the other hand, if flaring or disrupting point sources such as GRBs are being pointed at, one can expect an observable number of events even from a single GRB if it is close by, as recently shown by the All-sky Survey High Resolution Air-shower (Ashra) team \cite{Asaoka:2012em}. Moreover, for IACT sites with different topographic conditions, the acceptance for up-going tau neutrinos is increased by the presence of mountains~\cite{gora:2015}, which serve as a target for neutrino interactions,
leading to an enhancement in the flux of emerging tau leptons. A target mountain can also shield against cosmic rays and star light. Nights with high clouds often prevent the observation of $\gamma$-ray sources, but still allow pointing the telescopes to the horizon.
For example, at the MAGIC site there are about 100 hours per year with high clouds~\cite{clouds}, so a large amount of data could be accumulated.
While the observation of tau neutrinos is not the primary goal of IACTs, a certain level of complementarity can be expected when switching from the normal (i.e.\ $\gamma$-ray) observation mode to tau-neutrino (i.e.\ mostly horizontal) pointing.
Next-generation Cherenkov telescopes, i.e.\ the Cherenkov Telescope Array (CTA)~\cite{cta}, can in addition exploit their much larger FOV (in extended observation modes) and their higher effective area.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.69 \columnwidth]{Fig1A.eps}
\includegraphics[width=0.69 \columnwidth]{Fig1B.eps}
\includegraphics[width=0.69 \columnwidth]{Fig1C.eps}
\end{center}
\caption{ \label{fig::cta::layout} {\bf Cherenkov telescope layouts} considered in this work: IACT-4 (A), CTA-E (B) and CTA-I (C). The IACT-4 array consists of
four Cherenkov telescopes with $\sim12$ m aperture, $2.5^{\circ}$ FOV and $0.16^{\circ}$ camera pixel size, while the CTA arrays consist of telescopes of different sizes, i.e.\ Large Size Telescopes (LST) with $\sim23$ m aperture, $5^{\circ}$ FOV and $0.09^{\circ}$ camera pixel size (red full circles), Medium Size Telescopes (MST) with $\sim$12 m aperture, $8^{\circ}$ FOV and $0.18^{\circ}$ camera pixel size (open black circles) and Small Size Telescopes (SST) with $\sim$4-7 m aperture, $10^{\circ}$ FOV and $0.25^{\circ}$ camera pixel size (black full circles). For a more detailed description of the CTA telescope properties, see Table 1 in~\cite{ctasim}.}
\end{figure*}
In this paper, we present an update of the work in~\cite{gora:2015},
where a detailed Monte Carlo (MC) simulation of event rates induced by Earth-skimming tau neutrinos was performed
for an ideal Cherenkov detector at the MAGIC and VERITAS sites and at two proposed Cherenkov Telescope Array sites: Meteor Crater and Yavapai Ranch.
For VERITAS and the considered Cherenkov Telescope Array sites the expected neutrino sensitivities are up to a factor of 3 higher than for the MAGIC site
because of the presence of surrounding mountains. The calculated neutrino rates are comparable to those estimated for the IceCube neutrino
telescope, assuming realistic observation times of a few hours for Cherenkov telescopes.
However, in our previous work the event rates were obtained with an assumed efficiency for tau-induced showers of about 10\%.
Here, we present a more detailed simulation of the trigger and identification efficiency for air showers induced by Earth-skimming tau neutrinos,
for IACTs and for a few CTA layouts considered in~\cite{ctasim}. We analyze the simulated shower images on the camera focal plane, showing that IACTs/CTA can distinguish air showers induced by tau neutrinos from the background of very inclined hadronic showers. We also recalculate the
point-source acceptance and the expected event rate, taking into account this new estimate of the trigger efficiency.
The structure of the paper is the following: Section~\ref{method} describes our MC simulation chain. In Section~\ref{sec:results} we show the trigger/identification efficiencies for $\tau$-induced showers as a function of the tau lepton energy and study the properties of shower images on the camera focal plane, as described by Hillas parameters. This section also presents an update of our previous work~\cite{gora:2015}.
Finally, we summarize the results and conclude in Section~\ref{summary}.
\section{Method}
\label{method}
In order to study the signatures expected from neutrino-induced showers in IACTs, a
full Monte Carlo (MC) simulation chain was set up, consisting of three steps.
First, the propagation of a given neutrino flux through the Earth and the atmosphere is simulated using an extended version of the ANIS code~\cite{gora:2007}. For fixed neutrino energies, $10^{6}$ events are generated on top of the atmosphere with zenith angles ($\theta$) in the range $90^{\circ}$--$105^{\circ}$ (up-going showers) and with azimuth angles in the range $0^{\circ}$--$360^{\circ}$. Neutrinos are propagated along their trajectories of length $ \Delta L$ from the generation point on top of the atmosphere to the interaction volume, defined as the volume which can contribute to the expected event rate, in steps of $\Delta L$/1000 ($\Delta L/1000 \geq 6$ km). At each step of propagation, the $\nu$--nucleon interaction probability is calculated according to a parametrization of its cross section based on the chosen parton distribution function (PDF). In particular, the propagation of tau leptons through the Earth is simulated. All computations are done using digital elevation maps (DEM)~\cite{dem} to model the surrounding mass distribution of each site under consideration. The flux of the leptons emerging from the ground as well as their energy and the decay vertex positions are calculated inside an interaction volume, modeled by a cylinder with radius of 35\,km and height 10\,km, see also~\cite{gora:2007,gora:2015} for more details.
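The bookkeeping behind this stepwise propagation can be illustrated with a
minimal sketch (this is not the ANIS implementation; the cross section, step
length and density below are toy values chosen only to make the example
self-contained):
\begin{verbatim}
import numpy as np

N_A = 6.022e23        # Avogadro's number [1/g]
SIGMA_CC = 1.0e-33    # toy charged-current cross section [cm^2]

def propagate(step_grammage, rng):
    """Step a neutrino along its trajectory; step_grammage is the
    column depth [g/cm^2] traversed in each propagation step.
    Returns the step index of a CC interaction, or None."""
    for i, dx in enumerate(step_grammage):
        p_int = 1.0 - np.exp(-N_A * SIGMA_CC * dx)
        if rng.random() < p_int:
            return i              # a tau lepton is created here
    return None                   # the neutrino leaves the volume

rng = np.random.default_rng(1)
steps = np.full(1000, 6.0e5 * 2.65)  # 6 km steps in 2.65 g/cm^3 rock
vertex_step = propagate(steps, rng)
\end{verbatim}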
Then, the shower development of $\tau$-induced showers and the Cherenkov light production from such showers are simulated with CORSIKA~\cite{corsika}. CORSIKA (version 6.99) was compiled with the TAULEP option~\cite{taulep}, such that the tau decay is simulated with PYTHIA~\cite{pythia}. In order to simulate Cherenkov light from inclined showers for any defined Cherenkov telescope array, the CERENKOV and IACT options were also activated~\cite{simtelarray}. Finally, to treat the atmospheric depth correctly for inclined showers, the CURVED EARTH and SLANT options were selected.
Up to now, we could not simulate showers with zenith angle $\theta>90^{\circ}$ when combining the CURVED EARTH and IACT options. Therefore, we use a zenith angle of $87^{\circ}$ to estimate the trigger efficiency for up-going tau neutrino showers. This should be a reasonable assumption, because the trigger efficiency for $\tau$-induced showers of the same energy should depend only slightly on the zenith angle (as confirmed by our later results), as long as the corresponding altitudes of the shower maxima are similar.
The CORSIKA simulations were performed for different configurations: a H.E.S.S.-like array of four telescopes
(referred to here as IACT-4), and a few CTA arrays considered in~\cite{ctasim}, see Figure~\ref{fig::cta::layout}.
IACT-4 can be considered representative of the current generation of IACTs. Among the CTA array configurations shown in~\cite{ctasim}, the arrays chosen were CTA-E (59 telescopes) and CTA-I (72 telescopes), which according to~\cite{ctasim} are the best compromise between a compact and a dense layout. The selected arrays have only a slightly worse sensitivity for $\gamma$-rays than the full CTA array~\cite{ctasim}.
We simulated showers induced by tau leptons with energies from 1 to 1000 PeV in steps of 0.33 decades and with injection positions at altitudes ranging from the detector level to the top of the atmosphere. We used a detector level of 1800 m a.s.l.\ for the simulation of the current generation of IACTs and 2000 m a.s.l.\ for CTA. The injection point spans different vertical depths from the ground to the top of the atmosphere in steps of at least 50 g/cm$^2$. At each vertical depth, 1000 showers were generated in order to study shower-to-shower fluctuations and to cover the different tau decay channels.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\columnwidth]{Fig2A.eps}
\includegraphics[width=0.8\columnwidth,height=6.5cm]{Fig2B.eps}
\end{center}
\vspace{-0.5cm}
\caption{\label{fig::hessimage} Example of simulated shower images with primary particle energy 10 PeV and zenith angle $\theta=88^{\circ}$ as seen by an IACT-4 camera. (Left) a proton interacting at the top of the atmosphere, with a first interaction point at a vertical depth below 50 g/cm$^2$ and a detector-to-shower distance of about 1000 km; (Right) a tau lepton decaying close to the detector, with an injection vertical depth of 760 g/cm$^2$ and a detector-to-shower distance of about 50 km.}
\end{figure*}
For each CORSIKA-simulated shower the impact point was randomized within a circle of radius $R_{max}$ on a plane perpendicular to the shower axis, i.e.\ the CSCAT option with VOLUMEDET/IACT was used. This radius was optimized by looking at the fraction of triggered events as a function of $R_{max}$, and was finally set to $R_{max}=200$~m for IACT-4 and $R_{max}=1000$~m for CTA-E, in order to avoid information loss due to showers which could be triggered but were not simulated.
For high energies ($>1$ PeV) the computing time becomes excessively long (scaling roughly with the primary energy). In order to reduce it to tolerable values, the so-called
``thin sampling'' mechanism was used~\cite{thinning}: secondary particles were thinned and re-weighted with a thinning level of 10$^{-6}$. The kinetic energy thresholds for explicitly tracked particles were set to 300, 100, 1 and 1~MeV for hadrons, muons, electrons and photons, respectively. The shower simulations were performed with the QGSJET II model for hadronic interactions in the atmosphere.
The results of the CORSIKA simulations were used as input for the last step, i.e.\ the simulation of the detector response, for which we used the Cherenkov telescope simulation package {\tt sim\_telarray}~\cite{simtelarray}. The light collection is simulated including the ray-tracing of the optical system, the measured transmittance and the quantum efficiency of the PMTs. The response of the camera electronics is simulated in detail, including the night-sky background and the different system triggers.
The {\tt sim\_telarray} simulations were performed for IACT-4, and for CTA-E and CTA-I with the so-called {\it production-1} settings.
The response to $\tau$-induced showers is found to depend only weakly on the details of the optical set-up, field of view and camera electronics.
In order to compare images at the camera plane we also simulated inclined showers induced by protons, photons and electrons. At energies larger than 1 PeV we do not expect a significant background of showers initiated by photons or electrons. The proton simulations were instead used to estimate the main isotropic background for neutrino searches, due to interactions of cosmic rays in the atmosphere. In order to have enough statistics we used a strategy similar to the one for $\tau$-induced showers,
i.e.\ we simulated proton-induced showers with primary particle energies ranging from 1 to 1000 PeV in steps of 0.33 decades. For each considered zenith angle bin (80$^{\circ}$, 83$^{\circ}$, 85$^{\circ}$, 87$^{\circ}$) the number of simulated events in the CORSIKA input card was set to the corresponding number of events
expected from a power-law spectrum with spectral index $\gamma=-2.7$. The directions of the primary protons were varied within a circle with aperture $\beta=5^{\circ}$ around the fixed primary direction, i.e.\ the VIEWCONE option was selected in the CORSIKA simulations.
\section{Results} \label{sec:results}
\subsection{Image on the camera}
For showers observed at large zenith angles the Cherenkov light has to
traverse a long optical path, due to the thicker layer of atmosphere. The shower maximum is located far
from the observatory and the photon density at the mirrors decreases. This reduces the efficiency compared to lower zenith angles, especially at low energies: images on the camera are dimmer and smaller in size.
As an example, in Figure~\ref{fig::hessimage} we show representative shower images for a 10~PeV proton injected at the top of the atmosphere and a 10~PeV tau lepton injected close to the detector. As expected, the shower image on the camera focal plane for the tau lepton is much larger and contains many more photons than the proton one. Note also that for inclined showers
the hadronic and electromagnetic components are almost completely absorbed in the atmosphere, while the muons can reach the ground.
Thus, the camera images of $p$-induced showers will mostly contain a muon ring (if the muons propagate parallel to the optical axis) or incomplete rings (arcs), see Figure~\ref{fig::hessimage} (Left) as an example.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\columnwidth,height=6.0cm]{Fig3A.eps}
\includegraphics[width=\columnwidth,height=6.0cm]{Fig3B.eps}
\end{center}
\caption{\label{trigger333}(A) Trigger efficiency as a function of the distance between the injection point and the detector level measured in g/cm$^2$ ($\Delta X$), for IACT-4, for different zenith angles and energies of the tau lepton. Note that for different zenith angles the distance from the atmospheric border to the detector level is significantly different due to the Earth's curvature. (B) Trigger probability for CTA at a fixed zenith angle of $87^{\circ}$. The distance $\Delta X=0$ g/cm$^2$ corresponds to the detector level, $\Delta X\simeq12\,000$ g/cm$^{2}$ to the top of the atmosphere.}
\vspace{-0.5cm}
\end{figure*}
\subsection{Trigger efficiency}
The trigger efficiency (trigger probability) depends on the response of a given detector and
is usually estimated from MC simulations. The trigger efficiency $T(\theta, E_{i},X)$ in an energy interval $\Delta E$ is defined
as the ratio of the number of simulated showers with a positive trigger decision to the total number of generated showers,
for fixed zenith angle $\theta$, initial energy of the primary particle $E_{i}$ and injection depth~$X$. In this work, simulations were done for a two-level trigger, the so-called majority trigger. The first level is a camera-level trigger ({\bf L1}), defined by 3 pixels above 4 photo-electrons (p.e.) within a short time window; the second level ({\bf L2}) is a coincidence trigger among all telescopes in the defined array or sub-array, requiring at least 2 neighboring triggered telescopes.
Figure~\ref{trigger333} (A) shows the trigger probability ({\bf L2}) for $\tau$-induced showers with different zenith angles and tau lepton energies for the IACT-4 array. The calculated trigger probabilities for the zenith angles $\theta=80^{\circ}, 84^{\circ}, 87^{\circ}$ are quite similar, within errors, when plotted as a function of the distance between the injection point and the detector measured in g/cm$^2$ (labeled $\Delta X$ in this work). This is understood if we note that the amount of Cherenkov light detected depends essentially on the distance between the Cherenkov telescope and the shower maximum. At its maximum a shower has the largest lateral extension and Cherenkov light production, and is thus capable of producing the largest signal seen by IACT telescopes.
As expected (see Figure~\ref{trigger333} (A)), the trigger probability increases with the primary energy of the tau lepton and with decreasing distance to the detector. Only at $\Delta X < 1000$ g/cm$^2$ does the trigger efficiency drop, because the shower maximum is too close to the detector or the shower has not yet reached its maximum development, decreasing the amount of Cherenkov light seen by the telescopes. It is also worth mentioning that
below $\Delta X \approx 6000$ g/cm$^{2}$ the trigger probability is at the level of about 90\%. The corresponding geometrical distance to the detector (in meters) depends on the zenith angle $\theta$, but for $\theta=87^{\circ}$ it is about 80 km. This provides an estimate of the size of the active volume for $\tau$-induced showers seen by IACTs.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=2\columnwidth]{Fig4.eps}
\end{center}
\caption{\label{fig::hillas} Normalized distributions of Hillas parameters for $\tau$-, $p$- and $\gamma$-induced showers, zenith angle $\theta=87^{\circ}$ and CTA-E. Only deep $\tau$-induced showers with $\Delta X <8800$ g/cm$^2$ and primary particle energy 1 PeV are shown, while for $p/\gamma$ only events interacting at the top of the atmosphere with $\Delta X >11400$ g/cm$^2$ are considered. The $p$-events come from CORSIKA simulations for primary protons with energies between 1 PeV and 1000 PeV with a differential spectral index of $-2.7$, while the $\gamma$-events come from simulations
with a primary photon energy of 1 PeV. Vertical dashed lines and arrows indicate our selection cuts developed for $\tau$-induced showers, see text for more details.}
\end{figure*}
Figure~\ref{trigger333} (B) shows the trigger probability for the CTA arrays shown in Figure~\ref{fig::cta::layout} (B) and (C) and for different primary energies of the tau lepton. As for IACT-4, the trigger probability increases with energy: the higher the energy, the more Cherenkov light is produced and the larger the number of triggered events. Comparing
with the results of Figure~\ref{trigger333}~(A), we find that the larger CTA arrays, with more telescopes of different optics and camera structures, yield essentially the same fraction of triggered events (above $\Delta X>2000$ g/cm$^2$). The difference in the trigger probability seen for $\Delta X < 2000$ g/cm$^{2}$ between IACT-4 and CTA-E is due to the different detector altitudes, i.e.\ the higher altitude of CTA-E.
The altitude difference is only 200 m, but for a zenith angle $\theta=87^{\circ}$ it translates into a difference of about 4 km in the detector-to-shower distance. For IACT-4 this leads to a larger fraction of triggered showers than for CTA-E, because more showers can reach their maximum development.
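The geometric scaling behind this statement is simply the stretching of a
vertical offset along the line of sight:
\begin{equation}
\Delta d \simeq \frac{\Delta h}{\cos\theta} =
\frac{200\,{\rm m}}{\cos 87^{\circ}} \approx 3.8\,{\rm km}.
\end{equation}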
Moreover, for the considered CTA arrays the trigger efficiency depends only slightly on the array structure. This can be explained by the fact that for the inclined showers studied in this work (with $\theta>80^{\circ}$) the radius of the Cherenkov light pool at the detector level is larger than 1~km~\footnote{For an index of refraction $n_{air}=1.00023$ at an altitude of 1800~m, the Cherenkov opening angle is $\alpha \simeq 1.2^{\circ}$. Thus, for a geometrical distance from the shower maximum to the detector
of about 50 km, the Cherenkov ring radius on the ground, assuming no change of the refraction index within this distance, is given by 50~km~$\times \tan(\alpha)/ \cos(\theta)=1.04$~km$/\cos(\theta)$ for fixed zenith angle $\theta$.}, which is much larger than the distance between telescopes in the considered arrays. Thus, the fraction of triggered events is expected to be similar and only weakly dependent on the density of telescopes.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=2.0\columnwidth]{Fig5.eps}
\end{center}
\caption{\label{fig::hillas2} Normalized distributions of Hillas parameters for 1, 10 and 100 PeV $\tau$-induced showers,
for the CTA-E array and zenith angle $\theta=87^{\circ}$.}
\end{figure*}
\subsection{Discrimination of $\tau$-induced showers}
In this section we show how to discriminate $\tau$-induced showers from the background of hadronic showers. The results presented here are based
on simulations of down-going showers with zenith angle $\theta>84^{\circ}$, but they can be applied to any neutrino flavour,
since neutrinos of all flavours can induce down-going air showers, which produce a large amount of Cherenkov light at high energies ($>1$ PeV).
We have already shown in Figure~\ref{trigger333} (A) that the trigger probability does not depend on the zenith angle for inclined showers,
so the method can be used for down-going neutrino searches as well. Of course, in such a case the neutrino sensitivity is reduced, due to the small
target density for neutrino interactions in the atmosphere,
compared to the sensitivity obtained for Earth-skimming neutrinos.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\columnwidth,height=5.5cm ]{Fig6A.eps}
\includegraphics[width=\columnwidth,height=5.5cm]{Fig6B.eps}
\end{center}
\caption{\label{trigger3333} (A) Trigger and identification efficiency for 1 PeV $\tau$-induced showers with zenith angle $\theta=87^{\circ}$ for IACT-4 and CTA-E; (B) identification efficiency for CTA-E as a function of the tau lepton energy and the injection slant depth measured from the detector level.}
\end{figure*}
Each recorded and calibrated simulated event consists of the number of photoelectrons collected
by each pixel in the camera while the trigger gate is open. The standard trigger configuration requires
at least three connected pixels with a signal above the discriminator
threshold. However, most of the camera pixels collect light not from the Cherenkov
shower but from the background. To eliminate the background contribution an
image cleaning is performed~\cite{simtelarray}. The resulting cleaned shower image contains
only the pixels considered to carry physical information on the shower development.
The cleaned camera image is characterized by a set of image parameters introduced by M. Hillas in~\cite{hillas}.
These parameters provide a geometrical description of the shower images and are used to infer the energy of the primary particle and its arrival direction, and to distinguish between $\gamma$-ray showers and hadronic showers. It is interesting to study these parameters also in the case of deep $\tau$-induced showers.
In Figure~\ref{fig::hillas} the distributions of the Hillas parameters for deep $\tau$-induced showers are shown, in comparison with those obtained for $p$- and $\gamma$-induced showers. In general, these parameters depend on the geometrical distance of the shower maximum to the detector, which for deep $\tau$-induced showers is much smaller than for inclined $p$- and $\gamma$-induced showers, which develop at the top of the atmosphere. For example, at $\theta>80^{\circ}$ this distance is a few hundred kilometers for particles interacting at the top of the atmosphere and only a few tens of kilometers for deep $\tau$-induced showers. This geometrical effect leads to a rather good separation of close ($\tau$-induced) and far-away ($p$, $\gamma$) events in the Hillas parameter phase space. This is evident in
the $Size$ parameter, see Figure~\ref{fig::hillas} (A). This parameter measures the total amount of detected light (in p.e.) in all camera pixels and is correlated with the primary energy of the shower. The $Size$ distribution for $\tau$-induced showers is shifted to larger values compared to $\gamma$- and $p$-induced events, due to the smaller distances to the detector.
The difference is also seen in the parameters characterizing the longitudinal and lateral shower development, {\it Length} and {\it Width}, Figure~\ref{fig::hillas} (B) and (C). For showers induced by hadrons (protons) the image on the camera is more irregular and typically larger than for showers induced by photons. Thus, the average values of {\it Length} and {\it Width} for photons are expected to be smaller than for protons. At larger inclinations the so-called $\gamma$/hadron separation is weaker, since the images become smaller in size. Nevertheless, the difference between $\gamma$- and $p$-induced showers is still well visible in our simulations\footnote{The peak in the {\it Length} distribution around $1^{\circ}$ comes from single muons, which create a ring/arc in the camera and lead to a large reconstructed length. An example of this class of events is shown in Figure~\ref{fig::hessimage} (Left).}.
For $\tau$-induced showers, the maxima of the {\it Length} and {\it Width} distributions lie between those for $\gamma$-rays and protons.
This can be explained by the fact that the tau lepton decays via different channels~\cite{dpg}, and a $\tau$-induced shower is usually a superposition of electromagnetic sub-showers from the decays of neutral pions and hadronic sub-showers from charged pions. Thus,
the shower image in the camera can have different topologies, i.e.\ it can look like a $p$-event or a $\gamma$-event.
The angular distance between the center of the shower image and the camera center is called
the {\it Distance} parameter. It is correlated with the angle between the shower and the telescope axis, and
for larger zenith angles it decreases due to the larger detector-to-shower distance.
However, this parameter can also increase when the detector-to-shower distance becomes smaller at fixed zenith angle.
This effect is well seen in Figure~\ref{fig::hillas} (D) for the point-like simulations (i.e.\ when the normal mode of CORSIKA without the VIEWCONE option was used),
where the center of the image ellipse for deep $\tau$-induced showers moves away from the camera center compared to $\gamma$-induced showers.
For the proton simulations, in which the directions of the primaries were varied within 5$^\circ$ around 87$^{\circ}$, the distribution is shifted to higher values of the {\it Distance} parameter.
The three peaks seen in the {\it Distance} distributions, for example
at 2.5$^{\circ}$, 4$^{\circ}$ and 5$^{\circ}$ for $\tau$-induced showers,
are due to the structure of the CTA-E array, which consists of three different telescope types with different FOVs.
Another Hillas parameter, which describes the displacement of the shower image on the camera with respect to its center, is the {\it Miss} parameter.
As can be seen in Figure~\ref{fig::hillas} (E), the distribution for $\tau$-events is shifted to lower values compared to $p$-events, showing
that this observable also has strong separation power against the background of hadronic events.
Figure~\ref{fig::hillas} (F) shows the distribution of {\it Alpha} for deep $\tau$-, $\gamma$- and $p$-induced showers. {\it Alpha} is the
angle between the major axis of the ellipse and the direction from the image center of gravity to the camera center. This parameter has the highest $\gamma$/hadron separation power (for single-IACT observation data), since $\gamma$-ray induced images point to the position of the source in the camera and are thus characterized by small values of {\it Alpha}. On the contrary, hadronic showers are distributed isotropically in the sky, implying a rather flat {\it Alpha} distribution. However, at large zenith angles $\gamma$-induced images have a rather circular shape~\footnote{For a $\gamma$-induced shower at large zenith angle, the Cherenkov light, due to the long optical path, triggers only a few pixels, so the shape of the image is less well determined in terms of Hillas parameters. In order to resolve the elliptical structure of typical $\gamma$-induced showers
one would need a camera with a pixel size much smaller than what is currently proposed for CTA (i.e.\ between $0.09^{\circ}$ and $0.25^{\circ}$).} rather than an elongated elliptical one, implying that
the {\it Alpha} parameter is less well determined. At zenith angles $\sim87^{\circ}$ the distribution is quite flat and becomes similar to that of $p$-events. For deep $\tau$-induced showers the distribution peaks at small values of {\it Alpha}, again providing strong separation power against the
hadronic background.
$Distance$, $Miss$ and $Alpha$ depend only slightly on the primary particle energy (as shown in Figure~\ref{fig::hillas2} (D-F)) and on the shower zenith angle (in the range above $80^{\circ}$). For the energy-dependent parameters $Size$, $Length$ and $Width$, in contrast, we observe the expected shift to higher values
for higher primary particle energies. It is also worth mentioning that the largest differences between deep $\tau$-induced showers and $p$- and $\gamma$-induced showers are observed for the $Size$, $Miss$ and $Alpha$ parameters. These observables can be used to distinguish deep $\tau$-induced showers from the background of inclined hadronic showers.
In order to evaluate the best set of cuts to identify deep $\tau$-induced neutrino showers, we used the program GARCON~\cite{garcon}, which returns the cuts yielding the maximal signal efficiency with minimal background contamination. We considered the six-parameter phase space
$\vec{x} = \{Size, Length, Width, Distance, Miss, Alpha\}$. As signal we considered deep $\tau$-induced showers
(with $\Delta X < 4000$ g/cm$^2$, i.e.\ $\sim 50$ km from the detector, and $\theta=87^{\circ}$). As background we considered
showers initiated by primary protons with energies between 1 PeV and 1000 PeV with a differential spectral index of $\gamma=-2.7$, interacting at the top of the atmosphere, with $\Delta X > 11400$ g/cm$^{2}$ and zenith angle $\theta=87^{\circ}$\footnote{For the zenith angles
$85^{\circ}$, $83^{\circ}$ and $80^{\circ}$ the Hillas distributions look similar, except for the $Size$ distribution, for which we observe a small shift of the maximum
to higher values as the zenith angle decreases.}.
The sets of optimized cuts retaining most of the signal with zero remaining protons are listed in Table 1 for IACT-4 and CTA-E.
\begin{table*}[ht]
\small
\center
\begin{tabular}{ccccccccc}
\hline
array&E$_{i}^{\tau}$ & $Size$ & $Length$& $Width$&$Distance$ & $Miss$ & $Alpha$ & Signal Efficiency \\
type&[PeV] & [p.e.] & [deg] & [deg] & [deg]& [deg] & [deg] & [\%] \\
\hline
\hline
IACT-4& 1& $>$ 2010 & $<$ 1.81 & $<$ 0.17 &$<$ 0.91 & $<$ 0.15 & $<$ 51 & 31 \\
CTA-E && $>$ 791 & $<$ 0.35 & $<$ 0.10 &$<$ 2.34 & $<$ 0.35 & $<$ 62 & 32 \\
 & & & & & & & & \\
IACT-4&10 & $>$ 11500 & $<$ 0.52 & $<$ 0.47 &$<$ 1.09 & $<$ 0.27 & $<$ 90 & 33 \\
CTA-E & & $>$ 2590 & $<$ 0.39 & $<$ 0.20 & $<$ 3.47 & $<$ 0.66 & $<$ 19 & 27\\
 & & & & & & & & \\
IACT-4 &100 & $>$ 43100 & $<$ 0.71 & $<$ 0.72 &$<$ 2.26 & $<$ 0.131& $<$ 17& 30 \\
CTA-E & & $>$ 8700 & $<$ 0.39 & $<$ 0.30 & $<$ 3.47 & $<$ 0.66 & $<$ 19 & 27 \\
\hline
\end{tabular}
\caption{Chosen cuts for the identification of $\tau$-induced showers at zenith angle $\theta = 87^{\circ}$.}
\vspace{-0.5cm}
\end{table*}
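In analysis code, applying such a cut set reduces to a simple box selection.
As an illustration we use the 1 PeV CTA-E row of Table 1 (the cut values are
taken from the table; the code itself is a sketch, not the GARCON output
format):
\begin{verbatim}
# Box cuts: the 1 PeV CTA-E row of Table 1.
CUTS = {"size": (">", 791.0), "length": ("<", 0.35),
        "width": ("<", 0.10), "distance": ("<", 2.34),
        "miss": ("<", 0.35), "alpha": ("<", 62.0)}

def passes(event):
    """event: dict of Hillas parameters of one cleaned image."""
    for key, (op, val) in CUTS.items():
        ok = event[key] > val if op == ">" else event[key] < val
        if not ok:
            return False
    return True
\end{verbatim}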
The selection cuts presented in Table 1 (see also Figure~\ref{fig::hillas}) demonstrate that MC neutrino signatures can be distinguished from the background events triggering the IACT/CTA telescopes when pointing below (or close to) the horizon. These criteria make it possible to identify tau neutrinos against the background of hadronic showers and are used to calculate the identification efficiency for $\tau$-induced showers.
In Figure~\ref{trigger3333} (A) the influence of the cuts on the trigger probability is shown, while Figure~\ref{trigger3333} (B)
gives the identification efficiency as a function of the primary energy of the tau lepton.
At vertical depths $\Delta X < 3000$ g/cm$^2$ the identification efficiency is lower for CTA-E than for IACT-4,
due to the different detector altitudes, i.e.\ the 200 m higher altitude of CTA-E. However, the CTA-E distribution extends to larger distances from the detector, up to $\Delta X=8000$ g/cm$^2$.
\subsection{Event rate calculations }
\label{eventrate}
The total observable rates (numbers of expected events) were calculated as $N=\Delta T \times \int_{E_{\mathrm{th}}}^{E_{\mathrm{max}}} A^{\mathrm{PS}}(E_{\nu_\tau})\times\Phi(E_{\nu_\tau})\, dE_{\nu_\tau}$, where $\Phi(E_{\nu_\tau})$ is the neutrino flux, $\Delta T$ the observation time
and $A^{\mathrm{PS}}(E_{\nu_\tau})$ the point-source acceptance. The acceptance for a point source can be estimated as the ratio of the diffuse acceptance $A(E_{\nu_\tau})$ to the solid angle $\Delta \Omega$ covered by the diffuse analysis, multiplied by the fraction of time the source is visible, $f_{\mathrm{vis}}(\delta_{s},\phi_{\mathrm{site}})$, i.e.\ $A^{\mathrm{PS}}(E_{\nu_\tau})\simeq A(E_{\nu_\tau}) / \Delta \Omega \times f_{\mathrm{vis}}(\delta_{s},\phi_{\mathrm{site}})$. The fraction of visible time depends on the source declination ($\delta_{s}$) and the latitude of the observation site ($\phi_{\mathrm{site}}$).
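Numerically, this rate integral is a one-liner once $A^{\mathrm{PS}}$ and
$\Phi$ are tabulated on a common energy grid; a minimal sketch (the flux and
acceptance values below are placeholders, not the ones used in this work):
\begin{verbatim}
import numpy as np

def expected_events(e_nu, acc_ps, flux, delta_t):
    """N = dT * int A_PS(E) Phi(E) dE on a grid e_nu [GeV];
    acc_ps in cm^2, flux in 1/(GeV cm^2 s), delta_t in s."""
    return delta_t * np.trapz(acc_ps * flux(e_nu), e_nu)

flux = lambda e: 1.0e-8 * e**-2          # placeholder E^-2 flux
e_grid = np.logspace(6, 9, 200)          # 1 PeV - 1 EeV
acc = np.full_like(e_grid, 1.0e4)        # placeholder acceptance
n = expected_events(e_grid, acc, flux, 3 * 3600.0)
\end{verbatim}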
In this work, the detector diffuse acceptance for an initial neutrino energy $E_{\nu_\tau}$ is calculated from:
\begin{eqnarray}
A(E_{\nu_\tau}) =N_{\mathrm{gen}}^{-1} \times \sum_{i=1}^{N_{k}}
P_{i}(E_{\nu_\tau},E_{\tau},\theta) \nonumber \\
\times T_{\mathrm{eff},i}(E_{\tau},x,y,h,\theta) \times
A_i(\theta)\times \Delta \Omega,
\label{aperture}
\end{eqnarray}
where $N_{\mathrm{gen}}$ is the number of generated neutrino events. $N_k$ is the number of $\tau$ leptons with energies $E_{\tau}$ larger than the threshold energy $E_{\mathrm{th}}=1$\,PeV and a decay vertex position inside the interaction volume\footnote{Only tau leptons which decay in the interaction volume are considered, so the tau decay probability is included.}. $P(E_{\nu_\tau},E_{\tau},\theta)$ is the probability that a neutrino with energy $E_{\nu_\tau}$ and zenith angle $\theta$ produces a lepton with energy $E_{\tau}$ (this probability is used as the ``weight'' of the event). $A_i(\theta)$ is the physical cross-section of the interaction volume seen by the neutrino. $T_{\mathrm{eff}}(E_{\tau},x,y,h,\theta)$ is the trigger efficiency for tau-lepton-induced showers with the decay vertex at position ($x$, $y$) and height $h$ above the ground.
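In code, Eq.~(\ref{aperture}) is just a weighted sum over the simulated taus;
a minimal sketch:
\begin{verbatim}
import numpy as np

def diffuse_acceptance(weights, trig_eff, areas, n_gen, d_omega):
    """MC estimate of Eq. (1): sum over the N_k taus decaying in
    the interaction volume of P_i * T_eff,i * A_i(theta), times
    the solid angle, normalised to the generated neutrinos."""
    return np.sum(weights * trig_eff * areas) * d_omega / n_gen
\end{verbatim}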
As already mentioned, in our previous work~\cite{gora:2015} we assumed an average trigger efficiency of $\langle T_{\mathrm{eff}} \rangle =10$\% in the energy range 1-1000 PeV. However, as seen for example in Figure~\ref{trigger3333} (A), the average trigger efficiency is significantly
larger than 10\%, even for 1 PeV tau leptons. For tau leptons decaying within 4000 g/cm$^2$ of the detector, with energies in the range 1-1000 PeV, the average trigger efficiency is about 90\% (77\%) for IACT-4 (CTA); thus we also expect the acceptance and event rates to be larger by a factor of 9 (8) compared to what was shown in~\cite{gora:2015}.
In Figure~\ref{fig::acccc} we show our new estimates of the acceptance to $\tau$ neutrinos for different IACT-4 sites: La Palma (MAGIC), Namibia (H.E.S.S.) and Arizona (VERITAS), and for the recently chosen CTA locations: in the South, Chile (Armazones: latitude $\phi=24.58^{\circ}$ S, longitude $\lambda=70.24^{\circ}$ W),
and in the North, Tenerife ($\phi=28.27^{\circ}$ N, $\lambda=16.53^{\circ}$ W). As expected, the acceptance depends on the local topographic conditions, with the largest
acceptance for the Arizona and Chile sites~\footnote{Due to the lack of results from IceCube in the tau-neutrino channel, we use IceCube's muon-neutrino acceptance \cite{IC-80-acc} for the sensitivity comparison. This is motivated by the fact that at the Earth we expect an equal flavor flux from cosmic neutrino sources due to full mixing \cite{mixing}. In \cite{up-icecube,diff-icecube} it is also shown that for neutrino energies between 1\,PeV and 1000\,PeV the muon-neutrino acceptance is only slightly larger than that for tau neutrinos.}.
To calculate the acceptance for up-going $\tau$-induced showers we used the trigger efficiency instead of the shower identification efficiency,
since in the studied angular range ($90^{\circ} <\theta < 105^{\circ}$) the expected background from protons and photons is negligible. The same is
expected for Cherenkov telescope observations pointing towards mountains, which shield against cosmic rays and star light.
However, in some cases, for example at La Palma or Tenerife, the telescopes can be pointed towards the sea.
In that case, at high energies ($>$ 1 PeV) we can expect a non-zero background component due to high-energy muons or muon bundles
(as seen, for example, by IceCube~\cite{icecubemuons}), or even gamma showers induced by muons interacting via bremsstrahlung or pair production~\cite{kiraly,sciutto}. If the identification efficiency is used instead of the trigger efficiency, the calculated acceptance and the expected event rate are about two (three) times lower for IACT-4 (CTA).
Table~\ref{tab::rate222} shows the expected event rates for IACTs and for the Tenerife and Chile sites, compared to those of IceCube, for the
fluxes used in our previous work~\cite{gora:2015}. The rate is calculated for tau neutrinos with zenith angles between 90$^{\circ}$ and 105$^\circ$, assuming that the source is in this FOV for a period of 3 hours. Flux-1 and Flux-2 are predictions for neutrinos from a $\gamma$-ray flare of 3C 279~\cite{2009IJMPD}. Flux-3 and Flux-4 are predictions for PKS~2155-304 in the low state and high state, respectively~\cite{Becker2011269}. Flux-5 corresponds to a prediction for 3C~279 calculated in~\cite{PhysRevLett.87.221102}, which in the PeV energy range is at a level similar to the astrophysical high-energy neutrino flux reported by IceCube~\cite{lasticecube}. For Flux-3 and Flux-4 (i.e.\ the models covering the energy range beyond $\sim 1\times10^{8}$ GeV) the event rate is a factor of 16 to 30 larger than that expected for IceCube in the northern sky, assuming three hours of observation. For neutrino fluxes covering the energy range below $\sim 5\times10^{7}$\,GeV (Flux-1, Flux-2, Flux-5), the number of expected events for these sites is at least three times (La Palma) or seven times (Arizona) larger than that estimated for IceCube.
\begin{table*}[bt!]
\caption{\label{tab::rate222} Expected event rates for Cherenkov detectors at different sites compared to IceCube. The values are calculated with the ALLM~\cite{allm} tau energy-loss model and the GRV98lo~\cite{GRVlo} cross-section, with $f_{\mathrm{vis}}=100$\%, $\Delta
\Omega=2\pi (\cos(90^{\circ})-\cos(105^{\circ}))=1.62$ and $\Delta T=3$ hours. Rates are in units of $10^{-3}$. For the Arizona, Namibia and La Palma sites
the rates are calculated with the trigger efficiency obtained for IACT-4, while for Chile and Tenerife the trigger efficiency obtained for CTA-E is used.}
\begin{center}
\begin{tabular}{cccccccc}
\hline
\hline
&Flux-1 &Flux-2& Flux-3 & Flux-4 &Flux-5 \\
\hline
\hline
$N_{\mathrm{La\ Palma}}$ &2.5 & 1.4 & 0.77 &7.7 &2.3\\
$N_{\mathrm{Namibia}}$ &4.3 & 2.3 & 0.99 &9.9 &3.8 \\
$N_{\mathrm{Arizona}}$ &7.4 & 3.4 & 1.44 &14.4 &6.2 \\
& & & & & \\
$N_{\mathrm{Tenerife}}$ &3.0 & 2.2 & 0.73 &7.3 &2.8\\
$N_{\mathrm{Chile}}$ &7.9 & 3.3 & 0.98 &9.8 &6.0\\
& & & & & \\
$N^{\mathrm{Northern \mbox{ } Sky}}_{\mathrm{IceCube}}$& 0.68& 0.25 & 0.046 & 0.46 & 0.88 \\
$N^{\mathrm{Southern \mbox{ } Sky}}_{\mathrm{IceCube}}$& 1.1& 0.32 & 0.076 & 0.76 & 0.88 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\columnwidth]{Fig7.eps}
\end{center}
\vspace{-0.5cm}
\caption{\label{fig::acccc} Acceptance for point sources, $A^{\mathrm{PS}}(E_{\nu_\tau})$, to earth-skimming tau neutrinos as estimated for the IACT sites, a
future location of a Cherenkov instrument (Chile) and IceCube (as extracted from~\cite{IC-80-acc}). For the Arizona, Namibia and La Palma sites
the acceptance is calculated with the trigger efficiency obtained for IACT-4, while for Chile and Tenerife the CTA-E trigger efficiency was used instead. The local topographic conditions are included.}
\end{figure}
The influence on the expected event rate arising from uncertainties in the tau-lepton energy loss and in the neutrino-nucleon cross-section was studied in our previous work~\cite{gora:2015}. There, the influence of these systematic uncertainties on the event rate was estimated to be about +14\%/-7\% for Flux-1 and +43\%/-16\% for Flux-3.
\section{Summary}
\label{summary}
In this paper, we presented results of MC simulations of $\tau$-induced air showers for IACTs and for selected CTA arrays. We calculated the trigger and identification efficiencies for $\tau$-induced showers and studied the properties of their images on the camera focal plane, as described by Hillas parameters. In our previous work~\cite{gora:2015}, which assumed a trigger efficiency of 10\%, we predicted that the calculated neutrino rates are comparable to, or even larger than (above $\sim30$ PeV), what is expected for the IceCube neutrino telescope, assuming observation times of a few hours for Cherenkov telescopes. In this work we have carried out more realistic simulations and we predict even larger efficiencies for IACTs. In the most favorable case in Table~\ref{tab::rate222},
we expect 1 event during 210 hours of observation. Taking into account that for this purpose IACTs have to be pointed below the horizon during moonless nights, the detection of tau neutrinos seems to be difficult. However, such an observation time, or even a larger one, can be accumulated during periods with high clouds, when these instruments are normally not operated. Very often (for the La Palma site about 100 hours/year) high clouds prevent the observation of $\gamma$-ray sources, but still allow pointing the telescopes to the horizon.
This makes the prospect of detecting tau-neutrino-induced showers with IACTs more attractive.
\section{Introduction}
Computer vision plays a major role in sports, ranging from automatic semantic annotation of the observed scene to an enhanced viewing experience.
One major research field in computer vision is the recognition of actions and activities in videos, with a special interest in actions and activities performed by humans. The recognition of specific actions usually takes place in videos of limited length, called trimmed videos, by assigning a single action class to each video.
As is well known, short videos containing only a single action are rather an artificial construct, produced either by special recordings of only a single action or by previously cutting the short video out of a larger one. More naturally, untrimmed videos usually have no specific or limited length and contain more than a single action -- a video of the Summer Olympics may contain for example the sporting activities `high jump', `hammer throwing', and `fencing', but also non-sporting parts, such as the commentators talking, interviews and shots of the crowd as well. Figure \ref{fig:eyecatcher} depicts an example. Another application field is the area of video surveillance, where often only a few time segments of large videos contain actions of interest, such as theft. Independent of the concrete application, time segments containing actions have to be identified from the whole video as accurately as possible in addition to the classification of the different actions taking place. As traditional approaches for solving this problem mostly use an expensive combination of sliding window mechanisms and classification, temporal action proposals generation was introduced as a preprocessing step, searching for high-quality time segments first, which are thought to contain an action of interest with both high probability and good temporal localization. Thus, classification has to be performed only on the temporal action proposals.
\begin{figure}[t]
\begin{center}
\fbox{\includegraphics[width=.98\linewidth]{pictures/eyecatcher.png}}
\end{center}
\caption{Example of sports activities in a video which have to be localized temporally (green) and segments of non-sports activities that do not have to be localized temporally (red).}
\label{fig:eyecatcher}
\end{figure}
A recent state-of-the-art approach based on deep neural networks is the `Single-Stream Temporal Action Proposal' (SST) model \cite{sst}, which processes videos utilizing 3D convolutions and a recurrent architecture.
To the best of our knowledge, we are the first to investigate different positions and ways of fusion in two-stream architectures that utilize 3D convolutions on optical flow and image data for temporal action proposals generation.
Our main contributions are: (1) The development of four two-stream model architectures for temporal action proposals generation originating from the SST model \cite{sst}. (2) Investigation and fine-tuning of the hyperparametrizations of the models. (3) Quantitative evaluation on the THUMOS'14 \cite{THUMOS14} dataset. (4) Showing the independence from a specific optical flow calculation method.
\section{Related Work}
\textit{Action recognition} is the task of associating a single action class with a video. From this field, a lot of relevant innovation emerged. Two-stream convolutional neural networks \cite{simonyan2014two} were designed to process image data on the first stream and stacked optical flow fields on the second stream. The additional usage of stacked optical flow fields contributes temporal dynamic information about motion. Another approach was the extension of the two-dimensional kernels used by classical CNNs into the third dimension, therefore operating on 3D volumes defined by consecutive frames. The prominent C3D (Convolutional 3D) network \cite{c3d} employs this approach by processing videos divided into blocks of 16 consecutive frames. This is another way of utilizing temporal information. More recent approaches combine the two previous ideas: temporal information is utilized by applying 3D convolutions on two streams, one using image data and the other one using optical flow. Among others \cite{khong2018improving, varol2018long}, the I3D (Inflated 3D ConvNet) network \cite{carreira2017quo} is a prominent example of that approach, coming to the conclusion that 3D convolutional neural networks also profit from a two-stream architecture. This insight in the field of action recognition serves in this work as inspiration to transfer that approach to the field of temporal action proposals generation.
The need for \textit{temporal action proposals} comes from the task of the temporal localization of actions in long, untrimmed videos and the classification of said actions. Before temporal action proposals, this problem was tackled with sliding window approaches: the extraction of overlapping time segments of varied length. Subsequently, a classification of each time segment was done to find the action in time. As this process was very time-consuming, with a lot of time segments to be classified, temporal action proposals were invented to reduce the number of time segments that have to be classified. There exists early work \cite{caba2016fast} on temporal action proposals relying on traditional approaches. Among recent successful work \cite{escorcia2016daps,sst, gao2017cascaded,gao2017turn, lin2017temporal} it is instead common to take advantage of deep neural networks. Several works \cite{escorcia2016daps,sst, gao2017cascaded,gao2017turn} are utilizing 3D convolutional neural networks (3D ConvNets) for the generation of temporal action proposals -- an approach already known from the field of action recognition, see above. Being another prominent approach from the field of action recognition, two-stream networks with 2D kernels are used as well \cite{lin2017temporal,gao2017cascaded}, taking advantage of optical flow on the second stream. Despite being successfully used in action recognition, the combination of 3D convolutions with a two-stream network has not yet become common practice in the field of temporal action proposals generation.
In the field of \textit{temporal action localization} -- both the temporal localization and classification of actions in long, untrimmed videos -- the combination of 3D convolutions with two-stream networks found use recently in \cite{chao2018rethinking, nguyen2018weakly}. In most works, the temporal action proposal generation is a sub-task of the overall approach. However, there also exist end-to-end approaches \cite{buch2017end}.
\section{Methodic approach}
In this work, we follow the general approach presented by Buch \etal \cite{sst} for the SST model. Just like there, each video is divided into non-overlapping blocks of 16 frames and features are extracted using the C3D network \cite{c3d} as a feature extractor. Those features serve as input for a recurrent neural network, producing confidence scores for 32 possible time segments in each step. After post-processing with a score threshold and non-maxima suppression, a reduced set of temporal action proposals is generated. We stick to this approach and utilize the existing architectures, but extend them to a two-stream model architecture by introducing a second stream working on the corresponding images of optical flow, with the optical flow corresponding to image $j$ being calculated from images $j-1$ and $j$. Applying 3D convolutions on the optical flow allows us to efficiently make use of the dynamics of motion. We design four variants of this new architecture, differing in the position and the way the separate streams are fused before continuing in a common stream.
In the following, we will have a closer look at the four designed two-stream model architectures. All of them have in common that they process videos by dividing them into subsequent blocks of 16 images without overlap and processing them sequentially. For each block of 16 original images on the first stream -- called video stream -- there are the 16 corresponding images of optical flow on the second stream -- called flow stream.
These blocks of 16 images are then first processed in parallel before the streams get fused later in the architecture. The position and the way of the fusion differ in the four model architectures. In the following, we will highlight the C3D network apart from the SST network which uses it as feature extractor.
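Before turning to the individual variants, the score-threshold and non-maxima suppression step mentioned above can be summarized by the following minimal Python sketch; the function names and the concrete thresholds are our own illustrative assumptions, not those of the released SST code.
\begin{verbatim}
# Hedged sketch of the proposal post-processing: keep segments whose
# confidence exceeds a threshold, then apply temporal non-maxima
# suppression (NMS). Thresholds are illustrative assumptions.
def temporal_iou(a, b):
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def postprocess(proposals, scores, score_thr=0.5, nms_thr=0.8):
    # proposals: list of (start, end) pairs; scores: confidences
    kept = sorted((ps for ps in zip(proposals, scores)
                   if ps[1] >= score_thr),
                  key=lambda ps: ps[1], reverse=True)
    selected = []
    for p, s in kept:
        if all(temporal_iou(p, q) < nms_thr for q, _ in selected):
            selected.append((p, s))
    return selected
\end{verbatim}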
\begin{figure*}
\begin{center}
\fbox{\includegraphics[width=1\textwidth]{pictures/models/var_1_and_2_eng_thick_renamed.png}}
\end{center}
\caption{Outline of variant mid fusion by concatenation (2S-Mid+) and mid fusion by \textit{fc} layer (2S-Mid\textit{fc}) of the two-stream model architectures. To get 2S-Mid\textit{fc} from 2S-Mid+, simply replace the 2S-Mid+ content in the dashed box with the one of 2S-Mid\textit{fc}. For 2S-Mid+, the optional processing steps and the concatenation of feature vectors are bundled in the `combination of features'.}
\label{fig:var_1_and_2_scheme}
\end{figure*}
\paragraph{\#1: Mid fusion by concatenation (2S-Mid+)}
The first of the designed two-stream variants fuses the two separate streams by concatenating features extracted by two separate C3D networks before being used as input into the SST network. This approach is inspired by Khong \etal \cite{khong2018improving} from the field of human action recognition. One of their investigated two-stream models utilizes C3D features extracted from the \textit{fc6}-layer of two separate C3D networks, one of them operating on the original images and the other one operating on optical flow. Among other processing steps, the two separate C3D feature vectors get concatenated there before being fed into a linear support vector machine (SVM) for classification.
The idea of fusing two streams by concatenating C3D features serves as a basis for our variant 2S-Mid+ of the designed two-stream networks. Two separate C3D networks get employed: one operating on the original images and one operating on the corresponding images of optical flow. The two streams stay separate until the end of the C3D networks where separate feature vectors are extracted. Optional processing of these feature vectors, like applying $L2$-normalization or principal component analysis, takes place after the extraction. The next performed step is the concatenation of the separate feature vectors. For block $i$ of a video, $f_{\text{\textit{v,i}}}$ denotes the feature vector from the video stream and $f_{\text{\textit{f,i}}}$ denotes the feature vector from the flow stream, which are concatenated and result in the concatenated feature vector $f_{\text{\textit{c,i}}}$.
\begin{equation}
f_{\text{\textit{c,i}}} = [f_{\text{\textit{v,i}}}^T, f_{\text{\textit{f,i}}}^T]^T
\end{equation}
The concatenated feature vector $f_{\text{\textit{c,i}}}$ serves then as an input for the SST network, which determines confidence scores for temporal windows. A schematic representation of the resulting network is shown in Figure \ref{fig:var_1_and_2_scheme}.
Training: Just as with the original combination of the C3D and SST networks, the C3D networks are trained separately from the SST network on the task of action recognition. The SST network is trained afterwards, based upon the extracted and combined C3D feature vectors of the pretrained C3D networks, on the task of temporal action proposal generation.
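As a minimal illustration of this fusion step for a single block $i$ (assuming NumPy arrays and treating the optional $L2$-normalization as switchable; function and variable names are ours):
\begin{verbatim}
# Sketch of the 2S-Mid+ fusion: optionally L2-normalize both C3D
# feature vectors, then concatenate them to f_c,i (input to SST).
import numpy as np

def fuse_mid_concat(f_v, f_f, l2_normalize=True):
    if l2_normalize:
        f_v = f_v / np.linalg.norm(f_v)
        f_f = f_f / np.linalg.norm(f_f)
    return np.concatenate([f_v, f_f])  # (8192,) for two 4096-dim vectors
\end{verbatim}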
\paragraph{\#2: Mid fusion by \textit{\textbf{fc}} layer (2S-Mid\textit{\textbf{fc}})}
\label{chap:var2_model}
The first variant 2S-Mid+ fuses the streams `by hand', as the fusion is not learned by the neural network but is performed by concatenation instead. A possible logical consequence is therefore to let the neural network learn how to fuse the two streams by combining the separate C3D networks in one of their later, fully connected layers, as it will be done in 2S-Mid\textit{fc}. This idea is supported by the work of Varol \etal \cite{varol2018long} on the field of action recognition, who use two separate C3D networks for original images and optical flow that are fused using a shared \textit{fc6} layer.
2S-Mid\textit{fc} uses -- just as 2S-Mid+ -- two separate C3D networks, one operating on the original images and one operating on the corresponding images of optical flow. In contrast, the two streams stay separate only up to the \textit{fc6} layers. For block $i$ of a video, $a_{\text{\textit{v-fc6,i}}}$ denotes the activations that are put out by the \textit{fc6} layer of the video stream and $a_{\text{\textit{f-fc6,i}}}$ denotes the activations that are put out by the \textit{fc6} layer of the flow stream accordingly. Both have $4096$ elements and together serve as the common input into the \textit{fc7} layer, thus delivering $8192$ input elements. The shared \textit{fc7} layer fuses both streams, producing the output $a_{\text{\textit{c-fc7,i}}}$ with 4096 elements. In equation \ref{eq:var2_activation}, $R$ denotes the ReLU activation function.
\begin{equation}
\label{eq:var2_activation}
a_{\text{\textit{c-fc7,i}}} = R(W_{\text{\textit{fc7}}} \cdot [a_{\text{\textit{v-fc6,i}}}^T, a_{\text{\textit{f-fc6,i}}}^T]^T + b_{\text{\textit{fc7}}})
\end{equation}
The activation $a_{\text{\textit{c-fc7,i}}}$ is used as feature representation and optional post-processing can be applied before being used as input to the SST network. A schematic representation of the resulting network can be seen in Figure \ref{fig:var_1_and_2_scheme}.
\begin{figure*}[ht]
\begin{center}
\fbox{\includegraphics[width=1\textwidth]{pictures/models/var_3_and_4_eng_thick_renamed.png}}
\end{center}
\caption{Outline of variant late fusion by weighted average (2S-LateAvg) and late fusion by fc layer (2S-Late\textit{fc}) of the two-stream model architectures. To get 2S-Late\textit{fc} from 2S-LateAvg, simply replace the 2S-LateAvg's content in the dashed box with the one of 2S-Late\textit{fc}.}
\label{fig:var_3_and_4_scheme}
\end{figure*}
Training: Two single-stream C3D networks are to be trained up front. The estimated weights are used to initialize the layers up to \textit{fc6}. As the dimension of the \textit{fc7} layer changed, it cannot be trained up front with a single-stream C3D network, so the two-stream C3D network with preinitialized weights up to \textit{fc6} has to be trained again on the task of action recognition. The network trained that way is then used to extract features, which are used to train the SST network on the task of temporal action proposals generation.
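For illustration, a minimal sketch of the shared \textit{fc7} fusion of equation \ref{eq:var2_activation}, written in PyTorch purely for brevity (our actual experiments build on a TensorFlow implementation; module and variable names are ours):
\begin{verbatim}
# Sketch of the shared fc7 fusion: both 4096-dim fc6 activations are
# concatenated and mapped back to 4096 dimensions with a ReLU.
import torch
import torch.nn as nn

class SharedFC7(nn.Module):
    def __init__(self, dim=4096):
        super().__init__()
        self.fc7 = nn.Linear(2 * dim, dim)

    def forward(self, a_v_fc6, a_f_fc6):
        return torch.relu(self.fc7(torch.cat([a_v_fc6, a_f_fc6], dim=-1)))
\end{verbatim}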
\paragraph{\#3: Late fusion by weighted average (2S-LateAvg)}
In the third variant 2S-LateAvg the fusion is moved to the very end of the network by forming a weighted average of two separate confidence score vectors. The idea is inspired by the temporal segment network (TSN) from Wang \etal \cite{tsn} for action recognition which fuses the separate streams by a weighted average of class scores.
2S-LateAvg utilizes two separate streams, each consisting of a full C3D and SST network. One stream operates on the original images, the other one on the corresponding images of optical flow. Both separate C3D networks extract separate C3D feature vectors, which are used as input into two separate SST networks. The SST networks are then used to generate separate vectors with confidence scores for the same time windows. For block $i$, the confidence score vectors of the video stream and the flow stream are called $c_{\text{\textit{v,i}}}$ and $c_{\text{\textit{f,i}}}$. The streams get fused by calculating the weighted average over these separate confidence scores with the weight factor $\alpha$, 0 $\leq$ $\alpha$ $\leq$ 1, and result in the common confidence score vector $c_{\text{\textit{c,i}}}$.
\begin{equation}
c_{\text{\textit{c,i}}} = (1 - \alpha) \cdot c_{\text{\textit{v,i}}} + \alpha \cdot c_{\text{\textit{f,i}}}
\end{equation}
A schematic representation of the resulting network architecture can be seen in Figure \ref{fig:var_3_and_4_scheme}.
Training: The two separate C3D networks are pretrained on the task of action recognition. The two separate SST networks are then trained on basis of the extracted C3D feature vectors, one of them on C3D feature vectors extracted from the original images and the other one on C3D feature vectors extracted from images of optical flow. Training of the separate SST networks together based on the performance of the weighted average of the confidence score vectors is possible, but not mandatory.
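The fusion itself reduces to a single weighted sum; a minimal sketch (function name and array-style inputs are our own assumptions):
\begin{verbatim}
# Sketch of the 2S-LateAvg fusion of the 32-dim confidence vectors.
def fuse_late_avg(c_v, c_f, alpha=0.5):
    # alpha = 0.5 weights both streams equally (the best value found here)
    return (1.0 - alpha) * c_v + alpha * c_f
\end{verbatim}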
\paragraph{\#4: Late fusion by \textit{\textbf{fc}} layer (2S-Late\textit{\textbf{fc}})}
\label{chap:var4_architecture}
For 2S-LateAvg, the fusion of the separate streams is just as with 2S-Mid+ done `by hand', as the fusion is not learned by the network but done by calculating the weighted average over the confidence score vectors. Therefore, it seems logical to let the network learn how to fuse the two separate streams, which will be done in 2S-Late\textit{fc} by utilizing the fully connected layer at the end of the SST network. 2S-Mid\textit{fc}, where the second fully connected layer \textit{fc7} of the C3D network was used for the fusion, serves as inspiration.
2S-Late\textit{fc} utilizes two separate C3D networks, one operating on the original images and one on the corresponding images of optical flow. Both are used to extract separate C3D feature vectors. They serve as input into two separate SST networks, one for the C3D feature vectors derived from the original images and one for the C3D feature vectors derived from the images of optical flow. Both SST networks stay separate until the end of the sequence encoders -- the recurrent part before the fully connected layer. The output vectors of the separate sequence encoders -- for block $i$ denoted as $s_{\text{\textit{v,i}}}$ for the video stream and $s_{\text{\textit{f,i}}}$ for the flow stream -- are used as input for a shared fully connected layer, which utilizes a logistic sigmoid function $\sigma$ to calculate the common confidence vector $c_{\text{\textit{c,i}}}$ in each step.
\begin{equation}
c_{\text{\textit{c,i}}} = \sigma(W_{\text{\textit{fc}}} \cdot [s_{\text{\textit{v,i}}}^T, s_{\text{\textit{f,i}}}^T]^T + b_{\text{\textit{fc}}})
\end{equation}
An outline of the resulting network is shown in Figure \ref{fig:var_3_and_4_scheme}.
Training: The separate C3D networks are to be pretrained just as in 2S-LateAvg, the same applies to the two separate SST networks. In contrast to 2S-LateAvg, the weights determined for the separate SST networks can only be used to initialize the two fused SST networks up to the end of the sequence encoder, as the dimension of the shared fully connected layer has changed. Therefore, the fused SST networks are to be trained again to calculate the weights for the fully connected layer, before they can be used for confidence score calculation.
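Analogously to the sketch for 2S-Mid\textit{fc}, the shared output layer can be sketched as follows (PyTorch is used purely for illustration; the encoder dimensions correspond to the GRU sizes reported later in Table \ref{tab:param_var4_stream}, and the module name is ours):
\begin{verbatim}
# Sketch of the shared final fc layer of 2S-Late fc: the two sequence-
# encoder outputs are concatenated and mapped to 32 confidence scores.
import torch
import torch.nn as nn

class SharedOutputFC(nn.Module):
    def __init__(self, dim_video=128, dim_flow=256, n_windows=32):
        super().__init__()
        self.fc = nn.Linear(dim_video + dim_flow, n_windows)

    def forward(self, s_v, s_f):
        return torch.sigmoid(self.fc(torch.cat([s_v, s_f], dim=-1)))
\end{verbatim}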
\section{Evaluation}
In this section, a quantitative evaluation will be performed. First of all, we will investigate experiments regarding the hyperparametrization of the flow stream, followed by the evaluation of the four designed two-stream model architectures in comparison to the single-stream variants. The best configurations of fusion will be determined, as well as the improvement over the single-stream networks. Evaluation and training for temporal action proposal generation will be performed on the THUMOS'14 \cite{THUMOS14} dataset. The validation split will be used for the training as it is common practice on this dataset, while the test split remains for the evaluation. If the training of the C3D network is necessary, the UCF101 \cite{UCF101} dataset will be utilized. We are building upon an implementation\footnote{https://github.com/JaywongWang/SST-Tensorflow} of the SST network in TensorFlow, coming with already extracted features for the original video data of THUMOS'14. If not stated otherwise, the method of Brox \etal \cite{brox2004high} is used for optical flow calculation.
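Both metrics used below build on the recall of ground-truth segments at a given tIoU threshold; a minimal sketch of this computation (function names are ours) is:
\begin{verbatim}
# Sketch of recall at a tIoU threshold: a ground-truth segment counts
# as retrieved if some proposal overlaps it with tIoU >= threshold.
def recall_at_tiou(gt_segments, proposals, tiou_thr):
    def tiou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0
    hits = sum(any(tiou(g, p) >= tiou_thr for p in proposals)
               for g in gt_segments)
    return hits / len(gt_segments)
\end{verbatim}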
\subsection{Flow stream experiments}
As an initial step for the evaluation of the designed two-stream models, the hyperparametrization of the flow stream was investigated, working on images of optical flow. The parameters of the C3D network used for feature extraction remain untouched. For the SST network, several parameter changes are investigated. C3D features from the \textit{fc6} layer of the C3D network are compared with features from the \textit{fc7} layer. Once the training of the C3D network is stopped early; the features from that C3D network are referred to as early C3D features and compared with features from a C3D network whose training is not stopped early; those features are referred to as late C3D features. Two different preprocessing steps of the C3D features are investigated: \textit{L2}-normalization and principal component analysis for reducing the size of the feature vector from 4096 to 500 elements. Apart from these different inputs into the SST network, the parameters of the network itself are investigated: different learning rates, different numbers of neurons per GRU layer, different numbers of GRU layers and different dropout rates. For the initial configuration, the parameter values of the video stream delivered with the used implementation are utilized. First, each parameter value was altered independently; afterwards, combinations of parameter changes based on previous experiments were investigated. The initial parameter values, as well as the best configuration in the conducted experiments, can be seen in Table \ref{tab:param_flow_stream}. Not all parameter changes worked well; Figure \ref{fig:flow_stream_res_comp} shows the best results of the best configurations in comparison to the best results of the worst.
\begin{figure*}
\begin{center}
\fbox{\includegraphics[width=0.46\textwidth]{pictures/comparison_param/average_recall.pdf}}
\begin{minipage}[t]{0.04\textwidth}\includegraphics[width=\textwidth]{pictures/dummy.png}\end{minipage}
\fbox{\includegraphics[width=0.46\textwidth]{pictures/comparison_param/recall_vs_tiou.pdf}}
\end{center}
\caption{Comparison of the results achieved with different parametrizations of the SST network using C3D features from images of optical flow as input. The large gap between the best and worst parametrization results shows the importance of a proper parametrization, while the comparably small gap between the initial and best parametrization indicates that the initial parametrization already worked well.}
\label{fig:flow_stream_res_comp}
\end{figure*}
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Parameter & Initial Value & Best Value \\
\hline\hline
C3D features & late, \textit{fc7} & early, \textit{L2}, \textit{fc7}\\
Learning rate & 1e-3 & 1e-2\\
Dropout rate & 0.3 & 0.3\\
Neurons per GRU layer & 128 & 256\\
Number of GRU layers & 2 & 1\\
\hline
\end{tabular}
\end{center}
\caption{Configuration of the SST network operating on features from images of optical flow. The initial (left) and experimentally determined best (right) values are displayed. \textit{L2} stands for \textit{L2} feature normalization prior to usage.}
\label{tab:param_flow_stream}
\end{table}
\subsection{2S-Mid+ evaluation}
For these experiments, the already extracted features for the original images delivered with the TensorFlow implementation and the already extracted features for the images of optical flow extracted during the flow stream experiments are used. If being used, preprocessing steps are applied before those features are concatenated.
Experiments are conducted similar to the flow stream experiments, with the difference, that a reduced set of parameter values is explored based upon successful parameter values from the flow stream experiments. For each parameter setup, a new SST network is trained and evaluated on the concatenated features, as no pretrained model exists for the concatenated C3D feature vectors. Per configuration two SST networks were trained and evaluated.
The best results were achieved with the configuration in Table \ref{tab:param_var1_stream}. In Figure \ref{fig:var_1_and_2_res_comp} the comparison with the single-stream networks -- the original SST network and the TensorFlow implementation of the SST network -- shows that the additional usage of optical flow leads to improvements for major parts of both metrics, while for minor parts in both metrics results of a comparable level were achieved.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Parameter & Shared Stream \\
\hline\hline
C3D features (images) & \textit{L2}, \textit{fc6}\\
C3D features (flow) & \textit{L2}, early, \textit{fc7}\\
Learning rate & 1e-2\\
Dropout rate & 0.3\\
Neurons per GRU layer & 256\\
Number of GRU layers & 2\\
\hline
\end{tabular}
\end{center}
\caption{Parameters and their experimentally determined best values for the SST network of 2S-Mid+ that operates on the concatenated feature vectors.}
\label{tab:param_var1_stream}
\end{table}
\begin{figure*}
\begin{center}
\fbox{\includegraphics[width=0.46\textwidth]{pictures/var_1_and_2_renamed/average_recall.pdf}}
\begin{minipage}[t]{0.04\textwidth}\includegraphics[width=\textwidth]{pictures/dummy.png}\end{minipage}
\fbox{\includegraphics[width=0.46\textwidth]{pictures/var_1_and_2_renamed/recall_vs_tiou.pdf}}
\end{center}
\caption{Comparison between the results of 2S-Mid+ and 2S-Mid\textit{fc} of the two-stream model architectures using the experimentally determined best parametrization for them and the single-stream networks. For major parts of the metrics, the two-stream models manage to outperform the single-stream ones, with 2S-Mid+ outperforming 2S-Mid\textit{fc}.}
\label{fig:var_1_and_2_res_comp}
\end{figure*}
\subsection{2S-Mid\textit{\textbf{fc}} evaluation}
The two streams are fused inside the feature extractor, thus, already extracted features cannot be used for the experiments. Instead, the existing weights for the feature extraction on image data and the weights determined during the flow stream experiments for feature extraction on images of optical flow are used for initialization. Training is done as described above using the UCF101 dataset for training the fused C3D networks. The weights used for initialization remain fixed in a first training phase used to determine the not preinitialized weights; an optional subsequent fine-tuning where no weights remain fixed is investigated as well. Solely the \textit{fc7} layer is investigated for feature extraction, as the streams are separate before that layer.
After the extraction of the C3D feature vectors, the training and subsequent evaluation of the SST network takes place. As with the experiments for the previous variant, different parameter configurations derived from successful parameter values of the flow stream experiments are investigated, with two SST networks being trained and evaluated per configuration.
Among all experiments, the configuration in Table \ref{tab:param_var2_stream} produced the best results. The parametrization is, apart from the obvious deviation in the used C3D features caused by the design of the two-stream network, identical to the one which produced the best results for 2S-Mid+. The comparison of the performance concerning the two metrics can be found in Figure \ref{fig:var_1_and_2_res_comp}. Concerning both metrics, 2S-Mid\textit{fc} achieves in comparison with the single-stream networks improvements for major parts of both metrics as well but produces slightly worse results than 2S-Mid+.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Parameter & Shared Stream \\
\hline\hline
C3D features & \textit{L2}, no finetuning, \textit{fc7}\\
Learning rate & 1e-2\\
Dropout rate & 0.3\\
Neurons per GRU layer & 256\\
Number of GRU layers & 2\\
\hline
\end{tabular}
\end{center}
\caption{Parameters and their experimentally determined best values for the SST network of 2S-Mid\textit{fc}.
}
\label{tab:param_var2_stream}
\end{table}
\subsection{2S-LateAvg evaluation}
Because the fusion takes place right after the confidence scores of each stream are created, the pretrained SST network and the SST networks from the flow stream experiments can be used. For $\alpha$ the values 1/3, 1/2, and 2/3 are investigated.
Different sets of parameter values are explored for the hyperparameters of the SST network of the flow stream. The hyperparameters of the SST network of the video stream remain untouched. To produce first results, no further training is needed, as the whole two-stream model can be initialized with pretrained weights; an optional common fine-tuning of the two SST networks based upon the weighted average of the confidence scores is investigated as well.
The parametrization that delivered the best results among all experiments for 2S-LateAvg is displayed in Table \ref{tab:param_var3_stream}. The comparison with the single-stream networks concerning the two known metrics is displayed in Figure \ref{fig:var_3_and_4_res_comp}.
For major parts -- even for small tIoU -- improvements are achieved.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Parameter & Flow Stream & Image Stream \\
\hline\hline
C3D features & early, \textit{L2}, \textit{fc7} & \textit{fc6}\\
Learning rate & 1e-2 & 1e-3\\
Dropout rate & 0.3 & 0.3\\
Neurons per GRU layer & 256 & 128\\
Number of GRU layers & 1 & 2\\
Flow stream weight $\alpha$ & \multicolumn{2}{c|}{0.5}\\
\hline
Common Finetuning & \multicolumn{2}{c|}{Not used}\\
\hline
\end{tabular}
\end{center}
\caption{Parameters and their experimentally determined best values for the two separate SST networks utilized by 2S-LateAvg. Different parametrizations were only examined for the SST network of the flow stream.}
\label{tab:param_var3_stream}
\end{table}
\begin{figure*}
\begin{center}
\fbox{\includegraphics[width=0.46\textwidth]{pictures/var_3_and_4_renamed/average_recall.pdf}}
\begin{minipage}[t]{0.04\textwidth}\includegraphics[width=\textwidth]{pictures/dummy.png}\end{minipage}
\fbox{\includegraphics[width=0.46\textwidth]{pictures/var_3_and_4_renamed/recall_vs_tiou.pdf}}
\end{center}
\caption{Comparison between the results of 2S-LateAvg, 2S-Late\textit{fc} and the single-stream networks. For major parts of the metrics, the two-stream models manage to outperform the single-stream ones, with 2S-LateAvg also achieving improvements for very small tIoU.}
\label{fig:var_3_and_4_res_comp}
\end{figure*}
\subsection{2S-Late\textit{\textbf{fc}} evaluation}
Feature extraction is performed as in 2S-LateAvg, but already trained SST networks cannot be fully employed, as the fusion is done by a shared fully connected layer of both separate SST networks. Therefore, weights of those networks can only be used for initialization up to the point of fusion. Training is done as described above. Weights used for the initialization remain fixed while training the fully connected layer, but an optional fine-tuning of all weights of the fused SST networks is investigated as well.
Similar to the experiments above, the hyperparameters for the part belonging to the video stream remain fixed, whereas those for the flow stream are explored. For each configuration, two training and evaluation procedures are performed for the fused SST networks.
The parametrization producing the best results can be seen in Table \ref{tab:param_var4_stream}. It can be seen that the values of the parameters that are common to 2S-LateAvg and 2S-Late\textit{fc} are the same, therefore showing consistency. A comparison with the single-stream networks concerning both known metrics is shown in Figure \ref{fig:var_3_and_4_res_comp}. Again, improvements are achieved for major parts of both metrics in comparison with the single-stream networks.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Parameter & Flow Stream & Image Stream \\
\hline\hline
C3D features & early, \textit{L2}, \textit{fc7} & \textit{fc6}\\
Separate learning rate & 1e-2 & 1e-3\\
Common learning rate & 1e-3 & 1e-3\\
Dropout rate & 0.3 & 0.3\\
Neurons per GRU layer & 256 & 128\\
Number of GRU layers & 1 & 2\\
\hline
Common Finetuning & \multicolumn{2}{c|}{Not used}\\
\hline
\end{tabular}
\end{center}
\caption{Parameters and values for the two separate SST networks in 2S-Late\textit{fc}. The `separate learning rate' denotes the learning rate used to pretrain the two separate SST networks, whose weights are used to initialize the separate sequence encoders. The `common learning rate' denotes the learning rate used to train the common \textit{fc} layer after the preinitialized sequence encoders.}
\label{tab:param_var4_stream}
\end{table}
\subsection{Optical flow experiments}
Until now, experiments were conducted using the method of Brox \etal \cite{brox2004high} for optical flow. To investigate whether the observed improvements hold when the method of calculating optical flow is changed, FlowNet2 \cite{ilg2017flownet} is used for 2S-LateAvgFN. This method uses a neural network for supervised learning of optical flow, in contrast to the traditional optimization approach of Brox \etal.
A C3D network and a single-stream SST network are trained the same way as before, using the best configuration from 2S-LateAvg. The determined weights are used for initialization of the flow stream of 2S-LateAvgFN. Experiments with this parametrization and these weights are conducted just as when using optical flow calculated with the method of Brox \etal. The results are slightly worse compared to the case where the method of Brox \etal{} is used, but remain on a comparable level, achieving improvements in comparison to the single-stream networks.
\subsection{Summary}
All four two-stream models lead to improvements compared to the single-stream networks. This indicates that the utilization of 3D convolutions in a two-stream setup makes sense for the task of temporal action proposal generation. A tabular comparison is shown in Table \ref{tab:res_summary}. 2S-Mid+ and 2S-LateAvg perform best, with negligible differences in performance. They have in common that the fusion of both streams takes place outside of the actual neural networks and thus does not get learned.
\section{Conclusion}
In this work, four different two-stream model architectures with different fusions, utilizing sequences of images on one stream and images of optical flow on the other stream, were investigated for the purpose of temporal action proposal generation. By utilizing sequences of images of optical flow on the second stream in addition to sequences of the original images on the first, and processing them using 3D convolutions on both streams, improvements were achieved for all explored two-stream models in comparison to the single-stream models omitting a second stream. By investigating a second method of calculating optical flow and achieving improvements as well, it was also shown that the improvement is not bound to a certain method of calculating optical flow. Apart from showing that the general approach of combining a two-stream architecture with 3D convolutions is beneficial for the task of temporal action proposal generation, a suitable basis for further work on the larger field of action localization has been created.
\begin{table}[hb]
\begin{center}
\begin{tabular}{|l|c|}
\hline
Network & Score\\
\hline\hline
Original SST network & 0.6025\\
TensorFlow Implementation of SST network & 0.6295\\
SST network (images of optical flow) & 0.6320\\
2S-Mid+ & 0.6497\\
2S-Mid\textit{fc} & 0.6438\\
2S-LateAvg & 0.6495\\
2S-Late\textit{fc} & 0.6466\\
2S-LateAvgFN & 0.6436\\
\hline
\end{tabular}
\end{center}
\caption{Comparison of the single-stream networks with the different two-stream models. The displayed score refers to the metric `average recall at average 1000 proposals'. The scores for the two-stream networks and the single-stream network with optical flow come from the best experiments presented in this work. It can be seen that all the single-stream variants of the SST networks are surpassed by every single two-stream model, even if the calculation method of optical flow is changed. Best results are achieved with 2S-Mid+ and 2S-LateAvg.}
\label{tab:res_summary}
\end{table}
{\small
\bibliographystyle{ieee}
\section{Introduction}\label{sec:introcuction}
A $\varphi$ junction~\cite{mints:1998,buzdin:2003} is a Josephson junction with {a doubly degenerate} ground state, in which the Josephson phase takes the values $+\varphi$ or $-\varphi$ ($0<\varphi<\pi$)~\cite{goldobin:2007}. This junction being closed into a ring is able to self-generate a fractional flux $\Phi_0\varphi$/(2$\pi$), where $\Phi_0$ is the magnetic flux quantum.
{In this sense the $\varphi$ junction is a generalisation of the $\pi$ junction~\cite{bulaevskii:1977} which has a Josephson phase $+\pi$ or $-\pi$ in its ground state. It has been experimentally demonstrated that the $\pi$ junction improves the performance and simplifies the design of classical and quantum circuits~\cite{ustinov:2003,ortlepp:2006,feofanov:2010}. Since the $\varphi$ junction offers the possibility to choose a special value of the phase in the ground state it may further optimize these circuits.}
The initial $\varphi$ junction proposal \cite{mints:1998} investigated grain-boundary junctions, which were analysed experimentally in \cite{ilichev:1999}. From then on, $\varphi$ junctions were studied more and more intensively and many other systems appeared as possible candidates for the realisation of $\varphi$ junctions, e.g. \cite{buzdin:2003,goldobin:2007,cleuziou:2006,gumann:2009,pugach:2010,goldobin:2011,lipman:2012,alidoust:2013}. Only recently, experimental evidence of a $\varphi$ junction made of $0$ and $\pi$ parts~\cite{buzdin:2003,pugach:2010,goldobin:2011} was reported~\cite{sickinger:2012}. One half of the junction had the Josephson phase $0$ in its ground state and the other half the phase $\pi$. This was realised~\cite{sickinger:2012} by connecting two superconductor-insulator-ferromagnet-superconductor (SIFS) junctions in parallel. The advantage of this concept is that it is based on the technology already developed for the fabrication of $0$-$\pi$ junctions~\cite{hilgenkamp:2004,weides:2006}.
On the other hand, this $\varphi$ junction concept is difficult to realise experimentally because, e.g., a step in the thickness of the F layer must be realised with very high accuracy~\cite{pugach:2010,goldobin:2011,sickinger:2012}. A completely different method, the ``ramp-type overlap'' (RTO) $\varphi$ junction, was proposed by Bakurskiy et al.~\cite{bakurskiy:2012}. It only requires one small SFS junction located on a thin normal (N) metal layer, see figure~\ref{fig:junction_02}. This basic setup provides a miniaturized $\varphi$ junction. Moreover, this type of junction has already been realised experimentally for the analysis of the double proximity effect~\cite{golikova:2012}.
\begin{figure}[b]
\begin{center}
\includegraphics{figure1}
\caption{The geometry of the considered system. The Josephson junction consists of two superconducting (S) electrodes separated by a ferromagnetic (F) weak link of thickness $\dFt$ and length $L$. It is located on top of a thin normal (N) metal film of thickness $\dNt$.}
\label{fig:junction_02}
\end{center}
\end{figure}
A simple model~\cite{goldobin:2007} to show that the RTO junction can be used as a $\varphi$ junction requires its current-phase relation (CPR). By writing it in terms of a sine series
\begin{equation}
I(\phi)=A \sin(\phi)+B \sin(2\phi)
, \label{eq:cpr:introcuction}
\end{equation}
where $\phi$ is the Josephson phase, the amplitudes have to obey the conditions~\cite{goldobin:2007}
\begin{equation}
|B|>|A|/2 \quad \mathrm{and} \quad B<0
. \label{eq:phijunction:condition:a}
\end{equation}
The RTO junction, schematically shown in figure~\ref{fig:junction_02}, can fulfil these conditions because the current flows between the S electrodes through the F metal \emph{and} the N layer. In this way the properties of an SFS and an SNS junction are combined. The SFS junction can have a negative~\cite{golubov:2004,buzdin:2005} amplitude $\AF$ in~\eqref{eq:cpr:introcuction}, while the SNS junction has a positive~\cite{golubov:2004,likharev:1979} amplitude $\AN$ in~\eqref{eq:cpr:introcuction}. By adding both, the total amplitude $A$ can be minimized and a dominant negative amplitude $B$ from the SNS part is obtained to fulfil conditions~\eqref{eq:phijunction:condition:a}. Since supercurrents in SFS junctions are rather small, the SNS contribution has to be reduced. This is done by using only a thin normal metal film.
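For the CPR~\eqref{eq:cpr:introcuction}, setting $I(\phi)=0$ yields, besides $\phi=0,\pi$, the solution $\cos\varphi=-A/(2B)$, which conditions~\eqref{eq:phijunction:condition:a} turn into the doubly degenerate energy minimum at $\pm\varphi$. A minimal numerical sketch (with purely illustrative amplitudes) reads:
\begin{verbatim}
# Sketch: ground-state phase of I(phi) = A sin(phi) + B sin(2 phi).
import numpy as np

def ground_state_phase(A, B):
    if B < 0 and abs(B) > abs(A) / 2:      # phi-junction conditions
        return np.arccos(-A / (2.0 * B))   # +phi; -phi is degenerate
    return 0.0 if A > 0 else np.pi         # rough: ordinary 0 or pi state

print(ground_state_phase(A=0.1, B=-0.2))   # approx. 1.32 rad
\end{verbatim}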
{In the present paper we investigate an RTO junction which has, differently from the one proposed in~\cite{bakurskiy:2012}, transparent SF interfaces in order to amplify the SFS contribution to the total current. This assumption {has} already successfully {been} used to describe various experiments~\cite{golikova:2012,oboznov:2006,bannykh:2009}. As a result, we obtain slightly {smaller} system sizes for the $\varphi$ junction realisation than~\cite{bakurskiy:2012}, where weakly transparent interfaces were assumed. Moreover, our approach provides a better penetration of the superconducting correlations into the F layer which may increase the Josephson current. In the framework of transparent SF interfaces we cannot use linearised equations for the SFS part, as it was done in~\cite{bakurskiy:2012}. Therefore, we use non-linearised equations in the SFS \emph{and} SNS part for our analytical approach.}
We derive the CPR in the ``dirty'' limit. For this purpose, we combine the solution of the Usadel equations in the N film~\cite{bakurskiy:2012} with the solution of the Usadel equations in the SFS layer~\cite{buzdin:1991}. The resulting current phase relation consists of three parts: (i) a contribution from the SFS layer, (ii) a contribution from the N film and (iii) a composite SNFS term.
The paper is organized as follows. In section~\ref{sec:model} we introduce the model of the considered Josephson junction in terms of Usadel equations. The analytical expression of the CPR of our system is based on this model and presented in section~\ref{sec:calculation}. In section~\ref{sec:discussion} we use this expression together with realistic system parameters to discuss its applicability as $\varphi$ junction. Finally, an appendix provides a detailed derivation of the composite SNFS current.
\section{Model}\label{sec:model}
The considered Josephson junction is sketched in figure~\ref{fig:junction_02}. It consists of an SFS junction located on a normal metal film. The F layer has a thickness $\dFt$ and a length $L$, while the N layer has a thickness $\dNt$ and is considered as infinitely long. We have chosen the $x$ and $z$ axes in directions parallel and perpendicular to the plane of the N film, respectively.
For the calculation of the current $I(\phi)$ flowing from one superconducting electrode to the other we determine the Green's functions describing our system. Since we consider the ``dirty'' limit \cite{golikova:2012,oboznov:2006,bannykh:2009}, in which the elastic scattering length is much smaller than the characteristic decay length, we can use the Usadel equations~\cite{usadel:1970} to model our system. We write them in the form~\cite{golubov:2004}
\begin{eqnarray}
\frac{\xi_j^2}{G_j}\left[\frac{\partial}{\partial x} \left( G_j^2 \frac{\partial}{\partial x} \Phi_j \right)
+\frac{\partial}{\partial z} \left( G_j^2 \frac{\partial}{\partial z} \Phi_j \right) \right]
- \frac{\omegat}{\pi \Tc} \Phi_j = 0, \nonumber \\
G_j=\frac{\omegat}{\sqrt{\omegat^2+\Phi_j\Phi_j^*}}, \quad j \in \{\mathrm{N,F}\}
\label{eq:usadel:general}
\end{eqnarray}
in the N and F layer, respectively. Here, $\Phi_j$ and $G_j$ are the Usadel Green's functions in the $\Phi$ parametrization~\cite{kupriyanov:1988}. The frequencies $\omegat=\omega+\ii H$ contain the Matsubara frequencies $\omega=\pi T (2n+1)$ {at temperature $T$}, where $n=0,1,2,\ldots$, and the exchange field $H$ of the ferromagnetic material which is assumed to be zero in the N layer. The decay lengths
\begin{equation}
\xiN=\sqrt{\frac{D_\mathrm{N}}{2\pi \Tc}}, \quad \xiF=\sqrt{\frac{D_\mathrm{F}}{2\pi \Tc}}
\end{equation}
of the superconducting correlations are defined via the critical temperature $\Tc$ of the superconductor (we {use} $\hbar=k_\mathrm{B}=1$) and the diffusion coefficients $D_\mathrm{N}$ and $D_\mathrm{F}$ in the normal and ferromagnetic metal, respectively.
We assume that superconductivity {in the S electrodes} is not suppressed by the neighbouring N and F {layers}. This assumption is valid in our case of transparent SF interfaces {with} the conditions for the suppression parameters
\begin{eqnarray}
\gBSF=\frac{R_\mathrm{BSF}A_\mathrm{BSF}}{\rho_\mathrm{F}\xi_\mathrm{F}}\ll 1,\quad \gSF=\frac{\rho_\mathrm{S}\xi_\mathrm{S}}{\rho_\mathrm{F}\xi_\mathrm{F}} \ll 1, \\
\gBSN=\frac{R_\mathrm{BSN}A_\mathrm{BSN}}{\rho_\mathrm{N}\xi_\mathrm{N}}\gg \gSN=\frac{\rho_\mathrm{S}\xi_\mathrm{S}}{\rho_\mathrm{N}\xi_\mathrm{N}}
. \label{eq:gamma:bsn}
\end{eqnarray}
Here, $R_\mathrm{BSN,BSF}$ and $A_\mathrm{BSN,BSF}$ are the resistances and areas of the SN and SF interfaces. The values of $\rho_\mathrm{N,F,S}$ describe the resistivity of the N, F, and S metals.
This allows us to use the rigid boundary conditions~\cite{golubov:2004}
\begin{equation}
\Phi_\mathrm{S}(\pm L/2)=\Delta \exp(\pm\ii \phi/2), \quad G_\mathrm{S}=\frac{\omega}{\sqrt{\omega^2+\Delta^2}}
, \label{eq:bc:rigid}
\end{equation}
where $\Delta$ is the absolute value of the order parameter in the superconductor.
The boundary conditions~\cite{kupriyanov:1988,koshina:2000,golubov:2004} at the free interfaces are
\begin{equation}
\frac{\partial}{\partial z} \Phi_j=0,\quad j \in \{\mathrm{N,F}\}
, \label{eq:bc:free}
\end{equation}
and at the interfaces of the superconductor they are
\begin{eqnarray}
\gBSN \xiN \PartDeriv{\PhiN}{z}=\frac{\GS}{\GN} \left[\PhiS(\pm L/2) - \PhiN \right]
\label{eq:bc:SN}
\end{eqnarray}
and
\begin{eqnarray}
\PhiF=\frac{\omegat}{\omega}\PhiS(\pm L/2)
. \label{eq:bc:SF}
\end{eqnarray}
Additionally we use
\begin{equation}
\gBNF \xiF \PartDeriv{\PhiF}{z} = \frac{\GN}{\GF}\left( \frac{\omegat}{\omega} \PhiN - \PhiF \right)
\label{eq:bc:NF}
\end{equation}
at the NF interfaces, where
\begin{equation}
\gBNF=\frac{R_\mathrm{BNF}A_\mathrm{BNF}}{\rho_\mathrm{F}\xi_\mathrm{F}}
\label{eq:gamma:bnf}
\end{equation}
is defined analogous to~\eqref{eq:gamma:bsn}.
Finally we calculate the total current
\begin{equation}
I(\phi)=I_\mathrm{N}(\phi)+I_\mathrm{F}(\phi)
\label{eq:current:tot:def}
\end{equation}
by integrating the standard expressions~\cite{golubov:2004} for the current densities of the N and F part over the junction cross section along the $z$ axis. This leads us to
\begin{eqnarray}
I_\mathrm{N}(\phi)&=&\ii \frac{\pi T W}{2 e \rhoN} \sum_{\omega=-\infty}^{\infty} \int_{0}^{\dNt} \dd z \frac{\GN^2}{\omega^2} \nonumber \\
&& \times \left[ \PhiN(\omega)\ \PartDeriv{}{x} \PhiN^*(-\omega) - \PhiN^*(-\omega)\ \PartDeriv{}{x} \PhiN(\omega) \right]_{x=0}
\label{eq:current:n:tot:def}
\end{eqnarray}
and
\begin{eqnarray}
I_\mathrm{F}(\phi)&=&\ii \frac{\pi T W}{2 e \rhoF} \sum_{\omega=-\infty}^{\infty} \int_{\dNt}^{\dNt+\dFt} \dd z \frac{\GF^2}{\omegat^2} \nonumber \\
&& \times \left[ \PhiF(\omega)\ \PartDeriv{}{x} \PhiF^*(-\omega) - \PhiF^*(-\omega)\ \PartDeriv{}{x} \PhiF(\omega) \right]_{x=0}
. \label{eq:current:f:tot:def}
\end{eqnarray}
The width $W$ of the junction {along the $y$ axis} is supposed to be small compared to the Josephson penetration depth. We have chosen the position $x=0$ for the integration over the junction cross section since the $z$ component of the current densities vanishes there because of the symmetry of the considered {junction geometry}.
\section{Currents}\label{sec:calculation}
In order to calculate the current $I(\phi)$ from~\eqref{eq:current:tot:def} we cannot simply add the current through the N layer calculated by Bakurskiy et al.~\cite{bakurskiy:2012} to the SFS current calculated by Buzdin et al.~\cite{buzdin:1991} because we have to take into account a composite SNFS current which appears due to a penetration of superconductivity from the N layer into the F layer. Therefore, we split the current $I_\mathrm{F}(\phi)$ into a contribution $I_\mathrm{F,dir}(\phi)$ due to a direct penetration of superconductivity into the F layer and the additional part $I_\mathrm{NF}(\phi)$. This leads us to
\begin{equation}
I(\phi)=I_\mathrm{N}(\phi)+I_\mathrm{F,dir}(\phi)+I_\mathrm{NF}(\phi)
. \label{eq:current:tot:def:new}
\end{equation}
In the following three sections we derive the expressions of these three currents using the scaling
\begin{equation}
\widetilde{I}_{j}(\phi) = I_{}(\phi) \frac{e \rho_j}{W \Delta }
. \label{eq:current:scaling}
\end{equation}
\subsection{Current in the N layer} \label{sec:n-part}
In this layer we adopt the current
\begin{eqnarray}
\widetilde{I}_\mathrm{N}(\phi)
= 2 \frac{\dNt T}{\xiN \Tc} \sum_{\omega>0} \frac{\Gamma(\phi)}{\mu(\phi)}\, r \sin(\phi)
\label{eq:cpr:sum:n}
\end{eqnarray}
with the definitions
\begin{eqnarray}
\Gamma(\phi) = \frac{r \delta \sqrt{\gBM \Omega + \GS}}
{ \sqrt{2 \gBM \Omega (\sqrt{\Omega^2+\delta^2 r^2} + \mu(\phi))}}, \label{eq:def:gamma_phi} \\
\delta = \frac{\Delta}{\pi \Tc}, \quad \gBM = \frac{\gBSN \dNt}{\xiN}, \quad \Omega=\frac{\omega}{\pi \Tc}, \\
r = \left( \frac{\gBM}{\pi \Tc} \sqrt{\omega^2 + \Delta^2 }+1 \right)^{-1}, \label{eq:def:r}\\
\mu(\phi) = \sqrt{ \Omega^2 + r^2 \delta^2 \cos^2 ({\phi}/{2}) } \label{eq:def:mu}
\label{eq:functions:n:sinphi}
\end{eqnarray}
\label{eq:functions:n:def}
from~\cite{bakurskiy:2012}. Its derivation is based on the assumption $L \ll \xiN$ and an infinitely long N layer. It is calculated with the help of the solution $\PhiN(x)$~\eqref{eq:green:n} of the non-linear Usadel equations which depends only on the coordinate $x$ because the thickness $\dNt \ll \xiN$ is assumed to be small.
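Equations \eqref{eq:cpr:sum:n}--\eqref{eq:def:mu} are straightforward to evaluate numerically. The following Python sketch sums the series with all energies in units of $\Tc$ ($\hbar=k_\mathrm{B}=1$); the frequency cut-off and the parameter values are illustrative assumptions, not the values used later in section~\ref{sec:discussion}.
\begin{verbatim}
# Numerical sketch of the Matsubara sum for the scaled N-layer current.
import numpy as np

def i_n(phi, T=0.1, Delta=1.76, d_n=0.5, xi_n=1.0, g_bsn=1.0, n_max=2000):
    delta = Delta / np.pi                   # delta = Delta/(pi Tc), Tc = 1
    g_bm = g_bsn * d_n / xi_n
    w = np.pi * T * (2 * np.arange(n_max) + 1)   # omega > 0
    Om = w / np.pi
    Gs = w / np.sqrt(w**2 + Delta**2)
    r = 1.0 / (g_bm * np.sqrt(w**2 + Delta**2) / np.pi + 1.0)
    mu = np.sqrt(Om**2 + (r * delta * np.cos(phi / 2))**2)
    Gam = (r * delta * np.sqrt(g_bm * Om + Gs)
           / np.sqrt(2 * g_bm * Om
                     * (np.sqrt(Om**2 + (delta * r)**2) + mu)))
    return 2 * d_n * T / xi_n * np.sum(Gam / mu * r) * np.sin(phi)
\end{verbatim}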
\subsection{Current in the F layer} \label{sec:f-part}
The current
\begin{eqnarray}
I_\mathrm{F,dir}(\phi) = \sqrt{2}\,64 {\dFt} \kappa\, \ee^{-2\kappa L} \mathcal{F} \sin\left( 2\kappa L + \frac{\pi}{4} \right)\sin\phi
, \label{eq:cpr:f}
\end{eqnarray}
with
\begin{eqnarray}
\kappa=\frac{\sqrt{h}}{\sqrt{2} \xiF}, \quad { h=\frac{H}{\pi \Tc}, }\quad \mathcal{F}= \pi T \sum_{\omega>0} \frac{\Theta^2}{\Delta}, \label{eq:def:kappa} \\
\Theta = \frac{\Delta}{\eta+|\omega|+\sqrt{2\eta(\eta+|\omega|)}}, \quad \eta=\sqrt{\omega^2+\Delta^2}, \label{eq:def:eta}
\end{eqnarray}
is a result of~\cite{buzdin:1991}. It also has been calculated with the help of a solution of the non-linear Usadel equations because $\gBSF=0$ is assumed. Additionally the condition $\xiF \ll L$ is required.
\subsection{Composite NF current}\label{sec:fn-part}
We determine the current $I_\mathrm{NF}(\phi)$ by combining the two non-linear solutions $\PhiFdir(x)$ and $\PhiN(x)$ of~\eqref{eq:green:F} and~\eqref{eq:green:n} in~\ref{sec:nf:current:deriv}. The main idea is to decompose the ferromagnetic Green's function
\begin{equation}
\PhiF(x,z) = \PhiFdir(x) + \PhiNF(x,z)
\label{eq:green:F:total}
\end{equation}
into a function $\PhiFdir(x)$, which corresponds to currents only flowing in the F layer, and a function $\PhiNF(x,z)$, which corresponds to currents flowing through the N layer into the F layer. The second function is obtained by linearising the Usadel equations~\eqref{eq:usadel:general} in the F {layer}. Then we connect it to the N layer solution $\PhiN(x)$ via the boundary conditions.
The superposition~\eqref{eq:green:F:total} of the solution $\PhiFdir$ of the non-linear Usadel equation with the solution $\PhiNF$ of the linearised Usadel equation is valid because we distinguish in the F part between two cases: (i) at $x \approx \pm L / 2$ near the boundaries to the S regions the Green's function $\PhiFdir$ is very dominant $\left|\PhiFdir \right| \gg \left| \PhiNF \right|$ due to a transparent boundary between the S and the F part, that is $\gBSF=0$; (ii) at $x \approx 0$, that is away from the boundaries the contribution of $\PhiF$ decays exponentially. Therefore, the contribution from the N part is dominant $\left| \PhiNF \right| \gg \left| \PhiFdir \right|$.
As a result~\eqref{eq:cpr:sum:fn} we obtain the current
\begin{eqnarray}
\widetilde{I}_\mathrm{NF}(\phi)
= \frac{16 \cos(\phi/2) \xiF}{\gBNF h \Delta \xiN} \ee^{-\kappa L/2} \nonumber \\
\times \left[ \sin\frac{\kappa L}{2} + \frac{\kappa L}{\sqrt{2}} \ee^{-\kappa L/2} \cos \left(\kappa L+\frac{\pi}{4}\right) \right] \nonumber \\
\times 2\pi T \sum_{\omega>0} \Theta\, \Gamma(\phi) \sin \frac{\phi}{2}
, \label{eq:cpr:sum:fn:main}
\end{eqnarray}
with the definitions of $\Gamma(\phi)$ from~\eqref{eq:def:gamma_phi}, $\kappa$ from~\eqref{eq:def:kappa} and $\Theta$ together with $\eta$ from~\eqref{eq:def:eta}.
\section{Discussion}\label{sec:discussion}
In this section we estimate the geometrical parameters $\dNt$, $\dFt$ and $L$, see figure~\ref{fig:junction_02}, for which the considered Josephson junction obeys the $\varphi$ junction conditions~\eqref{eq:phijunction:condition:a}. We use the analysing scheme of~\cite{bakurskiy:2012} and finally compare our results with the ones {obtained in~\cite{bakurskiy:2012}}.
We split the sine series amplitudes
\begin{equation}
A=\AN+\AFdir+\ANF
, \label{eq:A}
\end{equation}
\begin{equation}
B=\BN+\BNF
\label{eq:B}
\end{equation}
of the total current~\eqref{eq:current:tot:def:new}, scaled according to~\eqref{eq:current:scaling}, into parts originating from the current of the N layer~\eqref{eq:cpr:sum:n}, the F layer~\eqref{eq:cpr:f} and the composite NF current~\eqref{eq:cpr:sum:fn:main}. There is no amplitude $\BFdir$ because we have {a} pure sinusoidal CPR~\eqref{eq:cpr:f} in the F layer.
In our calculations we chose the temperature $T=0.1\,\Tc$. We make this choice because far away from the critical temperature the CPR has larger deviations from the $\sin\phi$ form~\cite{likharev:1979} which results in a larger second harmonic $B$. As S electrode material we chose Nb with $\Tc=9.2\units{K}$ because it is commonly used in superconducting circuits.
Our first step is to find suitable values of $\dFt$. For this purpose we analyse the amplitudes~\eqref{eq:A} and~\eqref{eq:B} as a function of $L$ for different values of $\dFt$ for the same parameters as in~\cite{bakurskiy:2012}: $\dNt=0.\dN\,\xiN$, $\xi_F=0.1\,\xi_N$, $H=10\,\Tc$, $\Delta=1.76\,\Tc$, $\rhoF=\rhoN=\rho$ and $\gBNF=1$. Figure~\ref{fig:cpr:total} shows three typical examples: (a) $\dFt=0.\dFa\,\xi_N$, (b) $\dFt=0.31\,\xi_N$ and (c) $\dFt=0.\dFb\,\xi_N$. The first (a) and last (c) examples correspond to limiting cases in which it is difficult to realize a $\varphi$ junction because the intervals of $L$ where conditions~\eqref{eq:phijunction:condition:a} hold are not large. These intervals of $L$ are highlighted by bold lines. In between the two limiting values of $\dFt$ these intervals become longer. Figure~\ref{fig:cpr:total}(b) shows an optimum situation because there is a wide range of $L$ which yields a $\varphi$ junction configuration.
\begin{figure}[ht]
\begin{center}
\includegraphics{figure2a_2c}
\caption{The functions $|A|/2$ and $|B|$, based on~\eqref{eq:A} and~\eqref{eq:B}, as functions of $L$ for $\dNt=0.\dN\,\xiN$ and three characteristic values of $\dFt$. The bold lines correspond to values of $L$ where the conditions~\eqref{eq:phijunction:condition:a} for the $\varphi$ junction realization are fulfilled.}
\label{fig:cpr:total}
\end{center}
\end{figure}
For the optimum value $\dFt=0.31\,\xi_N$ we calculate the magnitudes \mbox{$\Upsilon_A=\AN{W \Delta }/{(e \rho)} = 0.534$} and \mbox{$\Upsilon_B= {\BN} {W \Delta }/{(e \rho)} = -0.106$}. Inserting them together with the amplitude $\AFdir$ from~\eqref{eq:cpr:f} into~\eqref{eq:phijunction:condition:a} and neglecting the small NF contributions leads us to the condition
\begin{equation}
\left| \Upsilon_A +\frac{1}{\varepsilon} \Psi(L) \right| < 2 \left| \Upsilon_B \right|
. \label{eq:phijunction:condition:disc}
\end{equation}
Here, we use the constant $\varepsilon={\xiF}/{(64 \mathcal{F} \sqrt{h} \dFt)}$ with $\dFt=0.31\,\xi_N$, $\mathcal{F}=0.0691$ and
\begin{equation}
\Psi(L)=\exp(-2\kappa L)\sin\left(2\kappa L+ \pi/4 \right)
.
\end{equation}
From~\eqref{eq:phijunction:condition:disc} we find the minimum value $0.\La\,\xiN$ and the maximum value $0.\Lb\,\xiN$ of $L$.
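This window is straightforward to reproduce numerically. A minimal Python sketch of the scan over $L$ follows; the values of $\kappa$ and $\varepsilon$ below are placeholders, since in the junction considered here they follow from the material parameters via~\eqref{eq:def:kappa} and the definition of $\varepsilon$ above.
\begin{verbatim}
import numpy as np

# Sketch: scan the junction length L for the phi-junction window
# |Upsilon_A + Psi(L)/eps| < 2 |Upsilon_B|.  kappa and eps are
# placeholder inputs; L is measured in units of xi_N.
U_A, U_B = 0.534, -0.106      # amplitudes quoted above
kappa = 1.0                   # placeholder, units 1/xi_N
eps = 0.005                   # placeholder for xi_F/(64 F sqrt(h) d_F)

L = np.linspace(0.01, 4.0, 8000)
Psi = np.exp(-2 * kappa * L) * np.sin(2 * kappa * L + np.pi / 4)
ok = np.abs(U_A + Psi / eps) < 2 * np.abs(U_B)

if ok.any():
    print("window (envelope): L in "
          f"[{L[ok].min():.3f}, {L[ok].max():.3f}] xi_N")
else:
    print("no phi-junction window for these parameters")
\end{verbatim}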
For summarising our suggestion of the geometrical configuration of a $\varphi$ junction we use the value $\xiN=100\units{nm}$ for Cu as N layer, and a strongly diluted ferromagnet such as FePd or the CuNi alloy with $\xiF=10\units{nm}$ and $H = 10\,\Tc$ as F metal. Our set of parameters then becomes \mbox{$\dNt \gtrsim 50 \units{nm}$}, \mbox{$\dFa \units{nm} \lesssim \dFt \lesssim \dFb \units{nm}$} and \mbox{$\La \units{nm} \lesssim L \lesssim \Lb \units{nm}$}, which we compare to the values \mbox{$\dNt \gtrsim 50 \units{nm}$}, \mbox{$19 \units{nm} \lesssim \dFt \lesssim 48 \units{nm}$} and \mbox{$7 \units{nm} \lesssim L \lesssim 22 \units{nm}$} of~\cite{bakurskiy_note}.
Since we use the same N layer configuration, the value for $\dNt$ is the same. But the suggested regime for $\dFt$ differs. A change in this direction was expected because we only need a thin F layer since the transparency of our interfaces already amplifies our SFS current contribution. The possible range for the length $L$ of the F part is smaller in our case but the whole junction configuration is still experimentally feasible.
\section{Conclusion}\label{sec:conclusion}
We have shown that the considered Josephson junction with a ferromagnetic weak link located on a thin normal metal film is a good candidate for a $\varphi$ junction realisation. By choosing transparent SF interfaces we obtained slightly different system sizes for the existence of the $\varphi$ junction compared to a junction with weakly transparent interfaces.
The current was split into a contribution through the N layer, the F layer and a composite term which describes the current flowing through the N and F parts of the junction simultaneously. We performed our calculations in the ``dirty'' limit, that is, the currents are obtained from solutions of the non-linear Usadel equations.
Since our case of a large interface transparency corresponds better to the experimental situation~\cite{oboznov:2006,bannykh:2009,golikova:2012} than weakly transparent interfaces~\cite{bakurskiy_note}, it is important to note that a smaller thickness and length of the F layer have to be chosen than predicted in~\cite{bakurskiy_note}. We are looking forward to experiments realising this $\varphi$ junction and its application in classical and quantum devices.
\ack
We thank S V Bakurskiy for fruitful and stimulating discussions. DMH thanks Professor W P Schleich and K Vogel for giving him the opportunity to work at the M.~V. Lomonosov Moscow State University. Financial support by the DFG (Project No. SFB/TRR-21), the Russian Foundation for Basic Research (RFBR grants No. 11-02-12065-ofi-m, 13-02-01452-a) and the Ministry of Education and Science of the Russian Federation (grant 8641) is gratefully acknowledged.
\section{Introduction}
Suppose $X_1, X_2, \ldots$ is a sequence of random variables. A classic problem in probability is to understand the limiting behavior of the sum
\begin{align*}
Y_n=X_1+X_2+\cdots +X_n.
\end{align*} Early versions of this problem, which consider the case when $X_1, X_2, \ldots$ are independent Bernoulli trials, are rooted in the work of de Moivre \cite{deMoivre} and Laplace \cite{Laplace} pertaining to normal approximations to the binomial distribution. These classic results mark the beginnings of a long standing problem of approximating laws of sums of random variables by normal distributions. This two-hundred year problem ultimately culminated with what is today known as the central limit theorem. Major contributions towards our modern understanding of the central limit theorem are attributed, in particular, to L\'evy \cite{Levy}, Lindeberg \cite{Lindeberg} and Lyapunov \cite{Lyapunov}, among several others.
Suppose $X_1, X_2, \ldots$ are independent and identically distributed (i.i.d.)~random variables with mean $\mu$ and finite positive variance $\sigma^2$. The version of the central limit theorem attributed to L\'evy and Lindeberg \cite[Thm.~27.1]{Billingsley} asserts that the sequence of random variables $(Y_n-n\mu)/(\sigma\sqrt{n})$ converges in distribution to a standard normal. A theorem attributed to Lyapunov \cite[Thm.~27.3]{Billingsley} shows the assumption that the variables $X_1, X_2, \ldots$ be identically distributed can even be relaxed as long as the absolute moments of the $X_i$ satisfy a certain (Lyapunov) growth condition.
The present focus is to study the limiting behavior of sequences of the form
\begin{align}
Z_n(\boldsymbol{a})=a_1X_1+a_2X_2+\cdots +a_nX_n,\label{WeightedSum}
\end{align} in which $X_1, X_2, \ldots $ are i.i.d.~random variables and the weights $a_1, a_2,\ldots $ correspond to either the eigenvalues or the singular values of a random symmetric matrix. Specifically, we take eigenvalues and singular values corresponding to the Erd\H{o}s-R\'enyi-Gilbert random graph model. A random graph in this model, which was developed independently by Erd\H{o}s-R\'enyi \cite{Erdos1,Erdos2} and Gilbert \cite{Gilbert}, is constructed by attaching edges among a set of labeled vertices independently with probability $p$. The random variables $a_iX_i$ in this case are neither independent nor identically distributed, and there is no general method available to handle this situation. However, adjacency matrices of Erd\H{o}s-R\'enyi-Gilbert graphs have bounded entries which, modulo the constraints imposed by symmetry, are independent. This simple fact, together with the almost sure convergence of their empirical spectral distributions to the semicircular law, allows us to establish central limit-type theorems for the sequences $n^{-1}Z_n(\boldsymbol{a})$.
\subsection{Notation and Terminology}
Graph theoretic terminology may be found in \cite{Bollobas}. A \emph{graph $G$ of order $n$} is an ordered pair $(V, E)$ consisting of a set $E=E(G)$ of \emph{edges} and a set $V=V(G)$ of \emph{vertices} such that $|V|=n$. We adopt standard notation and let $m=|E(G)|$ denote the number of edges in a graph $G$. A graph is \emph{simple} if it contains no loops or multiple edges and it is \emph{connected} if every pair of its vertices is joined by a path. If $G$ is a graph of order $n$, then its \emph{adjacency matrix} is the $n\times n$ real symmetric matrix $A(G)$ whose entries are defined by setting $[A(G)]_{ij}=1$ if vertices $i$ and $j$ are connected by an edge and $[A(G)]_{ij}= 0$ otherwise. The \emph{spectrum} of $G$ is the spectrum of its adjacency matrix $A(G)$, and is therefore real since $A(G)$ is Hermitian. We adopt a standard convention and write the spectrum of $G$ in non-increasing order,
\begin{align*}
\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n.
\end{align*} Moreover, the \emph{singular spectrum} of $G$ consists of the singular values of $A(G)$. We remark that the singular values of the Hermitian matrix $A(G)$ correspond to the moduli of its eigenvalues. Again, we adopt a standard convention and write the singular spectrum of $G$ in non-increasing order,
\begin{align*}
s_1\geq s_2\geq \cdots \geq s_n.
\end{align*}
For $q\geq 1$, we let $L^q(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ denote the vector space of random variables defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ with finite $L^q$-norm defined by
\begin{align*}
\norm{X}_q=\Big(\mathbb{E}|X|^q\Big)^{1/q}.
\end{align*} A random variable $X$ defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ is called \emph{sub-gaussian} if $\norm{X}_q\leq \norm{X}_{\psi_2}\sqrt{q}$ for all $q\geq 1$, where
\begin{align*}
\norm{X}_{\psi_2}=\sup_{q\geq 1} \Big\{ q^{-1/2}\norm{X}_q\Big\}
\end{align*} is called the \emph{sub-gaussian norm of} $X$ \cite[Def.~2.5.6]{Vershynin}. Gaussian, Bernoulli and bounded random variables are typical examples of sub-gaussian random variables \cite[Ex.~2.5.8]{Vershynin}.
\subsection{Statement of Results}
This paper establishes central limit-type theorems for the sequences of weighted sums
\begin{align}
W_n(\boldsymbol{a})=\frac{1}{n}\sum_{j=1}^n a_jX_j,\label{EigenSequence}
\end{align} in which $X_1, X_2, \ldots, X_n$ are i.i.d.~random variables and $a_1, a_2, \ldots, a_n$ correspond to either the eigenvalues or the singular values of Erd\H{o}s-R\'enyi-Gilbert graphs. Graphs in the \emph{Erd\H{o}s-R\'enyi-Gilbert $\mathcal{G}(n,p)$ model} are constructed by attaching edges between each vertex pair from a set of $n$ labeled vertices independently with probability $p\in [0,1]$. Here and henceforth we assume $p\neq 0,1$.
The first two theorems illustrate the relatively simple limiting distributions of $W_n(\boldsymbol{\lambda})$ in the case when certain symmetry conditions are imposed on $X_1, X_2, \ldots$. The third theorem illustrates the simple limiting distributions of $W_n(\boldsymbol{s})$ in the case when $X_1,X_2,\ldots$ are sub-gaussian with mean zero but not necessarily symmetric.
\begin{theorem}\label{MainTheorem1}
Suppose $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$. If $X_1, X_2, \ldots, X_n\sim X$ are i.i.d.~ random variables, then
$W_n(\boldsymbol{\lambda})/(\sigma\sqrt{p})$ converges in distribution to a standard normal.
\end{theorem}
\begin{theorem}\label{MainTheorem2}
Suppose $X$ is a symmetric sub-gaussian random variable with variance $\sigma^2$. If $X_1, X_2, \ldots, X_n\sim X$ are i.i.d.~random variables, then
\begin{align*}
W_n(\boldsymbol{\lambda})\to pX+N_p
\end{align*} in distribution, where $N_p$ is a normal random variable, independent of $pX$, with mean zero and variance $p(1-p)\sigma^2$.
\end{theorem}
\begin{theorem}\label{MainTheorem3}
Suppose $X$ is a sub-gaussian random variable with mean zero and variance $\sigma^2$. If $X_1, X_2, \ldots, X_n\sim X$ are i.i.d.~random variables, then
\begin{align*}
W_n(\boldsymbol{s})\to pX+N_p
\end{align*} in distribution, where $N_p$ is a normal random variable, independent of $pX$, with mean zero and variance $p(1-p)\sigma^2$.
\end{theorem}
The final theorem illustrates, in particular, how sensitive the sequences $W_n(\boldsymbol{s})$ are with respect to the conditions imposed on $X_1, X_2, \ldots, X_n$. Very distinct behavior emerges by simply choosing random variables with non-zero mean. Ultimately, this distinction is rooted in the asymptotic behavior of the \emph{Schatten $q$ norm} of a graph, which is defined for $q\geq 1$ by
\begin{align*}
\norm{G}_{S_q}=\Big( s_1^q+s_2^q+\cdots + s_n^q\Big)^{1/q}.
\end{align*} In particular, the large $n$ behavior of $\norm{G}_{S_q}$ is very different for the cases $q=1$ and $q>1$ \cite[Thm.~5]{Nikiforov}. Note the sub-gaussian condition is also removed in the following theorem.
\begin{theorem}\label{MainTheorem4}
Suppose $X$ is any random variable with non-zero mean $\mu$ which admits a moment generating function. If $X_1, X_2, \ldots, X_n\sim X$ are i.i.d.~random variables, then $W_n(\boldsymbol{s})/(\mu\sqrt{n})$ converges in distribution to a point mass at $\frac{8}{3\pi}\sqrt{p(1-p)}$.
\end{theorem}
There exist various central limit-type theorems in the literature pertaining to sums of eigenvalues of random matrices e.g.~\cite{Cipolloni, Johansson, Lytova, Shcherbina}. Among the earliest results in this direction are due to Johansson \cite{Johansson}. These specific results concern random Hermitian matrices distributed according to the probability measure
\begin{align}
d\mu_{n,\tau}(A) \propto\exp\big( -\tau\operatorname{tr} V(A)\big)\,dA,\label{HermitianMeasure}
\end{align} in which $\tau>0$, $V(x)$ is a polynomial with positive even degree and a positive leading coefficient and $dA$ denotes Lebesgue measure on the space of $n\times n$ complex Hermitian matrices. It is worth mentioning that setting $\tau=n/2$ and $V(x)=x^2$ gives rise to the Gaussian unitary ensemble introduced by Dyson \cite{Dyson1, Dyson2}. The main result of \cite{Johansson} establishes that the linear eigenvalue statistic $\sum_{k} f(\lambda_k)$ converges in distribution to a normal random variable with mean zero. The sums we consider in this paper can be thought of as randomized graph-theoretic versions of the sums originally considered by Johansson.
\subsection{Examples and Simulations}
The following examples and simulations illustrate a few of the main theorems. All code used to generate these plots is available from the authors upon request.
\begin{example}
Suppose $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$. Theorem \ref{MainTheorem1} ensures that $W_n(\boldsymbol{\lambda})/(\sigma\sqrt{p})$ converges in distribution to a standard normal. The simulation in Figure \ref{fig:Example1} is performed with $\mu=\sigma^2=1$.
\end{example}
\begin{figure}[h!]
\includegraphics[scale=0.45]{Example1.eps}
\caption{Histogram plot for $W_n(\boldsymbol{\lambda})/(\sigma\sqrt{p})$ using 750 trials taken with $n=1000$ and $p=1/2$.}
\label{fig:Example1}
\end{figure}
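A minimal Python sketch of this simulation (with smaller $n$ and fewer trials than in the figure, to keep the runtime modest) could look as follows; it assumes only NumPy.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def er_spectrum(n, p):
    # Eigenvalues of the adjacency matrix of one G(n,p) sample.
    upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    return np.linalg.eigvalsh(upper + upper.T)

n, p, trials = 500, 0.5, 300
mu, sigma = 1.0, 1.0                  # X ~ N(mu, sigma^2)
W = np.empty(trials)
for t in range(trials):
    lam = er_spectrum(n, p)
    W[t] = lam @ rng.normal(mu, sigma, size=n) / n
Z = W / (sigma * np.sqrt(p))          # Theorem 1: Z -> N(0, 1)
print(Z.mean(), Z.std())              # should be near 0 and 1
\end{verbatim}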
\begin{example}
Suppose $X$ is a Rademacher random variable. In particular, one has $\mathbb{P}(X=1)=\mathbb{P}(X=-1)=1/2$. The random variable $X$ is sub-gaussian and Theorem \ref{MainTheorem2} ensures that $W_n(\boldsymbol{\lambda})$ converges in distribution to the sum of $pX$ with an independent normal random variable with mean zero and variance $\sigma_p^2=p(1-p)$. The probability density of this sum is given by the convolution of the gaussian $f(x)=(\sigma_p\sqrt{2\pi})^{-1}\exp\big[ -x^2/(2\sigma_p^2)\big]$ with $\big[ \delta(x+p) + \delta(x-p)\big]/2 $. Interestingly, this density corresponds to the gaussian mixture $\big[f(x+p)+f(x-p)\big]/2$, which is bimodal in the case $p>1/2$. Figure \ref{fig:Example2} shows histogram plots for $W_n(\boldsymbol{\lambda})$ in the cases $p=1/2$ and $p=3/4$.
\end{example}
\begin{figure}[h!]
\includegraphics[scale=0.4]{Example2A.eps} \includegraphics[scale=0.4]{Example2B.eps}
\caption{Histogram plots for $W_n(\boldsymbol{\lambda})$ using 750 trials taken with $n=1000$ and $p=1/2$ (left) and $p=3/4$ (right). }
\label{fig:Example2}
\end{figure}
\begin{example}
Suppose $X$ is a normal random variable with non-zero mean $\mu$ and variance $\sigma^2$. Theorem \ref{MainTheorem4} ensures that $W_n(\boldsymbol{s})/(\mu\sqrt{n})$ converges in distribution to a point mass at $\frac{8}{3\pi} \sqrt{p(1-p)}$. The simulation in Figure \ref{fig:Example3} is performed with $\mu=\sigma^2=1$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{Example3.eps}
\caption{Histogram plot for $W_n(\boldsymbol{s})/(\mu\sqrt{n})$ using 750 trials with $n=1000$ and $p=3/4$. }
\label{fig:Example3}
\end{figure}
\end{example}
\subsection{Outline of the Paper}
This paper, which is intended for a wide probabilistic audience, takes us on a short journey through the spectral analysis of large random matrices and is organized as follows. Section \ref{Section2} highlights a few classic results in random matrix theory which serve as prerequisites for later sections. No background in random matrices is assumed. Section \ref{Section3} provides a computational lemma that we use to expand the partial moments of $W_n(\boldsymbol{a})$ in terms of power sum symmetric functions. Section \ref{Section4} establishes the asymptotics for the partial moments of $W_n(\boldsymbol{a})$ by analyzing the limiting behavior of the power sum symmetric functions. Theorems \ref{MainTheorem1} and \ref{MainTheorem2} are proved in Section \ref{Section5}. Theorem \ref{MainTheorem3} is proved in Section \ref{Section6} and Theorem \ref{MainTheorem4} is proved in Section \ref{Section7}. Finally, we conclude with possible directions for future work and closing remarks.
\section{Random Matrix Prerequisites}\label{Section2}
The limiting spectral analysis for large random matrices has become a widely studied topic in probability since the pioneering work of Eugene Wigner who proved that the expected empirical spectral distribution of a normalized $n\times n$ (Wigner) matrix tends to the semicircular law $\mu_{\operatorname{sc}}$. To begin, suppose $A$ is an $n\times n$ Hermitian matrix with complex entries. The eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of $A$ are real and we can define
the one-dimensional distribution function
\begin{align*}
\mu_{A}(x)=\frac{1}{n}\Big\vert\{ i\leq n\,:\,\lambda_i\leq x\}\Big\vert
\end{align*} called the \emph{empirical spectral distribution (ESD)} of $A$. The relation \cite[Sec.~1.3.1]{Bai}
\begin{align}
\frac{1}{n} \operatorname{tr}(A^k)=\int_{-\infty}^{\infty} x^k\, d\mu_A(x)\label{ESDTrace}
\end{align} plays a fundamental role in random matrix theory. Specifically, it turns the problem of establishing convergence, in whatever sense, for the ESD of a sequence $\{A_n\}$ of random matrices into the problem of establishing convergence of the sequence $\{\frac{1}{n}\operatorname{tr}(A_n^k)\}$ for each fixed $k$.
An \emph{$n\times n$ symmetric Wigner matrix} is an $n\times n$ real symmetric matrix whose entries, modulo the symmetry condition $\xi_{ij}=\xi_{ji}$, are independent. Specifically, we permit i.i.d.~mean zero entries $\xi_{ij}$ above the main diagonal and i.i.d.~mean zero entries $\xi_{ii}$ on the main diagonal. These two families need not share the same distribution, however. Moreover, we impose the condition that all entries have bounded moments and share a common second moment. If $B_n$ is an $n\times n$ symmetric Wigner matrix, then we denote $A_n=B_n/\sqrt{n}$. The pioneering work of Wigner \cite{Wigner1, Wigner2} establishes
\begin{align}
\lim_{n\to \infty} \frac{1}{n}\mathbb{E} \operatorname{tr}(A_n^k)=\int_{-\infty}^{\infty} x^k \,d\mu_{\operatorname{sc}}(x) \label{Wigner}
\end{align} for all integers $k\geq 1$. In particular, the expected ESD of a normalized $n\times n$ symmetric Wigner matrix tends to the semicircular law whose density is given by
\begin{align*}
f_{\operatorname{sc}}(x)=\frac{1}{2\pi}\sqrt{4-x^2} \,\mathbb{1}_{[-2,2]}.
\end{align*} This original result due to Wigner has been extended in several aspects. Grenander \cite{Grenander} proved the empirical spectral distribution converges to $\mu_{\operatorname{sc}}$ in probability. Arnold \cite{Arnold1, Arnold2} further improved this result by showing the empirical spectral distribution converges to the semicircular law almost surely. We remark that the matrix ensembles underlying \eqref{Wigner} can be generalized beyond those originally considered by Wigner and refer the reader to \cite{Bai} and \cite{Tao} for excellent surveys on the rich and rapidly developing field of spectral analysis of large random matrices.
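For later use we record the moments of the semicircular law: the odd moments vanish by symmetry, while the even moments are the Catalan numbers,
\begin{align*}
\int_{-\infty}^{\infty} x^{2k+1}\,d\mu_{\operatorname{sc}}(x)=0, \qquad \int_{-\infty}^{\infty} x^{2k}\,d\mu_{\operatorname{sc}}(x)=\frac{1}{k+1}{2k\choose k}.
\end{align*} The vanishing of the odd moments is precisely what drives Lemma \ref{OddTrace} below.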
\subsection{Almost Sure Convergence of the ESD}\label{Section3.1}
The form of \eqref{Wigner} that we need for later sections is due to Arnold. In particular, suppose $B_n$ is an $n \times n$ real symmetric matrix whose entries, modulo the symmetry condition $\xi_{ij}=\xi_{ji}$, are independent. Assume the upper-triangular entries share a common distribution with finite positive variance $\sigma^2$. In addition, we assume the diagonal entries also share a common distribution. Furthermore, assume the entries $\xi_{ij}$ have finite fourth and sixth moments for $i>j$ and the diagonal entries $\xi_{ii}$ have finite second and fourth moments. Define the normalized matrix $A_n=B_n/ (\sigma \sqrt{n})$. Arnold proves that $\mu_{A_n}\to \mu_{\operatorname{sc}}$ almost surely in the sense that
\begin{align}
\int_{\mathbb{R}} \phi (x)\,d\mu_{A_n}(x)\to \int_{\mathbb{R}} \phi (x)\,d\mu_{\operatorname{sc}}(x)\label{Arnold}
\end{align} for all continuous and compactly supported test functions $\phi:\mathbb{R}\to \mathbb{R}$ \cite[Thm.~2]{Arnold2}.
\subsection{Real Symmetric Matrices with Independent Bounded Entries}\label{Section3.2}
Here and throughout we adopt standard asymptotic notation. In particular, we write $f(n)=O\big( g(n)\big)$ if there exists a constant $C$ such that $|f(n)|\leq Cg(n)$ for all $n$ sufficiently large. Moreover, we write $f(n)=o\big( g(n)\big)$ whenever $f(n)/g(n)\to 0$ as $n\to \infty$.
A result due to F\"uredi and Koml\'os allows us to analyze the limiting behavior of polynomials in $a_1, a_2, \ldots, a_n$. Suppose $A_n$ is an $n\times n$ real symmetric matrix with bounded entries. Moreover, we assume the entries of $A_n$, modulo the constraint $\xi_{ij}=\xi_{ji}$, are independent. Let $\mu=\mathbb{E}\xi_{ij}>0$ denote the common mean of the upper-triangular entries and let $\sigma^2=\mathbb{E}(\xi_{ij}-\mu)^2$ denote their common variance. Furthermore, suppose the diagonal entries share a common mean, $\nu=\mathbb{E}\xi_{ii}$. F\"uredi and Koml\'os \cite[Thm.~1]{Furedi} show that the distribution of the largest eigenvalue $\lambda_1$ of $A_n$ can be approximated in order $n^{-1/2}$ by a normal distribution with mean $(n-1)\mu+\nu+\sigma^2/\mu$ and variance $2\sigma^2$. Moreover, with high probability (w.h.p.) we have
\begin{align*}
|\lambda_i|<2\sigma\sqrt{n}+O(n^{1/3}\log n)
\end{align*} whenever $i\neq 1$.
\section{A Computational Lemma for the Partial Moments of $W_n(\boldsymbol{a})$}\label{Section3}
Suppose $j\geq 1$ and $X_1, X_2, \ldots, X_n$ are i.i.d.~ random variables defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$. The lemma we present is a simple, albeit useful, computational tool for evaluating the moments
\begin{align*}
f_j(\boldsymbol{a})=f_j^{(n)}(\boldsymbol{a})=\mathbb{E}_{\Omega}\big( a_1X_1+a_2X_2+\cdots+a_nX_n)^j,
\end{align*} in which $\mathbb{E}_{\Omega}$ denotes expectation with respect to $X_1, X_2, \ldots, X_n$. This lemma expresses $f_j(\boldsymbol{a})$ as a sum taken over all partitions of $j$ and involves power sum symmetric polynomials in the variables $a_1, a_2, \ldots, a_n$. We recall these definitions below and refer the reader to \cite[Sec.~1.7]{Stanley1} and \cite[Sec.~7.7]{Stanley2} for in depth discussions.
A \emph{partition} of an integer $j\geq 1$ is a non-increasing sequence $\boldsymbol{\pi}=(j_1, j_2, \ldots, j_r)$ of positive integers such that $j_1+j_2+\cdots+j_r=j$. If $j\geq 2$ is an even integer, then a partition $\boldsymbol{\pi}=(j_1, j_2, \ldots, j_r)$ is a \textit{partition into even} parts if $j_1, j_2, \ldots, j_r$ are even integers. We let $P(j)$ and $E(j)$ denote the set of all partitions of $j$ and the set of all partitions of $j$ into even parts, respectively. We define
\begin{align*}
y_{\boldsymbol{\pi}}=\prod_{i\geq 1}(i!)^{m_i}m_i!,
\end{align*} in which $m_i=m_i(\boldsymbol{\pi})$ denotes the multiplicity of $i$ appearing in a partition $\boldsymbol{\pi}$. Lastly, the \emph{power sum symmetric polynomial of degree $j$} in the variables $a_1, a_2, \ldots, a_n$ is the homogeneous polynomial defined by setting
\begin{align*}
\mathpzc{p}_j(\boldsymbol{a})=\mathpzc{p}_j^{(n)}(\boldsymbol{a})=a_1^j+a_2^j+\cdots+a_n^j.
\end{align*} We often denote $\mathpzc{p}_j=\mathpzc{p}_j(\boldsymbol{a})$ for brevity when there is no risk of confusion.
\begin{lemma}\label{Lemma:Tracial}
Let $j\geq 1$ be any integer and suppose $X_1, X_2, \ldots, X_n$ are i.i.d.~random variables which admit a moment generating function. If $\kappa_1, \kappa_2,\ldots$ denotes the cumulants of the $X_i$, then
\begin{align*}
f_j(\boldsymbol{a})=j!\sum_{\boldsymbol{\pi}\in P(j)} \frac{ \kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} \mathpzc{p}_{\boldsymbol{\pi}},
\end{align*} where $\kappa_{\boldsymbol{\pi}}=\kappa_{j_1}\kappa_{j_2}\cdots \kappa_{j_r}$ and $\mathpzc{p}_{\boldsymbol{\pi}}=\mathpzc{p}_{j_1}\mathpzc{p}_{j_2}\cdots \mathpzc{p}_{j_r}$ given $\boldsymbol{\pi}=(j_1, j_2, \ldots, j_r)$.
\end{lemma}
\begin{proof}
The random variables $X_1, X_2, \ldots, X_n$ are i.i.d.~which implies that the moment generating function of $\sum_{k=1}^n a_kX_k$ takes the form $\prod_{k=1}^n M(a_kt)$, where $M(t)$ denotes the moment generating function of the $X_i$ \cite[Sec.~9]{Billingsley}. Therefore,
\begin{align*}
\sum_{j=0}^{\infty} f_j(\boldsymbol{a}) \frac{t^j}{j!}&=\prod_{k=1}^n M(a_kt)=\exp\Big( K(a_1t)+K(a_2t)+\cdots +K(a_nt)\Big),
\end{align*} in which $K(t)=\log M(t)$ denotes the cumulant generating function of the $X_i$. The identity $K(t)=\sum_{\ell=1}^{\infty} \kappa_{\ell} t^{\ell}/\ell!$, which defines the cumulant sequence $\kappa_1, \kappa_2, \ldots$, implies
\begin{align}
\sum_{j=0}^{\infty} f_j(\boldsymbol{a}) \frac{t^j}{j!}=\exp\Big(\sum_{\ell=1}^{\infty} \kappa_{\ell}\mathpzc{p}_{\ell} \frac{t^{\ell}}{\ell !} \Big)=\sum_{j=0}^{\infty} B_j( \kappa_1 \mathpzc{p}_1, \kappa_2 \mathpzc{p}_2, \ldots, \kappa_j\mathpzc{p}_j) \frac{t^j}{j!},\label{Nbhd}
\end{align} where $B_j(x_1, \ldots, x_j)$ denotes the \emph{complete Bell polynomial} of degree $j$ in the variables $x_1, x_2, \ldots, x_j$ \cite[Sec.~II]{Bell} defined via the generating function
\begin{align}
\sum_{j=0}^{\infty} B_j( x_1, x_2, \ldots, x_j) \frac{t^j}{j!}=\exp\Big(\sum_{\ell=1}^{\infty} x_{\ell} \frac{t^{\ell}}{\ell !} \Big).\label{BellGen}
\end{align} Comparing coefficients in the above expression and applying the identity
\begin{align}
B_j(x_1, x_2, \ldots, x_j)=j !\sum_{\substack{i_1,i_2, \ldots, i_{j}\geq 0\\ i_1+2i_2+\cdots +j i_j=j }}\prod_{s=1}^{j} \frac{x_s^{i_s}}{(s!)^{i_s} i_s!}=j!\sum_{\boldsymbol{\pi}\in P(j)} \frac{x_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}\label{BellPartition}
\end{align} completes the proof.
\end{proof}
\begin{remark}
The $X_i$ admit a moment generating function, which implies that their cumulant generating function $K(t)$ converges in a neighborhood of $t=0$ \cite[Sec.~9]{Billingsley}. Relation \eqref{Nbhd} therefore holds in a neighborhood of $t=0$. We use this fact repeatedly throughout the rest of the paper.
\end{remark}
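To illustrate Lemma \ref{Lemma:Tracial} in the simplest non-trivial case, take $j=2$. The partitions of $2$ are $(2)$ and $(1,1)$, with $y_{(2)}=(2!)^1\,1!=2$ and $y_{(1,1)}=(1!)^2\,2!=2$, so the lemma gives
\begin{align*}
f_2(\boldsymbol{a})=\kappa_2\,\mathpzc{p}_2+\kappa_1^2\,\mathpzc{p}_1^2,
\end{align*} which agrees with the direct computation $\mathbb{E}_{\Omega}\big(\sum_k a_kX_k\big)^2=\sum_k a_k^2\,\mathbb{E}X_k^2+\sum_{k\neq l}a_ka_l\,\kappa_1^2$ since $\kappa_2=\mathbb{E}X_k^2-\kappa_1^2$.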
\section{Asymptotics for Erd\H{o}s-R\'enyi-Gilbert Graphs}\label{Section4}
Here and throughout we let $\mathbb{E}_{\mathcal{G}}=\mathbb{E}_{\mathcal{G}(n,p)}$ and $\mathbb{P}_{\mathcal{G}}=\mathbb{P}_{\mathcal{G}(n,p)}$ denote expectation and probability, respectively, with respect to $\mathcal{G}(n,p)$. A fundamental fact is that the number of edges in a random graph of order $n$ follows a binomial distribution,
\begin{align*}
\mathbb{P}_{\mathcal{G}}\big( |E(G)|=m \big)={N\choose m}p^m(1-p)^{N-m},
\end{align*} where $N={n\choose 2}$. Moreover, the adjacency matrix $A_n$ of a random graph of order $n$ is a random $n\times n$ real symmetric matrix whose upper-triangular entries $\xi_{ij}$ are bounded independent random variables which have mean $\mu=p$ and variance $\sigma_p^2=p(1-p)$. The diagonal elements of $A_n$ satisfy $\nu=\mathbb{E}_{\mathcal{G}}[\xi_{ii}]=0$ since loops are not permitted. The result by F\"uredi and Koml\'os \cite[Thm.~1]{Furedi}, which we outline in Section \ref{Section3.2}, implies that w.h.p.,
\begin{align}
|\lambda_1|=(n-1)p+1-p+O(n^{-1/2})=\big( p+o(1)\big)n\label{Erdos1}
\end{align} and
\begin{align}
|\lambda_2|< 2\sigma_p\sqrt{n}+O(n^{1/3}\log n)=\Big( 2\sigma_p+o(1)\Big) \sqrt{n}\label{Erdos2}.
\end{align}
\subsection{The Eigenvalue Case}
We now establish the limiting behavior for the partial moments $f_j(\boldsymbol{\lambda})=\mathbb{E}_{\Omega}(\lambda_1X_1+\lambda_2X_2+\cdots+\lambda_nX_n)^j$ in which $X_1, X_2, \ldots, X_n$ are i.i.d.~random variables defined on $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ and $\lambda_1, \lambda_2,\ldots, \lambda_n$ correspond to eigenvalues in the Erd\H{o}s-R\'enyi-Gilbert model. We recall $\mathpzc{p}_k(\boldsymbol{\lambda})$ denotes the power sum symmetric polynomial of degree $k$ in the variables $\lambda_1, \lambda_2, \ldots, \lambda_n$.
\begin{lemma}\label{OddTrace}
Let $k\geq 1$ be an odd integer. If $\lambda_1, \lambda_2, \ldots, \lambda_n$ correspond to eigenvalues in the Erd\H{o}s-R\'enyi-Gilbert $\mathcal{G}(n,p)$ model, then we have $\mathpzc{p}_k(\boldsymbol{\lambda})=o(n^{1+k/2})$ almost surely.
\end{lemma}
\begin{proof}
Let $B_n$ be the adjacency matrix of a random graph of order $n$. This matrix satisfies the hypotheses in Section \ref{Section3.1}. Moreover, the variance of the upper-triangular entries is given by $\sigma_p^2=p(1-p)$. The ESD of the normalized matrix $A_n=B_n/(\sqrt{n}\sigma_p)$ converges almost surely to the semicircular law by \cite[Thm.~2]{Arnold2} as seen in Section \ref{Section3.1}. Therefore, we can use \eqref{ESDTrace} and the identity $\operatorname{tr}(B_n^k)=\mathpzc{p}_k(\boldsymbol{\lambda})$ to conclude
\begin{align}
\frac{\mathpzc{p}_k(\boldsymbol{\lambda})}{n (\sqrt{n}\sigma_p)^k}=\frac{1}{n} \operatorname{tr}(A_n^k)\to \int_{-\infty}^{\infty} x^k \,d\mu_{\operatorname{sc}}(x)\label{ArnoldLimit}
\end{align} almost surely. The symmetry of the semicircular density and the fact that $k$ is odd imply that $\frac{\mathpzc{p}_k(\boldsymbol{\lambda})}{n (\sqrt{n}\sigma_p)^k}\to 0 $ almost surely. The claim follows.
\end{proof}
\begin{proposition}\label{Proposition:AsymptoticEigen}
Let $j\geq 1$ be any integer and let $X_1, X_2, \ldots, X_n$ be i.i.d.~random variables with cumulant sequence $\kappa_1, \kappa_2,\ldots$. Define
\begin{align*}
c_j(p)=j!\sum_{\boldsymbol{\pi}\in E(j)} \frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} p^{j-m_2} ,
\end{align*}where $\kappa_{\boldsymbol{\pi}}$ and $y_{\boldsymbol{\pi}}$ are defined as in Lemma \ref{Lemma:Tracial} and $E(j)$ denotes the set of partitions of $j$ into even parts. The partial moments $f_j(\boldsymbol{\lambda})$ satisfy the following, where $o(1)$ denotes a term tending to zero as $n\to \infty$ with $j$ fixed.\\
\noindent (a) If $j$ is odd, then $n^{-j}f_j(\boldsymbol{\lambda})=o(1)$ w.h.p. \\
\noindent(b) If $j$ is even, then $n^{-j}f_j(\boldsymbol{\lambda})=c_j(p)+o(1)$ w.h.p.
\end{proposition}
\begin{proof}
Denote $\mathpzc{p}_i=\mathpzc{p}_i(\boldsymbol{\lambda})$ for brevity. The number of edges in a random graph $G_n$ of order $n$ is a binomial random variable with parameters $N={n\choose 2}$ and $p$. The expected number of edges in $G_n$ is therefore given by $\mathbb{E}_{\mathcal{G}}[m]=pN$. The weak law of large numbers implies that $m$ is tightly concentrated around its mean for large $n$. Therefore, we have $m=\big( p/2+o(1)\big)n^2$ w.h.p. If $A_n$ denotes the adjacency matrix of $G_n$, then $\mathpzc{p}_2=\operatorname{tr}(A_n^2)=2m$ \cite[Thm.~3.1.1]{Cvetkovic}, which implies that w.h.p.,
\begin{align}
\mathpzc{p}_2=\big(p+o(1)\big)n^2.\label{ErdosBound1}
\end{align} If $i>2$ is even, then consider the bound $|\lambda_1|^i\leq \mathpzc{p}_i\leq |\lambda_1|^i+n|\lambda_2|^i.$
Inequalities \eqref{Erdos1} and \eqref{Erdos2} imply that w.h.p.,
\begin{align*}
\big(p^i+o(1)\big)n^i\leq \mathpzc{p}_i<\big(p^i+o(1)\big)n^i+O(n^{1+i/2}).
\end{align*} Observe that $n^{1-i/2}\to 0$ as $n\to \infty$ since $i>2$. We conclude that w.h.p.,
\begin{align}
\mathpzc{p}_i=\big( p^i+o(1)\big)n^i.\label{ErdosBound2}
\end{align} Let $\boldsymbol{\pi}=(j_1,j_2, \ldots, j_r)$ be a partition of $j$. Lemma \ref{OddTrace}, together with relations \eqref{ErdosBound1} and \eqref{ErdosBound2}, implies that w.h.p.,
\begin{align}
\mathpzc{p}_{j_1}\mathpzc{p}_{j_2}\cdots \mathpzc{p}_{j_r}=o(n^j)\label{ErdosOdd}
\end{align} whenever $\boldsymbol{\pi}$ contains an odd integer larger than one. If $m_1(\boldsymbol{\pi})\neq 0$, then \eqref{ErdosOdd} still holds since $\mathpzc{p}_1=\operatorname{tr}(A_n)=0$ for simple graphs. Any partition of an odd integer must contain an odd part. Lemma \ref{Lemma:Tracial} implies that w.h.p.,
\begin{align*}
f_j(\boldsymbol{\lambda})=o(n^j)
\end{align*} whenever $j$ is odd. This proves (a). If $j$ is even, then $\mathpzc{p}_{j_1}\mathpzc{p}_{j_2}\cdots \mathpzc{p}_{j_r}=o(n^j)$ w.h.p.~unless $\boldsymbol{\pi}$ is a partition of $j$ into even parts. Lemma \ref{Lemma:Tracial}, together with relations \eqref{ErdosBound1} and \eqref{ErdosBound2}, imply that w.h.p.,
\begin{align*}
f_j(\boldsymbol{\lambda})=\Bigg( j! \sum_{\boldsymbol{\pi}\in E(j)} \frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} p^{j-m_2}+o(1) \Bigg)n^j,
\end{align*} which proves (b). We remark that the $m_2$ term appearing in the last expression occurs because of the discrepancy in the power of $p$ occurring in \eqref{ErdosBound1} and \eqref{ErdosBound2}.
\end{proof}
\subsection{The Singular Value Case}
We now establish the limiting behavior for the partial moments $f_j(\boldsymbol{s})=\mathbb{E}_{\Omega}(s_1X_1+s_2X_2+\cdots+s_nX_n)^j$ in which $X_1, X_2, \ldots, X_n$ are i.i.d.~random variables defined on $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ and $s_1, s_2,\ldots, s_n$ correspond to singular values in the Erd\H{o}s-R\'enyi-Gilbert model.
\begin{proposition}\label{Proposition:AsymptoticSingular}
Let $j\geq 1$ be any integer and let $X_1, X_2, \ldots, X_n$ be i.i.d.~mean zero random variables with cumulant sequence $\kappa_1, \kappa_2,\ldots$. The partial moments $f_j(\boldsymbol{s})$ satisfy
\begin{align*}
n^{-j}f_j(\boldsymbol{s})=j!\sum_{\boldsymbol{\pi}\in P_0(j)} \frac{ \kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} p^{j-m_2}+o(1)
\end{align*} w.h.p., where $\kappa_{\boldsymbol{\pi}}$ and $y_{\boldsymbol{\pi}}$ are defined as in Lemma \ref{Lemma:Tracial} and $P_0(j)$ denotes the set of all partitions of $j$ for which $m_1=0$.
\end{proposition}
\begin{proof}
The singular values of a random graph correspond to the moduli of its eigenvalues. Therefore, \eqref{ErdosBound1} and \eqref{ErdosBound2} hold for $\mathpzc{p}_i=\mathpzc{p}_i(\boldsymbol{s})$ when $i$ is even. The same reasoning that establishes \eqref{ErdosBound2} implies $\mathpzc{p}_i=\big(p^i+o(1)\big)n^i$ when $i\neq 1$ is odd. Finally, $\kappa_1=0$ since $X_1, X_2, \ldots, X_n$ have mean zero. Lemma \ref{Lemma:Tracial} implies the claim.
\end{proof}
\begin{proposition}\label{Proposition:AsymptoticEnergy}
Let $j\geq 1$ be any integer and let $X_1, X_2, \ldots, X_n\sim X$ be i.i.d.~random variables in which $X$ admits a moment generating function and has non-zero mean $\mu$. The partial moments $f_j(\boldsymbol{s})$ satisfy
\begin{align*}
\frac{f_j(\boldsymbol{s})}{ (\mu n^{3/2})^j}=\Big(\frac{8\sigma_p}{3\pi}\Big)^j +o(1)
\end{align*} w.h.p., in which $\sigma_p^2=p(1-p)$.
\end{proposition}
\begin{proof}
The quantity $\mathpzc{p}_1=\mathpzc{p}_1(\boldsymbol{s})$ is called the graph energy \cite{Gutman}, and \cite[Thm.~1]{Du} establishes that w.h.p.,
\begin{align*}
\frac{\mathpzc{p}_1}{n^{3/2}}=\frac{8\sigma_p}{3\pi}+o(1).
\end{align*} Lemma \ref{Lemma:Tracial} asserts
\begin{align}
f_j(\boldsymbol{s})=j!\sum_{\boldsymbol{\pi}\in P(j)} \frac{ \kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} \mathpzc{p}_{\boldsymbol{\pi}}=\mu^j \mathpzc{p}_1^j+j!\sum_{\boldsymbol{\pi}\neq (1,1,\ldots,1)} \frac{ \kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} \mathpzc{p}_{\boldsymbol{\pi}}\label{EnergyAsymptoticEq}
\end{align} since $\kappa_1=\mu$ and $y_{\boldsymbol{\pi}}=m_1!=j!$ when $\boldsymbol{\pi}=(1,1,\ldots, 1)\in P(j).$ We have so far established that $\mathpzc{p}_2=\big(p+o(1)\big)n^2$ and $\mathpzc{p}_i=\big(p^i+o(1)\big)n^i$ for integers $i\geq 3$. Dividing both sides of \eqref{EnergyAsymptoticEq} by $(\mu n^{3/2})^j$ implies the claim since $\mathpzc{p}_{\boldsymbol{\pi}}/(n^{3/2})^j=o(1)$ w.h.p.~ whenever $\boldsymbol{\pi}\neq (1,1,\ldots,1)$.
\end{proof}
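The graph-energy asymptotics invoked above are easy to check numerically; the following Python sketch (assuming only NumPy) compares $\mathpzc{p}_1/n^{3/2}$ for one large sample with the limit $8\sigma_p/(3\pi)$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

n, p = 2000, 0.5
upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
s = np.abs(np.linalg.eigvalsh(upper + upper.T))  # singular values
print(s.sum() / n**1.5)                          # empirical p_1/n^{3/2}
print(8 * np.sqrt(p * (1 - p)) / (3 * np.pi))    # limit 8*sigma_p/(3*pi)
\end{verbatim}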
\section{Proof of Theorems \ref{MainTheorem1} and \ref{MainTheorem2}}\label{Section5}
The following general form of the Hoeffding inequality \cite[Thm.~2.6.3]{Vershynin} plays an important role in our proof. Suppose $\xi_1, \xi_2, \ldots, \xi_n$ are independent mean zero sub-gaussian random variables. There exists an absolute constant $c_1>0$ such that
\begin{align*}
\mathbb{P}\Bigg( \Big\vert \sum_{i=1}^n a_i\xi_i \Big\vert \geq t \Bigg)\leq 2\exp\Big( - \frac{c_1t^2}{K^2 \mathpzc{p}_2(\boldsymbol{a})} \Big)
\end{align*} for all $a_1, a_2, \ldots, a_n\in \mathbb{R}$, in which $\mathpzc{p}_2(\boldsymbol{a})=\sum_i a_i^2$ and $K=\max_i \norm{\xi_i}_{\psi_2}.$ If in addition $\xi_1, \xi_2, \ldots, \xi_n$ have unit variances, then
\begin{align}
\norm{\sum_{i=1}^n a_i\xi_i}_1\leq \sqrt{\mathpzc{p}_2(\boldsymbol{a})}.\label{Vershynin1}
\end{align} Moreover, there exists an absolute constant $c_2>0$ such that for all $q\geq 2$,
\begin{align}
\norm{\sum_{i=1}^n a_i\xi_i}_q\leq c_2K\sqrt{q} \sqrt{\mathpzc{p}_2(\boldsymbol{a})}.\label{Vershyninq}
\end{align}
\subsection{Proof of Theorem \ref{MainTheorem1}}
Suppose $X_1, X_2, \ldots, X_n\sim X$ are i.i.d.~random variables defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ in which $X$ is a sub-gaussian random variable with mean $\mu$ and variance $\sigma^2$. Define the auxiliary variables $\xi_i=\sigma^{-1}(X_i-\mu)$ and let $j\geq 1$ be any integer. The sum $W_n(\boldsymbol{\lambda})$ satisfies
\begin{align*}
\mathbb{E}_{\Omega} |W_n(\boldsymbol{\lambda})|^j&=\norm{W_n(\boldsymbol{\lambda})}_j^j \notag \\
&=n^{-j} \norm{\sum_{k=1}^n \lambda_k X_k}_j^j\notag \\
&=n^{-j}\norm{\sum_{k=1}^n \lambda_k( X_k-\mu)+\mu\sum_{k=1}^n\lambda_k}_j^j\notag\\
&=n^{-j}\sigma^{j} \norm{\sum_{k=1}^n \lambda_k \xi_k}_j^j
\end{align*} since simple graphs are traceless. The random variables $\xi_1, \xi_2, \ldots, \xi_n$ are mean zero sub-gaussian random variables with unit variances. Relations \eqref{Vershynin1} and \eqref{Vershyninq}, together with the inequality $2m\leq n(n-1)$, imply that there exists a constant $c>0$, which is independent of $n$, such that
\begin{align}
\mathbb{E}_{\Omega} |W_n(\boldsymbol{\lambda})|^j\leq cn^{-j}\big[ \mathpzc{p}_2(\boldsymbol{\lambda})\big]^{j/2}= c n^{-j}(2m)^{j/2}\leq cn^{-j} [n(n-1)]^{j/2}\leq c.\label{Uniform}
\end{align} Therefore, $\mathbb{E}_{\mathcal{G}}\mathbb{E}_{\Omega} |W_n(\boldsymbol{\lambda})|^j$ is finite and the Fubini-Tonelli theorem \cite[Thm.~2.16]{Folland} ensures
\begin{align}
\mathbb{E}\big[W_n(\boldsymbol{\lambda})\big]^j=\mathbb{E}_{\mathcal{G}} \mathbb{E}_{\Omega} \big[W_n(\boldsymbol{\lambda})\big]^j,\label{Fubini}
\end{align} in which $\mathbb{E}[\cdot]$ denotes expectation with respect to the product measure $\mathbb{P}_{\mathcal{G}}\otimes \mathbb{P}_{\Omega}$.
Proposition \ref{Proposition:AsymptoticEigen} and the uniform boundedness of the variables $\mathbb{E}_{\Omega} |W_n(\boldsymbol{\lambda})|^j$ allow us to compute the limit for the total expectation of $[W_n(\boldsymbol{\lambda})]^j$, which we now highlight. Define $E_j=\{ G\,:\, n^{-j}f_j(\boldsymbol{\lambda})=\alpha_j\}$, where $\alpha_j=\alpha_j(n)$ is a real number to be chosen momentarily. Relation \eqref{Fubini} implies
\begin{align*}
\mathbb{E}[W_n(\boldsymbol{\lambda})]^j&=\mathbb{E}_{\mathcal{G}}[n^{-j} f_j(\boldsymbol{\lambda})]\\
&=\int_{\mathcal{G}} n^{-j} f_j(\boldsymbol{\lambda})\,d\mathbb{P}_{\mathcal{G}}\\
&=\alpha_j \mathbb{P}_{\mathcal{G}}( E_j)+\int_{E_j^c}n^{-j} f_j(\boldsymbol{\lambda})\,d\mathbb{P}_{\mathcal{G}}.
\end{align*} The inequality $n^{-j}|f_j(\boldsymbol{\lambda})|\leq \mathbb{E}_{\Omega} |W_n(\boldsymbol{\lambda})|^j,$ together with \eqref{Uniform}, implies that
\begin{align*}
\Big\vert\int_{E_j^c}n^{-j} f_j(\boldsymbol{\lambda})\,d\mathbb{P}_{\mathcal{G}}\Big\vert\leq \int_{E_j^c} \mathbb{E}_{\Omega} |W_n(\boldsymbol{\lambda})|^j\,d\mathbb{P}_{\mathcal{G}}\leq c \mathbb{P}_{\mathcal{G}}(E_j^c).
\end{align*} Therefore,
\begin{align*}
\lim_{n\to \infty} \mathbb{E}[W_n(\boldsymbol{\lambda})]^j=\lim_{n\to \infty}\Big( \alpha_j \mathbb{P}_{\mathcal{G}}( E_j)\Big)
\end{align*} provided $\mathbb{P}_{\mathcal{G}}(E_j)\to 1$ as $n\to \infty$ so that $\mathbb{P}_{\mathcal{G}}(E_j^c)\to 0$ as $n\to \infty$. We now choose $\alpha_j$ according to Proposition \ref{Proposition:AsymptoticEigen} to conclude
\begin{align}
\lim_{n\to \infty} \mathbb{E}\big[W_n(\boldsymbol{\lambda})\big]^j=\begin{cases} 0 & \mbox{for } j\mbox{ odd},\\
c_j(p) & \mbox{for } j \mbox{ even}. \end{cases}\label{MomentMethod1}
\end{align}
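For instance, when $j=2$ the only partition in $E(2)$ is $(2)$, with $m_2=1$ and $y_{(2)}=2$, so that
\begin{align*}
c_2(p)=2!\cdot\frac{\kappa_2}{2}\,p^{2-1}=p\sigma^2,
\end{align*} which matches the variance of the limiting distributions in Theorems \ref{MainTheorem1} and \ref{MainTheorem2}.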
A useful criterion for determining when a distribution is determined by its moments is that it admits a moment generating function \cite[Thm.~30.1]{Billingsley}. Suppose that the distribution of a random variable $W$ is determined by its moments. The method of moments \cite[Thm.~30.2]{Billingsley} ensures that $W_n(\boldsymbol{\lambda})$ converges in distribution to $W$ provided $W_n(\boldsymbol{\lambda})$ has moments of all orders and for all $j\geq 1$,
\begin{align*}
\lim_{n\to \infty} \mathbb{E}\big[W_n(\boldsymbol{\lambda})\big]^j=\mathbb{E}[W^j].
\end{align*} Recall identity \eqref{BellPartition} to conclude, for $j\geq 1$,
\begin{align}
(2j)!\sum_{\boldsymbol{\pi}\in E(2j)} \frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} p^{2j-m_2}&=(2j)!p^{2j}\sum_{\boldsymbol{\pi}\in E(2j)} \frac{1}{y_{\boldsymbol{\pi}}}\kappa_2^{m_2}\kappa_4^{m_4}\cdots \kappa_{2j}^{m_{2j}}p^{-m_2}\notag\\
&=(2j)!p^{2j}\sum_{\boldsymbol{\pi}\in E(2j)}\frac{1}{y_{\boldsymbol{\pi}}}\kappa^{m_2}\kappa_4^{m_4}\cdots \kappa_{2j}^{m_{2j}}\notag\\
&=p^{2j}B_{2j}(0, \kappa, 0,\kappa_4, 0,\ldots, 0,\kappa_{2j}),\label{MomentMethod2}
\end{align} where $\kappa=\kappa_2/p$. The generating function \eqref{BellGen} for the complete Bell polynomials implies
\begin{align*}
\exp\Big(\sum_{\ell=1}^{\infty} x_{\ell} \frac{t^{\ell}}{\ell !}\Big)+\exp\Big(\sum_{\ell=1}^{\infty} x_{\ell} \frac{(-t)^{\ell}}{\ell!}\Big)=2\sum_{j=0}^{\infty} B_{2j}(x_1, x_2, \ldots, x_{2j})\frac{ t^{2j}}{(2j)!}.
\end{align*} Setting $x_1=x_3=x_5=\cdots=0$ in the above expression and then simplifying yields the identity
\begin{align*}
\sum_{j=0}^{\infty} B_{2j}(0, x_2,0, \ldots,0, x_{2j})\frac{ (pt)^{2j}}{(2j)!}=\exp\Big(\sum_{\ell=1}^{\infty} x_{2\ell} \frac{(pt)^{2\ell}}{(2\ell)!}\Big).
\end{align*} Setting $x_2=\kappa$ and $x_i=\kappa_i$ for $i>2$ in the above expression and then appealing to \eqref{MomentMethod2} implies
\begin{align}
\sum_{j=0}^{\infty}c_{2j}(p) \frac{t^{2j}}{(2j)!}&=\sum_{j=0}^{\infty} B_{2j}(0,\kappa,0, \ldots,0, \kappa_{2j})\frac{ (pt)^{2j}}{(2j)!} \notag\\
&=\exp\Big(\frac{\kappa p^2t^2}{2}+\sum_{\ell=2}^{\infty} \kappa_{2\ell} \frac{(pt)^{2\ell}}{(2\ell)!}\Big)\notag\\
&=\exp\Big(\frac{\kappa p^2 t^2}{2}-\frac{\kappa_2p^2 t^2}{2}+\sum_{\ell=1}^{\infty} \kappa_{2\ell} \frac{(pt)^{2\ell}}{(2\ell)!}\Big)\notag\\
&=\exp\Big(\frac{\kappa p^2t^2}{2}-\frac{\kappa_2p^2 t^2}{2}\Big) \exp\Bigg(\frac{ K(pt)+K(-pt) }{2} \Bigg)\notag\\
&=\exp\Big(\frac{1}{2}p(1-p)\sigma^2 t^2\Big)\sqrt{M(pt)M(-pt)}\label{MomentMethod3}
\end{align} in a neighborhood of $t=0$, where $M(t)$ and $K(t)$ denote the moment and cumulant generating functions of $X$, respectively. If $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$, then the moment generating function of $X$ is given by $M(t)=\exp\big( \mu t+ \frac{1}{2}\sigma^2t^2\big)$ and \eqref{MomentMethod3} implies that
\begin{align*}
\sum_{j=0}^{\infty}c_{2j}(p) \frac{t^{2j}}{(2j)!}=\exp\Bigg(\frac{1}{2}p\sigma^2 t^2\Bigg)
\end{align*} in a neighborhood of $t=0$. The method of moments and \eqref{MomentMethod1} imply Theorem \ref{MainTheorem1} since the moment generating function for the sum of two independent random variables is given by the product of their moment generating functions.
\subsection{Proof of Theorem \ref{MainTheorem2}}
If $X$ is a symmetric sub-gaussian random variable, then $M(t)=M(-t)$ and \eqref{MomentMethod3} implies
\begin{align*}
\sum_{j=0}^{\infty}c_{2j}(p) \frac{t^{2j}}{(2j)!}=\exp\Bigg(\frac{1}{2}p(1-p)\sigma^2 t^2\Bigg)M(pt).
\end{align*} The moment generating function of $pX$ converges for all $t\in \mathbb{R}$ since $pX$ is a sub-gaussian random variable with mean zero \cite[Prop.~2.5.2]{Vershynin}. The right hand side is the moment generating function for the sum of $pX$ with an independent normal with mean zero and variance $p(1-p)\sigma^2$. The corresponding distribution is therefore determined by its moments. The method of moments and \eqref{MomentMethod1} now conclude the proof of Theorem \ref{MainTheorem2}.
\section{Proof of Theorem \ref{MainTheorem3}}\label{Section6}
Suppose $X_1, X_2, \ldots, X_n\sim X$ are i.i.d.~random variables defined on $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ in which $X$ is a sub-gaussian random variable with mean zero and variance $\sigma^2$. The same reasoning as in Section \ref{Section5} implies that for all $j\geq 1$,
\begin{align*}
\lim_{n\to \infty} \mathbb{E}[W_n(\boldsymbol{s})]^j=j!\sum_{\boldsymbol{\pi}\in P_0(j)} \frac{ \kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} p^{j-m_2}.
\end{align*} Set $\kappa=\kappa_2/p$ and apply \eqref{BellPartition} to conclude, for $j\geq 1$,
\begin{align*}
j!\sum_{\boldsymbol{\pi}\in P_0(j)} \frac{ \kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} p^{j-m_2}&=j!p^j\sum_{\boldsymbol{\pi}\in P_0(j)} \frac{1}{y_{\boldsymbol{\pi}}} \kappa_2^{m_2}\kappa_3^{m_3}\cdots \kappa_j^{m_j} p^{-m_2}\\
&=j!p^j\sum_{\boldsymbol{\pi}\in P_0(j)} \frac{1}{y_{\boldsymbol{\pi}}} \kappa^{m_2}\kappa_3^{m_3}\cdots \kappa_j^{m_j}\\
&=p^j B_j(0, \kappa, \kappa_3, \ldots, \kappa_j).
\end{align*} The generating function \eqref{BellGen} for the complete Bell polynomials implies
\begin{align*}
\sum_{j=0}^{\infty} B_j(0, \kappa, \kappa_3, \ldots, \kappa_j) \frac{(pt)^j}{j!}&=\exp\Big( \frac{1}{2}\kappa p^2t^2+\sum_{\ell=3}^{\infty} \kappa_{\ell} \frac{(pt)^{\ell}}{\ell !}\Big)\\
&=\exp\Big( \frac{1}{2}\kappa p^2t^2-\frac{1}{2}\kappa_2 p^2t^2+\sum_{\ell=2}^{\infty} \kappa_{\ell} \frac{(pt)^{\ell}}{\ell !}\Big)\\
&=\exp\Big( \frac{1}{2}\kappa p^2t^2-\frac{1}{2}\kappa_2 p^2t^2\Big)\exp\Big(K(pt)\Big)\\
&=\exp\Big(\frac{1}{2}p(1-p)\sigma^2 t^2\Big)M(pt)
\end{align*} in a neighborhood of $t=0$, where $M(t)$ and $K(t)$ denote the moment and cumulant generating functions of $X$, respectively. The method of moments concludes the proof of Theorem \ref{MainTheorem3}.
\section{Proof of Theorem \ref{MainTheorem4}}\label{Section7}
Suppose $X$ is a random variable defined on $(\Omega, \mathcal{F}, \mathbb{P}_{\Omega})$ which has a non-zero mean and admits a moment generating function. Let $X_1, X_2, \ldots, X_n\sim X$ be i.i.d.~ random variables. Denote $V_n(\boldsymbol{s})=W_n(\boldsymbol{s})/(\mu \sqrt{n})$ for brevity. Minkowski's inequality \cite[p.~242]{Billingsley} and the fact that $X_1, X_2, \ldots, X_n\sim X$ are identically distributed imply
\begin{align*}
\Big(\mathbb{E}_{\Omega}|V_n(\boldsymbol{s})|^j\Big)^{1/j}&=\mu^{-1}n^{-3/2}\Big(\mathbb{E}_{\Omega}|s_1X_1+s_2X_2+\cdots +s_nX_n|^j\Big)^{1/j}\\
&=\mu^{-1}n^{-3/2} \norm{ s_1X_1+s_2X_2+\cdots +s_nX_n}_j\\
&\leq \mu^{-1}n^{-3/2}\norm{X}_j \sum_{k=1}^n s_k.
\end{align*} The Cauchy-Schwarz inequality yields
\begin{align*}
\Big(\sum_{k=1}^n s_k\Big)^2=\Big(\sum_{k=1}^n 1\cdot s_k\Big)^2\leq n \sum_{k=1}^n s_k^2=n\mathpzc{p}_2(\boldsymbol{s}).
\end{align*} Appealing to the inequality $\mathpzc{p}_2(\boldsymbol{s})=\mathpzc{p}_2(\boldsymbol{\lambda})=2m\leq n(n-1)$ now implies that there exists a constant $c$, which is independent of $n$, for which
\begin{align}
\mathbb{E}_{\Omega}|V_n(\boldsymbol{s})|^j\leq \big(\mu^{-1}n^{-3/2}\big)^j\norm{X}_j^j\big(\sqrt{2nm}\big)^j\leq c.\label{UniformEnergy}
\end{align} Therefore, $\mathbb{E}_{\mathcal{G}}\mathbb{E}_{\Omega} |V_n(\boldsymbol{s})|^j$ is finite and the Fubini-Tonelli theorem ensures
\begin{align}
\mathbb{E}\big[V_n(\boldsymbol{s})\big]^j=\mathbb{E}_{\mathcal{G}} \mathbb{E}_{\Omega} \big[V_n(\boldsymbol{s})\big]^j,\label{FubiniEnergy}
\end{align} in which $\mathbb{E}[\cdot]$ denotes expectation with respect to the product measure $\mathbb{P}_{\mathcal{G}}\otimes \mathbb{P}_{\Omega}$.
Proposition \ref{Proposition:AsymptoticEnergy} and the uniform boundedness of the variables $\mathbb{E}_{\Omega} |V_n(\boldsymbol{s})|^j$ now allow us to compute the limit for the total expectation of $[V_n(\boldsymbol{s})]^j$, which we now highlight. Define $E_j=\{ G\,:\, \big(\mu^{-1}n^{-3/2}\big)^jf_j(\boldsymbol{s})=\alpha_j\}$, where $\alpha_j=\alpha_j(n)$ is a real number to be chosen momentarily. Relation \eqref{FubiniEnergy} implies
\begin{align*}
\mathbb{E}[V_n(\boldsymbol{s})]^j&=\mathbb{E}_{\mathcal{G}}\big[\big(\mu^{-1}n^{-3/2}\big)^j f_j(\boldsymbol{s})\big]\\
&=\int_{\mathcal{G}} \big(\mu^{-1}n^{-3/2}\big)^j f_j(\boldsymbol{s}) \,d\mathbb{P}_{\mathcal{G}}\\
&=\alpha_j \mathbb{P}_{\mathcal{G}}( E_j)+\int_{E_j^c} \big(\mu^{-1}n^{-3/2}\big)^j f_j(\boldsymbol{s})\,d\mathbb{P}_{\mathcal{G}}.
\end{align*} The inequality $\big(\mu^{-1}n^{-3/2}\big)^j| f_j(\boldsymbol{s})|\leq \mathbb{E}_{\Omega} |V_n(\boldsymbol{s})|^j,$ together with \eqref{UniformEnergy}, implies that
\begin{align*}
\Big\vert\int_{E_j^c}\big(\mu^{-1}n^{-3/2}\big)^j f_j(\boldsymbol{s})\,d\mathbb{P}_{\mathcal{G}}\Big\vert\leq \int_{E_j^c} \mathbb{E}_{\Omega} |V_n(\boldsymbol{s})|^j\,d\mathbb{P}_{\mathcal{G}}\leq c \mathbb{P}_{\mathcal{G}}(E_j^c).
\end{align*} Therefore,
\begin{align*}
\lim_{n\to \infty} \mathbb{E}[V_n(\boldsymbol{s})]^j=\lim_{n\to \infty}\Big( \alpha_j \mathbb{P}_{\mathcal{G}}( E_j)\Big)
\end{align*} provided $\mathbb{P}_{\mathcal{G}}(E_j)\to 1$ as $n\to \infty$ so that $\mathbb{P}_{\mathcal{G}}(E_j^c)\to 0$ as $n\to \infty$. We now choose $\alpha_j$ according to Proposition \ref{Proposition:AsymptoticEnergy} to conclude
\begin{align}
\lim_{n\to \infty} \mathbb{E}\big[V_n(\boldsymbol{s})\big]^j=\Big( \frac{8\sigma_p}{3\pi}\Big)^j.\label{MomentMethodEnergy1}
\end{align} Finally, the series
\begin{align*}
\sum_{j=0}^{\infty} \Big( \frac{8\sigma_p}{3\pi}\Big)^j\frac{t^j}{j!}=\exp\Big( \frac{8\sigma_p}{3\pi}t \Big)
\end{align*} corresponds to the moment generating function of a point mass at $8\sigma_p/(3\pi)$. The method of moments concludes the proof of Theorem \ref{MainTheorem4}.
\section{Closing Remarks and Open Questions}\label{Section8}
There are several possible paths to take with this project. However, none of these paths appear to be particularly easy. We argue, without intending to understate its potential difficulty, that a natural step forward is to replace the weights appearing in \eqref{EigenSequence} with the eigenvalues for the Laplacian and signless Laplacian of a random graph \cite[Ch.~7]{Cvetkovic}. We recall that the \emph{Laplacian} of a simple graph $G$ of order $n$ is the symmetric $n\times n$ matrix $L(G)$ defined by
\begin{align*}
L(G)=D(G)-A(G),
\end{align*} where $D(G)$ denotes the \emph{degree matrix of $G$} defined by setting $[D(G)]_{ii}$ equal to the degree of vertex $i$ and $[D(G)]_{ij}=0$ when $i\neq j$. The \emph{signless Laplacian} of a simple graph $G$ of order $n$ is the symmetric $n\times n$ matrix $Q(G)$ defined by
\begin{align*}
Q(G)=D(G)+A(G).
\end{align*} If $G$ is a random graph, then the upper-triangular entries of $L(G)$ and $Q(G)$ satisfy the hypotheses of the theorems in Section \ref{Section3}. Unfortunately, the vertex degrees of a random graph, while not highly correlated, are not independent. The theorems of Section \ref{Section3}, therefore, do not apply. A result due to Bryc, Dembo and Jiang on spectral measures of large Markov matrices \cite[Thm.~1.3]{Bryc}, however, can likely be adapted to handle this new situation. Lastly, we remark that the Laplacian and signless Laplacian matrices are positive semi-definite and therefore have non-negative eigenvalues. This motivates the following.
\begin{problem}
Find analogs of Theorems \ref{MainTheorem3} and \ref{MainTheorem4} for the Laplacian and signless Laplacian spectra of Erd\H{o}s-R\'enyi-Gilbert graphs.
\end{problem}
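As a starting point for numerical experiments with this problem, the following Python sketch (assuming only NumPy) samples a $\mathcal{G}(n,p)$ graph and verifies the positive semi-definiteness noted above.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

n, p = 500, 0.3
A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = A + A.T                              # adjacency matrix of G(n,p)
D = np.diag(A.sum(axis=1))               # degree matrix
L, Q = D - A, D + A                      # Laplacian, signless Laplacian
print(np.linalg.eigvalsh(L).min())       # >= 0 up to round-off
print(np.linalg.eigvalsh(Q).min())       # >= 0 up to round-off
\end{verbatim}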
Perhaps the most natural question to consider pertains to the assumptions imposed on $X_1, X_2, \ldots, X_n$. Ultimately, the sub-gaussian assumption in Theorems \ref{MainTheorem2} and \ref{MainTheorem3} is needed to ensure the partial moments of $W_n(\boldsymbol{a})$ are uniformly bounded over all graphs. This uniform boundedness is crucial for evaluating the limit of $\mathbb{E}[W_n(\boldsymbol{a})]^j$. The authors leave it as an interesting task to try and drop the sub-gaussian assumption in these theorems.
\begin{problem}
Can we relax the sub-gaussian condition imposed on the random variables $X_1, X_2, \ldots, X_n$ appearing in Theorems \ref{MainTheorem2} and \ref{MainTheorem3}?
\end{problem}
\medskip\noindent\textbf{Acknowledgments.} The authors thank Stephan Ramon Garcia and Ken McLaughlin for useful comments and suggestions in the preparation of this manuscript.
\section{Introduction}
A well documented, stylized fact of the asymmetric volatility phenomenon (AVP) indicates that volatility on financial markets is higher (lower) following market downturns (upturns).\footnote{See for example \cite{black1976,christie1982stochastic,pindyck1984,french1987expected}} However, AVP is not an isolated feature because volatility spills over across assets and markets quickly and its extent is captured by the volatility connectedness \citep{diebold2015financial}. While AVP has been studied intensively, asymmetries in volatility spillovers have not received enough attention despite the fact that volatility spillovers impact portfolio diversification strategies, portfolio management \citep{GarciaTsafack2011, aboura2014cross}, options and hedging strategies \citep{jayasinghe2008exchange,james2012handbook}. In this paper we do not analyze the AVP itself but instead investigate asymmetries in volatility spillovers. Recently, asymmetric volatility connectedness was documented among a set of U.S. stocks \citep{barunik2016asymmetric} and oil commodities \citep{barunik2015}, but so far there is virtually no evidence related to forex markets. In this paper we generalize a quantification of asymmetric volatility connectedness and apply it to forex markets. The economic importance of our analysis rests in the fact that we can learn in detail the dynamics of the asymmetries in volatility spillovers. Such an assessment cannot be obtained from earlier work because no established procedure provides a comparable degree of detail and accuracy against which we could benchmark our results.
Our analysis is motivated by relevant questions arising with respect to spillovers in the forex markets. Do asymmetries in volatility spillovers exist among currencies? If they do, in what manner do they propagate? One currency might be prone to attract volatility spillovers in a manner different from another currency. Hence, is the extent and direction of spillover transmission among currencies uniform or dissimilar? And are the asymmetries in volatility spillovers and their directions uniform with regard to currencies, timing, and potential underlying factors, or do they exhibit differences?
The above questions are not trivial because the forex market differs from other financial markets in a number of ways. First, 24-hour operation across continents makes the forex market a truly global market with expansive information flow. Second, the forex market exhibits a very high degree of integration, especially for key currencies \citep{kitamura2010testing}. Third, the daily forex market turnover is in multiples of trading volumes on capital markets \citep{bis2013a}.\footnote{According to the latest Triennial Central Bank Survey issued by the Bank for International Settlements \citep[p.3]{bis2013a}, ``trading in foreign exchange markets averaged \$5.3 trillion per day in April 2013. This is up from \$4.0 trillion in April 2010 and \$3.3 trillion in April 2007.'' To contrast the above figures with trading volumes on capital markets, the global value of share trading in 2013 was \$55 trillion and represents a 12\% increase with respect to 2012 \cite[p.2]{wfe2014}. Still, with 251 trading days a year on average, daily share trading volume in 2013 represents about \$219 billion, a figure that is dwarfed by the turnover of the forex market.} Fourth, exchange rates of currency pairs are affected by monetary policies and interventions more than stocks and bonds. Notably, an increase or decrease in a differential between two (central bank) policy interest rates results (via monetary and economic channels) in a subsequent appreciation or depreciation of the specific currencies \citep{Taylor2001,devereux2003monetary,dick2015exchange}. The degree of uncertainty about monetary policies also affects exchange rate volatility and its spillovers. Fifth, central bank interventions often successfully impact the level and volatility of exchange rates, especially in emerging markets \citep{menkhoff2013foreign,fratzscher2015foreign}. Finally, it has been shown that the volatility connectedness of the forex market increased only mildly following the 2007 financial crisis and is also more stable when compared to other market segments such as trading stocks or bonds \citep[p.164]{diebold2015financial}.
Due to the above differences and to the unique features of the forex market, volatility spillovers among currencies might propagate and affect currencies' portfolios in less-than-intuitive ways. As \cite{Kanas2001} argues, positive and significant volatility spillovers may increase the nonsystematic risk that diminishes gains from international portfolio diversification -- this is even more important in light of the evidence that systematic volatility plays a dominant role in volatility spillovers among the world currencies \citep{greenwoodrisk}. In addition, \cite{Amonlirdviman2010} explicitly show that the asymmetry in the correlations of returns decreases the gains from international portfolio diversification. Based on this evidence it is reasonable to hypothesize that qualitative differences in shocks might produce qualitatively different volatility spillovers. In plain words, volatility due to positive or negative returns might induce differing volatility spillovers within a portfolio of currencies.
To the best of our knowledge there are almost no studies addressing the issue of asymmetries in foreign exchange volatility spillovers (asymmetric forex volatility connectedness). The exception is \cite{galagedera2012effect}, who model the interaction between returns and volatility in an autoregressive five-equation system and account for asymmetries in spillovers. They show that during the subprime crisis, depreciation of the U.S. dollar against the yen has a greater impact on U.S. dollar-yen volatility spillover than appreciation. On the other hand, the appreciation and depreciation of the U.S. dollar against the euro does not appear to have an asymmetric effect on euro-U.S. dollar volatility spillover. However, while we fully acknowledge the effort of this study, the methodological approach adopted imposes limits on its ability to capture the dynamics of asymmetries in volatility spillovers.
Connectedness measures based on network models seem to answer the need to improve the detection and measurement of spillovers along with their dynamics \citep{diebold2014network}. In their seminal work, \cite{diebold2009measuring} developed a volatility spillover index (the DY index) based on forecast error variance decompositions from vector autoregressions (VARs) to measure the extent of volatility transfer among markets. This methodology has been further improved in \cite{diebold2012better}, who used a generalized VAR framework in which forecast-error variance decompositions are invariant to variable ordering. The DY index is a versatile measure allowing dynamic quantification of numerous aspects of volatility spillovers. An important input to compute the DY index is realized variance that, however, does not allow accounting for asymmetries in volatility spillovers. On the other hand, the realized semivariances introduced by \cite{shephard2010measuring} enable one to isolate and capture negative and positive shocks to volatility and thus are ideally suited to interpreting qualitative differences in volatility spillovers.\footnote{The technique was quickly adopted in several recent contributions, see e.g. \cite{fenou2013,patton2014good,segal2015good}. Full details on the DY index and realized semivariances are provided in Section \ref{sec:metodology}.}
\cite{barunik2016asymmetric} combine the ideas of both the DY index and realized semivariances and devise a way to measure asymmetries in volatility spillovers that are due to qualitatively different, positive or negative, returns. We modify their approach to better account for the transfer of spillovers on the forex market. Instead of using two separate $N$-dimensional VAR systems to measure asymmetries, we suggest a general framework where the negative and positive realized semivariances are in one system. Thus, we propose a $2N$-dimensional VAR resulting in a $2N\times 2N$ system of forecast error variance decompositions. The above modification results in a versatile measure allowing dynamic quantification of asymmetric connectedness.\footnote{Full details of the formal exposition are provided in Section \ref{sec:metodology}.} We then empirically apply our generalized framework to the forex data. For the purpose of verbal interpretation we adopt the terminology established in the literature \citep{patton2014good,segal2015good} to distinguish asymmetry in spillovers that originates due to qualitatively different uncertainty: bad uncertainty is defined as the volatility associated with negative innovations to quantities (e.g., output, returns) and good uncertainty as the volatility associated with positive shocks to these variables. We follow this terminology and label our spillovers as bad and good volatility spillovers (or negative and positive spillovers).
Hence, in our paper we provide two distinct contributions. First, we generalize the framework and modify the spillover asymmetry measure (SAM) introduced in \cite{barunik2016asymmetric} in order to isolate asymmetries in volatility spillovers among currencies on the forex market. Second, we then apply the method to analyzing asymmetries in volatility spillovers among major world currencies during specific periods of the global financial crisis and afterward. In doing so, we provide detailed results that are not available in any earlier study related to the researched topic. Specifically, we document the dominating asymmetries in spillovers that are due to bad rather than good volatility. We also show that negative spillovers are chiefly tied to the dragging sovereign debt crisis in Europe while positive spillovers are correlated with the subprime crisis, different monetary policies among key world central banks, and developments on commodities markets. It seems that a combination of monetary and real-economy events is behind the net positive asymmetries in volatility spillovers, while fiscal factors are linked with net negative spillovers.
The rest of the paper is organized in the following way. In Section \ref{sec:lit} we provide an overview of the literature related to forex volatility spillovers. In Section \ref{sec:metodology} we formally introduce the methodological approach and formulate testable hypotheses. Forex data are described in Section \ref{sec:Data} and in Section \ref{sec:results} we detail our results along with inferences and comments. Finally, conclusions are offered in Section \ref{sec:conclusion}.
\section{Literature review \label{sec:lit}}
Analyses of volatility spillovers date back to \cite{engle1990meteor}, who showed that intra-day volatility on the forex market spills over across markets (the meteor shower hypothesis) rather than remaining country-specific (the heat wave hypothesis). Later, \cite{baillie1991intra} did not find enough evidence for systematic volatility spillovers among exchange rates while \cite{hong2001test} did find such evidence, including directional spillovers from the former Deutsche mark to the Japanese yen. \cite{melvin2003global} used a non-parametric approach, analyzed the same pair of currencies across regions (Asia, Asia-Europe overlap, Europe, Europe-America overlap, America), and provided evidence of both intra- and inter-regional spillovers, with intra-regional volatility spillovers being stronger. Similar evidence of volatility spillovers is given by \cite{cai2008informational}, who analyze spillovers in the euro-dollar and dollar-yen pairs across five trading regions. They find informational linkages to be statistically significant at both the own-region and inter-region levels, but volatility spillovers within a region dominate in terms of economic significance. \cite{kitamura2010testing} employs an MGARCH model, analyzes intra-day interdependence and volatility spillovers, and demonstrates that volatility spillovers from the euro significantly affect the Swiss franc and Japanese yen; the analysis is limited to the period July 2008 -- July 2009, though.
Network models analyzing connectedness have been gradually employed in the economic and financial literature but their application on forex markets is still limited. Some recent contributions provide quite specific results that are derived from the application of the DY index or build upon this concept. \cite[Chapter 6]{diebold2015financial} analyze the exchange rates of nine major currencies with respect to the U.S. dollar (USD) over 1999 to mid-2013. They show that forex market connectedness increased only mildly following the 2007 financial crisis: it exhibits numerous more and less pronounced cycles, but it is not linked to a business cycle. Directional volatility spillovers differ among currencies considerably. As both the U.S. dollar and the euro are the leading vehicle currencies of the global forex market, the EUR/USD exchange rate exhibits the highest volatility connectedness among all analyzed currencies.
\cite{greenwoodrisk} generalize the connectedness framework and analyze risk-return spillovers among the G10 currencies between 1999 and 2014 and find that spillover intensity is countercyclical and volatility spillovers across currencies increase during crisis times. Similarly, \cite{bubak2011volatility} document statistically significant intra-regional volatility spillovers among the European emerging foreign exchange markets and show that volatility spillovers tend to increase in periods characterized by market uncertainty, especially during the 2007 -- 2008 financial crisis. Further, \cite{mcmillan2010return} document the existence of volatility spillovers among the exchange rates of the U.S. dollar, British pound, and Japanese yen with respect to the euro and show dominating effects coming from the U.S. dollar. Finally, \cite{antonakakis2012exchange} analyzes volatility spillovers among major currencies before and after the introduction of the euro and shows that the euro (Deutsche mark) is the dominant net transmitter of volatility, while the British pound is the dominant net receiver of volatility in both periods.
Among analyses that combine the assessment of volatility spillovers on the forex and other financial markets, the most frequent are those analyzing volatility interactions between the forex and stock markets. \cite{grobys2015volatility}, employing the DY index, finds very little evidence of volatility spillovers during quiet economic development but a high level of total volatility spillovers following periods of economic turbulence. A similar conclusion is found by \cite{do2015realized}, who also emphasize that it is important to account for volatility spillover information transmission, especially during turbulent periods. Further, significant directional spillovers are identified between the forex and stock markets in several studies targeting developed and emerging markets \citep{do2016stock,andreou2013stock,kumar2013returns,Kanas2001} or specific countries or regions including the U.S. \citep{ito2015high}, Japan \citep{jayasinghe2008exchange}, China \citep{zhao2010dynamic}, the Middle East, and North Africa \citep{arfaoui2015return}.
Finally, some studies analyze interactions and volatility spillovers between the forex market and various segments of financial markets, such as stocks and bonds \citep{clements2015volatility}, commodities \citep{salisu2013modeling}, or stocks, bonds and commodities \citep{diebold2009measuring,duncan2013domestic,aboura2014cross,ghosh2014volatility}. However, the effects of asymmetries in volatility spillovers are analyzed in none of them.
\section{Measuring asymmetric volatility spillovers \label{sec:metodology}}
Seminal papers by \cite{diebold2009measuring} and \cite{diebold2012better}, along with other related studies, estimate volatility spillovers on daily (or weekly) high, low, opening, and closing prices. Estimators based on daily data offer, in general, good approximations of volatility. However, the low sampling frequency imposes some limitations. Having high-frequency data, we estimate volatility with convenient realized volatility estimators. Furthermore, to account for volatility spillover asymmetries, we follow \cite{barunik2015,barunik2016asymmetric}, who use the realized semivariance framework of \cite{shephard2010measuring}, which offers the possibility to decompose volatility spillovers into those due to negative and positive returns. The quantification of asymmetric volatility spillovers with realized semivariances was first employed in \cite{barunik2015}, where the authors define measures using two separate VAR systems for negative and positive semivariances. In this paper, to estimate asymmetric volatility spillovers, we define a more general approach with a single VAR system employing volatility spillovers from both negative and positive returns.
In this section, we first introduce the two existing concepts of total and directional spillovers from \cite{diebold2012better}, and then we describe a simple way to use realized semivariances in order to capture asymmetric volatility spillovers. In order to keep our description on a general level, we will label variables as assets.
\subsection{Measuring volatility spillovers \label{sec:SI}}
The volatility spillover measure introduced by \cite{diebold2009measuring} is based on a forecast error variance decomposition from vector autoregressions (VARs). The forecast error variance decomposition traces how much of the $H$-step-ahead forecast error variance of a variable $i$ is due to innovations in another variable $j$, thus it provides an intuitive way to measure volatility spillovers. For $N$ assets, we consider an $N$-dimensional vector of realized volatilities, $\mathbf{RV_t} = (RV_{1t},\ldots,RV_{Nt})'$, to measure total volatility spillovers. In order to measure asymmetric volatility spillovers, we decompose daily volatility into negative (and positive) semivariances that provide a proxy for downside (and upside) risk. Using semivariances allows us to measure the spillovers from bad and good volatility and test whether they are transmitted in the same magnitude \citep{barunik2016asymmetric}. In this case we use a $2N$-dimensional vector, $\mathbf{RS_t} = (RS^{-}_{1t},\ldots,RS^{-}_{Nt},RS^{+}_{1t},\ldots,RS^{+}_{Nt})'$, consisting of negative and positive semivariances.
We start describing the procedure for the $N$-dimensional vector $\mathbf{RV_t} = (RV_{1t},\ldots,RV_{Nt})'$ and later extend the framework to accommodate realized semivariance.
Let us model the $N$-dimensional vector $\mathbf{RV_t}$ by a weakly stationary vector autoregression VAR($p$) as:
\begin{equation}
\label{RV}
\mathbf{RV_t} = \sum_{i=1}^p \mathbf{\Phi}_i \mathbf{RV}_{t-i}+ \boldsymbol{\epsilon}_t,
\end{equation}
where $\boldsymbol{\epsilon}_t\sim N(0,\mathbf{\Sigma}_{\epsilon})$ is a vector of $iid$ disturbances and $\mathbf{\Phi}_i$ denotes $p$ coefficient matrices. For the invertible VAR process, the moving average representation has the following form:
\begin{equation}
\mathbf{RV}_t = \sum_{i=0}^{\infty}\mathbf{\Psi}_{i}\boldsymbol{\epsilon}_{t-i}.
\end{equation}
The $N\times N$ matrices holding coefficients $\mathbf{\Psi}_i$ are obtained from the recursion $\mathbf{\Psi}_i = \sum_{j=1}^p\mathbf{\Phi}_j \mathbf{\Psi}_{i-j}$, where $\mathbf{\Psi}_0=\mathbf{I}_N$ and $\mathbf{\Psi}_i = 0$ for $i<0$. The moving average representation is convenient for describing the VAR system's dynamics since it allows disentangling the forecast errors. These are further used for the computation of the forecast error variances of each variable in the system, which are attributable to various system shocks. However, the methodology has its limitations as it relies on the Cholesky-factor identification of VARs. Thus, the resulting forecast variance decompositions can be dependent on variable ordering. Another important shortcoming is that it allows measuring total spillovers only. Therefore, \cite{diebold2012better} use the generalized VAR of \cite{koop1996impulse} and \cite{pesaran1998generalized} to obtain forecast error variance decompositions that are invariant to variable ordering in the VAR model and that also explicitly allow measuring directional volatility spillovers.\footnote{The generalized VAR allows for correlated shocks, hence the shocks to each variable are not orthogonalized.}
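For illustration only, the recursion for the moving average matrices can be implemented in a few lines of Python with NumPy. This is a minimal sketch of our own, not part of any estimation codebase; the function and variable names are illustrative, and the VAR coefficient matrices $\mathbf{\Phi}_i$ are assumed to have been estimated beforehand (e.g., by equation-by-equation OLS):
\begin{verbatim}
import numpy as np

def ma_coefficients(Phi, n_steps):
    """Moving average matrices Psi_0, ..., Psi_{n_steps-1} from
    VAR(p) coefficients via Psi_i = sum_j Phi_j Psi_{i-j}."""
    p = len(Phi)                 # VAR lag order
    N = Phi[0].shape[0]          # system dimension
    Psi = [np.eye(N)]            # Psi_0 = I_N
    for i in range(1, n_steps):
        acc = np.zeros((N, N))
        for j in range(1, p + 1):
            if i - j >= 0:       # Psi_i = 0 for i < 0
                acc += Phi[j - 1] @ Psi[i - j]
        Psi.append(acc)
    return Psi
\end{verbatim}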
\subsubsection{Total spillovers\label{sec:tot}}
In order to define the total spillover index of \cite{diebold2012better}, we consider: (i) assets' own variance shares as fractions of the $H$-step-ahead error variances in forecasting the $i$th variable that are due to the assets' own shocks to $i$ for $i=1,\ldots,N$ and (ii) cross variance shares, or spillovers, as the fractions of the $H$-step-ahead error variances in forecasting the $i$th variable that are due to shocks to the $j$th variable, for $i,j=1,\ldots,N$, $i\ne j$. Then, the $H$-step-ahead generalized forecast error variance decomposition matrix $\Omega$ has the following elements for $H=1,2,\ldots$:
\begin{equation}
\omega_{ij}^H=\frac{\sigma_{jj}^{-1}\sum_{h=0}^{H-1}\left( \mathbf{e}'_i \mathbf{\Psi}_h \mathbf{\Sigma}_{\epsilon}\mathbf{e}_j \right)^2}{\sum_{h=0}^{H-1}\left( \mathbf{e}'_i \mathbf{\Psi}_h \mathbf{\Sigma}_{\epsilon}\mathbf{\Psi}'_h\mathbf{e}_i \right)}, \hspace{10mm} i,j=1,\ldots, N,
\end{equation}
where $\mathbf{\Psi}_h$ are moving average coefficients from the forecast at time $t$; $\mathbf{\Sigma}_{\epsilon}$ denotes the variance matrix for the error vector, $\boldsymbol{\epsilon}_t$; $\sigma_{jj}$ is the standard deviation of the error term for the $j$th equation; $\mathbf{e}_i$ and $\mathbf{e}_j$ are the selection vectors, with one as the $i$th or $j$th element and zero otherwise.
As the shocks are not necessarily orthogonal in the generalized VAR framework, the sum of the elements in each row of the variance decomposition table is not equal to one. Thus, we need to normalize each element by the row sum as:
\begin{equation}
\widetilde{\omega}_{ij}^H = \frac{\omega_{ij}^H}{\sum_{j=1}^N \omega_{ij}^H}.
\end{equation}
\cite{diebold2012better} then define the total spillover index as the contribution of spillovers from volatility shocks across variables in the system to the total forecast error variance, hence:
\begin{equation}
\label{stot}
\mathcal{S}^H=100\times \frac{1}{N} \sum_{\substack{i,j=1\\ i\ne j}}^N\widetilde{\omega}_{ij}^H.
\end{equation}
Note that $\sum_{j=1}^N \widetilde{\omega}_{ij}^H=1$ and $\sum_{i,j=1}^N \widetilde{\omega}_{ij}^H=N$. Hence, the contributions of spillovers from volatility shocks are normalized by the total forecast error variance. To capture the spillover dynamics, we use a 200-day rolling window running from point $t-199$ to point $t$. Further, we set the forecast horizon $H=10$ and a VAR lag length of 2.\footnote{In addition, we constructed the spillover index with rolling windows of 150 and 100 days to check the robustness of our results. We have also experimented with different $H$ values, and we find that the results do not materially change and are robust with respect to the window and horizon selection. The VAR lag length was chosen based on AIC to produce the most parsimonious model.}
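To make the construction concrete, the following sketch (again our own illustration, reusing the \texttt{ma\_coefficients} function above) computes the row-normalized generalized variance decomposition matrix $\widetilde{\Omega}$ and the total spillover index; in the empirical part this computation is simply repeated on each 200-day rolling window:
\begin{verbatim}
def gfevd(Psi, Sigma, H):
    """Row-normalized H-step generalized forecast error
    variance decomposition matrix (omega tilde)."""
    num = np.zeros_like(Sigma)
    den = np.zeros(Sigma.shape[0])
    for h in range(H):
        PS = Psi[h] @ Sigma
        num += PS**2 / np.diag(Sigma)   # (e_i' Psi_h Sigma e_j)^2 / sigma_jj
        den += np.diag(Psi[h] @ Sigma @ Psi[h].T)
    Omega = num / den[:, None]
    return Omega / Omega.sum(axis=1, keepdims=True)

def total_spillover(Om):
    """Total spillover index in percent: off-diagonal share."""
    N = Om.shape[0]
    return 100.0 * (Om.sum() - np.trace(Om)) / N
\end{verbatim}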
\subsubsection{Directional spillovers \label{sec:dir}}
The total volatility spillover index indicates how shocks to volatility spill over all the assets. However, with the generalized VAR framework, we are able to identify directional spillovers using the normalized elements of the generalized variance decomposition matrix \citep{diebold2012better}. The directional spillovers are important, as they allow us to uncover the spillover transmission mechanism disentangling the total spillovers to those coming from or to a particular asset in the system.
Following \cite{diebold2012better} we measure the directional spillovers received by asset $i$ from all other assets $j$:
\begin{equation}
\mathcal{S}_{N,i\leftarrow\bullet}^H=100\times \frac{1}{N} \sum_{\substack{j=1\\ i\ne j}}^N\widetilde{\omega}_{ij}^H,
\end{equation}
i.e., we sum all numbers in row $i$, except the term on the diagonal that corresponds to the impact of asset $i$ on itself. The $N$ in the subscript denotes the use of an $N$-dimensional VAR. Conversely, the directional spillovers transmitted by asset $i$ to all other assets $j$ can be measured as the sum of the numbers in the column for the specific asset, again except the diagonal term:
\begin{equation}
\mathcal{S}_{N,i\rightarrow\bullet }^H=100\times \frac{1}{N} \sum_{\substack{j=1\\ i\ne j}}^N\widetilde{\omega}_{ji}^H.
\end{equation}
As we now have complete quantification of how much an asset receives (transmits), denoted as the direction from (to), we can compute how much each asset contributes to the volatility in other assets in net terms. The net directional volatility spillover from asset $i$ to all other assets $j$ is defined as the difference between gross volatility shocks transmitted to and received from all other assets:
\begin{equation}
\mathcal{S}^H_{N,i}=\mathcal{S}_{N,i\rightarrow\bullet }^H-\mathcal{S}_{N,i\leftarrow\bullet}^H.
\end{equation}
\subsection{Measuring asymmetric spillovers \label{sec:MAS}}
Taking advantage of high-frequency data, we can track the asymmetric behavior of volatility spillovers. In particular, we are able to distinguish spillovers from volatility due to negative returns and positive returns (bad and good volatility). Further, we are also able to distinguish directional volatility spillovers (in the direction TO) due to negative returns and positive returns.\footnote{We do not estimate directional volatility spillovers FROM as it is difficult to interpret these in the $2N\times 2N$ spillover matrix setting.} Following \cite{barunik2015} and \cite{barunik2016asymmetric}, we first disentangle daily realized volatility into negative and positive daily realized semivariances (for more details see the Appendix). The semivariances allow us to estimate volatility spillovers due to bad or good volatility and quantify asymmetries in spillovers. For $N$ assets, \cite{barunik2015} use two separate $N$-dimensional VAR systems to measure the asymmetries. In this paper, we propose a more general framework where negative and positive realized semivariances are employed in a single VAR. Thus, we estimate a $2N$-dimensional VAR, resulting in a $2N\times 2N$ system of forecast error variance decompositions.
As our empirical analysis, based on the described methodological approach, employs forex data, we will use the term currency (instead of asset) from now on. In order to obtain asymmetric volatility spillovers for $N$ currencies, we construct a VAR model (Eq. \ref{RV}), but we replace the vector of realized volatilities $\mathbf{RV_t} = (RV_{1t},\ldots,RV_{Nt})'$ with the $2N$-dimensional vector of negative and positive semivariances $\mathbf{RS_t} = (RS^{-}_{1t},\ldots,RS^{-}_{Nt},RS^{+}_{1t},\ldots,RS^{+}_{Nt})'$. Then the elements of the $2N\times 2N$ $H$-step-ahead generalized forecast error variance decomposition matrix $\Omega$ take the form:
\begin{equation}
\omega_{ij}^H=\frac{\sigma_{jj}^{-1}\sum_{h=0}^{H-1}\left( \mathbf{e}'_i \mathbf{\Psi}_h \mathbf{\Sigma}_{\epsilon}\mathbf{e}_j \right)^2}{\sum_{h=0}^{H-1}\left( \mathbf{e}'_i \mathbf{\Psi}_h \mathbf{\Sigma}_{\epsilon}\mathbf{\Psi}'_h\mathbf{e}_i \right)}, \hspace{10mm} i,j=1,\ldots, 2N,
\end{equation}
where $\mathbf{\Psi}_h$ denotes the moving average coefficient matrix from the forecast at time $t$; $\mathbf{\Sigma}_{\epsilon}$ is the variance matrix for the error vector $\boldsymbol{\epsilon}_t$; $\sigma_{jj}$ is the standard deviation of the error term for the $j$th equation; $\mathbf{e}_i$ and $\mathbf{e}_j$ are the selection vectors, with one as the $i$th or $j$th element and zero otherwise.
\subsubsection{Directional spillover asymmetry measure}
Standard directional spillovers give us an important insight about the volatility spillovers' structure among the studied currencies. However, we may benefit from realized semivariances to obtain more precise information about spillover behavior by defining a directional spillover asymmetry measure. The asymmetry is defined as the difference between the directional volatility spillover coming from a positive or negative semivariance. The standard directional spillovers are defined in Section \ref{sec:dir} for both directions, i.e. FROM and TO. However, in the case of asymmetry we define only the direction TO as its interpretation is straightforward in the $2N\times 2N$ spillover matrix setting while the interpretation of FROM is quite vague. Specifically, we define directional asymmetries in volatility spillovers coming from a specific currency TO the rest of the currencies under research.
In Table \ref{tab:spillsetting} we show the elements of the $2N\times 2N$ $H$-step-ahead generalized forecast error variance decomposition matrix $\Omega$ for a specific case of the six currencies we analyze (at this moment we refrain from introducing the currencies and leave details to Section \ref{sec:Data}). To compute directional spillovers, in the direction TO, we sum the corresponding column of the $2N \times 2N$ spillover matrix (Table \ref{tab:spillsetting}) excluding the own share on the main diagonal, $i\neq j$, and the two diagonals in the $N \times N$ block sub-matrices (lower left and upper right), i.e., $\vert i-j\vert \neq N$. All excluded numbers are highlighted in bold, hence for every column we sum $2N-2$ numbers. We define the directional spillover from a currency $i$ to all other currencies as:
\begin{equation}
\mathcal{S}^H_{2N,i\rightarrow\bullet }=100\times \frac{1}{2N} \sum_{\substack{j=1, j\ne i\\ \vert i-j\vert \neq N}}^{2N}\widetilde{\omega}_{ji}^H, \hspace{10mm} i=1,\ldots, 2N.
\end{equation}
Based on directional spillovers, we can now introduce the net asymmetric directional spillovers that measure how shocks from bad and good volatility to one currency affect the volatility of all other currencies. Let us define the directional spillover asymmetry measure as the difference of the response to a shock from bad or good volatility from currency $i$ to other currencies. Thus, for currency $i$ we subtract the effect of the $i$-th column of the spillover matrix (due to bad volatility) from the effect of the $(N+i)$-th column (due to good volatility), i.e.,
\begin{equation}
\label{samd}
\mathcal{SAM}^H_{2N,i\rightarrow\bullet} = \mathcal{S}^H_{2N,(i+N)\rightarrow\bullet } - \mathcal{S}^H_{2N,i\rightarrow\bullet }, \hspace{10mm} i=1,\ldots, N.
\end{equation}
If the $\mathcal{SAM}^H_{2N,i\rightarrow\bullet}$ is negative (positive), then we observe a stronger effect of the bad (good) volatility of currency $i$ on the other currencies. Again, to capture the time-varying nature of spillovers, we use a 200-day moving window running from point $t-199$ to point $t$.
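A sketch of this computation in the $2N$-dimensional system follows (our illustration; the ordering of the $\mathbf{RS_t}$ vector, negative semivariances first, matches the text above):
\begin{verbatim}
def directional_sam(Om2N, N):
    """Directional TO spillovers in the 2N-system and the
    per-currency SAM = S_{(i+N)->.} - S_{i->.} (good - bad)."""
    idx = np.arange(2 * N)
    mask = np.ones((2 * N, 2 * N), dtype=bool)
    mask[idx, idx] = False                  # main diagonal
    mask[idx[:N], idx[:N] + N] = False      # |i - j| = N terms
    mask[idx[:N] + N, idx[:N]] = False
    s_to = 100.0 * (Om2N * mask).sum(axis=0) / (2 * N)
    return s_to[N:] - s_to[:N]
\end{verbatim}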
\subsubsection{Spillover asymmetry measure\label{samsec}}
While the spillover asymmetry measure defined by Equation (\ref{samd}) gives us detailed information about the extent of asymmetry only for one currency, we can now define a measure that describes the volatility spillover asymmetry for the whole system (portfolio) of currencies. The idea of a spillover asymmetry measure ($\mathcal{SAM}$) was introduced in \cite{barunik2015} -- however, we extend their approach by using all available volatility spillovers in one $2N$-dimensional VAR model.\footnote{The subscript $2N$ in the spillover asymmetry measure and the directional measures denotes that a $2N$-dimensional VAR model was used for spillover computation.} We define the spillover asymmetry measure with an $H$-step-ahead forecast at time $t$, $\mathcal{SAM}^H_{2N}$, as the difference between volatility spillovers due to positive and negative returns for all $N$ currencies:
\begin{equation}
\label{eq:sam}
\mathcal{SAM}^H_{2N}=\sum_{i=N+1}^{2N} \mathcal{S}^H_{2N,i\rightarrow\bullet } - \sum_{i=1}^{N} \mathcal{S}^H_{2N,i\rightarrow\bullet }.
\end{equation}
The $\mathcal{SAM}^H_{2N}$ helps us to better understand the behavior of volatility spillovers for a given portfolio of assets. In case there is no spillover asymmetry, spillovers coming from $RS^-$ and $RS^+$ are equal, thus $\mathcal{SAM}^H_{2N}$ takes the value of zero. However, when $\mathcal{SAM}^H_{2N}$ is negative (positive), spillovers coming from $RS^-$ are larger (smaller) than those from $RS^+$. In order to test the null hypothesis of symmetric connectedness, we use bootstrap confidence intervals constructed as described by \cite{barunik2016asymmetric}.
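Under the ordering conventions above, the system-wide measure in Equation (\ref{eq:sam}) is simply the sum of the per-currency directional asymmetries, so the sketch from the previous subsection can be reused directly:
\begin{verbatim}
# SAM for the whole portfolio of N currencies
sam_total = directional_sam(Om2N, N).sum()
\end{verbatim}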
\subsection{Hypotheses}
The previous definitions of $\mathcal{SAM}$ and the directional $\mathcal{SAM}$ (D -- $\mathcal{SAM}$) help us to better understand the behavior of volatility spillovers for a given portfolio of currencies. In case there is no spillover asymmetry, spillovers coming from $RS^-$ and $RS^+$ are equal and the $\mathcal{SAM}$ and D -- $\mathcal{SAM}$ take the value of zero. However, when the $\mathcal{SAM}$ and D -- $\mathcal{SAM}$ are negative (positive), spillovers coming from $RS^-$ are larger (smaller) than those from $RS^+$. This pattern would then clearly indicate the existence and extent of asymmetries in volatility spillovers. Following our exposition in Section \ref{sec:MAS}, we formulate several testable hypotheses of symmetric connectedness to test for the presence of potential asymmetries in volatility spillovers (asymmetric volatility connectedness) among currencies.
Hypothesis 1: Volatility spillovers in the portfolio of currencies do not exhibit asymmetries.
Formally, Hypothesis 1 is formulated as:
$$\begin{array}{ccccccc}
\mathcal{H}^1_0: &&\mathcal{SAM}^H_{2N} = 0& \text{against} & \mathcal{H}_A^1: && \mathcal{SAM}^H_{2N} \ne 0.\\
\end{array}$$
Hypothesis 2: No directional volatility spillovers coming from either $RS^-$ or $RS^+$ are transmitted from one currency to the rest of the currencies in a portfolio. Formally, Hypothesis 2 is formulated as:
$$\begin{array}{ccccccc}
\mathcal{H}^2_0: && \mathcal{S}^H_{2N,i\rightarrow\bullet } = 0 & \text{against} & \mathcal{H}_A^2: && \mathcal{S}^H_{2N,i\rightarrow\bullet } \ne 0 \hspace{4mm} (i=1,\ldots, 2N).\\
\end{array}$$
Hypothesis 3: Volatility spillovers transmitted from one currency do not exhibit an asymmetric impact on the volatility of the other currencies in portfolio.
Formally, Hypothesis 3 is formulated as:
$$\begin{array}{ccccccc}
\mathcal{H}^3_0: &&\mathcal{SAM}^H_{2N,i\rightarrow\bullet} = 0& \text{against} & \mathcal{H}_A^3: && \mathcal{SAM}^H_{2N,i\rightarrow\bullet} \ne 0.\\
\end{array}$$
Rejecting a null hypothesis means that bad and good volatility matter for spillover transmission in terms of magnitude as well as direction. Moreover, we assume that the values of the volatility spillover indices differ over time. To capture the time-varying nature of the potential asymmetries, we compute the indices using a 200-day moving window that runs from point $t-199$ to point $t$; more details were provided in Section \ref{sec:tot}. In order to test the null hypotheses of symmetric connectedness, we use bootstrap confidence intervals constructed as described by \cite{barunik2016asymmetric}.
\section{Data and dynamics \label{sec:Data}}
In this paper we compute volatility spillover measures on the foreign exchange future contracts of six currencies over the period from January 2007 to December 2015. The currencies are the Australian dollar (AUD), British pound (GBP), Canadian dollar (CAD), euro (EUR), Japanese yen (JPY), and Swiss franc (CHF). All these currency contracts are quoted against the U.S. dollar (USD) and this is a typical approach in the forex literature (any potential domestic (U.S.) shocks are integrated into all currency contracts). The currencies under research constitute a group of the most actively traded currencies globally \citep{bis2013a,antonakakis2012exchange} and this is the reason for our choice: we aim at analyzing asymmetric connectedness among the currencies that constitute two thirds of the global forex turnover by currency pair \citep{bis2013a}; we do not pursue assessment of less traded currencies at the moment.
The foreign exchange future contracts are traded on the Chicago Mercantile Exchange (CME) on a nearly 24-hour basis and transactions are recorded in Chicago time (CST). Trading activity starts at 5:00 pm CST and ends at 4:00 pm CST. To exclude potential jumps due to the one-hour gap in trading, we redefine the day in accordance with the electronic trading system. Furthermore, we eliminate transactions executed on Saturdays and Sundays, U.S. federal holidays, December 24 to 26, and December 31 to January 2, because of the low activity on these days, which could lead to estimation bias. The data are available from Tick Data, Inc.\footnote{http://www.tickdata.com/}
In Figure \ref{Fig1} we plot the exchange rates of all six currencies (EUR, JPY, GBP, AUD, CHF, CAD). Each plot is labelled by the three-letter international code of the specific currency and exhibits the dynamics of the currency’s price in terms of the U.S. dollar over the sample period. The dynamics of the exchange rates differ remarkably and only the two commodity currencies (AUD and CAD) share an overall common pattern. Still, all six currencies exhibit depreciation with respect to the USD following the 2008 global financial crisis (GFC) -- the extent differs and the Japanese and Swiss currencies show the least GFC-related depreciation. The remarkably stable path of the GBP from 2009 is in contrast to the post-GFC appreciation of other currencies, followed by a depreciation after 2012 (AUD, CAD, JPY). On the other hand, the euro to U.S. dollar exchange rate exhibits a series of ups and downs related to various major events among which the most important are the rounds of quantitative easing performed by the Fed between 2009 and 2014, and the key part of the EU debt crisis (2010 -- 2011). The Swiss franc shows a prominent wave of appreciation in 2011 and a subsequent depreciation after the Swiss National Bank capped the franc’s value by introducing a minimum exchange rate of CHF 1.20 per euro.
\section{Results \label{sec:results}}
\subsection{Total connectedness and economic conditions}
In Figure \ref{Fig2} (upper panel), we show the total connectedness among the six currency pairs. The total forex volatility spillover measure is calculated based on \cite{diebold2012better}: the connectedness is quite high during the GFC period, until 2010, and then in 2014. The total connectedness values of 65\% and above during the 2008 -- 2010 period are comparable to those found in \cite{diebold2015financial}. The plot exhibits a distinctive structural change in total connectedness among the six currencies under research: initial relatively stable and high connectedness (interrupted by a short drop during 2009) decreases gradually after 2010 but then in 2013 begins to rise sharply, surpassing in 2015 the original levels from the GFC period. The period is marked by two distinctive phenomena. One is the divergence of monetary policies among the Fed, the ECB, and the Bank of Japan. While the Fed stopped the quantitative easing (QE) policy in 2014, the ECB was beginning to pursue it and the Bank of Japan was already active in pursuing this policy. From 2013 the policy differences affected capital flows and carry-trade operations so that the U.S. dollar began to appreciate against the euro and yen. At the same time, falling commodity prices exerted downward pressure on inflation and interest rates. This development affects most of the currencies in our sample as commodities are quoted in vehicle currencies (USD, EUR, JPY) and interest rate cuts occurred for commodity currencies (AUD, CAD), diminishing their appeal for carry-trade activities. Hence, the increased volatility and spillovers among currencies from 2013 on stem from combined effects chiefly rooted in monetary policy steps.
In the lower panel of Figure \ref{Fig2}, we relate the total connectedness to economic conditions represented by the plots of three indicators: the Federal funds rate, the TED spread, and the VIX. Unfortunately, for most of the period under research the Federal funds rate is near the zero lower bound and this precludes assessing a link between the total connectedness and U.S. economic development.\footnote{For the earlier period, \cite{greenwoodrisk} document a negative correlation between the Federal funds rate and forex spillovers. The evidence is suggestive of the potential that the U.S. dollar drives much of the forex market dynamics \citep{lustig2011common}.} Further, we compare total connectedness and the TED spread. Both measures share maximum values in 2008 in association with the fall of Lehman Brothers and other GFC-related events. Then the TED spread decreases rapidly as the Fed began to lend money and to guarantee interbank lending. Spillovers start to decrease after 2010 as well. The pattern in movements of both measures indicates that forex spillovers seem to strengthen during a period of low liquidity on the market. Finally, we observe several instances when total connectedness increases along with spikes in the VIX in 2008, 2010, 2011, and 2015. Our interpretation is that forex spillovers tend to build up during periods of financial distress.
\subsection{Directional spillovers}\label{sec:ds}
We now turn to a more detailed analysis of spillovers among specific currencies. The total volatility connectedness (the upper panel of Figure \ref{Fig2}) exhibits the extent of volatility spillovers for all six currencies. Following \cite{diebold2012better} we are able to compute directional spillovers and show how volatility from a specific currency transmits to other currencies in our sample (``contribution TO"). Similarly we are also able to show the opposite link of the extent of spillovers going from other currencies to a specific currency (``contribution FROM"). The condensed information on the extent of such directional spillovers is presented in Table \ref{tab:spills}. The information presented within the table shows in aggregate form the differences in how specific currencies transmit and receive spillovers. The most important directional spillovers are detected between commodity currencies (AUD, CAD) and between the pairs EUR-CHF and EUR-GBP. However, these differences are highly aggregated and do not illustrate the evolution over time.
Therefore, we compute the net effect of the directional spillovers: a difference between ``contribution TO" and ``contribution FROM" that we plot in Figure \ref{Fig3}, where the most interesting patterns emerge. The positive domain contains net spillovers that a currency transmits to other currencies and we say that a currency is a ``spillover giver". The net spillovers in the negative domain then represent the situation when a specific currency receives net volatility spillovers from others: in this case the currency is said to be a ``spillover receiver".
Figure \ref{Fig3} offers interesting insights based on the dynamic patterns that show each currency's net position in terms of the volatility spillovers it receives or transmits. One might hypothesize that the extent of spillover transmission among currencies is uniform. However, the evidence shown in Figure \ref{Fig3} suggests quite the opposite. Both commodity currencies (AUD and CAD) can be characterized by opposite extreme net positions: AUD is a net volatility spillover receiver and CAD is a spillover giver; short periods when low net spillovers are in the opposite domains are exceptions. Exactly the opposite pattern is detected with JPY that clearly receives more spillovers during most of the researched period and transmits them moderately after the GFC began to subside. This behavior might be connected to the known intervention practice of the Bank of Japan. \cite{chortareas2013volatility} find that the Bank of Japan interventions in the USD/JPY exchange rate decrease the daily volatility of the USD/JPY rate, albeit only in the short term of less than five hours and in a discontinuous pattern. This suggests that the interventions can decrease volatility in the short run. The finding is in line with our results: in the period after the GFC began to subside, JPY behaved as a spillover giver (increased volatility), but during the rest of our period it was mostly a spillover receiver as its own volatility diminished relative to the rest of the currencies.
The rest of the currencies in Figure \ref{Fig3} alternate between being givers or receivers, depending on the time. Still, GBP could be described as being a more spillover-giving currency because it receives non-marginal net spillovers only during 2009, marking the financial crisis aftermath. EUR receives more net spillovers as the European sovereign debt crisis builds up and then from 2013 on. Despite being rather a spillover receiver, EUR seems to be the calmest currency as the net directional spillovers are quite low. The results for GBP and EUR are in line with those presented by \cite{antonakakis2012exchange} for the period 2000 -- 2012, who finds GBP and EUR to be the dominant net transmitter and net receiver of volatility, respectively.\footnote{\cite{antonakakis2012exchange} employs the DY spillover index approach and shows that the Deutsche mark (euro) is the dominant net transmitter of volatility, while the British pound is the dominant net receiver of volatility both before and after the introduction of the euro. The exchange rates in \cite{antonakakis2012exchange} are defined as the number of Deutsche mark/euro/GBP units per one USD. Hence, in this footnote we transposed his original interpretation of transmitter/receiver to correspond with our analysis because we define the exchange rate as the number of U.S. dollars per one unit of a specific currency.} The most balanced currency in terms of net spillovers is CHF where the spillover giver/receiver positions alternate quite often.
Finally, since we employ data originating from a single market (the Chicago Mercantile Exchange), we are unable to test the meteor-shower hypothesis of \cite{engle1990meteor}. Instead, the above evidence on directional spillovers among currencies suggests the presence of heat-wave volatility clustering as the values of spillovers indicate that substantial spillovers are transmitted among currencies within a specific market. Thus, our results are also in line with those presented by \cite{melvin2003global} and \cite{cai2008informational}.
The above results do not involve asymmetries in volatility spillovers but they confirm earlier findings in the literature. This validation is important for the accuracy of our work because our extension of the Diebold-Yilmaz methodology provides an assessment of the asymmetries in volatility connectedness, the results of which we present below.
\subsection{Asymmetries in volatility spillovers}
So far we have shown evidence based on spillovers that did not account for asymmetries. Now, we will employ the realized semivariances to separate qualitatively different shocks to volatility. Details on the computation of realized semivariances are described in Appendix \ref{sec:semi}. In short, negative realized semivariance ($RS^-$) isolates negative shocks to volatility or, in other words, $RS^-$ allows capturing volatility due to negative changes (returns) in an exchange rate. The opposite is true for positive realized semivariance ($RS^+$). The descriptive statistics of realized semivariances are reported in Table \ref{tab:descr}. The similarity in the values of the first two moments of the positive and negative semivariances hints at the similarity of both types of volatility measures. However, such similarity is misleading because differences in skewness and kurtosis (including minimum and maximum values) suggest that realized semivariances need not be similar after all, especially when we account for their dynamics.
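The decomposition itself is simple: following \cite{shephard2010measuring}, the squared intraday returns are split by the sign of the return. A minimal sketch of our own (zero returns contribute nothing to either part, so the two semivariances sum to the daily realized variance):
\begin{verbatim}
def realized_semivariances(r):
    """Daily RS- and RS+ from a vector of intraday returns;
    their sum equals the daily realized variance."""
    r = np.asarray(r)
    rs_neg = np.sum(r[r < 0.0] ** 2)  # volatility due to negative returns
    rs_pos = np.sum(r[r > 0.0] ** 2)  # volatility due to positive returns
    return rs_neg, rs_pos
\end{verbatim}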
In contrast to Table \ref{tab:spills}, Figure \ref{Fig4} offers entirely new insights. It plots the spillover asymmetry measure ($\mathcal{SAM}$) for all six currency pairs, computed as in specification (\ref{eq:sam}) with realized semivariances as inputs; descriptive statistics of the underlying spillover indices are presented in Table \ref{tab:spills}. The volatility associated with negative (positive) innovations to returns has been termed bad (good) volatility \citep{patton2014good,segal2015good}. We follow this terminology and label spillovers in Figure \ref{Fig4} as bad and good volatility spillovers (or simply negative and positive spillovers).
The plot of $\mathcal{SAM}$ in Figure \ref{Fig4} exhibits a similarly uneven pattern to the total connectedness measure in the upper panel of Figure \ref{Fig2}. However, a qualitatively new picture emerges. Asymmetries due to positive shocks measured with $RS^+$ are plotted in the positive domain and dominate the early and late periods of our sample (2008 -- 2009; 2014 -- 2015). On the other hand, during 2010 -- 2013 the asymmetries due to negative shocks measured with $RS^-$ are plotted in the negative domain and dominate not only in duration but also in magnitude. Based on the evidence in Figure \ref{Fig4} we are able to reject Hypothesis 1, as the volatility spillovers in the portfolio of currencies exhibit distinctive asymmetries.
Further, the evidence suggests that different types of events are dominated by different types of spillovers. The period of the global financial crisis (2007 -- 2009) that emerged in the U.S. is characterized by good volatility spillovers. This indicates that positive shocks dominated negative ones. In other words, asymmetries in volatility spillovers during the GFC were grounded chiefly in the good volatility of the currencies’ values with respect to the U.S. dollar. The period marked by the European sovereign debt crisis that fully unfolded in 2010 offers a different view. The asymmetries are more pronounced and bad volatility spillovers clearly dominate the period 2010 -- 2013. The largest values mark the 2010 Greek fiscal crisis and, in 2012, the combined effects of the Greek vote against the austerity plan and Spain’s troubled situation that forced it to launch a rescue plan for its banking sector \citep{brei2013rescue}.
Besides the key events described above, there were other factors as well. The largest asymmetries due to negative shocks occurring in 2010 also reflect developments in commodities markets: rising prices and the progressive financialization of commodities \citep{cheng2013financialization}. The pattern also correlates well with the improvement of the U.S. labor market and developments in emerging markets and China that are naturally paired with the development of commodities as well. Large asymmetries in 2011 -- 2012 reflect further improvement in commodities markets until the price boom burst. High asymmetries due to positive shocks in 2014 and later should be paired with two major events. One, dramatically falling prices in commodities markets that resulted in interest-rate cuts by many central banks. Two, a prominent divide between the monetary policies of the Fed and its major counterparts (ECB, Bank of Japan), because international markets are quite sensitive to the Fed’s monetary policy as U.S. treasury securities dominate in global markets \citep[p. 32]{siklos2017}.
In terms of the interpretation related to asymmetries we assume that outbursts of good and bad volatilities of a specific currency spill over and increase the volatility of other currencies. The reasoning behind this assumption is that we study exchange rates involving seven currencies (USD, EUR, JPY, GBP, AUD, CHF, CAD) that account for almost 90\% of the global foreign exchange market turnover; further, the six highly traded currency pairs (with respect to the USD) based on the seven currencies that we study amount to two thirds of total global forex trading \citep{bis2013a}.\footnote{According to \cite[pp.10-11]{bis2013a}, the currency distribution of global foreign exchange market turnover is dominated by seven currencies (USD, EUR, JPY, GBP, AUD, CHF, CAD) that account for 173.6\% of the global forex market turnover (out of 200\% -- the sum of the percentage shares of individual currencies totals 200\% instead of 100\% because two currencies are involved in each transaction). Further, the six currency pairs (USD/EUR, USD/JPY, USD/GBP, USD/AUD, USD/CAD, USD/CHF) amount to 65.1\% of the global foreign exchange market turnover by currency pair.} Since most of the trades in the currency markets are speculative in nature, the currencies in our sample can be considered substitutes.\footnote{The financial education website Investopedia states that “day-to-day corporate needs comprise only about 20\% of the market volume. Fully 80\% of trades in the currency market are speculative in nature” (http://www.investopedia.com/articles/forex/06/sevenfxfaqs.asp; retrieved on March 10, 2016). The data provided by \cite[p.6]{bis2013a} do not provide a direct estimate of speculative trading but allow an indirect inference via foreign exchange market turnover by counterparty that is proportionally divided among non-financial customers (9\%), reporting dealers (39\%), and other financial institutions (53\%). Further, in terms of the instruments, “FX swaps were the most actively traded instruments in April 2013, at \$2.2 trillion per day, followed by spot trading at \$2.0 trillion” \citep[p.3]{bis2013a}. Hence, the figures also support the major role of speculative forex trading.} Hence, volatility spillovers from one currency are assumed to directly impact the volatility of the other currencies under research.
Further, specifically in the case of foreign exchange, another interpretation of asymmetries in volatility spillovers presents itself. The six currencies under research are base currencies. A negative change of the base currency's unit price in terms of the U.S. dollar means that the amount of dollars needed to buy one unit of the base currency is smaller. Thus, a negative change (or negative return) indicates a depreciation of the base currency with respect to the dollar. Spillovers from volatility due to negative returns (and computed with the help of negative realized semivariance $RS^-$) then mean spillovers that emerge due to temporary depreciations of the base currency. A similar logic applies to show that positive realized semivariance ($RS^+$) captures volatility that is due to the positive returns of the base currency, meaning the temporary appreciation of the base currency. We have to stress two issues, though. One, the depreciation or appreciation of a currency is usually understood as a somewhat longer process. Since we employ intra-day data, temporary depreciations and appreciations (negative and positive returns) occur frequently and often move in opposite directions. Hence, they do not represent a longer process from a macroeconomic perspective. Two, it follows that temporary depreciations and appreciations (employed in the form of returns to quantify volatility spillovers) do not necessarily correlate with periods of appreciation or depreciation of a specific currency. Although both events sometimes occur simultaneously, this is not the rule. Finally, the illustration of temporary depreciation and appreciation movements behind volatility spillovers is useful for the economic interpretation of our results as well as for comparing our results with the related, albeit limited, evidence in the literature. However, acknowledging its limitations, we henceforth employ the standard terminology described and used in the literature, i.e., bad and good volatility spillovers.
Based on the results presented in this subsection we conclude that the net bad volatility spillovers (the $\mathcal{SAM}$ in Figure \ref{Fig4}) dominate the good volatility spillovers. Thus, during much of our sample period negative shocks were driving volatility spillovers. Further, there is a difference in the nature of the underlying key factors related to asymmetries in volatility spillovers. Good volatility spillovers of the six currencies under research are linked with (i) the global financial crisis and its subprime crisis nexus in the U.S. and (ii) different monetary policies among key world central banks as well as developments on commodities markets. On the other hand, bad volatility spillovers are chiefly tied to the dragging sovereign debt crisis in Europe. It seems that a combination of monetary and real-economy events is behind the net positive spillovers, while fiscal factors are linked with net negative spillovers. Hence, not only the origin of major factors but also their nature can be found behind the asymmetries in volatility spillovers on forex markets.
\subsection{Asymmetries in directional volatility spillovers}
Following the above outline we now proceed with an assessment of asymmetries in directional spillovers among individual currencies. The condensed information on how the asymmetries in directional spillovers propagate is presented in Table \ref{tab:spills2}. The convenient matrix format allows us to distinguish the proportions in which good and bad volatilities from individual currencies propagate across the market and result in positive and negative spillovers that materialize in the volatilities of the currencies under research. Volatility spillovers above average levels can be detected for interactions between the commodity currencies (CAD, AUD) and for the euro and Swiss franc pair. These patterns also resonate with the non-asymmetric spillovers reported in Subsection \ref{sec:ds}. Unfortunately, the condensed table does not reveal the dynamics in the pattern of directional asymmetries. Hence, the full dynamics is presented in graphical form below.
The detailed dynamics is provided in Figure \ref{Fig5}, where we present directional asymmetries in volatility spillovers coming from a specific currency to the rest of the currencies under research. First we show how the good volatility of a specific currency influences the volatility of the other five currencies in the system (first row). The graphs are calculated from a 12-variable system of six $RS^+$ and six $RS^-$ as the sum of the corresponding column of the matrix shown in Table \ref{tab:spills2}, excluding all diagonals of all four $6\times 6$ block-matrices in the system. The least pronounced positive spillovers are visible in the case of CAD, which means that relatively small spillovers that are due to positive shocks are transmitted from CAD to other currencies. The remaining evidence points to comparable amounts of positive spillovers coming from the other currencies.
In a similar fashion we are able to isolate the effects of bad volatility. In the second row of Figure \ref{Fig5} we plot bad volatility spillovers coming from a specific currency to the rest of the currencies. Most of the negative spillovers come from AUD, CAD, and EUR as their plots reach relatively high levels over the entire time span. On the other hand, the smallest proportion of spillovers due to negative shocks is transmitted from JPY to the other currencies. The rest of the currencies record a comparable extent of negative spillovers transmitted from them. Based on the evidence in the first two rows of Figure \ref{Fig5}, we are able to reject Hypothesis 2 because both negative and positive directional spillovers from each currency are transmitted to the rest of the currencies in the portfolio and these transmissions are not symmetric.
Finally, in the third row, we present \textit{net} asymmetric directional spillovers constructed as a difference between the values plotted in the first and second rows. Formally, the net asymmetric directional spillovers are defined as the difference of the sums of the columns of $RS^+$ and $RS^-$ in Table \ref{tab:spills2}, excluding all diagonals of all four $6\times 6$ block-matrices in the system. The net asymmetric directional spillovers provide the key interpretation value because they measure whether the good volatility of a specific currency affects the volatility of the other currencies more than bad volatility (positive domain of the plot) or whether net negative spillovers exhibit a greater impact (negative domain of the plot). In sum, the evidence in the third row of Figure \ref{Fig5} points to the fact that volatility spillovers transmitted from one currency exhibit an asymmetric impact on the volatility of the other currencies in portfolio. Thus, we are able to reject Hypothesis 3. We can further gauge that negative spillovers occur more often and are somewhat larger than positive spillovers. Hence, negative spillovers transmitted from one currency impact the volatility of the other currencies in the portfolio more than positive spillovers. Specific impacts are described below.
Both commodity currencies, AUD and CAD, heavily transmit net negative spillovers to other currencies, especially during 2010 -- 2011 and also well into 2012. Further, while AUD occasionally also transmits net positive spillovers, CAD is by and large on the negative-shocks side and its net transmitting position does not experience any regime break associated with either the GFC or the European debt crisis. Large asymmetries in 2011 -- 2012 reflect further price increases in commodities markets until the boom burst in 2012. Decreasing asymmetries for both AUD and CAD around 2013 pair well with developments on commodities markets and with the fact that for that particular period commodities seem to have decoupled from their strong negative correlation with the U.S. dollar.
Vehicle currencies (EUR, JPY) exhibit highly polarized behavior. The exact timings of the worst episodes during the European sovereign debt crisis contour sharply the periods when the EUR transmits net negative spillovers. The U.S.-bred GFC, on the other hand, coincides with EUR net positive spillovers. Similarly, the period when the ECB began to buy bonds (2014 -- 2015) is characterized by net positive spillovers as well. The shift in the regime change is quite clear and these key events are most likely behind such asymmetries. The JPY exhibits a different dynamic: diffusion of the net directional spillovers due to positive returns dominates most of the time span. In particular, the period 2008 -- 2012 exhibits an almost unbroken pattern of net positive spillovers. The customary forex interventions of the Bank of Japan against the currency's strength are a likely driver of the shocks behind the net spillovers. The pattern (including the interventions) also correlates with the fact that during 2006 -- 2010 many Japanese insurance companies and pension funds engaged in purchases of foreign bonds that further increased pressure on the yen's value. In addition, the emergence of many small forex brokers also potentially contributed to volatility on the market. A specific event that breaks the pattern can be detected in the asymmetries plot, though. There is a decrease in positive spillovers and even a small swelling of net negative spillovers from JPY to other currencies in the second quarter of 2011. This evidences the effect, albeit marginal, of the joint intervention of the Fed, ECB, Bank of England, and Bank of Canada to assist the Bank of Japan in its effort to defend the yen and curb its volatility in the aftermath of the devastating earthquake.\footnote{The earthquake off the Pacific coast of the Tōhoku region and the subsequent tsunami occurred on March 11, 2011. It was the most powerful earthquake ever recorded to have hit Japan. Massive damages included the meltdown of three reactors in the Fukushima Daiichi Nuclear Power Plant.} The rest of the researched period from 2012 on is characterized by either net negative spillovers or marginal alternating asymmetries.
Based on the extent of the net spillovers, the non-Eurozone currencies (GBP and CHF) seem to be modest transmitters of net directional spillovers onto other currencies. However, their net spillover plots do not bear much resemblance. Both currencies display qualitative differences in unbroken portions of their net spillovers: net negative spillovers dominate the European debt crisis period for the GBP, while in the case of CHF net positive spillovers prevail during the GFC. The Swiss National Bank became quite active in 2009 with the aim of weakening its currency. Over 2009 -- 2011 its steps involved forex interventions, verbal interventions, and interest rate adjustments. It is interesting that the lowest net spillovers are visible in 2011, when the bank introduced the CHF 1.20-per-euro floor and discontinued its managed float in favor of a policy of \textit{capping}.
The above results on the asymmetries in volatility spillovers are unique in that they represent qualitatively new information. We stated earlier that the literature lacks a proper treatment of asymmetries in volatility spillovers in forex markets. As a result, the single study with which we can compare our results is that of \cite{galagedera2012effect}, who show that during the period of the subprime crisis (2008 -- 2009), the appreciation of the yen against the U.S. dollar had a greater impact on the U.S. dollar-yen volatility spillover than the yen's depreciation. Our results from the same period fully support their finding (see the third row in Figure \ref{Fig5}). In addition, even later, until early 2012, the pattern does not change, as the yen's positive spillovers, i.e., volatility spillovers computed based on positive returns or temporary appreciation changes, exhibited a larger impact than negative spillovers. The pattern changes only from 2012 on, when negative spillovers begin to prevail. Their extent is visibly smaller than that of the positive spillovers, though. \cite{galagedera2012effect} also show that the appreciation and depreciation of the U.S. dollar against the euro does not appear to have an asymmetric effect on the euro-U.S. dollar volatility spillover. In this case we are cautious about their finding because our results show an asymmetric effect of euro volatility spillovers being transmitted to other currencies. During the investigated subprime crisis period, positive spillovers from the euro (i.e., spillovers due to temporary appreciations) dominate volatility spillovers going from the euro to other currencies.\footnote{We have to stress that, because the methodologies employed in \cite{galagedera2012effect} and in our analysis are different, the two sets of results are not directly comparable.}
\section{Conclusion \label{sec:conclusion}}
We extend the procedure of \cite{barunik2016asymmetric} to quantify volatility spillovers that are due to bad and good volatility (proxied by negative and positive returns) to better fit the assessment of volatility spillovers on forex markets. The procedure is based on a computation of the volatility spillover index \citep{diebold2012better} by considering separately negative and positive changes in returns via realized semivariances \citep{shephard2010measuring}. The approach allows us to quantify (total and directional) volatility spillover indices robust to ordering in the VAR and to capture asymmetries in volatility spillovers. In the absence of an established methodology providing detailed evidence on the dynamics of the asymmetries in volatility spillovers, our approach brings insights that could not be obtained earlier.
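As an illustration of the basic building block, the daily realized semivariances can be computed from intraday returns as follows (a minimal sketch; the variable names are ours):
\begin{verbatim}
import numpy as np

def realized_semivariances(returns):
    # Split the realized variance of one day's intraday returns into the
    # negative (bad) and positive (good) realized semivariances, so that
    # RS- + RS+ equals the realized variance.
    r = np.asarray(returns, dtype=float)
    rs_neg = np.sum(r[r < 0.0] ** 2)
    rs_pos = np.sum(r[r >= 0.0] ** 2)
    return rs_neg, rs_pos
\end{verbatim}
These two series then replace the realized variance when computing the spillover index, which yields the $RS^+$ and $RS^-$ blocks used throughout the analysis.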
Using high-frequency intra-day data over 2007 -- 2015 we apply the method on a set of the most actively traded currencies quoted against the U.S. dollar, including the Australian Dollar (AUD), British Pound (GBP), Canadian Dollar (CAD), Euro (EUR), Japanese Yen (JPY), and Swiss Franc (CHF). Based on the analysis of these currencies we provide a wealth of detailed results.
We show that the extent of spillover transmission among currencies is not uniform. Each currency's net position, in terms of volatility spillovers it receives or transmits, is quite different: while GBP and CAD are mostly spillover givers, AUD, JPY, and EUR are mostly spillover receivers, and CHF is a balanced currency. Our findings also directly support the presence of heat-wave volatility clustering \citep{engle1990meteor} as there are substantial directional spillovers among currencies within a specific market.
Further, we decisively show that volatility spillovers in the portfolio of currencies exhibit distinctive asymmetries. Such asymmetries are not uniform with respect to currencies, timing, or potential underlying factors. In this respect the negative spillovers dominate positive spillovers in their magnitude as well as frequency; this behavior distinguishes the forex market from stocks and commodities markets where the divide between negative and positive asymmetries is much less prominent \citep{barunik2016asymmetric,barunik2015}. Negative spillovers are chiefly tied to the dragging sovereign debt crisis in Europe. Positive spillovers correlate with the subprime crisis in the U.S. and different monetary policies among key world central banks along with developments on commodities markets. Hence, a combination of monetary and real economy events is behind the net positive asymmetries in volatility spillovers while fiscal factors are linked with net negative spillovers.
Finally, we provide evidence that asymmetries exist also in directional spillovers. We show that currencies do not display a similar pattern in how their net asymmetric directional spillovers propagate -- i.e., the forex market exhibits asymmetric volatility connectedness. It is true that some currencies display a common pattern over a certain subset of the time span, chiefly in connection with major economic or financial events. However, the pattern is not decisively comparable over the entire time span. For example, commodity currencies (CAD, AUD) display a similar pattern with the euro during the major phases of the European sovereign debt crisis. However, all three currencies (CAD, AUD, EUR) transmit net asymmetric spillovers in a remarkably different fashion during the GFC period. In any event, negative directional spillovers transmitted from one currency impact the volatility of other currencies in the portfolio more than positive spillovers. Thus, asymmetric volatility connectedness on the forex market is dominated by negative changes and this sharply differentiates it from, for example, the U.S. stock market.
\newpage
\section*{References}
\bibliographystyle{chicago}
|
1,108,101,564,182 | arxiv |
\section{Introduction}
Semantic segmentation aims to predict, for each individual pixel in an image, a semantic category from a predefined set of labels.
Such a fine grained understanding of images finds numerous applications in aerial robotics \cite{demir2018deepglobe,chen2021_lc_mfanet,tong2020_lc_land,ref_beyond_rgb,ref_resunet_a,ref_seg_aerial_nogueira,wildfire_est, deforestation_est,chiu2020agri_multispectral}, where it has achieved remarkable results
by leveraging deep learning models trained on open datasets with large quantities of labeled images.
However, these results do not carry over when the models are deployed to operate on images that come from a distribution (\emph{target domain}) different from the data experienced during training (\emph{source domain}).
The difficulty in adapting semantic segmentation models to different data distributions is not only limited to the aerial setting and it is tightly linked to the high cost of generating pixel-level annotations \cite{cityscapes}, which makes it unreasonable to supplement the training dataset with large quantities of labeled images from the target domain.
A recent trend in the state-of-the-art addresses this challenge using domain mixing as an online augmentation to create artificial images with elements from both the source and the target domain, thus encouraging the model to learn domain-agnostic features \cite{Chen2021-semisup,dacs,daformer,zhou2021context}.
In particular, both DACS \cite{dacs} and DAFormer \cite{daformer} rely on ClassMix \cite{olsson2021classmix} to dynamically create a binary mixing mask for a pair of source-target images by randomly selecting half of the classes from their semantic labels (the true label for the source, the predicted pseudo-label for the target).
Although this mixing strategy yields state-of-the-art results in driving scenes, it is less effective in an aerial context. We conjecture that this is largely caused by two factors:
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{Images/teaser.pdf}
\caption{ClassMix superimposes classes of the source domain onto the target without taking into account the semantic hierarchy of the visual elements. As a result, it generates erroneous images that are detrimental to Unsupervised Domain Adaptation training in the aerial scenario. Instead, our \textbf{{\ourMix}} extracts instances from each semantic label and then composes the mixing mask after sorting the extracted instances based on their pixel count. This mitigates some artifacts (e.g. partial buildings) and improves the balance of the two domains.}
\label{fig:teaser}
\end{figure}
\myparagraph{Domain imbalance in mixed images.}
Segmentation-oriented aerial datasets are often characterized by categories with vastly different extents (e.g., \textit{cars} and \textit{forest}). While this may be dealt with using techniques such as multi-scale training in standard semantic segmentation \cite{ref_multiscale_vhr_seg}, the disparity in raw pixel counts between classes may be detrimental to effective domain adaptation through class mixing, as the composition may favor either domain (see \cref{fig:teaser} left).
\myparagraph{Weak structural consistency.}
The scenes captured by a front-facing camera onboard a car have a consistent structure, with the street at the bottom, the sky at the top, sidewalks and buildings at the sides, etcetera. This structure is preserved also across domains, as in the classic Synthia~\cite{synthia} $\rightarrow$ CityScapes~\cite{cityscapes} setting. Thus, when copying objects from an image onto the other they are likely to end up in a reasonable context. This is not true for aerial images, where there is no consistent semantic structure (see \cref{fig:teaser} left).
To solve both problems, we propose a new mixing strategy for aerial segmentation across domains called \textbf{Hierarchical Instance Mixing} ({\ourMix}). {\ourMix} extracts from each semantic mask a set of connected components, akin to instance labels. The intuition is that aerial tiles often present very large stretches of land, divided into instances (e.g., forested areas separated by a road).
{\ourMix} randomly selects from the individual instances a set of layers that will compose the binary mixing mask. This helps to mitigate the pixel imbalance between source and target domains in the artificial image.
Afterwards, {\ourMix} composes these sampled layers by sorting them based on the observation that there is a semantic hierarchy in the aerial scenes (e.g., cars lie on the road and roads lie on stretches of land). We use the pixel count of the instances to determine their order in this hierarchy, placing smaller layers on top of larger ones.
While not optimal in some contexts (e.g., buildings should not appear on top of water bodies), this ordering also reduces the bias towards those categories with larger surfaces in terms of pixels, as they are placed below the other layers of the mask (see \cref{fig:teaser} right).
Besides the mixing strategy itself, there is also the general problem that the effectiveness of the domain mixing is strongly dependent on the accuracy of the pseudo-labels generated on the target images during training.
This is especially true when the combination itself requires layering individual entities from either domain into a more coherent label.
A key factor for an effective domain adaptation using self-training is in fact the ability to produce consistent predictions, resilient to visual changes.
For this reason, as a second contribution we propose a \textbf{twin-head UDA architecture} in which two separate segmentation heads are fed with contrastive variations of the same images. This improves pseudo-label confidence and makes the model more robust and less susceptible to perturbations across domains, driving it towards augmentation-consistent representations.
We test our complete framework on the LoveDA benchmark \cite{wang2021loveda}, the only dataset designed for evaluating unsupervised domain-adaption in aerial segmentation, where we exceed the current state-of-the-art.
We further provide a comprehensive ablation study to assess the impact of the proposed solutions. The code will be made available to the public to foster the research in this field.
\section{\vspace{2pt}Related Work}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Images/instance_mixing.pdf}
\vspace{0.2mm}
\caption{HIMix operates by (i) extracting the connected components from the source label and target pseudo-label, (ii) selecting uniformly which instances should be mixed from $S$, (iii) merging source and target instances hierarchically based on instance size (smaller ones on top), and (iv) producing a binary mask $M$ to construct the final blended image $x_m$ and its label $y_m$.}
\label{fig:id_mix}
\end{figure*}
\subsection{Aerial Semantic Segmentation}
Current semantic segmentation methods mostly rely on convolutional encoder-decoder architectures \cite{ref_fcn,ref_deeplab,ref_pspnet,ronneberger2015unet}, but the recent breakthroughs of vision Transformers introduced new effective encoder architectures such as ViT \cite{dosovitskiy2020vit}, Swin \cite{liu2021swin} or Twins \cite{chu2021twins}, as well as end-to-end segmentation approaches such as Segmenter \cite{strudel2021segmenter} and SegFormer \cite{xie2021segformer}.
Concerning the application to aerial images, despite a processing pipeline comparable to other settings, there are peculiar challenges that demand specific solutions. Firstly, aerial and satellite data often include multiple spectra besides the visible bands, which can be leveraged in different ways, such as including them as extra channels \cite{chiu2020agri_multispectral} or adopting multi-modal encoders \cite{ref_beyond_rgb}.
Visual features represent another major difference: unlike other settings, aerial scenes often display a large number of entities on complex backgrounds, with wider spatial relationships. In this case, attention layers \cite{niu2021rel_hybrid} or relation networks \cite{mou2019rel_relation} are employed to better model long-distance similarities among pixels. Another distinctive trait of aerial imagery is the top-down point of view and the lack of reference points that can be observed in natural images (e.g., sky always on top). This can be exploited to produce rotation-invariant features using ad-hoc networks \cite{han2021invariance, tavera2022aias}, or through regularization \cite{arnaudo2021invariance}.
Lastly, aerial images are characterized by disparities in class distributions, since these include small objects (e.g. cars) and large stretches of land. This pixel imbalance can be addressed with sampling and class weighting \cite{daformer}, or ad-hoc loss functions \cite{kevardec2019loss}.
\subsection{Domain Adaptation}
Domain Adaptation (DA) is the task of attempting to train a model on one domain while adapting to another. The main objective of domain adaptation is to close the \textit{domain shift} between these two dissimilar distributions, which are commonly referred to as the source and target domains.
The initial DA techniques proposed in the literature attempt to minimize a measure of divergence across domains by utilizing a distance measure such as the MMD~\cite{geng2011daml, pmlr-v37-long15, tzeng_mcd}.
Another popular approach to DA in Semantic Segmentation is adversarial training \cite{adaptsegnet, clan, fada, Tavera_2022_WACV}, which involves playing a min-max game between the segmentation network and a discriminator. The latter is responsible for discriminating between domains, whereas the segmentation network attempts to trick it by making the features of the two distributions indistinguishable.
Other approaches, such as \cite{hoffman18cycada, wu2018dcan, yang2020fda}, employ image-to-image translation algorithms to generate target pictures styled as source images or vice versa, while \cite{transnorm} identifies the batch normalization layer as a major bottleneck for domain adaptation.
More recent methods like \cite{pycda, cbst, iast} use self-learning techniques to generate fine pseudo-labels on target data to fine-tune the model, whereas \cite{dacs, daformer} combine self-training with class mix to reduce low-quality pseudo-labels caused by domain shifts among the different distributions.
These mixing algorithms are very effective on data with a consistent semantic organization of the scene, such as in self-driving scenes \cite{cityscapes, idda}. In these scenarios, naively copying half of the source image onto the target image increases the likelihood that the semantic elements will end up in a reasonable context. This is not the case with aerial imagery (see \cref{fig:teaser}).
{\ourMix} not only mitigates this problem, but it also reduces the bias towards categories with larger surfaces.
\section{Method}
\subsection{Problem statement}
We investigate the aerial semantic segmentation task in the context of unsupervised domain adaption (UDA). Let us define as $\mathcal{X}$ the set of RGB images constituted by the set of pixels $\mathcal{I}$, and as $\mathcal{Y}$ the set of semantic masks associating a class from the set of semantic classes $\mathcal{C}$ to each pixel $i \in \mathcal{I}$.
We have two sets of data accessible at training time: (i) a set of annotated images from the source domain, denoted as $X_{s} = \{(x_{s}, y_{s})\}$ with $x_{s}\in \mathcal{X}$ and $y_{s} \in \mathcal{Y}$; (ii) a set of $N_{t}$ unlabelled images from the \textit{target} domain, denoted as $X_{t} = \{(x_{t})\}$ with $x_{t}\in \mathcal{X}$.
The goal is to find a parametric function $f_\theta$ that maps an RGB image to a pixel-wise probability, i.e., $f_\theta: \mathcal{X} \rightarrow \mathbb{R}^{|\mathcal{I}|\times|\mathcal{C}|}$, and evaluate it on unseen images from the target domain. In the following, we indicate the model output in a pixel $i$ for the class $c$ as $p_i^c$, i.e., $p_i^c(x) = f_\theta(x)[i,c]$.
The parameters $\theta$ are tuned to minimize a categorical cross-entropy loss defined as
\begin{equation}
L_{\text{seg}}(x, y) = - \frac{1}{|\mathcal{I}|} \sum_{i \in \mathcal{I}} \sum_{c \in \mathcal{C}} y_i^c \log(p_i^c(x)),
\label{eq:xe}
\end{equation}
where $y_i^c$ represents the ground truth annotation for the pixel $i$ and class $c$.
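For concreteness, Eq. (\ref{eq:xe}) corresponds to the following computation (a sketch in PyTorch; the tensor shapes are our convention):
\begin{verbatim}
import torch
import torch.nn.functional as F

def seg_loss(logits, target, num_classes):
    # Pixel-averaged categorical cross-entropy of Eq. (1).
    # logits: (B, |C|, H, W) raw scores; target: (B, H, W) class indices.
    log_p = F.log_softmax(logits, dim=1)                    # log p_i^c
    y = F.one_hot(target, num_classes).permute(0, 3, 1, 2)  # y_i^c
    return -(y * log_p).sum(dim=1).mean()                   # mean over I
\end{verbatim}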
\subsection{Framework}
We present an end-to-end trainable UDA framework based on the use of target pseudo-labels. To better align domains, we construct artificial images using our {\ourMix} strategy (\ref{sec:idmix}), which generates mixed images exploiting the instances produced both from the source ground truth and the target pseudo-label.
Rather than using a secondary teacher network derived from the student as an exponential moving average as in \cite{dacs, daformer}, we propose a twin-head architecture (\ref{sec:twinhead}) with two separate decoders trained in a contrastive fashion to provide finer target pseudo labels.
\subsection{Hierarchical Instance Mixing} \label{sec:idmix}
Given the pairs $(x_s, y_s)$ and $(x_t, \hat{y}_t)$, where $\hat{y}_t = f_\theta(x_t)$ are the pseudo-labels computed from the model prediction on the target domain, the purpose of the mixing strategy is to obtain a third pair, namely $(x_m, y_m)$, whose content is derived from both source and target domains using a binary mask $M$.
While techniques based on ClassMix have been successfully applied in many UDA settings, we discover that the same may not be optimal in the aerial scenario, since it superimposes parts of the source domain onto the target without taking their semantic hierarchy into account (e.g., cars appear on top of roads, not vice versa).
In contrast, we propose a Hierarchical Instance Mixing strategy ({\ourMix}), which is composed of two subsequent steps: (i) \textit{instance extraction} and (ii) \textit{hierarchical mixing}.
\myparagraph{Instance extraction.}
Aerial tiles often present uniform land cover features, with many instances of the same categories in the single image. In the absence of actual instance labels, this peculiarity can be exploited to separate semantic annotations into connected components.
Here a connected component is a set of pixels that have the same semantic label and such that for any two pixels in this set there is a path between them that is entirely contained in the same set.
\Cref{fig:id_mix} illustrates an example of this process, with a forest that is separated in two instances by a road.
This increases the number of regions which can be randomly selected for the mixing phase, thus mitigating the pixel unbalance in the final mixed sample between source and target domains.
Note that this procedure is applied on the concatenation of source and target label.
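A minimal sketch of this step is reported below (variable names are ours; we assume 4-connectivity as provided by \texttt{scipy.ndimage.label}):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def extract_instances(label, ignore_index=255):
    # Split a semantic mask (H, W) into per-class connected components;
    # returns an (H, W) map of instance ids, with 0 marking ignored pixels.
    instances = np.zeros(label.shape, dtype=np.int32)
    next_id = 1
    for c in np.unique(label):
        if c == ignore_index:
            continue
        comp, n = ndimage.label(label == c)   # 4-connected components
        instances[comp > 0] = comp[comp > 0] + (next_id - 1)
        next_id += n
    return instances
\end{verbatim}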
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{Images/framework.pdf}
\vspace{0.2mm}
\caption{Our framework training: (i) standard training is carried out on source, (ii) pseudolabels are generated on target through majority voting between each head output, (iii) source and target samples are mixed together and (iv) segmentation loss is computed on mixed pairs.}
\label{fig:architecture}
\end{figure*}
\myparagraph{Hierarchical mixing.}
We observe that instances in aerial imagery have an inherent hierarchy that is dictated by their semantic categories. In other words, land cover categories such as \textit{barren} or \textit{agricultural} frequently appear in the background w.r.t. smaller instances such as \textit{roads} or \textit{buildings}.
The mixing step follows this hierarchy when combining the instances from source and target, and it is illustrated in \cref{fig:id_mix}.
First, both sets of instance labels are encoded into a one-hot representation, so that each component yields its own mask layer.
Then both stacks of layers are merged together and sorted by their pixel count, with the larger layers on the bottom. Finally, a reduction from top to bottom projects the 3D tensor into a 2D binary mask $M$, where positive values indicate \textit{source} pixels, and null values indicate \textit{target} pixels.
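Putting the two steps together, the composition of the mixing mask can be sketched as follows (a simplified illustration; the fraction of sampled source instances and the random generator are our assumptions):
\begin{verbatim}
import numpy as np

def himix_mask(src_inst, tgt_inst, rng=np.random):
    # Compose the HIMix binary mask M (True = source pixel): sample half
    # of the source instances, stack them together with the target
    # instances, and paint the layers from largest to smallest pixel
    # count, so that smaller instances end up on top.
    src_ids = [i for i in np.unique(src_inst) if i > 0]
    picked = rng.choice(src_ids, size=max(1, len(src_ids) // 2),
                        replace=False)
    layers = [(int((src_inst == i).sum()), src_inst == i, True)
              for i in picked]
    layers += [(int((tgt_inst == i).sum()), tgt_inst == i, False)
               for i in np.unique(tgt_inst) if i > 0]
    M = np.zeros(src_inst.shape, dtype=bool)
    for _, mask, from_src in sorted(layers, key=lambda t: -t[0]):
        M[mask] = from_src        # later (smaller) layers overwrite
    return M
\end{verbatim}
The mixed image $x_m$ is then obtained as \texttt{np.where(M[None], x\_s, x\_t)}, and the mixed label $y_m$ analogously from $y_s$ and $\hat{y}_t$.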
\subsection{Twin-Head Architecture} \label{sec:twinhead}
State-of-the-art, self-training UDA strategies, such as \cite{daformer}, make use of \textit{teacher-student} networks to improve the consistency of the pseudo-labels. Albeit dealing with consistency in time, teacher-based approaches do not directly cope with geometric or stylistic consistency.
We propose a twin-head segmentation framework to directly address this, providing more consistent pseudo-labels and outperforming the standard methodologies tested, as shown in the ablation study of Section \ref{sec:ablation}.
Our architecture (see Fig. \ref{fig:architecture}) comprises a shared encoder $g$, followed by two parallel and lightweight segmentation decoders, $h_1$ and $h_2$. Training is carried out end to end, exploiting annotated source data and computing pseudo-labels from target images online, as detailed hereinafter.
\myparagraph{Source training.} With the purpose of driving the model towards augmentation-consistent representations, we feed the two heads with variations of the same image in a contrastive fashion. More specifically, given a source image $x_s$ we alter it with a sequence of random geometric (\textit{horizontal flipping}, \textit{rotation}) and photometric augmentations (\textit{color jitter}), obtaining new pairs of samples.
Specifically, at each iteration, the final input is composed of $B_s = (x_s \mathbin\Vert \tilde{x}_s, y_s \mathbin\Vert \tilde{y}_s)$,
where $x_s$ and $y_s$ represent the original batch of images and respective annotations, while $\tilde{x}_s$ and $\tilde{y}_s$ represent the same samples, altered by the geometric and photometric transformations, i.e., $\tilde{x}_s = T_p(T_g(x_s))$, and $\tilde{y}_s = T_g(y_s)$.
The full augmented batch $B_s$ is first forwarded to the shared encoder module $g$, producing a set of features. The latter, containing information derived from the images and their augmented variants, are split and forwarded to the two parallel heads, effectively obtaining two comparable outputs, $h_1(g(x_s))$ and $h_2(g(\tilde{x}_s))$.
A standard cross-entropy loss, as shown in Eq. \ref{eq:xe}, is computed on both segmentation outputs.
Working independently on different variations of the same images, the two heads can evolve in different ways while trying to minimize the same objective function.
Using the same encoder yields a more robust, contrastive-like feature extraction that is less susceptible to perturbations. This is essential for producing more stable and precise pseudo-labels.
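A sketch of one source iteration follows (our names; we assume that $T_g$ reuses the same sampled parameters for images and labels, and that the encoder returns a single feature tensor):
\begin{verbatim}
import torch

def source_step(encoder, head1, head2, x_s, y_s, T_g, T_p, criterion):
    # Twin-head source training: head1 sees the original batch, head2 a
    # geometrically (T_g) and photometrically (T_p) augmented copy; the
    # encoder is shared, so its features must serve both views.
    x_aug = T_p(T_g(x_s))          # photometric changes affect images only
    y_aug = T_g(y_s)               # labels follow the geometry
    feats = encoder(torch.cat([x_s, x_aug]))   # one shared forward pass
    f1, f2 = feats.chunk(2)        # split back into the two views
    return criterion(head1(f1), y_s) + criterion(head2(f2), y_aug)
\end{verbatim}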
\myparagraph{Mix training.} The twin-head architecture is expressly designed to generate more refined pseudo-labels. Given an unlabeled target image $x_t$, the probabilities obtained after forwarding the image to both heads, $\sigma(h_1(g(x_t)))$ and $\sigma \left( h_2(g(\tilde{x}_t)) \right)$, are compared, where $\sigma$ indicates the softmax function. In order to extract a single pseudo-label, the most confident output is selected for each pixel. Formally, for each position $i$ the output score is computed as $p_i^c = \max\left(\sigma_i(h_1(g(x_t))), \sigma_i \left( h_2(g(\tilde{x}_t)) \right)\right)$, selecting the maximum value between the two.
Once $p_i^c$ is derived, the pseudo-label $\hat{y}_t$ necessary for class-mix is generated through:
\begin{equation}
\hat{y}_t^{(i,c)} = \left[ c = \operatorname{argmax}_{c'} \, p_i^{c'} (x_t) \right].
\label{eq:pl}
\end{equation}
At this point, the mixed pairs of inputs can be computed through {\ourMix}, as described in previous sections, obtaining $(x_m, y_m)$ as a composition of the source and target samples.
Similar to source training, an augmented batch $B_m = (x_m \mathbin\Vert \tilde{x}_m, y_m \mathbin\Vert \tilde{y}_m)$ is computed through geometric and photometric transformations, then fed to the model to compute $L_{seg}(B_m)$.
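A sketch of the pseudo-label fusion follows (our names; \texttt{invert\_g} denotes the inverse of the geometric augmentation and is an assumption of this sketch):
\begin{verbatim}
import torch

@torch.no_grad()
def fused_pseudolabel(encoder, head1, head2, x_t, x_t_aug, invert_g):
    # Per-pixel, per-class maximum of the two heads' softmax outputs;
    # the augmented view is mapped back to the original geometry before
    # the comparison, then the pseudo-label takes the argmax over classes.
    p1 = torch.softmax(head1(encoder(x_t)), dim=1)
    p2 = invert_g(torch.softmax(head2(encoder(x_t_aug)), dim=1))
    p = torch.maximum(p1, p2)      # most confident score per class
    conf, pseudo = p.max(dim=1)    # max_c p_i^c and its argmax
    return pseudo, conf
\end{verbatim}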
To reduce the impact of low-confidence areas, a pixel-wise weight map $w_m$ is generated. Similar to \cite{dacs, daformer}, the latter is computed as the fraction of pseudo-label pixels whose confidence exceeds a threshold. Formally, for each pixel $i$:
\begin{equation}
w_m^i =
\begin{cases}
1, & i \in y_s \\
\dfrac{\sum_{j \in \mathcal{I}} m_{\tau}^j}{\left|\mathcal{I}\right|}, & i \in \hat{y}_t \\
\end{cases}
\label{eq:weight_map}
\end{equation}
where $m_{\tau}^j$ is the Max Probability Threshold indicator \cite{li2019bidirectional} computed over pixels belonging to the pseudo-label as follows:
\begin{equation}
m_{\tau}^i = \mathds{1}_{\left[ \max_{c} \, p_i^{c} (x_t) > \tau \right]}.
\label{eq:pl_th}
\end{equation}
In practice, each pixel of the mixed label is either weighted as $1$ for regions derived from the source domain, or by a factor obtained as the number of pixels above the confidence threshold, normalized by the total amount of pixels.
Note that during all of these computations the gradients are not propagated. The training procedure is detailed in \cref{pc:pseudo_code}.
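The weight map of Eq. (\ref{eq:weight_map}) then reduces to a few lines (a sketch; here the confident fraction is computed per image):
\begin{verbatim}
import torch

def mix_weight_map(conf, src_region, tau=0.968):
    # conf: (B, H, W) confidences of the fused pseudo-label;
    # src_region: (B, H, W) boolean mask of source-derived pixels in y_m.
    frac = (conf > tau).float().mean(dim=(1, 2))      # per-image share
    w = frac.view(-1, 1, 1).expand_as(conf).clone()   # pseudo-label weight
    w[src_region] = 1.0                               # source fully trusted
    return w
\end{verbatim}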
\input{Tables/pseudo}
\section{Experiments}
\subsection{Training Details}
We assess the performance of our approach on the LoveDA dataset \cite{wang2021loveda}. According to that benchmark, we conduct two series of unsupervised domain adaptation experiments: \textit{rural}$\to$\textit{urban} and \textit{urban}$\to$\textit{rural}. We measure the performance on the \textit{test set} of each target domain.
\myparagraph{Dataset.}
To our knowledge, the LoveDA dataset \cite{wang2021loveda} is the only open and free collection of land cover semantic segmentation images in remote sensing explicitly designed for UDA. Both urban and rural areas are included in the training, validation, and test sets. Data is gathered from 18 different administrative districts in China. The urban training set contains 1156 images, while the rural training set contains 1366 images. Each image is supplied as a $1024\times1024$-pixel tile annotated with seven categories.
\myparagraph{Metric.}
Following \cite{wang2021loveda} we use the averaged Intersection over Union (mIoU) metric to measure the accuracy of all the experiments conducted.
\myparagraph{Baselines.}
Our method is compared to various cutting-edge UDA methods. The first baseline we consider is the Source Only model, i.e., a network trained only on the source dataset. We also consider the original metric-based methodology relying on MMD \cite{tzeng_mcd}. Then, we compare two alternative families of UDA approaches: adversarial training, with AdaptSegNet \cite{adaptsegnet}, FADA \cite{fada}, CLAN \cite{clan}, and TransNorm \cite{transnorm}, and self-training, with CBST \cite{cbst}, PyCDA \cite{pycda}, IAST \cite{iast}, DACS \cite{dacs}, and DAFormer \cite{daformer}.
\myparagraph{Implementation.}
To implement our solution we leverage the \textit{mmsegmentation} framework, which is based on PyTorch. We train each experiment on an NVIDIA Titan GPU with 24GB of RAM. We refer to DAFormer \cite{daformer} for the architecture and configuration of hyperparameters. We use the MiT-B5 model \cite{xie2021segformer} pretrained on ImageNet as the encoder of our method, while the segmentation decoder module corresponds to the SegFormer head \cite{xie2021segformer}. We train on every setting for 40k iterations using AdamW as optimizer. The learning rate is set to $6\times10^{-5}$, the weight decay to $0.01$, and the betas to $(0.9, 0.99)$. We also adopt a polynomial decay with a factor of $1.0$ and warm-up for 1500 iterations. To cope with possible variations, every experiment presented has been obtained as the average over three seeds $\{0,1,2\}$. Training is performed on random crops, by augmenting data through random resizing in the range $[0.5, 2.0]$, horizontal and vertical flipping, and rotation of 90 degrees with probability $p=0.5$, together with random photometric distortions (i.e., brightness, saturation, contrast and hue). Following \cite{dacs, daformer}, we set $\tau=0.968$. The final inference on the test set is instead performed on raw images without further transformations.
\subsection{Results}
\input{Tables/urban2rural}
\input{Tables/rural2urban}
\input{Tables/ablation}
\myparagraph{Urban$\to$Rural.}
The results for this set of experiments are reported in \cref{table:u2r_exps}. They corroborate the complexity of the task, which stems from a strongly skewed class distribution in the source domain, dominated by urban scenes with a mix of buildings and highways but few natural items. This causes a negative transfer to the target domain, since both adversarial strategies and self-training procedures achieve overall performance equivalent to, if not worse than, the Source Only model. Specifically, the best performing adversarial training technique, CLAN, gains just $+1.8$ over the Source Only model. Self-training approaches prove to be the most effective. DACS, which introduces the class mix strategy, improves over the Source Only model by $+1.2$, while DAFormer, which uses a Transformer backbone and the same class mix strategy as DACS, outperforms the Source Only model by $+9.3$. Our approach, which combines both the twin-head architecture and the novel class mix, outperforms the Source Only model by the wide margin of $+12.6$ and exceeds its closest competitor (DAFormer) by $+3.3$. {\ourMix} exhibits its ability to boost rural and underrepresented classes, such as \textit{agriculture}, as also evidenced by the qualitative results in \cref{fig:qualitatives}. In comparison to DACS and DAFormer, our technique recognizes and classifies better contours and classes, such as \textit{water}, despite their underrepresentation in the source domain. This is also true in common categories with different visual features such as \textit{road}, which can appear in paved and unpaved variants.
\myparagraph{Rural$\to$Urban.}
The results for this set of experiments are summarized in \cref{table:r2u_exps}. The source domain in this scenario is dominated by large-scale natural objects and a few manmade samples. Nonetheless, the models under consideration are capable of effectively transferring the knowledge even in these underrepresented categories. Self-learning approaches outperform adversarial methods, achieving an average boost of $+9.1$ over the Source Only model, whereas adversarial training methods achieve comparable accuracies. In terms of mIoU, the two most performant self-training models, DACS and DAFormer (our closest competitor), surpass the Source Only model by $+6.0$ and $+15.2$, respectively.
In comparison, our strategy gains a $+17.4$ boost over the Source Only model, outperforming DACS and DAFormer by $+11.4$ and $+2.2$, respectively.
In this case, the qualitative results in \cref{fig:qualitatives} support the superior ability of our model to discern between rural and urban classes. While DACS does not recognize \textit{buildings} and DAFormer misclassifies parts of them as \textit{agricultural} terrain, our model demonstrates its efficacy in minimizing the bias towards those categories with larger surfaces providing results close to the ground truth.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{Images/qualitatives.pdf}
\caption{Qualitative results in the two settings \textit{Urban$\to$Rural} and \textit{Rural$\to$Urban} after testing on \textbf{\color{red}{target}} domain.}
\label{fig:qualitatives}
\end{figure*}
\begin{figure}[h!t]
\centering
\includegraphics[width=0.9\columnwidth]{Images/ablation.pdf}
\caption{Qualitative comparison of Single or Twin-Head architectures using Standard Class Mix or our {\ourMix}.}
\label{fig:ablation}
\end{figure}
\subsection{Ablation}\label{sec:ablation}
\myparagraph{Twin-Head and {\ourMix}.}
To demonstrate the effectiveness of the twin-head architecture, we compare it to the traditional single-head structure, which generates pseudo-labels using a secondary teacher network derived from the student as an exponential moving average. This study also demonstrates the potential of {\ourMix} when paired with traditional single-head training. For both settings, we perform an extensive ablation study considering MiT-B5 \cite{xie2021segformer} as the backbone and we report the results in \cref{table:ablation}.
The twin-head design paired with the Standard Class Mix (line 3) performs better than the single-head architecture (line 1), implying that our solution is better at providing finer pseudo-labels with correct class segmentation, as also shown in the first column of \cref{fig:ablation}.
{\ourMix} increases recognition performance even when paired with a single-head architecture (line 2), particularly for categories with a lower surface area in terms of pixels, which are placed below those with larger surfaces when using the Standard Class Mix. That is why, in the top-left image of \cref{fig:ablation}, the model is unable to grasp their semantics effectively and erroneously classifies \textit{building} as \textit{agricultural} pattern. In comparison, {\ourMix} can accurately distinguish \textit{buildings} (top-right picture in \cref{fig:ablation}) even though the prediction has poorly defined contours.
The best results are obtained when the twin-head ability to provide an enhanced segmentation map is combined with the {\ourMix} ability to maintain a correct semantic structure (line 5), yielding the best results in terms of accuracy and finer segmentation map, as shown in the bottom-right image of \cref{fig:ablation}.
We finally ablate the different components of our {\ourMix} to assess each term's contribution to the overall performance (lines 4-5). Hierarchical Mixing consistently improves over Instance Extraction alone, by $+1.1$ and $+1.3$ in the Urban$\to$Rural and Rural$\to$Urban scenarios, respectively.
\section{Conclusions}
We investigated the problem of Unsupervised Domain Adaptation (UDA) in aerial Semantic Segmentation, showing that the peculiarities of aerial imagery, principally the lack of structural consistency and a significant disparity in semantic class extension, must be taken into consideration. We addressed these issues with two contributions. First, a novel domain mixing method that consists of two parts: an instance extraction that chooses the connected components from each semantic map and a hierarchical mixing that sorts and fuses the instances based on their pixel counts. Second, a twin-head architecture that produces finer pseudo labels for the target domain, improving the efficacy of the domain mixing.
We demonstrated the effectiveness of our solution with a comprehensive set of experiments on the LoveDA benchmark.
\myparagraph{Limitations.}
Despite the excellent results, we observed that our solution has worse performance than the Source Only model in the \textit{barren} class, particularly in the Urban$\to$Rural scenario. This is possibly due to the large disparity in absolute pixel counts between source and target domains in this category. Additionally, the twin-head architecture, despite its superior performance, has a greater number of parameters that slow down the training (approximately 15h).
\myparagraph{Future Works.} We will evaluate lighter segmentation heads and other contrastive techniques to accelerate overall training and improve performance, particularly on specific semantic classes.
\section*{APPENDIX}
\bibliographystyle{IEEEtran}
|
1,108,101,564,183 | arxiv | \section{Introduction}
For a curve $X$ of genus $g>1$ defined over a field $K$, the automorphism group $\Aut_K (X)$ is finite. In characteristic zero, it is well-known that one has the Hurwitz bound
$|\Aut_K(X)|\leq 84(g-1)$. In \cite{St}, the inequality $|\Aut_K(X)|< 16 g^4$ is seen to hold in positive characteristic unless $X$ is a Hermitian curve. In particular, this provides a bound for the order of any automorphism of the curve. Nevertheless,
there is no general procedure to discard possible orders.
We are interested in the case in which $K$ is a number field. The reduction of the curve $X$ at a prime of $K$ of good reduction is a curve $\widetilde X$ defined over a finite field $\F_q$. Although $\Aut_{\F_q} (\widetilde X)$ may strictly contain $\Aut_K(X)$, any information allowing us to discard orders of the elements in the group $\Aut_{\F_q} (\widetilde X)$ will be useful for our goal. Moreover, if necessary, we can choose a different prime of $K$ of good reduction for $X$.
The main result of this work is in Section 2. For a curve $X$ of genus $>1$ defined over a finite field $\F_q$, we fix an integer $s>1$ which is a power of a rational prime. In Theorem \ref{crit}, we present a criterion which, under a certain condition on the sequence $\{|X(\F_{q^n})|\}_{n\geq 1}$ depending on $s$, ensures the non-existence of elements in $\Aut_{\F_q}(X)$ of order $s$. Although this criterion is not a characterization for the non-existence of such automorphisms, it is certainly a powerful tool that can be applied in many situations. In Section 3, in order to show the efficacy of this tool, we apply it to determine the automorphism groups of some modular curves.
\section{Automorphisms of a curve defined over a finite field}
Let $\F_q$ be the finite field with $q$ elements and let $X$ be a curve of genus $g>1$ defined over $\F_q$.
\begin{teo}\label{crit} If for a rational prime $N$ and an integer $m>0$, the sequence of integers $\{P_{N^m}(n)\}_{n\geq 1}$ defined by
$$
0\leq P_{N^m}(n)\leq N^m-1\,,\quad P_{N^m}(n):= \left\{\begin{array}{rr}|X(\F_q)|\pmod{N^m} & \text{if $n=1$,}\\[6 pt] |\cup_{i=1}^{n}X(\F_{q^i})|-|\cup_{i=1}^{n-1}X(\F_{q^i})| \pmod{N^m} &\text{if $n>1$,}\end{array}\right.
$$
satisfies the condition
\begin{equation}\label{criterion}\sum_{n\geq 1}P_{N^m}(n)>\displaystyle{\left\lfloor\frac{2\,g\, }{N-1}\right\rfloor+ 2\frac{N^m-1}{N-1}}\,,
\end{equation}
then $X$ does not have any automorphisms defined over $\F_q$ of order $N^m$.
\end{teo}
\noindent {\bf Proof.} Assume that there exists $u\in\Aut_{\F_q}(X)$ of order $N^m$. Let $G$ be the subgroup of $\Aut_{\F_q}(X)$ generated by $u$. Let $\cR$ denote the set of ramification points of the natural projection $\pi_G\colon X\rightarrow X/G$.
The group $G$ acts on the set $X(\F_{q ^{n}})$ by permutations. In particular, if $Q\in X(\F_{q^{ n}})$, then the orbit of $Q$ under $G$, say $\cS_Q$, is contained in $ X(\F_{q^n})$. It is clear that $|\cS_Q|=N^m$ if, and only if, $Q\notin \cR$. Let $n_0>0$ be an integer such that $\cR\subset X(\F_{q^{n_0}})$. Hence, we have
$$ |X(\F_{q^{n_0}})|\equiv |\cR|\pmod{N^m}\,.$$
So the sequence $A(n):=|\cup_{i=1}^nX(\F_{q^{ i}})|$, $n\geq 1$, satisfies
the condition $A(n)\equiv |\cR|\pmod {N^m}$ for all $n\geq n_0$. Therefore,
\begin{equation}\label{cR}
\sum_{n\geq 1}P_{N^m}(n)=\sum_{n=1}^{n_0}P_{N^m}(n)\leq |\cR\cap X(\F_q)|+\sum_{n=2}^{n_0}|\cR\cap(\cup_{i=1}^{n}X(\F_{q^ i})-\cup_{i=1}^{n-1}X(\F_{q^ i}))|=|\cR|\,.
\end{equation}
For $0\leq i\leq m-1$, let $\cR_i$ denote the subset of $\cR$ consisting of the points whose isotropy group in $G$ is the subgroup generated by $u^{ N^i}$, and set $r_i:=|\cR_i|$. One has $\cR=\cup_{i=0}^{m-1}\cR_i$ and, moreover, the ramification index of $\pi_G$ at a point $Q\in\cR_i$ is the order of its isotropy group, i.e. $N^{m-i}$. By the Riemann-Hurwitz formula applied to $\pi_G$, we obtain
$$
N^m(2g_G-2)+(N^m-1)r_0+\cdots +(N^i-1)r_{m-i}+\cdots +(N-1) r_{m-1}\leq 2 g-2\,,
$$
where $g_G$ is the genus of $X/G$. In particular, we have
$$
(N^m-1)r_0+\cdots +(N-1) r_{m-1}\leq 2g+2 (N^m-1) \,.
$$
Therefore,
\begin{equation}\label{des}
|\cR|= r_0+ \cdots + r_{m-1}\leq \frac{1}{N-1} \left((N^m-1)r_0+\cdots +(N-1) r_{m-1}\right)\leq\frac{ 2g+2 (N^m-1)}{N-1}\,.
\end{equation}
Combining the inequalities (\ref{cR}) and (\ref{des}), we get
\begin{equation}\label{nec}\sum_{n\geq 1}P_{N^m}(n)\leq \displaystyle{\left\lfloor\frac{2\,g\, }{N-1}\right\rfloor+ 2\frac{N^m-1}{N-1}} \,,
\end{equation} which proves the statement. \hfill $\Box$
\begin{rem} To apply Theorem \ref{crit}, we only need to know the characteristic polynomial $Q(x)$ of $\Frob_q$ acting on the Tate module of $\Jac (X)$. Indeed, if $\alpha_1,\cdots, \alpha_{2g}$ are the roots of $Q(x)$, then
$$
|X(\F_{q^n}) |=1+ q^n-\sum_{i=1}^{2g} \alpha_i^n\,.
$$
For $n>1$, the integer $R(n):=|\cup_{i=1}^{n}X(\F_{q ^i})|-|\cup_{i=1}^{n-1} X(\F_{q^i})|$ can be computed from the sequence $\{|X(\F_{q^i})|\}_{1\leq i\leq n}$ as follows. Let $\{p_1, \cdots ,p_k\}$ be the set of primes dividing $n$ and put $d_i=n/p_i$ for $1\leq i\leq k$. Then,
\begin{equation}\label{easy}
R(n)=|X(\F_{q^{n}})|-\sum_{r=1}^k(-1)^{r+1}
\sum_{1\leq i_1<\cdots <i_r\leq k}|X(\F_{q^{\gcd(d_{i_1},\cdots,d_{i_r})}})|\,.
\end{equation}
Indeed, using that
$$X(\F_{q ^{d_1}})\cap X(\F_{q ^{d_2}})=X(\F_{q ^{\gcd(d_1,d_2)}})\,,\quad\text{ and if }
d_1|d_2 \text{ then }X(\F_{q ^{d_1}})\cup X(\F_{q ^{d_2}})=X(\F_{q ^{d_2}})\,,
$$
we obtain
$$
\begin{array}{rl}
R(n)&= |X(\F_{q^{n}})|-|X(\F_{q^{n}})\cap \left(\cup_{i=1}^{n-1} X(\F_{q^i})\right)|\\[6 pt]
&=|X(\F_{q^{n}})|-|\cup_{d|n, d<n} X(\F_{q^d})|=|X(\F_{q^{n}})|-|\cup_{i=1}^k X(\F_{q^{d_i}})|\\[6 pt]
&=|X(\F_{q^{n}})|-\sum_{r=1}^k(-1)^{r+1}
\sum_{1\leq i_1<\cdots <i_r\leq k}|X(\F_{q^{\gcd(d_{i_1},\cdots,d_{i_r})}})|\,.
\end{array}
$$
Note that, if $\ell_1=2< \cdots <\ell_r$ are the first $r$ rational primes, for $n<\prod_{i=1}^r \ell_i$,
the sum given in (\ref{easy}) contains at most $2^{r-1}$ terms.
To be more precise, to apply Theorem \ref{crit} we only need to know $Q(x)\pmod{N^m}$. In other words, we can replace the polynomial $Q(x)$ with any polynomial $T(x)\in\Z[x]$ such that $Q(x)\equiv T(x)\pmod {N^m}$. We can determine $R(n)\pmod{N^m}$ from the roots of $T(x)$ by applying the procedure described for the roots of $Q(x)$.
\end{rem}
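The computations in the previous remark are easy to automate. The following Python sketch (all names and the truncation level \texttt{n\_max} are ours) evaluates both sides of the inequality (\ref{criterion}) exactly over the integers, using Newton's identities on the coefficients of $Q(x)$, or of any $T(x)\equiv Q(x)\pmod{N^m}$, instead of floating-point approximations of its roots:
\begin{verbatim}
from itertools import combinations
from math import gcd
from functools import reduce

def power_sums(coeffs, n_max):
    # p_n = sum_i alpha_i^n for the roots of the monic polynomial
    # x^d + coeffs[0] x^{d-1} + ... + coeffs[d-1] (Newton's identities).
    d = len(coeffs)
    e = [(-1) ** (i + 1) * c for i, c in enumerate(coeffs)]  # e_1..e_d
    p = []
    for k in range(1, n_max + 1):
        s = sum((-1) ** (j - 1) * e[j - 1] * p[k - j - 1]
                for j in range(1, min(k - 1, d) + 1))
        if k <= d:
            s += (-1) ** (k - 1) * k * e[k - 1]
        p.append(s)
    return p

def prime_divisors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def new_points(counts, n):
    # R(n) by inclusion-exclusion over the maximal divisors d_i = n / p_i,
    # with counts[i - 1] = |X(F_{q^i})|.
    ds = [n // p for p in prime_divisors(n)]
    total = counts[n - 1]
    for r in range(1, len(ds) + 1):
        for sub in combinations(ds, r):
            total += (-1) ** r * counts[reduce(gcd, sub) - 1]
    return total

def criterion(coeffs, q, N, m, g, n_max=200):
    # Truncated left-hand side of the criterion and the bound on its
    # right-hand side; a first value exceeding the second rules out
    # automorphisms of order N^m defined over F_q.
    p = power_sums(coeffs, n_max)
    counts = [1 + q ** n - p[n - 1] for n in range(1, n_max + 1)]
    lhs = sum(new_points(counts, n) % N ** m for n in range(1, n_max + 1))
    rhs = 2 * g // (N - 1) + 2 * (N ** m - 1) // (N - 1)
    return lhs, rhs
\end{verbatim}
Note that the sum is truncated at \texttt{n\_max}; as soon as the running sum exceeds the bound, the conclusion of Theorem \ref{crit} applies, while no conclusion can be drawn from a truncated sum that stays below it.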
\begin{rem}\label{r} If we can prove that, for an automorphism $u\in\Aut_{\F_q}(X)$ of order $N^m$, there exists an integer $r$ such that $|\cR| \leq r< \left\lfloor\frac{2\,g\, }{N-1}\right\rfloor+ 2\frac{N^m-1}{N-1}$, then the condition (\ref{criterion}) in Theorem \ref{crit} can be replaced with the condition
$$
\sum_{n \geq 1}P_{N^m}(n)> r\,.
$$
\end{rem}
\begin{rem}
The non-existence of an automorphism in $\Aut_{\F_q}(X)$ of order $N^m$ is not a necessary condition to satisfy condition (\ref{criterion}). For instance, it may be that two non-isomorphic curves $X$ and $Y$ defined over $\F_q$ have jacobians which are isogenous over $\F_q$. If $Y$ has an automorphism of order $N^m$, then condition (\ref{criterion}) is not satisfied, even if $X$ does not have an automorphism of order $N^m$. Also, if the group $G =\Aut_{\F_q}(X)$ is nontrivial, then one has $ |\cup_{i=1}^n X(\F_{q^i})|\equiv |\cS|\pmod {|G|}$ for almost all $n$, where
$\cS$ is the set of ramification points of the covering $X\rightarrow X/G$. It may be that condition (\ref{criterion}) is not satisfied when we take $N^m$ dividing $|G|$ and $G$ does not contain any $N^m$-cyclic subgroup.
\end{rem}
\begin{question} For a prime $N$, the condition that the sequence
$ \{|\cup_{i=1}^n X(\F_{q^i})|\pmod {N^m} \}_{n\geq n_0}$ is constant for some integer $n_0$ seems to be strong. If there exists a curve $Y$ defined over $\F_q$ such that $\Jac (Y)$ and $\Jac (X)$ are isogenous over $\F_q$ and the order of the group $\Aut_{\F_q}(Y)$ is a multiple of $N^m$, then this condition is satisfied. Is the converse true?
\end{question}
Several consequences can be obtained from Theorem \ref{crit}. Next, we present two of them.
\begin{cor} If there is an integer $n_0>0$ such that
$$ 2< |\cup_{i=1}^{n_0+1}X(\F_{q^i})|-|\cup _{i=1}^{n_0}X(\F_{q^i})| <2g+2\,,$$
then there are no automorphisms in $\Aut_{\F_q}(X)$ of prime order $N>2g+1$, which improves the result obtained through the Hurwitz bound.
\end{cor}
\begin{cor} If $u\in\Aut _{\F_q}(X)$ has order $N^m$, then $\sum
_{n \geq 1} P_{N^m}(n)$ is a lower bound for the cardinality of the set of ramified points of the covering $X\rightarrow X/G$, where $G$ is the subgroup of $\Aut _{\F_q}(X)$ generated by $u$.
\end{cor}
\section{Application to some modular curves}
Let $\New_N$ denote the set of normalized newforms in $S_2(\Gamma_0(N))^{\operatorname{new}}$ and let $\New_N^+$ be the set $\{f\in \New_N\colon w_N(f)=f\}$, where $w_N$ is the Fricke involution.
For $f\in\New_N$, let $S_2(f)$ be the $\C$-vector space of cusp forms spanned by $f$ and its Galois conjugates. Let us denote by $A_f$ the abelian variety attached to $f$ by Shimura. It is a quotient of $J_0(N):=\Jac (X_0(N))$ defined over $\Q$ and the pull-back of $\Omega^1_{A_f/\Q}$ is the $\Q$-vector subspace of elements in $S_2(f) dq/q$ with rational $q$-expansion, i.e. $S_2(f) dq/q\cap\Q[[q]]$, where $q=e^{2\pi\,i\,z}$ for $z$ in the complex upper half-plane.
Moreover, the endomorphism algebra $\End_\Q^0(A_f):=\End_\Q(A_f)\otimes \Q$ is isomorphic to a totally real number field $E_f$ whose degree is equal to $\dim A_f$.
Let $G_\Q$ denote the absolute Galois group $\Gal (\overline{\Q}/\Q)$. Let $X$ be a curve of genus $g>0$ defined over $\Q$ such that $\Jac (X)$ is a quotient of the jacobian of the curve $X_0(N)$ defined over $\Q$. There exists a subset $\cS$ of the set $\cup_{M|N} \New_M$, which is stable under Galois conjugation, such that $\Jac (X)$ is isogenous over $\Q$ to the abelian variety $\prod_{f\in \cS/G_{\Q}}A_f^{n_f}$ for some integers $n_f>0$. If $\ell$ is a prime of good reduction for $X$ not dividing $N$, by the Eichler-Shimura congruence, we can compute the characteristic polynomial $Q(x)$ of $\Frob_\ell$ acting on the Tate module of $\Jac (X\otimes \F_\ell)$ through the $\ell$-Fourier coefficients $a_\ell(f)$ of the newforms $f$ in $\cS$:
$$
Q(x)=\prod_ {f\in\cS}(x^2-a_\ell(f) x+\ell)^{n_f}\,.
$$
\subsection{The split Cartan modular curves $X_{s}(p)$} For a prime $p$, let us denote by $X_s(p)$ the modular curve attached to the normalizer of a split Cartan subgroup of $\GL_2(\F_p)$. This curve is a quotient of the modular curve $X(p)$ defined over $\Q$ and is isomorphic over $\Q$ to the modular curve $X_0^+(p^2)=X_0(p^2)/\langle w_{p^2}\rangle$. In \cite{go15}, the automorphism group of the curve $X_{s}(p)$ is determined for all primes $p$.
There, to conclude the article, it is necessary to prove that $X_{s}(p)$ does not have any involutions defined over the quadratic field $K=\Q(\sqrt{p^*})$, where $p^*=(-1)^{(p-1)/2}\,p$, for $p=17,19,23,29,31$. In fact, this problem is at the origin of Theorem \ref{crit} for $N^m=2$. Since in \cite{go15} it is proved that the number of fixed points of an involution is $\leq 12$, we can apply Remark \ref{r} to the reduction of $X_s(p)$ at a prime of $K$ over a rational prime $\ell\neq p$, i.e. we can use that the condition $\sum _{n}P_2(n)>12$ implies the non-existence of an involution of $X_s(p)$ defined over $K$. This fact is proved by taking a prime $\ell$ splitting in $K$ and by applying this version of Theorem \ref{crit} to the curve $X_{s}(p)\otimes \F_\ell$ for $N^m=2$ (see Section 5 of \cite{go15}).
\subsection{The modular curves $X_0^+(p)$} In \cite{BH}, the automorphism group of the modular curves $X_0^+(p):=X_0(p)/\langle w_p\rangle$ is determined for all primes $p$. After applying some theoretical results, to conclude the article the authors need to prove that the modular curve $X_0^+(p)$ does not have any involution defined over $\Q$ for $p=163,193,197,211,223,227,229,269,331,347,359,383, 389,431,461,563,571,607$. In order to do that, they apply two different arguments. The first one is used to discard $11$ cases and the second one allows them to discard the remaining $7$ cases. Although in \cite{BH} it is proved that the number of fixed points of an involution is $\leq 12$, next we show the table obtained by applying Theorem \ref{crit} (without using Remark \ref{r}) to the curve $X=X_0^+(p)\otimes\F_2$ and $N^m=2$:
$$
\begin{array}{|c|r|c||c|r|c||c|r|c|}
p& g& \sum_nP_2(n) & p& g& \sum_nP_2(n)&p& g& \sum_nP_2(n)\\ \hline
163 & 6 &\sum_{n\leq 53}P_2(n)=15 &229 & 7 &\sum_{n\leq 63}P_2(n)= 17 &389 &11 &\sum_{n\leq 123 }P_2(n)=25 \\[6pt]
193 & 7 &\sum_{n\leq 58}P_2(n)=17 &269 & 6 &\sum_{n\leq 43}P_2(n)= 13 & 431 & 8 &\sum_{n\leq 89}P_2(n)=19 \\[6pt]
197 & 6 &\sum_{n\leq 42}P_2(n)=15 &331 & 11 &\sum_{n\leq 79}P_2(n)= 25 & 461 & 12 &\sum_{n\leq 99}P_2(n)=27\\[6pt]
211& 6 &\sum_{n\leq 60}P_2(n)=15 &347 & 10 &\sum_{n\leq 74}P_2(n)=23 &563 & 15 &\sum_{n\leq 116}P_2(n)= 33\\[6pt]
223 & 6 &\sum_{n\leq 54}P_2(n)=15 &359 & 6 &\sum_{n\leq 60}P_2(n)= 15 & 571& 19&\sum_{n\leq 156}P_2(n)=41\\[6pt]
227 & 5 &\sum_{n\leq 40}P_2(n)= 13&383 & 8 &\sum_{n\leq 88}P_2(n)=19 & 607 & 19 &\sum_{n\leq 166}P_2(n)= 41
\end{array}
$$
In all cases $\sum_{n}P_2(n)>2\, g+2$ and, thus, all these curves do not have any involutions defined over $\Q$.
\subsection{The non-split Cartan modular curves $X_{ns}(p)$}
Let $p$ be a rational prime and let $X_{ns}(p)$ be the modular curve attached to a non-split Cartan subgroup of $\GL_2(\F_p)$. This curve is a quotient of the modular curve $X(p)$ defined over $\Q$, which has a canonical involution $w$ defined over $\Q$, the so-called modular involution. The genus $g$ of $X_{ns}(p)$ is greater than $1$ for $p\geq 11$. In \cite{DFGS}, the following is proved
$$\Aut(X_{ns}(11))=\Aut_\Q(X_{ns}(11))\simeq (\Z/2\Z)^2\,.$$
In \cite{do15}, it is proved that for $p\geq 37$ all automorphisms of $X_{ns}(p)$ preserve cusps and, moreover, if $p\equiv 1\pmod{12}$ then $\Aut(X_{ns}(p))=\{1,w\}$.
It is expected that $\Aut(X_{ns}(p))=\{1,w\}$ for $p>11$. The goal of this subsection is to prove this fact
for $ 13\leq p \leq 31$. We point out that the genera of these six curves are $8, 15, 20, 31, 54$ and $63$.
\vskip 0.2 cm
Set $X_{ns}^+(p)=X_{ns}(p)/\langle w\rangle$ and let us denote by $g^+$ its genus. For $p\geq 11$, the splitting over $\Q$ of the jacobians of these curves is as follows (cf. \cite{Chen}):
$$
J_{ns}(p):=\Jac(X_{ns}(p))\stackrel{\Q}\sim \prod_{f\in \New_{p^2}/G_\Q} A_f\,,\quad J_{ns}^+(p):=\Jac(X_{ns}^+(p))\stackrel{\Q}\sim \prod_{f\in \New_{p^2}^+/G_\Q} A_f\,.
$$
From now on,
$\chi$ denotes the quadratic Dirichlet character of conductor $p$, i.e. the Dirichlet character attached to the quadratic number field $K=\Q(\sqrt{p^*})$, where $p^*=(-1)^{(p-1)/2}\,p$. Next, we summarize some facts concerning the modular abelian varieties $A_f$ attached to newforms $f\in\New_{p^2}$ (see Section 2 of \cite{go15} for detailed references).
The map $f\mapsto f\otimes \chi$ is a permutation of the set $\New_{p^2}\cup\New_p$. Under this bijection, there is a unique newform $f$, up to Galois conjugation, such that $f=f\otimes \chi$ when $p\equiv 3 \pmod 4$ and, moreover, in this case $f\in\New_{p^2}$.
If $f\in\New_{p^2}$ has complex multiplication (CM), i.e. $f=f\otimes\chi$, then the dimension of $A_f$ is the class number of $K$ and $A_f$ has all its endomorphisms defined over the Hilbert class field of $K$. The endomorphism algebra $\End_K^0(A_f)$ is isomorphic to the CM field $E_f\otimes K$ which only contains the roots of unity $\pm 1$. Moreover, $f\in\New_{p^2}^+$ if, and only if, $p\equiv 3\pmod 8$.
Let $f=\sum a_n q^n \in \New_{p^2}$ be without CM. If $f$ has an inner twist $\chi'\neq 1$, i.e. $f\otimes \chi'={}^{\sigma} f$ for some $\sigma\in G_\Q$, then $\chi'=\chi$ because $\chi'$ must be a quadratic character of conductor dividing $p^2$.
In such a case, $\End ^0(A_f)=\End_K^0(A_f) $ is a noncommutative algebra.
More precisely, set $F_f:=\Q( \{ a_\ell^2\})$, with $\ell$ running over the set of all rational primes. If $A_f$ is simple, then $\End_K^0(A_f) $ is a quaternion algebra $\cQ_f$ over $F_f$ (QM case), otherwise $A_f$ is isogenous over $K$ to the square of an abelian variety $B_f$ and $\End_K^0(A_f) $ is isomorphic to the matrix algebra $M_2(F_f)$ (RM case).
If $\chi$ is not an inner twist for $f\in\New_{p^2}$, then $A_f$ is simple and $\End^0 (A_f)$ is isomorphic to $E_f$ (RM case).
For two distinct $f_1, f_2\in \New_{p^2}/G_\Q$, the abelian varieties $A_{f_1}$ and $A_{f_2}$ are not isogenous over $\Q$ and are isogenous if, and only if, $f_1\otimes \chi={}^{\sigma} f_2$ for some $\sigma\in G_\Q$. In this particular case, there is an isogeny defined over $K$.
In the sequel, we restrict our attention to the values $13\leq p\leq 31$. The next lemma can be obtained through the instruction {\bf BrauerClass} in the program {\it Magma}.
\begin{lema}
For all $13\leq p\leq 31$, there is no $f\in\New_{p^2}$ with quaternionic multiplication.
\end{lema}
\vskip 0.2 cm
Let us fix a set $\{f_1,\cdots,f_r\}$ of representative cusp forms for the set $\New_{p^2}/G_\Q$. We introduce the subsets $\cS_{cm}$, $\cS_{rm}$, $\cS_{s}$ and $\cS_{t}$ of $\New_{p^2}/G_\Q$ as follows.
The subsets $\cS_{cm}$ and $\cS_{rm}$ are the sets of newforms in $\New_{p^2}/G_\Q$ having $\chi$ as an inner twist and corresponding to the CM and RM cases respectively. The subsets $\cS_s$ and $\cS_t$ are defined as follows:
$$ \cS_{s}=\{ f\in \New_{p^2}/G_\Q\colon f\otimes \chi \in\New_p/G_\Q\}\,,\,\,\cS_{t}=\{f_i \in \New_{p^2}/G_\Q\colon f_j=f_i\otimes \chi, i<j\}\,.
$$
For $\New_{p^2}^+/G_\Q$, we introduce the following four sets $\cS_{cm}^+=\cS_{cm}\cap \New_{p^2}^+$, $\cS_{rm}^+=\cS_{rm}\cap \New_{p^2}^+$, $\cS_s^+=\cS_s\cap \New_{p^2}^+$ and
$ \cS_{t}^+=\{f_i \in \cS_t\cap \New_{p^2}^+\colon f_i\otimes \chi\in \New_{p^2}^+\}$. Hence,
the splittings over $K$ of $J_{ns}(p)$ and $J_{ns}^+(p)$ are
$$
J_{ns}(p)\stackrel{K}\sim \prod_{f\in \cS_{cm}} A_f\prod_{f\in \cS_{s}}A_f\prod_{f\in \cS_{rm}} B_f^2\prod_{f\in\cS_t} A_f^2$$
and
$$J_{ns}^+(p)\stackrel{K}\sim \prod_{f\in \cS_{cm}^+} A_f\prod_{f\in \cS_{s}^+}A_f\prod_{f\in \cS_{rm}^+} B_f^2\prod_{f\in\cS_t^+} A_f^2\,.
$$
The corresponding decompositions of their endomorphism algebras over $K$ are
\begin{equation}\label{dc}
\End_K^0(J_{ns}(p))\simeq \prod_{f\in \cS_{cm}} E_f\otimes K\prod_{f\in \cS_{s}}E_f\prod_{f\in \cS_{rm}} M_2(F_f)\prod_{f\in\cS_t} M_2(E_f)
\end{equation}
and
\begin{equation}\label{dc+}
\End_K^0(J_{ns}^+(p))\simeq \prod_{f\in \cS_{cm}^+} E_f\otimes K\prod_{f\in \cS_{s}^+}E_f\prod_{f\in \cS_{rm}^+} M_2(F_f)\prod_{f\in\cS_t^+} M_2(E_f)\,.
\end{equation}
For $13\leq p\leq 31$, the following table shows the description of the sets $\New_{p^2}/G_\Q$ and $\New_{p^2}^+/G_\Q$ as well as the action of the map $f\mapsto f\otimes \chi$ on the set $(\New_{p^2}\cup\New_p)/G_\Q$.
$$
\begin{array}{c|c|c|c|c|c|c|c|c|}
p & g& \New_{p^2}/G_\Q & \dim A_{f_i}& \cS_{rm}& \cS_{cm} & \begin{array}{c}\cS_t\\ \text{(twists)}\end{array} &g^+& \New^+_{p^2}/G_\Q\\\hline\hline
13 &8 &\{f_1,f_2,f_3\} &\left\{\begin{array}{cr}2\,, & i=1\\
3 \,,&2\leq i\leq 3\end{array}\right. &\{ f_1\}& \emptyset &\begin{array}{c}\{ f_2\}\\f_3=f_2\otimes \chi\end{array} & 3 & \{ f_2\}\\\hline
17 &15 &\{f_1,\cdots,f_6\} & \left\{\begin{array}{cr}1\,, & i=1\\
2\,,&2\leq i\leq 3\\
3\,,& 4\leq i\leq 5\\
4\,,& i=6\end{array}\right. & \{f_6\}&\emptyset &\begin{array}{c}\{f_2,f_4\}\\ f_3=f_2\otimes \chi \\
f_5=f_4\otimes \chi\end{array}& 6 &\{f_1,f_2,f_4\} \\\hline
19 & 20&\{f_1,\cdots,f_9\}& \left\{\begin{array}{cr} 1\,, & 1\leq i \leq 2\\
2\,, & 3\leq i \leq 6\\
3\,, &7\leq i \leq 8\\
4 \,,& i=9\end{array}\right.& \{f_9\}& \{f_1\}& \begin{array}{c}\{f_3,f_5,f_7\}\\ f_4=f_3\otimes \chi \\
f_6=f_5\otimes \chi\\ f_8=f_7\otimes \chi\end{array}& 8& \{f_1,f_7,f_9\} \\\hline
23 &31 &\{f_1,\cdots,f_{10} \}&
\left\{\begin{array}{cr} 2\,, & 1\leq i \leq 5\\
3\,, & i= 6\\
4\,, &7\leq i \leq 8\\
5 \,,&9\leq i\leq 10 \end{array}\right. &\{f_7,f_8\} & \{f_6\}& \begin{array}{c} \{f_1,f_4,f_9\}\\f_2=f_1\otimes \chi \\
f_5=f_4\otimes \chi\\ f_{10}=f_9\otimes \chi\end{array}& 13&\{ f_7,f_8,f_9 \}\\\hline
29 & 54&\{f_1,\cdots,f_{11} \}&\left\{\begin{array}{rr} 2\,, & 1\leq i \leq 4\\
3\,, & 5\leq i \leq 6\\
6\,, &7 \leq i \leq 8\\
8 \,,& 9\leq i \leq 10\\
12\,, & i=11
\end{array}\right. & \{f_3,f_{11}\}&\emptyset &\begin{array}{c}\{f_1,f_5,f_7,f_9\}\\f_4=f_1\otimes \chi\\f_6=f_5\otimes\chi\\f_8=f_7\otimes \chi\\f_{10}=f_9\otimes \chi\end{array} &24 &\begin{array}{c}\{f_1,f_2,f_5,\\f_6,f_7,f_9 \}\end{array} \\\hline
31 & 63&\{f_1,\cdots,f_{12}\} & \left\{\begin{array}{rr} 2\,, & 1\leq i \leq 6\\
3\,, & i =7\\
4\,, & i = 8\\
8\,,& 9\leq i \leq 10\\
12\,, & i=11\\
16\,,& i=12
\end{array}\right.&\begin{array}{c}\{f_5,f_8,\\f_{11},f_{12}\}\end{array} &\{f_7 \}&\begin{array}{c}\{f_1,f_2,f_9\}\\f_4=f_1\otimes \chi\\f_6=f_2\otimes\chi\\f_{10}=f_9\otimes \chi\end{array} & 28&\begin{array}{c}\{f_1,f_2,\\f_9,f_{12}\}\end{array}\\ \hline
\end{array}
$$
\centerline{Table 1}
\vskip 0.2 cm
\noindent
The labels of the newforms in $\New_{p^2}$ are those given by {\it Magma}. For a prime $p$, the set $\cS_s$ consists of the newforms $f$ which do not appear in the columns corresponding to $\cS_{rm}$, $\cS_{cm}$ and $\cS_t$ (twists).
\vskip 0.3cm
\begin{prop} Let $p$ be a prime such that $13 \leq p\leq 31$. Then,
\begin{itemize}
\item[(i)] The group $\Aut(X_{ns}^+(p))$ is trivial.
\item[(ii)] The modular involution $w$ is the only nontrivial automorphism of $X_{ns}(p)$.
\end{itemize}
\end{prop}
\noindent{\bf Proof.} For $p=13$, we already know that $\Aut (X_{ns}^+(13))$ is trivial because $X_{ns}^+(13)$ is not hyperelliptic (cf. \cite{Baran}) and the endomorphism algebra $\End^0 (J_{ns}^+(13))$ is a totally real number field which only contains the roots of unity $\pm 1$.
We split the proof into the following steps.
\vskip 0.3 cm
\noindent {\it Step 1: All automorphisms of $X_{ns}(p)$ and $X_{ns}^+(p)$ are defined over $K$. }
On the one hand, for two distinct $f_1,f_2$ lying in $\New_{p^2}/G_\Q$, without CM, $A_{f_1}$ and $A_{f_2}$ are isogenous if, and only if, $f_2$ is a Galois conjugate of $f_1\otimes \chi$ and, in this case, the isogeny is defined over $K$.
On the other hand, if $f\in\New_{p^2}/G_\Q$ does not have CM, all endomorphisms of $A_f$ are defined over $K$. Hence, if $\New_{p^2}/G_\Q$, resp. $\New_{p^2}^+/G_\Q$, does not contain a newform with CM, all endomorphisms of $J_{ns}(p)$, resp. $J_{ns}^+(p)$, are defined over $K$ and, in particular, also all automorphisms of the corresponding curve.
Assume that $\New_{p^2}/G_\Q$, resp $\New_{p^2}^+/G_\Q$, contains a newform $f$ with CM. Then all endomorphisms of $A_f$ are defined over the Hilbert class field of $K$ and $A_f$ is unique. Let $g_c$ be the dimension of the abelian variety $A_f$. Due to the fact that $g>1+2 g_c$ ($p\equiv 3 \pmod 4$), resp. $g^+>1+2 g_c$ ($p\equiv 3 \pmod 8)$, the non-existence of an automorphism not defined over $K$ is
obtained by applying the same argument used in the proof of Lemma 1.4 in \cite{KM}.
\vskip 0.3 cm
\noindent {\it Step 2: The only primes $N$ which can divide the order of a nontrivial automorphism of $X_{ns}(p)$ or $X_{ns}^+(p)$ are those displayed in the following tables}
\begin{equation}\label{tables}
X_{ns}(p):\quad
\begin{array}{|c|r|}
p & N\phantom{cc}\\ \hline
13 & 2,3, 7\\
17 & 2,3\\
19 & 2,3,5\\
23& 2,3,11\\
29 & 2,3,5,7\\
31 & 2,3,5\\
\hline
\end{array}\,,\quad \quad X_{ns}^+(p):\quad \begin{array}{|c|r|}
p & N\phantom{cc}\\ \hline
13 &2\\
17 & 2\\
19 & 2,3,5\\
23& 2,3\\
29 & 2,3,7\\
31 & 2,3\\\hline
\end{array}
\end{equation}
The number fields which appear in the decomposition of $\End_K^0(J_{ns}(p))$ (see (\ref{dc})), resp. $\End_K^0(J_{ns}^+(p))$ (see (\ref{dc+})), only contain the roots of unity $\pm 1$. The only matrix algebras in this decomposition are of the form $M_2(F)$ for $f\in \cS_{rm}$ and $f\in \cS_{t}$ for $J_{ns}(p)$, resp. $f\in \cS_{rm}^+$ and $f\in \cS_{t}^+$ for $J_{ns}^+(p)$. In the first case, $F=F_f$ and, in the second case, $F=E_f$. In any case, $F$ is a totally real number field. If there exists a nontrivial automorphism of order an odd prime $N$, then the maximal real subfield $K_N$ of the $N$-th cyclotomic field must be contained in some of these number fields $F$. In particular, $N-1$ must divide $ 2 [F:\Q]$. By looking at the following tables,
obtained from Table 1,
$$
X_{ns}(p):
\begin{array}{|c|r|}
p & [F :\Q] \phantom{cc}\\ \hline
13 & 1,3 \\
17 & 2,3 \\
19 & 2,3 \\
23& 2,5 \\
29 &1,2,3,6,8 \\
31 & 1,2,6, 8 \\
\hline
\end{array}\,,\quad \quad X_{ns}^+(p): \begin{array}{|c|r|}
p & [F :\Q] \\ \hline
13 &-\\
17 & - \\
19 & 2 \\
23& 2 \\
29 &3 \\
31 & 8 \\\hline
\end{array}\,,
$$
we obtain a few possibilities for $N$. After checking all of them, we find that the only cases in which $K_N$ is contained in some $F$ are those displayed in (\ref{tables}).
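For illustration, these candidate primes can be enumerated with a short script implementing the divisibility condition $N-1 \mid 2\,[F:\Q]$ (a minimal sketch; the degree lists are read off from the tables above, the prime $N=2$ is always a candidate and is treated separately, and the surviving odd primes must still be checked against the actual fields $F$):
\begin{verbatim}
# Minimal sketch: odd primes N with (N-1) | 2*[F:Q], a necessary
# condition for K_N, of degree (N-1)/2 over Q, to embed in F.
def isprime(n):
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

degrees = {  # p : values of [F:Q] occurring for X_ns(p), from the table
    13: [1, 3], 17: [2, 3], 19: [2, 3],
    23: [2, 5], 29: [1, 2, 3, 6, 8], 31: [1, 2, 6, 8],
}
for p, degs in degrees.items():
    cand = sorted({N for d in degs for N in range(3, 2 * d + 2)
                   if isprime(N) and (2 * d) % (N - 1) == 0})
    print(p, cand)  # candidates only; K_N must then be checked in F
\end{verbatim}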
\vskip 0.3 cm
\noindent {\it Step 3: There are no automorphisms of $X_{ns}(p)$ and $X_{ns}^+(p)$ of odd order.}
The claim is obtained by applying Theorem \ref{crit} to the curves $X_{ns}(p)\otimes \F_\ell$ and $X_{ns}^+(p)\otimes \F_\ell$, where $\ell$ is a prime splitting in $K$, and for all $N^m=N$ as in (\ref{tables}). We only show the case $N=3$:
$$\begin{array}{c|c|r|r|}
p & \ell&X_{ns}(p)\otimes\F_\ell:\sum_nP_3(n) & X_{ns}^+(p)\otimes \F_\ell:\sum_nP_3(n) \\ \hline
13 & 3 & \sum_{n\leq 16}P_3(n)= 12 & -\\[6 pt]
17 &2 &\sum_{n\leq 34}P_3(n)=18& -\\ [6 pt]
19 &5 &\sum_{n\leq 31}P_3(n)= 24 & \sum_{n\leq 14}P_3(n)=12\\ [6 pt]
23 &2 &\sum_{n\leq 52}P_3(n)=35 & \sum_{n\leq 19 }P_3(n)=16\\ [6 pt]
29 &5 &\sum_{n\leq 76}P_3(n)= 58 & \sum_{n\leq 47}P_3(n)=27\\ [6 pt]
31 &2 &\sum_{n \leq 86}P_3(n)= 66 & \sum_{n \leq 58 }P_3(n)=31\\
\end{array}
$$
\vskip 0.3 cm
\noindent {\it Step 4: The group $\Aut(X_{ns}^+(p))$ is trivial.}
We only need to prove that $X_{ns}^+(p)$ does not have any involutions defined over $K$.
Again, the claim is obtained by applying Theorem \ref{crit} to the curves $X=X_{ns}^+(p)\otimes \F_\ell$ for $p\neq 19$ and $X=X_{ns}^+(p)\otimes \F_{\ell^2}$ for $p= 19$, and $N^m=2$:
$$\begin{array}{c|c|c|c|}
p & \ell & \sum_n P_2(n)\\ \hline
17 & 2& \sum_{n\leq 59}P_2(n)=15 \\ [6 pt]
19 & 2 &\sum_{n\leq 83}P_2(n)=19\\ [6 pt]
23 & 2 &\sum_{n\leq 95}P_2(n)=29\\ [6 pt]
29 & 5 &\sum_{n\leq 253}P_2(n)=51 \\ [6 pt]
31 & 2 &\sum_{n\leq 258 }P_2(n)= 59\\
\end{array}
$$
For $p=19$, we have replaced the prime $\ell=5$ with $\ell=2$ ($2$ is inert in $K$), because for $\ell=5$ the sequence $P_2(n)$ turns out to be equal to $0$ for $8\leq n\leq 200$.
\vskip 0.3 cm
\noindent {\it Step 5: The modular involution $w$ is the only nontrivial automorphism of $X_{ns}(p)$.}
A nontrivial automorphism different from $w$ does not commute with $w$ because the group $\Aut(X_{ns}^+(p))$ is trivial. Assume that there is a nontrivial automorphism $u$ of $X_{ns}(p)$ different from $w$. Since the order of $\Aut_K(X_{ns}(p))$ is a power of $2$, we can suppose that $u$ is an involution different from $w$.
The automorphism $v=u\cdot w$ cannot be an involution, otherwise $u$ and $w$ would commute. Therefore, either $v$ or a power of $v$ has order $4$.
Now, applying Theorem \ref{crit} to $X=X_{ns}(p)\otimes \F_\ell$ for $N^m=4$ and $p\neq 13, 19$, we obtain
$$\begin{array}{c|c|c|c|}
p & \ell &\sum_nP_4(n)\\ \hline
17 & 2& \sum_{n\leq 81}P_4(n)=38\\ [6 pt]
23 & 2 &\sum_{n\leq 127}P_4(n)=70\\ [6 pt]
29 & 5 &\sum_{n\leq 143}P_4(n)=115\\ [6 pt]
31 & 2 &\sum_{n\leq 291 }P_4(n)= 134\\
\end{array}
$$
Hence, for these four values of $p$ the statement is proved.
For $p=13$ or $19$, the sequence $P_4(n)$ turns out to be equal to $0$ for $6<n\leq 250$, even when changing the prime $\ell$. Nevertheless, applying Theorem \ref{crit} for $N^m=8$, we prove that $X_{ns}(13)$ and $X_{ns}(19)$ do not have any automorphisms of order $8$:
$$\begin{array}{c|c|c|c|}
p & \ell & \sum_n P_8(n)\\ \hline
13 & 3& \sum_{n\leq 15}P_8(n)= 34\\ [6 pt]
19 & 5 &\sum_{n\leq 34}P_8(n)=58
\end{array}
$$
Therefore, the order of any automorphism of $X_{ns}(p)$ must divide $4$. Assume that there is $v\in\Aut(X_{ns}(p))$ of order $4$. Then, the automorphism $u:=v^2\cdot w$ can only have order $2$ or $4$. On the one hand, $u$ cannot be an involution since $v^2$ and $w$ do not commute. On the other hand, if $u$ has order $4$, then $u^2$ is an involution different from $w$ and, thus, $u^2\cdot w=v^2\cdot w\cdot v^2$ must have order $4$, but $(u^2\cdot w)^2=1$. Therefore, neither of these two curves has automorphisms of order $4$.
\hfill $\Box$
\noindent
The production of heavy hadrons (H) in {$e^+e^-$}\chkspace annihilation provides a
laboratory for the study of heavy-quark (Q) jet fragmentation. This is
commonly characterised in terms of the observable
$x_{H}$ $\equiv$ $2E_H/\sqrt{s}$, where
$E_H$ is the energy of a $B$ or $D$ hadron containing a $b$ or $c$ quark,
respectively, and $\sqrt{s}$ is the c.m. energy. In contrast to light-quark
jet fragmentation one expects~\cite{Bj} the distribution of $x_{H}$,
$D(x_{H})$, to peak at an $x_{H}$-value significantly above 0.
Since the hadronisation process is intrinsically non-perturbative $D(x_{H})$
cannot be calculated directly using perturbative Quantum Chromodynamics
(QCD). However, the distribution of the closely-related variable
$x_{Q}$ $\equiv$ 2$E_Q/\sqrt{s}$ can be calculated
perturbatively \cite{mn,dkt,bcfy} and related, via model-dependent
assumptions, to the observable quantity $D(x_{H})$; a number of such
models of heavy quark fragmentation have been proposed
\cite{lund,bowler,pete}. Measurements of $D(x_{H})$ thus serve to
constrain both perturbative QCD and the model predictions.
Furthermore, the measurement of $D(x_{H})$ at different c.m. energies
can be used to test QCD evolution, and comparison of $D(x_{B})$
with $D(x_{D})$ can be used to test heavy quark symmetry~\cite{jaffe,Lisa}.
Finally, the uncertainty on the forms of $D(x_{D})$ and $D(x_{B})$
must be taken into account in studies of the production and decay of heavy
quarks, see {\it eg.}\chkspace~\cite{heavy}; more accurate measurements of these forms
will allow increased precision in tests of the electroweak heavy-quark sector.
We consider the measurement of the $B$ hadron scaled energy distribution
$D(x_{B})$ in $Z^0$ decays. Earlier studies \cite{early}
used the momentum spectrum of the lepton from semi-leptonic $B$ decays to
constrain the mean value $<x_{B}>$ and found it to be approximately
$0.70$; this is in agreement with the results of similar studies at $\sqrt{s}$
= 29 and 35 GeV~\cite{petra}. In more recent
analyses~\cite{aleph95,shape,sldbfrag}
the scaled energy distribution
$D(x_{B})$ has been measured by reconstructing $B$ hadrons via their
$B$ {$\rightarrow$}\chkspace D$l$X decay mode. In this case the reconstruction efficiency is
intrinsically low due to the small branching ratio for $B$ hadrons to decay into
the high-momentum leptons used in the tag. Also,
the reconstruction of the $B$ hadron energy using calorimeter information
usually has poor resolution for low $B$ energy, resulting
in poor sensitivity to the shape of the distribution at low energy.
Here we describe the preliminary results of a new method for reconstructing
$B$ hadron decays, and the $B$ energy, inclusively, using only charged tracks,
in the SLD experiment at SLAC.
We use the upgraded CCD vertex detector,
installed in 1996, to reconstruct $B$-decay vertices with high
efficiency and purity. Combined with the micron-size SLC interaction point
(IP), precise vertexing allows us to reconstruct accurately
the $B$ flight direction and hence
the transverse momentum of tracks associated with
the vertex with respect to this direction.
Using the transverse momentum and
the total invariant mass of the associated tracks, an upper limit
on the mass of the missing particles is found for each
reconstructed $B$-decay vertex, and is used to solve for the longitudinal
momentum of the missing particles, and hence for the energy
of the $B$ hadron. In order
to improve the $B$ sample purity and the reconstructed $B$ hadron energy
resolution, $B$ vertices with low missing mass are selected.
The method is described in Section 3. In Section 4
we compare the $B$ energy distribution with predictions
of heavy quark fragmentation models. We also test several functional
forms of $B$ hadron energy distributions. In Section 5, we unfold
the $B$ hadron energy distribution. In Section 6, we discuss the
systematic errors. In Section 7 we summarize the results.
\section{Apparatus and Hadronic Event Selection}
\noindent
This analysis is based on roughly 150,000 hadronic events produced in
{$e^+e^-$}\chkspace annihilations at a mean center-of-mass energy of $\sqrt{s}=91.28$ GeV
at the SLAC Linear Collider (SLC), and recorded in the SLC Large Detector
(SLD) in 1996 and 1997.
A general description of the SLD can be found elsewhere~\cite{sld}.
The trigger and initial selection criteria for hadronic $Z^0$ decays are
described in Ref.~\cite{sldalphas}.
This analysis used charged tracks measured in the Central Drift
Chamber (CDC)~\cite{cdc} and in the upgraded Vertex Detector (VXD3)~\cite{vxd}.
Momentum measurement is provided by a uniform axial magnetic field of 0.6T.
The CDC and VXD3 give a momentum resolution of
$\sigma_{p_{\perp}}/p_{\perp}$ = $0.01 \oplus 0.0026p_{\perp}$,
where $p_{\perp}$ is the track momentum transverse to the beam axis in
GeV/$c$. In the plane normal to the beamline
the centroid of the micron-sized SLC IP is reconstructed from tracks
in sets of approximately thirty sequential hadronic $Z^0$ decays to a precision
of $\sigma^{r\phi}\simeq7\pm2$ $\mu$m (1996)
and $\sigma^{r\phi}\simeq4\pm2$ $\mu$m (1997). The IP position along the
beam axis is determined event by event using charged tracks with
a resolution of $\sigma^z$ $\simeq$ 35 $\mu$m (1996) and
$\sigma^z$ $\simeq$ 30 $\mu$m (1997).
Including the uncertainty on the IP position, the resolution on the
charged-track impact parameter ($d$) projected in the plane perpendicular
to the beamline is
$\sigma_{d}^{r\phi}$ = 14$\oplus$33/$(p\sin^{3/2}\theta)$ $\mu$m
(1996) and
$\sigma_{d}^{r\phi}$ = 11$\oplus$33/$(p\sin^{3/2}\theta)$ $\mu$m
(1997),
and the resolution in the plane containing the beam axis is
$\sigma_{d}^{z}$ = 27$\oplus$33/$(p\sin^{3/2}\theta)$ $\mu$m
(1996) and
$\sigma_{d}^{z}$ = 24$\oplus$33/$(p\sin^{3/2}\theta)$ $\mu$m
(1997),
where
$\theta$ is the track polar angle with respect to the beamline.
The event thrust axis~\cite{thrust} is calculated using energy clusters
measured in the Liquid Argon Calorimeter~\cite{lac}.
A set of cuts is applied to the data to select well-measured tracks
and events well contained within the detector acceptance.
Charged tracks are required to have a distance of
closest approach transverse to the beam axis within 5 cm,
and within 10 cm along the axis from the measured IP,
as well as $|\cos \theta |< 0.80$, and $p_\perp > 0.15$ GeV/c.
Events are required to have a minimum of seven such tracks,
a thrust axis polar angle w.r.t. the beamline, $\theta_T$,
within $|\cos\theta_T|<0.71$, and
a charged visible energy $E_{vis}$ of at least 20~GeV,
which is calculated from the selected tracks assigned the charged pion mass.
The efficiency for selecting a well-contained $Z^0 \rightarrow q{\bar q}(g)$
event is estimated to be above 96\% independent of quark flavor. The
selected sample comprised 111,569 events, with an estimated
$0.10 \pm 0.05\%$ background contribution dominated
by $Z^0 \rightarrow \tau^+\tau^-$ events.
For the purpose of estimating the efficiency and purity of the $B$ hadron
selection procedure we made use of a detailed Monte Carlo (MC) simulation
of the detector.
The JETSET 7.4~\cite{jetset} event generator is used, with parameter
values tuned to hadronic {$e^+e^-$}\chkspace annihilation data~\cite{tune},
combined with a simulation of $B$ hadron decays
tuned~\cite{sldsim} to $\Upsilon(4S)$ data and a simulation of the SLD
based on GEANT 3.21~\cite{geant}.
Inclusive distributions of single-particle and event-topology observables
in hadronic events are found to be well described by the
simulation~\cite{sldalphas}. Uncertainties in the simulation
are taken into account in the systematic errors (Section~\ref{sec:sys}).
\noindent
\section{$B$ Hadron Selection and Energy Measurement}
\subsection{$B$ Hadron Selection}
The $B$ sample for this analysis is selected using a topological vertexing
technique based on the detection and measurement of charged tracks,
which is described in detail in Ref.~\cite{zvnim}.
Each hadronic event is divided into two hemispheres by a plane perpendicular
to the thrust axis.
In each hemisphere the topological vertexing algorithm is applied to
the set of `quality' tracks having
(i) at least 23 hits in the CDC and 2 hits in VXD3;
(ii) a combined CDC and VXD3 track fit quality of $\chi^{2}/N_{dof}<8$;
(iii) a momentum in the range 0.25$<p<$55 GeV/$c$;
(iv) an impact parameter of less than 0.3~cm
in the $r\phi$ plane, and less than 1.5~cm along the $z$ axis; and
(v) a transverse impact parameter error no larger than 250 $\mu$m.
Vertices consistent with photon conversions or $K^{0}$ and $\Lambda^0$ decays
are discarded.
In hemispheres containing at least one found vertex the
vertex furthest from the IP is retained
as the `seed' vertex.
Those events are retained which contain a seed vertex separated from the IP
by between
0.1~cm and 2.3~cm. The lower bound reduces contamination from non-$B$-decay
tracks and backgrounds from light-flavor events, and the upper bound
reduces the background from particle interactions with the beam
pipe.
For each hemisphere containing an accepted seed vertex, a
vertex axis is formed by the straight line joining the IP to
the seed vertex, which is located at a distance D from the IP.
For each quality track not directly associated with the vertex,
the distance of closest approach to the vertex axis, T,
and the distance from the IP along the vertex
axis to the point of closest approach, L, are calculated.
Tracks satisfying T$<1$~mm and L$/$D$>0.3$ are added to the vertex.
These T and L cuts are chosen to minimize false track associations
to the seed vertex, since typically the addition of
a false track has a much greater
kinematic effect than the omission of a genuine $B$-decay track, and hence
has more effect on the reconstructed $B$ hadron energy resolution.
Our Monte Carlo studies show that, on average, this procedure
attaches 0.85 tracks to each seed vertex, 91.9\% of the tracks
from tagged true $B$ decays are associated
with the resulting vertices, and 98.0\% of the vertex tracks are from true
$B$ decays.
The large masses of the $B$ hadrons relative to light-flavor hadrons
make it possible to distinguish $B$ hadron decay vertices from those
vertices found in events of light flavors using the vertex invariant
mass, $M$. However, due to the missing particles, which are mainly neutrals,
$M$ cannot be fully determined.
In the {\em rest} frame of the decaying hadron, $M$ can be written as
\begin{equation}
M=\sqrt{M_{ch}^{2}+P_{t}^{2}+P_{chl}^{2}}+\sqrt{M_{0}^{2}+P_{t}^{2}+P_{0l}^{2}}
\label{eqn:vertexmass}
\end{equation}
where $M_{ch}$ and $M_{0}$ are the total invariant masses of the set of
vertex-associated tracks and the set of missing particles, respectively.
$P_{t}$ is the total charged track momentum transverse to the $B$ flight
direction, which is identical to the transverse momentum of the set of
missing particles by momentum conservation. $P_{chl}$ and $P_{0l}$ are
the respective momenta along the $B$ flight direction.
In the $B$ {\em rest} frame, momentum conservation gives $P_{chl} = -P_{0l}$.
Using the set of vertex-associated charged tracks, we calculate
the total momentum vector ${\vec{P}}_{ch}$, the total energy $E_{ch}$
and the invariant mass $M_{ch}$, assuming the charged pion mass for each
track.
The $B$ hadron flight direction is estimated as the line joining the IP and the $B$ vertex.
The lower bound for the mass of the decaying hadron,
the `$P_{t}$-corrected vertex mass',
\vspace{-0.2cm}
\begin{equation}
M_{Pt} = \sqrt{M_{ch}^{2}+P_{t}^{2}} + |P_{t}|
\label{eqn:masspt}
\end{equation}
is used as the variable for selecting $B$ hadrons.
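For illustration, $M_{Pt}$ can be computed from the vertex tracks as in the following sketch (the interface is assumed for illustration: a list of momentum three-vectors in GeV/$c$ and an estimate of the flight direction; this is not the actual SLD reconstruction code):
\begin{verbatim}
# Minimal sketch of Eq. (2): Pt-corrected vertex mass from track momenta.
import numpy as np

M_PION = 0.13957  # GeV/c^2, charged-pion mass assigned to each track

def pt_corrected_mass(tracks, flight_dir):
    p = np.asarray(tracks, dtype=float)          # (n_tracks, 3)
    u = np.asarray(flight_dir, dtype=float)
    u = u / np.linalg.norm(u)                    # unit flight direction
    e_ch = np.sum(np.sqrt(M_PION**2 + np.sum(p**2, axis=1)))  # total energy
    p_tot = p.sum(axis=0)                        # total momentum vector
    m_ch2 = e_ch**2 - p_tot @ p_tot              # vertex invariant mass^2
    p_l = p_tot @ u                              # longitudinal component
    pt2 = p_tot @ p_tot - p_l**2                 # transverse momentum^2
    return np.sqrt(m_ch2 + pt2) + np.sqrt(pt2)   # M_Pt
\end{verbatim}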
The majority of non-$B$ vertices have $M_{Pt}$ less than 2.0 GeV/$c^{2}$.
However, occasionally the measured $P_t$ may fluctuate to a
much larger
value than the true $P_t$, causing some charm vertices to have a $M_{Pt}$
larger than 2.0 GeV/$c^{2}$.
To reduce this contamination, we calculate the `minimum $P_t$' by
allowing the locations of the IP and the vertex to float to any pair
of locations within the respective one-sigma error ellipsoids.
We substitute the minimum $P_t$ in Equation~(\ref{eqn:masspt}) and
use the modified $M_{Pt}$ as our
variable for selecting $B$ hadrons~\cite{sldrb98}.
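The following sketch illustrates the idea, not the actual SLD implementation: the IP and vertex positions are displaced within their $1\sigma$ error ellipsoids, parametrised via Cholesky factors of assumed covariance matrices, and $P_t$ is minimised numerically:
\begin{verbatim}
# Minimal sketch of the 'minimum Pt' construction (assumed inputs:
# total vertex momentum p_tot, IP and vertex positions, covariances).
import numpy as np
from scipy.optimize import minimize

def min_pt(p_tot, ip, v, cov_ip, cov_v):
    L_ip, L_v = np.linalg.cholesky(cov_ip), np.linalg.cholesky(cov_v)
    def pt(x):                                   # x: 6 unit-ball coords
        a, b = x[:3], x[3:]
        d = (v + L_v @ b) - (ip + L_ip @ a)      # displaced flight axis
        u = d / np.linalg.norm(d)
        return np.linalg.norm(p_tot - (p_tot @ u) * u)
    cons = [{'type': 'ineq', 'fun': lambda x, s=s: 1 - x[s] @ x[s]}
            for s in (slice(0, 3), slice(3, 6))]  # stay inside 1 sigma
    return minimize(pt, np.zeros(6), constraints=cons).fun
\end{verbatim}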
Figure~\ref{mptm} shows the distribution of the $M_{Pt}$
for the 32,492 hemispheres in the data sample with a found secondary
vertex, and the corresponding simulated distribution (histogram).
$B$ hadron candidates are selected by requiring
$M_{Pt}$ $>$ 2.0 GeV/$c^{2}$. We further required
$M_{Pt} \leq 2 \times M_{ch}$ to reduce the contamination from fake
vertices in light quark events~\cite{sldrb98}.
A total of 19,604 hemispheres are selected,
with an estimated efficiency for selecting a true $B$-hemisphere
of 40.1\%, and a sample purity of 98.2\%. The contributions from
light-flavor events in the sample are 0.15\% for primary u,d and s events
and 1.6\% for c events.
\subsection{$B$ Hadron Energy Measurement}
The energy of each $B$ hadron, $E_{B}$, can be expressed as
the sum of the reconstructed-vertex energy, $E_{ch}$,
and the energy of those particles not associated with the vertex, $E_{0}$.
We can write $E_{0}$ as
\begin{equation}
E_{0}^{2} = M_{0}^{2} + P_{t}^{2} + P_{0l}^{2}
\label{eqn:e0}
\end{equation}
The two unknowns, $M_{0}$ and $P_{0l}$, must be found in order
to obtain $E_{0}$.
One kinematic constraint can be obtained by imposing the $B$ hadron mass
on the vertex, $M_{B}^{2}=E_{B}^{2}-P_{B}^{2}$, where
$P_{B}=P_{chl}+P_{0l}$ is the total momentum of the $B$ hadron,
and $P_{chl}$ is the momentum component of the vertex-associated
tracks along the vertex axis. From Equation~(\ref{eqn:vertexmass}) we
derive the following inequality,
\begin{equation}
\sqrt{M_{ch}^2 + P_{t}^2} + \sqrt{M_{0}^2 + P_{t}^2} \leq M_{B},
\label{massineq}
\end{equation}
where equality holds in the limit where
both $P_{0l}$ and $P_{chl}$ vanish in the $B$ hadron {\em rest} frame.
Equation~(\ref{massineq}) effectively sets an upper bound on
$M_{0}$, and a lower bound is given by zero:
\begin{equation}
0\leq M_{0}^{2}\leq M_{0max}^{2},
\end{equation}
where
\begin{equation}
M_{0max}^{2}=M_{B}^2 - 2M_{B}\sqrt{M_{ch}^2+P_{t}^2} + M_{ch}^2.
\label{m0maxeqn}
\end{equation}
Since $M_{0}$ is bounded from both above and below, we expect to obtain
a good estimate of $M_{0}$, and therefore of the $B$ hadron energy,
when $M_{0max}^{2}$ is small.
We have used our simulation to study this issue.
Assuming $M_{B}=$ 5.28 GeV/$c^{2}$, the true value of
$M_{0}$ tends to cluster near its maximum value $M_{0max}$.
Figure~\ref{m0max_m0} shows
the relative deviation of $M_{0max}$ from $M_{0true}$ for all $B$ hadrons.
Although approximately 20\% of the $B$ hadrons are $B^{0}_{s}$ and
$\Lambda_{b}$ which have larger
masses, the values of $M_{0max}$ obtained using $M_{B}$=5.28 GeV/$c^{2}$
in Equation~(\ref{m0maxeqn})
are typically within about 10\% of $M_0$.
The distribution of the reconstructed $M_{0max}^{2}$ for vertices in
the selected $B$ hadron sample is shown in Figure~\ref{m0max_after}.
The simulation indicates that the
non-$b\bar{b}$ background is concentrated at high $M_{0max}^{2}$; this is because
most of the light flavor vertices have small $M_{Pt}$
and therefore, due to the strong negative correlation between
$M_{Pt}$ and $M_{0max}$,
large $M_{0max}$. The negative tail in Figure~\ref{m0max_after}
is an effect of detector resolution, and
the Monte Carlo simulation shows good agreement with the data.
Because $M_{0}$ peaks near $M_{0max}$,
we set $M_{0}^{2}$ = $M_{0max}^{2}$ if $M_{0max}^{2}$ $\geq$0, and
$M_{0}^{2}$ = 0 if $M_{0max}^{2}$ $<$0.
We then calculate $P_{0l}$:
\begin{equation}
P_{0l} = \frac {\textstyle M_{B}^{2}-(M_{ch}^{2}+P_{t}^{2})-(M_{0}^{2}+P_{t}^{2})}{\textstyle 2 (M_{ch}^{2}+P_{t}^{2})} P_{chl},
\label{eqn:p0l}
\end{equation}
and hence $E_{0}$ (Equation~(\ref{eqn:e0})).
We then divide the reconstructed $B$ hadron energy,
$E_{B}^{rec}=E_{0}+E_{ch}$, by the beam energy, $E_{beam}=\sqrt{s}/2$,
to obtain the reconstructed scaled $B$ hadron energy,
$x_{B}^{rec}=E_{B}^{rec}/E_{beam}$.
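A minimal sketch of this reconstruction chain, Equations~(\ref{eqn:e0}), (\ref{m0maxeqn}) and~(\ref{eqn:p0l}), assuming the vertex quantities $M_{ch}$, $P_t$, $P_{chl}$ and $E_{ch}$ have already been measured and taking $M_B=5.28$ GeV/$c^2$:
\begin{verbatim}
# Minimal sketch of the inclusive B energy reconstruction.
import math

M_B = 5.28  # GeV/c^2, assumed B hadron mass

def scaled_b_energy(m_ch, pt, p_chl, e_ch, e_beam):
    # Upper bound on the missing mass, Eq. (6)
    m0max2 = M_B**2 - 2.0 * M_B * math.sqrt(m_ch**2 + pt**2) + m_ch**2
    m02 = max(m0max2, 0.0)          # set M0^2 = M0max^2, clipped at zero
    # Longitudinal momentum of the missing system, Eq. (7)
    p0l = (M_B**2 - (m_ch**2 + pt**2) - (m02 + pt**2)) \
          / (2.0 * (m_ch**2 + pt**2)) * p_chl
    e0 = math.sqrt(m02 + pt**2 + p0l**2)   # missing energy, Eq. (3)
    return (e_ch + e0) / e_beam            # x_B^rec
\end{verbatim}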
The resolution of $x_{B}^{rec}$ depends on both $M_{0max}^{2}$
and the true $x_{B}$, $x_{B}^{true}$. Vertices in the negative tail of
the $M_{0max}^2$ distribution that have $M_{0max}^{2}<-1.0 (GeV/c^{2})^{2}$
are often poorly reconstructed and are not used in further analysis.
Vertices with small values of $|M_{0max}^{2}|$ are typically reconstructed
with better resolution and
an upper cut on $M_{0max}^{2}$ is hence applied.
For an $x_B$-independent cut,
the efficiency for selecting $B$ hadrons is roughly linear in $x_{B}^{true}$.
In order to obtain an approximately $x_B$-independent selection efficiency
we choose the following upper cut:
\begin{equation}
M_{0max}^{2} < \left\{ 1.1+0.006 (E_{beam}-E_{B}^{rec})+
3.5 exp[-(E_{B}^{rec}-5.5)/3.5] \right\}^2,
\label{eqn:m0maxcut}
\end{equation}
where the two terms that depend on the reconstructed energy $E_{B}^{rec}$
increase the efficiency at lower $B$ hadron energy.
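The cut boundary of Equation~(\ref{eqn:m0maxcut}) translates directly into code (a minimal sketch; energies in GeV):
\begin{verbatim}
# Minimal sketch of the energy-dependent upper cut on M0max^2, Eq. (8).
import math

def m0max2_cut(e_rec, e_beam):
    bound = (1.1 + 0.006 * (e_beam - e_rec)
             + 3.5 * math.exp(-(e_rec - 5.5) / 3.5))
    return bound**2  # accept vertex if m0max2 < this value
\end{verbatim}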
Only about 0.7\% of the selected vertices are from light-flavor events,
but they are concentrated in the lowest energy bin. To further remove
this background, a vertex is required to contain at least 3 quality
tracks with a normalized impact
parameter greater than 2. This eliminates almost all of the uds-event
background and cuts the charm background
by about 20\% overall and 43\% in the few lowest energy bins.
This cut helps to reduce the dependence of the reconstructed $B$ hadron
energy distribution on the light flavor simulation
in the low energy region, which is a key step towards finding the correct
shape of the $B$ hadron energy distribution at low energies.
Figure~\ref{m0max_before} shows the distribution of $M_{0max}^2$ after all
these cuts; the data and Monte Carlo simulation are in good agreement.
A total of 1920 vertices in the data for 1996-97 satisfy all these selection
cuts.
The overall efficiency for selecting $B$ hadrons is 3.9\% and the estimated
$B$ hadron purity is 99.5\%.
The efficiency as a function of $x_{B}^{true}$ is
shown in Figure~\ref{efficiency}. The dependence is rather weak
except for the lowest $x_B$ region; the efficiency is substantial,
about 1.7\% even just above the kinematic threshold for $B$ energy.
We examine the $B$-energy resolution of this technique.
The distribution of the normalized difference
between the true and reconstructed $B$ hadron energies,
$(x_{B}^{rec}-x_{B}^{true})/x_{B}^{true}$, for Monte Carlo events,
is fitted by a double Gaussian, resulting in a core width
(the width of the narrower Gaussian) of 10.4\% and a tail width
(the width of the wider Gaussian) of 23.6\% with a core fraction of 83\%.
Figure~\ref{sigmavsx} shows the core and tail widths as a function
of $x_{B}^{true}$. In order to compare the widths from different $x_B$ bins,
we fix the ratio between core and tail fractions to that obtained in the
overall fit above. The $x_B$-dependence of the resolution is weak,
indicating that the absolute resolution on $x_B$, $x_B^{rec}-x_B^{true}$,
is very good at low $B$ energy, which is an advantage of this energy
reconstruction technique.
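A minimal sketch of such a double-Gaussian fit, assuming both Gaussians are centred at zero and that the array \texttt{dev} holds the normalized differences for simulated decays:
\begin{verbatim}
# Minimal sketch of the double-Gaussian resolution fit.
import numpy as np
from scipy.optimize import curve_fit

def double_gauss(x, n, f_core, s_core, s_tail):
    g = lambda s: np.exp(-0.5 * (x / s)**2) / (s * np.sqrt(2 * np.pi))
    return n * (f_core * g(s_core) + (1 - f_core) * g(s_tail))

# dev: array of (x_rec - x_true)/x_true from the simulation (assumed given)
counts, edges = np.histogram(dev, bins=60, range=(-0.6, 0.6))
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(double_gauss, centers, counts,
                    p0=[counts.sum() * (edges[1] - edges[0]),
                        0.8, 0.10, 0.25])
n, f_core, s_core, s_tail = popt  # core/tail widths and core fraction
\end{verbatim}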
Figure~\ref{xbrec} shows the distribution of the reconstructed scaled
$B$ hadron energy for the data, $D^{data}(x_{B}^{rec})$, and for the
Monte Carlo simulation, $D^{MC}(x_{B}^{rec})$.
The small non-{$b\bar{b}$}\chkspace background, the high $B$ selection efficiency over
the full kinematic coverage, and the good energy resolution
combine to give a much improved sensitivity of the data to the underlying
true {\em shape} of the $B$ energy distribution (see next section).
The event generator used in our simulation is based on a perturbative QCD
`parton shower' for production of quarks and gluons, together with the
phenomenological Peterson function~\cite{pete}
(Table~\ref{table:fragmodels}) to account for the fragmentation of $b$ and $c$
quarks into $B$ and $D$ hadrons, respectively,
within the iterative Lund string hadronisation mechanism~\cite{jetset};
this simulation yields a `generator-level'
primary $B$ hadron energy distribution with
$<x_{B}>$ = 0.693\footnote{We used a value of the Peterson function
parameter $\epsilon_b$ = 0.006~\cite{sldrb}.}.
It is apparent that this simulation does not reproduce the data well
(Figure~\ref{xbrec}); the $\chi^2$ for the comparison is 62
for 16 bins\footnote{We exclude several bins with very few events from
the comparison; see Section~\ref{subsec:model} for details.}.
The distribution of the non-$b\bar{b}$ background,
$S(x_{B}^{rec})$, is also shown in Figure~\ref{xbrec}.
The background is subtracted bin-by-bin from the $D^{data}(x_B^{rec})$
before we proceed to test various fragmentation models.
\section{The Shape of the $B$ Hadron Energy Distribution}
\label{sec:shape}
Given the raw reconstructed $B$ energy distribution in the
data shown in Figure~\ref{xbrec}, there are several ways of estimating the
true underlying $B$ energy distribution. Here we take
two approaches, each described in a subsection.
In the first part, we test several $b$ fragmentation models,
$f(z,\beta)$ embedded within Monte Carlo generators, where $z$ is an
internal, experimentally inaccessible variable, corresponding roughly
to the fraction of the momentum of the fragmentating $b$ quark carried by
the resulting $B$ hadron, and $\beta$ is the set of parameters associated
with the model in question.
In the second part, we test several functional forms for the distribution
of $x_B$ itself, $f(x_B,\lambda)$, where $\lambda$ represents the set of
parameters associated with each functional form.
\subsection{Tests of $b$ Quark Fragmentation Models $f(z,\beta)$}
\label{subsec:model}
We first consider testing models of $b$ quark fragmentation.
Since the fragmentation functions for various models are
usually functions of an experimentally inaccessible variable
(e.g. $z=(E+p_{\|})_{H}/(E+p_{\|})_Q$ or $z = {p_{\|}}_H / {p_{\|}}_Q$ ),
it is necessary to use a Monte Carlo generator
to generate events according to a given input heavy
quark fragmentation function $f(z,\beta)$, where $\beta$ represents the
set of parameters.
We consider the phenomenological models of
the Lund group~\cite{lund}, Bowler~\cite{bowler},
Peterson {\it et al.}\chkspace~\cite{pete} and Kartvelishvili {\it et al.}\chkspace~\cite{kart}.
We also consider the perturbative QCD calculations of Braaten
{\it et al.}\chkspace (BCFY)~\cite{bcfy}, and of Collins and Spiller (CS)~\cite{collins}.
We use the JETSET~\cite{jetset} parton shower Monte Carlo and
each fragmentation model in question to generate the simulated events
without detector simulation.
Table~\ref{table:fragmodels} contains a list of the models.
In addition, we test the UCLA~\cite{ucla} fragmentation model, which
has fixed parameters. For $b$ fragmentation, we also test
the HERWIG~\cite{herwig} event generator using both possible values
of the parameter $cldir$, namely $0$ and $1$.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Model & $f(z,\beta)$ & Reference \\
\hline
BCFY & $\frac{\textstyle z(1-z)^{2}}{\textstyle [1-(1-r)z]^{6}}[3+{\sum_{i=1}^{4} (-z)^{i}f_{i}(r)}]$ & \cite{bcfy} \\
Bowler & $\frac{\textstyle 1}
{\textstyle z^{(1+r_{b}bm_{\perp}^{2})}}(1-z)^{a}exp(-bm_{\perp}^{2}/z)$
& \cite{bowler} \\
CS &$ (\frac{\textstyle 1-z}{\textstyle z}+\frac{\textstyle (2-z)\epsilon_{b}}{\textstyle 1-z})
(1+z^{2})(1-\frac{\textstyle 1}{\textstyle z}-\frac{\textstyle \epsilon_{b}}{\textstyle 1-z})^{-2}$ & \cite{collins} \\
Kart. & $z^{\alpha_{b}}(1-z)$ & \cite{kart} \\
Lund & $\frac{\textstyle 1}{\textstyle z}(1-z)^{a}exp(-bm_{\perp}^{2}/z)$
& \cite{lund} \\
Peterson & $\frac{\textstyle 1}{\textstyle z}(1-\frac{\textstyle 1}{\textstyle z}-\frac{\textstyle \epsilon_{b}}{\textstyle 1-\textstyle z})^{-2}$ & \cite{pete} \\
\hline
\end{tabular}
\caption{\label{table:fragmodels}
$b$ quark fragmentation models used in comparison with the data.
For the BCFY model, $f_{1}(r)~=~3(3-4r)$,
$f_{2}(r)~=~12-23r+26r^{2}$, $f_{3}(r)~=~(1-r)(9-11r+12r^{2})$, and
$f_{4}(r)~=~3(1-r)^{2}(1-r+r^{2})$.
}
\end{center}
\end{table}
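For orientation, the mean of any of these forms can be obtained by numerical normalisation; the following sketch does this for the Peterson function with $\epsilon_b=0.006$, the value used in our default simulation (note this is the mean of $f(z)$ itself, not of the resulting $x_B$ distribution):
\begin{verbatim}
# Minimal sketch: normalise the Peterson form and compute its mean.
import numpy as np

eps_b = 0.006
z = np.linspace(1e-4, 1.0 - 1e-4, 200000)
f = 1.0 / (z * (1.0 - 1.0/z - eps_b/(1.0 - z))**2)
f /= np.trapz(f, z)              # normalise to unit area
print(np.trapz(z * f, z))        # mean of f(z); roughly 0.8 here
\end{verbatim}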
In order to make a consistent comparison of each model
with the data we adopt the following procedure. For each model,
starting values of the arbitrary parameters, $\beta$, are assigned
and the corresponding fragmentation function $f(z,\beta)$ is used
along with the JETSET Monte Carlo to produce the corresponding
scaled primary $B$ hadron energy distribution, $D^{MC}(x_{B}^{true})$
in the MC-generated {$b\bar{b}$}\chkspace event sample, {\it before} simulation of the
detector. Then each simulated $B$ hadron is weighted according to its
true $B$ hadron energy, $x_B^{true}$; the weight is determined by the
ratio of the generated $B$ hadron energy distribution,
$D^{MC}(x_{B}^{true})$, to that of our default simulation
$D^{default}(x_{B}^{true})$.
After simulation of the detector, application of the analysis cuts and
background subtraction, the resulting weighted distribution of reconstructed
$B$ hadron energies, $D^{MC}(x_{B}^{rec})$, is then compared with the
background-subtracted data distribution and the $\chi^2$ value, defined as
\begin{equation}
\chi^2 = \sum_{i=1}^{N} \left( \frac{\textstyle N_{i}^{data} - r N_{i}^{MC} }
{\textstyle \sigma_{i}} \right)^{2}
\label{eqn:chisq}
\end{equation}
is calculated, where $N$ is the number of bins to be used in the comparison,
$N_{i}^{data}$ is the number of entries
in bin $i$ in the data distribution, and $N_{i}^{MC}$ is the number of entries
in bin $i$ in the simulated distribution\footnote{$r$ is the factor by which
the total number of entries
in the simulated distribution is scaled to the number of entries in
the data distribution; $r$ $\simeq$ 1/12.}.
$\sigma_{i}$ is the statistical error on the deviation of the
observed number of entries for the data from the expected number of
entries in bin $i$, which can be expressed as
\begin{equation}
\sigma_i^2 = \left( \sqrt{rN_i^{MC}} \right)^2 +
\left( r\sqrt{N_i^{MC}} \right)^2,
\label{eqn:error}
\end{equation}
where $ \left( \sqrt{rN_{i}^{MC}} \right)^2$ is the expected statistical
variance on the observed data number of entries in bin $i$,
assuming the model being tested is correct, and
$ \left( r\sqrt{N_{i}^{MC}} \right)^2 $ is the statistical variance on
the expected number of entries in bin $i$. Since the $\chi^{2}$-test is
not a statistically effective test for bins with a very small number of
entries, the third, the fourth, and the last three bins in Figure~\ref{xbrec}
are excluded from the comparison.
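For illustration, Equations~(\ref{eqn:chisq}) and~(\ref{eqn:error}) can be coded as follows (a minimal sketch; the histogram arrays and the mask of bins used in the comparison are assumed inputs):
\begin{verbatim}
# Minimal sketch of the chi^2 between data and scaled MC histograms.
import numpy as np

def chi2(n_data, n_mc, use_bin):
    r = n_data.sum() / n_mc.sum()     # MC scaling factor, r ~ 1/12
    sigma2 = r * n_mc + r**2 * n_mc   # Eq. (10): data + MC variance
    mask = np.asarray(use_bin, dtype=bool)  # excludes low-statistics bins
    return np.sum((n_data[mask] - r * n_mc[mask])**2 / sigma2[mask])
\end{verbatim}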
We vary the values of the set of parameters $\beta$ and repeat the
above procedure. The minimum $\chi^2$ is found by scanning through
the input parameter space, yielding
a set of parameters which give an optimal description of the reconstructed
data by the fragmentation model in question.
Each of the nine plots in Figure~\ref{fig:fragmodel}
shows the background-subtracted distribution of
reconstructed $B$ hadron energy for the data (points) and
the respective $B$ energy distribution (histogram) resulting
{\em either} from the optimised input fragmentation function $f(z)$ embedded
within the JETSET parton shower simulation, {\em or} from the predictions
of the HERWIG event-generator and the UCLA fragmentation model.
Data points excluded from the fit are
represented in Figure~\ref{fig:fragmodel} by open circles.
Table~\ref{table:modelresult} lists the results of the comparisons.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Model & $\chi^{2}/dof$ & Parameters & $\langle x_{B} \rangle$\\
\hline
JETSET + BCFY
& 83/16 & $r=0.085$ & 0.694 \\
JETSET + Bowler* & 17/15 & $a=1.5, b=1.5, (r_b=1)$ & 0.714 \\
JETSET + Collins and Spiller & 103/16 & $\epsilon_b=0.003$ & 0.691 \\
JETSET + Kartvelishvili* {\em et al.}
& 34/16 & $\alpha_b = 10.4$ & 0.711 \\
JETSET + Lund* & 17/15 & $a=2.0, b=0.5$ & 0.712 \\
JETSET + Peterson {\em et al.} & 62/16 & $\epsilon_{b}=0.006$ & 0.697 \\
HERWIG cldir=0 & 460/17 & $-$ & 0.632 \\
HERWIG cldir=1 & 94/17 & $-$ & 0.676 \\
UCLA* & 25/17 & $-$ & 0.718 \\
\hline
\end{tabular}
\caption{\label{table:modelresult}
Results of fragmentation model tests for JETSET + fragmentation models,
the HERWIG model and the UCLA model. Minimum $\chi^{2}$,
number of degrees of freedom, corresponding parameter
values, and the mean value of the corresponding $B$ energy distribution
are listed. * indicates models used to correct the data in Section~\ref{sec:correct}.
}
\end{center}
\end{table}
We conclude that with our resolution and our current data sample, we
are able to distinguish between several fragmentation models.
Within the context of the JETSET Monte Carlo, the Lund and Bowler
models are consistent with the data with $\chi^2$ probability of
32\% for each, the Kartvelishvili model is marginally consistent with
the data, while the Peterson, the BCFY and the CS models are
found to be inconsistent with the data.
The UCLA model is consistent with the data to a level of 10\%
$\chi^2$ probability. The HERWIG model with $cldir=0$ is confirmed to
be much too soft. Using $cldir=1$ results in a substantial improvement,
but it is still inconsistent with the data.
\subsection{Tests of Functional Forms $f(x_B,\lambda)$}
\label{subsec:form}
We then consider the more general question of what functional forms
of the $B$ energy
distribution, $f(x_B,\lambda)$, can be used as estimates of the true
underlying $B$ energy distribution.
In particular, we would like to test a wide variety of functional forms
and ask how many different forms are consistent with the data. Each
consistent functional form will add to the list of our estimates of the
true underlying $B$ energy distribution.
For convenience we consider the functional forms of
the BCFY, Collins and Spiller, Kartvelishvili, Lund, and Peterson
groups in the variable $x_B$.
In addition we consider {\it ad hoc}\chkspace generalisations of the Peterson function (F),
an 8th-order polynomial and a `power' function. These functions are
listed in Table~\ref{table:functionalform}.
Each function vanishes at $x_{B}=0$ and $x_{B}=1$.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Function & $f(x_B,\lambda)$ & Reference \\
\hline
F & $\frac{\textstyle (1+b(1-x_B))}{\textstyle x_B}(1-\frac{\textstyle c}{\textstyle x_B}-\frac{\textstyle d}{\textstyle 1-x_B})^{-2}$ & \cite{aleph95} \\
8th-order Polynomial & $x_B(1-x_B)(x_B-x_B^0)(1+{\sum_{i=1}^{5} p_{i}x_B^{i}})$ & (see text) \\
Power & $x_B^{\alpha}(1-x_B)^{\beta}$ & (see text) \\
\hline
\end{tabular}
\caption{\label{table:functionalform}
\small
$B$ energy functional forms used in comparison with the data.
A polynomial function and a power function are included
(see text for discussion). $x_B^0$ is the low kinematic threshold for
$B$ energy. For BCFY, CS, Kartvelishvili, Lund, Peterson functional
forms, see Table~\ref{table:fragmodels}.
}
\vspace{-0.5cm}
\end{center}
\end{table}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Function & $\chi^{2}/dof$ & Parameters & $\langle x_{B} \rangle$\\
\hline
F1* & 14/15 & $c=0.838\pm0.018$ & 0.714$\pm$0.005 \\
& & $d=0.022\pm0.002$ & \\
F2* & 21/15 & $c=0.896\pm0.033$ & 0.717$\pm$0.005 \\
& & $d=0.040\pm0.003$ & \\
BCFY & 62/16 & $r=0.240\pm0.009$ & 0.709$\pm$0.005 \\
Collins and Spiller
& 75/16 & $\epsilon_{b}=0.043\pm0.005$ & 0.711$\pm$0.005 \\
Kartvelishvili {\em et al.}
& 68/16 & $\alpha_{b}=4.16\pm0.11$ & 0.721$\pm$0.004 \\
Lund & 115/15 & $a=2.30\pm0.12$ & 0.721$\pm$0.005 \\
& & $bm_{\perp}^{2}=0.50\pm0.07$ & \\
Peterson {\em et al.}* & 28/16 &
$\epsilon_{b}=0.036\pm0.002$ & 0.713$\pm$0.005 \\
Polynomial*
& 15/12 & $p_{1}=-10.76\pm0.16$ & 0.709$\pm$0.005 \\
& & $p_{2}=45.74\pm0.28$ & \\
& & $p_{3}=-93.60\pm0.34$ & \\
& & $p_{4}=92.01\pm0.37$ & \\
& & $p_{5}=-34.53\pm0.27$ & \\
Power
& 68/15 & $\alpha=4.27\pm0.25$ & 0.720$\pm$0.005 \\
& & $\beta=1.05\pm0.10$ & \\
\hline
\end{tabular}
\caption{\label{table:formresult}
Results of the $\chi^{2}$ fit of fragmentation functions to the reconstructed
$B$ hadron energy distribution after background subtraction. The minimum
$\chi^{2}$ value, the number of degrees of freedom, the corresponding
parameter values, and the mean value of the corresponding $B$ energy
distribution are listed. Errors are statistical only.
* indicates functions used to correct the data in Section~\ref{sec:correct}.
}
\end{center}
\end{table}
For each functional form,
a testing procedure similar to that described in
subsection~\ref{subsec:model} is applied. The optimised fitting
parameters $\lambda$
and the minimum $\chi^2$ values are listed
in Table~\ref{table:formresult}. The corresponding $D^{MC}(x_{B}^{rec})$
are compared with the data in Figure~\ref{fig:form}.
Two sets of optimised parameters are found for the generalised
Peterson function F to describe the data.
`F1', obtained by setting the parameter $b$ (shown in
Table~\ref{table:functionalform}) to infinity,
behaves like $x_B$ as $x_B$ {$\rightarrow$}\chkspace 0 and $(1-x_B)^3$ as $x_B$ {$\rightarrow$}\chkspace 1 and yields
the best $\chi^2$ probability of 53\%;
`F2', obtained by setting $b$ to zero, has a $\chi^2$ probability of 13\%.
A constrained polynomial of at least 8th-order is needed to obtain
a $\chi^{2}$ probability greater than 0.1\%.
The Peterson functional form marginally reproduces the data with a
$\chi^2$ probability of about 3\%.
The remaining functional forms are found to be inconsistent with our data.
The widths of the BCFY and CS functions are too large to
describe the data; Kartvelishvili, Lund and the `power'
functional form vanish too fast as $x_B$ approaches zero.
We conclude that, within our resolution and with our
current data sample, we are able to distinguish between some of these
functional forms. But most importantly, consistent functional forms
will help us evaluate the uncertainty on
the true $B$ energy distribution.
\section{Correction of the $B$ Energy Distribution}
\label{sec:correct}
In order to compare our results with those from other experiments and
potential future theoretical predictions it is
necessary to correct the reconstructed scaled $B$ hadron energy distribution
$D^{data}(x_{B}^{rec})$ for the
effects of non-$B$ backgrounds, detector acceptance, event selection and
analysis bias, and initial-state radiation, as well as for bin-to-bin
migration effects caused by the finite resolution of the detector and the
analysis technique.
Due to the known rapid variation of the yet-unknown true $B$ energy
distribution at large $x_B$, {\em any} correction procedure will
necessarily be more or less model-dependent.
We choose a method that explicitly
evaluates this model-dependence and gives a very good estimate of the
true energy distribution using all of the above models or functional
forms that are at least marginally consistent with the data.
We apply a $25\times25$ matrix unfolding procedure
to $D^{data}(x_{B}^{rec})$ to obtain an estimate of the true distribution
$D^{data}(x_{B}^{true})$:
\vspace{-0.12cm}
\begin{eqnarray}
D^{data}(x_{B}^{true})\quad=\quad \epsilon^{-1}(x_{B}^{true}) \cdot
E(x_{B}^{true},x_{B}^{rec}) \cdot (D^{data}(x_{B}^{rec})
-S(x_{B}^{rec}))
\label{eqn:unfold}
\vspace{-0.5cm}
\end{eqnarray}
where $S$ is a vector representing the background contribution, $E$ is a
matrix to correct for bin-to-bin migrations, and $\epsilon$ is
a vector representing the efficiency for selecting true $B$ hadron
decays for the analysis.
The matrices $S$, $E$ and $\epsilon$ are calculated from our MC
simulation;
the matrix $E$ incorporates a
convolution of the input fragmentation function with the resolution of the
detector. $E(i,j)$ is the number of vertices with $x_{B}^{true}$ in bin $i$
and $x_{B}^{rec}$ in bin $j$, normalized by the total number of vertices
with $x_{B}^{rec}$ in bin $j$.
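Schematically, the unfolding of Equation~(\ref{eqn:unfold}) proceeds as in the following sketch (the Monte Carlo pairs $(x_B^{true},x_B^{rec})$, their weights and the bin edges are assumed given):
\begin{verbatim}
# Minimal sketch of the matrix unfolding of Eq. (11).
import numpy as np

def build_migration_matrix(x_true, x_rec, w, edges):
    # E[i, j] = N(true bin i, rec bin j) / N(rec bin j)
    e, _, _ = np.histogram2d(x_true, x_rec, bins=[edges, edges], weights=w)
    col = e.sum(axis=0)
    return np.divide(e, col, out=np.zeros_like(e), where=col > 0)

def unfold(d_rec, s_bkg, eff, e_mat):
    # D_true = eff^-1 * E * (D_rec - S), efficiency applied bin by bin
    return e_mat @ (d_rec - s_bkg) / eff
\end{verbatim}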
We evaluate the matrix $E$ using the Monte Carlo simulation weighted
according to an input generator-level {\em true} $B$ energy
distribution found to be consistent with the data in
Section~\ref{sec:shape}. We have seen that several $B$ energy
distributions can reproduce the data.
We consider in turn each of these eight consistent distributions,
using the optimised parameters listed in Table~\ref{table:modelresult}
and~\ref{table:formresult}.
The matrix $E$ is then evaluated by examining
the population migrations of true $B$ hadrons between bins
of the input scaled $B$ energy, $x_{B}^{true}$, and
the reconstructed scaled $B$ energy, $x_{B}^{rec}$.
Using each $D^{MC}(x_{B}^{true})$, the data distribution
$D^{data}(x_{B}^{rec})$ is then unfolded
according to Equation~(\ref{eqn:unfold}) to yield $D^{data}(x_{B}^{true})$,
which is shown for each input fragmentation function in Figure~\ref{overlay}.
It can be seen that the shapes of $D^{data}(x_{B}^{true})$ differ
systematically among the input $b$ quark fragmentation models
and the assumed $B$ energy functional forms.
These differences are used to assign systematic errors.
Figure~\ref{average} shows the final corrected $x_{B}$
distribution $D(x_{B})$, which is the bin-by-bin average of
the eight unfolded distributions,
where the inner error bar represents the statistical error
and the outer error bar
is the sum in quadrature of the r.m.s.\ of the eight unfolded distributions
and the statistical error within each bin.
Since two of the eight functions (the Kartvelishvili model and the
Peterson functional form) are only in marginal agreement with the data,
and the 8th-order polynomial has a slightly unphysical behavior
near $x_B=1$, this r.m.s. may be considered to be a rather reasonable
envelope within which the true $x_B$
distribution is most likely to vary. The model dependence for this
analysis is significantly smaller than those of previous direct $B$
energy measurements, indicating
the enhanced sensitivity of our data to the underlying true energy
distribution.
\section{Systematic Errors}
\label{sec:sys}
We have considered sources of systematic uncertainty that potentially affect
our measurement of the $B$ hadron energy distribution.
These may be divided into uncertainties in
modelling the detector and uncertainties on
experimental measurements serving as
input parameters to the underlying physics modelling.
For these studies our standard simulation, employing
the Peterson fragmentation function, is used.
For each source of systematic error, the Monte Carlo distribution
$D^{MC}(x_B^{true})$ is re-weighted and then the resulting
Monte Carlo reconstructed distribution $D^{MC}(x_B^{rec})$ is
compared with the data $D^{data}(x_B^{rec})$
by repeating the fitting and unfolding procedures described in Section 4
and 5.
The differences in both the shape and the mean value of the $x_B^{true}$
distribution
relative to the standard procedure with nominal values of parameters
are considered.
Due to the strong dependence of our energy reconstruction technique
on charged tracks, the dominant systematic error is due to the
discrepancy in the charged track transverse momentum resolution between
the Monte Carlo and the data.
We evaluate this conservatively by taking the
full difference between the nominal results and results using
a resolution-corrected Monte Carlo event sample.
The difference between the measured and simulated charged track multiplicity
as a function of cos$\theta$ and momentum is attributed to an unsimulated
tracking inefficiency correction. We use a random track-tossing
procedure to evaluate the difference in our results.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|r|}
\hline
Source & Variation & $\delta$ $\langle x_B \rangle$ \\
\hline
{\bf Monte Carlo statistics} & & {\bf 0.0011} \\
\hline
Tracking efficiency correction & on/off & 0.0022\\
Track impact parameter & on/off & 0.0012 \\
Track polar angle & 2 mrad & $-$0.0009 \\
Track $1/P_\perp$ & 0.0017 & $-$0.0054 \\
Hadronic event selection & standard & 0.0005 \\
\hline
{\bf Total Detector Systematics} & & {\bf 0.0061} \\
\hline
$B^0$ mass effect & $\pm$2$\sigma$& 0.0001 \\
$B$ lifetimes & $\pm\sigma$& 0.0002 \\
$B^+/B^0/B^0_s/\Lambda_b$ production & $\pm$2$\sigma$& 0.0010 \\
$B$ decay fraction & $\pm$2$\sigma$ & 0.0006 \\
$B$ decay $<n_{ch}>$& 5.3$\pm$0.3 & 0.0012 \\
$D$ lifetimes & $\pm$$\sigma$ & 0.0002 \\
$D$ decay $<n_{ch}>$& $\pm$$\sigma$ & 0.0005 \\
$D \rightarrow K^0$ multiplicity & $\pm$$\sigma$ & 0.0013 \\
$D \rightarrow$ no $\pi^0$ fraction & $\pm$$\sigma$ & 0.0005 \\
$g \rightarrow b\bar{b}$ & (0.31$\pm$0.15)$\%$ & 0.0002 \\
$g \rightarrow c\bar{c}$ & (2.4$\pm$1.2)$\%$ & 0.0008 \\
$K^0$ production& 0.66$\pm$0.07 trks&0.0009 \\
$\Lambda$ production& 0.12$\pm$0.01 trks&0.0002 \\
$R_b$& 0.2170$\pm$0.0009&$<$0.0001 \\
$R_c$& 0.1733$\pm$0.0048&$<$0.0001 \\
Model dependence & & 0.0020 \\
\hline
{\bf Total Systematics} & & {\bf 0.0068}\\
\hline
\end{tabular}
\caption{
\label{table:syst}
Sources of systematic uncertainty and the resulting errors on $\langle x_B \rangle$.
}
\end{center}
\end{table}
\indent
A large number of measured quantities relating to the production and decay
of charm and bottom hadrons are used as input to our simulation.
In {$b\bar{b}$}\chkspace events we have considered the uncertainties on:
the branching fraction for $Z^0$ {$\rightarrow$}\chkspace {$b\bar{b}$}\chkspace;
the rates of production of $B^{\pm}$, $B^0$ and $B^0_s$ mesons,
and $B$ baryons;
the lifetimes of $B$ mesons and baryons;
and the average $B$ hadron decay charged multiplicity.
In {$c\bar{c}$}\chkspace events we have considered the uncertainties on:
the branching fraction for $Z^0$ {$\rightarrow$}\chkspace {$c\bar{c}$}\chkspace;
the charmed hadron lifetimes,
the charged multiplicity of charmed hadron decays,
the production of $K^0$ from charmed hadron decays,
and the fraction of charmed hadron decays containing no $\pi^0$s.
We have also considered the rate of production of $s\bar{s}$ pairs in the jet fragmentation
process, and the production of secondary {$b\bar{b}$}\chkspace and {$c\bar{c}$}\chkspace from gluon splitting.
The world-average values~\cite{heavy,sldrb} of these quantities used in our
simulation, as well as the respective uncertainties, are listed in
Table~\ref{table:syst}. Most of these variations affect the
normalisation, but have very little effect on the shape or the mean value. In no
case do we find a variation that changes our conclusion about which
functions are consistent with the data. Systematic errors on the mean value
are listed in Table~\ref{table:syst}.
The model-dependence of the unfolding procedure is estimated by considering
the envelope of the unfolded results illustrated in Figure~\ref{average}.
Since eight functions provide an acceptable $\chi^2$ probability
in fitting to the data, in each bin of $x_{B}$ we calculated the
average value of these eight unfolded results as well as the r.m.s.
deviation. In each bin the average value is taken as our central
value and the r.m.s.\@ value is assigned as the unfolding uncertainty.
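A minimal sketch of this averaging, assuming the eight unfolded distributions are stacked in an $8\times n_{bins}$ array:
\begin{verbatim}
# Minimal sketch: central value and unfolding uncertainty per bin.
import numpy as np

def envelope(unfolded):
    central = unfolded.mean(axis=0)   # bin-by-bin average of 8 results
    spread = unfolded.std(axis=0)     # r.m.s. deviation per bin
    return central, spread
\end{verbatim}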
\indent
Other relevant systematic effects such as variation of
the event selection cuts and the assumed $B$ hadron mass are also
found to be very small. As a cross-check, we vary the $M_{0max}$
cut (Equation~(\ref{eqn:m0maxcut})) in selecting the final $B$ sample
within a large range and repeat the
analysis procedure. In each case, conclusions about the shape of the $B$
energy distribution hold. In each bin, all sources of systematic
uncertainty are added in quadrature to obtain the total systematic error.
\section{Summary and Conclusions}
We have used the excellent tracking and vertexing capabilities of SLD
to reconstruct the energies of $B$ hadrons in {$e^+e^-$}\chkspace {$\rightarrow$}\chkspace $Z^0$ events over
the full kinematic range by applying a new kinematic technique to
an {\em inclusive} sample of topologically reconstructed $B$ hadron
decay vertices. The overall $B$ selection efficiency of the
method is 3.9\%.
We estimate the resolution on the $B$ energy to be about 10.4\% for
roughly 83\% of the reconstructed decays. The energy resolution for
low energy $B$ hadrons is significantly better than previous measurements.
In order to get a good estimate of the model
dependence of the unfolded distribution,
the distribution of reconstructed scaled
$B$ hadron energy, $D^{data}(x^{rec}_{B})$, is
compared {\bf case 1)} with
predictions of {\em either} perturbative QCD and phenomenological
$b$ quark fragmentation models in the context of the JETSET
parton shower Monte Carlo,
{\em or} HERWIG and UCLA fragmentation models, and {\bf case 2)}
with a set of functional forms for the $B$ energy distribution.
In {\bf case 1)},
the Lund and the Bowler models are consistent with the data;
the model of Kartvelishvili {\it et al.}\chkspace is in marginal agreement with the data.
The models based on the
perturbative QCD calculations of Braaten {\it et al.}\chkspace, and of Collins and Spiller,
and the Peterson model are disfavored by the data. Although both
versions of the HERWIG model are excluded by the data, the new
version is very much improved.
The UCLA model describes the data reasonably well.
In {\bf case 2)}, four functional forms,
namely the two generalised Peterson functions F1 and F2,
the Peterson function, and a constrained 8th-order polynomial
are found to be consistent with the data.
The raw $B$ energy distribution is then corrected
for bin-to-bin migrations caused by the resolution of the method
and for selection efficiency to derive the energy distribution
of the weakly decaying $B$ hadrons produced in $Z^0$ decays.
Systematic uncertainties in the correction have been evaluated
and are found to be significantly smaller than those of previous
direct $B$ energy measurements. The final corrected $x_{B}$ distribution
$D^{data}(x_{B}^{true})$ is shown in Figure~\ref{average}.
The statistical and unfolding uncertainties are
indicated separately.
It is conventional to evaluate the mean of this $B$ energy
distribution, $<x_{B}>$.
For each of the eight functions providing a reasonable description of
the data (four from {\bf case 1)} and four from {\bf case 2)}), we
evaluate $<x_{B}>$ from the distribution that corresponds to the
optimised parameters;
these are listed in Table~\ref{table:modelresult} and
Table~\ref{table:formresult}. We take the average
of the eight values of $<x_{B}>$ as our central value, and
define the model-dependent uncertainty to be the r.m.s.\@ deviation
of these eight values.
All detector and physics modeling systematic errors are included.
We obtain
\begin{eqnarray}
<x_{B}>\quad=\quad 0.714\pm 0.005\,({\rm stat.})\pm 0.007\,({\rm syst.})\pm 0.002\,({\rm model}).
\label{eqn:average}
\end{eqnarray}
It can be seen that $<x_{B}>$ is relatively insensitive to the variety of
allowed forms of the shape of the fragmentation function $D(x_{B})$.
\section*{Acknowledgements}
We thank the personnel of the SLAC accelerator department and the
technical
staffs of our collaborating institutions for their outstanding efforts
on our behalf.
\vskip .5truecm
\small
\vbox{\footnotesize\renewcommand{\baselinestretch}{1}\noindent
$^*$Work supported by Department of Energy
contracts:
DE-FG02-91ER40676 (BU),
DE-FG03-91ER40618 (UCSB),
DE-FG03-92ER40689 (UCSC),
DE-FG03-93ER40788 (CSU),
DE-FG02-91ER40672 (Colorado),
DE-FG02-91ER40677 (Illinois),
DE-AC03-76SF00098 (LBL),
DE-FG02-92ER40715 (Massachusetts),
DE-FC02-94ER40818 (MIT),
DE-FG03-96ER40969 (Oregon),
DE-AC03-76SF00515 (SLAC),
DE-FG05-91ER40627 (Tennessee),
DE-FG02-95ER40896 (Wisconsin),
DE-FG02-92ER40704 (Yale);
National Science Foundation grants:
PHY-91-13428 (UCSC),
PHY-89-21320 (Columbia),
PHY-92-04239 (Cincinnati),
PHY-95-10439 (Rutgers),
PHY-88-19316 (Vanderbilt),
PHY-92-03212 (Washington);
The UK Particle Physics and Astronomy Research Council
(Brunel, Oxford and RAL);
The Istituto Nazionale di Fisica Nucleare of Italy
(Bologna, Ferrara, Frascati, Pisa, Padova, Perugia);
The Japan-US Cooperative Research Project on High Energy Physics
(Nagoya, Tohoku);
The Korea Research Foundation (Soongsil, 1997).}
\vfill
\eject
\section*{$^{**}$List of Authors}
\begin{center}
\baselineskip=.75\baselineskip
\mbox{Kenji Abe\unskip,$^{(19)}$}
\mbox{Koya Abe\unskip,$^{(31)}$}
\mbox{T. Abe\unskip,$^{(27)}$}
\mbox{I.Adam\unskip,$^{(27)}$}
\mbox{T. Akagi\unskip,$^{(27)}$}
\mbox{N. J. Allen\unskip,$^{(4)}$}
\mbox{W.W. Ash\unskip,$^{(27)}$}
\mbox{D. Aston\unskip,$^{(27)}$}
\mbox{K.G. Baird\unskip,$^{(15)}$}
\mbox{C. Baltay\unskip,$^{(37)}$}
\mbox{H.R. Band\unskip,$^{(36)}$}
\mbox{M.B. Barakat\unskip,$^{(14)}$}
\mbox{O. Bardon\unskip,$^{(17)}$}
\mbox{T.L. Barklow\unskip,$^{(27)}$}
\mbox{G. L. Bashindzhagyan\unskip,$^{(18)}$}
\mbox{J.M. Bauer\unskip,$^{(16)}$}
\mbox{G. Bellodi\unskip,$^{(21)}$}
\mbox{R. Ben-David\unskip,$^{(37)}$}
\mbox{A.C. Benvenuti\unskip,$^{(3)}$}
\mbox{G.M. Bilei\unskip,$^{(23)}$}
\mbox{D. Bisello\unskip,$^{(22)}$}
\mbox{G. Blaylock\unskip,$^{(15)}$}
\mbox{J.R. Bogart\unskip,$^{(27)}$}
\mbox{G.R. Bower\unskip,$^{(27)}$}
\mbox{J. E. Brau\unskip,$^{(20)}$}
\mbox{M. Breidenbach\unskip,$^{(27)}$}
\mbox{W.M. Bugg\unskip,$^{(30)}$}
\mbox{D. Burke\unskip,$^{(27)}$}
\mbox{T.H. Burnett\unskip,$^{(35)}$}
\mbox{P.N. Burrows\unskip,$^{(21)}$}
\mbox{A. Calcaterra\unskip,$^{(11)}$}
\mbox{D. Calloway\unskip,$^{(27)}$}
\mbox{B. Camanzi\unskip,$^{(10)}$}
\mbox{M. Carpinelli\unskip,$^{(24)}$}
\mbox{R. Cassell\unskip,$^{(27)}$}
\mbox{R. Castaldi\unskip,$^{(24)}$}
\mbox{A. Castro\unskip,$^{(22)}$}
\mbox{M. Cavalli-Sforza\unskip,$^{(33)}$}
\mbox{A. Chou\unskip,$^{(27)}$}
\mbox{E. Church\unskip,$^{(35)}$}
\mbox{H.O. Cohn\unskip,$^{(30)}$}
\mbox{J.A. Coller\unskip,$^{(5)}$}
\mbox{M.R. Convery\unskip,$^{(27)}$}
\mbox{V. Cook\unskip,$^{(35)}$}
\mbox{R. Cotton\unskip,$^{(4)}$}
\mbox{R.F. Cowan\unskip,$^{(17)}$}
\mbox{D.G. Coyne\unskip,$^{(33)}$}
\mbox{G. Crawford\unskip,$^{(27)}$}
\mbox{C.J.S. Damerell\unskip,$^{(25)}$}
\mbox{M. N. Danielson\unskip,$^{(7)}$}
\mbox{M. Daoudi\unskip,$^{(27)}$}
\mbox{N. de Groot\unskip,$^{(4)}$}
\mbox{R. Dell'Orso\unskip,$^{(23)}$}
\mbox{P.J. Dervan\unskip,$^{(4)}$}
\mbox{R. de Sangro\unskip,$^{(11)}$}
\mbox{M. Dima\unskip,$^{(9)}$}
\mbox{A. D'Oliveira\unskip,$^{(6)}$}
\mbox{D.N. Dong\unskip,$^{(17)}$}
\mbox{M. Doser\unskip,$^{(27)}$}
\mbox{R. Dubois\unskip,$^{(27)}$}
\mbox{B.I. Eisenstein\unskip,$^{(12)}$}
\mbox{V. Eschenburg\unskip,$^{(16)}$}
\mbox{E. Etzion\unskip,$^{(36)}$}
\mbox{S. Fahey\unskip,$^{(7)}$}
\mbox{D. Falciai\unskip,$^{(11)}$}
\mbox{C. Fan\unskip,$^{(7)}$}
\mbox{J.P. Fernandez\unskip,$^{(33)}$}
\mbox{M.J. Fero\unskip,$^{(17)}$}
\mbox{K.Flood\unskip,$^{(15)}$}
\mbox{R. Frey\unskip,$^{(20)}$}
\mbox{J. Gifford\unskip,$^{(36)}$}
\mbox{T. Gillman\unskip,$^{(25)}$}
\mbox{G. Gladding\unskip,$^{(12)}$}
\mbox{S. Gonzalez\unskip,$^{(17)}$}
\mbox{E. R. Goodman\unskip,$^{(7)}$}
\mbox{E.L. Hart\unskip,$^{(30)}$}
\mbox{J.L. Harton\unskip,$^{(9)}$}
\mbox{A. Hasan\unskip,$^{(4)}$}
\mbox{K. Hasuko\unskip,$^{(31)}$}
\mbox{S. J. Hedges\unskip,$^{(5)}$}
\mbox{S.S. Hertzbach\unskip,$^{(15)}$}
\mbox{M.D. Hildreth\unskip,$^{(27)}$}
\mbox{J. Huber\unskip,$^{(20)}$}
\mbox{M.E. Huffer\unskip,$^{(27)}$}
\mbox{E.W. Hughes\unskip,$^{(27)}$}
\mbox{X.Huynh\unskip,$^{(27)}$}
\mbox{H. Hwang\unskip,$^{(20)}$}
\mbox{M. Iwasaki\unskip,$^{(20)}$}
\mbox{D. J. Jackson\unskip,$^{(25)}$}
\mbox{P. Jacques\unskip,$^{(26)}$}
\mbox{J.A. Jaros\unskip,$^{(27)}$}
\mbox{Z.Y. Jiang\unskip,$^{(27)}$}
\mbox{A.S. Johnson\unskip,$^{(27)}$}
\mbox{J.R. Johnson\unskip,$^{(36)}$}
\mbox{R.A. Johnson\unskip,$^{(6)}$}
\mbox{T. Junk\unskip,$^{(27)}$}
\mbox{R. Kajikawa\unskip,$^{(19)}$}
\mbox{M. Kalelkar\unskip,$^{(26)}$}
\mbox{Y. Kamyshkov\unskip,$^{(30)}$}
\mbox{H.J. Kang\unskip,$^{(26)}$}
\mbox{I. Karliner\unskip,$^{(12)}$}
\mbox{H. Kawahara\unskip,$^{(27)}$}
\mbox{Y. D. Kim\unskip,$^{(28)}$}
\mbox{M.E. King\unskip,$^{(27)}$}
\mbox{R. King\unskip,$^{(27)}$}
\mbox{R.R. Kofler\unskip,$^{(15)}$}
\mbox{N.M. Krishna\unskip,$^{(7)}$}
\mbox{R.S. Kroeger\unskip,$^{(16)}$}
\mbox{M. Langston\unskip,$^{(20)}$}
\mbox{A. Lath\unskip,$^{(17)}$}
\mbox{D.W.G. Leith\unskip,$^{(27)}$}
\mbox{V. Lia\unskip,$^{(17)}$}
\mbox{C.Lin\unskip,$^{(15)}$}
\mbox{M.X. Liu\unskip,$^{(37)}$}
\mbox{X. Liu\unskip,$^{(33)}$}
\mbox{M. Loreti\unskip,$^{(22)}$}
\mbox{A. Lu\unskip,$^{(32)}$}
\mbox{H.L. Lynch\unskip,$^{(27)}$}
\mbox{J. Ma\unskip,$^{(35)}$}
\mbox{G. Mancinelli\unskip,$^{(26)}$}
\mbox{S. Manly\unskip,$^{(37)}$}
\mbox{G. Mantovani\unskip,$^{(23)}$}
\mbox{T.W. Markiewicz\unskip,$^{(27)}$}
\mbox{T. Maruyama\unskip,$^{(27)}$}
\mbox{H. Masuda\unskip,$^{(27)}$}
\mbox{E. Mazzucato\unskip,$^{(10)}$}
\mbox{A.K. McKemey\unskip,$^{(4)}$}
\mbox{B.T. Meadows\unskip,$^{(6)}$}
\mbox{G. Menegatti\unskip,$^{(10)}$}
\mbox{R. Messner\unskip,$^{(27)}$}
\mbox{P.M. Mockett\unskip,$^{(35)}$}
\mbox{K.C. Moffeit\unskip,$^{(27)}$}
\mbox{T.B. Moore\unskip,$^{(37)}$}
\mbox{M.Morii\unskip,$^{(27)}$}
\mbox{D. Muller\unskip,$^{(27)}$}
\mbox{V.Murzin\unskip,$^{(18)}$}
\mbox{T. Nagamine\unskip,$^{(31)}$}
\mbox{S. Narita\unskip,$^{(31)}$}
\mbox{U. Nauenberg\unskip,$^{(7)}$}
\mbox{H. Neal\unskip,$^{(27)}$}
\mbox{M. Nussbaum\unskip,$^{(6)}$}
\mbox{N.Oishi\unskip,$^{(19)}$}
\mbox{D. Onoprienko\unskip,$^{(30)}$}
\mbox{L.S. Osborne\unskip,$^{(17)}$}
\mbox{R.S. Panvini\unskip,$^{(34)}$}
\mbox{C. H. Park\unskip,$^{(29)}$}
\mbox{T.J. Pavel\unskip,$^{(27)}$}
\mbox{I. Peruzzi\unskip,$^{(11)}$}
\mbox{M. Piccolo\unskip,$^{(11)}$}
\mbox{L. Piemontese\unskip,$^{(10)}$}
\mbox{K.T. Pitts\unskip,$^{(20)}$}
\mbox{R.J. Plano\unskip,$^{(26)}$}
\mbox{R. Prepost\unskip,$^{(36)}$}
\mbox{C.Y. Prescott\unskip,$^{(27)}$}
\mbox{G.D. Punkar\unskip,$^{(27)}$}
\mbox{J. Quigley\unskip,$^{(17)}$}
\mbox{B.N. Ratcliff\unskip,$^{(27)}$}
\mbox{T.W. Reeves\unskip,$^{(34)}$}
\mbox{J. Reidy\unskip,$^{(16)}$}
\mbox{P.L. Reinertsen\unskip,$^{(33)}$}
\mbox{P.E. Rensing\unskip,$^{(27)}$}
\mbox{L.S. Rochester\unskip,$^{(27)}$}
\mbox{P.C. Rowson\unskip,$^{(8)}$}
\mbox{J.J. Russell\unskip,$^{(27)}$}
\mbox{O.H. Saxton\unskip,$^{(27)}$}
\mbox{T. Schalk\unskip,$^{(33)}$}
\mbox{R.H. Schindler\unskip,$^{(27)}$}
\mbox{B.A. Schumm\unskip,$^{(33)}$}
\mbox{J. Schwiening\unskip,$^{(27)}$}
\mbox{S. Sen\unskip,$^{(37)}$}
\mbox{V.V. Serbo\unskip,$^{(27)}$}
\mbox{M.H. Shaevitz\unskip,$^{(8)}$}
\mbox{J.T. Shank\unskip,$^{(5)}$}
\mbox{G. Shapiro\unskip,$^{(13)}$}
\mbox{D.J. Sherden\unskip,$^{(27)}$}
\mbox{K. D. Shmakov\unskip,$^{(30)}$}
\mbox{C. Simopoulos\unskip,$^{(27)}$}
\mbox{N.B. Sinev\unskip,$^{(20)}$}
\mbox{S.R. Smith\unskip,$^{(27)}$}
\mbox{M. B. Smy\unskip,$^{(9)}$}
\mbox{J.A. Snyder\unskip,$^{(37)}$}
\mbox{H. Staengle\unskip,$^{(9)}$}
\mbox{A. Stahl\unskip,$^{(27)}$}
\mbox{P. Stamer\unskip,$^{(26)}$}
\mbox{H. Steiner\unskip,$^{(13)}$}
\mbox{R. Steiner\unskip,$^{(1)}$}
\mbox{M.G. Strauss\unskip,$^{(15)}$}
\mbox{D. Su\unskip,$^{(27)}$}
\mbox{F. Suekane\unskip,$^{(31)}$}
\mbox{A. Sugiyama\unskip,$^{(19)}$}
\mbox{S. Suzuki\unskip,$^{(19)}$}
\mbox{M. Swartz\unskip,$^{(14)}$}
\mbox{A. Szumilo\unskip,$^{(35)}$}
\mbox{T. Takahashi\unskip,$^{(27)}$}
\mbox{F.E. Taylor\unskip,$^{(17)}$}
\mbox{J. Thom\unskip,$^{(27)}$}
\mbox{E. Torrence\unskip,$^{(17)}$}
\mbox{N. K. Toumbas\unskip,$^{(27)}$}
\mbox{T. Usher\unskip,$^{(27)}$}
\mbox{C. Vannini\unskip,$^{(24)}$}
\mbox{J. Va'vra\unskip,$^{(27)}$}
\mbox{E. Vella\unskip,$^{(27)}$}
\mbox{J.P. Venuti\unskip,$^{(34)}$}
\mbox{R. Verdier\unskip,$^{(17)}$}
\mbox{P.G. Verdini\unskip,$^{(24)}$}
\mbox{D. L. Wagner\unskip,$^{(7)}$}
\mbox{S.R. Wagner\unskip,$^{(27)}$}
\mbox{A.P. Waite\unskip,$^{(27)}$}
\mbox{S. Walston\unskip,$^{(20)}$}
\mbox{J.Wang\unskip,$^{(27)}$}
\mbox{S.J. Watts\unskip,$^{(4)}$}
\mbox{A.W. Weidemann\unskip,$^{(30)}$}
\mbox{E. R. Weiss\unskip,$^{(35)}$}
\mbox{J.S. Whitaker\unskip,$^{(5)}$}
\mbox{S.L. White\unskip,$^{(30)}$}
\mbox{F.J. Wickens\unskip,$^{(25)}$}
\mbox{B. Williams\unskip,$^{(7)}$}
\mbox{D.C. Williams\unskip,$^{(17)}$}
\mbox{S.H. Williams\unskip,$^{(27)}$}
\mbox{S. Willocq\unskip,$^{(15)}$}
\mbox{R.J. Wilson\unskip,$^{(9)}$}
\mbox{W.J. Wisniewski\unskip,$^{(27)}$}
\mbox{J. L. Wittlin\unskip,$^{(15)}$}
\mbox{M. Woods\unskip,$^{(27)}$}
\mbox{G.B. Word\unskip,$^{(34)}$}
\mbox{T.R. Wright\unskip,$^{(36)}$}
\mbox{J. Wyss\unskip,$^{(22)}$}
\mbox{R.K. Yamamoto\unskip,$^{(17)}$}
\mbox{J.M. Yamartino\unskip,$^{(17)}$}
\mbox{X. Yang\unskip,$^{(20)}$}
\mbox{J. Yashima\unskip,$^{(31)}$}
\mbox{S.J. Yellin\unskip,$^{(32)}$}
\mbox{C.C. Young\unskip,$^{(27)}$}
\mbox{H. Yuta\unskip,$^{(2)}$}
\mbox{G. Zapalac\unskip,$^{(36)}$}
\mbox{R.W. Zdarko\unskip,$^{(27)}$}
\mbox{J. Zhou\unskip.$^{(20)}$}
\it
\vskip \baselineskip
\vskip \baselineskip
\baselineskip=.75\baselineskip
$^{(1)}$
Adelphi University, Garden City, New York 11530, \break
$^{(2)}$
Aomori University, Aomori, 030 Japan, \break
$^{(3)}$
INFN Sezione di Bologna, I-40126 Bologna, Italy, \break
$^{(4)}$
University of Bristol, Bristol, U.K., \break
$^{(4)}$
Brunel University, Uxbridge, Middlesex, UB8 3PH United Kingdom, \break
$^{(5)}$
Boston University, Boston, Massachusetts 02215, \break
$^{(6)}$
University of Cincinnati, Cincinnati, Ohio 45221, \break
$^{(7)}$
University of Colorado, Boulder, Colorado 80309, \break
$^{(8)}$
Columbia University, New York, New York 10533, \break
$^{(9)}$
Colorado State University, Ft. Collins, Colorado 80523, \break
$^{(10)}$
INFN Sezione di Ferrara and Universita di Ferrara, I-44100 Ferrara, Italy, \break
$^{(11)}$
INFN Lab. Nazionali di Frascati, I-00044 Frascati, Italy, \break
$^{(12)}$
University of Illinois, Urbana, Illinois 61801, \break
$^{(14)}$
Johns Hopkins University, Baltimore, MD 21218-2686, \break
$^{(13)}$
Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720, \break
$^{(14)}$
Louisiana Technical University, Ruston, LA 71272, \break
$^{(15)}$
University of Massachusetts, Amherst, Massachusetts 01003, \break
$^{(16)}$
University of Mississippi, University, Mississippi 38677, \break
$^{(17)}$
Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, \break
$^{(18)}$
Institute of Nuclear Physics, Moscow State University, 119899, Moscow Russia, \break
$^{(19)}$
Nagoya University, Chikusa-ku, Nagoya 464 Japan, \break
$^{(20)}$
University of Oregon, Eugene, Oregon 97403, \break
$^{(21)}$
Oxford University, Oxford, OX1 3RH, United Kingdom, \break
$^{(22)}$
INFN Sezione di Padova and Universita di Padova I-35100, Padova, Italy, \break
$^{(23)}$
INFN Sezione di Perugia and Universita di Perugia, I-06100 Perugia, Italy, \break
$^{(24)}$
INFN Sezione di Pisa and Universita di Pisa, I-56010 Pisa, Italy, \break
$^{(25)}$
Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX United Kingdom, \break
$^{(26)}$
Rutgers University, Piscataway, New Jersey 08855, \break
$^{(27)}$
Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309, \break
$^{(28)}$
Sogang University, Seoul, Korea, \break
$^{(29)}$
Soongsil University, Seoul, Korea 156-743, \break
$^{(30)}$
University of Tennessee, Knoxville, Tennessee 37996, \break
$^{(31)}$
Tohoku University, Sendai 980, Japan, \break
$^{(32)}$
University of California at Santa Barbara, Santa Barbara, California 93106, \break
$^{(33)}$
University of California at Santa Cruz, Santa Cruz, California 95064, \break
$^{(36)}$
University of Victoria, Victoria, B.C., Canada, V8W 3P6, \break
$^{(34)}$
Vanderbilt University, Nashville, Tennessee 37235, \break
$^{(35)}$
University of Washington, Seattle, Washington 98105, \break
$^{(36)}$
University of Wisconsin, Madison, Wisconsin 53706, \break
$^{(37)}$
Yale University, New Haven, Connecticut 06511. \break
\rm
\end{center}
\vfill
\eject
\section{Confinement Scale QCD and Chiral Dynamics}
\label{sec:intro}
At low energies the interaction between quarks and gluons is extremely
strong and leads to confinement, where approximate QCD solutions can be
obtained by an effective field theory known as chiral perturbation theory
(ChPT) or Chiral Dynamics\cite{book,ChPT,ChPT2,work1}. This is based on
the chiral symmetry present in the QCD Lagrangian in the limit of massless
light quarks, but which is broken in the ground state of matter. In such a
situation, Goldstone's theorem states that there are massless, pseudoscalar
Bosons whose interactions with other hadrons vanish at zero
momentum\cite{book,Goldstone,L}. In the case of m$_{u}$=m$_{d}$=0, there
are three Goldstone Bosons which are identified as the pion triplet. The
relatively weak interactions of Goldstone Bosons at low energies invites a
perturbation scheme based on chiral symmetry and hadronic degrees of
freedom.
In the real world, the light quark masses are nonzero, but small
\cite{W1,GL1}. Therefore, for strong interaction theory to have predictive
power, calculations must be performed taking the deviations from the pure
Goldstone theorem into account. As an example, the s wave scattering
length, $a$, vanishes for a Goldstone Boson scattering from any hadron in
the low energy limit. However for a physical meson with finite mass
($\pi,\eta,K$) one would intuitively expect $a \simeq 1/\Lambda_{x}$
(see contribution of A.B. in Ref.\cite{work1}) where
$\Lambda_{x}$ is the chiral symmetry breaking scale ($\simeq
4\pi f_{\pi}\simeq 1 GeV$ for pions, where $f_{\pi}= 92.4 MeV$ is the
decay
constant). This intuitive expectation is supported by the original
calculation of Weinberg \cite{W2} in which the scattering lengths of pions
from any hadrons were first obtained by current algebra techniques (a
precursor of ChPT). The order of magnitude of the s wave scattering lengths
is\cite{W2}:
\begin{equation}
a_{o} = \frac{m_{\pi}}{4 \pi f_{\pi}^{2}}=\frac{m_{\pi}}{\Lambda_{x}
f_{\pi} }\simeq \frac{1.5}{\Lambda_{x}}
\end{equation}
One observes that $a_{o}\rightarrow 0$ when $m_{\pi} \rightarrow 0$
(the chiral limit) and also that $a_{o} \simeq 1/\Lambda_{x}$.
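As a purely numerical illustration of Eq.~(1) (our addition, using the
physical values quoted above), a few lines of Python suffice to evaluate
$a_{o}$ and the chiral scale:
\begin{verbatim}
import math

m_pi   = 139.57   # charged pion mass [MeV]
f_pi   = 92.4     # pion decay constant [MeV]
hbar_c = 197.327  # conversion constant [MeV fm]

lambda_x = 4.0 * math.pi * f_pi        # chiral scale, ~1.16 GeV
a0 = m_pi / (4.0 * math.pi * f_pi**2)  # s wave scattering length [1/MeV]

print("Lambda_x      = %.0f MeV" % lambda_x)
print("a0 * Lambda_x = %.2f (= m_pi/f_pi)" % (a0 * lambda_x))
print("a0            = %.3f/m_pi = %.3f fm" % (a0 * m_pi, a0 * hbar_c))
\end{verbatim}
The printed ratio $a_{o}\Lambda_{x}=m_{\pi}/f_{\pi}\simeq 1.5$ reproduces the
order-of-magnitude statement of Eq.~(1).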
Similarly, one would expect the production and decay amplitudes of
Goldstone Bosons to vanish in the chiral limit. Some examples, which can
be obtained from ChPT calculations \cite{ChPT,ChPT2}, are the threshold
electric dipole amplitude, $E_{0+}^{\gamma N \rightarrow \pi^{0}N}$ for s
wave photo-pion production, the $\Sigma$ term of $\pi N$ scattering, the
isospin breaking $\eta \rightarrow 3 \pi$ decay, and the form factors for
$K_{l4}$ decays.
In a similar vein, there are some observables that
diverge in the chiral limit, such as the charge radii and polarizabilities
of nucleons and pions \cite{ChPT,ChPT2}. In this case, the physical
interpretation is that the meson cloud extends beyond the hadron and in the
chiral limit extends to infinity.
Pion-hadron scattering and the amplitudes given above are examples of
quantities that either vanish or blow up in the chiral limit. In the real
world, where the light quark masses are non-zero, chiral symmetry is
explicitly broken and these quantities are finite and non-zero. Their
precise (nonzero and finite) values are measures of explicit chiral
symmetry breaking, and therefore a theoretical challenge to calculate them.
Quantities which either vanish or diverge in the chiral limit point to an
experimental opportunity to perform precise experiments, not only to check
ChPT calculations, but also as fundamental quantities which must be
predicted by any theory of the strong interaction. Experimental Chiral
Dynamics is the study of the properties, production and decay amplitudes,
and low energy interactions of the almost Goldstone Bosons ($\pi,\eta,K$)
with themselves and with other hadrons.
The main purpose of this contribution is to point out new and exciting
experimental possibilities in chiral dynamics which arise by having an
experimental apparatus with thin, pure, polarized targets in an electron
storage ring with an intense, polarized electron beam of $\simeq$ 1 GeV.
We will introduce the possibility of a new very small angle electron
scattering/almost--real photon tagging
facility (SMASH) which will utilize the polarized, internal targets being
built for BLAST. As important examples we shall discuss the threshold
$\vec{\gamma} p \rightarrow \pi^{0} p$ and polarized Compton scattering,
$\vec{\gamma} \vec{p} \rightarrow \gamma p$, reactions. Furthermore,
such a facility would be capable of measuring all
photo-hadron processes with polarized photons on polarized and unpolarized
internal targets, e.g. protons, deuterons, and $^{3}He$. In particular, the
coherent $\vec{\gamma}
\vec{D} \rightarrow \pi^{0} D$ reaction can be accessed from threshold
through the $\Delta$ region and could produce important new results on the
$\vec{\gamma} \vec{n} \rightarrow \pi^{0} n$ amplitude.
Some of these experiments require soft recoil ion
detection near the internal target.
We will also
point out the possibility of using the BLAST detector itself in addition to
the internal target to investigate other timely physics issues e.g.
to study the quadrupole components
in the $\vec{\gamma} \vec{p} \rightarrow \Delta$ transition.
\subsection{Photopion Reactions and Light Quark Dynamics}
Near threshold pion photoproduction is an excellent example of confinement
scale QCD physics, where considerable theoretical and experimental progress
has been made in the past few years. Starting with the availability of CW
electron
beams, measurements of the $\gamma p \rightarrow \pi^{0}p$ reaction have been performed with
high quality tagged photon beams at Mainz\cite{Mainz} and
Saskatoon\cite{Sask}. At the same time, ChPT calculations have been
performed which have advanced our understanding of this reaction\cite{BKM}.
For the sake of brevity only the photoproduction experiments will be
discussed here.
ChPT is an effective field theory which uses the observed hadrons rather
than the quarks and gluons as the degrees of
freedom\cite{book,ChPT,ChPT2,work1}. The effective Lagrangians are
organized into a series of increasing powers of the momenta,
$(p/\Lambda_{x})^{n}$, where $\Lambda_{x} \simeq 4 \pi f_{\pi} \simeq 1
GeV$ is the chiral scale parameter. The introduction of a higher order
Lagrangian introduces low energy constants which are required to
renormalize the infinities order by order. At the present time these low
energy constants must be determined empirically by fits to data or
estimated by the principle of resonance saturation. In principle they can
be obtained from the QCD Lagrangian by integrating out the high energy
degrees of freedom, e.g. by lattice gauge theory. The importance of ChPT
is that it is an effective {\it theory} based on QCD, i.e. at each order in
the momentum expansion, the diagrams that must be calculated are specified
and not left to individual discretion as they are in model calculations.
At the present time one loop ChPT calculations for electromagnetic pion
production have been carried out to $O(p^{4})$ \cite{BKM}. The presence of
the counterterms implies 3 low energy constants in photoproduction and 5 in
electroproduction: two in the transverse s wave multipole $E_{0+}$, one in
the p wave transverse multipoles, and two in the longitudinal s wave
multipole $L_{0+}$ (for electroproduction). Currently, these are
determined by a fit to the data and also estimated by the principle of
resonance saturation. The two approaches are found to agree \cite{BKM},
indicating that the values are understood. It should also be noted that
$\pi^{0}$ photo and electroproduction from the neutron can be predicted
without any additional parameters. Thus measurements of the neutron
production amplitudes will provide a stringent test of ChPT calculations.
One important advantage that ChPT has brought to the study of
electromagnetic meson production is the systematic ordering of the
diagrams. In particular, pion rescattering in the final state (1 loop
diagram) is the crucial ingredient of the near threshold energy dependence.
The largest contribution comes from the production of charged pions in the
intermediate state. Since the ratio of the electric dipole amplitudes for
the neutral and charged pion channels $R= E_{0+}^{\gamma p \rightarrow
\pi^{+}n} /E_{0+}^{\gamma p \rightarrow \pi^{0}p}\simeq -20$, the two
step $\gamma p \rightarrow \pi^{+}n\rightarrow \pi^{0}p$ reaction is as
strong as the direct $\gamma p \rightarrow \pi^{0}p$ path. Combined with
the separation of the $\pi^{0}p$ and $\pi^{+}n$ thresholds this leads to a
unitary cusp in the $\gamma p \rightarrow \pi^{0}p$ reaction.
\begin{figure}[ht]
\begin{center}
\epsfig{file=reE0+.eps, width=0.45\textwidth}
\caption{ \footnotesize
$Re E_{0+}$ (in units of $10^{-3}/m_{\pi}$) for the
$\gamma p \rightarrow \pi^{0}p$ reaction versus photon energy k.
The dashed dot curve is the ChPT fit \protect\cite{ChPT} and the
solid curve is the unitary fit Eq. \protect\ref{eqn:e0+_unitary} to the
Mainz \protect\cite{Mainz} data (open circles). The Saskatoon data
\protect\cite{Sask} is also shown (filled circles).
\normalsize}
\label{fig:ree0+}
\end{center}
\end{figure}
The simplest example to understand the occurrence of the unitary cusp
in the $\gamma^{*}p \rightarrow \pi^{0}p$ reaction (where $\gamma^{*}$ is a
real or virtual photon) is to use the 3 channel S matrix for the open
channels ($\gamma^{*} p$, $\pi^{0}p$, $\pi^{+}n$)\cite{AB}. Applying the
constraints of unitarity and time reversal invariance, one is led to the
coupled channel result for the s wave amplitude $E_{0+}^{\gamma p
\rightarrow \pi^{0}p}$:
\begin{equation}
E_{0+}^{\gamma p \rightarrow \pi^{0}p}(k)
= e^{i\delta_{0}} [A(k) + i \beta q_{+} ]
\label{eqn:e0+_unitary}
\end{equation}
where $\delta_{0}$ is the s wave $\pi^{0}p$ phase shift (predicted to be
very small), A(k) is a smooth function of the photon energy k, $\beta=
E_{0+}^{\gamma p \rightarrow \pi^{+}n} \cdot
a^{cex}_{\pi^{+}n\rightarrow\pi^{0}p}$ is the cusp parameter, and $q_{+}$
is the $\pi^{+}$ CMS momentum which is continued below the $\pi^{+}n$
threshold as $i\mid q_{+}\mid$. The cusp function $\beta q_{+}$ contributes
to the real (imaginary) part of $E_{0+}$ below (above) the $\pi^{+}n$
threshold.
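To make this threshold structure explicit, the following Python sketch (an
illustration of Eq.~(\ref{eqn:e0+_unitary}) with $\delta_{0}\simeq 0$, not a
fit to data; the smooth term $A$ and the cusp parameter $\beta$ are
placeholder values in arbitrary consistent units) evaluates $q_{+}$ with its
continuation $q_{+}\rightarrow i\mid q_{+}\mid$ below the $\pi^{+}n$
threshold:
\begin{verbatim}
import math, cmath

m_p, m_n     = 938.272, 939.565  # nucleon masses [MeV]
m_pi0, m_pip = 134.977, 139.570  # pion masses [MeV]

# Threshold lab photon energies (~144.7 and ~151.4 MeV):
k_pi0 = ((m_p + m_pi0)**2 - m_p**2) / (2.0 * m_p)
k_pip = ((m_n + m_pip)**2 - m_p**2) / (2.0 * m_p)
print("thresholds: %.1f MeV (pi0 p), %.1f MeV (pi+ n)" % (k_pi0, k_pip))

def q_plus(k):
    """pi+ CMS momentum [MeV] at lab photon energy k [MeV];
    pure imaginary below the pi+ n threshold."""
    s = m_p**2 + 2.0 * m_p * k
    kallen = (s - (m_n + m_pip)**2) * (s - (m_n - m_pip)**2)
    return cmath.sqrt(complex(kallen, 0.0)) / (2.0 * math.sqrt(s))

def e0_plus(k, A=-1.2, beta=0.05):  # placeholder values
    return A + 1j * beta * q_plus(k)

for k in (146.0, 149.0, 151.4, 155.0):
    e = e0_plus(k)
    print("k=%6.1f MeV  Re=%+.2f  Im=%+.2f" % (k, e.real, e.imag))
\end{verbatim}
Below the $\pi^{+}n$ threshold the product $i\beta q_{+}$ is real, above it
imaginary, which is exactly the behaviour described in the text.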
The results for the real part of the s wave electric dipole amplitude
$E_{0+}^{\gamma p \rightarrow \pi^{0}p}$ are presented in Fig.
\ref{fig:ree0+}. The rapid energy dependence between the $\pi^{0}p$ and
$\pi^{+}n$ thresholds at 144.7 and 151.4 MeV due to the unitary cusp can be
seen. Above the $\pi^{+}n$ threshold the energy dependence is much less
rapid. This is in approximate agreement with the predictions of the unitary
cusp (Eq.\ref{eqn:e0+_unitary}, Ref.\cite{AB}) and ChPT\cite{BKM}. It should
be noted that the errors in the
data shown in Fig.\ref{fig:ree0+} are statistical only so that the
disagreement between the data sets
is not serious. To complete the observation of the unitary cusp and to
precisely measure the value of $\beta$, $Im(E_{0+})$ must be measured.
In particular, experiments with polarized targets can measure the predicted
rapid rise in $Im (E_{0+}^{\gamma p \rightarrow \pi^{0}p})$ above the
$\pi^{+}n$ threshold. The SMASH facility
outlined here would provide such important data.
The unitary constraints to the multipole amplitudes show the importance of
the final state $\pi N$ scattering and charge exchange on the $\gamma^{*} p
\rightarrow \pi^{0}p$ reaction. It raises the possibility of measuring
$a_{\pi^{0}p}$ and $a^{cex}_{\pi^{+}n \rightarrow \pi^{0} p}$ by
measurements of $Im(E_{0+}^{\gamma p \rightarrow \pi^{0}p})$ both below and
above the $\pi^{+}n$ threshold\cite{AB}. The s wave $\pi^{0}N$ elastic
scattering and charge exchange scattering length have been of considerable
interest since Weinberg\cite{W1} predicted that there should be an isospin
breaking effect due to the up, down quark mass difference. For
$a_{\pi^{0}N}$ this effect is $\simeq$ 30\%, in large part because the
isospin conserving term in $a_{\pi^{0}N}$ is very small ($\simeq 0.01
/m_{\pi}$) and consequently very difficult to measure. Since the charge
exchange scattering length is much larger ($\simeq 0.13/m_{\pi}$) this is
easier to observe. Here the isospin violating term due to the up and down
quark mass effect is the same (within a factor of $\sqrt{2}$) as the
elastic scattering prediction but in relative terms is $\simeq$ 2 to 3\% of
the isospin conserving term. The most straightforward way to observe this
predicted isospin violation is to measure the cusp parameter $\beta$ in the
$\gamma p \rightarrow \pi^{0} p$ reaction with polarized proton
targets\cite{AB} and also $E_{0+}$ in the $\gamma p \rightarrow \pi^{+}n$
reaction. Then one could compare the values of $a^{cex}_{\pi^{+} n
\rightarrow \pi^{0}p}$ with the measured value of $a^{cex}_{\pi^{-}p
\rightarrow \pi^{0}n}$ from the line width in pionic hydrogen\cite{PSI}.
If isospin is a good quantum number then these will be equal and opposite.
Equivalently, the dynamic isospin breaking effect of the up, down quark
masses can be considered as an isospin breaking contribution to $\beta$ of
$\simeq$ 2 to 3\% \cite{AB}.
In addition to the s wave multipole $E_{0+}$ discussed above, ChPT makes
predictions for the threshold magnitudes of the three p wave multipoles
($P_{1}, P_{2},P_{3}$)\cite{BKM}. Since the p wave $\pi N$ phase shifts are
small at low energies, the p wave multipoles are essentially real.
Therefore for each threshold $\gamma N \rightarrow \pi N $ reaction there
are five multipole amplitudes to be measured ($Re(E_{0+}), Im(E_{0+}),
P_{1}, P_{2},P_{3}$). The unpolarized cross section can be written as
$\sigma(\theta)= A + B \cos\theta + C \cos^{2}\theta$ where A, B, and C
are real bilinear combinations of the five multipole
amplitudes\cite{BKM,formulas}. A complete experimental determination of the
multipoles requires then two additional polarization measurements. This
could include, e.g. measurements with linear polarized photons and
unpolarized targets and with polarized targets and unpolarized photons
(this latter response measures imaginary parts of interference
amplitudes\cite{formulas}).
Measurements with both photon and target polarization could also be
used\cite{formulas}. There are three independent isospin amplitudes and
four reaction channels ($\gamma p \rightarrow \pi^{+}n, \gamma p
\rightarrow \pi^{0}p, \gamma n \rightarrow \pi^{-}p, \gamma n \rightarrow
\pi^{0}n$), so a measurement of all four constitutes a test of isospin
conservation. Therefore, a comprehensive set of measurements of the
threshold photo-pion reactions requires experiments with polarized photons
and targets, including both neutron and proton targets.
Unpolarized experiments on the threshold $\gamma p \rightarrow \pi^{0}p$
reaction have been performed \cite{Mainz,Sask}. The results for the s
wave multipole $E_{0+}$ were discussed above. In addition, two linear
combinations of the three p wave multipoles were found to be in agreement
with ChPT calculations \cite{BKM, Mainz,Sask}. More recently we have
performed a threshold $\vec{\gamma} p \rightarrow \pi^{0} p$ experiment at
Mainz with linearly polarized photons and the data are presently
being analyzed. This will complete the measurement of the three p wave
multipoles for $\pi^{0}$ photoproduction from the proton. However at the
present time, there are no measurements of $Im(E_{0+})$, which requires
polarized target experiments such as can be performed at SMASH.
There is also growing interest in using the deuteron as a neutron target
for the $\gamma n \rightarrow \pi N$ amplitudes. A ChPT calculation for the
coherent $\gamma D \rightarrow \pi^{0} D$ reaction exactly at threshold has
been performed\cite{Beane}, as was a first experiment of the $\gamma D
\rightarrow \pi^{0} X$ reaction (where X = D or np) at Saskatoon
\cite{Sask2}. Most $\pi^{0}$ spectrometers, including the one deployed at
Saskatoon, do not have sufficient energy resolution to determine whether
this reaction was coherent (X = D) or not (X=np). For the Saskatoon
experiment the smaller incoherent cross section was calculated with a model
and subtracted from the data to produce a coherent cross section to compare
with theory \cite{Sask2}.
\begin{figure}[t]
\begin{center}
\epsfig{file=chicane_plan-xxx.eps,width=0.6\textwidth} \\
\hrulefill \\
\epsfig{file=optics.eps,width=0.6\textwidth}
\caption{\footnotesize
Conceptual layout for a very small angle electron
tagging facility at the BLAST target position in the Bates storage
ring. The top figure shows where the chicane
would fit into the existing ring. The first dipole in the chicane
would act as the electron spectrometer. The bottom figure shows
sample scattered electron ray bundles (up to $\theta$=0.5$^{\circ}$ for
$\phi$=0,90,180$^{\circ}$). A
schematic wire chamber shows the corresponding almost--real photon
energies that would be detected in this zeroth-order design.
\normalsize}
\label{fig:chicane}
\label{fig:raytrace}
\end{center}
\end{figure}
In general studies of the coherent $\gamma D \rightarrow \pi^{0} D$
reaction are best performed with recoil deuteron detection. This is an
important opportunity for polarized internal targets which are thin and
allow the detection of low energy recoil deuterons. There are also
opportunities in the $\Delta$ region. There an interesting sensitivity to
the quadrupole E2 amplitude has been theoretically demonstrated for
polarized deuteron targets \cite{WA} (see also Fig.\ref{fig:wilhelm}).
This sensitivity is for small
$\pi^{0}$ CMS angles for which the recoil deuteron energy is small.
Although space does not permit a detailed discussion of Compton scattering,
a few remarks about its significance are in order. Previous measurements
have aimed to determine the electric and magnetic polarizabilities of the
proton. However, the spin dependent polarizabilities have yet to be
measured. These spin dependent polarizabilities are subtle probes of the
internal structure of the nucleon. This requires Compton scattering of
polarized photons from polarized targets. The use of polarized internal
targets is an excellent way to make these measurements, particularly since
these targets are thin and one can measure the recoil nuclei as will be
discussed in the next section.
\section{\large SMASH: {\it SM}all {\it A}ngle Electron {\it S}cattering
{\it H}odoscope \normalsize}
In this section we discuss the technique of very small angle electron
scattering, or almost--real photon tagging, with a specific conceptual
implementation for the Bates Storage
Ring, and some ideas for important possible experiments. Much of this
material has
been presented in an unpublished report from a previous incarnation of this
concept proposed at Bates several years ago \cite{Polite}. This previous
project was conceived before BLAST was funded, and therefore was presented
as a stand--alone facility with a different proposed position in the Bates
Ring. With the advent of BLAST funding, the new idea is to employ the BLAST
polarized target, and to make modifications to the ring to accommodate a very
small angle electron tagger.
The method of very small angle electron scattering has been known for many
years (see e.g. Ref.\cite{history}) and is based on the fact that the
virtual photons
have very low $q^{2}$, so can be treated as almost--real photons.
What is new about the present
proposal is the utilization of full polarization observables for both the
photon and the target, which is made possible by detecting these very small
angle scattered electrons, thereby {\it tagging} the almost--real exchange
photons. The target polarization is made possible by the use
of internal targets. For the photon, one obtains circular polarization from
longitudinally polarized electrons. The equivalent response function for
linear polarized almost--real photons \cite{formulas} are obtained by measuring
the $\phi$ dependence of the cross section\cite{Polite}. Therefore, this
virtual tagging proposal goes much further in utilizing the polarized,
pure, and windowless nature of internal targets, and the very large linear
polarization of the almost--real photons.
\subsection{Almost-real Photon Tagger}
A method to perform very small angle electron scattering/almost--real
photon tagging in the Bates Storage Ring is presented in Fig.
\ref{fig:chicane}. At this stage the design is conceptual only, but it does
exhibit the main features of what a fully designed facility would have. The
point here is to introduce the idea and its possibilities to motivate
further study.
The design is based on the introduction of a chicane downstream of the
internal target position, beginning just outside the BLAST superstructure.
A crucial element in the chicane design is that the extra path length
introduced must be $n\cdot\lambda$, where $\lambda=10.497\,cm$ is the beam
wavelength fixed by the RF. In this conceptual design $n=5$. Another
important element is for the optics from the chicane to the next ring
dipole to be identical to the unmodified ring. The chicane shown in
Fig.\ref{fig:chicane} does not satisfy this, but preliminary ring optics
investigations by T. Zwart of Bates \cite{zwart-pc} using box (not sector)
dipoles without the quadrupoles in between show that in principle this
should be possible with relatively minor adjustments of local beam line
elements. A full study of this issue could not be completed in time for
this article, but nevertheless, one does not expect that the properties of
the fully designed chicane/tagger will be significantly different from what
is to be described.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{l@{\hspace{0.60cm}}r}
\epsfig{file=acceptance1.eps,height=7.75cm,width=0.45\textwidth} &
\epsfig{file=acceptance2.eps,height=7.75cm,width=0.45\textwidth} \\
\end{tabular}
\caption{\footnotesize
Acceptance of very small angle scattered electrons for the
chicane spectrometer design in Fig.\protect\ref{fig:chicane},
{\bf Left:} $\phi_e$ $vs$ $\theta_e$ at fixed energy, and
{\bf Right:} $\theta_e$ $vs$ $E'$ at fixed azimuthal
angle. The lightly shaded areas at right indicate partially filled bins.
\normalsize }
\label{fig:acceptance}
\end{center}
\end{figure}
Beam electrons which do not interact in the target will be transported
around the chicane and then returned to the original ring trajectory.
However, the electrons that interact and are scattered into a very small
angle ($<0.5^{\circ}$ here) will have lost energy, so will be bent away
from the beam at the first chicane dipole (see Fig.\ref{fig:raytrace}). By
placing wire chambers to detect these scattered electrons outside this
dipole, a QQD spectrometer will be realized. A ray tracing
simulation shown in Fig.\ref{fig:raytrace} shows that something
approximating a focal plane emerges from even this simple design. One is
confident then that with a carefully designed first chicane dipole a
spectrometer with reasonable optical properties can be achieved.
As a guideline to what can be expected in a full--fledged system, the
acceptance of the chicane system (Fig.\ref{fig:raytrace}) is shown in
Fig.\ref{fig:acceptance} as a function of the ratio of the outgoing to
incident electron energy E'/E, and the outgoing electron angles
$\theta_{e}, \phi_{e}$. The acceptance is limited by the apertures of the
first two quadrupoles. In this case a rectangular beam box was used, which
increases the acceptance at $\phi= 0, 90, 180, 270^{\circ}$. With the usual
cylindrical beam pipe, the acceptance is limited to about 0.33$^{\circ}$.
Clearly, larger aperture quadrupoles would be preferable, but nonetheless
even in this scenario decent momentum coverage is achieved, and as will be
shown, a $\simeq0.5^{\circ}$ angular acceptance is not unreasonable for
almost--real photon tagging.
Here we recall the formulas for the kinematic variables relevant to small
angle electron scattering. Note that in the extreme forward direction, the
finite mass of the electron cannot be neglected, so the exact expressions
must be used. These are \cite{Polite}:
\begin{eqnarray*}
q^2 ~=~ -Q^2 & = & 2m_e^2 - 2EE' + 2pp'\cos\theta_e\\
k_\gamma & = & \sqrt{p^2+p'^2-2pp'\cos\theta_e} \\
\epsilon & = & \left(1+\frac{Q^2 |k_\gamma|^2}{2p^2p'^2\sin^2\theta_e}\right)^{-1}\\
P_{\gamma} & = & h\cdot\sqrt{1-\epsilon^{2}} \\
\Gamma & = & \frac{\alpha}{2\pi^2}\,
\frac{E'}{E}\,
\frac{k_\gamma}{Q^2}\,
\frac{1}{1-\epsilon}
\label{eqn:formulas}
\end{eqnarray*}
where $\theta_e$ is the electron scattering angle, $E (E')$ and $p (p')$
are the beam energy and momentum of the incident (scattered) electrons, $h$
is the longitudinal beam polarization, and $q^2$, $k_\gamma$, $\epsilon$,
$P_{\gamma}$, and $\Gamma$ are the virtual photon four-momentum, momentum,
transverse polarization, circular polarization, and flux, respectively.
These formulas still neglect radiative corrections, but these
should be relatively small, at the 1\% level.
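A direct Python transcription of these exact expressions (our sketch; the
variable names are ours, energies are in GeV and angles in radians) can be
used to evaluate the tagging kinematics:
\begin{verbatim}
import math

ALPHA = 1.0 / 137.036  # fine structure constant
M_E   = 0.511e-3       # electron mass [GeV]

def tagging_kinematics(E, Ep, theta_e, h=0.7):
    """Returns (Q2, k_gamma, epsilon, P_gamma, Gamma) for beam energy E,
    scattered energy Ep and angle theta_e, keeping the electron mass."""
    p  = math.sqrt(E**2 - M_E**2)
    pp = math.sqrt(Ep**2 - M_E**2)
    c  = math.cos(theta_e)
    Q2 = -(2.0*M_E**2 - 2.0*E*Ep + 2.0*p*pp*c)
    k_gamma = math.sqrt(p**2 + pp**2 - 2.0*p*pp*c)
    eps = 1.0 / (1.0 + Q2 * k_gamma**2
                 / (2.0 * p**2 * pp**2 * math.sin(theta_e)**2))
    P_gamma = h * math.sqrt(1.0 - eps**2)
    Gamma = (ALPHA/(2.0*math.pi**2)) * (Ep/E) * (k_gamma/Q2) / (1.0 - eps)
    return Q2, k_gamma, eps, P_gamma, Gamma

# Example: 1 GeV beam, 0.2 GeV photon tagged at 0.2 degrees.
print(tagging_kinematics(1.0, 0.8, math.radians(0.2)))
\end{verbatim}
For instance, at $E=1\,GeV$, $E'=0.8\,GeV$ and $\theta_e=0.2^{\circ}$ one
finds $Q^2\simeq 10^{-5}\,GeV^2$ and $\epsilon\simeq 0.98$, illustrating the
essentially real, highly polarized character of the tagged photons.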
\begin{figure}[ht]
\begin{center}
\mbox{\epsfig{file=avgQ2-plot.eps,width=0.45\textwidth}}
\caption{\footnotesize
Four-momentum transfer of very small angle scattered 1 GeV electrons
versus virtual photon energy, averaged over $\theta <
\frac{1}{3}^{\circ}$ and weighted by the virtual photon flux
$\Gamma$. Note that these virtual
photons are essentially real, except those in the untaggable region
very near the endpoint.
\normalsize}
\label{fig:avgQ2}
\end{center}
\end{figure}
Figure \ref{fig:avgQ2} shows the four--momentum transfer of the detected
scattered electrons, averaged over a $0<\theta_{e}<0.33^{\circ}$ acceptance and
weighted by the virtual photon flux. The very small values indicate that
the exchanged virtual photons are essentially real, except very near the
endpoint, which nevertheless falls outside the
tagging region.
The transverse polarization of almost--real photons is shown
in Fig.\ref{fig:epsilon} both as a function of scattering angle at fixed
photon energy, and also versus energy averaged over angle weighted by the
virtual photon flux. Note the very high transverse polarizations over the
entire tagging range, averaging about 70\%. Note as well that the
polarization is essentially constant beyond about 0.1$^{\circ}$. Also shown in
Fig.\ref{fig:epsilon} is the transfered circular polarization, assuming a
70\% longitudinally polarized electron beam. Sizable polarizations which
are very flat with angle are seen here as well.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{l@{\hspace{1cm}}r}
\epsfig{file=eps_vs_thet-plot.eps,width=0.45\textwidth} &
\epsfig{file=avgeps-plot.eps,width=0.45\textwidth} \\
\end{tabular}
\caption{\footnotesize
{\bf Left:} Virtual photon polarizations versus electron scattering
angle for 1 GeV electrons, for various values of the photon energy;
{\bf Right:} Virtual photon polarization versus photon energy,
averaged over $\theta < \frac{1}{3}^{\circ}$ electron scattering
angle and weighted by the virtual photon flux $\Gamma$. The solid
(dashed) line shows the transverse (circular) polarization.
\normalsize}
\label{fig:epsilon}
\end{center}
\end{figure}
The virtual photon flux is shown in Fig.\ref{fig:virtphotflux}, multiplied
by the luminosity of 7.5$\cdot$10$^{31}$ cm$^{-2}$s$^{-1}$, which is what is expected
at BLAST for the internal polarized proton target. The left figure shows
the flux versus angle, where it is seen that the flux is strongly forward
peaked at most energies, and that there is diminishing strength beyond
about 0.5$^{\circ}$. Although a larger angular acceptance is clearly
preferable, especially for the highest photon energies, this shows that the
returns are diminishing beyond this range. The total flux versus photon
energy is also shown at right in Fig.\ref{fig:virtphotflux}.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{l@{\hspace{1cm}}r}
\epsfig{file=virt_phot-vs-theta-plot.eps,width=0.4\textwidth} &
\epsfig{file=virt_phot_flux-plot.eps,width=0.4\textwidth} \\
\end{tabular}
\caption{\footnotesize
{\bf Left:} Virtual photon rate versus electron
scattering angle, for a 1 GeV beam at the luminosity of 7.5
$\cdot$10$^{31}$ cm$^{-2}$s$^{-1}$ expected for the BLAST internal polarized proton target.
{\bf Right:} Virtual photon flux integrated over
$0<\theta<$0.33$^{\circ}$ solid angle versus virtual photon energy,
for the same luminosity at 0.855 GeV. Also shown is the
bremsstrahlung rate at that energy expected from the Mainz tagged
photon facility, assuming a 2cm polarized butanol target.
\normalsize}
\label{fig:virtphotflux}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\epsfig{file=coh_brem.eps,width=0.6\textwidth}
\caption{\footnotesize
{\bf Left:} Ratio of coherent bremsstrahlung spectrum from a diamond radiator to the
incoherent spectrum at E$_{0}$=0.855 GeV, and {\bf Right:} degree of linear
polarization, as a function of photon energy, at the Mainz tagged photon
facility (see Ref. \protect\cite{mami-a2}). Note that here the linear
polarization is sizable in only a narrow energy range, unlike that shown in
Fig.\protect\ref{fig:epsilon}.
\normalsize}
\label{fig:coh_brem}
\end{center}
\end{figure}
The incoherent bremsstrahlung flux assuming a 2$cm$ butanol target from a
modern tagged photon facility at Mainz \cite{mami-a2} is shown as the dotted line in
Fig.\ref{fig:virtphotflux}. As well, the energy and linear polarization
spectrum of coherent bremsstrahlung beam using a diamond radiator is shown
in Fig.\ref{fig:coh_brem}. The former figure demonstrates that even with a thin
internal target, a sizable rate advantage is seen using tagged almost--real
photons, especially for $\frac{E_{\gamma}}{E_{0}} < 0.5$. Moreover, low
energy recoils are not accessible with usual (thick) frozen targets
like butanol,
whereas they are with (thin) internal targets. The coherent bremsstrahlung
linear
polarization spectrum (Fig.\ref{fig:coh_brem}) shows
lower polarizations over a much smaller energy range than
those in almost--real photon tagging (Fig.\ref{fig:epsilon}).
The figures described above demonstrate the salient features of a very small
angle electron spectrometer of the kind shown in
Fig.\ref{fig:chicane}. Namely, it tags virtual photons which are
essentially "real" with high transverse and circular polarizations, and a
large flux. Coupled with thin highly polarized internal targets, and a
versatile detector such as BLAST, supplemented with a hodoscope for low
energy recoils (such as a large coverage Si strip counter),
it is clear that an almost--real photon
tagger opens up many new opportunities. Some examples will be described in the
following section.
\subsection{Example Experiments}
Here a few example experiments will be discussed, focusing on those
previously mentioned in conjunction with the chiral dynamics studies
outlined in the introduction. Note that the experimental details have not
been worked out in time for this contribution, so only a broad outline of
what is required will be offered to motivate future study.
Figure \ref{fig:pi0photokin} shows the recoil proton momentum versus lab
scattering angle in the $\gamma p \rightarrow \pi^{0}p$ reaction for
constant photon energy and constant CMS pion scattering angle. One observes
that near threshold the protons recoil at low energy in a forward cone, so
that to detect these a forward detector must be constructed. Given the
constraints of the BLAST internal target, this detector would probably need
to be compact, therefore of high positional resolution, implying a silicon
strip--type unit of the kind used in many high energy experiments. Note
that thin windowless internal targets have the great advantage here of
allowing the low momentum recoils to be detected with minimal interference.
One way to identify the scattered $\pi^{0}$s is to detect the recoil
protons with sufficient energy and angular resolution to determine the
missing mass. Another is to detect the produced $\pi^{0}$s, and for this
a crystal "ball" or "cylinder" could be
constructed to fit around the target. This would greatly reduce backgrounds
and add flexibility. It would probably necessitate
operating with the BLAST detector "pulled apart", or else BLAST could
remain in place and photon detectors installed outside the magnet, but this
option increases the size and cost, and reduces the solid angle.
\begin{figure}[ht]
\begin{center}
\epsfig{file=pionmomentum.eps,width=0.35\textwidth}
\caption{\footnotesize
Proton momentum versus scattering angle in neutral pion photoproduction
near threshold, with contours of constant photon energy and pion CMS
angle. Note that close to threshold the protons recoil forward $<20^{\circ}$.
\normalsize}
\label{fig:pi0photokin}
\end{center}
\end{figure}
The same Fig.\ref{fig:pi0photokin} can be used to show the kinematics for
the $\gamma p \rightarrow \pi^{+}n$ reaction, although due to the final
state mass differences, the threshold and kinematic contours will be
slightly altered. In this case one could use the BLAST detector for
$\pi^{+}$ detection with good angular coverage. The planned neutron
detector would cover $38^{\circ}<\theta_{n}<70^{\circ}$, which nicely covers the
$\Delta$ resonance region for e.g. $\gamma p \rightarrow \Delta$ studies.
However, this precludes the near threshold region, so these detectors would
need to be shifted somewhat, or new detectors constructed, to cover the
more forward angles.
The proton and photon kinematic relationships for Compton scattering at
E$_{\gamma}=100 MeV$ are shown in Fig.\ref{fig:compton}. This experiment
would use the same or very similar setup to that used for the near
threshold $\gamma p \rightarrow \pi^{0}p$ experiment described above to
detect the scattered photon and recoil proton. In addition, near forward
photon angles are of interest in double--polarized Compton scattering (see
reference \cite{Polite}). Here, the proton momentum is low and the angle
large, and so can be detected with the proposed BLAST low energy recoil
detector \cite{Blast}. Again, in both cases the thin windowless internal
target is seen to be greatly advantageous to facilitate low energy proton
detection. Circularly polarized photons are required, and we have seen
(Fig.\ref{fig:epsilon}) that these can be rather sizable.
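For convenience, the two-body kinematics behind Fig.~\ref{fig:compton} can be
generated with a few lines of Python (our sketch of standard nuclear Compton
kinematics, using the proton mass in the Compton formula):
\begin{verbatim}
import math

M_P = 938.272  # proton mass [MeV]

def compton(omega, theta_gamma):
    """Scattered photon energy, recoil proton momentum and angle for
    incident photon energy omega [MeV] and photon lab angle [rad]."""
    omega_p = omega / (1.0 + (omega/M_P) * (1.0 - math.cos(theta_gamma)))
    T_p = omega - omega_p                      # proton kinetic energy
    p_p = math.sqrt(T_p**2 + 2.0 * M_P * T_p)  # proton momentum
    theta_p = math.atan2(omega_p * math.sin(theta_gamma),
                         omega - omega_p * math.cos(theta_gamma))
    return omega_p, p_p, theta_p

w, p, t = compton(100.0, math.radians(60.0))
print("omega'=%.1f MeV  p_p=%.1f MeV  theta_p=%.1f deg"
      % (w, p, math.degrees(t)))
\end{verbatim}
At $E_{\gamma}=100\,MeV$ and $\theta_{\gamma}=60^{\circ}$ this gives a recoil
proton of only $\simeq 98\,MeV/c$ at $\simeq 57^{\circ}$, the low momentum
regime where the thin internal target is decisive.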
\begin{figure}[ht]
\begin{center}
\begin{tabular}{l@{\hspace{0.25cm}}r}
\epsfig{file=compfig7.eps,width=0.3\textwidth, angle=90} &
\epsfig{file=compfig8.eps,width=0.3\textwidth, angle=90} \\
\end{tabular}
\caption{\footnotesize
{\bf Left:} Proton and photon momentum versus photon lab angle for
Compton scattering at E$_{\gamma}=100 MeV$, and {\bf Right:} Angular
correlation of the scattered photon and recoil proton.
\normalsize}
\label{fig:compton}
\end{center}
\end{figure}
For a final example, consider the coherent $\gamma D \rightarrow
\pi^{0} D$ reaction on a vector polarized deuteron target
which has been shown (see \cite{Polite}) to be
sensitive to the nucleon quadrupole E2 amplitude. Figure
\ref{fig:wilhelm} demonstrates the calculated sensitivity, which
is greatest at small pion CMS angles where the recoil deuteron energy is
low. Again the proposed BLAST low energy recoil detector would be used
to detect these deuterons. The neutral pions would be detected with the
same setup used for the threshold photoproduction and Compton scattering
experiments. Once again the merits of a thin windowless internal target are
apparent.
\begin{figure}[ht]
\begin{center}
\epsfig{file=wilhelmpfig2.eps,width=0.3\textwidth, angle=90}
\caption{\footnotesize
{\bf Left:} Cross section difference ($\sigma\uparrow - \sigma\downarrow$)
and {\bf Right:} cross section for coherent $\pi^{0}$
photoproduction on a $\pm$100\% vector polarized deuteron target at the $\Delta$
energy, from the model described in Ref.\protect\cite{WA}. The solid line
is the full calculation, whereas the dashed line has the quadrupole E2
amplitude removed. The region of largest sensitivity corresponds to large
angle recoil deuterons of a few MeV kinetic energy.
\normalsize}
\label{fig:wilhelm}
\end{center}
\end{figure}
\section{Conclusion}
The experiments presented in the previous section are but a few examples of
what can be done to
exploit the unique capability of an almost-real photon tagger coupled with
the proposed BLAST facility \cite{Blast}. Including also
a new forward hadron detector and a large acceptance photon detector would
not only allow three important Chiral Dynamics experiments to be
done, but should
also open up a whole new arena of unique experiments.
The low energy QCD experiments include: threshold photo-$\pi^{0}$
production on polarized protons to measure $Im (E_{0+})$ which is
sensitive to the isospin breaking due to the light quark mass
differences, fully polarized Compton scattering, which measures the
internal quark helicity structure, and the electric quadrupole
contribution to the $\gamma N \rightarrow \Delta$ transitions in the
proton and the deuteron. Other reactions not touched on here include
photo nucleon and photo pion production from polarized few body
nuclei. We believe that this opens a new and exciting window of
opportunity for polarized internal targets at Bates.
More detailed design efforts are currently underway, and more input from
the collaboration in general would be warmly welcomed.
\section{Introduction}
Typically around $50\%$ or more of the gas in spiral galaxies consists of H$_2$, inferred indirectly from CO or dust
emission. Since the discovery of dark molecular hydrogen \citep{grenier_unveiling_2005, langer_c+_2010,
planck_collaboration_planck_2011, paradis_dark_2012}, the estimated quantity of H$_2$ in the Milky Way has essentially
been doubled, effectively revealing the nature of some of the dark baryons. It is commonly accepted that the CO-related
molecular hydrogen is present in relatively dense regions of the interstellar medium, molecular clouds with number
density $> 10^{10}\,\unit{m^{-3}}$ and temperatures of $7$--$30\,\unit{K}$ \citep{draine_physics_2011}.
Even though H$_2$ is by far the most abundant molecule ($\sim 90\%$), molecular clouds are mainly detected by CO
emissions because of all the difficulties in detecting cold H$_2$ \citep{bolatto_co--h_2013}. For example, H$_2$ only
starts emitting at temperatures $> 512\,\unit{K}$. See \cite{combes_perspectives_1997} for a review of several possible
methods for detecting cold H$_2$. Because of these detection difficulties, the real quantity of H$_2$ (as well as He,
which shares similar properties of discreteness with H$_2$) in molecular clouds is still rather unknown, especially when
the gas temperature is below $\lesssim 8\,\unit{K}$ down to the cosmic background temperature of $2.76\,\unit{K}$.
The condensation properties of H$_2$ relevant for molecular cloud conditions are well known from laboratory data
\citep{air_liquide_gas_1976}. The phase diagram (Fig.~\ref{fcc}) shows the domain of pressure conventionally attributed
to molecular clouds. One has to keep in mind, however, that since molecular clouds are highly structured
\citep{pfenniger_is_1994}, and commonly observed in a state of supersonic turbulence
\citep{elmegreen_interstellar_2004}, large fluctuations in density and temperature must occur.
The presence of ice in the interstellar medium consisting of heavier molecules, such as H$_2$O, CO, CO$_2$ and NH$_3$
covering dust grains, is nowadays well documented \citep{allamandola_evolution_1999}. Figure \ref{fmol} shows the
location of the critical and triple points of abundant molecules existing in the ISM. H$_2$ ice has been detected in the
absorption band at $2.417\,\mu\unit{m}$ \citep{sandford_h2_1993, buch_interpretation_1994, dissly_h2-rich_1994}, but the
interpretation of this detection is that H$_2$ is mixed within H$_2$O-rich grains in conditions that are too warm to
allow the bulk of H$_2$ to condense \citep{kristensen_h_2011}.
High-resolution pictures of nearby planetary nebulae or supernova remnants have shown the presence of substellar
fragments \citep{walsh_imaging_1993}. These very cold globules, or knots, each of the size of a few tens of AU, are at
least as cold in the inner parts ($\sim 10\,\unit{K}$) as molecular clouds, but much smaller. An important feature is
that the apparent column density in these knots increases inwards as long as the resolution allows, or until the knot is
optically thin \citep{burkert_structure_1998}. If these trends extend to the centre, one can expect there would be much
higher density at colder conditions. It would be ideal to eventually reach a regime where H$_2$ could condense in liquid
or solid form, especially because at high column density the medium blocks UV radiation and cosmic ray heating.
At the level of molecular clouds, even though their average properties are well separated from the H$_2$ phase
transition, these are only static properties that ignore the highly dynamical nature of supersonic turbulence observed
in the interstellar medium, where fluctuations must be large. Lower temperatures can be reached with fast adiabatic
decompression alone. An example of fast decompression is provided by the Boomerang Nebula, which reaches a temperature
of only $1\,\unit{K}$ because of a fast expanding wind \citep{sahai_boomerang_1997}. Thus, to be consistent with
supersonic turbulence, one should expect regions of expansion and compression to be common in the ISM.
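For orientation (a standard thermodynamic estimate, added here for
completeness), an adiabatic decompression of an ideal gas follows
\begin{equation}
T \propto \rho^{\,\gamma-1},
\end{equation}
so for cold H$_2$, whose rotational degrees of freedom are frozen out
($\gamma=5/3$), an expansion by a factor of ten in density lowers the
temperature by a factor $10^{2/3}\simeq 4.6$. Transient turbulent expansions
can therefore bring gas substantially closer to the condensation curve of
Fig.~\ref{fcc} than the average conditions suggest.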
The fragmentation of self-gravitating gas leading to collapse has been studied for over a century. The molecular cloud
is normally represented as a self-gravitating, one-phase fluid (i.e. pure gas), which is governed by the balance between
gravity and gas pressure. The gravitational attraction of this fluid is directly proportional to its density, while the
stabilizing pressure response is set by the adiabatic derivative of the pressure with respect to density,
$(\partial P/ \partial\rho)_\name{s}$. It has been shown by
\citet{jeans_stability_1902} that if a perturbation with a long enough wavelength is introduced into the system, this
perturbation will grow exponentially. With growing perturbation, the pressure of the fluid will not be able to withstand
its gravity anymore, which will ultimately lead to gravitational collapse.
The approximation of molecular clouds as a one-phase fluid is valid in many cases, but when considering very cold,
high-density regions, it is an over-simplification. In these conditions, H$_2$ will start creating ice, which has to be
taken into account \citep{walker_snowflakes_2013}. The dynamics of this kind of fluid presenting a phase transition is
different from that of a one-phase fluid, the most important difference being a very low value of
$(\partial P/\partial\rho)_\name{s}$ \citep{johnston_thermodynamic_2014}. But, as $(\partial P/\partial\rho)_\name{s}$
is crucial for the stability of a fluid, a cold high-density fluid presenting a phase transition is
expected to be very unstable.
Based on the observation of substellar globules in planetary nebulae and the dynamics of fluids presenting a phase
transition, we may expect the presence of small, substellar H$_2$ ice fragments in molecular clouds due to the
fragmentation of cold high-density regions. It is of great astronomical interest to study the nature of this substellar
fragmentation, as the resulting bound objects may be too small to start nuclear fusion and, thanks to their very low
temperature, are very difficult to detect. Some of the baryonic dark matter may actually consist of these fragments
\citep{pfenniger_is_1994, pfenniger_is_1994-1}.
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/claus_clap.png}}
\caption{Phase diagrams of H$_2$ (bold line) and He (dotted line) in cold, low-pressure conditions.}
\label{fcc}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/mol.png}}
\caption{Critical points (bullet) and triple points ($\scriptstyle{\mathsf{Y}}$) of common molecules in the ISM. As for
H$_2$ in Fig.~\ref{fcc}, the sublimation curves cross the range of interstellar pressure conditions very steeply a few
K below the respective triple points. For example, CO should essentially be frozen below $\sim 20\,\unit{K}$, and thus
unable to emit rotational lines.}
\label{fmol}
\end{figure}
In this article, we study the physics of a self-gravitating van der Waals fluid presenting a phase transition
analytically and with simulations. In the analytic part (Sect.~\ref{sPT}), the physics of a van der Waals fluid and the
related Lennard-Jones (hereafter LJ) potential are recalled. We calculate the virial theorem taking both the
gravitational and the LJ potential into account. The virial analysis helps to characterize different types of
fluids. The stability of a self-gravitating van der Waals fluid presenting a phase transition is then analyzed.
In Sect.~\ref{sMD}, the molecular dynamics simulator LAMMPS \citep{plimpton_fast_1995} is introduced. By proper scaling
of physical constants, the Coulomb force solver is used to calculate the gravitational force, and the short-range
molecular force is calculated with the LJ force. We introduce the concept of super-molecules to enable us to perform
simulations with a total mass high enough for the fluid to be self-gravitating.
The simulations performed are discussed in Sect.~\ref{sS}. First, the correctness of the super-molecule approach is
tested. Second, one-phase fluids (i.e.\ pure gas) are used to test the Jeans criterion. Third, simulations close to a
phase transition are performed, studying the properties of non-gravitating fluids and fluids with a gravitational
potential above and below the Jeans criterion.
\section{Physics of a fluid presenting a phase transition}
\label{sPT}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/vdw2d.png}}
\caption{van der Waals phase diagram for a fluid with $T_\name{r}=0.9$. Dotted line: gas phase; dashed line: phase
transition; solid line: solid phase.}
\label{fvdw2d}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/vdw.png}}
\caption{van der Waals phase diagram for a fluid with a temperature (from bottom to top) of $T_\name{r}=0.2$,
$T_\name{r}=0.4$, $T_\name{r}=0.6$, $T_\name{r}=0.8$, $T_\name{r}=1.0$, and $T_\name{r}=1.2$. Solid line: van der
Waals EOS with Maxwell construct, dotted line: original van der Waals EOS.}
\label{fvdw}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/maxlab.png}}
\caption{H$_2$ and van der Waals phase diagrams. Solid line: H$_2$ laboratory data; dotted line: van der Waals vapour
curve derived with the Maxwell construct.}
\label{fmaxlab}
\end{figure}
In this section, we describe the physics of a self-gravitating fluid presenting a phase transition.
\subsection{van der Waals equation}
\label{ssVdW}
The van der Waals equation is a classical equation of state (EOS)\ of a pure fluid presenting a first order phase
transition. This EOS describes the macroscopic behaviour of a fluid in which at microscopic level the molecules are
strongly repulsive at short distances and beyond that weakly attractive over a limited range.
The van der Waals equation \citep{van_der_waals_remarks_1910,johnston_thermodynamic_2014} is a modification of the
ideal-gas law, taking the finite size of molecules and intermolecular interactions into account. It links pressure,
density, and temperature as follows:
\begin{equation}
P = \frac{k_\name{B}T\,n}{1 - bn}-an^2 \label{evdw} \ ,
\end{equation}
with $a\,[\unit{Pa\,m^6}]$ and $b\,[\unit{m^3}]$ being constants characteristic of the fluid. It can also be expressed
in a reduced, dimensionless form as,
\begin{equation}
P_\name{r} = \frac{8T_\name{r}}{\frac{3}{n_\name{r}}-1} - 3n_\name{r}^2 \ ,
\end{equation}
where $P_\name{r} = P / P_\name{c}$, $n_\name{r} = n / n_\name{c}$, and $T_\name{r} = T / T_\name{c}$. The parameters
$P_\name{c}$, $T_\name{c}$ and $n_\name{c}$ are the values of the thermodynamic critical point
\citep{kondepudi_modern_1998, carey_statistical_1999}.
Figure \ref{fvdw2d} shows the phase diagram for a fluid with $T = 0.9T_\name{c}$. One can distinguish the gaseous phase
(dotted line) and the solid phase (solid line); in between (dashed line), the fluid presents a phase transition. There
are states where $(\partial P/\partial \rho)_\name{s} < 0$, which is thermodynamically unstable. The parts of the dashed
curve where $(\partial P/\partial \rho)_\name{s} > 0$ are metastable because a lower free-energy state is reached when the
fluid splits into condensed and gaseous components. The fraction of condensed over gaseous phase grows from 0 to 1 from
left to right.
When the two phases coexist, the van der Waals EOS is replaced by a constant pressure marked by a horizontal line. This
constant pressure level is determined by the Maxwell ``equal area'' construct \citep{clerk-maxwell_dynamical_1875},
demanding a total zero $P\,\dd v$ work for an isothermal cycle between the fully gaseous and the fully condensed states.
Figure \ref{fvdw} shows the phase diagrams for fluids with a temperature from $T = 0.2 T_\name{c}$ to
$T = 1.2 T_\name{c}$. The dotted line shows the original van der Waals EOS whereas the solid line displays the modified
law using the Maxwell construct. There is no Maxwell construct for $T \geq T_\name{c}$ as
$(\partial P/\partial \rho)_\name{s}$ is always $\geq 0$. \cite{lekner_parametric_1982} provides a parametric solution
to the van der Waals and Maxwell construct, using $\Delta s$ as variable.
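As an illustration, the coexistence pressure can also be obtained numerically from the reduced EOS. The following
Python sketch is our own minimal illustration (not the computation used for the figures; variable names and the test
temperature $T_\name{r}=0.9$ are ours); it solves the equal-pressure and equal-area conditions using the analytic
antiderivative of the reduced van der Waals isotherm:
\begin{verbatim}
# Minimal sketch of the Maxwell equal-area construction for the reduced
# van der Waals EOS; T_r = 0.9 is an illustrative test value.
import numpy as np
from scipy.optimize import fsolve

def P_r(v, T):            # reduced pressure, with v = 1/n_r
    return 8.0*T/(3.0*v - 1.0) - 3.0/v**2

def A_r(v, T):            # analytic antiderivative of P_r(v)
    return (8.0*T/3.0)*np.log(3.0*v - 1.0) + 3.0/v

def maxwell(T, guess=(0.6, 2.5)):
    def eqs(x):
        vl, vg = x
        return [P_r(vl, T) - P_r(vg, T),              # equal pressures
                A_r(vg, T) - A_r(vl, T)
                - P_r(vl, T)*(vg - vl)]               # equal areas
    vl, vg = fsolve(eqs, guess)
    return vl, vg, P_r(vl, T)

vl, vg, Psat = maxwell(0.9)   # converges to P_sat ~ 0.647
print(vl, vg, Psat)
\end{verbatim}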
The phase equilibrium pressure is almost identical to the coexistence curve of laboratory data for H$_2$ over a wide
range of pressure (Fig.~\ref{fmaxlab}).
\subsubsection{Lennard-Jones forces}
\label{ssLJ}
\begin{figure}[th]
\resizebox{\hsize}{!}{\includegraphics{imgs/ULJ.png}}
\caption{Lennard-Jones potential.}
\label{fLJ}
\end{figure}
The van der Waals model of phase transition fluid is convenient for a continuum fluid description, but fails to
represent all the phenomena around the phase transition, which is the reason for correcting it with the above
mentioned Maxwell construct. Even with the Maxwell construct, however, a local thermal equilibrium is still supposed to
hold. In astrophysical contexts, often even these assumptions cannot be granted. For instance, in supersonic turbulent
situations, ubiquitous in the interstellar medium, thermal equilibrium cannot be satisfied since mechanical energy
propagates faster than thermal energy through pressure. Thus any method using quantities, like temperature or pressure,
implicitly assumes that a local thermal equilibrium is established, which makes their use in the supersonic turbulent
regime uncertain.
A much less demanding model of fluid is provided by molecular dynamics, where the simplest molecular interaction close
to the van der Waals model in equilibrium situations is provided by the Lennard-Jones (LJ) intermolecular potential
\citep{jones_determination_1924}. No local or global thermal equilibrium is required. The LJ potential consists of an
attractive long-range term and a repulsive short-range term (Fig.~\ref{fLJ}),
\begin{equation}
\Phi_{\name{LJ}}(r) = 4\frac{\epsilon}{m}\left(r_\sigma^{-12} - r_\sigma^{-6}\right) \ ,
\end{equation}
with $r_\sigma = r/\sigma$. Its minimum value, located at $r_\sigma = 2^{1/6}$, is $-\epsilon/m$.
\subsection{Virial theorem for a Lennard-Jones gravitating fluid}
\label{ssV}
\subsubsection{Lagrange-Jacobi identity}
The virial theorem \citep{clausius_ueber_1870} describes the statistical equilibrium of a system of interacting
particles or fluid systems, relating the kinetic and potential energies. In the case of a self-gravitating LJ fluid,
the LJ potential and gravity combine as a total potential $\Phi = \Phi_\name{G} +\Phi_\name{LJ}$.
The virial theorem for this new potential can be derived via the Lagrange-Jacobi identity: taking the second
time derivative of the polar moment of inertia $I \equiv \sum_i m_i\vec{r}_i^2$, we find,
\begin{equation}
\frac{1}{2} {\dd^2I \over \dd t^2} = \sum_i m_i\vec{\dot{r}}_i^2 + \sum_i m_i\vec{r}_i\cdot \vec{\ddot{r}}_i \ ,
\end{equation}
where the first right-hand term is twice the kinetic energy $E_\name{kin}$, and the second one is the virial term. For
a system near a statistical equilibrium, both sides of this equation should oscillate around 0, so the respective
time averages should vanish.
The LJ potential is a sum of two homogeneous functions
\footnote{By definition, a homogeneous function $f$ of degree $k$ satisfies
$f(\lambda \vec{x}) = \lambda^k f(\vec{x})$.}
$\Phi_\name{LJ}=\Phi_\name{LJ, a}+\Phi_\name{LJ, r}$ of degree $-6$ and $-12$ respectively, while gravity is of degree
$-1$. So we can use Euler's theorem of homogeneous functions to express the virial term as a sum of potential energies
multiplied by minus the homogeneous degree. The Lagrange-Jacobi identity becomes
\begin{equation}
\label{eVir}
{1 \over 2} {\dd^2I \over \dd t^2} = \underbrace{2 E_\name{kin}}_{>0} + \underbrace{12 E_\name{pot, LJ, r}}_{>0} +
\underbrace{6 E_\name{pot, LJ, a}}_{<0} + \underbrace{E_\name{pot, G}}_{<0} \ .
\end{equation}
\subsubsection{Homogeneous sphere}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/MT.png}}
\caption{Minimum mass of isothermal equilibrium curves for H$_2$ homogeneous spheres.}
\label{fMT}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/Mrh.png}}
\caption{Virial equilibrium of gravitating LJ homogeneous isothermal spheres with H$_2$ molecules arranged on a
HCP/FCC lattice. The sphere mass as a function of density is plotted for temperatures ranging from
$\tau \equiv T/T_\name{max} = 10^{-2}$ to $10^2$.}
\label{fvirial-ratios}
\end{figure}
For homogeneous, finite mass spheres at a given temperature, the individual terms of the Lagrange-Jacobi identity read,
\begin{eqnarray}
2E_\name{kin} &=& {3 k_\name{B}T\over m} M \ ,\\
12 E_\name{pot, LJ, r} &=& 12 c_\name{r} \, \,{\epsilon \sigma^{12} \rho^4\over m^5} \,M \ , \label{eELJr}\\
6 E_\name{pot, LJ, a} &=& -6 c_\name{a} \,\, {\epsilon \sigma^6 \rho^2\over m^3} \,M \ , \label{eELJa}\\
E_\name{pot,G} &=& -G \left({36 \pi \rho \over 125} \right)^{1/3} M^{5/3} \ , \label{eEgrav}
\end{eqnarray}
where the constants $c_\name{r} $ and $c_\name{a}$ in the LJ terms have been calculated by straight summation over the
$1.5\cdot 10^{12}$ nearest molecules, as given in Table \ref{tLC} for both the simple cubic (SC) and hexagonal close
packed (HCP) or face-centred cubic (FCC) lattices.
\begin{table}[ht]
\caption{Lattice constants.}
\label{tLC}
\centering
\begin{tabular}{l l l}
\hline\hline
Lattice & $c_\name{r} $ & $c_\name{a} $ \rule{0pt}{2.6ex}\\
\hline
SC & $24.8085962$ & $33.6076959$ \rule{0pt}{2.6ex}\\
HCP/FCC & $48.5275208$ & $57.8156842$ \\
\hline
\end{tabular}
\end{table}
The virial equation, obtained by setting ${\dd^2I / \dd t^2 = 0}$ in the Lagrange-Jacobi identity, contains terms
proportional to $M$ except the gravity term, which is proportional to $M^{5/3}$. Dividing the equation by $M$ we can
separate $M^{2/3}$, yielding,
\begin{equation}
M^{2/3} = \left(375 \over 4 \pi \rho\right)^{1/3} {1 \over G m}
\left[k_\name{B}T + 4 c_\name{r}\epsilon\left({\sigma^3\rho\over m}\right)^4
-2 c_\name{a} \epsilon\left({\sigma^3\rho\over m}\right)^2
\right],
\end{equation}
which is indeed positive if the sum of the terms inside the brackets is also positive. However, the term proportional
to $c_\name{a}$ is negative, and when it is large no positive solution for $M^{2/3}$ may exist.
Setting the bracket to zero and solving for $\left(\sigma^3\rho/m\right)^2$ yields a quadratic equation,
whose solutions give the densities $\rho_0$ for which $M$ vanishes,
\begin{equation}
\left({\sigma^3\rho_0\over m}\right)^2 = {1 \pm \sqrt{1-4\,{c_\name{r} \over c_\name{a}^2}{k_\name{B}T \over \epsilon}}
\over 4 \,{c_\name{r}\over c_\name{a} } }.
\end{equation}
The $+$ and $-$ solutions are real non-negative when the term inside the square root above is non-negative. The maximum
temperature at which $M$ can vanish is thus,
\begin{equation}
\label{eTmax}
{k_\name{B} T_\name{max} \over \epsilon }= {c_\name{a}^2 \over 4 c_\name{r} } .
\end{equation}
Since the critical temperature for a phase transition in the absence of gravity is about $\epsilon/k_\name{B}$, we see
that $T_\name{max}$ can be substantially larger than the critical temperature. With the values given in Table 1 for the
constants $c_\name{a}$ and $c_\name{r}$, the maximum temperatures below which gravity combined with molecular forces
enhances fragmentation are $T_\name{max}=414.3\,\name{K}$ and $626.8\,\name{K}$ for the SC and HCP/FCC lattices,
respectively, which are $11.38$ and $17.22$ times larger than $\epsilon/k_\name{B}$.
The corresponding density $\rho_\mathrm{f}$ at which fragmentation can occur at \textit{arbitrarily small mass} is
\begin{equation}
\label{erho0}
\rho_\mathrm{f} = {m \over \sigma^3} \sqrt{ c_\name{a} \over 4 c_\name{r} }\ ,
\end{equation}
which is of the order of the individual molecule density $m / \sigma^3$.
The Lagrange-Jacobi identity therefore allows us to predict a density $\rho_\mathrm{f}$ and a maximum temperature
$T_\name{max}$ below which a homogeneous sphere made of LJ molecules can find a gravitational equilibrium at an
arbitrarily small mass: in a way, this provides a simple model, an explanation, for the formation of substellar ice
clumps out of a cold self-gravitating gas. For H$_2$ molecules arranged on a SC or HCP/FCC lattice, we find
$\rho_\mathrm{f}= 73.8\,\name{kg\,m^{-3}}$ and $69.2\,\name{kg\,m^{-3}}$, respectively. These densities are of the
order or slightly below the solid or liquid H$_2$ density ($\approx 80\,\name{kg\,m^{-3}}$).
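These numbers can be checked directly from Equ.~(\ref{eTmax}) and (\ref{erho0}) with the lattice constants of
Table~\ref{tLC}. The short Python sketch below is our own check; the H$_2$ LJ parameters used in it
($\epsilon/k_\name{B} \approx 36.7\,\name{K}$, $\sigma \approx 2.96\,$\AA, $m = 3.35\cdot10^{-27}\,\name{kg}$) are
typical literature values and may differ slightly from those adopted in the text:
\begin{verbatim}
# Check of T_max (eTmax) and rho_f (erho0); the H2 LJ parameters are
# assumed literature values, so small deviations from the quoted
# 414.3/626.8 K and 73.8/69.2 kg m^-3 are expected.
eps_over_kB = 36.7        # K (assumed)
sigma = 2.96e-10          # m (assumed)
m = 3.35e-27              # kg (H2)
for name, c_r, c_a in [("SC", 24.8085962, 33.6076959),
                       ("HCP/FCC", 48.5275208, 57.8156842)]:
    T_max = eps_over_kB * c_a**2 / (4.0*c_r)        # ~418 K / ~632 K
    rho_f = (m/sigma**3) * (c_a/(4.0*c_r))**0.5     # ~75 / ~70 kg m^-3
    print(name, T_max, rho_f)
\end{verbatim}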
At temperatures $T > T_\name{max}$ the isothermal equilibrium curves have a minimum mass when ${\dd M/\dd\rho}|_\name{T}=0$,
which is expressed as
\begin{equation}
M_\name{min}^2 = K {\epsilon^3 \sigma^3\over G^3 m^4}
{c_\name{a}^{11/2}\over c_\name{r}^{5/2}} { \left[ {11 \over 5 }\tau
- \left( 1 + \sqrt{ 1+{11 \over 25 }\tau }\right) \right]^{3} \over \sqrt{1+\sqrt{1
+{11\over 25}\tau }}}
\end{equation}
where $\tau = T/T_\name{max}$ and
\begin{equation}
K = {81\over 2\pi} \left(5 \over 11\right)^{11/2} \approx 0.1686476934 \ .
\end{equation}
Figure \ref{fMT} shows the minimum mass as a function of the temperature for HCP/FCC H$_2$ homogeneous spheres. It
rises very steeply just above $T \approx 627\,\name{K}$, but much more slowly beyond $\sim 1000\,\name{K}$. Interestingly,
the mass range includes all masses below terrestrial planet masses for temperatures below H$_2$ dissociation.
\subsubsection{Condensed bodies}
Different condensed and uncondensed bodies can be identified using the Lagrange-Jacobi identity, as summarized in
Table~\ref{tLJ}.
\begin{table}[ht]
\caption{Condensed and uncondensed bodies in a LJ fluid using the Lagrange-Jacobi identity.}
\label{tLJ}
\centering
\begin{tabular}{l | l}
\hline\hline
Dominating Terms & Name \rule{0pt}{2.6ex}\\
\hline
$2E_\name{kin} + 6 E_\name{pot, LJ, a}$& ``molecular gas'' \rule{0pt}{2.6ex}\\
$12 E_\name{pot, LJ, r} + 6 E_\name{pot, LJ, a}$& ``comet''\\
$12 E_\name{pot, LJ, r} + E_\name{pot,G}$& ``rocky planetoid''\\
$2E_\name{kin} + E_\name{pot,G}$& ``gaseous planetoid''\\
\hline
\end{tabular}
\end{table}
Figure \ref{fvirial-ratios} shows some $M(\rho,T)$ equilibrium curves of gravitating homogeneous spheres whose molecules
are arranged on a HCP/FCC lattice, which represents the most compact sphere packing at high density. At low
density, on the left of the diagram, the equilibrium is principally fixed by the attractive part of the molecular
interaction and the temperature, and the exact lattice structure is irrelevant. This is the domain of uncondensed
``molecular gas''.
On the right, there is an accumulation curve at approximately $\rho \approx 100 \, \name{kg\,m^{-3}}$ representing
purely condensed H$_2$ bodies mainly balancing gravity with the repulsive part of the molecular interaction. The area
between the dashed line, connecting the minimum of the isotherms with $T > T_\name{max}$, and the accumulation curve is
the domain of ``rocky planetoids''.
At temperatures below $T_\name{max}=626.8\,\name{K}$, the equilibrium curves plunge to arbitrarily small masses along two
regimes: the left vertical asymptotes represent the limit gaseous bodies, and the right asymptotes the condensed (solid
or liquid) bodies called ``comets''. The area of the ``comets'' lies between the dotted line, connecting the elbows of
the isotherms with $T < T_\name{max}$, and the dashed line.
At high temperature and low density, to the left of both the dotted and dashed lines and above the isotherm lines, lie
the ``gaseous planetoids''. These bodies' equilibrium is fixed principally by gravity and temperature.
Of course, real astrophysical bodies are not homogeneous. In practice, in the small-mass regime one can expect bodies
made of a condensed core surrounded by a less dense gaseous atmosphere, which could be calculated by
solving the hydrostatic equilibrium, see \citet{pfenniger_cold_2004}, where some models of isothermal bodies made of
H$_2$ containing a solid core and a gaseous atmosphere have been discussed.
\subsection{Gravitational instability}
\label{ssGI}
The linearized wave equation for the isentropic density perturbation $\rho_\name{per}$ of a self-gravitating fluid of
density $\rho$ is the following \citep{jeans_stability_1902,weinberg_gravitation_1972}:
\begin{equation}
\frac{\partial^2\rho_\name{per}}{\partial t^2} - \left(\frac{\partial P}{\partial \rho}\right)_\name{s} \nabla^2\rho_\name{per} =
4\pi G \rho\rho_\name{per}\ .
\end{equation}
Its solution is a superposition of modes of the form $\exp\left(i(\vec k\cdot \vec x - \omega t) \right)$, where
$\omega$ is the frequency and $k = {2\pi}/{\lambda}$ the wavenumber, and where $\lambda$ is the wavelength. This leads
to the instability condition \citep{jeans_stability_1902},
\begin{equation}
\label{eo}
\omega^2 = \left(\frac{\partial P}{\partial \rho}\right)_\name{s} k^2 - 4\pi G\rho < 0 \ .
\end{equation}
When it is fulfilled, the modes are real-exponential, and therefore unstable. The classical Jeans criterion reads,
\begin{equation}
\lambda >
\lambda_\name{J} \equiv
\sqrt{ \left( \frac{\partial P}{\partial \rho} \right)_\name{s} \frac{\pi}{G\rho}
} \ .
\label{eJeans}
\end{equation}
Therefore the effective gravitational instability is directly dependent on the EOS of the medium.
\subsubsection{Ideal gas}
In most textbooks on the Jeans instability, only the ideal gas is discussed. In the case of an ideal monoatomic gas, the
pressure derivative term is
\begin{equation}
\left(\frac{\partial P}{\partial \rho}\right)_\name{s} = \gamma\frac{k_\name{B}T}{m} \ ,
\label{eig}
\end{equation}
with the adiabatic index $\gamma = {5}/{3}$. This value is always positive, which leads to the classical, ideal
monoatomic gas Jeans criterion for instability,
\begin{equation}
\lambda > \lambda_\name{J} = \sqrt{\frac{\pi \gamma k_\name{B}T}{G\rho m}} \ .
\label{eJeansideal}
\end{equation}
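As a numerical illustration (our own, with representative molecular-cloud values $n = 10^{10}\,\unit{m^{-3}}$ and
$T = 10\,\unit{K}$), the ideal-gas Jeans length of Equ.~(\ref{eJeansideal}) can be evaluated as follows:
\begin{verbatim}
# Ideal-gas Jeans length; n and T are illustrative molecular-cloud values.
import numpy as np
G, k_B, m = 6.674e-11, 1.380649e-23, 3.35e-27   # SI units; m is the H2 mass
gamma = 5.0/3.0

def jeans_length(n, T):
    rho = n*m
    return np.sqrt(np.pi*gamma*k_B*T/(G*rho*m))

lam = jeans_length(1e10, 10.0)
print(lam, "m =", lam/3.086e16, "pc")           # ~1e16 m, i.e. ~0.3 pc
\end{verbatim}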
\subsubsection{Fluid with phase transition}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/vdw3d.png}}
\caption{van der Waals EOS, including the Maxwell construct. }
\label{fgravin}
\end{figure}
When considering a fluid presenting a phase transition the term $\left(\partial P /\partial \rho \right)_\name{s}$ may
differ from that of an ideal gas. As seen before, in the case of the van der Waals EOS, the Maxwell construct must be
used, which may considerably change the combined stability of the fluid (Sect. \ref{ssVdW}). In the presence of a
phase transition, the EOS gradient is strongly modified, in particular $(\partial P / \partial \rho)_\name{T} = 0$.
Using
\begin{equation}
\left(\partial P \over \partial \rho\right)_\name{s} = {c_\name{P}\over c_\name{v}}\left(\partial P \over \partial \rho\right)_\name{T}
\end{equation}
with $c_\name{P}$ and $c_\name{v}$ both finite values, we find $\left(\partial P /\partial \rho \right)_\name{s} = 0$.
Therefore, a self-gravitating fluid in a phase transition is also automatically gravitationally unstable.
Figure \ref{fgravin} shows a 3D representation of the EOS including the Maxwell construct. Clearly the curved
triangular phase transition region (on the left) has a very different gradient than the almost ideal-gas
region at low density and high temperature, or the condensed phase region at high density. It is remarkable that a
temperature drop by one order of magnitude below the critical temperature leads to a drop in the saturation pressure by
14 orders of magnitude. For example, for H$_2$ at $3\,\unit{K}$ the saturation pressure is $6.4\cdot 10^{-9}\,\unit{Pa}$.
\section{Molecular dynamics} \label{sMD}
For all simulations, the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is used
\citep{plimpton_fast_1995}. The LAMMPS software is a state-of-the-art and widely used single- and multi-processor code in
chemistry, material sciences, and related fields. Its ability to quickly compute short- and long-range forces and the
possibility to use a multi-timescale integrator make it a suitable tool for our simulations.
\subsection{Potential Solver}
\label{ssPS}
The LAMMPS code has a wide range of force fields. For the simulations in this article we use only the LJ and
self-gravitational potentials.
Since the straight calculation cost for exact pairwise interactions is $\name{O}(N^2)$, approximate but still accurate
methods are implemented that make the calculations possible at a much lower cost. A cut-off radius $r_\name{c}$ is used
for the short-range forces. As the attractive part of the LJ potential drops with $r^{-6}$, it is calculated only for
neighbour particles within $r_\name{c}$. The neighbours for each particle are found using a Verlet neighbour list. This
list is created with a radius of $r_\name{n} = r_\name{c} + r_\name{s}$, $r_\name{s}$ being an extra ``skin'' distance
to avoid the recalculation of the Verlet list at every time-step. This cut-off method is $\name{O}(N)$ and therefore
scales linearly with the number of particles.
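The following Python fragment is a toy illustration of this cut-off strategy (our sketch, not the LAMMPS
implementation): pairs within $r_\name{n} = r_\name{c} + r_\name{s}$ are gathered in a neighbour list, and the LJ
forces are then evaluated only for pairs inside the cut-off:
\begin{verbatim}
# Toy O(N) cut-off LJ forces with a neighbour list (a k-d tree stands in
# for the Verlet list); just the idea in a few lines, not the LAMMPS code.
import numpy as np
from scipy.spatial import cKDTree

def lj_forces(pos, sigma=1.0, eps=1.0, r_c=4.0, skin=0.3):
    pairs = cKDTree(pos).query_pairs(r_c + skin, output_type='ndarray')
    F = np.zeros_like(pos)
    d = pos[pairs[:, 0]] - pos[pairs[:, 1]]
    r = np.linalg.norm(d, axis=1)
    keep = r < r_c                          # apply the actual cut-off
    d, r = d[keep], r[keep]
    i, j = pairs[keep, 0], pairs[keep, 1]
    # |F|/r for U = 4 eps ((sigma/r)^12 - (sigma/r)^6)
    f = 24.0*eps*(2.0*(sigma/r)**12 - (sigma/r)**6)/r**2
    np.add.at(F, i,  f[:, None]*d)
    np.add.at(F, j, -f[:, None]*d)
    return F
\end{verbatim}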
The gravitational potential drops with $r^{-1}$, so the long-range interactions cannot be ignored. It is calculated using
the Particle-Particle Particle-Mesh (P$^3$M) method \citep{hockney_computer_1981}. The gravitational potential is split into short-range
and long-range parts in Fourier space \citep{ewald_berechnung_1921}:
\begin{equation}
\hat{\Phi} = \hat{\Phi}_\name{SR}\left(1 - \exp(-k^2r_\name{s}^2)\right) +
\hat{\Phi}_\name{LR}\,\exp(-k^2r_\name{s}^2) \ ,
\end{equation}
where $r_\name{s}$ is the splitting distance. The potential $\Phi_\name{SR}$ is calculated at the same time as the LJ
potential, using the same Verlet list. The potential $\Phi_\name{LR}$ is calculated in Fourier space using a fast
Fourier transform (FFT).
As LAMMPS is designed for the simulation of chemical substances not far from terrestrial conditions, where self-gravity
is far too weak to be of any influence, no specific self-gravity module is provided. For that reason, a workaround is
used: the Coulomb potential module for an electric field,
\begin{equation}
\Phi_\name{C} = \frac{Cq_iq_j}{\epsilon r} \ ,
\end{equation}
where $C$ is the interaction constant, $\epsilon$ the dielectric constant, and $q_{i, j}$ the charges of the molecules.
As $C$ is fixed in LAMMPS, to correctly calculate the gravitational potential we set the constants as follows:
\begin{eqnarray}
\epsilon &=& -1 \ ,
\\ q_i &=& \sqrt{\frac{G}{C}}m_i \ .
\end{eqnarray}
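A one-line check (ours) confirms that this substitution reproduces the gravitational pair energy: with $\epsilon=-1$
and $q_i=\sqrt{G/C}\,m_i$, the Coulomb energy $Cq_iq_j/(\epsilon r)$ reduces to $-Gm_im_j/r$, as required:
\begin{verbatim}
# Sanity check of the Coulomb-to-gravity mapping; C and the masses are
# arbitrary test values.
G, C = 6.674e-11, 8.988e9
m1, m2, r = 2.0, 3.0, 1.5
q1, q2 = (G/C)**0.5*m1, (G/C)**0.5*m2
assert abs(C*q1*q2/(-1.0*r) - (-G*m1*m2/r)) < 1e-25
\end{verbatim}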
\subsection{Time integration}
\label{ssTI}
The time integration is done using the symplectic leapfrog scheme. The drift and kick operators,
\begin{eqnarray}
D(\Delta t) &\equiv& x(t + \Delta t) = x(t) + \Delta t \dot{x}(t) \ ,\\
K(\Delta t) &\equiv& \dot{x}(t + \Delta t) = \dot{x}(t) + \Delta t \ddot{x}(t) \ ,
\end{eqnarray}
are applied according to the sequence
$K\left(\frac{\Delta t}{2}\right)\, D(\Delta t) \, K\left(\frac{\Delta t}{2}\right)$ at each elementary time-step. For
the short-range force, the time-step is constant throughout the simulation and set to $10^{-2}$ -- $10^{-3}$, of the
order of the typical interaction timescale during nearest-neighbour molecular interactions. This is reasonable since the repulsive
interaction and the finite kinetic energy prevent strongly varying accelerations between particles.
Small changes of individual particle positions can significantly change the short-range forces, which is why a very
small time-step is required. But these small changes have almost no influence on long-range forces. For this reason,
there is no need to calculate the long-range forces at every time-step.
Compared with short-range calculations, long-range calculations are an order of magnitude more costly per step but always
need the same amount of calculation time: creation of the density map with a fifth-order interpolation procedure, an FFT
solution of the potential, and the same interpolation procedure for finding the accelerations.
With the ``reversible reference system propagator algorithm'' (rRESPA), a multiple timescale integrator is available for
LAMMPS that enables us to reduce the number of long-range force calculations \citep{tuckerman_reversible_1992}. It
enables time integration in up to four hierarchical levels, but only two are needed for the simulations we present. The
rRESPA time-integration-scheme looks as follows:
\begin{equation}
K_\name{LR}\left(\frac{\Delta t}{2}\right) \,
\left[K_\name{SR} \left(\frac{\Delta t}{2n_\name{SR}}\right)\, D\left(\frac{\Delta t}{n_\name{SR}}\right)\,
K_\name{SR} \left(\frac{\Delta t}{2n_\name{SR}}\right)
\right]^{n_\name{SR}}
K_\name{LR}\left(\frac{\Delta t}{2}\right) \ ,
\end{equation}
where $n_\name{SR}$ is the number of short-range iterations and set to $10$ -- $100$, $K_\name{LR}$ is the kick operator
using the long-range (FFT) accelerations, and $K_\name{SR}$ the kick operator using the short-range accelerations.
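In pseudo-Python form (our sketch; \texttt{a\_sr} and \texttt{a\_lr} stand for hypothetical user-supplied short- and
long-range acceleration callbacks), one such two-level step reads:
\begin{verbatim}
# One two-level rRESPA step, following the scheme above.
def rrespa_step(x, v, dt, n_sr, a_sr, a_lr):
    v = v + 0.5*dt*a_lr(x)           # outer long-range kick
    h = dt/n_sr
    for _ in range(n_sr):            # inner short-range KDK loop
        v = v + 0.5*h*a_sr(x)
        x = x + h*v                  # drift
        v = v + 0.5*h*a_sr(x)
    v = v + 0.5*dt*a_lr(x)           # outer long-range kick
    return x, v
\end{verbatim}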
\subsection{Super-molecules}
\label{ssSM}
The maximum number of particles that can be simulated on today's supercomputers is of the order of $10^9$ --
$10^{10}$. Even using such a huge number of molecules would only result in a fluid with a total mass of a few
femtograms. It is obvious that gravity has no effect on this kind of fluid. For that reason, the concept of
super-molecules is introduced. This concept is well established in galactic dynamics simulations, where
super-stars typically weigh $10^4$ -- $10^6\,\unit{M}_\odot$, and in cosmology, where super-WIMPs may weigh as
much as $\sim 10^{67}\,\name{GeV}\,c^{-2}$. Two-body relaxation or diffusion due to the low number of super-particles
is negligible, provided the simulation time is not too large, depending on the specific problem. Practice has shown
that this kind of approximation is valid in galactic dynamics, for instance, if the simulation length is of order 100
dynamical times \citep{binney_galactic_2008}.
The basic principle of the concept is that each super-molecule consists of $\eta$ molecules, and its mass is therefore,
\begin{equation}
m_{\mathrm{SM}} = \eta\, m_\mathrm{M} \ .
\label{emSM}
\end{equation}
For a super-molecule fluid to have the same properties as a normal fluid, every term of the virial Equ.\ (\ref{eVir}) has
to be transformed such that the ratios of the terms are invariant. The kinetic energy term reads
\begin{equation}
2 E_\name{kin} = N_\mathrm{M} m_\name{M}\langle{v_\name{M}^2}\rangle = N_\mathrm{SM} m_\name{SM}\langle{v_\name{SM}^2}\rangle \ .
\end{equation}
Since the total mass is constant, $N_\mathrm{M} m_\name{M} = N_\mathrm{SM} m_\name{SM}$, the velocity dispersion of
molecules and super-molecules is also the same,
\begin{equation}
\langle{v_\mathrm{SM}^2}\rangle = \langle{v_\name{M}^2}\rangle \ .
\end{equation}
The two LJ terms, Equ.\ (\ref{eELJr}--\ref{eELJa}), remain the same if
\begin{eqnarray}
{\epsilon_\name{M}\sigma_\name{M}^{12} \over m_\name{M}^5} &=& {\epsilon_\mathrm{SM}\sigma_\mathrm{SM}^{12} \over m_\mathrm{SM}^5}\ ,\\
{\epsilon_\name{M}\sigma_\name{M}^6 \over m_\name{M}^3} &=& {\epsilon_\mathrm{SM}\sigma_\mathrm{SM}^6 \over m_\mathrm{SM}^3} \ .
\end{eqnarray}
Solving these two equations and using Equ.\ (\ref{emSM}), we derive
\begin{eqnarray}
\epsilon_\mathrm{SM} &=& \eta \,\epsilon_\name{M} \ ,\\
\sigma_\mathrm{SM} &=& \sqrt[3]{\eta}\sigma_\name{M} \ .
\end{eqnarray}
The gravitational energy Equ.\ (\ref{eEgrav}) does not change since it does not depend on super-molecule properties.
It is important to ensure that the gravitational force between two super-molecules remains small compared to the
corresponding LJ force within the short-range force cut-off radius. This is assured by requiring
$F_\name{G}(r_\name{c}) \ll F_\name{LJ}(r_\name{c})$, with $r_\name{c}$ being the cut-off radius. This leads to the
following constraint:
\begin{equation}
\label{exglj}
\eta^\frac{2}{3} \ll {24\epsilon_\name{M}\sigma_\name{M} \over G\,m_\name{M}^2}
\left( r_\name{c,SM}^{-5} - 2r_\name{c,SM}^{-11} \right) \ ,
\end{equation}
where $r_\name{c,SM} = (r_\name{c} /\sigma_\name{SM})$. Using a cut-off radius of $r_\name{c} = 4\,\sigma$ and hydrogen
super-molecules, we find a maximum super-molecule mass equal to $5.7 \cdot 10^{-6}\,M_\oplus$, meaning at least
$1.7\cdot 10^5$ particles are required for simulating an Earth-mass, virialized, H$_2$-condensed body.
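This bound can be reproduced with a few lines of Python (our check; as before, the H$_2$ LJ parameters are assumed
literature values):
\begin{verbatim}
# Maximum super-molecule mass from Equ. (exglj) with r_c = 4 sigma_SM.
G = 6.674e-11
m_M, sigma_M = 3.35e-27, 2.96e-10      # kg, m (assumed H2 values)
eps_M = 36.7*1.380649e-23              # J (assumed)
r_csm = 4.0
rhs = 24.0*eps_M*sigma_M/(G*m_M**2)*(r_csm**-5 - 2.0*r_csm**-11)
eta_max = rhs**1.5                     # from eta^(2/3) << rhs
print(eta_max*m_M/5.97e24, "M_earth")  # ~5.7e-6 M_earth, as in the text
\end{verbatim}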
\subsection{Ice clump detection}
\label{ssIG}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/dMax.png}}
\caption{Mean kinetic energy as a function of the maximum displacement in an ice clump.}
\label{fdMax}
\end{figure}
The LJ potential has its minimum at $r_m = 2^{1/6}\sigma$. If an ice clump were at absolute zero temperature, all bound
molecules would be at this distance from at least one other molecule. But, as the molecules in a clump vibrate, the
maximum binding distance between two molecules is generally larger than $r_m$.
The simplest clump consists of two molecules. Their mean kinetic energy as a function of the maximum displacement can be
calculated numerically (Fig.~\ref{fdMax}).
The mean kinetic energy rises up to a maximum at $r_{\sigma, \name{max}} = 1.3625$, after which it drops
again. It can be assumed that all stable clumps have a binding distance below this value. The distance criterion for
two molecules to be bound is therefore:
\begin{equation}
r_\sigma \leq r_{\sigma, \name{max}} \ .
\end{equation}
Using $r_{\sigma, \name{max}}$ as the threshold distance, LAMMPS provides a list of clumps at fixed time intervals.
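A minimal stand-alone version of such a clump finder (our sketch; LAMMPS uses its own internal cluster computation)
links all particle pairs closer than $r_{\sigma,\name{max}}\sigma$ and extracts the connected components:
\begin{verbatim}
# Friends-of-friends clump finder with the bound-distance criterion.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def find_clumps(pos, sigma=1.0, r_sigma_max=1.3625):
    n = len(pos)
    pairs = np.array(list(cKDTree(pos).query_pairs(r_sigma_max*sigma)))
    if pairs.size == 0:
        return np.arange(n)          # every particle is its own "clump"
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return labels                    # clump index for each particle
\end{verbatim}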
\section{Simulations}
\label{sS}
\begin{table}[ht]
\caption{Parameters of the super-molecule test simulations.}
\label{tSM}
\centering
\begin{tabular}{l l l l}
\hline\hline
Name & $n/n_\name{cr}$ & $T/T_\name{cr}$ & $N_\name{SM}$\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
\hline
SM15 & $0.006$ & $1.5$ & $4^3$ -- $200^3$\rule{0pt}{2.6ex}\\
SM30 & $0.006$ & $3.0$ & $4^3$ -- $200^3$\\
SM60 & $0.006$ & $6.0$ & $4^3$ -- $200^3$\\
\hline
SM04 & $0.1$ & $0.4$ & $25^3$ -- $200^3$\rule{0pt}{2.6ex}\\
SM06 & $0.1$ & $0.6$ & $25^3$ -- $200^3$\\
\hline
SF\tablefootmark{a} & $0.1$ & $0.1$ & $50^3$ -- $160^3$\rule{0pt}{2.6ex}\\
\hline
\end{tabular}
\tablefoot{All simulations are without gravity and use the basic KDK time integration scheme.\\
\tablefoottext{a}{External gravity $a=0.1\,(L / \tau^2)$}}
\medskip
\caption{Parameters of the one-phase fluid simulations.}
\label{tOP}
\centering
\begin{tabular}{l l l l l }
\hline\hline
Name & $n/n_\name{cr}$ & $T/T_\name{cr}$ & $N_\name{SM}$ & $\gamma_\name{J}$\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
\hline
OP0 & $10^{-2}$ & $1.25$ & $100^3$ & $0$ \rule{0pt}{2.6ex}\\
OPGw & $10^{-2}$ & $1.25$ & $100^3$ & $0.5$\\
OPGs & $10^{-2}$ & $1.25$ & $50^3$ -- $160^3$& $1.25$\\
\hline
\end{tabular}
\tablefoot{ OP0 without gravity and using the KDK time integration scheme. All gravitational simulations use the P$^3$M
gravitational solver and the rRESPA time scheme.}
\medskip
\caption{Parameters of the phase transition simulations.}
\label{tTP}
\centering
\begin{tabular}{l l l l l}
\hline\hline
Name & $n/n_\name{cr}$ & $T/T_\name{cr}$ & $N_\name{SM}$ & $\gamma_\name{J}$\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
\hline
\emph{OPGs} &$10^{-2}$ & $1.25$ & $50^3$ -- $160^3$ & $0$, $0.5$, $1.25$\rule{0pt}{2.6ex}\\
\hline
PT1-1 & $10^{-1}$ & $0.1$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\rule{0pt}{2.6ex}\\
PT2-1 & $10^{-1}$ & $0.2$ &$50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
PT3-1 & $10^{-1}$ & $0.3$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
\hline
PT1-2 & $10^{-2}$ & $0.1$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\rule{0pt}{2.6ex}\\
PT2-2 & $10^{-2}$ & $0.2$ &$50^3$ -- $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
PT3-2 & $10^{-2}$ & $0.3$ & $50^3$ -- $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
PT5-2 & $10^{-2}$ & $0.5$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
PT7-2 & $10^{-2}$ & $0.7$ &$50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
PT9-2 & $10^{-2}$ & $0.9$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
\hline
PT1-3 & $10^{-3}$ & $0.1$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\rule{0pt}{2.6ex}\\
PT2-3 & $10^{-3}$ & $0.2$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
PT3-3 & $10^{-3}$ & $0.3$ & $50^3$, $100^3$ & $0$, $0.5$, $G=G_\name{OPGs}$\\
\hline
\end{tabular}
\tablefoot{All simulations in two different runs: without gravity and perturbation, using basic KDK time integration scheme,
and with gravity and perturbation, using the P$^3$M gravitational solver and the rRESPA time scheme.}
\end{table}
\subsection{Units}
A fluid is defined by the number of super-molecules $N_\name{SM}$, the initial velocity distribution (temperature), the
number density, and the strength of the gravitational potential. All simulation properties are molecule-independent: the
initial temperature and density are measured as fractions of the critical values $T_\name{cr}$ and $n_\name{cr}$, and the
gravitational potential is measured by the factor $\gamma_\name{J} = G / G_\name{J}$ of the ideal-gas Jeans gravity
$G_\name{J}$, defined as
\begin{equation}
G_\name{J} = {\pi\gamma k_\name{B} T_0\over n m^2 L^2}
\end{equation}
with the box side $L = (N_\name{SM}/n)^{1/3}$. It is interesting to note that $G_\name{J}$ is proportional to
temperature and inversely proportional to density. For all simulations, the time unit is defined as the average time
for a particle to cross the box,
\begin{equation}
\tau = \frac{L}{V} \ ,
\end{equation}
with $V^2 = \sum_i {v_x}_i^2/N$.
The distance at which the gravitational and LJ forces are equal is denoted $x_\name{GLJ}$, defined implicitly by
\begin{equation}
\label{exGLJ}
F_\name{G}(x_\name{GLJ}) = F_\name{LJ}(x_\name{GLJ}) \ .
\end{equation}
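In simulation units, these definitions translate into the following helper functions (our own illustration):
\begin{verbatim}
# Reduced-unit helpers for G_J and tau; k_B = 1 in simulation units and
# gamma = 5/3 as in the ideal-gas Jeans criterion.
import numpy as np

def G_jeans(T0, n, m, N_SM, gamma=5.0/3.0, k_B=1.0):
    L = (N_SM/n)**(1.0/3.0)              # box side
    return np.pi*gamma*k_B*T0/(n*m**2*L**2)

def crossing_time(L, vx):                # tau = L/V, V^2 = sum(vx^2)/N
    return L/np.sqrt(np.mean(vx**2))
\end{verbatim}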
\subsection{Super-molecule concept tests}
\label{ssSMS}
\subsubsection{Potential energy}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/Epot_SM.png}}
\caption[LJ potential energy as a function of the number of super-molecules at $t = 1\tau$. Solid line: SM15; dashed
line: SM30; dotted line: SM60.]{LJ potential energy as a function of the number of super-molecules at $t = 1\tau$.
Solid line: SM15$^a$; dashed line: SM30$^a$; dotted line:
SM60$^a$. \\ $^{(a)}$ See Table \ref{tSM}}
\label{fEp}
\end{figure}
A fluid in a cubic box with periodic boundary conditions and initial constant density of $n = 0.006\,n_\name{cr}$ is
simulated with LAMMPS. Three different initial Maxwellian velocity distributions with temperatures $T=1.5$, $3.0$, and
$6.0\,T_\name{cr}$, and numbers of particles from $N_\name{SM}=30^3$ to $200^3$ are used. Table \ref{tSM} shows the
parameters of the different simulations.
Figure \ref{fEp} shows the LJ potential energy of those three fluids with respect to the number of super-molecules. One
can see fluctuations in the simulations with low $N_\name{SM}$, but the result becomes reasonable when
$N_\name{SM} > 10^4$ and a satisfactory convergence is obtained if $N_\name{SM} > 10^5$. Therefore all the subsequent
simulations are performed with $N_\name{SM} > 10^5$.
\subsubsection{Cluster percentage}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/SM_nb.png}}
\caption[Fraction of bound molecules as a function of the number of super-molecules at $t = 5\tau$. Solid line: SM04,
dashed line: SM06.]{Fraction of bound molecules as a function of the number of super-molecules at $t = 5\tau$. Solid
line: SM04$^a$, dashed line: SM06$^a$. \\ $^{(a)}$ See Table \ref{tSM}}
\label{fSMnb}
\end{figure}
High-density, low-temperature fluids are simulated for $5 \tau$ with various numbers of super-molecules. These fluids
form ``comets'' and allow us to compare the final percentage of bound molecules. Table \ref{tSM} shows the
parameters of the different simulations.
Figure \ref{fSMnb} shows the fraction of bound molecules with respect to the number of super-molecules. All simulations
reach the same percentage, independent of the number of super-molecules.
\subsubsection{Precipitation in an external field}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/snowZ.png}}
\caption[$z$-Coordinate of the centre of mass of ``comets'' as a function of time of the SF simulation. $N_\name{SM} =
50^3$ and $64^3$ (dotted line), $80^3$ and $100^3$ (dashed line), $128^3$ and $160^3$ (solid line).]{$z$-Coordinate of
the centre of mass of ``comets'' as a function of time of the SF$^a$ simulation. $N_\name{SM} = 50^3$ and $64^3$
(dotted line), $80^3$ and $100^3$ (dashed line), $128^3$ and $160^3$ (solid line). \\ $^{(a)}$ See Table \ref{tSM}}
\label{fsnowZ}
\end{figure}
An external acceleration is applied in the $-z$ direction to a fluid within a box with reflecting boundary
conditions. The fluid is very cold and rather dense (see Table \ref{tSM}) and therefore forms ice grains, or
``comets''. Because of the acceleration and Archimedes' principle, the ``comets'' fall to the bottom and stay there,
similar to snowfall on Earth.
The purpose of this simulation is to test the scaling of the precipitation timescale with the number of particles.
Figure \ref{fsnowZ} shows the $z$-coordinate of the centre of mass of ``comets'' as a function of time. The curves are
very similar, with a slight over-estimation of the collapse time for the smallest simulation. Therefore, the
precipitation timescale is not strongly dependent on the mass resolution.
\subsection{Fluid with perturbation}
\label{ssfwp}
To illustrate the behaviour of a homogeneous fluid perturbed by a plane sinusoidal wave, a velocity perturbation in the
$x$ direction is introduced with wavelength $\lambda$ equal to the box side $L$. Starting with a Maxwellian
distribution $\vec{v}_i$ of velocities for the homogeneous unperturbed case, the $x$-component of the velocities is
perturbed in such a way as to conserve energy. The perturbed $x$-velocity component for each particle $i$ reads,
\begin{equation}
{v^\prime_x}_i = \alpha \left[{v_x}_i + \beta V\sin(\omega x_i)\right] \ ,
\label{evs}
\end{equation}
with $\omega =2\pi/L$. The correction factor $\alpha\leq 1$ is used to keep the same kinetic energy in the $x$
direction, with
\begin{equation}
\alpha^2= \frac{\sum_i{v^2_x}_i}{\sum_i\left[{v_x}_i+ \beta V\sin(\omega x_i)\right]^2} \ .
\end{equation}
The factor $\beta$ determines the strength of the perturbation; it is chosen small enough to remain in the linear
regime of perturbations. In the present simulations, $\beta = 0.01$.
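Concretely, the perturbed velocities can be generated as follows (our sketch of Equ.~\ref{evs}):
\begin{verbatim}
# Energy-conserving plane-wave velocity perturbation.
import numpy as np

def perturb(vx, x, L, beta=0.01):
    V = np.sqrt(np.mean(vx**2))                   # V^2 = sum(vx^2)/N
    w = vx + beta*V*np.sin(2.0*np.pi*x/L)
    alpha = np.sqrt(np.sum(vx**2)/np.sum(w**2))   # keeps E_kin,x fixed
    return alpha*w
\end{verbatim}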
\subsection{One-phase fluid}
\label{ssOPS}
To study the reaction of a pure one-phase ideal-gas fluid far from the phase transition but with the above plane-wave
perturbation, the initial temperature is taken above the critical value ($T_0 = 1.25\,T_\name{cr}$) and the initial
density well below the critical value ($n_0 = 10^{-2}\,n_\name{cr}$). Three different cases are studied: without
gravity, weakly self-gravitating below the classical ideal-gas Jeans criterion with $\gamma_\name{J} = 0.5$, and
sufficiently self-gravitating above the ideal-gas Jeans criterion with $\gamma_\name{J} = 1.25$. The simulations are
performed with $N_\name{SM}$ ranging from $50^3$ to $160^3$. Table \ref{tOP} shows the parameters of the different
simulations.
\subsubsection{Time evolution}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/G.png}}
\caption[Temperature of the one-phase fluid simulations OP0 (straight solid line) and OPGs as a function of time.
$N_\name{SM} = 50^3$ and $64^3$ (dotted line), $80^3$ and $100^3$ (dashed line), $128^3$ and $160^3$ (solid line).]
{Temperature of the one-phase fluid simulations OP0$^a$ (straight solid line) and OPGs$^a$
as a function of time. $N_\name{SM} = 50^3$ and $64^3$ (dotted line), $80^3$ and $100^3$ (dashed line), $128^3$ and
$160^3$ (solid line). \\ $^{(a)}$ See Table \ref{tOP}}
\label{fG}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/Gc.png}}
\caption{Fraction of bound molecules in ``comets'' of the one-phase fluids OP0 (straight solid line) and OPGs as a
function of time. $N_\name{SM} = 50^3$ and $64^3$ (dotted line), $80^3$ and $100^3$ (dashed line), $128^3$ and
$160^3$ (solid line).}
\label{fGc}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/G100.png}}
\caption{Temperature of the one-phase fluid simulations OPGs as a function of time. $N_\name{SM} = 100^3$ with four
different random seeds.}
\label{fG100}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/Gd.png}}
\caption{Density at $x=(0.5\pm0.05)L$ of the one-phase fluids OP0 (straight solid line) and OPGs as a function of
time. Bold line: analytic solution for ideal gas. $N_\name{SM} = 50^3$ and $64^3$ (dotted line), $80^3$ and
$100^3$ (dashed line), $128^3$ and $160^3$ (solid line).}
\label{fGd}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/exp.png}}
\caption{Evolution of perturbation density. Straight dotted line: analytic solution for ideal gas.
$N_\name{SM} = 50^3$ and $64^3$ (dotted line), $80^3$ and $100^3$ (dashed line), $128^3$ and $160^3$ (solid line).}
\label{fexp}
\end{figure}
As the simulations conserve total energy, the formation of ``comets'', implying a decrease of potential energy, can be
measured by the mean temperature (i.e., kinetic energy) of the system. This is equivalent to the release of latent heat
when a gaseous fluid condenses.
Figures \ref{fG} and \ref{fGc} display the temperature and ``comet'' percentage evolutions of the one-phase fluid
simulations: OP0 without gravity and with $N_\name{SM} = 100^3$ super-molecules, and the sufficiently self-gravitating
fluid OPGs with $N_\name{SM}$ ranging from $50^3$ to $160^3$. The weakly self-gravitating fluid of simulation OPGw is not
shown since it is identical to the no gravity case OP0.
For the fluid without gravity OP0 and the weakly self-gravitating fluid OPGw, no particular effect is observed. The
percentage of ``comets'' rises to $\lesssim 2\%$, which can be attributed to the initial fluctuations in the
distribution. The small perturbation introduced in the $x$-direction does not change the nature of the fluid; its
temperature and ``comet'' percentage remain the same.
The sufficiently self-gravitating OPGs fluids, on the other hand, do change. Their temperatures and cluster percentages
rise steeply, showing a substantial latent heat release. All simulations reach the same asymptotic temperature
$\sim 6 T_\name{cr}$, but different initial conditions (i.e., random seeds) yield slightly different behaviours during
the collapse. This can also be observed in Fig.~\ref{fG100} where four identical parameter simulations but different
random seeds are run with $N_\name{SM} = 100^3$, leading to a time difference of $\sim 0.5\tau$ for reaching the
asymptotic upper value.
Figure \ref{fGd} shows the density increase around the centre of the perturbation in the range $x = (0.5\pm0.05)L$ and
compares it to the analytic solution for an ideal gas. The simulations follow the ideal-gas solution with a slight
dispersion up to $t \approx 2.5\tau$, where a density rise is not possible anymore because of the repulsive LJ force;
the density declines for a short while because of the collapse rebound, down to a minimum at $t \approx 2.7\tau$. This
local density minimum coincides with the break in the temperature increase visible in Fig.~\ref{fG}. Subsequently, the
temperature growth is much slower.
Figure \ref{fexp} shows the evolution of the density in the simulations, and the predicted growth-rate line for an ideal
gas subject to the Jeans instability (Sect.~\ref{ssGI}). All curve slopes initially match the predicted Jeans growth
rate. The initial steep growth and the small time delay are caused by the initial noise in the particle
distribution. Thus, a certain time is needed for the perturbation to overcome the noise.
\subsubsection{``Comet'' mass distribution}
\label{ssCMD}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/3dPlot.png}}
\caption{3D view of simulation OPGs with $N_\name{SM} = 100^3$ at $t = 4\tau$.}
\label{fG122s}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/PDist.png}}
\caption{Distribution of $y$- and $z$-coordinates of the ``planetoid'' centre of mass on a $10\times10$ grid; all OPGs
simulations at $t = 2.5\tau$.}
\label{fPDist}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/OPGT.png}}
\caption{Temperature distribution of unbound molecules and ``comets'' as a function of ``comet'' mass of OPGs with
$N_\name{SM} = 100^3$ at $t = 4\tau$.}
\label{fOPGT}
\end{figure}
Figure \ref{fGs} shows snapshots and Fig.~\ref{fGh} the ``comet'' mass distribution at different times. During the
temperature rise, the ``comets'' are distributed according to a power law,
\begin{equation}
\label{ePC}
{\dd\log\left(\sum M_\name{comet}\right) \over \dd\log\left(M_\name{comet}\right)} =
\xi_\name{c}\,\,\name{for}\, M_\name{comet} \ll M_\name{tot} \ ,
\end{equation}
and no ``planetoid'' is formed. The power law is very steep at the beginning, with index $\xi_\name{c} \approx -10$, but
quickly flattens, reaching a value of $\xi_\name{c} \approx -2.5$ at the end.
After the break in the temperature increase at $t\approx 2.7\tau$, the power-law distribution remains for small
``comets'', but one bigger ``gaseous planetoid'' with over $1\%$ of the total mass forms. It is shown in
Fig.~\ref{fG122s}. The spherical body seen at $t \leq 3\tau$ in the snapshots is uncondensed gas, which is why it does
not appear in the ``comet'' mass distribution (see Sect.~\ref{ssIG} for a discussion of how condensed matter is identified).
A second power law:
\begin{equation}
\label{ePP}
{\dd\log\left(\sum M_\name{comet}\right) \over \dd\log\left(M_\name{comet}\right)} =
\xi_\name{p}\,\,\name{for}\, M_\name{comet} \gg M_\name{SM}
\end{equation}
on the right side of the diagram describes the ``comets'' and ``planetoids''. Each body typically occurs only once with
our limited $N_\name{SM}$, and $\xi_\name{p} = 1$. Equation (\ref{ePC}) describes the mass distribution of small bodies
where gravity is negligible whereas Equ. (\ref{ePP}) describes large bodies where gravity is weak, but cannot be
neglected.
The first high-density plane is created thanks to the plane perturbation parallel to the $yz$-plane at $x=0.5\,L$. One
can observe in Fig.~\ref{fGs} at $t=2\tau$ that two filaments form, along the $y$- and $z$-axes, connecting the
``planetoid'' with itself through the periodic boundary conditions. They are aligned with the mesh, not primarily
because of grid effects, but because these lines are the shortest distances between the ``planetoid'' and its periodic
images. As can be seen in Fig.~\ref{fPT1-3s}, diagonal filaments are also possible, using the second shortest distance.
The position of the ``planetoid'' is, thanks to the plane-wave perturbation, situated in the $x = 0.5L$ plane; the $y$
and $z$ positions are random, depend on the initial condition, and vary with every simulation, as shown in
Fig.~\ref{fPDist}.
Figure \ref{fOPGT} shows the temperature distribution in the ``comets'' as a function of mass. As the number of large
``comets'' with the same number of super-molecules is small, or even one, no error bars can be seen for the larger
``comets'' and the ``gaseous planetoid''.
The broad Maxwellian velocity distribution is seen for the unbound molecules ($M_\name{comet} = 10^{-6}$). The small
``comets'' (2 or more particles) clearly have a lower temperature and dispersion than the average temperature of the
system. This is due to the fact that in order for two super-molecules to cluster together, a third one is needed, and
this takes away some of their kinetic energy. The larger a ``comet'' is, the closer its temperature is to the system
average temperature. Over longer formation times, the temperature of the ``planetoids'' tends towards the system average
temperature, as expected.
\subsubsection{Scaling}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/Gmean.png}}
\caption{Mean temperature over four one-phase fluid simulations OPGs with different random seeds as a function of
time. $N_\name{SM} = 50^3$ and $64^3$ (dotted line), $80^3$ and $100^3$ (dashed line), $128^3$ and $160^3$ (solid
line).}
\label{fGmean}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/turn.png}}
\caption{Time and size of the largest ``comet'' at the appearance of the turning point.}
\label{fturn}
\end{figure}
Figure \ref{fGmean} shows the mean temperature value over four simulations with different random seeds. One can see that
the two smallest simulations, with $N_\name{SM} = 50^3$ and $64^3$, start to collapse earlier than the other
simulations, which are very similar. This can be explained by the fact that these simulations have very low
$x_\name{GLJ}$ values (Equ.~\ref{exGLJ}), $3.65$ and $4.03\,r_\sigma$. The first value is in fact lower than the cut-off
radius of $4\,r_\sigma$ and fails to satisfy Equ.~(\ref{exglj}); the other is only slightly above it. In these
simulations, the gravitational forces are strong even in short-range intermolecular interactions; the simulations are
therefore overemphasizing the gravitational effect.
Figure \ref{fGall} shows the ``comet'' mass distribution of all OPGs simulations. For small ``comets'', the fraction at
a given number of super-molecules per ``comet'' is the same for all $N_\name{SM}$. For example, at $t=5\tau$, all
simulations have a fraction of $\sim 10^{-1}$ of comets consisting of two super-molecules. These small clumps should
rather be called multimers, as mentioned below. But, with increasing $N_\name{SM}$, the mass of a comet consisting of
two super-molecules diminishes, since $M_\name{2\,SM} = 2/N_\name{SM}$.
On the other hand, for large ``comets'' and the ``planetoid'', the mass is invariant to $N_\name{SM}$. For example, the
``planetoid'' at $t = 5\tau$ has the same mass of $M_\name{planetoid} \approx 5\cdot 10^{-2} M_\name{tot}$ for all
$N_\name{SM}$.
The fact that the mass distributions are shifted to the left for larger $N_\name{SM}$ is misleading, as shown in
Fig.~\ref{fGall3}. Only the two largest simulations are compared: the simulation with $N_\name{SM} = 128^3$ is shown as
is, while the simulation with $N_\name{SM} = 160^3$ is downscaled to $160^3/2 \approx 128^3$ by adding together pairs of
adjacent ``comet'' sizes. As can be seen, when compared using the same scaling, the two mass distributions
are in fact very similar.
It is interesting to look at the turning point, defined as the point where the mass sum of the biggest ``comets''
becomes larger than the mass sum of smaller comets. This point is interesting as it is the first indicator of the
creation of an aggregate of molecules that starts to be influenced by gravity. Figure \ref{fturn} shows the time and
the mass of the turning point. One can see that the time of appearance is independent of $N_\name{SM}$. The ``comet''
mass, on the other hand, clearly follows a power law:
\begin{equation}
\label{ePN}
{M_\name{comet}\over M_\name{tot}} \approx 10 N_\name{SM}^{\xi_\name{N}} \ ,
\end{equation}
with $\xi_\name{N} \approx -1$. Since $N_\name{SM} = M_\name{tot}/M_\name{SM}$, at the turning point the number of
super-molecules is always of the order of 10. In other words, in critical conditions where the attractive molecular force
adds up to gravity, the tendency to make a larger condensed body mass fraction already starts at about ten molecules.
\subsubsection{Extrapolation to physical scale}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/extrapol.png}}
\caption{Extrapolation of the ``comet'' mass distribution for a real H$_2$ molecular fluid at $t=5\tau$.}
\label{fextra}
\end{figure}
Having studied the scaling behaviour of the simulations, we can attempt to extrapolate some properties for a fluid
consisting of real molecules instead of super-molecules. Considering H$_2$ molecules, the total mass of the OPGs
simulations is $1.4\,M_\oplus$, the box length is $2.3\cdot 10^{-3}\,\name{AU}$, and
$\tau = 1.5\cdot 10^{-2}\,\name{years}$.
Thus, in realistic conditions there would be a total amount of $2.5\cdot 10^{51}$ H$_2$ molecules, the number of
molecules per super-molecule $\eta$ varying from $2.0\cdot 10^{46}$ for $N_\name{SM} = 50^3$ to $3.2\cdot 10^{44}$ for
$N_\name{SM} = 160^3$. So, even for the largest simulations, there is a difference of more than $40$ orders of magnitude
in the number of particles with respect to real situations. For that reason, the extrapolated data have to be treated
with caution. As long as no other physics enters in this interval of scales, this is, however, not extraordinary with
regard to common practice in simulation work, such as simulating stars with SPH particles. While a single SPH particle
can at best represent a mass fraction of $10^{-6}$ -- $10^{-10}$ of the total, an SPH particle is supposed to behave as
the smallest mass element in local thermal equilibrium, say a few thousand protons, which over the stellar mass is a
fraction $\sim 10^{-55}$, therefore well over $10^{40}$ times below the SPH simulation resolving capacity.
As can be observed in Fig. \ref{fGc}, the fraction of bound molecules does not change when increasing the number of
particles and will remain unchanged for a fluid consisting of molecules. The same is true for the size of the resulting
``planetoid'', as can be seen in Fig. \ref{fGall}. Therefore, we can assume that after $5\tau$, the system will consist
of $\sim 80\%$ unbound molecules and a ``planetoid'' with a mass of $\sim 0.07\,M_\oplus$.
At $t = 5\tau$, there is a fraction of $10^{-1}$ of two-molecule aggregates, i.e., dimers, with a mass of
$6.6\cdot 10^{-27}\,\unit{kg}$. These multi-molecule clumps should be called ``multimers'' instead of ``comets''. As
mentioned above, the turning point occurs at $\sim 2.4\tau$ consistently for approximately $10$ molecules, so its mass
is about $20\,m_\name{H} = 3.3\cdot 10^{-26}\,\unit{kg}$.
Using the above values and the two power laws of Equ. (\ref{ePC} -- \ref{ePP}), Fig.~\ref{fextra} shows a schematic
diagram of the mass distribution at $t=5\tau$. While the mass fraction of the turning-point multimers is tiny, the
power-law distribution rapidly shifts mass into larger grain- and comet-sized bodies, up to a planetoid mass.
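The unit conversions behind these numbers amount to simple arithmetic; a minimal sketch, assuming
$m_\name{H} = 1.67\cdot 10^{-27}\,\unit{kg}$ and the molecule count quoted above, is:
\begin{verbatim}
# Sketch of the extrapolation arithmetic (values quoted in the text).
m_H   = 1.67e-27                  # hydrogen atom mass [kg]
m_H2  = 2.0 * m_H                 # H2 molecule mass [kg]
N_tot = 2.5e51                    # total number of H2 molecules

print("dimer mass [kg]:        ", 2 * m_H2)       # ~ 6.6e-27
print("turning-point mass [kg]:", 10 * m_H2)      # 10 molecules ~ 3.3e-26
print("eta for N_SM = 50^3:    ", N_tot / 50**3)  # ~ 2.0e46
\end{verbatim}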
\subsection{Phase transition}
\label{ssTPS}
\begin{figure*}[t]
\centering
\includegraphics[width=18cm]{imgs/vdwTPh.png}
\caption{Position of the LJ simulations in a corresponding van der Waals phase diagram. Solid line: Maxwell
construction for the van der Waals EOS; dotted line: van der Waals EOS. Star: fluid presenting a phase transition;
bullet:
one-phase fluid.}
\label{fvdwTPh}
\end{figure*}
We study several physical conditions close to a phase transition. Table \ref{tTP} summarizes their properties. Three
different densities, $10^{-1}$, $10^{-2}$ and $10^{-3}\,n_\name{cr}$, and three different temperatures, $0.1$, $0.2$ and
$0.3\,T_\name{cr}$, are chosen. As the one-phase fluid studied in Sect.~\ref{ssOPS} also has a density of
$10^{-2}\,n_\name{cr}$, three additional temperatures, $0.5$, $0.7$ and $0.9\,T_\name{cr}$, were used for this density
to make a link between the one-phase fluid and the fluids studied in this section.
Looking at the phase transition diagram (Fig.~\ref{fvdwTPh}), all fluids with $T \leq 0.3 T_\name{c}$ are on the Maxwell
line. Those with $\rho \leq 10^{-2}\rho_\name{c}$ are on the $(\partial P / \partial \rho)_\name{s} >0$ part of the van
der Waals phase diagram, which corresponds to metastable states on the verge of a deep phase transition. The fluids with
$\rho = 10^{-1}\rho_\name{c}$ are on the $(\partial P / \partial \rho)_\name{s} \leq 0$ part of the van der Waals phase
diagram, which corresponds to unstable states in a phase transition. The fluids with $T \geq 0.5 T_\name{c}$ are still
in the stable regime.
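As an illustration, this classification can be reproduced with the reduced van der Waals EOS; the sketch below uses
the isothermal slope as a simple numerical proxy for the sign of $(\partial P / \partial \rho)_\name{s}$, which is an
approximation:
\begin{verbatim}
# Sketch: classify states with the reduced van der Waals EOS,
#   P_r = 8*T_r*rho_r/(3 - rho_r) - 3*rho_r**2,
# with T_r = T/T_cr and rho_r = n/n_cr. The isothermal slope is used
# here as a proxy for the sign of the isentropic derivative.
def dP_drho(rho_r, T_r, eps=1e-6):
    P = lambda r: 8.0 * T_r * r / (3.0 - r) - 3.0 * r**2
    return (P(rho_r + eps) - P(rho_r - eps)) / (2.0 * eps)

for rho_r, T_r in [(0.1, 0.2), (0.01, 0.2), (0.01, 0.5)]:
    slope = dP_drho(rho_r, T_r)
    print(rho_r, T_r, "unstable" if slope <= 0 else "stable/metastable")
\end{verbatim}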
Three different cases are studied: without gravity, strongly self-gravitating above the Jeans criterion, and weakly
self-gravitating below the Jeans criterion.
\subsubsection{Fluid without gravity}
\label{ssFwg}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/TP0.png}}
\caption{Temperature of non-gravitating fluids as a function of time. Dotted line: $n = 10^{-1} n_\name{cr}$; solid
line: $n = 10^{-2} n_\name{cr}$; dashed line: $n = 10^{-3} n_\name{cr}$. All simulations with $N_\name{SM} = 50^3$.}
\label{fTP0}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/3dClose.png}}
\caption{Close-up 3D-view of a medium sized ``comet'' and smaller aggregates.}
\label{fclump}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/TPT.png}}
\caption[Temperature distribution of unbound molecules and ``comets'' as a function of ``comet'' size of PT2-2 without
gravity and $N_\name{SM} = 50^3$ at $t = 100\tau$.] {Temperature distribution of unbound molecules and ``comets'' as a
function of ``comet'' size of PT2-2$^a$ without gravity and $N_\name{SM} = 50^3$ at $t = 100\tau$. \\ $^{(a)}$ See
Table \ref{tTP}}
\label{fTPT}
\end{figure}
To study the evolution of fluids presenting a phase transition without gravity, the $x_\name{GLJ}$ value does not have
to be considered and the number of super-molecules is set to $N_\name{SM} = 50^3$. Figure \ref{fTP0} shows the
temperature evolution of some of these fluids. In the beginning, the molecules merge into small ``comets'', which leads
to a decrease of the potential energy and therefore a rise of the temperature. Once the temperature is high enough, the
kinetic and potential energy reach an equilibrium and the fluid remains stable.
The number of ``comets'' formed increases with the density and decreases with the temperature. The higher the number
of formed ``comets'', the longer it takes for the fluid to reach a stable regime. For example, the very cold
fluids PT1-2 and PT2-2 only reach an asymptotic value around $t \approx 100\tau$.
The ``comets'' remain at moderate size as can be seen in the snapshots (Fig.~\ref{fPT2-2s}) and in their mass
distribution (Fig.~\ref{fPT2-2h}): the number of super-molecules per ``comet'' is $< 10^{-2}\, N_\name{SM}$. The mass
distribution follows the power law of Equ. (\ref{ePC}), but at $\sim 10^{-4}$ it rises again, presenting a large
number of ``comets'' of between $10^{-4}$ and $10^{-2}\,N_\name{SM}$ super-molecules. Fig.~\ref{fclump} shows a close-up
3D view of a medium-sized ``comet'' and smaller aggregates.
Figure \ref{fTPT} shows the temperature distribution as a function of the ``comet'' mass. As in simulation OPGs, the
temperature for small bodies lies below the average, but reaches the average value for more massive bodies.
\subsubsection{Self-gravitating fluid above Jeans criterion}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{imgs/TP.png}}
\caption[Temperature of sufficiently self-gravitating fluids as a function of time. Dotted line: OPGs; dashed line:
PT1-3; solid line (from left to right): PT3-2, PT5-2, PT7-2, PT9-2; dash-dotted line: PT1-1. All simulations with
$N_\name{SM} = 100^3$.] {Temperature of sufficiently self-gravitating fluids as a function of time. Dotted line:
OPGs$^a$; dashed line: PT1-3$^a$; solid line (from left to right): PT3-2$^a$, PT5-2$^a$, PT7-2$^a$, PT9-2$^a$;
dash-dotted line: PT1-1$^a$. All simulations with $N_\name{SM} = 100^3$. \\ $^{(a)}$ See Table \ref{tTP}}
\label{fTP}
\end{figure}
All sufficiently self-gravitating simulations use the same gravitational potential,
$G = 1.25\,G_\name{J}\left(T = 1.25\,T_\name{cr}, n = 0.01\,n_\name{cr}\right)$. As the temperatures and densities
differ, $G_\name{J}$ and therefore $\gamma_\name{J}$ are different for each simulation, but the latter is always
$> 1$.
As $\gamma_\name{J} > 1$ for all simulated fluids, they are unstable if perturbed. The lower the initial temperature of
a fluid, the larger $\gamma_\name{J}$ is, which translates into a faster exponential growth. This can be seen in
Fig.~\ref{fTP} where the fluids with a density of $10^{-2}\,n_\name{cr}$ are starting to collapse one after another,
from the lowest to the highest temperature.
The evolution of the fluids is identical to that of the fluid OPGs: an exponential rise of the temperature until it reaches a
temperature break, after which the temperature rises only slowly and a single ``gaseous planetoid'' is formed. Figures
\ref{fPT1-3s} and \ref{fPT1-3h} show snapshots and ``comet'' mass distribution, again very similar to the fluid OPGs.
First, there is a formation of small ``comets'' following the power law of Equ. (\ref{ePC}), and then the ``comets'' are
massive enough to attract each other and merge into one big spherical ``planetoid'' ($t> 1\tau$), following the power
law of Equ. (\ref{ePP}). The power-law index $\xi_\name{c}$ also evolves in a similar fashion as in simulation OPGs,
starting out very steep and reaching a final value of $\gtrsim -2$.
\subsubsection{Self-gravitating fluid below Jeans criterion}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/TPl.png}}
\caption{Temperature of weakly self-gravitating fluids as a function of time. Dotted line: PT2-2 and PT3-2, with
$N_\name{SM} = 50^3$ and without gravity; dashed line: PT2-2, $\gamma_\name{J} = 0.5$ and $N_\name{SM} = 80^3$ and
$100^3$; solid line PT3-2, $\gamma_\name{J} = 0.5$ and $N_\name{SM} = 80^3$ and $100^3$; dash-dotted line PT5-2,
$\gamma_\name{J} = 0.5$ and $N_\name{SM} = 100^3$.}
\label{fl}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/TPlT.png}}
\caption{Temperature distribution of unbound molecules and ``comets'' as a function of ``comet'' size of PT3-2 with
$\gamma_\name{J} = 0.5$ and $N_\name{SM} = 100^3$ at $t = 200\tau$.}
\label{fTPlT}
\end{figure}
To compare weakly self-gravitating fluids in a phase transition (on the Maxwell line) with out-of-phase-transition
fluids (below the Maxwell line), $\gamma_\name{J}$ is set to $0.5$ for all fluids. This means that the individual $G$
value is different for each simulation.
As shown in Sect. \ref{ssFwg}, the perturbation of the weakly self-gravitating one-phase fluid OPGw does not create any
effect if $\gamma_\name{J} < 1$. Figure \ref{fl} shows the temperature as a function of time of the self-gravitating
fluids PT5-2, PT3-2, and PT2-2, and compares them to the non-gravitating fluids. The out-of-phase-transition fluid PT5-2
remains unchanged and does not differ from the non-gravitating fluid, identical to OPGw.
The temperatures of the fluids in a phase transition, PT3-2 and PT2-2, grow exponentially and differ clearly from the
non-gravitating ones. In the beginning, the LJ forces dominate the gravitational forces and the fluids behave like
non-gravitating fluids. PT2-2 reacts much faster than PT3-2, thanks to its low initial temperature and fluctuations due
to the Maxwellian velocity distribution, and forms many small- and medium-sized ``comets'' (see Sect. \ref{ssFwg}). Once
these ``comets'' are formed, their mass is high enough to attract each other with gravity, forming a single ``rocky
planetoid''.
This can be seen in the snapshots (Fig. \ref{fPT1-2s}) and ``comet'' mass distribution (Fig. \ref{fPT1-2}). During the
first $15\tau$, the fluid is identical to the low-temperature, non-gravitating fluid (compare to
Fig. \ref{fPT2-2h}). Only then does gravity start to play a role: one can see the formation of a ``planetoid'' and how
it swallows the small- and medium-sized ``comets'' during its growth.
With its higher temperature, PT3-2 forms almost no small- or medium-sized ``comets'', which can be seen in the snapshots
(Fig. \ref{fPT3-2s}) and ``comet'' mass distribution (Fig. \ref{fPT3-2}) for the first $\approx 50 \tau$. Only after
$50 \tau$ does gravity show its effect with the appearance of a medium-sized ``comet'', which is too massive to fit into
the power law $\xi_1$ (see Fig. \ref{fPT3-2s}, $t = 50\tau$). From then on, this ``comet'' attracts super-molecules and
thus grows in size until at $t \approx 125\tau$, a single big ``rocky planetoid'' is formed.
Figure \ref{fTPlT} shows the temperature distribution as a function of the ``comet'' mass of PT3-2. The temperature of
the ``comets'' containing few super-molecules lies below the average, but the ``planetoids'' temperature is the same as
the average.
PT2-2, PT3-2 and PT5-2 all scale very well. The two fluids in a phase transition, PT2-2 and PT3-2, have an almost
identical timescale for the exponential growth for $N_\name{SM} = 80^3$ and $100^3$. The slight difference is due to the
initial random seed. As $x_\name{GLJ}$ depends on $\gamma_\name{J}$ and the temperature, both of which are much lower
than for OPGs, its value lies clearly above the cut-off radius: $7.63$ and $8.34$ for PT2-2, and $7.03$ and $7.69$ for
PT3-2.
PT5-2 shows no difference for any number of super-molecules, with $N_\name{SM}$ ranging from $50^3$ to $100^3$.
\subsection{``Planetoid'' densities}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{imgs/PlanetD.png}}
\caption{Density of ``planetoid'' as a function of the radius. Dashed line: OPGs with $N_\name{SM} = 100^3$ at
$t=4\tau$, the ``gaseous planetoid'' consists of 41559 super-molecules ($0.042\,M_\name{tot}$). Dash-dotted line:
PT3-2, with $\gamma_\name{J} = 0.5$ and $N_\name{SM} = 100^3$ at $t = 200\tau$; the ``rocky planetoid'' consists of
74208 super-molecules ($0.074\,M_\name{tot}$). Solid lines: hydrogen, higher value: solid; lower value:
liquid. Dotted line: $\rho_\mathrm{f}$ of Lagrange-Jacobi identity, higher value: SC; lower value:
HCP/FCC.} \label{fPlanetD}
\end{figure}
Figure \ref{fPlanetD} shows the densities of the ``planetoids'' as a function of the radius for the simulations OPGs
and PT3-2 with $\gamma_\name{J} = 0.5$, and compares them to hydrogen laboratory data and values found in the
Lagrange-Jacobi identity (Sect. \ref{ssV}).
The density of the OPGs ``gaseous planetoid'' is below that of a conventional solid or liquid. As their temperature is
above the critical value ($T> 6\,T_\name{cr}$), super-molecules are not able to condense without the aid of gravity
and, because of their high kinetic energy, vibrate at much larger amplitudes. This leads to more space between the
super-molecules and therefore a lower density compared to super-molecules below the critical temperature.
The density of the ``gaseous planetoid'' drops with increasing radius and hence does not qualify as a homogeneous
sphere, but its average density is below the $\rho_\mathrm{f}$ value of the Lagrange-Jacobi identity (Equ. \ref{erho0}),
in accordance with Tab. \ref{tLJ} and Fig. \ref{fvirial-ratios} for ``gaseous planetoids''.
The density of the PT3-2 ``rocky planetoid'' lies between the liquid and solid phase. There is no continuous density
drop as for the OPGs simulation; the density remains stable up to almost the outer radius and the body can be
approximated as a homogeneous sphere. Its density lies above $\rho_\mathrm{f}$ in accordance with Tab. \ref{tLJ} and
Fig. \ref{fvirial-ratios} for ``rocky planetoids''.
\section{Conclusions}
\label{sC}
We used analytic methods (Sect. 2) and computer simulations (Sects. 3-4) to study substellar fragmentation of fluids
presenting a phase transition. The motivating astrophysical context are molecular clouds where H$_2$ forms the bulk of
the gravitating mass and is not very far from condensation conditions.
\subsection{Analytic results}
The study of the virial theorem, using the gravitational and the Lennard-Jones potential energies, has shown that there
is a maximum temperature below which a fluid can fragment at arbitrarily small masses. This temperature is an order of
magnitude above the critical temperature. This shows that, granted the right circumstances, ``comets'' can form at
temperatures an order of magnitude larger than the critical temperature. In the case of H$_2$, this maximum temperature
lies in the range of $400$ -- $600\,\unit{K}$, depending on the solid crystalline structure.
A van der Waals fluid can be in three different states: gaseous, solid/liquid, or in a phase transition where the two
phases coexist. The latter is defined as lying on the line of the Maxwell construction, where
$(\partial P/\partial\rho)_\name{s} = 0$. A fluid is gravitationally unstable if an introduced perturbation has a
wavelength above a certain value, which depends on $(\partial P/\partial\rho)_\name{s}$. Since this quantity vanishes
for a fluid presenting a phase transition, such a fluid is gravitationally unstable at any perturbation wavelength.
\subsection{Simulation results}
We performed simulations using the state-of-the-art molecular dynamics simulator LAMMPS. We used super-molecules to
simulate gravitational and molecular forces together with a computationally tractable number of particles.
\subsubsection{Super-molecules}
We tested the super-molecule concept thoroughly. We achieved good scaling for non-gravitational effects if the number of
super-molecules is large enough ($\gtrsim 10^5$). The sticking point is to ensure that the molecular LJ forces
dominate the gravitational forces in close-range interactions. This is achieved by setting the number of
super-molecules to be large enough so that the distance at which the two forces are equal is above the cut-off radius
$4\,\sigma_\name{SM}$. For an H$_2$ fluid this sets a maximum super-molecule mass equal to
$5.7 \cdot 10^{-6}\,M_\oplus$, limiting the total mass that can be simulated by the available computing power.
In principle, the super-molecule concept should not perfectly reproduce the time dependence of diffusive properties, as
bigger particles introduce faster relaxation. But our experiments using different resolutions spanning several orders
of magnitude did not reveal important modifications in the timings of major collapse and asymptotic evolution
state. This enables us to study the fragmentation of large bodies, which we call ``comets'' and ``planetoids''.
\subsubsection{One-phase fluid}
We applied a plane sinusoidal perturbation into a one-phase fluid with a temperature above the critical value. The
results reproduce the ideal-gas Jeans instability: no collapse is seen for conditions below the Jeans criterion, while
an exponential growth of the perturbation is observed for conditions above it.
As a result, the temperature rises, and small- and medium-sized ``comets'' form, some of which later merge into one big
``planetoid''. An interesting observation is the mass distribution of these ``comets'', which follows a power law for
the small- and medium-sized ``comets'', while the ``planetoid'' and the largest ``comets'' follow a different power law.
\subsubsection{Phase transition fluid}
We simulated fluids with temperatures below the critical value and close to the effective phase transition for three
different cases: without gravity, sufficiently, and weakly self-gravitating (above and below the ideal-gas Jeans
criterion).
Because of the Maxwellian velocity distribution fluctuations, the non-gravitating fluids form small- and medium-sized
``comets'' until the potential and kinetic energy reach an equilibrium; thereafter they remain statistically stable. In
the absence of any long-range force, no ``planetoid'' forms.
The self-gravitating fluids with a gravitational potential above the ideal-gas Jeans criterion react to a plane
sinusoidal perturbation no differently than a one-phase fluid: the perturbation grows exponentially, the temperature
rises, and small- and medium-sized ``comets'' form, which ultimately merge into one ``planetoid''.
The analysis predicts that fluids presenting a phase transition are unstable even if the gravitational potential alone
is below the ideal-gas Jeans threshold. The simulations of weakly self-gravitating fluids in the phase-transition
regime reproduced this prediction. Because of the weak nature of the gravitational potential, the timescale to see this
reaction is two orders of magnitude longer than for sufficiently self-gravitating fluids, but, in the end, a big
central ``planetoid'' forms anyway. While the out-of-phase-transition fluids do not amplify perturbations, those in
a phase transition indeed amplify them.
\bigskip
This study is general enough to be relevant for many astrophysical situations. In the case of H$_2$ as main component,
it concerns situations where temperature may drop well below $10\,\name{K}$, such as in cold molecular gas in the outer
galactic disks, in expanding winds of planetary nebulae or supernovae, where cold substellar mass condensations are
known to form, or in protoplanetary disks. Other molecules besides H$_2$, such as CO, can condense even sooner, but as
their mass fraction is low they should not significantly perturb the gravitational balance, while He should not condense
at all and should keep a minimal amount of gaseous phase.\footnote{In \cite{safa_equation_2008} it was calculated that
the mixture of He and H$_2$ at cosmic abundance does not change the conclusion regarding the phase transition of H$_2$
alone, as the species do not remain mixed when H$_2$ alone condenses.} The precipitation of formed ``comets'' and
``planetoids'' in a gravitational field, separating the gaseous and condensed phases, is also a fascinating aspect that
is as yet impossible to capture with traditional hydrodynamical codes, but possible with the molecular dynamics
approach. We will pursue further simulation work, including more specific astrophysical applications.
\begin{acknowledgements}
This work is supported by the STARFORM Sinergia Project funded by the Swiss National Science Foundation. We thank the
LAMMPS team for providing a powerful open source tool to the scientific community. We thank the referee for a
thorough reading of the manuscript and constructive comments, which substantially improved the paper.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Unsupervised 2D-3D Lifting}
\label{sect:algo}
In this section, we describe our unsupervised learning approach to lift 2D pose to a 3D skeleton. Let $\textbf{x}_i = (x_i,y_i), i = 1 \ldots N,$ denote $N$ 2D pose landmarks of a skeleton with the root joint (midpoint between hip joints) located at the origin. Let $\textbf{X}_i$ denote the corresponding 3D joint for each 2D joint. We assume a camera with unit focal length centered at the origin $(0,0,0)$. Note that because of the fundamental perspective ambiguity, absolute metric depths cannot be obtained from a single view. Therefore, we fix the distance of the skeleton to the camera to a constant $c$ units. In addition, we normalize the 2D skeletons such that the mean distance from the head joint to the root joint is $\frac{1}{c}$ units in 2D. This ensures that the 3D skeleton is generated with a scale of $\approx1$ unit (head to root joint distance).
\subsection{Lifting Network}
\label{subsect:adversarial-algo}
The lifting network $\textrm{G}(x)$ is a neural network that outputs the 3D joint for each 2D joint.
\begin{equation}
\textrm{G}_{\theta_G}(\textbf{x}) = \textbf{X},
\end{equation}
where $\theta_G$ are the parameters of the lifter learned during training. Internally, the lifter estimates
the depth offset $d_i$ of each joint relative to the fixed plane at $c$ units. The 3D joint is computed as $\textbf{X}_i = (x_iz_i,y_iz_i,z_i)$, where
\begin{equation}
z_i = \max\left(1, c + d_i \right).
\end{equation}
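A minimal sketch of this parameterization is given below (in PyTorch); the plain feed-forward layers are an
illustrative stand-in for the residual blocks actually used (see the Training subsection), and all layer sizes are
assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class Lifter(nn.Module):
    """Sketch of G: maps N 2D joints (x, y) to 3D joints (xz, yz, z)."""
    def __init__(self, n_joints=14, hidden=1024, c=10.0):
        super().__init__()
        self.c = c
        self.net = nn.Sequential(
            nn.Linear(2 * n_joints, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints))   # one depth offset d_i per joint

    def forward(self, x2d):                # x2d: (B, N, 2), root at origin
        B, N, _ = x2d.shape
        d = self.net(x2d.reshape(B, -1))   # (B, N) depth offsets
        z = torch.clamp(self.c + d, min=1.0).unsqueeze(-1)  # max(1, c+d)
        return torch.cat([x2d * z, z], dim=-1)              # (B, N, 3)
\end{verbatim}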
\subsection{Random Projections}
\label{subsect:random_projection-algo}
The generated 3D skeletons are projected to 2D using random camera orientations and these 2D poses are sent to the lifter and discriminator. Let $\textbf{R}$ be a random rotation matrix, created by uniformly sampling an azimuth angle between [-$\pi$, $\pi$] and an elevation angle between [-$\pi$/9, $\pi$/9], and $\textbf{X}_r$ be the location of the root joint of the generated skeleton. The rotated 3D skeleton $\textbf{Y}_i$ is obtained as
\begin{eqnarray}
\textbf{Y}_i = Q(\textbf{X}_i) = \textbf{R} * (\textbf{X}_i - \textbf{X}_r) + \textbf{T},
\end{eqnarray}
where $\textbf{T}=\left[0,0,c\right]$. $Q$ represents the rigid transformation between $\textbf{Y}$ and $\textbf{X}$. The rotated 3D skeleton $\textbf{Y}_i$ is then projected to create a 2D skeleton $\textbf{y}_i=P(\textbf{Y}_i)$, where $P$ denotes perspective projection.
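A sketch of the sampling and projection steps (the rotation-composition order and the row-vector convention are
assumptions of the sketch):
\begin{verbatim}
import math
import torch

def random_rotation():
    """R from uniform azimuth in [-pi, pi], elevation in [-pi/9, pi/9]."""
    az = (2 * torch.rand(()).item() - 1) * math.pi
    el = (2 * torch.rand(()).item() - 1) * math.pi / 9
    Ry = torch.tensor([[ math.cos(az), 0.0, math.sin(az)],
                       [ 0.0,          1.0, 0.0         ],
                       [-math.sin(az), 0.0, math.cos(az)]])
    Rx = torch.tensor([[1.0, 0.0,           0.0          ],
                       [0.0, math.cos(el), -math.sin(el)],
                       [0.0, math.sin(el),  math.cos(el)]])
    return Rx @ Ry

def rigid_and_project(X, R, X_root, c=10.0):
    """Y = R (X - X_root) + T with T = (0, 0, c), then y = P(Y)."""
    T = torch.tensor([0.0, 0.0, c])
    Y = (X - X_root) @ R.T + T           # rigid transform Q (row vectors)
    return Y[..., :2] / Y[..., 2:3], Y   # perspective projection and Y
\end{verbatim}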
\subsection{Self-Supervision via Loop Closure}
\label{subsect:cycle-loss}
We now describe the symmetrical lifting and projection step performed on the synthesized 2D pose, $\textbf{y}_i$. As shown in Figure~\ref{fig:cycle-loss-fig}, we lift the randomly projected pose $\textbf{y}_i$ to obtain $\textbf{\~Y}_i$
\begin{equation}
\textbf{\~Y}_i = \textrm{G}_{\theta_G}(\textbf{y}_i).
\end{equation}
$\textbf{\~Y}_i$ is transformed to $\textbf{\~X}_i$ by applying the inverse of rigid transformation $Q$ that was used while generating the random projection $\textbf{y}_i$ from $\textbf{X}_i$. The 3D skeleton $\textbf{\~X}_i$ is finally projected to the 2D skeleton $\textbf{\~x}_i$.
Note that the lifting network ${G}(\cdot)$ remains the same in both the forward and backward part of the cycle as illustrated in Figure~\ref{fig:cycle-loss-fig}. If the lifting network accurately reconstructs the 3D pose from 2D inputs, then the 3D skeletons $\textbf{Y}_i$ and $\textbf{\~Y}_i$ and the corresponding 2D projections $\textbf{x}_i$ and $\textbf{\~x}_i$ should be similar. The cycle described herein provides a strong signal for self-supervision for the lifting network, whose loss term can be updated by adding two additional components, namely, $\pazocal{L}_{3D} = \left\lVert{\textbf{Y} - \textbf{\~Y}}\right\rVert^2$ and $\pazocal{L}_{2D} = \left\lVert{\textbf{x} - \textbf{\~x}}\right\rVert^2$.
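Reusing the \texttt{Lifter} and \texttt{rigid\_and\_project} sketches above, the loop closure reduces to a few lines
(the root-joint index and the loss reductions are assumptions of the sketch):
\begin{verbatim}
def cycle_losses(lifter, x2d, c=10.0):
    X = lifter(x2d)                           # lift input 2D pose
    R = random_rotation()
    X_root = X[:, :1, :]                      # root joint assumed at index 0
    y2d, Y = rigid_and_project(X, R, X_root, c)
    Y_hat = lifter(y2d)                       # lift the random projection
    T = torch.tensor([0.0, 0.0, c])
    X_hat = (Y_hat - T) @ R + X_root          # Q^{-1}: undo rigid transform
    x_hat = X_hat[..., :2] / X_hat[..., 2:3]  # reproject to 2D
    L_3D = ((Y - Y_hat) ** 2).mean()
    L_2D = ((x2d - x_hat) ** 2).mean()
    return L_2D, L_3D
\end{verbatim}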
\begin{figure}
\begin{equation*}
\begin{tikzcd}
\textbf{x} \arrow{r}{\textrm{G}(\textbf{x})} &
\textbf{X} \arrow{r}{Q(\textbf{X})} &
\textbf{Y} \arrow{r}{\textrm{P}(\textbf{Y})} &
\textbf{y} \arrow{r}{\textrm{D} (\textbf{y})} \arrow[dl, "\textrm{G}(\textbf{y})" pos=0.75, rounded corners, to path={|- (\tikztotarget) \tikztonodes}] &
\mathtt{real/fake}\\
\textbf{\~x} &
\arrow{l}{\textrm{P}(\textbf{\~X})} \textbf{\~X} &
\arrow{l}{\textrm{Q}^{-1}(\textbf{\~Y})} \textbf{\~Y} &
\end{tikzcd}
\end{equation*}
\caption{Self-supervision achieved by closing the loop between the generated skeleton $\textbf{Y}$, its random projection $\textbf{y}$. The recovered 3D skeleton $\textbf{\~Y}$ is obtained by lifting $\textbf{y}$. Upon reversing the geometric transformations, training can be self-supervised by comparing $\textbf{x}$ with $\textbf{\~x}$, and \textbf{Y} with \textbf{\~Y}.}
\label{fig:cycle-loss-fig}
\end{figure}
\subsection{Discriminator for 2D Poses}
\label{subsect:discriminator}
The 2D pose discriminator $D$ is a neural network (with parameters $\theta_D$) that takes as input a 2D pose and outputs a probability between $0$ and $1$. It classifies between real 2D pose $\textbf{r}$ (target probability of $1$) and fake (projected) 2D pose $\textbf{y}$ (target probability of $0$).
Note that for any training sample $\textbf{x}$ for the lifter, we do {not} require $\textbf{r}$ to be the same as $\textbf{x}$ or any of its multi-view correspondences. During learning we utilize a standard GAN loss~\cite{GAN} defined as
\begin{equation}
\min_{\theta_G} \max_{\theta_D} \pazocal{L}_{adv} = \mathbb{E}(\log(D(\textbf{r}))) + \mathbb{E}(\log(1-D(\textbf{y}))).
\end{equation}
The discriminator provides feedback to the lifter allowing it to learn priors on 3D skeletons such as the ratio of limb lengths and joint angles using only random 2D projections, thus allowing it to avoid inadequacies as shown in Sect.~\ref{subsect:degeneracy_self_consistency}.
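A sketch of this objective in the common non-saturating BCE form (an implementation convenience, assumed here; $D$ is
taken to end in a sigmoid so that its outputs are probabilities):
\begin{verbatim}
import torch
import torch.nn.functional as F

def discriminator_loss(D, r2d, y2d):
    """Real 2D poses r -> 1, projected (fake) poses y -> 0."""
    p_real = D(r2d)
    p_fake = D(y2d.detach())   # detach: do not backprop into the lifter
    return (F.binary_cross_entropy(p_real, torch.ones_like(p_real)) +
            F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))

def lifter_adversarial_loss(D, y2d):
    """Non-saturating generator loss: make projections look real."""
    p_fake = D(y2d)
    return F.binary_cross_entropy(p_fake, torch.ones_like(p_fake))
\end{verbatim}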
\subsection{Temporal Consistency}
\label{subsect:temporal_consistency}
Note that our approach does not require video data for training. However, when available, temporal 2D pose sequences (\eg video sequence of actions) can improve the accuracy of the single frame lifting network. We exploit the temporal smoothness via an additional loss function to refine the lifting network $G(\cdot)$ as shown in Figure~\ref{fig:temporal_consistency}. We train an additional discriminator, $T(\cdot)$ that takes as input the difference of 2D poses adjacent in time. The real data for this discriminator comes from a sequence of real 2D poses available during training, $\textbf{r}_t - \textbf{r}_{t+1}$. The discriminator $T(\cdot)$ is updated to optimize the loss that can distinguish the distribution of real 2D pose differences from those of the fake 2D (sequential) projections $\textbf{y}_t - \textbf{y}_{t+1}$. Specifically,
\begin{equation}
\begin{split}
\max_{\theta_T} \pazocal{L}_T = & \mathbb{E} (\log(T\left(\textbf{r}_t - \textbf{r}_{t+1})\right)) + \\
& \mathbb{E} (\log(1 - T\left(\textbf{y}_t - \textbf{y}_{t+1})\right)).
\end{split}
\end{equation}
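A corresponding sketch (for brevity it feeds $T(\cdot)$ the pose differences only; the full input described under
Training also includes the current pose):
\begin{verbatim}
# r_t, r_t1: real 2D poses at times t, t+1; y_t, y_t1: projections.
def temporal_discriminator_loss(T_disc, r_t, r_t1, y_t, y_t1):
    p_real = T_disc((r_t - r_t1).flatten(1))
    p_fake = T_disc((y_t - y_t1).detach().flatten(1))
    return (F.binary_cross_entropy(p_real, torch.ones_like(p_real)) +
            F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))
\end{verbatim}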
\begin{figure}
\begin{equation*}
\begin{tikzcd}
\textbf{x}_{t} \arrow[d, "G(\textbf{x}_{t})"] & &\textbf{x}_{t+1} \arrow[d, "G(\textbf{x}_{t+1})", swap] \\
\textbf{X}_{t} \arrow[d, "P(Q(\textbf{X}_{t}))"] & & \textbf{X}_{t+1} \arrow[d, "P(Q(\textbf{X}_{t+1}))", swap] \\
\textbf{y}_{t} \arrow[d, "D(\textbf{y}_{t})"] \arrow[r] & \textbf{y}_{t} - \textbf{y}_{t+1} \arrow[d, "T(\cdot)"] & \arrow[l]\textbf{y}_{t+1} \arrow[d, "D(\textbf{y})", swap]\\
\mathtt{real/fake} & \mathtt{real/fake} & \mathtt{real/fake}
\end{tikzcd}
\end{equation*}
\caption{
Discriminator $T(\cdot)$ enforces a distribution on the temporal differences of projected 2D poses. The temporal consistency is an optional element to stabilize the results, and is only added during training, allowing inference on single frame 2D pose inputs. Subscripts $t$ and $t+1$ denote two consecutive inputs. Lifting, transformation and projection is done as in Figure~\ref{fig:cycle-loss-fig}.}
\label{fig:temporal_consistency}
\end{figure}
\subsection{Learning from 2D Poses in the Wild}
\label{subsect:domain_adaptation}
\begin{figure}[htb]
\centering
\includegraphics[width=0.97\linewidth]{da}
\caption{Unsupervised domain adaptation to transform 2D poses from the source domain to match the semantics of the target domain. The 2D adapter modifies the input $\textbf{x}_s$ to generate the corrected pose $\textbf{x}_{sc}$. The domain discriminator is used to ensure that the distributions of the adapted 2D poses $\textbf{x}_{sc}$ and the target domain 2D poses $\textbf{x}_t$ match. Adapted poses $\textbf{x}_{sc}$ are used for training a domain agnostic lifter as shown in Figure~\ref{fig:mainarch}.}
\label{fig:da}
\end{figure}
To improve the 3D lifting accuracy in the target domain of interest (\eg Human3.6M, $\textbf{x}_t$), we wish to augment the training data with 2D poses from in the wild (\eg OpenPose joint estimates on the Kinetics dataset, $\textbf{x}_s$). Depending on the choice of 2D pose extraction algorithms~\cite{OpenPose,stacked-hourglass,cpm}, the position and semantics of 2D keypoints can vary greatly from the representation adopted by the target domain (\eg center of face vs. top of the head, or side of the hips vs. pelvis).
We train a 2D domain adapter neural network $C$ to map the source domain 2D joints to target domain 2D joints (see Figure~\ref{fig:da}). Let $\textbf{x}_{sc}$ denote the corrected source domain 2D joints, such that $\textbf{x}_{sc} = \textbf{x}_s + C(\textbf{x}_s)$. Note that we do not assume any correspondences between the 2D joints in the source and target domains. Thus, we cannot train $C$ using any form of supervised loss. In the absence of any supervision, we use a domain discriminator ${D}_{D}$ to match the distributions of the two domains. Again utilizing the standard GAN loss~\cite{GAN}, we optimize the following loss
\begin{eqnarray}\label{eqn:domain_loss}
\begin{aligned}[b]
\min_{\theta_{C}} \max_{\theta_{D_D}} \pazocal{L}_{adv} = & \mathbb{E}(\log(D_D(\textbf{x}_t))) + \lambda \left\|C(\textbf{x}_s)\right\|^2 \\
+ & \mathbb{E}(\log(1-D_D(\textbf{x}_{sc}))),
\end{aligned}
\end{eqnarray}
\noindent where $\lambda \left\|C(\textbf{x}_s)\right\|^2$ is a regularization term that keeps the corrections limited to a small magnitude.
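A sketch of this objective (the value of \texttt{lam} below is an assumed placeholder; \texttt{F} is
\texttt{torch.nn.functional} as in the earlier sketches):
\begin{verbatim}
def adapter_losses(C, D_D, x_s, x_t, lam=0.01):   # lam: assumed value
    corr = C(x_s)
    x_sc = x_s + corr                  # corrected source-domain pose
    p = D_D(x_sc)                      # adapter tries to fool D_D ...
    loss_C = (F.binary_cross_entropy(p, torch.ones_like(p)) +
              lam * (corr ** 2).sum(-1).mean())  # ... keeping corr small
    p_real = D_D(x_t)
    p_fake = D_D(x_sc.detach())
    loss_D = (F.binary_cross_entropy(p_real, torch.ones_like(p_real)) +
              F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))
    return loss_C, loss_D
\end{verbatim}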
Figure~\ref{fig:domain_corrected_examples} shows an example of the difference in semantics between the Human3.6M (target domain) and OpenPose (source domain). In OpenPose, the top of the head is not marked and the center of the marked eye joints is used. In addition, the shoulder keypoints are marked higher than in Human3.6M. The domain adapted 2D pose (middle) is closer in terms of keypoint locations to the target domain. Our domain correction is an off-line preprocessing step. The domain corrected 2D poses, $\textbf{x}_{sc}$, are fed both to the lifting network (Sect.~\ref{subsect:adversarial-algo}) and the 2D pose discriminator (Sect.~\ref{subsect:discriminator}) during training.
\begin{figure}[htb]
\centering
\includegraphics[width=0.75\linewidth]{ks_da_h36_new.png}
\caption{An example of unsupervised domain adaptation. (Left) 2D joints estimated using OpenPose for an example DeepMind Kinetics image. (Middle) Resulting 2D pose after adaptation. (Right) Similar pose from Human3.6M dataset. Notice the change in the width of hips and slant of shoulders after adaptation.}
\label{fig:domain_corrected_examples}
\vspace{-3ex}
\end{figure}
\subsection{Training}
As discussed, the 2D to 3D lifting network is trained using geometric self-supervision along with 2D pose and temporal discriminators. Network parameters are updated to optimize the total loss given by,
\begin{equation}\label{eqn:final_loss}
\pazocal{L} = \pazocal{L}_{adv} + w_{2D} \pazocal{L}_{2D} + w_{3D} \pazocal{L}_{3D} + w_T \pazocal{L}_T,
\end{equation}
\noindent where $w_{2D}=10$, $w_{3D}=0.001$, and $w_{T}=1$ are the relative weights of the 2D, 3D, and temporal loss terms, respectively.
\textbf{Architectures:} We do not use any convolutional layers; all the neural networks described above consist of fully connected layers (followed by residual blocks). Both the lifting network and the 2D pose discriminator take as input $2N$-dimensional vectors, where $N$ denotes the number of 2D/3D pose points. Similarly, the temporal discriminator takes $2N + 2NM$ inputs corresponding to the pose joints in the current frame and their temporal differences with $M$ other consecutive frames (before and/or after). We adopt an architecture similar to that of Martinez~\etal~\cite{MartinezICCV2017}, with the lifting network composed of 4 residual blocks and the discriminator of 3 residual blocks. For 2D domain adaptation, we use 4 residual blocks for the adapter and 3 residual blocks for the domain discriminator. Batch normalization was used in the lifter and the adapter but not in either of the discriminators.
Our experiments use $N=14$ joint locations. For training we used a batch size of $8192$, a constant depth $c=10$ and the Adam optimizer.
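Combining the pieces sketched in the previous subsections, the lifter objective and optimizer setup look as follows
(a sketch; \texttt{Lifter} is the module from Sect.~\ref{subsect:adversarial-algo}):
\begin{verbatim}
w_2D, w_3D, w_T = 10.0, 0.001, 1.0     # weights stated in the text

def total_lifter_loss(L_adv, L_2D, L_3D, L_T):
    return L_adv + w_2D * L_2D + w_3D * L_3D + w_T * L_T

lifter = Lifter(n_joints=14, c=10.0)   # N = 14, constant depth c = 10
optimizer = torch.optim.Adam(lifter.parameters())  # batch size 8192
\end{verbatim}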
\subsection{Discussion}
Previous unsupervised and weakly supervised methods use additional constraints on training data in lieu of 3D annotations. For example,~\cite{ZedNet_2018_ECCVW, Yasin_2016_CVPR} leverage synthetic 2D poses obtained from known 3D skeletons to improve results.
Similarly, Rhodin~\etal~\cite{Rhodin_2018_ECCV} derive an appearance and geometric model by choosing different frames from temporal sequences and multi-view images involving the same person. However, in theory, if multi-view images from synchronized cameras are available, one could triangulate the detected 2D joints to get 3D joints and train a supervised network.
In contrast, our method treats each 2D skeleton as an individual training example, without requiring any multi-view correspondence. Hence, there is no restriction on where the 2D input pose originates; it could be obtained from a single image, video, or multi-view sequences. Our work explores the innate geometry of human pose itself, whereas~\cite{Rhodin_2018_ECCV} exploits the consistency in camera geometry and appearance of specific individuals. As shown in Sect.~\ref{subsect:quant_results}, our approach is able to augment the training data from other datasets (\eg Kinetics) with 2D skeletons captured in the wild.
Our current approach cannot handle occluded/missing joints during training or testing phases. This limits the amount of external domain data that can be used for training. For example, using OpenPose on the Kinetics dataset results in 17M skeletons with at least 10 joints, but only 9M complete skeletons (14 joints). Though not the main focus of the paper, we did a small experiment to fill in missing joints to further augment our training data. We trained a two-layer fully connected neural network which takes incomplete OpenPose 2D pose estimates on Human3.6M images as input and outputs completed 14-joint poses. The network was trained using the corresponding 2D ground-truth joints from Human3.6M in a supervised manner. Using the completed poses (17M skeletons) from the Kinetics dataset, our method achieved a MPJPE of 48mm on Human3.6M test data. This experiment further underscores the importance of volume and diversity of training data for unsupervised learning. We believe that using auto-encoders and other unsupervised methods for data completion will enable the use of even more diverse datasets, where 2D joints may be extracted from a variety of 2D pose estimation algorithms. Future work includes training the filling network and the domain adaptation network together with the lifting network in an end-to-end manner.
\section{Conclusions}
\label{sect:conclusions}
For 3D human pose estimation, acquiring 3D MoCap data remains an expensive and challenging endeavor. We presented an unsupervised learning approach to generate 3D skeletons from 2D joints, which does not require 3D data in any form. Our paper introduces geometric self-supervision as a novel constraint to learn the 2D-3D lifter. We showed that while geometric self-supervision is not a sufficient condition and cannot generate realistic skeletons by itself, it improves the reconstruction accuracy when combined with a discriminator. By training a domain adapter, we showed how to utilize data from different domains and datasets in an unsupervised manner. Thus, we believe that our paper has significantly improved the state of the art in unsupervised learning of 3D skeletons by developing the key idea of geometric self-supervision and utilizing domain adaptation. Future work includes end-to-end training of 3D skeletons from 2D images, using self-supervision.
\section{Discussion}
\begin{table}[t]
\centering
\begin{tabularx}{\textwidth}{ l *{8}{Y} }
\toprule
Method & Direct. & Discuss & Eat & Greet & Phone & Photo & Pose & Purchase \\
\midrule
Akhter \& Black~\cite{akhter2015pose} & 199.2 & 177.6 & 161.8 & 197.8 & 176.2 & 186.5 & 195.4 & 167.3 \\
Ramakrishna~\etal~\cite{ramakrishna2012reconstructing} & 137.4 & 149.3 & 141.6 & 154.3 & 157.7 & 158.9 & 141.8 & 158.1 \\
Zhou~\etal~\cite{Zhou_2016_CVPR} & 99.7 & 95.8 & 87.9 & 116.8 & 108.3 & 107.3 & 93.5 & 95.3\\
Bogo~\etal~\cite{keep-it-simpl} & 62.0 & 60.2 & 67.8 & 76.5 & 92.1 & 77.0 & 73.0 & 75.3 \\
Moreno-Noguer~\cite{Moreno-Noguer_2017_CVPR} & 66.1 & 61.7 & 84.5 & 73.7 & 65.2 & 67.2 & 60.9 & 67.3\\
Martinez~\etal~\cite{MartinezICCV2017} & {44.8} & {52.0} & {44.4} & {50.5} & {61.7} & {59.4} & {45.1} & {41.9} \\
\midrule
\ouremph{Ours (\textit{Unsupervised})} & \ouremph{60.2}& \ouremph{60.7}& \ouremph{59.2}& \ouremph{65.1}& \ouremph{65.5}& \ouremph{63.8}& \ouremph{59.4}& \ouremph{59.4} \\
\bottomrule
\toprule
Method & Sit & SitD & Smoke & Wait & Walk & WalkD & WalkP & Avg.\\
\midrule
Akhter \& Black~\cite{akhter2015pose}& 160.7 & 173.7 & 177.8 & 181.9 & 176.2 & 198.6 & 192.7 & 181.1\\
Ramakrishna~\etal~\cite{ramakrishna2012reconstructing} & 168.6 & 175.6 & 160.4 & 161.7 & 150.0 & 174.8 & 150.2 & 157.3\\
Zhou~\etal~\cite{Zhou_2016_CVPR}& 109.1 & 137.5 & 106.0 & 102.2 & 106.5 & 110.4 & 115.2 & 106.7\\
Bogo~\etal~\cite{keep-it-simpl} & 100.3 & 137.3 & 83.4 & 77.3 & 86.8 & 79.7 & 87.7 & 82.3\\
Moreno-Noguer~\cite{Moreno-Noguer_2017_CVPR} & 103.5 & 74.6 & 92.6 & 69.6 & 71.5 & 78.0 & 73.2 & 74.0\\
Martinez~\etal~\cite{MartinezICCV2017} & {66.3} & {77.6} & {54.0} & {58.8} & {49.0} & {35.9} & {40.7} & {52.1}\\
\midrule
\ouremph{Ours (\textit{Unsupervised})} & \ouremph{69.1}& \ouremph{88.0}& \ouremph{64.8}& \ouremph{60.8}& \ouremph{64.9}& \ouremph{63.9}& \ouremph{65.2}& \ouremph{64.6}\\
\bottomrule
\end{tabularx}
\vspace{1mm}
\caption{Comparison of our approach to other supervised methods on Human3.6M under \textbf{Protocol 2} using detected 2D keypoints. The results of all approaches are obtained from~\cite{MartinezICCV2017}. Our \textit{unsupervised} approach outperforms most supervised methods that use 3D data and comes close to the state-of-the-art supervised approach of~\cite{MartinezICCV2017}.}
\label{table:P2_sh_supervisedcomparison}
\end{table}
\begin{table}[b]
\small
\centering
\begin{tabularx}{0.97\textwidth}{ *{4}{Y} }
\toprule
Moreno-Noguer & Martinez~\etal & \multicolumn{2}{c}{Ours (Unsupervised)} \\
\cite{Moreno-Noguer_2017_CVPR} & \cite{MartinezICCV2017} & { (Single Model)} & { (Ensemble)} \\
\midrule
62.2 & \underline{37.1} & 38.2 & {\bf 36.3} \\
\bottomrule
\end{tabularx}
\vspace{1mm}
\caption{Comparison of our unsupervised results to the state of the art fully supervised approaches under Protocol 2 using ground truth 2D inputs. Our unsupervised model has error within 1.1mm of the best supervised approach, and outperforms the same with a na\"ive ensemble approach.}
\label{table:fully-supervised}
\label{table:supervised}
\end{table}
\section{Experimental Evaluation}
\label{sect:experiments}
We present quantitative and qualitative results on the widely used Human3.6M dataset~\cite{h36m} for benchmarking. Additionally, to demonstrate how the unsupervised learning framework can be improved by leveraging 2D pose data from in-the-wild images, we augment our training data by adapting OpenPose estimated 2D human poses from the Kinetics~\cite{kinetics}
dataset. We also show qualitative visualizations of reconstructed 3D skeletons from 2D pose landmarks on the MPII~\cite{andriluka14cvpr} and Leeds Sports Pose (LSP)~\cite{johnson2010clustered} datasets, for which the ground truth 3D data is not available.
\subsection{Dataset and Metrics}
\label{sect:dataset_metrics}
\noindent\textbf{Human3.6M Dataset:} Human3.6M is one of the largest 3D human pose datasets, consisting of $3.6$ million 3D human poses. The dataset contains video and motion capture (MoCap) data from $5$ female and $6$ male subjects. Data is captured from $4$ different viewpoints, while subjects perform typical activities such as talking on phone, walking, eating,~\etc.
\noindent\textbf{MPI-INF-3DHP:} The MPI-INF-3DHP dataset~\cite{mono-3dhp2017} is a large human pose dataset containing $>$1.3M frames taken from diverse viewpoints. The dataset has 4 male and 4 female actors performing an array of actions similar to but more diverse than the Human3.6M dataset.
\noindent\textbf{Kinetics dataset:}
The Kinetics dataset contains 400 video clips each for 400 activities involving one or more persons.
The video clips are sourced from YouTube and each clip is approximately 10 seconds in duration. We did not use any of the class annotations from the dataset for our training. Instead, we extracted 2D pose landmarks using OpenPose~\cite{OpenPose} on sampled frames from this dataset. We retained only those frames in which all the landmarks on a person were estimated with sufficient confidence.
After this filtering, approximately 9 million 2D skeletons were obtained.
\noindent\textbf{Evaluation Metric:} We report the Mean Per Joint Position Error (MPJPE) in millimeters after scaling and rigid alignment to the ground truth skeleton. Similar to previous works~\cite{ZedNet_2018_ECCVW,Tung_2017_ICCV,li20143d,MartinezICCV2017,Rhodin_2018_ECCV,tekin2016direct,Zhou_2016_CVPR}, we report results on subjects S9 and S11. Also, following the convention as in~\cite{MartinezICCV2017,Rhodin_2018_ECCV}, we only use data from subjects S1, S5, S6, S7, and S8 for training. We do not train class specific models or leverage any motion information during inference to improve the results. The reported metrics are taken from the respective papers for comparisons.
We also compare our method to~\cite{mono-3dhp2017, zhou2017towards}, which use the adapted Percentage of Correct Keypoints (PCK) and corresponding Area Under Curve (AUC) metrics.
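For reference, a sketch of the aligned MPJPE computation (the similarity alignment follows the standard Procrustes
solution; exact conventions vary between papers):
\begin{verbatim}
import numpy as np

def mpjpe_aligned(pred, gt):
    """MPJPE after scale + rigid (Procrustes) alignment; (N, 3) arrays."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(P.T @ G)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                           # optimal rotation
    s = (S * np.diag(D)).sum() / (P ** 2).sum()  # optimal scale
    aligned = s * P @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()
\end{verbatim}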
\input{ResultTable_Abalation}
\subsection{Quantitative Results}
\label{subsect:quant_results}
We summarize our results for Human3.6M and MPI-INF-3DHP in Table~\ref{table:result_summary} and Table~\ref{table:MPI}, respectively. In addition to comparing with the state-of-the-art unsupervised 3D pose estimation method of Rhodin~\etal~\cite{Rhodin_2018_ECCV}, we also show results from top fully supervised and weakly supervised methods.
Results from~\cite{Rhodin_2018_ECCV} use images as input and are hence comparable to the Ours(SH) results, which use 2D joints extracted from the same input images using the SH detector~\cite{stacked-hourglass}. Our method reduces the error by 30\% compared to~\cite{Rhodin_2018_ECCV} (68mm vs. 98.2mm).
Table~\ref{table:result_ablation} shows the results of an ablation study on the lifter with various algorithmic components, using ground truth 2D points.~\textbf{SS} denotes self-consistency (Sect.~\ref{subsect:cycle-loss}), \textbf{Adv} adds the 2D pose discriminator (Sect.~\ref{subsect:discriminator}), \textbf{DA} augments the training data by adapting 2D poses from Kinetics (Sect.~\ref{subsect:domain_adaptation}), and \textbf{TD} leverages temporal cues during training (Sect.~\ref{subsect:temporal_consistency}), when available. As further analyzed in Sect.~\ref{subsect:degeneracy_self_consistency}, just using the self-consistency loss can lead to unrealistic skeletons without the additional discriminator. Augmenting our approach with additional 2D poses obtained from the Kinetics dataset (Ours: Adv + SS + DA) further reduces the error down to 55mm. Lastly, we exploit temporal information during training (Ours: Adv + SS + DA + TD), when available, to obtain an error of 51mm on Human3.6M. It should be noted that the inference for the TD experiment is still done on single frames and the results can be further improved by applying temporal smoothness techniques on video sequences.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.33\linewidth]{cycle_ratio.pdf}
\includegraphics[width=0.33\linewidth]{cycleSym_ratio.pdf}
\includegraphics[width=0.33\linewidth]{withD_ratio.pdf}
\caption{Distribution of limb length ratios on Human3.6M test data. (Left) Training using the self-consistency loss alone does not impose symmetry. (Middle) Using self-consistency and symmetry aligns the distributions of left/right limbs, but results in flatter (unrealistic) distributions. (Right) Using a discriminator sharpens the distributions and brings them closer to the real values (the ground truth ratios are $\thicksim1.0$ and $\thicksim1.1$ for legs and arms, respectively).}
\label{fig:selfAnalysis}
\vspace{-4ex}
\end{figure*}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.97\linewidth]{degenracies_new.pdf}
\caption{Inadequacy of the self-consistency loss. Leftmost column: input 2D pose. (a) Self-consistency alone is unable to recover the correct 3D skeletons. (b) With symmetry constraints, limb lengths become symmetric but may not have realistic ratios. (c) Adding the 2D pose discriminator results in geometrically consistent and realistic 3D skeletons.}
\label{fig:self_consistency_example}
\end{figure}
\subsection{Inadequacy of Geometric Self-Supervision}
\label{subsect:degeneracy_self_consistency}
At first glance, it may appear that self-supervision is sufficient to learn a good lifter, without the need for a discriminator. However, we found that in the absence of the 2D pose discriminator, the network can produce outputs which are geometrically self-consistent, but not realistic (see Figure~\ref{fig:self_consistency_example}). We present an analysis of the 3D outputs that the lifting network can generate with only self-supervision. Specifically, we examine the ratios of upper to lower arm and leg, both for the left and right side of the human body ($4$ ratios).
Figure~\ref{fig:selfAnalysis} (Left) shows the distribution of the $4$ ratios, for a lifter trained using self-consistency loss alone. Note that the lifter produces different limb length ratios for the left and right side of the body. Thus self-consistency loss alone may not produce symmetric (realistic) skeletons without any 3D priors.
Figure~\ref{fig:selfAnalysis} (Middle) shows that after imposing symmetry constraints, the distributions of the left and right limbs are better aligned. However, the distributions are \textit{flatter} since enforcing the \textit{same} ratios for the left and right sides does not ensure that these ratios are \textit{realistic} (conforming to a human body). In other words, the lifter may choose different ratios for different training examples. Figure~\ref{fig:selfAnalysis} (Right) shows the distributions when a discriminator that gives feedback to the lifter using real 2D poses is used. Notice that the ratio distributions become sharper and closer to the distributions of real ratios in the training set. This is the reason that using the self-supervision loss (SS) alone performs worse in our ablation studies, as shown in Table~\ref{table:result_summary}. However, self-consistency further improves the performance in conjunction with the 2D pose discriminator (Adv+SS).
Note that we do not use symmetry ratios in our framework when the discriminator is present. Our lifting network can learn higher order 3D skeleton statistics (beyond symmetry) based on the feedback from geometric self consistency and the 2D pose discriminator.
\begin{figure}[h!]
\centering
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s9_6_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s9_8_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s9_19_360.png}\\
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_47_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_59_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_119_360.png}
\caption{Qualitative results on Human3.6M dataset. (Left to right) Color image with overlaid 2D pose points, estimated and ground-truth 3D skeleton.}
\label{fig:h36good}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s9_64_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_18_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_68_360.png}\\
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_74_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_75_360.png}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{s11_80_360.png}
\caption{Examples of failure cases of our algorithm on Human3.6M dataset. Ground truth 3D skeletons are shown in gray.}
\label{fig:h3failure}
\vspace{-3ex}
\end{figure}
\subsection{Semi-supervised 3D Pose Estimation}
Other methods have shown improvement in accuracy when a small amount of 3D data is used for supervised fine-tuning. We fine-tuned our baseline model (from unsupervised training) using 5\% of randomly sampled 3D data available in the Human3.6M dataset. With this, our method achieves performance comparable to fully supervised methods (37mm), as shown in Table~\ref{table:result_ablation}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{mpii_122}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{mpii_201}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{mpii_462}\\
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{mpii_621}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{mpii_881}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{mpii_1161}\\
\caption{Examples of 3D pose reconstruction on images from MPII dataset (no ground-truth 3D skeleton). Each image shows overlaid 2D pose and the estimated 3D skeleton.}
\vspace{-1ex}
\label{fig:mpii}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{lsp_1562}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{lsp_1141}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{lsp_401}\\
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{lsp_262}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{lsp_781}
\includegraphics[width=0.32\linewidth,trim={2.5cm .7cm 0 .7cm},clip]{lsp_1362}
\caption{Examples of 3D pose reconstruction on images from LSP dataset (no ground-truth 3D).}
\label{fig:leeds}
\vspace{-3ex}
\end{figure}
\subsection{Qualitative Results}
Figure~\ref{fig:h36good} shows some of the 3D pose reconstruction results on Human3.6M dataset using our lifting network. The ground truth 3D skeleton is depicted in gray.
Some of the failures are shown in Figure~\ref{fig:h3failure}. Most of these can be attributed to self-occlusions or flip ambiguities in the viewing direction (for more details see Suppl. materials).
To demonstrate generalization, we show some examples of 3D skeletons estimated on MPII~\cite{andriluka14cvpr} and the Leeds Sports Pose (LSP)~\cite{johnson2010clustered} datasets, in Figures~\ref{fig:mpii} and~\ref{fig:leeds} respectively. MPII has images extracted from short Youtube videos. LSP dataset consists of images of sport activities sampled from Flickr. Our unsupervised method successfully recovers 3D poses on these datasets without being trained on them.
\section{Introduction}
\label{sect:introduction}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.99\linewidth, trim={0.5cm 7cm 0.5cm 5.5cm}, clip]{mainAlgo}
\caption{We train a 2D-3D lifting network (lifter), which estimates the 3D skeleton from 2D pose landmarks. Random projections of generated 3D skeletons are fed to a 2D pose discriminator to provide feedback to the lifter. The random projections also go through a similar lifting and reprojection process, allowing the network to self supervise the training process by exploiting geometric consistency.}
\label{fig:mainarch}
\vspace{-3ex}
\end{figure*}
Estimation of 3D human pose from images and videos is a classical ill-posed inverse problem in computer vision with numerous applications~\cite{forsyth2006computational,hogg1983model,moeslund2001survey,o1980model} in human tracking, action understanding, human-robot interaction, augmented reality, video gaming,~\etc. Current deep learning-based systems attempt to learn a mapping from RGB images or 2D keypoints to 3D skeleton joints via some form of supervision requiring datasets with \emph{known 3D pose}. However, obtaining 3D motion capture data is time-consuming, difficult, and expensive, and as a result, only a limited amount of 3D data is currently available. On the other hand, 2D image and video data of humans is available in abundance. However, unsupervised learning of 3D joint locations from 2D pose alone remains a holy grail in the field. In this paper, we take a first step towards achieving this goal and present an unsupervised learning algorithm to estimate 3D human pose from 2D pose landmarks/keypoints. Our approach does not use 3D inputs in \textit{any} form and does not require 2D-3D correspondences or explicit 3D priors.
Due to perspective projection ambiguity, there exists an infinite number of 3D skeletons corresponding to a given 2D pose. However, all of these solutions are not physically plausible given the anthropomorphic constraints and joint angle limits of a human body articulation. Typically, supervised learning with 2D pose and corresponding 3D skeletons is used to restrict the solution space. In addition, the 3D structure can also be regularized in a weakly-supervised manner by using priors such as symmetry, ratio of length of various skeleton elements, and kinematic constraints, which are learned from 3D data. In contrast, this paper addresses the fundamental problem of lifting 2D image coordinates to 3D space without the use of any additional cues such as video~\cite{tekin2016direct,Zhou_2016_CVPR}, multi-view cameras~\cite{amin2013multi,hofmann2012multi}, or depth images~\cite{rafi2015semantic,shotton2013real,yub2015random}.
We posit that the following properties of the 2D-3D pose mapping render unsupervised lifting possible: 1) \textit{Closure:} If a 2D skeleton is lifted to 3D accurately, and then randomly rotated and reprojected, the resulting 2D skeleton will lie within the distribution of valid 2D poses. Conversely, a lifted 3D skeleton whose random re-projection falls outside this distribution is likely to be inaccurate. 2) \textit{Invariance:} 2D projections of the same 3D skeleton from different viewpoints, when lifted, should produce the same 3D output. In other words, lifting should be invariant to change in the viewpoint.
We employ the above properties in designing a deep neural network, referred to as the \textit{lifting network}, which is illustrated in Figure~\ref{fig:mainarch}.
We introduce a novel geometrical consistency loss term that allows the network to learn in a self-supervised mode. This self-consistency loss relies on the property of invariance: any 2D projection of the generated 3D skeleton should produce the same 3D skeleton when processed by the lifting network (Section~\ref{subsect:cycle-loss}). We further demonstrate that self-consistency is a necessary but not a sufficient condition. We add a discriminator to ensure that the projections of lifted skeletons lie within the distribution of 2D poses.
However, we find that self-supervision \textit{does} improve the performance of the lifting network when used in conjunction with discriminator feedback.
\textbf{Domain Adaptation:} Since unsupervised learning methods often need more data for training than supervised methods, it is desirable to exploit multiple data sources. However, \textit{domain shifts} could occur in multiple data sources due to (a) differences in human pose and viewpoint variations, and (b) semantic differences in the location of the skeletal joints on the body (e.g., hips marked inside/outside legs).
We propose a domain adaptation algorithm, where we train a 2D \textit{adapter} network to convert 2D joints from a source domain to the target domain without the need for any correspondences.
Using multiple datasets allows us to enrich the viewpoint, pose, and articulation variations of the target domain with those of additional domains.
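As an illustration, the adapter can be as small as a residual MLP trained against a domain discriminator on target-domain poses. The following sketch is one plausible correspondence-free instantiation; all names, sizes, and the exact losses are hypothetical and not necessarily the objective used in our experiments.
\begin{verbatim}
# Hypothetical sketch of a correspondence-free 2D adapter.
import torch
import torch.nn as nn

N_JOINTS = 17

adapter = nn.Sequential(           # residual correction of source poses
    nn.Linear(2 * N_JOINTS, 256), nn.ReLU(),
    nn.Linear(256, 2 * N_JOINTS))
domain_disc = nn.Sequential(       # real target pose vs. adapted source pose
    nn.Linear(2 * N_JOINTS, 256), nn.ReLU(),
    nn.Linear(256, 1))

src_2d = torch.randn(32, 2 * N_JOINTS)    # flattened source-domain poses
adapted = src_2d + adapter(src_2d)        # residual keeps poses close to input

# Adapter is trained so that adapted poses look like target-domain poses.
adapter_loss = nn.functional.binary_cross_entropy_with_logits(
    domain_disc(adapted), torch.ones(32, 1))
\end{verbatim}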
\textbf{Temporal consistency during training:} Our algorithm only requires 2D joints extracted from a single frame for training and inference. However, if sequences of poses from videos are available during training, we show how they can be used to improve the lifter.
To exploit temporal consistency during training, we incorporate an additional \textit{temporal discriminator} that classifies sequences of differences in 2D joint locations between subsequent frames as real or fake. Our ablation studies show that adding the temporal discriminator improves performance by an additional $7$\%, even when inference is performed on a single frame.
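Concretely, the temporal discriminator operates on sequences of frame-to-frame differences of the 2D joints, as in the following sketch; the sequence length, module sizes, and names are hypothetical illustrations.
\begin{verbatim}
# Sketch of the temporal discriminator on frame-to-frame 2D differences.
import torch
import torch.nn as nn

N_JOINTS, SEQ_LEN = 17, 8

temporal_disc = nn.Sequential(
    nn.Flatten(),                  # (B, (SEQ_LEN-1) * 2 * N_JOINTS)
    nn.Linear((SEQ_LEN - 1) * 2 * N_JOINTS, 512), nn.ReLU(),
    nn.Linear(512, 1))

def frame_diffs(seq):              # (B, T, J, 2) -> (B, T-1, J, 2)
    return seq[:, 1:] - seq[:, :-1]

real_seq = torch.randn(32, SEQ_LEN, N_JOINTS, 2)  # 2D joints from real video
fake_seq = torch.randn(32, SEQ_LEN, N_JOINTS, 2)  # re-projected lifted poses

bce = nn.functional.binary_cross_entropy_with_logits
d_loss = (bce(temporal_disc(frame_diffs(real_seq)), torch.ones(32, 1)) +
          bce(temporal_disc(frame_diffs(fake_seq)), torch.zeros(32, 1)))
\end{verbatim}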
Our paper makes the following contributions:
\vspace{-2ex}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\leftmargin}{0in}
\item Inspired by~\cite{ZedNet_2018_ECCVW}, we present an unsupervised algorithm to lift 2D joints to 3D skeletons by observing samples of real 2D poses, without using 3D data in any form.
\item Our method can learn by exploiting geometric self-consistency. We show that self-consistency is a necessary but not a sufficient condition for lifting. The self-consistency loss improves performance when combined with the adversarial loss from a 2D pose discriminator.
\item We propose a 2D domain adaptation technique which can utilize data from different domains to improve performance on the target domain.
\item We show that adding a temporal discriminator during training can further improve performance, even for single frame 2D-3D lifting during inference.
\end{itemize}
\section{Related Work}
\label{sect:related_work}
\textbf{3D Pose Estimation:} Numerous deep learning techniques have been proposed for estimating 3D joint locations directly from 2D images~\cite{orinet,mono-3dhp2017,park20163d,conf/bmvc/ParkK18,Pavlakos_2017_CVPR,Rogez_2017_CVPR}. Other methods decompose this problem into the estimation of 2D joint locations from images, followed by the estimation of 3D joint locations from the 2D keypoints. The 2D pose can be obtained from images using techniques such as CPM~\cite{cpm}, the stacked-hourglass architecture~\cite{stacked-hourglass}, Mask R-CNN~\cite{mask-rcnn}, or part affinity fields~\cite{OpenPose}. As discussed, our focus is on estimating 3D pose from 2D landmarks~\cite{ChenDeva2017,Tung_2017_ICCV,MartinezICCV2017}, and we are agnostic to the source of these landmarks.
For the purpose of comparison, prior work on lifting can be organized into four categories:
\textbf{Fully Supervised:} These include approaches such as~\cite{Li_2015_ICCV,MartinezICCV2017,Nie_2017_ICCV} that use paired 2D-3D data consisting of ground-truth 2D locations of joint landmarks and the corresponding 3D ground truth for learning. For example, Martinez \etal~\cite{MartinezICCV2017} learn a regression network from 2D joints to 3D joints, whereas Moreno-Noguer~\cite{Moreno-Noguer_2017_CVPR} learns a regression from a 2D distance matrix to a 3D distance matrix using 2D-3D correspondences. Exemplar-based methods~\cite{ChenDeva2017,jiang20103d,Yasin_2016_CVPR} use a database/dictionary of 3D skeletons for nearest-neighbor look-up.
Tekin \etal~\cite{Tekin_2017_ICCV} fuse 2D and 3D image cues relying on 2D-3D correspondences. Wang~\etal~\cite{DRPose3D} use 3D ground truth to train an intermediate ranking network that extracts the depth ordering of pairwise human joints from a single RGB image. Sun~\etal~\cite{sun2017compositional} use a 3D regression based on bone segments derived from joint locations, as opposed to directly using joint locations.
Since these methods model 2D-to-3D mappings from a given dataset, they implicitly incorporate dataset-specific parameters such as camera projection matrices, the distance of the skeleton from the camera, and the scale of the skeletons. This enables these models to predict the metric position of joints in 3D on similar datasets, but requires paired 2D-3D correspondences, which are difficult to obtain.
\textbf{Weakly Supervised:} Approaches such as~\cite{Brau3DV2016,AAAI18_yxu_3dpose,Tome_2017_CVPR,zhou2017towards,Zhou_2016_CVPR,MonoCap} do not explicitly use paired 2D-3D correspondences, but use \textit{unpaired} 3D data to learn priors on shape (3D basis) or pose (articulation priors). For example, Zhou~\etal~\cite{Zhou_2016_CVPR} use a 3D pose dictionary to learn pose priors, and Brau~\etal~\cite{Brau3DV2016} employ an independently trained network that learns a prior distribution over 3D poses (kinematic and self-intersection priors). Tome~\etal~\cite{Tome_2017_CVPR}, Wu~\etal~\cite{InterpreterNetwork2016} and Tung~\etal~\cite{Tung_2017_ICCV} pre-train low-dimensional representations from 3D annotations to obtain priors for plausible 3D poses. Another form of weak supervision is employed by Ronchi~\etal~\cite{relativeposeBMVC18}, who train a network using the relative depth ordering of joints to predict 3D pose from images. Dabral~\etal~\cite{dabral2018learning} use supervision from 3D skeletons in conjunction with anatomical losses based on joint angle limits and limb symmetry. Rhodin~\etal~\cite{rhodin2018learning} train using 2D data and multiple images of a single pose, in addition to supervision from 3D data when available. An adversarial training paradigm is used by Yang~\etal~\cite{Yang_2018_CVPR} to improve an existing 3D pose estimation framework, lifting in-the-wild images with no 3D ground truth and comparing them to existing 3D skeletons.
Similar to our work, the weakly supervised approach of Drover~\etal~\cite{ZedNet_2018_ECCVW} also makes use of 2D projections to learn a 3D prior on human pose. However, Drover~\etal utilize the ground-truth 3D points to generate a large amount (12M) of synthetic 2D joints for training, thus augmenting the original 1.5M 2D poses in Human3.6M by almost $10$ times. This allows them to synthetically over-sample the space of camera variations/angles to learn the 3D priors from those poses. In contrast, we do not use any ground truth 3D projection or 3D data in any form. The fact that we can utilize multiple 2D datasets without any 3D supervision sets us apart from these previous approaches, and enables our method to exploit the large amount of available 2D pose data.
\textbf{Unsupervised:}
Recently, Rhodin~\etal~\cite{Rhodin_2018_ECCV} proposed an unsupervised method to learn a geometry-aware body representation. Their approach maps one view of the human to another view from a set of given multi-view images. It relies on synchronized multi-view images of subjects to learn an encoding of scene geometry and pose. It also uses video sequences to observe the same subject at multiple time instants to learn appearance. In contrast, we do not require multi-view images or the ability to capture the same pose at multiple time instants. We learn 3D pose from 2D projections alone. Kudo~\etal~\cite{kudo2018unsupervised} present 3D error results (130.9 mm) that are comparable to the trivial baseline reported in~\cite{ZedNet_2018_ECCVW} (127.3 mm).
\textbf{Learning Using Adversarial Loss:}
Generative adversarial learning has emerged as a powerful framework for modeling complex data distributions: some works use it to learn generative models~\cite{GAN,CyCADA,CycleGANICCV2017}, while~\cite{shat2019rem} leverages it to synthesize hard examples. Previous approaches have used adversarial losses for human pose estimation, with a discriminator differentiating real/fake 2D poses~\cite{Chen_2017_ICCV} and real/fake 3D poses~\cite{Tung_2017_ICCV,Black2018}. To estimate 3D, these techniques still require 3D data or prior 3D pose models. In contrast, our approach applies an adversarial loss over randomly projected 2D poses of the generated 3D skeletons. Previous works on image-to-image translation such as CycleGAN~\cite{CycleGANICCV2017} or CyCADA~\cite{CyCADA} also rely on a cycle consistency loss in the image domain to enable unsupervised training. However, we exploit geometric self-consistency and apply consistency losses on 3D and 2D joint locations, resulting in a novel method for lifting.
\section{Introduction}
We start this paper with an intuitive idea in general terms.
Consider the optimal control problem
\begin{equation*}\boxed{
\begin{split}
& \inf \frac{1}{T}\int_0^Tf^0(y(t),u(t))\,dt,\\
& \text{subject to}\;\; \dot{y}(t)=f(y(t),u(t)), \quad t\in[0,T], \\[2mm]
\end{split}}
\end{equation*}
under some terminal state conditions, with $T>0$ large. Setting $s=t/T$ and $\varepsilon=1/T$, we rewrite the above optimal control problem as
\begin{equation*}\boxed{
\begin{split}
& \inf \int_0^1f^0(y(s),u(s))\,ds,\\
& \text{subject to}\;\;\varepsilon \dot y(s) = f(y(s),u(s)),\quad s\in[0,1].\\[2mm]
\end{split}}
\end{equation*}
Then, we expect that, as $\varepsilon\rightarrow 0$, there is some convergence to the static problem
$$
\boxed{
\inf f^0(y,u), \;\;\text{subject to}\;\;f(y,u)=0. \\[2mm]
}$$
This intuition has been turned into rigorous results in the literature, under some appropriate assumptions. These results say roughly that, if $T$ is large, then any optimal solution $y(\cdot)$ on $[0,T]$ spends most of its time close to an optimal solution $ y_s$ of the static problem.
This is the (neighborhood) turnpike phenomenon. We call the point $y_s$ a turnpike point.
This turnpike phenomenon was first observed and investigated by economists for discrete-time optimal control problems
(see, e.g., \cite{DorfmanSamuelsonSolow,Mc}). In the last three decades, many turnpike results have been established in a large number of works (see, e.g., \cite{Kokotovic, AL, CHJ, CarlsonBOOK, Grune1, Faulwasser1, GTZ, LWei, Rapaport, Z1, Z2, Z3} and references therein), either for discrete-time or continuous-time problems
involving control systems in finite-dimensional state spaces; very few of them address the infinite-dimensional setting.
A more quantitative turnpike property, called the exponential turnpike property, has been established in \cite{PZ1,PZ2,TZ1} for both linear and nonlinear continuous-time optimal control problems.
It means that the optimal solution of the dynamic control problem remains exponentially close to an optimal solution of the corresponding static control problem over a sufficiently large time interval contained in the long-time horizon under consideration. We stress that in those works not only the optimal state and control, but also the corresponding adjoint vector, resulting from the application of the Pontryagin maximum principle, were shown to remain exponentially close to an extremal triple of a corresponding static optimal control problem, except at the extremities of the time horizon. The main ingredients in the papers \cite{PZ1,PZ2,TZ1}
are an exponential dichotomy transformation and the hyperbolicity of the Hamiltonian system deriving from the Pontryagin maximum principle, under suitable controllability and observability assumptions.
However, not all turnpike phenomena are around a single point. For instance,
the turnpike theorem for calculus of variations problems in \cite{Rapaport} is proved in the case where there are several turnpikes. More precisely, the authors show that
there is a competition between the several turnpikes for optimal trajectories with different initial states,
and they provide in particular a criterion for the choice of the turnpikes that are in competition.
On the other hand, for some classes of optimal control problems for periodic systems, the turnpike phenomenon may occur around a periodic trajectory, which is itself characterized as being the optimal solution of an appropriate periodic optimal control problem (cf., e.g., \cite{Sa, TZZ, Z2, Z3, Zon}).
\medskip
In this paper, our first main result is a more general turnpike theorem, valid for very general classes of optimal control problems settled in an infinite-dimensional state space, in which the turnpike phenomenon occurs around a set $\mathcal{T}$. This generalizes the standard case where $\mathcal{T}$ is a singleton, and the less standard case where $\mathcal{T}$ is a periodic trajectory. Between the case of a singleton and that of a periodic trajectory, however,
there are, to our knowledge, very few examples of intermediate situations in the literature.
The organization of the paper is as follows.
In Section \ref{general}, we build up an abstract framework to derive a general turnpike phenomenon around a set.
In Section \ref{sec_turnpike}, we enlighten the relationship between the above-mentioned abstract framework
and the strict dissipativity property. Under the strict dissipativity assumption for optimal control problems, we establish the so-called measure-turnpike property.
In Section \ref{sec_dissip}, we provide some material to clarify the relationship between measure-turnpike, strict dissipativity and strong duality. Finally, Section \ref{consec} concludes the paper.
\section{An abstract setting}\label{general}
In this section, we are going to derive a general turnpike phenomenon around a set $\mathcal{T}$. The framework is the following.
Let $X$ (resp., $U$) be a reflexive Banach space endowed with the norm $\|\cdot\|_X$ (resp., $\|\cdot\|_U$).
Let $f: \mathbb R \times X\times U\rightarrow X $ be a continuous mapping that is uniformly Lipschitz continuous in $(y,u)$ for all $t\in\mathbb R$. Let $f^0: \mathbb R\times X\times U\rightarrow \mathbb R$ be a continuous function that is bounded from below.
Let $E$ and $F$ be two subsets of $X$ and $U$, respectively.
Given any $t_0\in\mathbb R$ and $t_1\in\mathbb R$ with $t_0<t_1$, we consider the non-autonomous optimal control problem
$$\boxed{
(P_{[t_0,t_1]})\qquad \left\{\begin{array}{l}
J_{[t_0,t_1]} = \inf\frac{1}{t_1-t_0} \int_{t_0}^{t_1} f^0(t,y(t),u(t))\, dt,\\[2mm]
\text{subject to}\;\;\;\;\dot y(t) = A(t)y+f(t,y(t),u(t)),\quad t\in[t_0,t_1],\\[2mm]
R(t_0,y(t_0),t_1,y(t_1))=0,\quad (y(t),u(t))\in E\times F, \quad t\in[t_0,t_1].\\[2mm]
\end{array}\right.}
$$
Here,
$(A(t), D(A(t)))$ is a family of unbounded operators on $X$ such that the existence of the corresponding two-parameter evolution system $\Phi (t, s)$ is ensured (cf., e.g.,
\cite[Chapter 5, Definition 5.3]{Pa}),
the controls are Lebesgue measurable functions $u(\cdot) : [t_0,t_1] \rightarrow F$, and, $Y$ being a Banach space, the mapping $R: \mathbb R\times X\times\mathbb R\times X\rightarrow Y$ encodes any possible terminal state conditions.
Throughout the paper, the solutions $(y(\cdot),u(\cdot))\in C([t_0,t_1];X)\times L^2(t_0,t_1;U)$ are considered in the mild sense, meaning that
$$y(\tau)=\Phi(\tau,t_0)y(t_0)+\int_{t_0}^\tau \Phi(\tau,t) f(t,y(t),u(t))\,dt,\qquad\forall\tau\in[t_0,t_1].$$
\begin{remark}
\emph{
Typical examples of terminal conditions are the following:
\begin{itemize}
\item When both initial and final conditions are let free in $(P_{[t_0,t_1]})$, take $R=0$.
\item When the initial point is fixed (i.e., $y(t_0)=y_0$) and the final point is let free, take $R(s_0,z_0,s_1,z_1)=z_0-y_0$.
\item When both initial and final conditions are fixed (i.e., $y(t_0)=y_0$ and $y(t_1)=y_1$), take $R(s_0,z_0,s_1,z_1)=(z_0-y_0,z_1-y_1)$.
\item When the final point is expected to coincide with the initial point (i.e., $y(t_0)=y(t_1)$ without any other constraint), for instance in a periodic optimal control problem, in which one assumes that there exists
$T>0$ such that $f(t+T,y,u)=f(t,y,u)$ and $f^0(t+T,y,u)=f^0(t,y,u)$, $\forall (t,y,u)\in \mathbb R\times X\times U$,
take $R(s_0,z_0,s_1,z_1)=(s_1-s_0-T, z_0-z_1)$.
\end{itemize}
}
\end{remark}
Hereafter, we call $(y(t),u(t))$, $t\in[t_0,t_1]$, an admissible pair if it satisfies the state equation and the constraint $(y(t),u(t))\in E\times F$ for almost every $t\in[t_0,t_1]$. We remark that the definition of an admissible pair does not require the terminal state condition $R(t_0,y(t_0),t_1,y(t_1)) = 0$ to be satisfied.
We denote by $$C_{[t_0,t_1]}(y(\cdot),u(\cdot)) = \int_{t_0}^{t_1} f^0(t,y(t),u(t))\, dt$$ the cost of an admissible pair $(y(\cdot),u(\cdot))$ on $[t_0,t_1]$.
In other words, $J_{[t_0,t_1]}$ is the infimum of the time-averaged cost ({\it Ces\`aro mean}) over all admissible pairs satisfying the constraint on the terminal points:
$$
J_{[t_0,t_1]} = \inf \left\{ \frac{1}{t_1-t_0}C_{[t_0,t_1]}(y(\cdot),u(\cdot)) \ \mid\ (y(\cdot),u(\cdot))\textrm{ admissible},\ R(t_0,y(t_0),t_1,y(t_1))=0 \right\}.
$$
Throughout the paper, we assume that the problem $(P_{[t_0,t_1]})$ has optimal solutions. An admissible pair $(y(\cdot),u(\cdot))$, with initial state $y(t_0)$, is said to be optimal for the problem $(P_{[t_0,t_1]})$ if $R(t_0,y(t_0),t_1,y(t_1))=0$ and $\frac{1}{t_1-t_0}C_{[t_0,t_1]}(y(\cdot),u(\cdot))=J_{[t_0,t_1]}$.
Existence of optimal solutions for optimal control problems is well-known under appropriate convexity assumptions on $f^0$, $f$ and $R$ with $E$ and $F$ convex and closed (see, for instance, \cite[Chapter 3]{LiXunjing}).
\medskip
We then consider the optimal control problem
$$\boxed{
(\bar P_{[t_0,t_1]})\qquad \left\{\begin{array}{l}
\bar J_{[t_0,t_1]} = \inf\frac{1}{t_1-t_0}C_{[t_0,t_1]}(y(\cdot),u(\cdot)),\\[2mm]
\text{subject to}\;\;\;\;\dot y(t) =A(t)y+ f(t,y(t),u(t)), \quad t\in[t_0,t_1],\\[2mm]
\quad (y(t),u(t))\in E\times F,\quad t\in[t_0,t_1].\\[2mm]
\end{array}\right.}
$$
Compared with the problem $(P_{[t_0,t_1]})$, the above problem carries no terminal state constraint, i.e., $R(\cdot)=0$.
Its value is the infimum of the time-averaged cost over all possible admissible pairs:
$$
\bar J_{[t_0,t_1]} = \inf \left\{ \frac{1}{t_1-t_0}C_{[t_0,t_1]}(y(\cdot),u(\cdot)) \ \mid\ (y(\cdot),u(\cdot))\textrm{ admissible} \right\}.
$$
We say that the problem $(\bar P_{[t_0,t_1]})$ has a limit value if $\lim_{t_1\rightarrow+\infty}\bar J_{[t_0,t_1]}$ exists.
We refer to \cite{GR, QR} for sufficient conditions ensuring the existence of the limit value.
More precisely, asymptotic properties of optimal values, as $t_1$ tends to infinity, have been studied in \cite{QR} under suitable nonexpansivity assumptions, and in \cite[Corollary 4 (iii)]{GR} by using occupational measures.
In the sequel, we assume that this limit exists, and we write it as
$$
\bar J_{[t_0,+\infty)} = \lim_{t_1\rightarrow +\infty} \bar J_{[t_0,t_1]}.
$$
Besides, given any $y\in X$ we define the value function
$$
V_{[t_0,t_1]}(y) = \inf \left\{ \frac{1}{t_1-t_0}C_{[t_0,t_1]}(y(\cdot),u(\cdot)) \ \mid\ (y(\cdot),u(\cdot))\textrm{ admissible},\ y(t_0)=y \right\}.
$$
It is the optimal value of the optimal control problem with fixed initial data $y(t_0)=y$ (but free final point). Note that, if there exists no admissible trajectory starting at $y$ (because $E$ would not contain $y$), then we set $V_{[t_0,t_1]}(y)=+\infty$.
We now assume that, for each $y\in X$, the limit $\lim_{t_1\rightarrow+\infty} V_{[t_0,t_1]}(y)$ exists, and we write it as
$$
V_{[t_0,+\infty)}(y) = \lim_{t_1\rightarrow +\infty} V_{[t_0,t_1]}(y).
$$
Clearly, we have
$$
\forall t_0<t_1,\qquad J_{[t_0,t_1]}\geq \bar J_{[t_0,t_1]},
$$
and thus
\begin{equation}\label{lower limit}
\liminf_{t_1\rightarrow +\infty} J_{[t_0,t_1]}\geq \bar J_{[t_0,+\infty)}.
\end{equation}
Meanwhile,
$$
\forall t_0<t_1,\quad\forall y\in X,\qquad V_{[t_0,t_1]}(y)\geq \bar J_{[t_0,t_1]},
$$
and thus
$$
\forall y\in X,\qquad V_{[t_0,+\infty)}(y)\geq \bar J_{[t_0,+\infty)}.
$$
\medskip
\begin{remark}\label{inv}
\emph{If the optimal control problem is autonomous (i.e., $A(\cdot)=A$, $f$ and $f^0$ are independent of time variable), it follows from the definitions that $\bar J_{[t_0,+\infty)}$, as well as
$V_{[t_0,+\infty)}(y)$, $\forall y\in X$, do not depend on $t_0\in \mathbb R$.}
\end{remark}
\begin{remark}\label{re2}
\emph{
Actually we have
$$\bar J_{[t_0,t_1]} = \inf_{y\in X} V_{[t_0,t_1]}(y).$$
This is obvious because we can split the infimum and write
$$
\bar J_{[t_0,t_1]} = \inf_{y\in X} \inf_{\stackrel{(y(\cdot),u(\cdot))\textrm{ admissible}}{y(t_0)=y}} \frac{1}{t_1-t_0}C_{[t_0,t_1]}(y(\cdot),u(\cdot)) = \inf_{y\in X} V_{[t_0,t_1]}(y).
$$}
\end{remark}
\medskip
In order to state the general turnpike result, we make the following assumptions:
\begin{itemize}
\item[$ (H_1)$.] (Turnpike set) There exists a closed set $\mathcal{T}\subset X$ (called turnpike set) such that
$$
\qquad\forall t_0\in\mathbb R,\quad\forall y\in \mathcal{T},\qquad V_{[t_0,+\infty)}(y)=\bar J_{[t_0,+\infty)}.
$$
\end{itemize}
\begin{itemize}
\item[$ (H_2)$.] (Viability) The turnpike set $\mathcal{T}$ is \emph{viable}, meaning that, for every $y\in\mathcal{T}$ and for every $t_0\in\mathbb R$, there exists an admissible pair $(y(\cdot),u(\cdot))$ such that $y(t_0)=y$ and $y(t)\in\mathcal{T}$ for every $t\geq t_0$.
Moreover, every admissible trajectory remaining in $\mathcal{T}$ is optimal in the following sense: for every $y\in\mathcal{T}$, for every $t_0\in\mathbb R$, for every admissible pair $(y(\cdot),u(\cdot))$ such that $y(t_0)=y$ and $y(t)\in\mathcal{T}$ for every $t\geq t_0$, we have
$$
V_{[t_0,+\infty)}(y) = \lim_{t\rightarrow+\infty} \frac{1}{t-t_0} C_{[t_0,t]}(y(\cdot),u(\cdot)) .
$$
\end{itemize}
\begin{itemize}
\item[$(H_3)$.] (Controllability) There exist $\bar \delta_0>0$ and $\bar \delta_1>0$ such that, for every $t_0\in\mathbb R$
and every $t_1\in\mathbb R$ with $t_1>t_0+\bar \delta_0+\bar \delta_1$, and every optimal trajectory
$y(\cdot)$ for the problem $(P_{[t_0,t_1]})$,
\begin{itemize}
\item there exist $\delta_0\in(0,\bar \delta_0]$ and an admissible pair $(y_0(\cdot),u_0(\cdot))$ on $[t_0,t_0+\delta_0]$ such that $y_0(t_0)=y(t_0)$ and $y_0(t_0+\delta_0)\in \mathcal{T}$,
\item for every $y\in\mathcal{T}$, there exist $\delta_1\in(0,\bar \delta_1]$ and an admissible pair $(y_1(\cdot),u_1(\cdot))$ on $[t_1-\delta_1,t_1]$ such that $y_1(t_1-\delta_1)=y$ and $y_1(t_1)=y(t_1)$.
\end{itemize}
\item[$(H_4)$.] (Coercivity)
There exist a monotone increasing continuous function $\beta:[0,+\infty)\rightarrow[0,+\infty)$ with $\beta(0)=0$ and a distance
$\text{dist} (\cdot,\mathcal T)$ to $\mathcal T$
such that
for all $t_0<t_1$ and every $\hat y\in X$,
$$
V_{[t_0,t_1]}(\hat y) \geq \inf_{y\in X}V_{[t_0,t_1]}(y) + \frac{1}{t_1-t_0}\int_{t_0}^{t_1}
\beta(\text{dist}(\hat y(t),\mathcal{T}))\,dt +\mathrm o(1),$$
holds for any optimal trajectory $\hat y(\cdot)$ starting at $\hat y(t_0)=\hat y$ for the problem $(P_{[t_0,t_1]})$,
where the last term in the above inequality is an infinitesimal quantity as $t_1\rightarrow +\infty$.
\end{itemize}
\medskip
Hereafter, we speak of
\textbf{Assumption (H)} in order to designate assumptions $(H_1)$, $(H_2)$, $(H_3)$ and $(H_4)$.
\medskip
\begin{remark}
\begin{itemize}
\item[(i).] \emph{Under $(H_1)$, we actually have
$\bar J_{[t_0,+\infty)} = \inf_{y\in X} V_{[t_0,+\infty)}(y)$, $\forall t_0\in \mathbb R$.
}
\item[(ii).] \emph{$(H_2)$ means that, starting at $y\in\mathcal{T}$, it is better to remain in $\mathcal{T}$ than to leave this set.}
\item[(iii).] \emph{$(H_3)$ is a specific controllability assumption.
For instance, in the case that the initial point $y(t_0)=y_0$ and the final point $y(t_1)=y_1$ in the problem $(P_{[t_0,t_1]})$ are fixed, then $(H_3)$ means that the turnpike set $\mathcal{T}$ is reachable from $y_0$ within time $\bar\delta_0$, and that $y_1$ is reachable from any point of $\mathcal{T}$ within time $\bar \delta_1$. When the turnpike set $\mathcal T$
is a single point, we refer the reader to \cite{Faulwasser1} for a similar assumption. }
\item[(iv).] \emph{$(H_4)$ is a coercivity assumption involving the value function and the turnpike set $\mathcal T$.
It may not be easy to verify this condition. However, under the strict dissipativity property (which will be introduced in the next section), it is satisfied. We refer the reader to Section \ref{rd} for more discussions about the relationship with the strict dissipativity.
}
\end{itemize}
\end{remark}
\medskip
We first give a simple example satisfying \textbf{Assumption (H)}.
\begin{example}
Let $\Omega\subset\mathbb R^n$, $n\geq1$, be a bounded domain with a smooth boundary $\partial \Omega$, and let $\mathcal D\subset\Omega$ be a non-empty
open subset. We denote by $\chi_\mathcal D$ the characteristic function of $\mathcal D$.
Let $M>0$ and $y_0\in L^2(\Omega)$ be arbitrarily given.
For $t_0< t_1$, consider the following optimal control problem for the heat equation:
$$\;\;\;\;\;\;\;\; \inf\, \frac{1}{t_1-t_0}\int_{t_0}^{t_1}\Big(\|y(\cdot,t)\|_{L^2(\Omega)}^2+\|u(\cdot,t)\|_{L^2(\mathcal D)}^2\Big)\,dt$$
subject to
\begin{equation*}\left\{
\begin{split}
&y_t-\Delta y=\chi_{\mathcal D} u,\;\;\text{in}\;\;\Omega\times(t_0,t_1),\\
&y=0,\;\;\text{on}\;\;\partial\Omega\times(t_0,t_1),\\
&y(\cdot,t_0)=y_0,\;\;y(\cdot,t_1)=0,\;\;\text{in}\;\;\Omega,\\
&\|u(\cdot,t)\|_{L^2(\mathcal D)}\leq M,\;\;\text{for a.e.}\;\;t\in(t_0,t_1).
\end{split}\right.
\end{equation*}
Here, we take $X=L^2(\Omega)$, $U=L^2(\mathcal D)$ and $F=\{u\in U\,\,|\,\, \|u\|_{L^2(\mathcal D)}\leq M\}$. By the standard energy estimate, we can take $E=\{y\in X\,\,|\,\, \|y\|_{L^2(\Omega)}\leq \|y_0\|_{L^2(\Omega)}+M/\lambda_1\}$,
where $\lambda_1>0$ is the first eigenvalue of the Laplace operator with zero Dirichlet boundary condition on $\partial \Omega$.
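For completeness, let us sketch the standard energy estimate behind this choice of $E$: testing the equation with $y$ and using the Poincar\'e and Cauchy-Schwarz inequalities, we get
$$
\frac{1}{2}\frac{d}{dt}\|y(\cdot,t)\|_{L^2(\Omega)}^2=-\|\nabla y(\cdot,t)\|_{L^2(\Omega)}^2+\langle y(\cdot,t),\chi_{\mathcal D}u(\cdot,t)\rangle_{L^2(\Omega)}\leq -\lambda_1\|y(\cdot,t)\|_{L^2(\Omega)}^2+M\|y(\cdot,t)\|_{L^2(\Omega)},
$$
so that $\frac{d}{dt}\|y(\cdot,t)\|_{L^2(\Omega)}\leq -\lambda_1\|y(\cdot,t)\|_{L^2(\Omega)}+M$, and the Gronwall inequality yields
$$
\|y(\cdot,t)\|_{L^2(\Omega)}\leq e^{-\lambda_1(t-t_0)}\|y_0\|_{L^2(\Omega)}+\frac{M}{\lambda_1}\big(1-e^{-\lambda_1(t-t_0)}\big)\leq \|y_0\|_{L^2(\Omega)}+\frac{M}{\lambda_1}.
$$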
It is clear that $\bar J_{[t_0,+\infty)}=0$. Let us define the turnpike set $\mathcal T=\{0\}$. By the $L^\infty$-null controllability and the exponential decay of the energy of the heat equation, the above control system with bounded controls is null controllable from each given point $y_0$ within a sufficiently large time interval (see, e.g., \cite{wang}). Therefore, the assumptions $(H_1)$, $(H_2)$ and $(H_3)$ are satisfied.
Let $y(\cdot)$ be any optimal trajectory starting at $y(\cdot, t_0)=y_0$.
By the definition of the value function, we see that
$$
V_{[t_0,t_1]}(y_0)\geq \frac{1}{t_1-t_0}\int_{t_0}^{t_1} \|y(\cdot,t)\|^2_{L^2(\Omega)}\,dt.
$$
Since $\inf_{y\in X}V_{[t_0,t_1]}(y)=0$ (take the null initial state and the null control), $(H_4)$ is satisfied with $\beta (r)=r^2$, $r\geq 0$, and $\text{dist}(y,\mathcal T)=\|y\|_{L^2(\Omega)}$.
\end{example}
\medskip
The main result of this paper is the following. It says that a general turnpike behavior occurs around the turnpike set $\mathcal T$,
in terms of the time average of the distance from optimal trajectories to $\mathcal T$.
\begin{theorem}\label{thm1}
Assume that $f^0$ is bounded on $\mathbb R\times E\times F$.
\begin{enumerate}
\item[(i).] Under $(H_1)$, $(H_2)$ and $(H_3)$, for every $t_0\in\mathbb R$ we have
\begin{equation}\label{mi1}
\lim_{t_1\rightarrow +\infty} J_{[t_0,t_1]} = \bar J_{[t_0,+\infty)}.
\end{equation}
\item[(ii).] Further, under the additional assumption $(H_4)$
we have
\begin{equation}\label{guai1}
\lim_{t_1\rightarrow+\infty}\frac{1}{t_1-t_0}\int_{t_0}^{t_1}\beta (\text{dist} (y(t),\mathcal T))\,dt =0,
\end{equation}
for any $t_0$ and any optimal trajectory $y(\cdot)$ of the problem $(P_{[t_0,t_1]})$.
\end{enumerate}
\end{theorem}
\medskip
\begin{remark}\label{remove}
{\it
The boundedness assumption on $f^0$ in Theorem \ref{thm1} can be removed in case the optimal control problem is autonomous, i.e., when $A$, $f$ and $f^0$ do not depend on $t$, provided that the controllability assumption $(H_3)$ be slightly reinforced, by assuming ``controllability with finite cost": one can steer $y(t_0)$ to the turnpike set $\mathcal T$ within time
$\delta_0$ and steer any point of $\mathcal T$ to $y(t_1)$ within time $\delta_1$ with a cost that is uniformly bounded with respect to every optimal trajectory $y(\cdot)$ and $y\in \mathcal T$. For non-autonomous control problems, see also Remark~\ref{dubao1}.
}
\end{remark}
The property \eqref{guai1} is a weak turnpike property, which can be called the $\beta$-integral-turnpike property, and which is even weaker than the measure-turnpike property introduced further in Section \ref{dis}. Indeed, from \eqref{guai1} we infer that, for any $\delta>0$, there exists $T_0>t_0$ such that
\begin{equation*}
\frac{1}{t_1-t_0}\int_{t_0}^{t_1}\beta (\text{dist} (y(t),\mathcal T))\,dt\leq \delta
\end{equation*}
for any $t_1\geq T_0$.
For any $\varepsilon>0$, we set
\begin{equation*}
Q^\varepsilon_{[t_0,t_1]}=\big\{t\in [t_0,t_1]\ \mid\
\text{dist} (y(t),\mathcal T)>\varepsilon\big\},\;\;\;\;\forall t_1\geq T_0.
\end{equation*}
Throughout the paper, we denote by $|Q|$ the Lebesgue measure of a measurable subset $Q\subset\mathbb R$.
Then, by Markov's inequality, one can easily derive that
$$
\frac{\left\vert Q^\varepsilon_{[t_0,t_1]}\right\vert}{t_1-t_0}\leq \frac{\delta}{\beta(\varepsilon)},\;\;\;\;\forall t_1\geq T_0.
$$
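Indeed, since $\beta$ is monotone increasing, we have $\beta(\text{dist}(y(t),\mathcal T))\geq\beta(\varepsilon)$ for every $t\in Q^\varepsilon_{[t_0,t_1]}$, whence
$$
\beta(\varepsilon)\,\big\vert Q^\varepsilon_{[t_0,t_1]}\big\vert\leq \int_{Q^\varepsilon_{[t_0,t_1]}}\beta (\text{dist} (y(t),\mathcal T))\,dt
\leq \int_{t_0}^{t_1}\beta (\text{dist} (y(t),\mathcal T))\,dt\leq \delta\, (t_1-t_0).
$$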
This is weaker than the property \eqref{bairui1} in Section \ref{dis}.
\begin{proof}[\textbf{Proof of Theorem~\ref{thm1}}]
$(i)$. Let $t_1>t_0+\bar\delta_0+\bar\delta_1$, with $\bar\delta_0$ and $\bar\delta_1$ as in $(H_3)$.
Let $(y(\cdot),u(\cdot))$ be an optimal pair for the problem $(P_{[t_0,t_1]})$.
By $(H_2)$ and $(H_3)$, there exist $\delta_0\in(0,\bar\delta_0]$, $\delta_1\in(0,\bar\delta_1]$ and an admissible pair $(\widetilde y(\cdot),\widetilde u(\cdot))$ such that
\begin{itemize}
\item $\widetilde y(\cdot)$ steers the control system from $y(t_0)$ to $\mathcal{T}$ within the time interval $[t_0,t_0+\delta_0]$,
\item $\widetilde y(\cdot)$ remains in $\mathcal{T}$ within the time interval $[t_0+\delta_0,t_1-\delta_1]$,
\item $\widetilde y(\cdot)$ steers the control system from $\widetilde y(t_1-\delta_1)\in\mathcal{T}$ to $y(t_1)$ within the time interval $[t_1-\delta_1,t_1]$.
\end{itemize}
These trajectories are drawn in Figure \ref{fig1}.
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{turnpike.png}
\caption{Optimal trajectory $y(\cdot)$, and admissible trajectory $\widetilde y(\cdot)$ remaining along the turnpike set $\mathcal{T}$ as long as possible.}
\label{fig1}
\end{figure}
Its time-averaged cost on the time interval $[t_0,t_1]$ is
\begin{multline}\label{sum}
\frac{1}{t_1-t_0} C_{[t_0,t_1]}(\widetilde y(\cdot),\widetilde u(\cdot))
= \frac{1}{t_1-t_0} C_{[t_0,t_0+\delta_0]}(\widetilde y(\cdot),\widetilde u(\cdot)) + \frac{1}{t_1-t_0} C_{[t_1-\delta_1,t_1]}(\widetilde y(\cdot),\widetilde u(\cdot)) \\
+ \frac{1}{t_1-t_0} C_{[t_0+\delta_0,t_1-\delta_1]}(\widetilde y(\cdot),\widetilde u(\cdot)).
\end{multline}
Since $f^0$ is bounded on $\mathbb R\times E\times F$, the first two terms on the right hand side of \eqref{sum} converge to zero as $t_1\rightarrow +\infty$.
Since $\widetilde y(t_0+\delta_0)\in\mathcal{T}$, by $(H_2)$ we have
\begin{equation}\label{tj3}
V_{[t_0+\delta_0,+\infty)}(\widetilde y(t_0+\delta_0))=
\lim_{t_1\rightarrow+\infty}\frac{1}{t_1-\delta_1-(t_0+\delta_0)}C_{[t_0+\delta_0,t_1-\delta_1]}(\widetilde y(\cdot),\widetilde u(\cdot) ).
\end{equation}
As $\widetilde y(t_0+\delta_0)\in\mathcal{T}$, by $(H_1)$ we infer
\begin{equation}\label{tj2}
V_{[t_0+\delta_0,+\infty)}(\widetilde y(t_0+\delta_0))=\bar J_{[t_0+\delta_0,+\infty)}.
\end{equation}
We now claim that
\begin{equation}\label{tianjing1}
\bar J_{[t_0+\delta_0,+\infty)}\leq \bar J_{[t_0,+\infty)}.
\end{equation}
We postpone the proof of this claim and first show how it can be used to establish the convergence \eqref{mi1}.
Combining \eqref{tj2} and \eqref{tianjing1}, we derive that
\begin{equation*}
V_{[t_0+\delta_0,+\infty)}(\widetilde y(t_0+\delta_0))\leq \bar J_{[t_0,+\infty)}.
\end{equation*}
This, together with \eqref{sum} and \eqref{tj3}, indicates that
\begin{equation}\label{hui1}
\lim_{t_1\rightarrow+\infty}\frac{1}{t_1-t_0} C_{[t_0,t_1]}(\widetilde y(\cdot),\widetilde u(\cdot))\leq \bar J_{[t_0,+\infty)}.
\end{equation}
On the other hand, since, by the construction above, $(\widetilde y(\cdot),\widetilde u(\cdot))$ is an admissible pair satisfying the terminal state constraint $R(t_0,\widetilde y(t_0),t_1,\widetilde y(t_1))=0$, we have
$$
J_{[t_0,t_1]} \leq \frac{1}{t_1-t_0} C_{[t_0,t_1]}(\widetilde y(\cdot),\widetilde u(\cdot)).
$$
This, combined with \eqref{hui1}, implies that
$$\limsup_{t_1\rightarrow+\infty} J_{[t_0,t_1]} \leq \bar J_{[t_0,+\infty)},$$
which, along with \eqref{lower limit}, leads to \eqref{mi1}.
Next, we present the proof of the claim \eqref{tianjing1}. Let $(\bar y(\cdot),\bar u(\cdot))$ be an optimal pair
for the problem $(\bar P_{[t_0,t_1]})$. Then
\begin{multline*}
\bar J_{[t_0,t_1]}-\bar J_{[t_0+\delta_0,t_1]}
=
\frac{1}{t_1-t_0}\int_{t_0}^{t_0+\delta_0}f^0(t,\bar y(t),\bar u(t))\,dt+\frac{1}{t_1-t_0}
\int_{t_0+\delta_0}^{t_1}f^0(t,\bar y(t),\bar u(t))\,dt-\bar J_{[t_0+\delta_0,t_1]}\\
=\frac{1}{t_1-t_0}\int_{t_0}^{t_0+\delta_0}f^0(t,\bar y(t),\bar u(t))\,dt+\Big(\frac{t_1-t_0-\delta_0}{t_1-t_0}-1\Big)\times
\frac{1}{t_1-t_0-\delta_0}\int_{t_0+\delta_0}^{t_1}f^0(t,\bar y(t),\bar u(t))dt\\
+
\frac{1}{t_1-t_0-\delta_0}\int_{t_0+\delta_0}^{t_1}f^0(t,\bar y(t),\bar u(t))dt-\bar J_{[t_0+\delta_0,t_1]}.
\end{multline*}
Since $(\bar y(\cdot),\bar u(\cdot))$ is also admissible for the problem $(\bar P_{[t_0+\delta_0,t_1]})$, we have
\begin{equation*}
\frac{1}{t_1-t_0-\delta_0}\int_{t_0+\delta_0}^{t_1}f^0(t,\bar y(t),\bar u(t))dt\geq\bar J_{[t_0+\delta_0,t_1]},
\end{equation*}
and therefore
\begin{multline*}
\bar J_{[t_0,t_1]}-\bar J_{[t_0+\delta_0,t_1]}\\ \geq
\frac{1}{t_1-t_0}\int_{t_0}^{t_0+\delta_0}f^0(t,\bar y(t),\bar u(t))\,dt+\Big(\frac{t_1-t_0-\delta_0}{t_1-t_0}-1\Big)\times
\frac{1}{t_1-t_0-\delta_0}\int_{t_0+\delta_0}^{t_1}f^0(t,\bar y(t),\bar u(t))dt.\\
\end{multline*}
By the boundedness of $f^0$ on $\mathbb R\times E\times F$ (i.e., there exists $M>0$ such that $|f^0(\cdot)|\leq M$), we obtain
\begin{equation*}
\bar J_{[t_0,t_1]}-\bar J_{[t_0+\delta_0,t_1]} \geq -\frac{2M\delta_0}{t_1-t_0},
\end{equation*}
which implies \eqref{tianjing1} as $t_1\rightarrow +\infty$.
\medskip
$(ii)$. By the definition of the value function $V_{[t_0,t_1]}(\cdot)$, we obtain
\begin{equation}\label{jican1}
V_{[t_0,t_1]}(y(t_0))\leq\frac{1}{t_1-t_0} C_{[t_0,t_1]}(y(\cdot),u(\cdot)).
\end{equation}
By $(H_4)$ and Remark~\ref{re2}, we have
\begin{equation}\label{jican2}
V_{[t_0,t_1]}(y(t_0))
\geq \bar J_{[t_0,t_1]}+\frac{1}{t_1-t_0}\int_{t_0}^{t_1}
\beta(\text{dist}(y(t),\mathcal{T}))\,dt +\mathrm o(1),
\end{equation}
as $t_1\rightarrow +\infty$.
By \eqref{mi1} we infer
$$
\lim_{t_1\rightarrow+\infty}\frac{1}{t_1-t_0} C_{[t_0,t_1]}(y(\cdot),u(\cdot))=\lim_{t_1\rightarrow+\infty}J_{[t_0,t_1]}=\bar J_{[t_0,+\infty)}.$$
This, together with \eqref{jican1} and \eqref{jican2}, indicates
$$
\limsup_{t_1\rightarrow+\infty}\frac{1}{t_1-t_0}\int_{t_0}^{t_1}
\beta(\text{dist}(y(t),\mathcal{T}))\,dt =0,
$$
which completes the proof.
\end{proof}
\begin{remark}\label{nonc}
{\it
In the proof of Theorem \ref{thm1}, the role of controllability assumption $(H_3)$ is to ensure that there is an admissible trajectory $\widetilde y(\cdot)$ satisfying the terminal state condition $R(t_0,\widetilde y(t_0), t_1,\widetilde y(t_1))=0$ and with a comparable cost (i.e., \eqref{hui1}).
Note that $(H_3)$ can be weakened to cover some cases where controllability may fail:
take any control system that is asymptotically controllable to the turnpike set $\mathcal{T}$. This is the case for the heat equation, which is asymptotically controllable from any given point (cf., e.g., \cite[Chapter 7]{LiXunjing}). Then, after waiting for a certain time, one arrives in some neighborhood of $\mathcal{T}$.
Similarly, to run the proof as in Theorem \ref{thm1}, one needs an assumption which is stronger than $(H_2)$. More precisely, one needs viability, not only along $\mathcal{T}$, but also in a neighborhood of $\mathcal{T}$.
Under these assumptions, we believe that one can design a turnpike result for this control system with free final point.
In any case, note that, when the final point is free, having a turnpike property is more or less equivalent to having an asymptotic stabilization to $ \mathcal{T}$ (see also an analogous discussion in \cite[Remark 2]{Faulwasser1}).
If additionally one wants to fix the final point, then one would need the existence of a trajectory steering any point of the neighborhood of $\mathcal{T}$ to the final point.}
\end{remark}
\begin{remark}\label{dubao1}
{\it
As seen in the proof of Theorem \ref{thm1}, the boundedness assumption on $f^0$ is used twice: first, in order to bound the first two terms of \eqref{sum}; second, in order to prove \eqref{tianjing1}. For autonomous optimal control problems, on the one hand we have $\bar J_{[t_0+\delta_0,+\infty)}= \bar J_{[t_0,+\infty)}$ (see Remark \ref{inv}), and then \eqref{tianjing1} holds; on the other hand, the first two terms at the right-hand side of \eqref{sum} converge to zero as
$t_1\rightarrow +\infty$ under the ``controllability with finite cost'' assumption mentioned in Remark \ref{remove}. In contrast, for non-autonomous optimal control problems the situation may be more complicated, in particular due to the time dependence of $f^0$. The boundedness assumption on $f^0$ is quite strong and could of course be weakened in a number of ways so as to ensure that the above proof still works. We prefer to keep this rather strong assumption in order to highlight the main line of the argument, without going into overly technical details. Variants are easy to derive according to the context.
}
\end{remark}
\section{Relationship with (strict) dissipativity}\label{sec_turnpike}\label{rd}
In this section, we make precise the relationship between the strict dissipativity property (which we recall in Section \ref{dd})
and the so-called measure-turnpike property (which we define in Section \ref{dis}).
\subsection{What is (strict) dissipativity}\label{dd}
To fix ideas, in this section we only consider the autonomous case.
Let $X$, $U$, $E$ and $F$ be the same as in Section \ref{general}. Let $A(\cdot)\equiv A$
generate a $C_0$ semigroup $\{e^{tA}: t\geq 0\}$ on $X$, and let $f$ and $f^0$ be time-independent.
To simplify the notation,
for every $T>0$, we here consider the optimal control problem
\begin{equation*}
\boxed{
(\bar P_{[0,T]})\qquad \left\{\begin{split}
& \inf J^T(y(\cdot),u(\cdot))=\frac{1}{T}\int_0^Tf^0(y(t),u(t))\,dt,\\
&\text{subject to}\;\;\; \dot{y}(t)=Ay(t)+f(y(t),u(t)), \quad t\in[0,T], \\[1mm]
& y(t)\in E, \quad u(t)\in F,\qquad t\in[0,T]. \\[1mm]
\end{split}\right.}
\end{equation*}
Indeed, the above problem $(\bar P_{[0,T]})$ coincides with $(\bar P_{[t_0,t_1]})$ in Section~\ref{general} for $t_0=0$ and $t_1=T$.
Note that the terminal states $y(0)$ and $y(T)$ are left free in the problem $(\bar P_{[0,T]})$.
Recall that the solutions $(y(\cdot),u(\cdot))\in C([0,T];X)\times L^2(0,T;U)$ are considered in the mild sense, meaning that
$$y(\tau)=e^{\tau A}y(0)+\int_{0}^\tau e^{(\tau-t)A} f(y(t),u(t))\,dt,\qquad\forall\tau\in[0,T],$$
or equivalently,
\begin{equation*}\label{5311}
\begin{split}
\langle \varphi, y(\tau)\rangle_{X^*,X}-\langle \varphi, y(0)\rangle_{X^*,X}=\int_{0}^\tau \Big( \langle A^*\varphi, y(t)\rangle_{X^*,X}+\langle \varphi,f(y(t),u(t))\rangle_{X^*,X}\Big) dt ,
\end{split}
\end{equation*}
for each $\tau\in[0,T]$ and $\varphi\in D(A^*)$, where $A^*:D(A^*)\subset X^*\rightarrow X^*$ is the adjoint operator of $A$, and $\langle\cdot,\cdot\rangle_{X^*,X}$ is the dual paring
between $X$ and its dual space $X^*$.
Likewise,
we call $(y(\cdot), u(\cdot))$ an admissible pair for the problem $(\bar P_{[0,T]})$ if it satisfies the state
equation and the above state-control constraint.
Assume that, for any $T>0$, $(\bar P_{[0,T]})$ has at least one optimal solution denoted by $(y^T(\cdot),u^T(\cdot))$, and we set
$$\bar J^T=J^T(y^T(\cdot),u^T(\cdot)).$$
Note that $\bar J^T$ does not depend on the optimal solution under consideration.
In the finite-dimensional case where $X = \mathbb R^n$ and $U=\mathbb R^m$, without loss of generality, we may take $A = 0$, and then the control system is $\dot{y}(t)=f(y(t),u(t))$.
We refer the reader to \cite{Faulwasser1,TZ1} for the asymptotic behavior of optimal solutions of such optimal control problems with constraints on the terminal states.
Consider the static optimal control problem
\begin{equation*}\boxed{
(P_s)\qquad \left\{\begin{split}
& \inf J_s(y,u)= f^0(y,u), \\
& \text{subject to}\;\;\; Ay+f(y,u)=0, \\
& y\in E, \quad u\in F, \\
\end{split}\right.}
\end{equation*}
where the first equation means that
$$
\langle A^*\varphi, y\rangle_{X^*,X}+\langle \varphi,f(y,u)\rangle_{X^*,X}=0,\qquad\forall\varphi\in D(A^*).
$$
As above, we assume that there exists at least one optimal solution $(y_s,u_s)$ of $(P_s)$. Such existence results are as well standard, for instance in the case where $A$ is an elliptic differential operator (see \cite[Chapter 3, Theorem 6.4]{LiXunjing}).
We set
$$\bar J_s=J_s(y_s,u_s).$$
Note that $\bar J_s$ does not depend on the optimal solution that is considered.
Of course, uniqueness of the minimizer cannot be ensured in general because the problem is not assumed to be convex.
Note that $(y_s,u_s)$ is admissible for the problem $(\bar P_{[0,T]})$ for any $T>0$, meaning that it satisfies the constraints and is a solution of the control system.
We next define the notion of dissipativity for the infinite-dimensional controlled system, which is originally due to \cite{Willems} for finite-dimensional dynamics (see also related definitions in \cite{Faulwasser1}). Recall that a continuous function $\alpha: [0,+\infty)\rightarrow [0,+\infty)$ is said to be a $\mathcal K$-class function if it is strictly increasing and $\alpha(0)=0$.
\begin{definition}
We say that $\{(\bar P_{[0,T]})\, \mid\, T>0\}$ is \emph{dissipative} at an optimal stationary point $(y_s, u_s)$ with respect to the \emph{supply rate function}
\begin{equation}\label{5234}
\omega(y,u)= f^0(y,u)-f^0(y_s, u_s),\qquad \forall(y,u)\in E\times F,
\end{equation}
if there exists a \emph{storage function} $S:E\rightarrow\mathbb R$, locally bounded and bounded from below, such that, for any $T>0$, the dissipation inequality
\begin{equation}\label{5224}
S(y(0))+\int_0^\tau\omega(y(t),u(t))\,dt\geq S(y(\tau)),\;\;\forall \tau\in[0,T],
\end{equation}
holds true, for any admissible pair $(y(\cdot),u(\cdot))$.
We say it is \emph{strictly dissipative} at $(y_s, u_s)$
with respect to the supply rate function $\omega$ if there exists a $\mathcal{K}$-class function $\alpha(\cdot)$ such that, for any $T>0$, the strict dissipation inequality
\begin{equation}\label{5225}
S(y(0))+\int_0^\tau\omega(y(t),u(t))\,dt\geq S(y(\tau))+\int^\tau_0\alpha\big(\|(y(t)- y_s, u(t)- u_s)\|_{X\times U}\big)\,dt,
\;\;\forall \tau\in[0,T],
\end{equation}
holds true, for any admissible pair $(y(\cdot),u(\cdot))$.
The function $d(\cdot)= \alpha(\|(y(\cdot)- y_s, u(\cdot)- u_s)\|_{X\times U}) $ in \eqref{5225} is called the dissipation rate.
\end{definition}
Although many, possibly different, notions of dissipativity have been introduced in the literature (differing, e.g., in the positivity or the local boundedness required of the storage function, cf., e.g., \cite[Chapter 4]{disbook}), they are essentially equivalent to each other.
Note that a storage function is defined up to an additive constant.
We here define the storage function $S : E \rightarrow \mathbb R$ to take real values instead of positive real values. Since $S$ is assumed to be bounded from below, one could as well consider $S : E \rightarrow [0,+\infty)$.
We mention that no regularity is a priori required to define it.
Actually, storage functions do possess some regularity properties, such as $C^0$ or $C^1$ regularity, under suitable assumptions.
For example, the controllable and observable systems with positive transfer functions are dissipative with quadratic storage functions (see \cite[Section 4.4.5]{disbook} for instance).
When a system is dissipative with a given supply rate function, the question of finding a storage function has been extensively studied. This question is closely analogous to that of finding a suitable Lyapunov function in Lyapunov's second method for establishing the stability of a system.
For linear systems with a quadratic
supply rate function, the existence of a storage function boils down to solving a Riccati inequality.
In general, storage functions are closely related to viscosity solutions of a partial differential inequality, called a \emph{Hamilton-Jacobi} inequality. We refer the reader to \cite[Chapter 4]{disbook} for more details on this subject.
An equivalent characterization of dissipativity, going back to \cite{Willems},
can be given in terms of the so-called \emph{available storage}, which is defined as
$$
S_a(y)\triangleq\sup_{t\geq0,\,(y(\cdot),u(\cdot))}\Big\{-\int_0^t \omega(y(\tau),u(\tau))\,d\tau\Big\},
$$
where the $\sup$ is taken over all admissible pairs $(y(\cdot),u(\cdot))$ (i.e., pairs satisfying the dynamic controlled system and the state-control constraints) with initial value $y(0)=y$.
In fact, for every $y\in E$, $S_a(y)$ can be seen as the maximum amount of ``energy'' which can be extracted from the system with initial state $y=y(0)$.
It has been shown by Willems \cite{Willems} that
the problem $(\bar P_{[0,T]})$ is dissipative at $(y_s, u_s)$ with respect to the supply rate function $\omega(\cdot,\cdot)$
if and only if $S_a(y)$ is finite for every $y\in E$.
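For intuition, the ``if'' part of this equivalence (finiteness of $S_a$ implies dissipativity) follows from a classical concatenation argument: given any admissible pair $(y(\cdot),u(\cdot))$ and any $\tau\geq0$, every admissible pair starting from $y(\tau)$ can be concatenated with the restriction of $(y(\cdot),u(\cdot))$ to $[0,\tau]$, so that taking suprema yields
$$
S_a(y(0))\geq -\int_0^\tau \omega(y(t),u(t))\,dt+S_a(y(\tau)),
$$
that is, the (nonnegative) function $S_a$ itself satisfies the dissipation inequality \eqref{5224} and may serve as a storage function.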
We provide a specific example of a (strictly) dissipative control system.
\begin{example}
Let $\Omega\subset\mathbb R^n$ ($n\geq1$) be a smooth and bounded domain, and let $\mathcal D\subset\Omega$ be a non-empty
open subset. Denote by $\langle\cdot,\cdot \rangle$ and $\|\cdot\|$ the inner product and norm in $L^2(\Omega)$
respectively. For each $T>0$, consider the optimal control problem
$$ \;\;\;\;\;\;\inf \,\int_0^T \Big(\langle y(t), \chi_\mathcal D u(t)\rangle+\|u(t)\|^2\Big)\,dt,$$
subject to
\begin{equation*}\left\{
\begin{split}
&y_t-\Delta y=\chi_\mathcal D u,\;\;\text{in}\;\;\Omega\times(0,T),\\
&y=0,\;\;\text{on}\;\;\partial\Omega\times(0,T),\\
&\|y(t)\|\leq 1, \;\; \|u(t)\|\leq 1, \;\;\;\forall t\in[0,T].
\end{split}\right.
\end{equation*}
Notice that the corresponding static problem has a unique solution $(0,0)$.
We show that this problem is strictly dissipative at $(0,0)$ with respect to the supply rate
$$\omega (y,u)=\langle y,\chi_\mathcal D u\rangle +\|u\|^2,\;\;\forall (y, u)\in L^2(\Omega)\times L^2(\Omega).$$
In fact, integrating the heat equation by parts leads to
\begin{equation*}
\int_0^\tau\langle y(t),\chi_\mathcal D u(t) \rangle \,dt
=\frac{\|y(\tau)\|^2-\| y(0)\|^2}{2}+\int_0^\tau\|\nabla y(t)\|^2 dt,
\end{equation*}
for any $\tau\in[0,T]$. This, together with the definition of $\omega(\cdot,\cdot)$ above and the Poincar\'e inequality, indicates that the strict
dissipation inequality
$$
S(y(\tau))+c\int_0^\tau\Big(\|y(t)\|^2+\|u(t)\|^2\Big)\,dt\leq
S(y(0))+\int_0^\tau \omega(y(t),u(t))\,dt,\;\;\forall \tau\in[0,T],
$$
holds with $\alpha(\gamma)=c\gamma^2$ for some constant $c>0$, and a storage function $S(\cdot)$ given by
$$S(y)=\frac{1}{2}\|y\|^2, \;\;\forall y\in L^2(\Omega).$$
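Indeed, spelling out the computation: since $S(y)=\frac{1}{2}\|y\|^2$, the integration by parts identity above gives
$$
\int_0^\tau \omega(y(t),u(t))\,dt= S(y(\tau))-S(y(0))+\int_0^\tau\Big(\|\nabla y(t)\|^2+\|u(t)\|^2\Big)\,dt,
$$
and the Poincar\'e inequality $\|\nabla y\|^2\geq\lambda_1\|y\|^2$ (with $\lambda_1>0$ the first eigenvalue of the Dirichlet Laplacian in $\Omega$) then yields the strict dissipation inequality with $c=\min(\lambda_1,1)$.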
Thus, this problem has the strict dissipativity property at $(0,0)$.
\end{example}
\subsection{Strict dissipativity implies measure-turnpike}\label{dis}
Next, we introduce a rigorous definition of measure-turnpike for optimal control problems.
\begin{definition}
We say that $\{(\bar P_{[0,T]})\, \mid\, T>0\}$ enjoys the \emph{measure-turnpike property} at $(y_s,u_s)$ if, for every $\varepsilon>0$, there exists $\Lambda(\varepsilon)>0$ such that
\begin{equation}\label{bairui1}
\mathcal |Q_{\varepsilon,T}|\leq \Lambda(\varepsilon),\qquad\forall T>0,
\end{equation}
where
\begin{equation}\label{5227}
Q_{\varepsilon,T}=\left\{ t\in[0,T]\, \mid\, \left\|(y^T(t)-y_s,u^T(t)-u_s)\right\|_{X\times U}> \varepsilon \right\}.
\end{equation}
\end{definition}
We refer the reader to \cite{CarlsonBOOK, Faulwasser1, Z2} (and references therein) for similar definitions.
In this definition, the set $Q_{\varepsilon,T}$ measures the set of times at which the optimal trajectory and control stay outside an $\varepsilon$-neighborhood of $(y_s,u_s)$ for the strong topology.
We stress that the measure-turnpike property defined above concerns both state and control. In the existing literature (see, e.g., \cite{CarlsonBOOK}), the turnpike phenomenon is often studied only for the state, meaning that, for each $\varepsilon>0$, the (Lebesgue) measure of the set $\{t\in[0,T]\, \mid\, \|y^T(t)-y_s\|_X>\varepsilon\}$ is uniformly bounded for any $T>0$.
In the following result, we establish the measure-turnpike property for optimal solutions of $(\bar P_{[0,T]})$ (as well as for $(\bar P_{[t_0,t_1]})$) under the strict dissipativity assumption, as the parameter $T$ goes to infinity. This implies that \emph{any} optimal solution $(y^T(\cdot),u^T(\cdot))$ of $(\bar P_{[0,T]})$ remains \emph{essentially} close to \emph{some} optimal solution $(y_s,u_s)$ of $(P_s)$. Our results can be seen in the line of the recent works \cite{Faulwasser1, PZ1, PZ2, TZ1}.
\begin{theorem}\label{turnpikeproperty1}
Let $E$ be a bounded subset of $X$.
\begin{enumerate}
\item[(i).] If $\{(\bar P_{[0,T]})\, \mid\, T>0\}$ is dissipative at $(y_s,u_s)$ with respect to the supply rate function $\omega(\cdot,\cdot)$ given by \eqref{5234}, then
\begin{equation}\label{5226}
\bar J^T=\bar J_s + O(1/T),
\end{equation}
as $T\rightarrow+\infty$.
\item[(ii).] If $\{( \bar P_{[0,T]})\, \mid\, T>0\}$ is strictly dissipative at $(y_s,u_s)$ with respect to the supply rate function $\omega(\cdot,\cdot)$ given by \eqref{5234}, then it satisfies the measure-turnpike property at $(y_s,u_s)$.
\end{enumerate}
\end{theorem}
\medskip
\begin{remark}
\emph{
Note that $(\bar P_{[0,T]})$ is defined without any constraints on the terminal states. However,
under appropriate controllability assumptions (similar to $(H_3)$ in Section \ref{general}),
one can also treat the case of terminal state constraint $R(\cdot)=0$ (see also \cite{Faulwasser1} and \cite{Grune2}).}
\end{remark}
\begin{remark}
\emph{
From Theorem \ref{turnpikeproperty1}, we see that strict dissipativity is sufficient for the measure-turnpike property for the optimal control problem. This fact was observed in the previous works \cite{Grune1, Faulwasser1, Grune2}. For the converse statements, i.e., results which show that the turnpike property implies strict dissipativity, we refer the reader to \cite{G3} and \cite{Faulwasser1}. In \cite{G3}, the authors first defined a turnpike-like behavior concerning all trajectories whose associate cost is close to the optimal one. This behavior is stronger than the measure-turnpike property, which only concerns the optimal trajectories. Then, the implication ``\,turnpike-like behavior $\Rightarrow$ strict dissipativity'' was proved in \cite{G3}. Besides, the implication ``\,exact turnpike property $\Rightarrow$ strict dissipativity along optimal trajectories'' was shown
in \cite{Faulwasser1}, where the exact turnpike property means that the optimal solutions have to remain exactly at an optimal steady-state for most of the long-time horizon.
}
\end{remark}
\begin{proof}[\textbf{Proof of Theorem~\ref{turnpikeproperty1}}.]
We first prove the second point of the theorem.
Let $T>0$ and let $(y^T(\cdot),u^T(\cdot))$ be any optimal solution of the problem $(\bar P_{[0,T]})$. By the strict dissipation inequality \eqref{5225} applied to $(y^T(\cdot),u^T(\cdot))$, we have
\begin{equation}\label{5123}
\frac{1}{T}\int_0^T\alpha\big(\|(y^T(t)-y_s,u^T(t)-u_s)\|_{X\times U}\big)\,dt\leq
\bar J^T-\bar J_s+\frac{S(y^T(0))-S(y^T(T))}{T}.
\end{equation}
Note that $\alpha\big(\|(y^T(t)-y_s,u^T(t)-u_s)\|_{X\times U}\big)\geq \alpha(\varepsilon)$ whenever $t\in Q_{\varepsilon,T}$, where $Q_{\varepsilon,T}$ is defined by \eqref{5227}.
Since $E\subset X$ is a bounded subset and $S(\cdot)$ is locally bounded, there exists $M>0$ such that $|S(y)|\leq M$ for every $y\in E$.
Therefore, it follows from \eqref{5123} that
\begin{equation}\label{5132}
\frac{|Q_{\varepsilon,T}|}{T}\leq \frac{1}{\alpha(\varepsilon)}\left( \bar J^T-\bar J_s+\frac{2M}{T}\right).
\end{equation}
On the other hand, noting that $(y_s,u_s)$ is admissible for $(\bar P_{[0,T]})$ for any $T>0$, we have
\begin{equation}\label{5133}
\bar J^T\leq \frac{1}{T}\int_0^Tf^0(y_s,u_s)\,dt=f^0(y_s,u_s)=\bar J_s.
\end{equation}
This, combined with \eqref{5132}, leads to $|Q_{\varepsilon,T}|\leq \frac{2M}{\alpha(\varepsilon)}$ for every $T>0$. The second point of the theorem follows.
Let us now prove the first point.
On the one hand, it follows from \eqref{5133} that
\begin{equation*}\label{5121}
\limsup_{T\rightarrow \infty} \bar J^T\leq \bar J_s.
\end{equation*}
By the dissipation inequality \eqref{5224} applied to any optimal solution $(y^T(\cdot),u^T(\cdot))$ of $(\bar P_{[0,T]})$, we get
\begin{equation*}
S(y^T(0))+\int_0^Tf^0(y^T(t),u^T(t))\,dt\geq
Tf^0(y_s,u_s)+S(y^T(T)),
\end{equation*}
which leads to
\begin{equation*}
\bar J_s\leq \bar J^T+\frac{S(y^T(0))-S(y^T(T))}{T}.
\end{equation*}
Since $E$ is a bounded subset in $X$ and since the storage function $S(\cdot)$ is locally bounded and bounded below, we infer that
\begin{equation*}
\bar J_s\leq \liminf_{T\rightarrow\infty}\bar J^T.
\end{equation*}
Then \eqref{5226} follows.
\end{proof}
\begin{remark}
\emph{
The above proof borrows ideas from \cite[Theorem 5.3]{Grune2} and \cite{Faulwasser1}. We used in a crucial way the fact that any solution of the steady-state problem $(P_s)$ is admissible for the problem $(\bar P_{[0,T]})$ under consideration. This is due to the fact that the terminal states are let free in $(\bar P_{[0,T]})$. Note that we only use the boundedness of $y^T(0)$ in the proof.}
\end{remark}
\subsection{Dissipativity and Assumption (H)}
Under the (strict) dissipativity property, we can verify the abstract \textbf{Assumption (H)} of Section~\ref{general} in the autonomous case.
\begin{proposition}\label{thm3}
Assume that, for any $t_0$ and $t_1$, the problem $(\bar P_{[t_0,t_1]})$ is dissipative at $(y_s,u_s)$ with the supply rate $\omega(y,u)=f^0(y,u)-f^0(y_s,u_s)$, and the associated storage function $S(\cdot)$ is bounded on $E$. Then
\begin{itemize}
\item[(i).] $\bar J_{[t_0,+\infty)} = \bar J_s$, $\forall t_0\in \mathbb R$.
\item[(ii).] There exists a turnpike set $\mathcal{T} = \{ y_s \}$ such that
$(H_1)$ and
$(H_2)$ are satisfied.
\item[(iii).] Moreover, if $(H_3)$ is satisfied and it is strictly dissipative at $(y_s,u_s)$ with dissipation rate $d(\cdot)=\alpha(\|y(\cdot)-y_s\|_X)$, then $(H_4)$ is satisfied with $\beta(\cdot)=\alpha(\cdot)$ and $\text{dist}\,(y,\mathcal T)=\|y-y_s\|_X$.
\end{itemize}
\end{proposition}
\begin{proof}
$(i)$. With a slight modification, the proof is the same as that of the first point of
Theorem~\ref{turnpikeproperty1}.
$(ii)$. Since $(y_s,u_s)$ is an equilibrium point, the constant pair $(y_s,u_s)$ is admissible on any time interval.
By the definition,
$$
\bar J_{[t_0,t_1]}\leq V_{[t_0,t_1]}(y_s)\leq \frac{1}{t_1-t_0}\int_{t_0}^{t_1}f^0(y_s,u_s)\,dt=\bar J_s.
$$
This, along with $(i)$, indicates that
$$
V_{[t_0,+\infty)}(y_s)=\lim_{t_1\rightarrow+\infty}V_{[t_0,t_1]}(y_s)= \bar J_s.
$$
Hence, the assumptions $(H_1)$ and $(H_2)$ hold.
$(iii).$
Let $(\tilde y(\cdot),\tilde u(\cdot))$ be an optimal solution to the problem $(P_{[t_0,t_1]})$.
Then, by the strict dissipativity property we have
$$
S(\tilde y(t_1))+ \int_{t_0}^{t_1} \alpha(\|\tilde y(t)-y_s\|_X)\, dt \leq S(\tilde y(t_0))+\int_{t_0}^{t_1} \big(f^0(\tilde y(t),\tilde u(t))-f^0(y_s,u_s)\big)\, dt.
$$
This is equivalent to
\begin{equation}\label{xie1v}
J_{[t_0,t_1]}\geq \bar J_s+ \frac{1}{t_1-t_0}\int_{t_0}^{t_1}\alpha(\|\tilde y(t)-y_s\|_X)\, dt
+ \frac{S(\tilde y(t_1))-S(\tilde y(t_0))}{t_1-t_0}.
\end{equation}
On the other hand, we have
\begin{equation*}
\begin{split}
J_{[t_0,t_1]}&=\bar J_{[t_0,t_1]}+(J_{[t_0,t_1]}-\bar J_{[t_0,t_1]})\\
&=\inf_{y\in X}V_{[t_0,t_1]}(y)+\big(J_{[t_0,t_1]}-\inf_{y\in X}V_{[t_0,t_1]}(y)\big)\\
&\leq V_{[t_0,t_1]}(\tilde y(t_0))+\big(J_{[t_0,t_1]}-\inf_{y\in X}V_{[t_0,t_1]}(y)\big).
\end{split}
\end{equation*}
The last inequality, along with \eqref{xie1v}, indicates that
\begin{multline}\label{xie2}
V_{[t_0,t_1]}(\tilde y(t_0))\geq
\inf_{y\in X}V_{[t_0,t_1]}(y)
+ \frac{1}{t_1-t_0}\int_{t_0}^{t_1}\alpha(\|\tilde y(t)-y_s\|_X)\, dt\\
+ \frac{S(\tilde y(t_1))-S(\tilde y(t_0))}{t_1-t_0}
+ \bar J_s-J_{[t_0,t_1]}.
\end{multline}
As the storage function $S(\cdot)$ is bounded on $E$,
by $(i)$ and \eqref{mi1} we infer that
$$\lim_{t_1\rightarrow+\infty}J_{[t_0,t_1]}=\bar J_s.$$
Hence, the sum of the last three terms in \eqref{xie2} is an infinitesimal
quantity as $t_1\rightarrow +\infty$, and thus $(H_4)$ holds.
\end{proof}
\begin{remark}\emph{
Proposition \ref{thm3} explains the role of dissipativity in the general turnpike phenomenon.
It reflects that dissipativity allows one to identify the limit value $\bar J_{[t_0,+\infty)}$, that dissipativity implies $(H_1)$ and $(H_2)$, and that strict dissipativity, plus $(H_3)$, implies $(H_4)$.
Recall that $(H_3)$ is a controllability assumption.}
\end{remark}
\subsection{Some comments on the periodic turnpike phenomenon}
Inspired by \cite{Willems} and \cite{Zon}, we introduce the concept of (strict) dissipativity with respect to a periodic trajectory. Let $A(\cdot)$, $f(\cdot)$ and $f^0(\cdot)$ be periodic in time with a period $\Pi>0$.
\begin{definition}\label{perdis}
We say that the problem $(\bar P_{[t_0,t_1]})$ is dissipative with respect to a $\Pi$-periodic trajectory $(\hat y(\cdot),\hat u(\cdot))$ with the supply rate function
\begin{equation*}
\omega(t,y,u)= f^0(t,y,u)-f^0(t,\hat y(t), \hat u(t)),\qquad \forall(t,y,u)\in \mathbb R\times E\times F,
\end{equation*}
if there exists a locally bounded and bounded from below storage function $S:\mathbb R\times E\rightarrow\mathbb R$, $\Pi$-periodic in time, such that
\begin{equation*}
S(\tau_0,y(\tau_0))+\int_{\tau_0}^{\tau_1}\omega(t,y(t),u(t))\,dt\geq S(\tau_1,y(\tau_1))\;\;\;\text{for all}\;\; t_0\leq \tau_0<\tau_1\leq t_1,
\end{equation*}
for any admissible pair $(y(\cdot),u(\cdot))$. If, in addition, there exists a $\mathcal{K}$-class function $\alpha(\cdot)$ such that
\begin{equation*}
S(\tau_0,y(\tau_0))+\int_{\tau_0}^{\tau_1}\omega(t,y(t),u(t))\,dt\geq S(\tau_1,y(\tau_1))+\int_{\tau_0}^{\tau_1}\alpha\big(\|(y(t)-\hat y(t), u(t)-\hat u(t))
\|_{X\times U}\big)\,dt,
\end{equation*}
we say it is strictly dissipative with respect to a $\Pi$-periodic trajectory $(\hat y(\cdot), \hat u(\cdot))$.
\end{definition}
The notion of (strict) dissipativity with respect to a periodic trajectory in Definition \ref{perdis} allows one to identify the optimal control problem $(\bar P_{[t_0,t_1]})$ as a periodic one.
Consider the periodic optimal control problem
$$
(\bar P_{\textrm{per}}) \qquad \left\{\begin{array}{l}
\bar J_{\textrm{per}}= \inf\frac{1}{\Pi}C_{[0,\Pi]}(y(\cdot),u(\cdot)),\\[2mm]
\text{subject to}\;\;\;
\dot y(t) = A(t)y(t)+f(t,y(t),u(t)),\quad (y(t),u(t))\in E\times F,\;\;t\in[0,\Pi],\\[2mm]
y(0)=y(\Pi).\\
\end{array}\right.
$$
We assume that $(\bar P_{\textrm{per}})$ has at least one periodic optimal solution $(\bar y(\cdot),\bar u(\cdot))$ on $[0,\Pi]$ (see e.g. \cite{Barbu} for the existence of periodic optimal solutions), and we set $\bar J_\textrm{per} = \frac{1}{\Pi}\int_0^\Pi f^0(t,\bar y(t),\bar u(t))\, dt$ (optimal value of $(\bar P_{\textrm{per}})$). Let us extend $(\bar y(\cdot), \bar u(\cdot))$ to $\mathbb R$ by periodicity. Likewise, we have the following result.
\begin{proposition}
Assume that, for any $t_0$ and $t_1$, the problem $(\bar P_{[t_0,t_1]})$ is dissipative with respect to the $\Pi$-periodic optimal trajectory $(\bar y(\cdot), \bar u(\cdot))$, with the supply rate $\omega(t, y,u)=f^0(t,y,u)-f^0(t,\bar y(t), \bar u(t))$, and the associated storage function $S(\cdot)$ is bounded on $E$ for all times. Then
\begin{itemize}
\item[(i).] $\bar J_{[t_0,+\infty)} = \bar J_{\textrm{per}}$, $\forall t_0\in \mathbb R$.
\item[(ii).] There exists a turnpike set $\mathcal{T} = \{ \bar y(t)\ \mid\ t\in[0,\Pi] \}$ such that
$(H_1)$ and
$(H_2)$ are satisfied.
\item[(iii).] Moreover, if $(H_3)$ is satisfied and $(\bar P_{[t_0,t_1]})$ is strictly dissipative with respect to the \emph{$\Pi$-periodic} trajectory $(\bar y(\cdot), \bar u(\cdot))$,
with dissipation rate $\alpha(\cdot)$, then $(H_4)$ is satisfied with $\beta(\cdot)=\alpha(\cdot)$ and $\text{dist}\, (y,\mathcal T)=\min_{t\in[0,\Pi]}\|y-\bar y(t)\|_X$.
\end{itemize}
\end{proposition}
\begin{proof}
We only show the proof of $(i)$, as the rest is similar to the arguments in the proof of Proposition~\ref{thm3}.
Since $(\bar y(\cdot),\bar u(\cdot))$ is an admissible trajectory in $[t_0,t_0+k\Pi]$ for any $k\in\mathbb N$, we have
$\bar J_{[t_0,+\infty)} \leq \bar J_\textrm{per}$. Let us prove the converse inequality.
By the periodic dissipativity in Definition \ref{perdis}, we have
$$
S(t_0, y(t_0)) + \int_{t_0}^{t_0+k\Pi} f^0(t,y(t),u(t))\, dt \geq k \int_{t_0}^{t_0+\Pi} f^0(t,\bar y(t),\bar u(t))\, dt + S(t_0+k\Pi, y(t_0+k\Pi))$$
for any admissible trajectory $(y(\cdot), u(\cdot))$. Since $\bar J_\textrm{per} = \frac{1}{\Pi}\int_{0}^{\Pi} f^0(t,\bar y(t),\bar u(t))\, dt$, it follows that
$$
\bar J_\textrm{per} \leq \frac{1}{k\Pi}\int_{t_0}^{t_0+k\Pi} f^0(t,y(t),u(t))\, dt + \frac{S(t_0,y(t_0))-S(t_0+k\Pi,y(t_0+k\Pi))}{k\Pi} .
$$
Letting $k$ tend to infinity, and taking the infimum over all possible admissible trajectories, we get that $\bar J_\textrm{per} \leq \bar J_{[t_0,+\infty)}$.
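For the reader's convenience, let us note where the factor $k$ comes from: by the $\Pi$-periodicity of $t\mapsto f^0(t,\bar y(t),\bar u(t))$,
$$
\int_{t_0}^{t_0+k\Pi} f^0(t,\bar y(t),\bar u(t))\, dt=\sum_{j=0}^{k-1}\int_{t_0+j\Pi}^{t_0+(j+1)\Pi} f^0(t,\bar y(t),\bar u(t))\, dt=k\int_{t_0}^{t_0+\Pi} f^0(t,\bar y(t),\bar u(t))\, dt,
$$
each term of the sum being equal to the first one through the change of variable $t\mapsto t-j\Pi$.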
\end{proof}
\section{Relationship with (strict) strong duality}\label{sec_dissip}
After having detailed a motivating example in Section \ref{guai2}, we recall in Section \ref{guai3} the notion of (strict) strong
duality, and we establish in Section \ref{guai4} that strict strong duality implies strict dissipativity (and thus measure-turnpike according to Section \ref{dis}).
\subsection{A motivating example}\label{guai2}
To illustrate the role of the Lagrangian function associated with the static problem when one derives the measure-turnpike property for the evolution control system, we consider the simplest model of the heat equation with control constraints.
Let $\Omega\subset\mathbb R^n$, $n\geq1$, be a bounded domain with a smooth boundary $\partial \Omega$, and let $\mathcal D\subset\Omega$ be a non-empty
open subset. Throughout this subsection, we denote by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$
the inner product and norm in $L^2(\Omega)$, respectively; by $\chi_\mathcal D$ the characteristic function of $\mathcal D$.
For any $T>0$, consider the optimal control problem for the heat equation with pointwise control constraints:
$$(P^T)\;\;\;\;\;\;\;\; \bar J^T=\inf_{u(\cdot)\in L^2(0,T;\,\mathcal U_{ad})}\frac{1}{2T}\int_0^T\Big(\|y(t)-y_d\|^2+\|u(t)\|^2\Big)\,dt$$
subject to
\begin{equation*}\left\{
\begin{split}
&y_t-\Delta y=\chi_{\mathcal D} u,\;\;\text{in}\;\;\Omega\times(0,T),\\
&y=0,\;\;\text{on}\;\;\partial\Omega\times(0,T),\\
&y(\cdot,0)=y_0,\;\;\text{in}\;\;\Omega,
\end{split}\right.
\end{equation*}
where $y_d\in L^2(\Omega)$, $y_0\in L^2(\Omega)$ and
$$
\mathcal U_{ad}= \Big\{u\in L^2(\Omega)\;\,|\,\; u_a(x)\leq u(x)\leq u_b(x)\;\;\text{for a.e.}\;\;x\in \Omega\Big\},
$$
with $u_a$ and $u_b$ being in $L^2(\Omega)$. Assume that $(y^T(\cdot),u^T(\cdot))$ (the optimal pair obviously depends on the time horizon) is the unique optimal solution. We want to study the long time behavior of optimal solutions,
i.e., whether the optimal pair stays in a neighborhood of a static optimal solution for most of the time horizon.
As before, we consider the static optimal control problem stated below:
$$
(P_s)\;\;\;\;\;\;\;\bar J_s=\inf_{u\in\mathcal U_{ad}}\frac{1}{2}\Big(\|y-y_d\|^2+\|u\|^2\Big)
$$
subject to
\begin{equation*}\left\{
\begin{split}
&-\Delta y=\chi_\mathcal D u,\;\;\text{in}\;\;\Omega,\\
&y=0,\;\;\text{on}\;\;\partial\Omega.\\
\end{split}\right.
\end{equation*}
Assume that $(y_s,u_s)$ is the unique optimal solution to $(P_s)$.
To this end, for every $\varepsilon>0$, we define the set
$$
Q_{\varepsilon, T}=\Big\{t\in[0,T]\;\;|\;\; \|y^T(t)-y_s\|^2+\|u^T(t)-u_s\|^2>\varepsilon \Big\},
$$
which is the set of times at which the optimal pair lies outside of the $\varepsilon$-neighborhood of $(y_s,u_s)$.
\begin{proposition}\label{he}
The following convergences hold:
$$
\frac{1}{T}\int_0^Ty^T(t)\,dt\rightarrow y_s \;\;\text{and}\;\;\frac{1}{T}\int_0^Tu^T(t)\,dt\rightarrow u_s\;\;
\text{in}\;\;L^2(\Omega),\;\;\text{as}\;\;T\rightarrow\infty.
$$
Moreover, for each $\varepsilon>0$, it holds true that
$$
|Q_{\varepsilon,T}|\leq O\Big(\frac{1}{\varepsilon}\Big), \;\;\text{for all}\;\; T\geq1,
$$
i.e., the measure-turnpike property holds.
\end{proposition}
\begin{proof}
The key point of the proof is to show that
\begin{equation}\label{i7}
\int_0^T\Big(\|y^T(t)-y_s\|^2+\|u^T(t)-u_s\|^2\Big)\,dt\leq C \;\;\;\;\text{for all}\;\;T>0,
\end{equation}
where $C$ is a constant independent of $T$. Once the inequality \eqref{i7} is proved, the desired results hold
automatically. The proof of \eqref{i7} proceeds in several steps as follows.
{\bf Step 1}. We first introduce a Lagrangian function for the above stationary problem.
According to the Karush-Kuhn-Tucker (KKT for short) optimality conditions (see, e.g., \cite[Theorem 2.29]{T}),
there are functions $p_s$, $\mu_a$ and $ \mu_b$ in $ L^2(\Omega)$ such that
\begin{equation*}(KKT)\;\;\;\left\{
\begin{split}
&-\Delta y_s=\chi_\mathcal D u_s,\;\;\;\; -\Delta p_s=y_d-y_s, \;\;\text{in}\;\;\Omega,\\
&y_s=0,\;\;\;\;p_s=0,\;\;\text{on}\;\;\partial\Omega,\\
&u_s-\chi_\mathcal D p_s-\mu_a+\mu_b=0,\\
&\mu_a\geq0,\;\;\mu_b\geq0,\;\;\mu_a(u_a-u_s)=\mu_b(u_s-u_b)=0.\\
\end{split}\right.
\end{equation*}
Now, we define the associated Lagrangian function $L:H_0^1(\Omega)\times L^2(\Omega)\rightarrow \mathbb R$ by setting
\begin{multline}\label{jia2}
L(y,u)=\frac{1}{2}\Big(\|y-y_d\|^2+\|u\|^2\Big)+\langle\nabla y,\nabla p_s\rangle-\langle\chi_\mathcal D u,p_s\rangle\\+
\langle\mu_a,u_a-u\rangle+\langle\mu_b,u-u_b\rangle,\;\;\forall (y,u)\in H_0^1(\Omega)\times L^2(\Omega).
\end{multline}
From the above-mentioned KKT optimality conditions, we can see that
$$
L(y_s,u_s)=\frac{1}{2}\Big(\|y_s-y_d\|^2+\|u_s\|^2\Big)= \bar J_s,
$$
$$
L_{(y,u)}'(y_s,u_s)\big((y-y_s,u-u_s)\big)=0,
$$
$$
L^{''}_{(y,u)}(y_s,u_s)\big((y-y_s,u-u_s),(y-y_s,u-u_s)\big)=\|y-y_s\|^2+\|u-u_s\|^2.
$$
Since $L$ is a quadratic polynomial, its Taylor expansion is exact:
\begin{multline*}
L(y,u)=L(y_s,u_s)+L_{(y,u)}'(y_s,u_s)\big((y-y_s,u-u_s)\big)\\
+\frac{1}{2}L^{''}_{(y,u)}(y_s,u_s)\big((y-y_s,u-u_s),(y-y_s,u-u_s)\big),\;\;\forall (y,u)\in H^1_0(\Omega)\times L^2(\Omega),
\end{multline*}
which means that
\begin{equation}\label{jia1}
L(y,u)=\bar J_s+\frac{1}{2}\Big(\|y-y_s\|^2+\|u-u_s\|^2\Big),\;\;\forall (y,u)\in H^1_0(\Omega)\times L^2(\Omega).
\end{equation}
{\bf Step 2}.
Noting that $\mu_a\geq0$ and $\mu_b\geq0$, we obtain from \eqref{jia2} and \eqref{jia1} that for each $(y,u)\in H_0^1(\Omega)\times \mathcal U_{ad}$,
\begin{multline}\label{i2}
\bar J_s+\frac{1}{2}\Big(\|y-y_s\|^2+\|u-u_s\|^2\Big)
\leq
\frac{1}{2}\Big(\|y-y_d\|^2+\|u\|^2\Big)+\langle\nabla y,\nabla p_s\rangle-\langle\chi_\mathcal D u,p_s\rangle.
\end{multline}
Since $(y^T(t),u^T(t))\in H^1_0(\Omega)\times \mathcal U_{ad}$ for a.e. $t\in(0,T)$, we get from \eqref{i2} that
\begin{multline*}
\bar J_s+\frac{1}{2}\Big(\|y^T(t)-y_s\|^2+\|u^T(t)-u_s\|^2\Big)
\leq
\frac{1}{2}\Big(\|y^T(t)-y_d\|^2+\|u^T(t)\|^2\Big)\\+\langle\nabla y^T(t),\nabla p_s\rangle-\langle\chi_\mathcal D u^T(t),p_s\rangle,
\;\;\text{for a.e.}\;\;t\in(0,T).
\end{multline*}
Integrating the above inequality over $(0,T)$ and then multiplying the result by $1/T$, we have
\begin{multline}\label{huang1}
\bar J_s+\frac{1}{2T}\int_0^T\Big(\|y^T(t)-y_s\|^2+\|u^T(t)-u_s\|^2\Big)\,dt
\leq
\bar J^T
+\frac{1}{T}\int_0^T\Big(\langle\nabla y^T(t),\nabla p_s\rangle-\langle\chi_\mathcal D u^T(t),p_s\rangle\Big)\,dt.
\end{multline}
Observe that
\begin{equation*}\label{lu1}
-\big\langle y^T(T)-y_0,p_s\big\rangle=\int_0^T\Big(\big\langle\nabla y^T(t),\nabla p_s\big\rangle-\langle\chi_\mathcal D u^T(t),p_s\rangle\Big)\,dt.
\end{equation*}
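For completeness, this identity follows from pairing the state equation with $p_s$ and integrating by parts in space (using that $y^T$ and $p_s$ vanish on $\partial\Omega$):
\begin{equation*}
\big\langle y^T(T)-y_0,p_s\big\rangle=\int_0^T\big\langle y^T_t(t),p_s\big\rangle\,dt
=\int_0^T\Big(\big\langle \Delta y^T(t),p_s\big\rangle+\langle\chi_{\mathcal D} u^T(t),p_s\rangle\Big)\,dt
=\int_0^T\Big(-\big\langle\nabla y^T(t),\nabla p_s\big\rangle+\langle\chi_{\mathcal D} u^T(t),p_s\rangle\Big)\,dt.
\end{equation*}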
This, along with \eqref{huang1}, implies that
\begin{equation}\label{i3}
\bar J_s+\frac{1}{2T}\int_0^T\Big(\|y^T(t)-y_s\|^2+\|u^T(t)-u_s\|^2\Big)\,dt\leq \bar J^T+
\frac{ \langle y_0-y^T(T),p_s\rangle }{T}.
\end{equation}
By the standard energy estimate for non-homogeneous heat equations, there is a constant $C>0$ (independent of $T>0$) such that
\begin{equation}\label{i5}
\|y^T(T)\|\leq C\Big(\|y_0\|+\max\big\{\|u_a\|,\|u_b\|\big\}\Big)\;\;\;\;\text{for all}\;\; T>0.
\end{equation}
Hence, by the Cauchy-Schwarz inequality we have
$$
\frac{ \langle y_0-y^T(T),p_s\rangle}{T}\leq \frac{C\|p_s\|}{T}\Big(\|y_0\|+\max\big\{\|u_a\|,\|u_b\|\big\}\Big)\leq O\Big(\frac{1}{T}\Big).
$$
This, together with \eqref{i3}, indicates that
\begin{equation}\label{huang2}
\bar J_s+\frac{1}{2T}\int_0^T\Big(\|y^T(t)-y_s\|^2+\|u^T(t)-u_s\|^2\Big)\,dt\leq \bar J^T+O\Big(\frac{1}{T}\Big).
\end{equation}
{\bf Step 3}.
We claim that
\begin{equation}\label{i10}
\bar J^T\leq \bar J_s+O\Big(\frac{1}{T}\Big), \;\;\text{when}\;\;T\geq1.
\end{equation}
Indeed, since $u_s$ is always an admissible control for the problem $(P^T)$, it holds that
\begin{equation}\label{i6}
\bar J^T\leq \frac{1}{2T}\int_0^T\Big(\|y(t;u_s)-y_d\|^2+\|u_s\|^2\Big)\,dt,
\end{equation}
where $y(\cdot;u_s)$ is the solution to
\begin{equation*}\left\{
\begin{split}
&y_t-\Delta y=\chi_\mathcal D u_s,\;\;\text{in}\;\;\Omega\times(0,T),\\
&y=0,\;\;\text{on}\;\;\partial\Omega\times(0,T),\\
&y(\cdot,0)=y_0,\;\;\text{in}\;\;\Omega.
\end{split}\right.
\end{equation*}
It can be readily checked that
\begin{equation}\label{iii4}
\frac{1}{2T}\int_0^T\Big(\|y(t;u_s)-y_d\|^2+\|u_s\|^2\Big)\,dt\leq \bar J_s+O\Big(\frac{1}{T}\Big), \;\;\text{when}\;\;T\geq1.
\end{equation}
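A short justification of \eqref{iii4}: the difference $z(t):=y(t;u_s)-y_s$ solves the homogeneous heat equation $z_t-\Delta z=0$ with $z(0)=y_0-y_s$, since $-\Delta y_s=\chi_{\mathcal D}u_s$. Hence $\|z(t)\|\leq e^{-\lambda_1 t}\|y_0-y_s\|$, where $\lambda_1>0$ is the first Dirichlet eigenvalue of $-\Delta$ in $\Omega$, and therefore
$$
\frac{1}{2T}\int_0^T\Big(\|y(t;u_s)-y_d\|^2+\|u_s\|^2\Big)\,dt
\leq \bar J_s+\frac{1}{2T}\int_0^T\Big(2\|z(t)\|\,\|y_s-y_d\|+\|z(t)\|^2\Big)\,dt
\leq \bar J_s+O\Big(\frac{1}{T}\Big).
$$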
Inequality \eqref{iii4}, together with \eqref{i6}, implies \eqref{i10}.
{\bf Step 4}. Conclusion of the proof of inequality \eqref{i7}.
We obtain immediately from \eqref{huang2} and \eqref{i10} that
$$
\bar J^T=\bar J_s+O\Big(\frac{1}{T}\Big),
$$
as well as
$$
\frac{1}{2T}\int_0^T\Big(\|y^T(t)-y_s\|^2+\|u^T(t)-u_s\|^2\Big)\,dt\leq O\Big(\frac{1}{T}\Big),
$$
which is equivalent to the inequality \eqref{i7}.
\end{proof}
\begin{remark}\emph{
Notice that the inequality \eqref{i7} is stronger than the weak turnpike property \eqref{guai1} for $\mathcal T=\{y_s\}$.
The proof above yields the convergence result for the long-time horizon control problems towards
the steady-state one in the measure-theoretical sense. It is an improved version of
the case of \textit{time-independent} controls \cite[Section 4]{PZ2}.}
\end{remark}
\begin{remark}\emph{
We remark that, in the steps 2 and 3 of the proof of Proposition~\ref{he}, we have used the exponential stabilization of the heat equation to derive the upper bounds \eqref{i5} and \eqref{iii4}. See also Remark~\ref{nonc}.}
\end{remark}
\subsection{What is (strict) strong duality }\label{guai3}
In the above proof of Proposition \ref{he}, we have seen an important role played by the Lagrangian \eqref{jia1}, which is closely
related to the notion of strict strong duality introduced below. We first recall the notion of strong duality, which is well known in optimization (see, e.g., \cite{BV}).
\begin{definition}
We say that the static problem $(P_s)$ (in Section \ref{dd}) has the \emph{strong duality property} if there exists $\varphi_s\in D(A^*)$ (Lagrangian multiplier) such that $(y_s,u_s)$ minimizes the \emph{Lagrangian function} $L(\cdot,\cdot,\varphi_s): E\times F\rightarrow \mathbb R$ defined by
$$L(y,u,\varphi_s)= f^0(y,u)+\langle A^*\varphi_s,y\rangle_{X^*,X}+\langle \varphi_s, f(y,u)\rangle_{X^*,X}.$$
We say $(P_s)$ has the \emph{strict strong duality property} if there exists a $\mathcal K$-class function $\alpha(\cdot)$ such that
$$
L(y,u,\varphi_s)\geq L(y_s,u_s,\varphi_s)+\alpha\big(\|(y-y_s,u-u_s)\|_{X\times U}\big)
$$
for all $(y,u)\in E\times F$.
\end{definition}
\begin{remark}\emph{
Note that $L(y_s,u_s,\varphi_s)=\bar J_s$.
If $(y_s,u_s)$ is the unique minimizer of the Lagrangian function $L(\cdot,\cdot,\varphi_s)$, and if $E\times F$ is compact in $X\times U$, then $(P_s)$ enjoys the strict strong duality property. However, it is generally a very strong assumption that $L(\cdot,\cdot,\varphi_s)$ has a unique minimizer. Note that uniqueness of minimizers for elliptic
optimal control problems is still a long-standing and difficult open problem (cf., e.g., \cite{T}).}
\end{remark}
In finite dimension, strong duality is introduced and investigated in optimization problems for which the primal and dual problems are equivalent. The notion of strong duality is closely related to the saddle point property of the Lagrangian function associated with the primal optimization problem (see, e.g., \cite{BV,T}).
Note that Slater's constraint qualification (also known as the ``interior point'' condition) is a sufficient condition ensuring strong duality for a convex problem, and note that, when the primal problem is convex, the well known Karush-Kuhn-Tucker conditions are also sufficient conditions ensuring strong duality (see \cite[Chapter 5]{BV}).
Similar assumptions are also considered for other purposes in the literature (see, for example, \cite[Assumption 1]{CHJ}, \cite[Assumption 4.2 (ii)]{CarlsonBOOK} and \cite[Assumption 2]{strongduality}.)
In infinite dimension, however, the usual strong duality theory (for example, the above-mentioned Slater condition) cannot be applied because the underlying constraint set may have an empty interior. The corresponding strong duality theory, as well as the existence of Lagrange multipliers associated to optimization problems or to variational inequalities, have been developed only quite recently in \cite{Dstrong}.
The strict strong duality property is closely related to the second-order sufficient optimality condition, which guarantees the local
optimality of $(y_s,u_s)$ for the problem $(P_s)$ (see, e.g., \cite{T}).
We provide hereafter two examples satisfying the strict strong duality property.
\begin{example}\label{cont}
Consider the static optimal control problem
\begin{equation*}
\qquad \left\{\begin{split}
& \inf \,J_s(y,u)= f^0(y,u), \\
& \text{subject to}\;\;\;\;\;Ay+Bu=0, \\
& y\in E, \quad u\in F,\\
\end{split}\right.
\end{equation*}
with $A\in\mathbb R^{n\times n}, B\in\mathbb R^{n\times m}$,
$f^0(\cdot,\cdot)$ a strictly convex function, $E$ and $F$ convex, bounded and closed subsets of $\mathbb R^n$ and of $\mathbb R^m$, respectively.
Assume that \emph{Slater's condition} holds, i.e., there exists an interior point $(\tilde{y},\tilde {u})$ of $E\times F$ such that $A\tilde y+B\tilde u=0$. Recall that the Lagrangian function $L:E\times F\times\mathbb R^n\rightarrow\mathbb R$ is given by
$L(y,u,\varphi)= f^0(y,u)+\langle \varphi,Ay+Bu\rangle_{\mathbb R^n}$.
Let $(y_s,u_s)$ be the unique optimal solution.
It follows from the Slater condition that there exists a Lagrangian multiplier $\varphi_s\in\mathbb R^{n}$ such that (see, e.g., \cite[Section 5.2.3]{BV})
\begin{equation*}\label{5232}
L(y,u,\varphi_s)>L(y_s,u_s,\varphi_s),\;\;\forall (y,u)\in E\times F\setminus\{(y_s,u_s)\}.
\end{equation*}
The strict inequality is due to the strict convexity of the cost function $f^0$. Setting
$$\widetilde{L}(y,u)= L(y,u,\varphi_s)-L(y_s,u_s,\varphi_s),\;\;\forall (y,u)\in E\times F,$$
we have $\widetilde{L}(y_s,u_s)=0$ and $\widetilde{L}(y,u)>0$ for all $(y,u)\in E\times F \setminus\{(y_s,u_s)\}$.
We claim that
\begin{equation}\label{in5211}
\widetilde{L}(y,u)\geq \alpha\big(\|(y-y_s,u-u_s)\|_{\mathbb R^{n+m}}\big),\;\;\forall (y,u)\in E\times F
\setminus\{(y_s,u_s)\},
\end{equation}
for some $\mathcal K$-class function $\alpha(\cdot)$.
Indeed, since $E\times F$ is compact in $ \mathbb R^{n+m}$, without loss of generality,
we assume that $E \times F\subset B_r(y_s,u_s)$ with $r>0$, where
$$B_r(y_s,u_s)=\big\{(y,u)\in \mathbb R^{n+m}\ \mid\ \|(y-y_s,u-u_s)\|_{\mathbb R^{n+m}}\leq r\big\}.$$
Since the function $\widetilde{L}(\cdot,\cdot)$ is continuous, we define
\begin{equation*}
\alpha(\gamma)= \inf_{\substack{(y,u)\in E\times F\\
\gamma\leq\|(y-y_s,u-u_s)\|_{\mathbb R^{n+m}}\leq r}}\widetilde{L}(y,u),\;\;\;\;\;\text{when}\;\gamma\in[0,r], \end{equation*}
and $\alpha(\gamma)\equiv\alpha(r)$ when $\gamma>r$.
It is easy to check that the inequality \eqref{in5211} holds with the $\mathcal K$-class function $\alpha(\cdot)$ given above. This means that the static problem has the strict strong duality property.
\end{example}
\medskip
\begin{example}\label{ex1}
Let $\Omega\subset\mathbb R^3$ be a bounded domain with a smooth boundary $\partial \Omega$.
Given any $y_d\in L^2(\Omega)$, we consider the static optimal control problem
$$
\;\;\;\;\inf \frac{1}{2}\big(\|y-y_d\|^2_{L^2(\Omega)}+\|u\|^2_{L^2(\Omega)}\big),
$$
over all $(y,u)\in H^{1}_{0}(\Omega)\times L^2(\Omega)$ satisfying
\begin{equation*}
\left\{
\begin{split}
&-\triangle y+y^3=u\;\;&\text{in}\;\;\Omega,\\
&y=0\;\;&\text{on}\;\;\partial\Omega.
\end{split}
\right.
\end{equation*}
Let $(y_s,u_s)$ be an optimal solution of this problem. According to first-order necessary optimality conditions (see, e.g., \cite[Chapter 1]{Kun1} or \cite[Chapter 6, Section 6.1.3]{T}), there exists an adjoint state $\varphi_s\in H^2(\Omega)\cap H^1_0(\Omega)$ satisfying
\begin{equation*}
\left\{
\begin{split}
&-\triangle \varphi_s+3y_s^2\varphi_s=y_s-y_d\;\;&\text{in}\;\;\Omega,\\
&\varphi_s=0\;\;&\text{on}\;\;\partial\Omega ,
\end{split}
\right.
\end{equation*}
such that $u_s=\varphi_s$. Moreover, since $\varphi_s$ is a Lagrangian multiplier associated with $(y_s,u_s)$ for the Lagrangian function $L(\cdot,\cdot,\varphi_s): H^1_0(\Omega)\times L^2(\Omega) \rightarrow\mathbb R$ defined by
\begin{equation*}
L(y,u,\varphi_s)=\frac{1}{2}\big(\|y-y_d\|^2_{L^2(\Omega)}+\|u\|^2_{L^2(\Omega)}\big)+\langle -\triangle\varphi_s, y\rangle_{L^2(\Omega),L^2(\Omega)}+
\langle \varphi_s, y^3-u\rangle_{L^2(\Omega),L^2(\Omega)},
\end{equation*}
we have
\[
L(y_s,u_s,\varphi_s)\leq L(y,u,\varphi_s), \;\;\forall (y,u)\in H_0^1(\Omega)\times L^2(\Omega).
\]
This means that $(P_s)$ has the strong duality property.
Next, we claim that the strict strong duality property holds under the condition that
$\|y_d\|_{L^2(\Omega)}$ is small enough.
Notice that, since the pair $(y,u)=(0,0)$ is admissible for this static problem,
$$\frac{1}{2}\big(\|y_s-y_d\|^2_{L^2(\Omega)}+\|u_s\|^2_{L^2(\Omega)}\big)\leq \frac{1}{2}\|y_d\|^2_{L^2(\Omega)}.$$
Now, assuming that the norm of the target $y_d$ is small enough guarantees the smallness of $(y_s,u_s)$, which consequently belongs to a ball $B_r$ in $H_0^1(\Omega)\times L^2(\Omega)$, centered at the origin and with a small radius $r>0$. Moreover, by elliptic regularity, we deduce that the norms of $y_s$ and $\varphi_s$ are small in $ H^2(\Omega)\cap L^\infty(\Omega)$ (see \cite[Section 3]{PZ2}).
For the Lagrangian function $L(\cdot,\cdot,\varphi_s)$ defined above,
its first-order Fr\'echet derivative vanishes at $(y_s,u_s)$:
\begin{equation}\label{c4291}
L'(y_s,u_s,\varphi_s)\left((y-y_s,u-u_s)\right)=0,
\end{equation}
and its second-order Fr\'echet derivative is
\begin{multline}\label{c4292}
L''(y_s,u_s,\varphi_s)\left((y-y_s,u-u_s),(y-y_s,u-u_s)\right)\\
=\|y-y_s\|^2_{L^2(\Omega)}+\|u-u_s\|^2_{L^2(\Omega)}+6\int_\Omega y_s\varphi_s(y-y_s)^2\,dx,
\end{multline}
whenever $(y,u)\in B_r$ (see, for instance, \cite[Chapter 6, pp. 337-338]{T}).
Note that
\begin{equation*}
\begin{split}
L(y,u,\varphi_s)&=L(y_s,u_s,\varphi_s)+L'(y_s,u_s,\varphi_s)\left((y-y_s,u-u_s)\right) \\
&\;\;\;+\frac{1}{2}L''(y_s,u_s,\varphi_s)\left((y-y_s,u-u_s),(y-y_s,u-u_s)\right)\\
&\;\;\;+o(\|y-y_s\|_{L^2(\Omega)}^2+\|u-u_s\|^2_{L^2(\Omega)}),
\end{split}
\end{equation*}
for all $(y,u)\in B_r$. This, together with
\eqref{c4291}, \eqref{c4292} and the smallness of $(y_s,\varphi_s)$ in $L^\infty(\Omega)$,
implies that
\begin{equation*}
L(y,u,\varphi_s)\geq L(y_s,u_s,\varphi_s)+\frac{1}{4}(\|y-y_s\|_{L^2(\Omega)}^2+\|u-u_s\|^2_{L^2(\Omega)}), \;\;\;\forall (y,u)\in B_r,
\end{equation*}
which proves the above claim.
\end{example}
\begin{remark}
\emph{
Similarly to second-order gap conditions for local optimality \cite{T}, the positive semi-definiteness of the Hessian matrix of the Hamiltonian is a necessary condition for local optimality,
while its positive definiteness is a sufficient condition for local optimality. The latter is also known as the strengthened
Legendre-Clebsch condition.}
\end{remark}
\subsection{Strict strong duality implies strict dissipativity}\label{guai4}
In this subsection,
by means of strict strong duality, we extend Proposition \ref{he} to general optimal control problems.
More precisely,
we establish sufficient conditions, in terms of (strict) strong duality for $(P_s)$, under which (strict) dissipativity holds true with a specific storage function for $(\bar P_{[0,T]})$ in Section \ref{dd}. As seen in Theorem~\ref{turnpikeproperty1}, strict dissipativity implies measure-turnpike.
\begin{theorem}\label{equiv}
Let $E$ be a bounded subset of $X$. Then,
strong duality (resp., strict strong duality) for $(P_s)$ implies dissipativity (resp., strict dissipativity) for $(\bar P_{[0,T]})$, with the storage function given by $S(y)=-\langle \varphi_s,y\rangle_{X^*,X}$ for every $y\in E$. Consequently, $(\bar P_{[0,T]})$ has the measure-turnpike property under the strict strong duality property.
\end{theorem}
\begin{proof}
It suffices to prove that strong duality for $(P_s)$ implies dissipativity for $(\bar P_{[0,T]})$ (the proof with the ``strict" additional property is similar with only minor modifications).
By the definition of strong duality, there exists a Lagrangian multiplier $\varphi_s\in D(A^*)$
such that $L(y_s,u_s,\varphi_s)\leq L(y,u,\varphi_s)$ for all $(y,u)\in E\times F$, which means that
\begin{equation*}
f^0(y_s,u_s)\leq f^0(y,u)+\langle A^*\varphi_s,y\rangle_{X^*,X}+\langle \varphi_s,f(y,u)\rangle_{X^*,X}\qquad\forall(y,u)\in E\times F.
\end{equation*}
Let $T>0$. Assume that $(y(\cdot),u(\cdot))$ is an admissible pair for the problem $(\bar P_{[0,T]})$. Then,
\begin{equation*}
f^0(y_s,u_s)\leq f^0(y(t),u(t))+
\langle A^*\varphi_s,y(t)\rangle_{X^*,X}+\langle \varphi_s,f(y(t),u(t))\rangle_{X^*,X},\;\;\text{for a.e.}\ t\in[0,T].
\end{equation*}
Integrating the above inequality over $(0,\tau)$, with $0<\tau\leq T$, leads to
\begin{equation}\label{ma1}
\tau f^0(y_s,u_s)\leq \int_0^\tau f^0(y(t),u(t))\,dt+\int_0^\tau\langle A^*\varphi_s,y(t)\rangle_{X^*,X}\,dt
+ \int_0^\tau \langle \varphi_s,f(y(t),u(t))\rangle_{X^*,X}\,dt.
\end{equation}
Since $(y(\cdot), u(\cdot))$ satisfies the state equation in the problem $(\bar P_{[0,T]})$, we have
\begin{equation*}
\int_0^\tau\langle A^*\varphi_s,y(t)\rangle_{X^*,X}\,dt
+ \int_0^\tau \langle \varphi_s,f(y(t),u(t))\rangle_{X^*,X}\,dt
=\langle \varphi_s,y(\tau)\rangle_{X^*,X}-\langle \varphi_s,y(0)\rangle_{X^*,X}.
\end{equation*}
This, together with \eqref{ma1}, leads to
\begin{equation*}
\int_0^\tau \Big(f^0(y(t),u(t))-f^0(y_s,u_s)\Big)\,dt+\langle \varphi_s, y(\tau)\rangle_{X^*,X}
\geq \langle \varphi_s,y(0)\rangle_{X^*,X}.
\end{equation*}
Set $S(y)=-\langle \varphi_s,y\rangle_{X^*,X}$ for every $y\in E$. Since $E$ is a bounded subset of $X$, we see that $S(\cdot)$ is locally bounded and bounded from below.
Therefore, we infer that $\{(\bar P_{[0,T]})\,\mid\,T>0\}$ has the dissipativity property.
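In the strict case, the same chain of inequalities, now starting from the strict strong duality inequality, yields in addition the dissipation rate:
\begin{equation*}
S(y(0))+\int_0^\tau \Big(f^0(y(t),u(t))-f^0(y_s,u_s)\Big)\,dt\geq S(y(\tau))+\int_0^\tau\alpha\big(\|(y(t)-y_s,u(t)-u_s)\|_{X\times U}\big)\,dt,
\end{equation*}
which is precisely the strict dissipation inequality with the same storage function.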
\end{proof}
\begin{remark}
\emph{Strong duality and dissipativity are equivalent in some situations:}
\begin{itemize}
\item \emph{On one hand, we proved above that strong duality (resp. strict strong duality) implies dissipativity (resp., strict dissipativity). We refer also the reader to \cite[Lemma 3]{Faulwasser1} for a closely related result. }
\item \emph{On the other hand, it is easy to see that, if the storage function $S(\cdot)$ is continuously Fr\'echet differentiable, then strong duality (resp., strict strong duality) is the infinitesimal version of the dissipative inequality \eqref{5224} (resp., of \eqref{5225}).
For this point, we also mention that \cite[Assumption 5.2]{Grune2} is a discrete version of strict dissipativity, and that \cite[Inequality (14)]{Faulwasser1} is the infinitesimal version of strict dissipativity for the continuous system when the storage function is differentiable.}
\end{itemize}
\end{remark}
\section{Conclusions and further comments}\label{consec}
In this paper, we have first proved that a general turnpike phenomenon around a set holds
for optimal control problems with terminal state constraints, in an abstract framework.
Next, we have obtained the following auxiliary result:
\begin{quote}
strict strong duality $\Rightarrow$ strict dissipativity $\Rightarrow$ measure-turnpike property.
\end{quote}
We have also used dissipativity to identify the long-time limit of optimal values.
\medskip
Now, several comments and perspectives are in order.
\paragraph{Measure-turnpike versus exponential turnpike.}
In the paper \cite{TZZ}, we establish the exponential turnpike property for general classes of optimal control problems in infinite dimension that are similar to the problem $(\bar P_{[0,T]})$ investigated in the present paper, but with the following differences:
\begin{itemize}
\item[(i)] $E=X$ and $F=U$;
\item[(ii)] $y(0)=y_0\in X$.
\end{itemize}
The item (i) means that, in \cite{TZZ}, we consider optimal control problems without any state or control constraint. Under the additional assumption made in (ii), we are then able to apply the \emph{Pontryagin maximum principle} in Banach spaces (see \cite{LiXunjing}), thus obtaining an extremal system that is \emph{smooth}, which means in particular that the extremal control is a \emph{smooth} function of the state and of the adjoint state. This smooth regularity is crucial in the analysis done in \cite{TZZ} (see also \cite{TZ1}), consisting of linearizing the extremal system around an equilibrium point, which is itself the optimal solution of an associated static optimal control problem, and then of analyzing from a spectral point of view the hyperbolicity properties of the resulting linear system. Adequately interpreted, this implies the local exponential turnpike property, saying that
$$
\left\Vert y^T(t)-y_s\right\Vert_X+\left\Vert u^T(t)-u_s\right\Vert_U+\left\Vert \lambda^T(t)-\lambda_s\right\Vert_X\leq c \left( e^{-\mu t}+e^{-\mu(T-t)} \right) ,
$$
for every $t\in[0,T]$, for some constants $\mu,c>0$ not depending on $T$, where $\lambda^T$ is the adjoint state coming from the Pontryagin maximum principle.
There are many examples of control systems for which the measure-turnpike property holds but the exponential turnpike property does not (see, for instance, Example~\ref{cont}).
The exponential turnpike property is much stronger than the measure-turnpike property, not only because it gives an exponential estimate on the control and the state, instead of the softer estimate in terms of Lebesgue measure, but also because it gives the closeness property for the adjoint state. This leads us to the next comment.
\paragraph{Turnpike on the adjoint state.}
As mentioned above, the exponential turnpike property established in \cite{TZZ} holds as well for the adjoint state coming from the application of the Pontryagin maximum principle. This property is particularly important when one wants to implement a numerical shooting method in order to compute the optimal trajectories. Indeed, the exponential closeness property of the adjoint state to the optimal static adjoint allows one to successfully initialize a shooting method, as chiefly explained in \cite{TZ1} where an appropriate modification and adaptation of the usual shooting method has been described and implemented.
The flaw of the linearization approach developed in \cite{TZZ} is that it does not a priori make it easy to take into account possible control constraints (not to mention state constraints).
The softer approach developed in the present paper leads to the weaker property of measure-turnpike, but makes it possible to take into account some state and control constraints.
However, under the assumption (ii) above, one can as well apply the Pontryagin maximum principle, and thus obtain an adjoint state $\lambda^T$. Due to state and control constraints, of course, one cannot expect that the extremal control $u^T$ be a smooth function of $y^T$ and $\lambda^T$, but anyway our approach by dissipativity is soft enough to yield the measure-turnpike property for the optimal state $y^T$ and for the optimal control $u^T$. Now, it is an open question to know whether the measure-turnpike property holds or not for the adjoint state $\lambda^T$.
As mentioned above, having such a result is particularly important in view of numerical issues.
\paragraph{Local versus global properties.}
It is interesting to stress the fact that Theorem \ref{turnpikeproperty1} (saying that strict dissipativity implies measure-turnpike) is of \emph{global} nature, whereas Theorem \ref{equiv} (saying that strict strong duality implies strict dissipativity) is rather of \emph{local} nature. This is because, as soon as Lagrangian multipliers enter the game, except under strong convexity assumptions one is performing reasonings that are local, such as applying first-order optimality conditions. Therefore, although Theorem \ref{equiv} provides a sufficient condition ensuring strict dissipativity and thus allowing one to apply the result of Theorem \ref{turnpikeproperty1}, in practice showing strict strong duality can in general only be done locally. In contrast, dissipativity is a much more general property, which is global in the sense that it reflects a global qualitative behavior of the dynamics, as in the Lyapunov theory. We insist on this global picture because this is also a big difference with the results of \cite{TZZ,TZ1} on exponential turnpike, which are purely local and require smallness conditions. Here, in the framework of Theorem \ref{turnpikeproperty1}, no smallness condition is required. The price to pay, however, is that one has to know a storage function ensuring strict dissipativity. In practical situations this is often the case, and storage functions often represent an energy that has a physical meaning.
\paragraph{Semilinear heat equation.}
We end the paper with a still open problem, related to the above-mentioned smallness condition. Continuing with Example \ref{ex1}, given any $y_d\in L^2(\Omega)$ we consider the evolution optimal control problem
\begin{equation*}\label{semilinheat1}
\;\;\;\inf\, \frac{1}{2T}\int_0^T \left( \|y(t)-y_d\|^2_{L^2(\Omega)}+\|u(t)\|_{L^2(\Omega)}^2\right) dt
\end{equation*}
over all possible solutions of
\begin{equation}\label{semilinheat2}
\left\{
\begin{split}
&y_t-\triangle y+y^3=u\;\;&\text{in}\;\;\Omega\times(0,T),\\
&y=0\;\;&\text{on}\;\;\partial\Omega\times(0,T),
\end{split}
\right.
\end{equation}
such that $(y(t),u(t))\in H_0^1(\Omega)\times L^2(\Omega)$ for almost every $t\in (0,T)$.
It follows from Example~\ref{ex1} and Theorem~\ref{equiv} that the problem is dissipative
at an optimal stationary point $(y_s,u_s)$ with the storage function $S(y)=-\langle \varphi_s,y\rangle_{L^2(\Omega),L^2(\Omega)}$. Under the
additional smallness condition on $\|y_d\|_{L^2(\Omega)}$, the strict strong duality holds and thus
the measure-turnpike property follows.
As said above, this assumption reflects the fact that Theorem~\ref{equiv} is rather of a local nature. However, even though the nonlinear term in \eqref{semilinheat2} has the ``right sign'', we do not know how to take advantage of this monotonicity of the control system \eqref{semilinheat2} to infer the measure-turnpike property without such a smallness condition.
It is interesting to compare this result with \cite[Theorem 3.1]{PZ2}, where the authors used a subtle analysis of optimality systems to establish an exponential turnpike property, under the same smallness condition.
The question of whether the turnpike property actually holds or not for optimal solutions \emph{but} without the smallness condition on the target, is still an interesting open problem.
\bigskip
\noindent \textbf{Acknowledgment}.
We would like to thank Prof. Enrique Zuazua for fruitful discussions and valuable suggestions on this subject.
We acknowledge the financial support by the grant FA9550-14-1-0214 of the EOARD-AFOSR. The second author was partially supported by the National Natural Science Foundation of China under grants 11501424 and 11371285.
\bigskip
\section{Introduction}
\subsection{Background and motivation}
The problem of finding Kähler-Einstein metrics has been central in the development of Kähler geometry, leading to the solution by Chen-Donaldson-Sun of the celebrated Yau-Tian-Donaldson conjecture \cite{CDSa, CDSb, CDSc}. While the problem is well understood on compact Kähler manifolds, or more generally compact Kähler varieties \cite{EGZ, Li22}, the non-compact case is still relatively open. In the pioneering work of Martelli-Sparks-Yau \cite{MSY08}, the existence of conical Calabi-Yau metrics (alias Ricci-flat Kähler cone metrics) on toric varieties with an isolated singularity is shown to be equivalent to a volume minimization principle for Euclidean convex cones. This principle still holds for mildly singular toric varieties as proved by Berman \cite{Ber20}. A more systematic study of polarized affine varieties with an isolated singularity was done by Collins and Székelyhidi \cite{CS19}, generalizing the work of Chen-Donaldson-Sun to the context of Kähler cones, or equivalently, Sasakian manifolds.
A Sasakian manifold is a compact Riemannian manifold such that the metric cone over it is Kähler. Sasakian manifolds can be viewed as odd-dimensional analogs of compact Kähler manifolds since they have a natural transverse Kähler structure on an intrinsic horizontal distribution. The existence of Ricci-flat Kähler cone metrics on a Kähler cone is in fact equivalent to the existence of Sasaki-Einstein metrics on the link, which boils down to a Kähler-Einstein-like problem on the transverse structure.
The existence of a (singular) Kähler-Einstein metric is equivalent to solving a (degenerate) complex Monge-Ampère equation. An interesting problem to ask is the regularity of a singular Kähler-Einstein metric on the smooth locus. In the present paper, we are concerned with the regularity problem on a class of mildly singular affine varieties called \textit{Fano cones}.
In order to state the main result, let us first give some preliminaries on Fano cones and conical Calabi-Yau potentials. Recall that a normal variety is called \( \mathbb{Q} \)-Gorenstein if a multiple of its canonical line bundle is Cartier. The action of a complex torus \( T \) on \( Y \) is said to be \textit{good} if it is effective and has a unique fixed point contained in any orbit closure.
\begin{defn}
A \emph{cone} \( Y \) is a normal affine variety endowed with the good action of a complex torus \( T \simeq (\mathbb{C}^{*})^k \). We say that \( Y \) is a \emph{Fano cone} if it is \( \mathbb{Q}\)-Gorenstein with klt singularities. The unique fixed point of \( Y \), denoted by \( 0_Y \), is called the \emph{vertex} of \( Y \).
\end{defn}
Let \( \mathcal{M} := \operatorname{Hom}(T, \mathbb{C}^{*}) \simeq \mathbb{Z}^k \) be the weight lattice and \( \mathcal{N} := \mathcal{M}^{*} = \operatorname{Hom}(\mathbb{C}^{*}, T) \) the coweight lattice. The ring of regular functions of \( Y \) admits a decomposition into \( T \)-modules
\[ \mathbb{C}[Y] = \oplus_{\alpha \in \Gamma} R_{\alpha}, \quad \Gamma := \set{\alpha \in \mathcal{M}, R_{\alpha} \neq 0} \]
where \( R_{\alpha} \) is the \( T \)-module with weight \( \alpha \). Let \( \mathcal{M}_{\mathbb{R}} := \mathcal{M} \otimes \mathbb{R} \) and \(\mathcal{N}_{\mathbb{R}} := \mathcal{N} \otimes \mathbb{R} \). The set \( \Gamma \) is an affine semi-group of finite type which generates a strictly convex polyhedral cone \( \sigma^{\vee} \subset \mathcal{M}_{\mathbb{R}} \). Equivalently, the dual cone \( \sigma \) in \( \mathcal{N}_{\mathbb{R}} \) is polyhedral of maximal dimension \( k \). This results from the assumption that \( Y \) has a unique fixed point lying in the closure of every \( T \)-orbit (cf. \cite{AH06}). The interior of \( \sigma \) is then non-empty and coincides with its relative interior:
\[ \text{Int}(\sigma) = \set{ \xi \in \mathcal{N}_{\mathbb{R}}, \sprod{\alpha, \xi} > 0, \forall \alpha \in \Gamma} \]
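For instance (a standard example, stated here only for illustration), for \( Y=\mathbb{C}^{n+1} \) endowed with the standard action of \( T=(\mathbb{C}^{*})^{n+1} \), one has \( \Gamma=\mathbb{N}^{n+1} \), the cones \( \sigma^{\vee} \) and \( \sigma \) are both the positive orthant, and the Reeb cone is
\[ \text{Int}(\sigma)=\set{\xi=(\xi_1,\dots,\xi_{n+1})\in\mathcal{N}_{\mathbb{R}},\ \xi_i>0 \text{ for all } i}. \]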
\begin{defn} The interior of the cone \( \sigma \) is called the \emph{Reeb cone} of \( Y \). An element \( \xi \in \text{Int}(\sigma) \) is called a \emph{Reeb vector}. A Fano cone decorated with a Reeb vector \( (Y,\xi) \) is said to be a \emph{polarized Fano cone}. We say that \( (Y,\xi) \) is \emph{quasi-regular} if \( \xi \in \mathcal{N}_{\mathbb{Q}} \), and otherwise \emph{irregular} if \( \xi \notin \mathcal{N}_{\mathbb{Q}} \).
\end{defn}
The closure inside \( \operatorname{Aut}(Y) \) of the one-parameter subgroup generated by the infinitesimal action of \( \xi \) is a compact torus \( T_{\xi} \subset T_c \), where \( T_c \simeq (\mathbb{S}^1)^k \) is a maximal compact subtorus of \( T \). If \( \xi \) is quasi-regular then \( T_{\xi} \simeq \mathbb{S}^1 \), but if it is irregular then \( T_{\xi} \simeq (\mathbb{S}^1)^m \), \( k \geq m > 1 \). Equivalently, in the quasi-regular (resp. irregular) case, the holomorphic vector field associated to \( \xi \) generates an action of \( \mathbb{C}^{*} \) (resp. \( (\mathbb{C}^{*})^m \)). It can be shown that in the quasi-regular case, the quotient \( (Y \backslash \set{0_Y}) / \mathbb{C}^{*} \) is a Fano orbifold (see \cite[Paragraph 42]{Kol04}). Note however that in the irregular case, the quotient by \( (\mathbb{C}^{*})^m\) is only well-defined as an algebraic space (cf. \cite{Kol97}). For more details on Fano cones, the reader may consult for example \cite{LLX20}, \cite{DS17} and references therein.
Given a Fano cone \((Y,T)\), by Sumihiro's theorem (see \cite[Theorem 1, Lemma 8]{Sum74}), there exists an embedding \( Y \subset \mathbb{C}^N \) such that \(T\) corresponds to a diagonal subgroup of \( GL_N(\mathbb{C})\) acting linearly. Given an embedding \( Y \subset \mathbb{C}^N \), we say that a function \( f \) is plurisubharmonic (psh for short) on \( Y \) if it is locally the restriction to \( Y \) of a psh function on the ambient space \( \mathbb{C}^N \).
\begin{defn}
A \emph{\( \xi \)-radial function} (or \emph{\( \xi \)-conical potential}) \( r^2 : Y \to \mathbb{R}_{>0} \) is a psh function on \( Y \) that is invariant under the action of \( \xi \) and 2-homogeneous under \( -J \xi \), namely
\[ \mathcal{L}_{\xi} r^2 = 0, \quad \mathcal{L}_{-J\xi} r^2 = 2 r^2 \]
on \( Y_{\text{reg}} \).
\end{defn}
If \( Y \) is a \( \mathbb{Q} \)-Gorenstein cone, then for \( m > 0 \) large enough, \( m K_Y \) is a Cartier divisor and naturally linearized by the \( T \)-action. Moreover, there exists a \( T \)-invariant non-vanishing holomorphic section \( s \) of \( mK_Y \) and a volume form \( dV_Y \) such that
\[ dV_Y = \tuple{i^{(n+1)^2 m } s \wedge \ol{s}}^{1/m} \]
where \( n+1 = \dim_{\mathbb{C}} Y \). To simplify the notation, by an abuse of language we will sometimes say that \( s \) is a ``multivalued'' section of \( K_Y \) and simply write \( dV_Y = i^{(n+1)^2} s \wedge \ol{s} \).
A \textit{canonical volume form} \( dV_Y \) on \( Y \) is a volume form that is \( (2n + 2) \)-homogeneous under the action of \( r \partial r \), namely
\[ \mathcal{L}_{r \partial r } dV_Y = 2 (n+1) dV_Y \]
on \( Y_{\text{reg}} \).
The \( \mathbb{Q} \)-Gorenstein and klt singularities assumptions on \( Y \) guarantee that there exists a unique canonical volume form on \( Y \) up to a constant, see \cite{MSY08}, \cite{CS19}.
A \( (1,1) \)-Kähler current \( \omega \) on a polarized Fano cone \( (Y,\xi) \) is said to be a \( \xi \)-\textit{Kähler cone current} if there exists a locally bounded \( \xi \)-radial function such that
\[ \omega = dd^c r^2 \]
This is well-defined thanks to the local theory of Bedford-Taylor \cite{BT76}. If moreover the function \( r^2 \) satisfies the Calabi-Yau condition
\begin{equation} \label{conical_CY}
\omega^{n+1} = (dd^c r^2)^{n+1} = dV_Y
\end{equation}
in the pluripotential sense, then \( r^2 \) is said to be a (singular) \textit{conical Calabi-Yau potential}.
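The model example (standard, and independent of the references above) is the flat one: on \( Y=\mathbb{C}^{n+1} \) with \( \xi \) generating the standard action \( z\mapsto e^{i\theta}z \), the function \( r^2=|z|^2 \) is a \( \xi \)-radial function and
\[ (dd^c |z|^2)^{n+1} = c_n\, dV_{\mathbb{C}^{n+1}}, \]
where \( c_n>0 \) is a constant depending on the normalization of \( d^c \); hence, after rescaling the canonical volume form, \( r^2=|z|^2 \) is a smooth conical Calabi-Yau potential.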
\begin{defn}
We say that a Kähler cone current \( \omega = dd^c r^2 \) is a conical Calabi-Yau metric if the function \(r^2\) is a singular conical Calabi-Yau potential which is smooth on the regular locus of \( Y \).
\end{defn}
The motivation for studying these metrics on Fano cones actually has its origin in the compact Fano case. Concretely, Fano cones arise as metric tangent cones of the Gromov-Hausdorff limit of a sequence of Fano manifolds \cite{DS17}. If each term of the sequence is moreover Kähler-Einstein, then the Fano cone admits a conical Calabi-Yau metric. As discussed in \cite[Section 4]{Ber20} (see also Remark 4.10 therein), it is expected that a singular conical Calabi-Yau potential restricts to a smooth function on the regular locus of \( Y \). Our goal in this article is to give an affirmative answer to this problem.
\begin{thm}
Let \( (Y,\xi) \) be a polarized Fano cone and \( r^2 \) be a singular \( \xi \)-conical Calabi-Yau potential on \( Y \). Then \( r^2 \) is smooth on the regular locus of \( Y \). In particular, the curvature form of \( r^2 \) is a well-defined conical Calabi-Yau metric.
\end{thm}
Such a smoothness result is well-known for singular Kähler-Einstein metrics on compact Kähler varieties \cite{EGZ}, \cite{BEGZ10}, \cite[Lemma 3.6]{BBEGZ}. In the non-compact setting, when the cone has a unique singularity at the vertex, the Sasakian link is smooth, so the conical metric is automatically smooth outside the vertex. For toric Fano cones with non-isolated singularities, a regularity property was obtained by Berman \cite{Ber20} by using the toric symmetry to reformulate the problem in terms of real Monge-Ampère equations. As discussed in \cite[Remark 4.10]{Ber20}, the only places where the toric structure was used were the \( L^{\infty}\)-estimate and the uniqueness of the solution of the Monge-Ampère equation. Although it is possible to generalize the same approach to a larger class of highly symmetric varieties, such as horospherical varieties, we provide a proof closer to the pluripotential spirit and independent of any symmetry other than the given effective torus action. It is an interesting problem to ask whether one can weaken the regularity assumption on the solution.
\subsection{Organization}
The organization of the article is as follows.
\begin{itemize}
\item In Section \ref{pluripotential_sasaki}, we give a quick review of the structure of degenerate Sasakian manifolds. We then gather results in pluripotential theory on these manifolds based on the work of Guedj-Zeriahi \cite{GZ05} and He-Li \cite{HL21}. We also introduce \textit{extremal functions} associated to a Reeb-invariant Borel set on a degenerate Sasakian manifold, a notion which seems to be new in the literature. These objects were not studied in \cite{HL21} in all generality (but see \cite[Prop. 3.17, Thm. 3.1]{HL21} for results concerning weighted global extremal functions). The capacity-extremal function comparison is crucial in the proof of the uniform estimate.
\item Section \ref{section_proof_main_theorem} is devoted to the proof of our main result. The general strategy is based on \cite{EGZ}, \cite{BEGZ10}, \cite{BBEGZ} and \cite{Ber20}. Let us give a brief explanation. After taking a resolution of singularities, the conical Calabi-Yau problem is translated by pullback to a Calabi-Yau problem on a degenerate Sasakian manifold.
Our key theorem is the uniform \( L^{\infty} \)-estimate of a family of solutions, which relies on a domination-by-capacity property (cf. Prop. \ref{domination_by_capacity}). This, combined with a transverse Yau-Aubin inequality, allows us to obtain a Laplacian estimate of the family, which implies regularity of the solution.
\item In Section \ref{appendice_yau_aubin_transverse}, we provide a proof for the transverse version of Yau-Aubin inequality, which is used in the Laplacian estimate.
\end{itemize}
\textbf{Acknowledgements.} This article is part of a thesis supervised by Thibaut Delcroix and Marc Herzlich. I wish to thank Vincent Guedj, Eleonora Di Nezza, and Tat-Dat To for their generosity as well as many helpful discussions and remarks. I am also grateful for the hospitality of the Vietnam Institute for Advanced Study in Mathematics (VIASM), where this work first began.
\section{Pluripotential theory on Sasakian manifolds} \label{pluripotential_sasaki}
\subsection{Structure of Sasakian manifolds}
In this section, we introduce the notion of \textit{degenerate Sasakian manifolds}. These are compact manifolds having all the essential properties of a Sasakian manifold, except that the form \( d \eta \) is not positive-definite, hence does not define a transverse Kähler structure. Still, we assume that a degenerate Sasakian manifold has a transverse Kähler structure, but that the basic Kähler form is not induced by the contact form.
Degenerate Sasakian manifolds arise as the link of the resolution of Fano cones (see Lem. \ref{resolution_singularities}). The reader should compare this setting to the Kähler situation: a resolution of a Kähler space is still Kähler, but the Kähler structure of the resolution is not the pullback of the Kähler structure on the base.
We refer the reader to \cite{BG08} for a detailed treatment of almost contact structures and Sasakian manifolds.
Let \( S \) be a compact differentiable manifold of dimension \( 2n + 1 \). A \textit{contact structure} on \( S \) is the data of a \( 1 \)-form \( \eta \) on \( S \) such that \( \eta \wedge (d \eta)^n \neq 0 \). The manifold \( S \) is then said to be a \textit{contact manifold}. On a contact manifold, there exists a unique vector field \( \xi \), called the \textit{Reeb vector field}, such that \( \eta(\xi) = 1, \mathcal{L}_{\xi} \eta = 0 \). The distribution \( \mathcal{D} := \text{ker}(\eta) \) is called the \textit{horizontal distribution} of \( S \).
\begin{defn}
An almost contact structure is given by \( (S,\xi,\eta, \Phi) \), where \( \eta \) is a contact form, \( \xi \) the corresponding Reeb vector field, and \( \Phi \) a \( (1,1) \)-tensor of \( TS \) such that:
\[ \Phi^2 = -Id + \xi \otimes \eta, \quad d \eta( \Phi . , \Phi. ) = d \eta, \quad d \eta(., \Phi . ) > 0 \]
In particular, \( \Phi|_{\mathcal{D}} \) is an almost complex structure.
A degenerate almost contact structure is the same as an almost contact structure, except that \( d \eta(.,\Phi .) \) is only semipositive, i.e. \( d \eta (., \Phi .) \geq 0 \).
\end{defn}
\begin{defn}
A degenerate metric contact structure is a degenerate almost contact structure \( (S ,\xi, \eta, \Phi) \) endowed with a Riemannian metric \( g \) satisfying
\[g(\Phi., \Phi .) = g(.,.) - \eta \otimes \eta \]
Such a metric is said to be compatible.
\end{defn}
A (degenerate) almost contact structure is said to be \textit{normal} if the almost complex structure \( \Phi|_{\mathcal{D}} \) on the horizontal distribution \( \mathcal{D} \) is integrable. A form \( \alpha \) on \( S \) is said to be \textit{basic} if
\[ \mathcal{L}_{\xi} \alpha = i_{\xi} \alpha = 0 \]
\begin{defn}
A degenerate Sasakian manifold \( (S, \xi, \eta, \omega_B) \) is a normal degenerate contact structure with a transverse Kähler metric defined by a basic positive-definite \((1,1)\)-form \( \omega_B \).
\end{defn}
Let \( g_B \) be the Riemannian metric associated to \( \omega_B\). A degenerate Sasakian manifold admits a Riemannian metric, defined by
\[ g_S := \eta \otimes \eta + g_B, \]
which restricts to a transverse Kähler metric on \( \mathcal{D} \), but the latter is in general different from the semipositive form induced by the contact form. In particular, a degenerate Sasakian manifold has a degenerate metric contact structure.
\begin{rmk} In \cite{BG08}, a Sasakian manifold is defined as a \textit{normal metric contact structure}. In our paper, one should distinguish between a \textit{metric contact structure} and a \textit{degenerate metric contact structure}. Both are almost contact structures with a compatible metric, but the metric of the former is exactly \( d \eta ( Id \otimes \Phi ) \), while the latter has a compatible metric \( g_S \neq d \eta (Id \otimes \Phi) \).
\end{rmk}
Many properties of Sasakian manifolds still hold on their degenerate counterparts.
For example, on a degenerate Sasakian manifold, we still have a cover by \textit{local foliation charts}, coming from the foliation \( \mathcal{F}_{\xi} \) by the Reeb vector field \( \xi \) on \( S \).
\begin{defn}
The \emph{foliation atlas} on a degenerate Sasakian manifold is defined as a collection of charts \( (U_{\alpha}, \Phi_{\alpha})\) covering \( S \) with diffeomorphisms:
\begin{align*}
\Phi_{\alpha} : W_{\alpha} \times ]-t,t[ &\to U_{\alpha} \\
(z,x) &\longmapsto (\phi_{\alpha}(z), \tau_{\alpha}(z,x))
\end{align*}
such that:
\begin{itemize}
\item The open interval \(]-t,t[ \subset \mathbb{R}\) has coordinate \( x \). Here, \( t \) can be taken to be independent of \( \alpha \).
\item For all \( \alpha \), \( W_{\alpha} \simeq B_{\delta} (0) \) is the ball of radius \( \delta > 0 \) centered in \( 0 \in \mathbb{C}^n \) with coordinates \( z = (z_1, \dots, z_n) \). Moreover, the transition map \( \phi_{\alpha \beta} := \phi_{\alpha} \circ \phi_{\beta}^{-1} \) from \( W_{\alpha} \cap W_{\beta} \) to itself is holomorphic.
In practice, we usually take \( \delta = 1 \).
\end{itemize}
Each chart \( (U_{\alpha}, \Phi_{\alpha}) \) is called a \emph{foliation chart}, and each \( W_{\alpha}\) is said to be a \emph{transverse chart} (or \emph{transverse neighborhood}).
In a foliation chart \( U_{\alpha} \), we may identify \( \xi \) with \( \partial_x \) and a point \( p \in S \) can be written as \( p = (z_1, \dots, z_n, x) \).
\end{defn}
Let \( \Omega_B^k \) be the sheaf of basic \( k \)-forms on \( S \). Since the exterior differential \( d \) on \( S \) preserves basic forms, it descends to the \textit{basic exterior differential} \( d_B := d|_{\Omega_B^k} \). We then have a subcomplex \( \Omega_B^{.}(\mathcal{F}_{\xi}) \) of the de Rham complex, and the corresponding \textit{basic cohomology} \( H_B^{*} \). The integrable complex structure on \( \mathcal{D} \) leads to the decompositions
\[ d_B = \partial_B + \overline{\partial}_B, \quad \Omega_B^k = \bigoplus_{p+q = k} \Omega_B^{p,q} \]
as well as the basic Dolbeault complex and the corresponding cohomologies \( H_B^{p,q} \). We then say that a basic function is \textit{ transversely holomorphic} if it vanishes under \( \overline{\partial}_B \). The Kähler structure on \( \mathcal{D} \) induces the decomposition in basic cohomologies as in the classic Hodge theory:
\[ H^k_B = \bigoplus_{p+q = k} H^{p,q}_B \]
In short, usual Kähler properties still hold for a Kähler leaf space. We refer the reader to \cite{EKA90} for proofs.
\subsection{Quasipsh functions and capacities}
We present here some results concerning intrinsic capacities on degenerate Sasakian manifolds, following the lines of Guedj-Zeriahi \cite{GZ05}, slightly generalizing the work of He-Li \cite{HL21}. Apart from a subtlety in the definition of capacity, there are generally no supplementary difficulties compared to the case of a classic Sasakian manifold studied by He and Li.
Let \( (S,\xi,\eta, \omega_B) \) be a degenerate Sasakian manifold of dimension \( 2n + 1 \), where \( \omega_B \) is a basic Kähler form on \( S \), while \( \theta := d \eta \) is smooth, semipositive and \textit{big}, the latter meaning:
\[ 0 < \text{vol}_{\theta}(S) := \int_S \theta^n \wedge \eta < +\infty\]
Let \( g_S := \eta \otimes \eta + g_B \) be the corresponding Riemannian metric on \( S \). We denote by
\[ \mu_{\omega_B} := \omega_B^n \wedge \eta \]
the volume form on \( S \) associated to \( g_S \).
\begin{defn}
By a \emph{\( \xi \)-invariant object} (function, set, etc.), we mean that the object is invariant under the action of the compact torus \( T_{\xi} \) generated by \( \xi \).
By a function in \( L^1(S) \), we mean a function being \( L^1 \) with respect to the measure \( \mu_{\omega_B} \) on \( S \).
\end{defn}
A \textit{\((p,q)\)-transverse current} is a collection \( \set{(W_{\alpha}, T_{\alpha})} \) where \( W_{\alpha} \) is a transverse neighborhood and \( T_{\alpha} \) a current of bidegree \((p,q)\) on \(W_{\alpha}\) such that
\[ \phi_{\alpha \beta}^{*}T_{\beta}|_{W_{\alpha} \cap W_{\beta}} = T_{\alpha}|_{W_{\alpha} \cap W_{\beta}}\]
The current \( T \) is said to be closed (resp. positive) if each \( T_{\alpha} \) is closed (resp. positive) on \(W_{\alpha}\). Recall that a basic function on \( S \) is a \( \xi \)-invariant function. A \textit{basic psh function} \(u\) on \( U_{\alpha} \) is a basic, upper-semicontinuous function on \( U_{\alpha} \) such that \( u|_{W_{\alpha}} \) is a classical psh function. In particular, \(u\) is locally integrable.
\begin{defn}
We say that a function \(u\): \(S \to \mathbb{R} \cup \set{-\infty} \) is \textit{basic \( \theta \)-psh} if \( u \) is locally the sum of a basic smooth function and a basic psh function, such that
\[ (\theta + d_B d_B^c u)|_{\mathcal{D}} \geq 0 \]
in the sense of transverse currents.
We will denote by
\( PSH(S,\xi,\theta) \)
the set of basic \( \theta\)-psh functions.
If \( u \in PSH(S,\xi,\theta) \), we put \( \theta_u := \theta + dd^c u \).
\end{defn}
In particular, a \( \theta \)-psh function is \( \xi \)-invariant, upper-semicontinuous and belongs to \( L^1(S)\). A Sasakian analogue of the Bedford-Taylor theory was developed by van Coevering \cite{vC} in the case where \( \theta \) is Kähler and \( u \) is a bounded \( \theta \)-psh function on \( S \). Let us give some details of the construction.
Let \( u \in PSH(S,\xi,\theta) \cap L^{\infty}(S) \) and \( T \) a transverse closed positive current on \( S \). Since \( \theta \) is a closed and basic \((1,1)\)-form, \( \theta_u \) defines a transverse \((1,1)\)-current. After perhaps shrinking the transverse neighborhood \( W_{\alpha} \), there exists a local \( \xi \)-invariant potential \(v\) such that \( \theta = dd^c v \). We then define on each \( W_{\alpha}\)
\[ \theta_u \wedge T := dd^c\big( (v+u)\, T \big) \]
This allows one to define inductively \( \theta_u^k \wedge T \) on each \( W_{\alpha} \). Passing to the foliation chart \( U_{\alpha} = W_{\alpha} \times ]-t,t[\), the Monge-Ampère operator of \( u \) is defined as
\[ \theta_u^n \wedge dx \]
where we identify the contact form \( \eta\) with \(dx\) in the local coordinate of \(]-t,t[\). One can check that this definition is independent of the foliation chart. We will denote the (sasakian) Monge-Ampère measure of \( u\) by
\[ \text{MA}_{\theta}(u) := \theta_u^n \wedge \eta\]
In particular, \( \text{MA}_{\theta}(u) \) is a \( \xi \)-invariant Radon measure, which has the following continuity property.
\begin{prop} \cite[Theorem 2.3.1]{vC} \label{ma_continuity_monotone_sequences}
The Sasakian Monge-Ampère operator is continuous along monotone sequences. In other words, if \( (u_k)_{k \in \mathbb{N}} \subset PSH(S,\xi,\theta) \cap L^{\infty}(S) \) increases (or decreases) towards \( u \), then \( \text{MA}_{\theta}(u_k) \to \text{MA}_{\theta}(u) \) in the sense of measures.
\end{prop}
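For a smooth basic \( u \), the Monge-Ampère measure takes the following explicit form in a foliation chart \( (z_1, \dots, z_n, x) \), up to a positive dimensional constant depending on the normalizations:
\[ \text{MA}_{\theta}(u)|_{U_{\alpha}} = \det \tuple{ \theta_{i \ol{j}} + \frac{\partial^2 u}{\partial z_i \partial \ol{z}_j} } \bigwedge_{k=1}^n \frac{i}{2} dz_k \wedge d \ol{z}_k \wedge dx \]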
If \( u \) is bounded, then by supposing \( u \geq 0 \) and noting that \( u^2 \) is basic and psh, one can define the transverse closed positive current:
\[ du \wedge d^c u \wedge T := \frac{1}{2} dd^c u^2 \wedge T - u \, dd^c u \wedge T \]
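For smooth \( u \), this is consistent with the pointwise identity
\[ \frac{1}{2} dd^c u^2 = u \, dd^c u + du \wedge d^c u \]
which motivates the definition.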
As in the (transverse) Kähler case, we have for all \( u \in PSH(S,\xi,\theta) \cap L^{\infty}(S) \)
\[ \int_S \theta_u^n \wedge \eta = \text{vol}_{\theta}(S) \]
i.e. a locally bounded \( \theta\)-psh function is of full mass.
We record the following regularization property for a later use:
\begin{lem} \label{regularization_theorem}
Given \( u \in PSH(S,\xi,\omega_B ) \), there exists a sequence \( (u_k)_{k \in \mathbb{N}} \subset PSH(S,\xi,\omega_B) \cap C^{\infty}(S) \) decreasing to \( u \).
\end{lem}
\begin{proof}
We use the regularization procedure as in \cite[Theorem 3.3]{Ber19}. First, for a smooth basic function \( f \) and \( \beta > 0 \), consider the basic Calabi-Yau-type problem on \( S \):
\[ (\omega_B + d_B d_B^c \phi_{\beta} )^{n} \wedge \eta = e^{\beta(\phi_{\beta} - f)} \omega_B^n \wedge \eta \]
A solution \( \phi_{\beta} \) verifying \( \sup \phi_{\beta} = 0 \) exists and is unique (cf. \cite[3.5.5]{EKA90}). We will denote by \( P_{\beta}(f) \), \( \beta > 0 \) the unique solution.
Now let
\[ P_{\omega_B}(f)(p) := \sup \set{\phi(p), \phi \leq f, \phi \in PSH(S,\xi,\omega_B)} \]
This function belongs to \( PSH(S,\xi,\omega_B) \) (cf. \cite[Proposition 3.17]{HL21}). Consider
\[P^{'}_{\omega_B}(f)(p) := \sup \set{\phi(p), \phi \leq f, \phi \in PSH(S,\xi, \omega_B) \cap C^{\infty}(S)} \]
Since \( u \) is u.s.c. and basic, it is the decreasing limit of a sequence of smooth basic functions \( (f_j) \). We assert that the sequence \( (v_j)_{j \in \mathbb{N}} := (P^{'}_{\omega_B}(f_j))_{j \in \mathbb{N}} \), which consists of basic functions, decreases to \( u \). Indeed, \( P^{'}_{\omega_B} \) is order-preserving, so \( (v_j) \) is a decreasing sequence, and \(f_j \geq v_j \geq u \) by construction. Since \( f_j \searrow u \),
for all \( x\) and \( \varepsilon > 0 \), there exists \( j_0 \) such that for all \( j \geq j_0 \):
\[ u(x) \leq v_j(x) \leq f_j(x) \leq u(x) + \varepsilon \]
hence \( v_j(x) \) decreases to \( u(x) \).
Arguing as in \cite[Proposition 2.3]{Ber19}, one can show that the sequence of basic \( \omega_B \)-psh functions \( v_{j,\beta} := P_{\beta}(f_j) \) converges uniformly to \( v_j \) as \( \beta \to \infty \), hence for appropriate \( \varepsilon_j \to 0 \), the sequence
\[ u_j := v_{j,\beta(j)} + \varepsilon_j \]
which consists of smooth basic \( \omega_B\)-psh functions, decreases to \( u \).
\end{proof}
We also have the \textit{comparison principle} for \(\theta\)-psh functions in the degenerate Sasakian context.
\begin{prop} \label{comparison_principle}
For all \( u, v \in PSH(S,\xi,\theta) \cap L^{\infty}(S) \),
\[ \int_{ \set{v < u}} \text{MA}_{\theta}(u) \leq \int_{\set{v < u}} \text{MA}_{\theta}(v) \]
\end{prop}
\begin{proof}
We first prove the following \textit{maximum principle}:
\[ 1_{\set{v < u}} \text{MA}_{\theta}(\max(u,v)) = 1_{\set{v < u}} \text{MA}_{\theta}(u) \]
It is enough to prove the equality on a foliation chart \( U_{\alpha} \). First remark that since \( u, v \) are both basic, on \( U_{\alpha} \) they depend only on the \( z \)-coordinates, hence \( U_{\alpha} \cap \set{ v < u} = ]-t,t[ \times \set{z \in W_{\alpha}, v < u} \). Since \( \text{MA}_{\theta}(u) \) is \( \xi \)-invariant, it restricts to \( \theta_u^n \wedge dx \) on \( U_{\alpha} \). The equality is then equivalent to
\[ 1_{ ]-t,t[ \times \set{z \in W_{\alpha}, v < u}} \theta_{\max(u,v)}^n \wedge dx = 1_{ ]-t,t[ \times \set{z \in W_{\alpha}, v < u}} \theta_u^n \wedge dx \]
on each foliation chart. By contracting with \( \xi = \partial_x \), this is exactly the classical local maximum principle for \( \theta \)-psh functions.
It follows from the maximum principle that
\begin{align*}
\int_{\set{v < u}} \text{MA}_{\theta}(u) &= \int_S 1_{\set{v < u}} \text{MA}_{\theta}(\max(u,v)) \\
&= \text{vol}_{\theta}(S) - \int_{ \set{v \geq u }} \text{MA}_{\theta}(\max(u,v)) \\
&\leq \int_S \text{MA}_{\theta}(v) - \int_{ \set{v > u}} \text{MA}_{\theta}(\max(u,v)) = \int_{\set{v \leq u }} \text{MA}_{\theta}(v)
\end{align*}
By arguing the same way with \( u - \varepsilon \) and \( v \), we obtain
\[ \int_{\set{v < u - \varepsilon}} \text{MA}_{\theta}(u) \leq \int_{\set{v \leq u - \varepsilon}} \text{MA}_{\theta}(v) \leq \int_{\set{v < u}} \text{MA}_{\theta}(v) \]
The proof is now concluded by remarking that \( \set{v < u - \varepsilon} \) increases to \( \set{v < u} \).
\end{proof}
We record the following result for a later use.
\begin{prop} \label{local_dirichlet_problem}
Let \( U = B_1(0) \times ]-t,t[ \) be a foliation chart on \( S \). For every \( \phi \in PSH(S,\xi,\theta) \cap L^{\infty}(S) \), there exists a unique \( \wt{\phi} \in PSH(S,\xi,\theta) \cap L^{\infty}(S) \) such that
\[ \text{MA}_{\theta}(\wt{\phi}) = 0 \; \text{on} \; U, \; \wt{\phi} = \phi \; \text{on} \; S \backslash U, \; \wt{\phi} \geq \phi \; \text{on} \; S \]
Moreover, if \( \phi_1 \leq \phi_2 \), then \( \wt{\phi}_1 \leq \wt{\phi}_2 \).
\end{prop}
\begin{proof}
The proof is a direct consequence of solving the local Dirichlet problem on a degenerate Sasakian manifold. The problem can be solved exactly as in the classical case, by remarking that for a basic function \( u \) in a foliation chart \( (z_1, \dots, z_n,x) \),
\[ (d_B d_B^c u)^n \wedge \eta = \det \tuple{ \frac{\partial^2 u}{\partial z_i \partial \ol{z}_j}} \bigwedge_{k=1}^n \frac{i}{2} dz_k \wedge d \ol{z}_k \wedge dx, \]
so that \( (d_B d_B^c u)^n \wedge \eta = 0 \iff \det(u_{i \ol{j}}) = 0 \).
Hence the local Dirichlet problem on a degenerate Sasakian manifold becomes the classical Dirichlet problem (see \cite{BT76}, \cite{BT82} for a proof).
\end{proof}
\begin{prop} \label{compacity_properties}
Let \( ( \phi_j)_{j \in \mathbb{N}} \) be a sequence in \( PSH(S,\xi,\theta) \).
\begin{enumerate}
\item \label{uniform_boundedness} There exists a constant \( C = C(\mu_{\omega_B}, \theta) \) such that for all \( u \in PSH(S,\xi,\theta) \):
\[ - C + \sup_S u \leq \int_S u d \mu_{\omega_B} \leq \text{vol}_{\omega_B} (S) \sup_S u \]
\item If \( ( \sup_S \phi_j) \) is uniformly bounded from above, then either \( (\phi_j) \) converges locally uniformly to \( -\infty \), or \( (\phi_j) \) is relatively compact in \( L^1(S) \).
\item \label{hartogs} If \( \phi_j \to \phi \) in \( L^1(S) \), then \( \phi \) coincides almost-everywhere with a function \( \phi^{*} \in PSH(S,\xi,\theta) \). Moreover,
\[ \sup_S \phi^{*} = \lim_{j \to +\infty} \sup_S \phi_j \]
\item The family
\[ \mathcal{F}_0 := \set{ \phi \in PSH(S,\xi,\theta), \sup \phi = 0} \]
is a compact subset of \( PSH(S,\xi,\theta) \).
\end{enumerate}
\end{prop}
\begin{proof}
For \( 1) \), we can adapt the strategy in \cite[Prop. 3.3]{HL21} to the degenerate Sasakian case. Let us sketch the arguments.
We only need to prove the first inequality in the statement (the second one is trivial). Assuming without loss of generality that \( \sup_S u = 0\), the inequality then reduces to
\[ \int_S u d \mu_{\omega_B} \geq -C \]
There exist two finite coverings of \( S \) by foliation charts \( V_{\alpha} \subset U_{\alpha} \) such that \( V_{\alpha} \simeq B_1(0) \times ]-t,t[ \) is relatively compact in \( U_{\alpha} \simeq B_4(0) \times ]-2t,2t[ \). To prove the desired result, it is enough to show that
\[ \int_{V_{\alpha}} u d \mu_{\omega_B} \geq - C_{\alpha} \]
where \( C_{\alpha} = C_{\alpha}(\theta) \). But on \( V_{\alpha} \), this is equivalent to
\[ \int_{B_1(0) \times ]-t,t[} u d \mu_{z,x} = 2t \int_{B_1(0)} u(z) d \mu_z \geq - C_{\alpha} \]
where \( d \mu_{z,x} \) and \( d\mu_z \) are respectively the measures \( \omega_B^n \wedge \eta \) and \( \omega_B^n \) on \( V_{\alpha} \) and \( B_1(0) \). Let \( \phi_{\alpha} \) be a local potential of \( \theta \) on \( B_4(0) \) (\( \phi_{\alpha} \) exists by the \( \partial_B \overline{\partial}_B \)-lemma). The function \( \phi_{\alpha} + u \) is independent of \( x \) and psh in \( B_4(0) \). By upper-semicontinuity, \( u \) attains its supremum \( u(p_1) = 0 \) at some point \( p_1 = (z_1, 0 ) \in B_4(0) \) (up to relabeling the charts). By the submean inequality on \( B_2(z_1) \subset B_4(0) \),
\[ (\phi_{\alpha} + u)(z_1,0) = \phi_{\alpha}(z_1,0) \leq \frac{1}{\mu_z(B_2(z_1))} \int_{B_2(z_1)} (\phi_{\alpha} + u)(z,0) d \mu_z \]
Since \( u \leq 0 \) and \( B_1(0) \subset B_2(z_1) \), this completes our proof.
\( 2) \) is a consequence of \( 1) \) (cf. \cite[Proposition 3.4]{HL21}).
\( 3) \) is a consequence of the local result for psh functions (see e.g. \cite[Theorem 1.46 (2)]{GZ17}). Indeed, by assumption, on each foliation chart \( U_{\alpha} \simeq B_1(0) \times ]-t,t[ \), we have \( \phi_j \to \phi \) in \( L^1_{\text{loc}}(U_{\alpha}) \). In particular, \( \phi_j \to \phi \) in \( L^1_{\text{loc}}(B_1(0)) \) as psh functions.
\( 4) \) is a direct consequence of \(2) \) and \( 3) \).
\end{proof}
The following is a Chern-Levine-Nirenberg-type inequality.
\begin{lem} \label{cln_inequality}
Let \( v, u \in PSH(S, \xi, \theta) \) such that \( 0 \leq u \leq 1 \). Then
\[ 0 \leq \int_{S} \abs{v} \theta_u^{n} \wedge \eta \leq \int_{S} \abs{v} \theta^{n} \wedge \eta + n \tuple{1 + 2 \sup_S v^{+}} \text{vol}_{\theta}(S) \]
where \( v^{+} := \max(v,0) \).
\end{lem}
\begin{proof}
We first suppose that \( v \leq 0 \). It is enough to establish the inequality for \( v_k := \max \set{v,-k} \). Indeed, the sequence \(-v_k \) increases to \( -v \), which allows us to conclude by the monotone convergence theorem. Now let us prove the desired result for \( v_k \). It is clear that \( v_k \) is \( \theta \)-psh. We then have the following chain of inequalities:
\begin{align*}
\int_{S} (-v_k) \theta_u^{n} \wedge \eta &= \int_S (-v_k) \theta_u^{n-1} \wedge ( \theta + \sqrt{-1} \partial_B \overline{\partial}_B u) \wedge \eta \\
&= \int_S (-v_k) \theta_u^{n-1} \wedge \theta \wedge \eta + \int_S (-v_k) \theta_u^{n-1} \wedge \sqrt{-1} \partial_B \overline{\partial}_B u \wedge \eta \\
&= \int_S (-v_k) \theta_u^{n-1} \wedge \theta \wedge \eta + \int_S u \theta_u^{n-1} \wedge (- \sqrt{-1} \partial_B \overline{\partial}_B v_k) \wedge \eta \\
&\leq \int_S (-v_k) \theta_u^{n-1} \wedge \theta \wedge \eta + \int_S \theta_u^{n-1} \wedge \theta \wedge \eta
\end{align*}
A simple induction allows us to conclude for the case \( v \leq 0 \). The general case follows by considering \( v' := v - \sup_S v \).
\end{proof}
\begin{defn}
The capacity of a Borel set \( E \subset S \) is defined as:
\[ Cap_{\theta}(E) := \sup \set{ \int_E \text{MA}_{\theta}(u), u \in PSH(S,\xi,\theta), 0 \leq u \leq 1} \]
\end{defn}
This definition makes sense since \( \theta \) is supposed to be big (otherwise \( Cap \) would be identically zero). It is clear by definition that \( Cap_{\theta}(.) \geq 0 \).
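For example, since every bounded basic \( \theta \)-psh function has full mass, each candidate \( u \) contributes \( \int_S \text{MA}_{\theta}(u) = \text{vol}_{\theta}(S) \), hence
\[ Cap_{\theta}(S) = \text{vol}_{\theta}(S) \]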
Now let \( PSH^{-}(S, \xi, \theta) \) be the set of negative, basic \( \theta \)-psh functions.
\begin{prop} \label{capacity_properties}
\hfill
\begin{itemize}
\item[1)] If \( \theta_1 \leq \theta_2 \) are two basic semipositive \((1,1)\)-forms on \( S \), then \( Cap_{\theta_1}(.) \leq Cap_{\theta_2}(.) \). Moreover, for all \( \delta \geq 1 \),
\[ Cap_{\theta} (.) \leq Cap_{\delta \theta} (.) \leq \delta^n Cap_{\theta}(.) \]
For all Borel sets \( K \subset E \subset S \), we have
\[ 0 \leq Cap_{\theta}(K) \leq Cap_{\theta}(E) \leq Cap_{\theta}(S) = \text{vol}_{\theta}(S) \]
\item[2)] For all \( v \in PSH^{-}(S, \xi, \theta) \) normalized by \( \sup_S v = 0 \), there exists a constant \( C = C(S, \theta) > 0 \) such that:
\[ Cap_{\theta} ( v < - t) \leq \frac{C}{t} \]
for all \( t > 0 \). In particular,
\(\lim_{t \to +\infty} Cap_{\theta}(v < -t) = 0 \).
\end{itemize}
\end{prop}
\begin{proof}
1) It is clear that if \( \theta_1 \leq \theta_2 \) then \( \text{MA}_{\theta_1}(u) \leq \text{MA}_{\theta_2}(u) \) for every candidate \( u \), by a pointwise property of the complex Hessian in local coordinates. Moreover, if \( \theta_1 \leq \theta_2 \), then \( PSH(S, \xi, \theta_1) \subset PSH(S, \xi, \theta_2) \), so \( Cap_{\theta_1} \leq Cap_{\theta_2} \). For all \( \delta \geq 1 \) and \( u \in PSH(S,\xi, \delta \theta) \) with \( 0 \leq u \leq 1 \), we have \( u/\delta \in PSH(S,\xi,\theta) \) and:
\[ 0 \leq (u/\delta) \leq (1/ \delta) \leq 1, \quad ( \delta \theta + d_B d_B^c u)^n = \delta^n \tuple{ \theta + \frac{d_B d_B^c u}{\delta} }^n \]
Therefore
\( Cap_{\delta \theta} (.) \leq \delta^n Cap_{\theta}(.) \) by definition.
For all \( K \subset E \) and every candidate function \( u \) in the definition of \( Cap \), \( \int_K \text{MA}_{\theta}(u) \leq \int_E \text{MA}_{\theta}(u) \), hence \( Cap_{\theta}(K) \leq Cap_{\theta}(E) \leq Cap_{\theta}(S) \). Finally, \( Cap_{\theta}(S) = \text{vol}_{\theta}(S) \) since every locally bounded \( \theta \)-psh function has full mass.
2) By the Chern-Levine-Nirenberg inequality in Lem. \ref{cln_inequality}, for a \(\theta\)-psh function \( u \) such that \( 0 \leq u \leq 1 \) and \( v \in PSH(S, \xi, \theta), v \leq 0 \), we have:
\begin{equation}
\int_S (-v) \theta_u^{n} \wedge \eta \leq \int_S (-v) \theta^{n} \wedge \eta + n \text{vol}_{\theta}(S)
\end{equation}
This inequality allows us to complete the proof. Indeed, for all \( u \in PSH(S, \xi, \theta) \) such that \( 0 \leq u \leq 1 \):
\begin{align*}
\int_{\set{v < -t}} \theta_u^{n} \wedge \eta &\leq \frac{1}{t} \int_S (-v) \theta_u^{n} \wedge \eta \\
&\leq \frac{1}{t} \tuple{ \int_S (-v) \theta^{n} \wedge \eta + n \text{vol}_{\theta} (S) } \\
&\leq \frac{1}{t} \tuple{ C(S, \theta) + n \text{vol}_{\theta}(S) } \quad (\text{by Prop. \ref{compacity_properties}, since } \sup_S v = 0)
\end{align*}
We conclude then by the definition of capacity.
\end{proof}
The following uniqueness result still holds in the context of degenerate Sasakian manifolds.
\begin{prop} \label{uniqueness}
Let \( u, v \in PSH(S,\xi,\theta) \cap L^{\infty}(S) \). If
\[ \text{MA}_{\theta}(u) = \text{MA}_{\theta}(v) \]
then \( u = v + c \) for some constant \( c \in \mathbb{R} \).
\end{prop}
\begin{proof}
We borrow the proof from \cite[Theorem 3.3]{GZ07} (see also \cite[Theorem 6.4]{HL21}), which still applies when \( \theta \) is only semipositive. Let \( f = (u-v)/2 \) and \( h = (u+v)/2 \). Subtracting the same constant from \( u \) and \( v \) (which changes neither \( f \) nor the Monge-Ampère measures), we can assume that \( u, v \leq - C_{\theta} \) so that \( \int_{S} (-h) \theta_h^n \wedge \eta \geq 1 \). The key idea is to obtain the following inequalities:
\begin{align}
\int_S d_B f \wedge d_B^c f \wedge \theta^{n-1}_h \wedge \eta &\leq \int_S \frac{f}{2} ( \theta_u^n - \theta_v^n) \wedge \eta \label{prop_uniqueness_first} \\
\frac{\int_S d_B f \wedge d_B^c f \wedge \theta^{n-1} \wedge \eta} { \int_{S} (-h) \theta_h^n \wedge \eta } &\leq 3^n \tuple{\int_S d_B f \wedge d_B^c f \wedge \theta_h^{n-1} \wedge \eta}^{1/2^{n-1}} \label{prop_uniqueness_second}
\end{align}
As a consequence, if \( \theta_u^n \wedge \eta = \theta_v^n \wedge \eta \), then combining (\ref{prop_uniqueness_first}) and (\ref{prop_uniqueness_second}) yields \( \nabla f = 0 \), hence \( u = v + c \) as desired.
We give a quick proof of (\ref{prop_uniqueness_first}).
Note that the current under integration on the lhs of (\ref{prop_uniqueness_first}) is well-defined since \( u\) and \(v\) are supposed to be bounded. A direct calculation yields
\begin{align*}
\int_S d_B f \wedge d_B^c f \wedge \theta^{n-1}_h \wedge \eta &\leq \sum_{k=0}^{n-1} \int_S d_B f \wedge d_B^c f \wedge \theta_u^k \wedge \theta_v^{n-1-k} \wedge \eta \\
& = \sum_{k=0}^{n-1} \int_S f (d_B d^c_B f) \wedge \theta_u^k \wedge \theta_v^{n-1-k} \wedge \eta \\
&= \int_S \frac{f}{2} ( \theta_u^n - \theta_v^n) \wedge \eta
\end{align*}
The first inequality follows from the binomial bound \( C^k_{n-1} \leq 2^{n-1} \), the second line from Stokes' theorem, and the last one from the fact that \( 2 d_B d_B^c f =\theta_u - \theta_v \) together with a telescoping sum.
The proof of (\ref{prop_uniqueness_second}) still goes through unchanged. It consists of proving inductively that for \( T = \theta_h^l \wedge \theta^{n-2-l} \wedge \eta \), \( l = n-2, \dots, 0\), we have
\[ \frac{\int_S df \wedge d^c f \wedge \theta \wedge T}{\tuple{\int_S (-h) \theta_h^{2} \wedge T }^{1/2}} \leq 3 \tuple{ \int_S df \wedge d^c f \wedge \theta_h \wedge T}^{1/2} \]
using an integration by parts and the Cauchy-Schwarz inequality.
\end{proof}
\subsection{Extremal functions}
Motivated by extremal functions in pluripotential theory, we introduce the following counterpart in the Sasakian setting.
\begin{defn}
Let \( K \subset S \) be a \( {\xi} \)-invariant Borel subset. The extremal function associated to \( \theta \) and \( K \) is defined as:
\[ V_{K,\theta}(p) := \sup \set{ \phi(p), \phi \in PSH(S,\xi, \theta), \phi \leq 0 \; \text{on} \; K} \]
\end{defn}
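For instance, \( V_{S,\theta} \equiv 0 \): the function \( \phi \equiv 0 \) is a candidate since \( \theta \geq 0 \), and every candidate is nonpositive. More generally, if \( K_1 \subset K_2 \), then \( V_{K_2,\theta} \leq V_{K_1,\theta} \), since enlarging \( K \) only adds constraints on the candidates.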
Let \(V^{*}_{K,\theta} \) be the u.s.c. regularization of \( V_{K,\theta} \). We say that a \( \xi \)-invariant Borel set \( K \subset S \) is \textit{\( PSH(S,\xi,\theta) \)-pluripolar} if \( K \) is contained in the \( -\infty \) locus of a basic \( \theta \)-psh function. Note that \( \set{u = -\infty} \) is \( \xi \)-invariant whenever \( u \) is basic \(\theta\)-psh, so requiring \( K \) to be \( \xi \)-invariant is consistent with this definition. The pluripolarity of \( K \) is determined by its extremal function, as the following lemma shows.
\begin{lem} \label{extremal_fucntion_properties}
Let \( K \subset S \) be a \( \xi \)-invariant Borel set.
\begin{itemize}
\item[1)] \( K \) is \( PSH(S,\xi,\theta) \)-pluripolar \(\iff V^{*}_{K,\theta} = +\infty \iff \sup V^{*}_{K,\theta} = +\infty \).
\item[2)] If \( K \) is not \( PSH(S,\xi,\theta) \)-pluripolar, then \( V^{*}_{K,\theta} \in PSH(S,\xi,\theta) \) and \( V^{*}_{K,\theta} = 0 \) on \( \text{Int}(K) \). Moreover,
\[ \int_{\ol{K}} \text{MA}_{\theta} (V^{*}_{K,\theta}) = \int_{\ol{K}} (\theta + d_B d_B^c V^{*}_{K,\theta})^n \wedge \eta = \text{vol}_{\theta}(S), \quad \int_{S \backslash \ol{K}} \text{MA}_{\theta}(V^{*}_{K,\theta}) = 0 \]
\end{itemize}
\end{lem}
\begin{proof}
1) Suppose that \( \sup_S V^{*}_{K,\theta} = +\infty \). By Choquet's lemma, there exists an increasing sequence of functions \( \phi_j \in PSH(S, \xi, \theta) \) such that \( \phi_j = 0 \) on \( K \) and \( V^{*}_{K,\theta} = (\lim \nearrow \phi_j)^{*} \). Up to extracting a subsequence, we can assume that \( \sup_S \phi_j \geq 2^j \). Define \( \psi_j := \phi_j - \sup_S \phi_j \). The sequence \( (\psi_j)_{j \in \mathbb{N}} \) lies in the compact family \( \mathcal{F}_0 \) and satisfies \( \int_S \psi_j d \mu_{\omega_B} \geq - C(\mu_{\omega_B} ) \) (cf. Prop. \ref{compacity_properties}). Let
\[ \psi := \sum_{j \geq 1} 2^{-j} \psi_j \]
The function \( \psi \) is basic \(\theta\)-psh as a decreasing limit of the basic \(\theta\)-psh partial sums, and satisfies \( \int_S \psi \, d \mu_{\omega_B} \geq -C(\mu_{\omega_B}) \), so \( \psi \not\equiv -\infty \). On \( K \) we have \( \psi_j = - \sup_S \phi_j \leq -2^j \), hence \( \psi \leq - \sum_{j \geq 1} 1 = -\infty \) there, i.e. \( K \subset \set{\psi = - \infty} \).
Now suppose that \( K \subset \set{\psi = -\infty} \) where \( \psi \in PSH(S,\xi,\theta) \). For all \( c \in \mathbb{R} \), \( \psi +c \in PSH(S, \xi,\theta) \) and \( \psi +c \leq 0 \) on \( K \). It follows that \( V_{K,\theta}^{*} \geq \psi +c \), hence \( V_{K,\theta}^{*} = + \infty \) on \( S \backslash \set{\psi = -\infty} \). Finally, \( V_{K,\theta}^{*} = +\infty \) on \( S \) since \( \set{\psi = -\infty} \) has zero mass with respect to \( \mu_{\omega_B} = \omega_B^n \wedge \eta \).
2) Clearly \( V_{K,\theta}^{*} = 0 \) in \( \text{Int}(K) \) by definition. The function \( V_{K,\theta} \) is basic as the sup-envelope of basic functions, hence its u.s.c. regularization \( V^{*}_{K,\theta} \) is also basic. The fact that \( V^{*}_{K,\theta} \) is \( \theta \)-psh follows from (\ref{hartogs}) of Prop. \ref{compacity_properties}.
Since a locally bounded \( \theta \)-psh function has full mass, we have
\[ \int_{\ol{K}} \text{MA}_{\theta}(V^{*}_{K,\theta}) = \int_S \text{MA}_{\theta}(V^{*}_{K,\theta}) = \int_{S} (\theta + d_B d_B^c V^{*}_{K,\theta})^n \wedge \eta = \text{vol}_{\theta}(S) \]
It only remains to show that
\( \text{MA}_{\theta}(V^{*}_{K,\theta}) = 0 \) on \( S \backslash \ol{K} \), which is equivalent to showing
\[ \int_{U_{\alpha}} \text{MA}_{\theta}(V^{*}_{K,\theta}) = 0 \]
on each foliation chart \( U_{\alpha} = B_1(0) \times ]-t,t[ \subset S \backslash \ol{K} \).
By Choquet's lemma, there exists an increasing sequence of functions \( \phi_j \in PSH(S, \xi, \theta) \) such that \( \phi_j = 0 \) on \( K \) and \( V^{*}_{K,\theta} = (\lim \nearrow \phi_j)^{*} \). Let \( \wt{\phi}_j \) be the unique solution of the local Dirichlet problem with data \( \phi_j \) (which exists by Prop. \ref{local_dirichlet_problem}). In particular,
\[ \text{MA}_{\theta}(\wt{\phi}_j) = 0 \; \text{on} \; U_{\alpha} \]
Moreover, the sequence \( (\wt{\phi}_j) \) is increasing and \( \wt{\phi}_j = \phi_j \) on \( S \backslash U_{\alpha} \), hence \( \wt{\phi}_j = 0 \) on \( K \). This shows that \( \wt{\phi}_j \leq V_{K,\theta}^{*} \), therefore \( \wt{\phi}_j \nearrow V_{K,\theta}^{*} \).
By continuity of the Monge-Ampère operator along monotone sequences (cf. Prop. \ref{ma_continuity_monotone_sequences}), \( \text{MA}_{\theta}(V^{*}_{K,\theta}) = 0 \) on \( U_{\alpha} \).
\end{proof}
Let us now state an important comparison theorem between capacity and extremal functions.
\begin{lem} \label{capacity_extremal_comparison}
Let \( M_{K,\theta} := \sup_S V^{*}_{K,\theta} \). For all compact non-pluripolar and \( \xi \)-invariant \( K \subset S \) we have:
\[ 1 \leq \text{vol}_{\theta}(S)^{1/n} Cap_{\theta}(K)^{-1/n} \leq \max(1, M_{K,\theta}) \]
\end{lem}
\begin{proof}
The inequality on the left is clear by Prop. \ref{capacity_properties}. First suppose that \( M_{K,\theta} \leq 1 \), then \( V_{K,\theta}^{*} \) is bounded. Since \( K \) is non-pluripolar, \( V^{*}_{K,\theta} \in PSH(S, \xi,\theta) \). Moreover, \( \text{MA}_{\theta} (V^{*}_{K,\theta}) \) is supported in \( K \) (cf. Lem. \ref{extremal_fucntion_properties}), hence
\[ Cap_{\theta}(K) \geq \int_{K} \text{MA}_{\theta} (V^{*}_{K,\theta}) = \int_S \text{MA}_{\theta}(V^{*}_{K,\theta}) = \text{vol}_{\theta}(S) , \]
which completes the proof in the \( M_{K,\theta} \leq 1 \) case.
Assume now that \( M := M_{K,\theta} \geq 1 \). Since the function \( V_{K,\theta}^{*} / M \) is a candidate in the definition of \( Cap_{\theta} \), it follows that
\begin{align*}
Cap_{\theta}(K) &\geq \int_K \text{MA}_{\theta}(M^{-1} V^{*}_{K,\theta} ) \\
&= \int_S \text{MA}_{\theta} (M^{-1} V^{*}_{K,\theta}) \; (\text{by Lem. \ref{extremal_fucntion_properties}}) \\
&\geq M^{-n}\int_S \text{MA}_{\theta}(V^{*}_{K,\theta}) = M^{-n} \text{vol}_{\theta}(S)
\end{align*}
This allows us to conclude.
\end{proof}
\subsection{Lelong number and integrability}
We define the \textit{Lelong number} of a basic psh function \( u \) on a foliation chart \( U_{\alpha} \) at a point \( p \) with coordinates \( (z,x) \) by
\[ \nu(u,p) := \lim_{r \to 0^{+}} \frac{1}{\log(r)\, \text{vol}(B(z,r))} \int_{B(z,r)} u \, \omega_B^n \]
This number does not depend on the foliation chart, since the transition maps restrict to biholomorphisms on transverse neighborhoods and the right-hand side is invariant under biholomorphisms by a theorem of Siu.
It is clear from our definition that the Lelong number is \( \xi \)-invariant. Moreover, in a foliation chart \( B_1(0) \times ]-t,t[ \), the function \( x \in ]-t,t[ \mapsto \nu(u,(z,x)) \) is constant for all \( z \in B_1(0) \). The Lelong number at a point \( p \) of a Sasakian manifold therefore equals its value at the projection of \( p \) to the transverse holomorphic ball of a foliation chart. Local properties of Lelong numbers can be translated word for word to the Sasakian setting.
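As a model example, consider \( u(z,x) = \log \abs{z - z_0} \) in a foliation chart, which is basic. The classical mean value computation
\[ \frac{1}{\text{vol}(B(z_0,r))} \int_{B(z_0,r)} \log \abs{z - z_0} \, dV(z) = \log r + C_n \]
gives \( \nu(u,p) = 1 \) at every point \( p \) lying over \( z_0 \); replacing the Euclidean volume by \( \omega_B^n \) only perturbs the average by a term which is negligible against \( \log r \).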
\begin{prop}
The number
\[ \nu(\set{\theta}) := \sup \set{ \nu(\phi,x), (\phi,x) \in PSH(S,\xi,\theta) \times S } \]
is finite and depends only on the basic cohomology class of \( \theta \).
\end{prop}
\begin{proof}
Since \( S \) is compact, there exists a basic Kähler form \( \theta' \) such that \( \theta' \geq \theta \), hence \( PSH(S,\xi,\theta') \supset PSH(S,\xi,\theta) \), so \( \nu(\set{\theta'}) \geq \nu(\set{\theta}) \). It is then enough to prove the assertion when \( \theta \) is transverse Kähler.
For \( p \in S \), we define \( \chi \) to be a smooth cut-off function equal to \( 1 \) in a neighborhood of \( p \) and to \( 0 \) outside a larger neighborhood. Let
\[ g_p(.) := \chi(.) \log d(.,p) \]
where \( d \) is the Riemannian distance associated to \( \theta \).
It is clear that \( g_p \) is smooth on \( S \backslash \set{p} \) and psh in a neighborhood of \( p \), hence \( A \theta \)-psh for some \( A > 0 \). Since \( S \) is compact, we can choose a uniform constant \( A = A(\theta) \) such that for all \( p \in S \):
\[dd^c g_p \geq -A \theta \]
By averaging with respect to the action of the compact torus generated by \( \xi \), we can suppose that \( g_p \) is \( \xi \)-invariant, hence \( g_p \in PSH(S,\xi, A \theta) \).
A basic psh function \( \phi \) in a foliation chart \( B_1(0) \times ]-t,t[ \) restricts to a psh function on the ball \( B_1(0) \), so we have
\[\nu(\phi,0) = \int_{\set{0_z}} d_B d_B^c \phi \wedge (d_B d_B^c \log \abs{z})^{n-1} \]
with \( 0_z \) being the center of \( B_1(0) \) (see e.g. \cite[Lemma 2.46]{GZ17} for a proof).
It follows from this local result that for any \( a = (z,x) \)
\begin{equation}
\nu(\phi,a) = \int_{\set{z}} \theta_{\phi} \wedge (A \theta + d_B d_B^c g_a)^{n-1}
\end{equation}
The right-hand side is bounded by \( \int_S A^n \theta^n \wedge \eta = A^n \text{vol}_{\theta}(S) \). This completes our proof.
\end{proof}
\begin{thm} \label{skoda_integrability}
Let \( \mathcal{F}_0 := \set{ \phi \in PSH(S,\xi,\theta), \sup_S \phi = 0} \). If
\[ A < 2 \nu(\set{\theta})^{-1}, \]
then
\[ \sup_{ \phi \in \mathcal{F}_0} \set{\int_S e^{-A \phi} \omega_B^n \wedge \eta} \leq C\]
for a constant \( C \) depending only on \( \omega_B \) and \( \theta \).
\end{thm}
\begin{proof}
We will reduce the problem to the classical Skoda integrability theorem. First remark that there exist two covers of \( S \) by a finite number of foliation charts \( (V_j)_{1 \leq j \leq N} \) and \( (U_j)_{1 \leq j \leq N} \), where \( U_j = B_{1}(0) \times ]-t,t[ \), such that \( \ol{V}_j \subset U_j \). We need to show that on each foliation chart \( U_j \), there exists a constant \( C_j = C(V_j, \mathcal{F}_0, A) \) satisfying
\[ \int_{U_j} e^{-A \phi} \omega_B^n \wedge \eta \leq C_j \]
But since on \( U_j \), \( \phi \) depends only on the \( z \) coordinates and \( \eta \) coincides with \( dx \), it is enough to show that
\[ \int_{U_j} e^{-A \phi} \omega_B^n \wedge \eta = 2 t \int_{B_{1}(0)} e^{- A \phi } \omega_B^n \leq C_j \]
This follows from the local Skoda integrability theorem, since the family \( \mathcal{F}_0 \) is compact (cf. \cite[Theorem 2.50]{GZ17} for a proof).
\end{proof}
\section{Regularity of the potential} \label{section_proof_main_theorem}
This part is dedicated to the proof of our main theorem. Let us first give some preliminaries and outline the arguments of the proof. Consider a Fano cone \( Y \) of complex dimension \( n + 1 \) with a good action by \( T \simeq (\mathbb{C}^{*})^k \). Let \( T_c \simeq (\mathbb{S}^1)^k \) be the maximal compact subtorus of \( T \).
Consider a \( T_c \)-equivariant embedding of \( Y \) into \( \mathbb{C}^N \) such that \( T_c \) corresponds to a diagonal group acting linearly on \( \mathbb{C}^N \). Recall that \( \xi \) generates the action of a compact torus \( T_{\xi} \subset T_c \). Now fix a locally bounded conical Calabi-Yau potential \(r^2 \) and a Reeb vector \( \xi \) on \( Y \), whose action by \(T_{\xi}\) extends to \( \mathbb{C}^N \) through the embedding. Let \( r^2_{\xi} \) be the radial function on \( \mathbb{C}^N \) associated to \( \xi \) with conical metric \(\omega_{\xi} = dd^c r^2_{\xi} \). Then \( r_{\xi}^2 \) restricts to a \(\xi\)-conical potential on \( Y \). The link of \( Y \) is homeomorphic to the set \( Y \cap \set{r_{\xi}^2 = 1} \). Now let
\[\pi: X \to Y \]
be a \( T \)-equivariant resolution of \( Y \) (which exists by Lem. \ref{resolution_singularities}). Let
\[ \mathcal{U} := \pi^{-1} (Y_{\text{reg}}) \]
be the open Zariski subset of \( X \) isomorphic to \( Y_{\text{reg}} \).
Consider the following submanifold of \( X \):
\[(S = \pi^{-1}( Y \cap \set{r_{\xi}^2 = 1} ), \xi, \eta, \omega_B) \]
where by an abuse of notation \(\xi \) still denotes the pullback of the given Reeb field on \( \mathbb{C}^N \), \( \omega_B \) is a transverse Kähler form on \( S \) (cf. Lem. \ref{basic_form_asymptotic}), and \( \eta = 2 \pi^{*} d^c \log r_{\xi}^2 \) is the contact form on \( S \), which is the pullback of the contact form associated to \( \xi \) on \( \mathbb{C}^N \). Since \( d \eta \) is only semipositive, \( S \) is degenerate Sasakian.
One can show (see Prop. \ref{conical_eqn_equivalent_transverse_eqn}) that the conical Calabi-Yau equation
\[ (dd^c r^2 )^{n+1} = dV_Y \]
is in fact equivalent to the following transverse equation on \( \mathcal{U} \cap S \):
\[ (\theta_X + d_B d_B^c \phi_X)^{n} \wedge \eta = e^{-(n+1) \phi_X } e^{(n+1)(\Psi_{+} - \Psi_{-})} \omega_B^{n} \wedge \eta \]
Here
\begin{itemize}
\item \( \theta_X := d \eta \),
\item \( \phi_X := \pi^{*} \phi, \; \phi := \log (r^2 / r_{\xi}^2) \),
\item \( \Psi_{\pm} \) are basic \( A \omega_B \)-quasi-psh on \( S \) for \( A > 0 \) large enough.
\end{itemize}
Remark that \( \theta_X \) is a semipositive and big form on \( S \). By construction, \( \phi_X \) is invariant under the induced actions of \( \xi \) and \( - J \xi \) on \( X \). In a foliation chart \( (z_1, \dots, z_n,x) \) of \( S \), the equation can be written as:
\[ \det \tuple{ \theta_{X,i\ol{j}} + \frac{\partial^2 \phi_X }{\partial z_i \partial \ol{z}_j }} = e^{-(n+1) \phi_X(z) } e^{(n+1)(\Psi_{+}(z) - \Psi_{-}(z))} \det( \omega_{B, i \ol{j}} ) \]
The smoothness of \( r^2 = r_{\xi}^2 e^{\phi} \) on \( Y_{\text{reg}} \) is then equivalent to the regularity of \( \phi_X := \pi^{*} \phi \) on \( S \cap \mathcal{U} \). Consider the family of equations:
\[ (\theta_X + \varepsilon \omega_B + d_B d_B^c \phi_{j, \varepsilon} )^{n} = e^{(n+1)(\psi_{+,j} - \psi_{-,j})} \omega_B^{n} \]
where \( \psi_{\pm,j} \) are two sequences of basic \( A \omega_B \)-qpsh functions decreasing to \( \psi_{+} := \Psi_{+} \) and \( \psi_{-} := \Psi_{-} + \phi_X \) for \( A > 0 \) large enough. The existence of a unique \( \phi_{j, \varepsilon} \) verifying \( \sup \phi_{j,\varepsilon} = 0 \) is guaranteed by the transverse Calabi-Yau theorem of \cite{EKA90}. Finally, to obtain the regularity of \( \phi_X \), we proceed by the following classic steps:
\begin{itemize}
\item[1)] \textit{Uniform estimate:} The functions \(\phi_{j,\varepsilon}\) are uniformly bounded, i.e. there exists a constant \( C \) independent of \( j \) and \( \varepsilon \), such that:
\[ \norm{\phi_{j, \varepsilon}}_{L^{\infty}(S)} \leq C \]
\item[2)] \textit{Laplacian uniform estimate:} Using the uniform estimate of the previous step, one can show that there exists \( C' \) such that for all \(j, \varepsilon \),
\[ \sup_{S \cap \mathcal{U}} \abs{\Delta_{\omega_B} \phi_{j,\varepsilon}} \leq C' \]
where
\[ \Delta_{\omega_B} f := n \frac{d_B d_B^c f \wedge \omega_B^{n-1} }{\omega_B^{n} } \]
\item[3)] By the complex Evans-Krylov theory, we obtain the following uniform estimate:
\[ \norm{ \phi_{j,\varepsilon}}_{C^{2,\beta}(S)} \leq C'', \]
which implies \( C^{k+2, \beta} \)-estimates for all \( k > 0 \) by Schauder estimates and a bootstrapping argument.
\end{itemize}
The last step is classical and well-documented in the literature (cf. \cite{Blo05}). Our focus will be mostly on the first and second steps (see Prop. \ref{linfty_estimate_pluripotential} and Prop. \ref{laplacian_estimate}).
\subsection{Transverse Kähler form}
Let \( V \) be an irreducible projective variety. Following \cite[Paragraph 3]{Kol07}, by a \textit{strong resolution} we mean a proper morphism \( \pi : V' \to V \) such that
\begin{itemize}
\item \( V' \) is smooth and \( \pi \) is birational.
\item \( \pi: \pi^{-1}(V_{\text{reg}}) \to V_{\text{reg}} \) is a biholomorphism.
\item \( \pi^{-1}(V_{\text{sing}}) \) is a divisor with simple normal crossings (s.n.c).
\end{itemize}
In the sense of \cite[Paragraph 4]{Kol07}, we say that a resolution is \textit{functorial} if for any varieties \(V,W\) with resolutions \( \pi_V: V' \to V\), \(\pi_W: W' \to W \), every smooth morphism \( \phi: V \to W \) can be lifted to a smooth morphism \( \phi': V' \to W' \) such that \( \pi_W \circ \phi' = \phi \circ \pi_V \).
\begin{lem} \label{resolution_singularities}
There exists a smooth \( T \)-equivariant resolution of singularities \( \pi: X \to Y \).
\end{lem}
\begin{proof}
Let us embed \( Y \) in a \( T \)-equivariant manner into \( \mathbb{C}^N \) such that \( T \) is identified with a diagonal group. Let \( \ol{Y} \subset \mathbb{P}^N \) be the closure of \( Y \) in \( \mathbb{P}^N \). There exists a \( T \)-equivariant resolution \( \pi : \ol{X} \to \ol{Y} \). Indeed, it is enough to take \( \pi \) to be a strong, functorial resolution in the sense of Kollár as recalled above (see \cite[Theorem 36]{Kol07} for a proof of existence).
The functoriality of the resolution implies that the action of any algebraic group on \( \ol{Y} \) lifts to \( \ol{X} \) in such a way that \( \pi \) is equivariant (see \cite[Paragraph 9]{Kol07}). We conclude that \(\pi: X := \ol{X} \cap \mathbb{C}^N \to Y \) is a \( T \)-equivariant resolution of \( Y \).
\end{proof}
Now let \( (X, \pi) \) be the resolution of \( Y \), constructed in the previous lemma. Let $E_0 := \pi^{-1}(0_Y)$ be the ``vertex exceptional divisor''. Since \( \pi \) is equivariant, the vector fields $\xi$ and $ - J \xi$ induce by pullback the respective actions on $X$ (still denoted by \( \xi \) and \( -J \xi \)). The action generated by \( - J \xi \) is an action of \( \mathbb{R}^{*}_{+} \).
The pullback by \( \pi \) of the holomorphic vector field \( v_{\xi} := (-J \xi - \sqrt{-1} \xi)/2 \) defines a holomorphic foliation \( \mathcal{F}_{v_{\xi}} \) on \( X \backslash E_0 \). At every point \( p \in X \backslash E_0 \), there exist \textit{transverse holomorphic coordinates} \((z_1,\dots,z_n, w) \) such that
\[ v_{\xi} . z_j = 0, \; \frac{\partial}{\partial \Im w } = \xi, \; \frac{\partial}{\partial \Re w} = (- J \xi) \]
which restrict to the coordinates \( (z,x) \) on \( S \). In other words, \( w = \pi^{*} \log r_{\xi} + \sqrt{-1} x \). A form \( \alpha \) on \( X \backslash E_0 \) is said to be \textit{basic} if
\[ \mathcal{L}_V \alpha = i_V \alpha = 0, \; \forall V \in \mathbb{R} \set{\xi, -J\xi} \]
The restriction map allows us to identify basic forms on \( X \backslash E_0 \) and basic forms on \( S \).
\begin{lem} \label{initial_kahler_form}
There exists a $T_c$-invariant Kähler form $\omega$ on $X$ and a global smooth function $\Phi_{\omega}$ defined on $\mathcal{U}$ such that
\begin{equation*}
dd^c \Phi_{\omega} = \omega, \; \Phi_{\omega} \to -\infty \; \text{near} \; \partial \mathcal{U}
\end{equation*}
\end{lem}
\begin{proof}
Let \( \pi: \ol{X} \to \ol{Y} \subset \mathbb{P}^N \) be the resolution as in the previous lemma. Let \( \mathcal{O}(1) \) be the \( T \)-linearized hyperplane line bundle of \( \mathbb{P}^N \) and \( E \) the exceptional divisor of \( (\ol{X}, \pi) \). Since \( \pi \) is a projective birational morphism, there exists an ample line bundle \( A \) on \( \ol{X} \) such that:
\[ \pi^{*} \mathcal{O}(1) = A + E \]
Now let \( \norm{.}_E \) be a \( T_c \)-invariant metric on \( \mathcal{O}_{\ol{X}}(E) \), with \( s_E \) the canonical section cutting out \( E = \set{s_E = 0} \), and let \( \phi_E := \log \norm{s_E}^2_E \). Let \( h_A \) be a \( T_c \)-invariant metric of strictly positive curvature on \( A \). Let \( h \) be the \( T_c \)-invariant metric \( h := h_A e^{-\phi_E} \) on \( \pi^{*} \mathcal{O}(1) \) and \( \Phi_{\omega} := - \log h \) the potential of \( h \).
Since \( X \) is contained in an open affine set \( \simeq \mathbb{C}^N \) of \( \mathbb{P}^N \), there exists a global trivializing \( T_c \)-invariant section \( s \) of the line bundle \( \pi^{*} \mathcal{O}(1)|_X \). The global form
\[ \omega := dd^c \Phi_{\omega}|_X = - dd^c \log h(s) \]
is clearly closed and positive definite. Indeed, since \( s \neq 0 \) in \( X \), we have
\[-dd^c \log h(s) = -dd^c \log h_A|_X > 0\]
Finally, since \( s_E \to 0 \) near \( \partial \mathcal{U} \), we have \( \phi_E \to -\infty \), hence \( \Phi_{\omega} = - \log h_A + \phi_E \to -\infty \) near \( \partial \mathcal{U} \).
\end{proof}
\begin{lem} \label{basic_form_asymptotic} \cite[Prop. 4.3]{Ber20}
There exists a global smooth function $\Phi_B$ on $\mathcal{U}$ satisfying
\[ \mathcal{L}_{\xi} \Phi_B = 0, \mathcal{L}_{-J \xi} \Phi_B = 2, \Phi_{B} \to -\infty \; \text{near} \; \partial \mathcal{U} \]
and a transverse basic Kähler form \( \omega_B \) on \( X \backslash E_0 \) such that \( dd^c \Phi_B = \omega_B \) on \( \mathcal{U} \).
\end{lem}
\begin{rmk}
The information on the behavior of \( \Phi_B \) near the boundary of \( \mathcal{U} \) is crucial for the Laplacian estimate of the potential \( \phi_X \).
\end{rmk}
\begin{proof}
The proof in \cite{Ber20} is an adaptation of the construction of reduced Kähler metrics on a symplectic quotient (see e.g. \cite[Formulae 4.5, 4.6]{BG04}). We provide here the details for the reader's convenience.
Remark, however, that in our case the symplectic quotient is not well defined, since the action generated by \( \xi \) on the level sets of the Hamiltonian is not free in general. Nevertheless, the construction still applies, since it is local in nature.
Let \( \omega \) be the \( T_c \)-invariant Kähler form on \( X \), constructed in Lem. \ref{initial_kahler_form}. Remark that the action generated by \( \xi \) is hamiltonian with respect to \( \omega\) (since by the embedding of \( Y \) into \( \mathbb{C}^N \), \( \xi \) is identified with a hamiltonian action on \( \mathbb{C}^N \)). It follows that there exists a smooth function \( \mathcal{H}: X \to \mathbb{R} \) such that:
\[ d \mathcal{H} (.) = - \omega( \xi, .) = g_{\omega} (- J \xi, .) \]
where \( g_{\omega} \) is the metric associated to \( \omega \).
In particular, \( d \mathcal{H} (- J \xi) > 0 \), so \( d_x \mathcal{H} \) is surjective for \( x \notin E_0 \). It follows that \( \mathcal{H} \) is a submersion away from \( E_0 \); hence, for \( \lambda > 0 \) sufficiently large,
\[ S_{\lambda} = \set{ \mathcal{H} = \lambda} \]
is a compact submanifold of \( X \backslash E_0 \), diffeomorphic to \( (X \backslash E_0)/ \mathbb{R}^{*}_{+} \). Now let
\[ \pi_{\lambda} : X \backslash E_0 \to S_{\lambda}, \quad i_{\lambda} : S_{\lambda} \to X \backslash E_0 \]
be the natural projection and inclusion. Let \( \Phi_{\omega} \) be the global potential on \( \mathcal{U} \) constructed in Lemma \ref{initial_kahler_form}. Let \( V_p \) be the neighborhood of a point \( p \in X \backslash E_0 \) with local transverse coordinates \( (z,w) \). Consider the following \( \xi \)-invariant function on \( S_{\lambda} \cap \mathcal{U} \):
\[ \Phi_{\lambda} := i^{*}_{\lambda} (\Phi_{\omega} - \lambda \Im w ) \]
The function
\begin{equation} \label{local_basic_potential_equation}
\Psi_B = \pi^{*}_{\lambda} \Phi_{\lambda} + \lambda \Im w = \pi^{*}_{\lambda} \Phi_{\omega}|_{S_{\lambda}} + \lambda(\Im w - i^{*}_{\lambda} \Im w )
\end{equation}
is then \( \xi \)-invariant and well-defined on \( V_p \). Indeed, let \( V_{p'} \) be another local transverse neighborhood of a point \( p' \in S_{\lambda} \cap V_p \). By the definition of \( w \), \( v_{\xi} (w-w') = 0 \), so there exists a basic transversely holomorphic function \( f(z) \) on \( V_p \cap V_{p'} \) such that \( w-w' = f(z) \). It follows that:
\[ \Im (w - w')|_{V_p \cap V_{p'}} = \Im(w-w')|_{S_{\lambda} \cap V_p \cap V_{p'}} = i^{*}_{\lambda} \Im(w-w') |_{V_p \cap V_{p'}} \]
By construction, we have
\( \mathcal{L}_{-J \xi} \Psi_B = \lambda\),
hence \( \Psi_B \) extends uniquely to a smooth function on \( \mathcal{U} \). The function
\[ \Phi_B := 2 (\Psi_B/ \lambda ) \]
satisfies \( \mathcal{L}_{\xi} \Phi_B = 0, \; \mathcal{L}_{-J \xi} \Phi_B = 2 \). We assert that the following global form on \( \mathcal{U} \)
\[ \omega_B := dd^c \Phi_B \]
defines a transverse Kähler metric on \( \mathcal{U} \). By a direct computation from the equation (\ref{local_basic_potential_equation}) as in \cite[Section 9]{BG04}, \( 2 \lambda^{-1} \omega \) is exactly \( \omega_B \) on \( S_{\lambda} \). After replacing \( \Phi_B \) with \( 2 \lambda^{-1} \Phi_{\omega} \) on each \( V_p \), we see that \( \omega_B \) extends to a transverse Kähler metric on \( X \backslash E_0 \).
It remains to show that \( \Phi_B \to -\infty \) on \( \partial \mathcal{U} \). Indeed, on \( \mathcal{U} \cap V_p \),
\( \Phi_B - 2 \lambda^{-1} \Phi_{\omega} = \Im w - i^{*}_{\lambda}(\Im w) \) for all \( p \in S_{\lambda} \). It follows that \( \Phi_B - 2 \lambda^{-1} \Phi_{\omega}\) is bounded on \( S_{\lambda} \cap \mathcal{U} \), so \( \Phi_B = (\Phi_B -2 \lambda^{-1} \Phi_{\omega}) + 2 \lambda^{-1} \Phi_{\omega} \to -\infty \) near \( \partial \mathcal{U} \) since \( \Phi_{\omega} \to -\infty \) near \( \partial \mathcal{U} \).
\end{proof}
Since \( X \) is a \( T \)-equivariant resolution of \( Y \) and \( Y \) has klt singularities, there exists a \( T_c \)-invariant divisor \( D \) such that:
\[ \pi^{*} K_Y = K_X + D, \; D = \sum_{a_j > -1} a_j D_j, \]
We have moreover a decomposition $D = D_{+} - D_{-}$ where:
\[ D_{+} := \sum_{a_j > 0} D_j, \; D_{-} := \sum_{a_j < 0}(-a_j) D_j \]
are two effective \( T_c \)-invariant \(\mathbb{Q}\)-divisors.
There exists then a \( T_c \)-invariant volume form \( dV_X \) on \( X \), two multivalued sections \( s_{\pm} \) and \( T_c \)-invariant hermitian metrics \( h^{\pm} \) on \( \mathcal{O}_X(D_{\pm}) \), such that:
\begin{equation} \label{volume_discrepancy}
\pi^{*} dV_Y = \norm{s_{+}}^2_{h^{+}} \norm{s_{-}}^{-2}_{h^{-}} dV_X
\end{equation}
To be precise, we may choose:
\[ \norm{s_{+}}^2_{h^{+}} := \prod_{a_j > 0} \abs{s_j}^{2a_j}_{h_j}, \quad \norm{s_{-}}^2_{h^{-}} := \prod_{a_j < 0 } \abs{s_j}^{-2a_j}_{h_j} \]
where the \( h_j \) are \( T_c \)-invariant hermitian metrics on the line bundles \( \mathcal{O}_{X} (D_j) \).
Up to a positive constant, we have the following volume form on \( S \):
\[ dV_X ( -J \xi, . ) = \omega_B^{n} \wedge \eta \]
\begin{lem} \label{volume_pullback}
There exist two basic quasi-psh \( T_c \)-invariant functions \( \Psi_{\pm} \) on \( S \), smooth on \( \mathcal{U} \), and a constant \( A > 0 \) such that on \( S \),
\[ \pi^{*} dV_Y ( -J \xi, .) = e^{(n+1)(\Psi_{+} - \Psi_{-} )} \omega_B^{n} \wedge \eta, \quad \frac{i}{2 \pi} \partial_B \overline{\partial}_B \Psi_{\pm} \geq -A \omega_B \]
Moreover, \( e^{-\Psi_{-}} \in L^p(S) \) for some \( p > 1 \).
\end{lem}
\begin{proof}
Assume that there exists a positive constant \( C > 0 \) satisfying:
\begin{equation} \label{section_norm_estimate}\frac{i}{2 \pi } \partial_B \overline{\partial}_B \log \norm{s_{\pm}}^2_{h^{\pm}|_S} \geq - C \omega_B \end{equation}
Then by choosing \( \Psi_{\pm} \) such that:
\[
(n+1) \Psi_{\pm} = \log \norm{s_{\pm}}^2_{h^{\pm}|_S} \in C_B^{\infty}(S)
\]
we obtain the equality between volume forms from (\ref{volume_discrepancy}) and the estimate of \( \partial_B \overline{\partial}_B \Psi_{\pm} \) follows immediately.
It remains to prove (\ref{section_norm_estimate}). By definition of \( s_{\pm} \) and \( \norm{.}_{h^{\pm}} \), in a transverse holomorphic chart of \( X \backslash E_0 \) with coordinates \( (z,w) \), there exist \( T_c \)-invariant local potentials \( \phi_{\pm} \) and holomorphic \( T_c \)-semi-invariant local functions \( f_{\pm} \) such that:
\[ \norm{s_{\pm}}_{h^{\pm}} = \abs{f_{\pm}(z,w)} e^{-\phi_{\pm}(z,w)} \]
In particular, there exist \( \lambda_{\pm} \in \mathbb{R} \) satisfying:
\[ \frac{\partial}{\partial \Im w} f_{\pm} = i \lambda_{\pm} f_{\pm} \]
After replacing \( f_{\pm} \) by \( f_{\pm} e^{-\lambda_{\pm} w} \), one can suppose that \( f_{\pm} \) are \( \xi \)-invariant (hence basic), so \( \overline{\partial}_B f_{\pm} = 0 \). It follows that \( f_{\pm} \) are transversely holomorphic, hence \( d_B d^c_B \log \abs{f_{\pm}(z,w)}^2 \geq 0 \), so locally:
\[ d_B d^c_B \log \norm{s_{\pm}}^2_{h^{\pm} |_S} \geq - C d_B d_B^c \phi_{\pm} \]
for some constant \( C \) depending only on the local open set. Moreover, since \( \omega_B \) is Kähler, one can find in a transverse neighborhood a constant \( A > 0 \) (which depends only on the neighborhood) such that
\[ d_B d^c_B \phi_{\pm} \leq A \omega_B \]
The compactness of \( S \) then completes the proof of (\ref{section_norm_estimate}). Finally, since \( Y \) has klt singularities, we have \( a_j > -1 \), and the \( D_j \) are simple normal crossing divisors; hence there exists \( p > 1 \) such that \( pa_j > -1 \) for all \( j \), so \( e^{-\Psi_{-}} \in L^p(S) \).
\end{proof}
\subsection{Transverse Monge-Ampère equation}
\begin{prop} \label{conical_eqn_equivalent_transverse_eqn}
The conical potential \( r^2 \) is a solution in the pluripotential sense of the equation:
\begin{equation}
(dd^c r ^2)^{n+1} = dV_Y
\end{equation}
on \( Y_{\text{reg}} \) if and only if \( \phi_X \) satisfies the following equation on \( S \cap \mathcal{U} \):
\begin{equation} \label{transversal_CY_eqn}
(\theta_X + d_B d_B^c \phi_X)^{n} \wedge \eta = e^{-(n+1) \phi_X} e^{(n+1)(\Psi_{+} - \Psi_{-})} \omega_B^{n} \wedge \eta
\end{equation}
In particular, in a transverse holomorphic neighborhood \( S \cap \mathcal{U} \),
\[ (\theta_X + d_B d^c_B \phi_X)^n = e^{-(n +1)\phi_X} e^{(n+1)(\Psi_{+} - \Psi_{-} )} \omega_B^{n} \]
\end{prop}
\begin{proof}
By definition \( \Phi = \log r ^2 \), hence:
\[ dd^c r^2 = e^{\Phi} (dd^c \Phi + d \Phi \wedge d^c \Phi ) = r^2 (dd^c \Phi + d \Phi \wedge d^c \Phi ) \]
in the current sense.
We have, since \( (d \Phi \wedge d^c \Phi)^2 = 0 \),
\[ ( dd^c \Phi + d \Phi \wedge d^c \Phi)^{n+1} = (dd^c \Phi)^{n+1} + (n+1) \, ( dd^c \Phi)^{n} \wedge d \Phi \wedge d^c \Phi \]
Indeed, in the transverse coordinates \( (z,w) \) on \( X \backslash E_0 \), the invariance properties of \( r^2 \) give \( \Phi = w + \ol{w} + f(z) \) for a basic function \( f \), hence
\[ \frac{\partial \Phi }{\partial w} = \frac{\partial \Phi}{\partial \ol{w} } = 1 \]
and \( dd^c \Phi = dd^c f(z) \) is a purely transverse form, so \( (dd^c \Phi)^{n+1} = 0 \). We absorb the resulting positive factor \( n+1 \) in \( dV_Y \).
It follows that
\begin{align*}
(dd^c r^2)^{n+1} = dV_Y \iff r^{2n + 2} (dd^c \Phi)^{n} \wedge d \Phi \wedge d^c \Phi = dV_Y
\end{align*}
Since
\[ \mathcal{L}_{\xi} \Phi = 0, \]
the restriction of \( \Phi \) to \( S \) is basic.
It follows that
\begin{align*}
(dd^c \Phi)^{n} \wedge d \Phi \wedge d^c \Phi &= \det \tuple{ \frac{\partial^2 \Phi} {\partial z_l \partial \ol{z}_m}} \bigwedge (i/2) dz_k \wedge d \ol{z}_k \wedge d \Phi \wedge d^c \Phi \\
& = (d d^c \Phi)^{n} \wedge (dw + d \ol{w}) \wedge (d^c w + d^c \ol{w}) \\
& = (d d^c \Phi)^{n} \wedge 2 d \Re w \wedge 2 d^c \Re w
\end{align*}
The conical Calabi-Yau equation then becomes
\begin{equation*}
r^{2n + 2} (dd^c \Phi)^{n} \wedge 2 d \Re w \wedge 2 d^c \Re w = dV_Y
\end{equation*}
By contracting the equality with \(- J \xi \), and using \( 2 d \Re w(-J\xi) = 1 \), we have:
\[ r^{2n + 2} (dd^c \Phi)^{n} \wedge 2 d^c \Re w = dV_Y(-J \xi, .) \]
By using \( dd^c \Phi = \theta + dd^c \phi = \theta + d_B d_B^c \phi \), \( 2 d^c \Re w = \eta \), Lem. \ref{volume_pullback} and the fact that \( r_{\xi}^2 = 1 \) on \( S = X \cap \pi^{-1}(\set{r_{\xi}^2 = 1}) \) (so that \( r^{2n+2} = e^{(n+1)\phi_X} \) there), we obtain by pullback the following equation on \( S \cap \mathcal{U} \):
\[ (\theta_X + d_B d^c_B \phi_X )^{n} \wedge \eta = e^{-(n +1)\phi_X} e^{(n+1)(\Psi_{+} - \Psi_{-} )} \omega_B^{n} \wedge \eta \]
Finally by applying \( i_{\xi} \) and using that \( \eta(\xi) = 1 \), the equation on \( S \cap \mathcal{U} \) becomes
\[ (\theta_X + d_B d^c_B \phi_X)^n = e^{-(n +1)\phi_X} e^{(n+1)(\Psi_{+} - \Psi_{-} )} \omega_B^{n} \]
The converse is proved in the same manner.
\end{proof}
\subsection{Uniform estimate}
Let \( \psi_{\pm,j} \) be two sequences of smooth basic quasi-psh functions which decrease to
\[ \psi_{+} := \Psi_{+}, \quad \psi_{-} := \Psi_{-} + \phi_X,\]
such that:
\begin{equation} \label{regularization}
\quad d_B d^c_B \psi_{\pm,j } \geq - C \omega_B
\end{equation}
for a uniform constant \( C \) independent of \( j \). Such sequences exist by virtue of Lem. \ref{regularization_theorem}.
Let \( \varepsilon > 0 \). Recall that the form \( \theta_X = \pi^{*} dd^c \log r_{\xi}^2 \) is semi-positive, big and basic, hence \( \theta_X + \varepsilon \omega_B \) is a transverse Kähler form. Consider the following equation on \( S \) for a smooth basic \( (\theta_X + \varepsilon \omega_B) \)-psh function \( \phi_{j,\varepsilon} \):
\begin{equation} \label{transversal_CY_eqn_perturbed}
\tuple{\theta_X + \varepsilon \omega_B + d_B d^c_B \phi_{j, \varepsilon}}^{n} \wedge \eta = e^{(n+1)( \psi_{+,j} - \psi_{-,j}) } \omega_B^{n} \wedge \eta
\end{equation}
By the transverse Calabi-Yau theorem of El Kacimi-Alaoui \cite[3.5.5]{EKA90}, for all \(j, \varepsilon \), there exists a unique basic solution satisfying:
\[\sup \phi_{j, \varepsilon} = 0 \]
Now let \( \mu_j \) be the smooth volume form \( e^{(n+1)(\psi_{+,j} - \psi_{-,j})} \omega_B^n \wedge \eta \) on \( S \). The following lemma is elementary:
\begin{lem}
Let \( \mu \) be an inner-regular positive Borel measure on \( S \). Then for every \( \xi \)-invariant Borel set \( E \subset S \),
\[ \mu(E) = \sup \set{\mu(K), K \subset E \; \text{compact}, \xi-\text{invariant}} \]
In particular, \( \mu_j \) satisfies this property.
\end{lem}
\begin{proof}
It is enough to show that for all \( j \in \mathbb{N}^{*} \), there exists a compact \( \xi \)-invariant \( K_j \) such that:
\[ \mu(E) \leq \mu(K_j) + \frac{1}{j} \]
By inner regularity of \( \mu \), there exists a compact \( C_j \subset E \) such that:
\[ \mu(E) \leq \mu(C_j) + 1/j \]
The idea is to average \( C_j \) by the action of \( T_{\xi} \). We define
\[ K_j := \cup_{ g \in T_{\xi} } g. C_j = T_{\xi} . C_j \]
For each \( j \), the set \( K_j \) is compact and \( \xi \)-invariant by construction. Moreover, \( K_j \subset E \) since \( g.C_j \subset g.E = E \) for all \( g \in T_{\xi} \). Finally, the fact that \( C_j \subset K_j\) implies \( \mu(C_j) \leq \mu(K_j) \). This completes our proof.
\end{proof}
We also have the important \textit{domination by capacity} property of the measures \( \mu_j \).
\begin{prop} \label{domination_by_capacity}
The measures \( \mu_{j} \) satisfy the \( \mathcal{H}(\alpha, A, \theta) \) condition for all \( \alpha \). Namely, for all \( \alpha > 0 \), there exists a constant \( A \) independent of \(j\) such that:
\[ \mu_j (E) \leq A Cap_{\theta}(E)^{1+\alpha} \]
for all \(\xi\)-invariant Borel subset \( E \subset S \).
\end{prop}
\begin{proof}
By inner regularity of \( \mu_j \), it is enough to establish the lemma for a compact \( \xi \)-invariant \( K \subset S \). Indeed, suppose that the inequality is true for all such \( K \), then for all Borel \( \xi \)-invariant set \( E \),
\begin{align*}
\mu_j(E) &= \sup \set{ \mu_j(K), K \subset E \; \text{compact}, \xi-\text{invariant} }\\
&\leq A \sup \set{ Cap_{\theta}(K)^{1+\alpha}, K \subset E \; \text{compact}, \xi-\text{invariant} } \\
&\leq A Cap_{\theta}(E)^{1+\alpha} \; (\text{by Prop. \ref{capacity_properties}(1))}
\end{align*}
We can suppose furthermore that \( K \) is non-pluripolar (otherwise \( \mu_j(K) = 0 \) and the inequality is then trivial).
Now let \( K \) be compact, \( \xi \)-invariant and non-pluripolar. Let \( p > 1 \) be as in Lemma \ref{volume_pullback}. By the Hölder inequality, we have:
\[ 0 \leq \mu_j(K) \leq \norm{f_j}_{L^p(\omega_B^n \wedge \eta )} \text{vol}_{\omega_B }(K)^{1/q} \]
where \( f_j := e^{(n+1)(\psi_{+,j} - \psi_{-,j})} \) is the density of \( \mu_j \) and \( 1/p + 1/q = 1 \). Since \( \psi_{+,j} \leq \psi_{+,1} \) and \( \psi_{-,j} \geq \psi_{-} \), the function \( f_j \) is dominated in \( L^p \) by \( e^{(n+1)(C - \psi_{-})} \), where \( C := \sup_S \psi_{+,1} \). It follows that the norm \( \norm{f_j}_{L^p(\omega_B^n \wedge \eta)} \) is uniformly bounded, therefore it is enough to show that
\[ \text{vol}_{\omega_B}(K) \leq C \exp \tuple{-\gamma \, Cap_{\theta}(K)^{-1/n}} \]
where \( C = C(\theta, \omega_B) \), \( \gamma = \gamma(\theta) \) are constants independent of \( j \). The conclusion then follows from the elementary inequality \( \exp(-x^{-\beta}) \leq A_{\alpha,\beta}\, x^{\alpha} \), valid for all \( x \in (0,1] \) and \( \alpha, \beta > 0 \).
By Theorem \ref{skoda_integrability}, for \( \gamma := 2 / (\nu(\set{\theta}) + 1 ) \), there exists a constant \( C = C(\theta, \omega_B) \) such that:
\[ \sup_{\psi \in \mathcal{F}_0} \int_S \exp( - \gamma \psi) \omega_B^n \wedge \eta \leq C \]
In particular, for \( \psi := V^{*}_{K,\theta} - M_{K,\theta} \) (recall that \( M_{K,\theta} = \sup V^{*}_{K,\theta} \)), we obtain
\[ \int_{S} \exp(- \gamma V^{*}_{K,\theta} ) \omega_B^n \wedge \eta \leq C \exp(-\gamma M_{K,\theta} ) \]
Note that \( V^{*}_{K,\theta} \) is well defined thanks to the \( \xi \)-invariance of \( K \). Finally, since \( V^{*}_{K,\theta} \leq 0 \) \( \mu_{\omega_B} \)-a.e. on \( K \), we have
\[ \text{vol}_{\omega_B}(K) \leq C \exp(-\gamma M_{K,\theta} ) \]
An application of Lemma \ref{capacity_extremal_comparison} then completes our proof.
\end{proof}
Let us first establish some more useful lemmas before proving the uniform estimate.
\begin{lem} \label{capacity_estimate}
Let \(u \in PSH(S, \xi, \theta) \cap L^{\infty}(S) \) be a negative function. For all \( s \geq 0 \), \( 0 \leq t \leq 1 \),
\[ t^n Cap_{\theta} ( u < -s - t) \leq \int_{\set{u < -s}} \theta_u^{n} \wedge \eta \]
\end{lem}
\begin{proof}
Let \( v \in PSH(S,\xi,\theta) \), \( 0 \leq v \leq 1 \). Then
\[ \set{ u < - s -t} \subset \set{u \leq tv-s-t} \subset \set{u < -s} \]
Since \( \text{MA}_{\theta}(tv) = \tuple{(1-t)\theta + t\,\theta_v}^n \wedge \eta \geq t^n \, \text{MA}_{\theta}(v) \), the above inclusions give
\[ \int_{\set{u < -s-t}} \text{MA}_{\theta} (v) \leq \int_{\set{u \leq tv - s - t}} \text{MA}_{\theta}(v) \leq t^{-n} \int_{\set{u \leq tv - s - t}} \text{MA}_{\theta} (tv) \]
Applying the comparison principle (Prop. \ref{comparison_principle}) to the functions \( u + s + t \) and \( tv \),
\[ t^{-n} \int_{\set{u \leq tv - s - t}} \text{MA}_{\theta} (tv) \leq t^{-n} \int_{\set{u \leq tv - s - t}} \text{MA}_{\theta}(u) \leq \int_{\set{u < -s}} \text{MA}_{\theta}(u) \]
which completes the proof.
\end{proof}
\begin{lem} \cite[Lem. 2.4]{EGZ} \label{vanishing_capacity}
Let \( f : \mathbb{R}^{+} \to \mathbb{R}^{+} \) be a right-continuous decreasing function such that \( \lim_{s \to +\infty} f(s) = 0 \). If \( f \) satisfies the condition
\[H(\alpha,B), \quad t f(s+t) \leq Bf(s)^{1 + \alpha}, \; \forall s \geq 0, 0 \leq t \leq 1 \]
then there exists \( s_0 = s_0(\alpha, B) \) such that \( f(s) = 0 \), \( \forall s \geq s_0 \).
\end{lem}
\begin{prop} \label{linfty_estimate_pluripotential}
There exists a uniform constant \( C \) such that:
\[ \norm{\phi_{j,\varepsilon}}_{L^{\infty}(S)} \leq C \]
\end{prop}
\begin{proof}
Let \( f(s) := Cap_{\theta}( \phi_{j,\varepsilon} < - s )^{1/n} \). It is clear that \( f : \mathbb{R}^{+} \to \mathbb{R}^{+} \) is right-continuous, and \( \lim_{s \to + \infty} f(s) = 0 \) (cf. Prop. \ref{capacity_properties}). Moreover, \( f \) is decreasing: \( \set{\phi_{j,\varepsilon} < -t} \subset \set{\phi_{j,\varepsilon} < -s} \) for all \( t > s \), hence \( f(t) \leq f(s) \). By Lem. \ref{capacity_estimate} and the fact that the \( \mu_j \) satisfy \( \mathcal{H}(\alpha,A,\theta) \), \( f \) satisfies the condition \( H( \alpha, B) \) with \( B = A^{1/n} \). Indeed,
\begin{align*}
t^n f(s+t)^n &\leq t^n Cap_{\theta + \varepsilon \omega_B} ( \phi_{j,\varepsilon} < - s -t) \\
&\leq \int_{\set{ \phi_{j,\varepsilon} < -s}} (\theta + \varepsilon \omega_B + d_B d_B^c \phi_{j,\varepsilon} )^n \wedge \eta \\
&= \int_{\set{ \phi_{j,\varepsilon} < -s}} \mu_j \leq A Cap_{\theta}( \phi_{j,\varepsilon} < -s)^{1+\alpha} = A f(s)^{n(1+\alpha)}
\end{align*}
The first inequality follows from Prop. \ref{capacity_properties}, the second is direct from Lem. \ref{capacity_estimate}, while the last one is a consequence of Prop. \ref{domination_by_capacity}.
Now let \( \omega_{\varepsilon} := \theta_X + \varepsilon \omega_B \). For \( \varepsilon \leq 1 \), there exists \( \delta = \delta(S) \geq 1 \) such that \( \omega_{\varepsilon} \leq \delta \omega_B \). In particular, \( \phi_{j,\varepsilon} \in PSH^{-}(S, \xi, \delta \omega_B) \). Again by Prop. \ref{capacity_properties},
\begin{align*}
f(s)^n
&\leq Cap_{\delta \omega_B} ( \phi_{j,\varepsilon} < -s) \\
&\leq \frac{\delta^n}{s} \tuple{ \int_S (-\phi_{j,\varepsilon}) \omega_B^{n} \wedge \eta + n \text{vol}_{\omega_B}(S) }
\end{align*}
But by (\ref{uniform_boundedness}) of Lem. \ref{compacity_properties}:
\[ \int_S - \phi_{j,\varepsilon} d \mu_{\omega_B} \leq -\sup \phi_{j,\varepsilon} + C(\omega_B) = C(\omega_B) \]
Therefore, \( f(s) \leq (C_1/s^{1/n}) \), where \( C_1 = C_1( \omega_B, \theta_X) \). We can then apply Lem. \ref{vanishing_capacity} to select \( s_0 = s_0(n,\alpha, A, \omega_B, \theta_X) \) as in \cite[Lemma 2.3, Theorem 2.1]{EGZ} such that:
\[ Cap_{\theta_X}(\phi_{j,\varepsilon} < -s) = 0, \; \forall s \geq s_0 \]
In particular, \( \mu_j ( \phi_{j,\varepsilon} < -s_0) = 0 \) by Lem. \ref{domination_by_capacity}. Hence \( \phi_{j,\varepsilon} \geq -s_0 \) on \( S \), so there exists \( C = C(n, \alpha, A, \omega_B, \theta_X) \) such that:
\[ \norm{\phi_{j,\varepsilon}}_{L^{\infty}(S)} \leq C \]
\end{proof}
\subsection{Laplacian estimate}
We will need the transverse version of the Yau-Aubin inequality, obtained by Siu for two cohomologous forms \cite{Siu87}; the proof generalizes to any pair of Kähler forms. Let
\[ \Delta_{\omega'_B} := \operatorname{Tr}_{\omega'_B} d_B d_B^c \]
be the Laplacian associated to the transverse Kähler form \( \omega_B' \).
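In a foliation chart where \( \omega'_B = g'_{j \ol{k}} \sqrt{-1} dz^{j} \wedge d\ol{z}^{k} \) (cf. the Appendix), this is the familiar local expression
\[ \Delta_{\omega'_B} u = g'^{j \ol{k}} \partial_j \overline{\partial}_k u \]
for basic functions \( u \).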
\begin{lem} \label{transversal_yau_aubin_inequality}
For every transverse Kähler form \( \omega_B' \), there exists a constant \( \kappa \), depending only on the transverse bisectional curvature of \( \omega_B \), such that:
\[ \Delta_{\omega'_B} \log \operatorname{Tr}_{\omega_B} \omega_B' \geq - \kappa \operatorname{Tr}_{\omega_B'} \omega_B - \frac{\operatorname{Tr}_{\omega_B} \text{Ric} (\omega_B')}{\operatorname{Tr}_{\omega_B} \omega_B' }\]
where \( \text{Ric} (\omega'_B) \) is the transverse Ricci curvature.
\end{lem}
\begin{proof}
On each foliation chart, the transverse Kähler forms depend only on the \( z \)-coordinates. The inequality thus follows from the purely local proof in the compact Kähler case. The reader may consult Appendix \ref{appendice_yau_aubin_transverse} for a proof.
\end{proof}
The following proposition gives an \textit{a priori} Laplacian estimate for the solution \( \phi_{j, \varepsilon} \) of equation (\ref{transversal_CY_eqn_perturbed}). We follow the arguments of \cite[Appendix B]{BBEGZ}.
In this section, by a \textit{uniform constant}, we mean a constant independent of the \( j, \varepsilon \) parameters.
\begin{prop} \label{laplacian_estimate}
Let \( \psi := \Phi_B - r_{\xi}^2 \) and \(
\omega_{\varepsilon} = \theta _X + \varepsilon \omega_B, \; \omega'_{\varepsilon} := \omega_{ \varepsilon} + d_B d_B^c \phi_{j, \varepsilon} \).
There exist uniform constants \( C_1, C_2 \) such that:
\[ \operatorname{Tr}_{\omega_{\varepsilon}} \omega'_{\varepsilon} \leq C_2 e^{-C_1 \psi - (n+1)\psi_{-,j} } \leq C_2 e^{-C_1 \psi - (n+1)\psi_{-} } \quad \text{on } S \cap \mathcal{U} \]
In particular, there exists a uniform constant \( C_3 \) such that
\[ \abs{\Delta_{\omega_B} \phi_{j,\varepsilon}} \leq C_3 e^{-C_1 \psi - (n+1)\psi_{-} } \quad \text{on } S \cap \mathcal{U} \]
\end{prop}
\begin{proof}
The function \( \psi \) is clearly basic \( \theta_X \)-psh and \( \psi \to -\infty \) near \( \partial \mathcal{U} \) by the construction of \( \Phi_B \) in Prop. \ref{basic_form_asymptotic}. Moreover, \( \omega_B |_{\mathcal{U}} = (\theta_X + dd^c \psi) |_{\mathcal{U}} \) is the restriction into \( \mathcal{U} \) of the transverse Kähler form \( \omega_B \), constructed on \( X \backslash E_0 \).
Consider the following smooth function on \( S \cap \mathcal{U} \):
\[ h := \log ( \operatorname{Tr}_{\omega_{\varepsilon}} \omega_{\varepsilon}' ) + (n+1) \psi_{-,j} - A_1 ( \phi_{j, \varepsilon} - \psi) \]
where \( A_1 := A_1(\kappa) \) is a sufficiently large constant depending on \( \kappa \). The compactness of \( S \) and the \( L^\infty \)-estimate in Prop. \ref{linfty_estimate_pluripotential}, combined with the transverse Yau-Aubin inequality in Lem. \ref{transversal_yau_aubin_inequality}, are all the ingredients we need to repeat the arguments of \cite[Appendix B]{BBEGZ} to conclude.
For the reader's convenience, we provide here some details of the proof. By the transverse Yau-Aubin inequality, we have on \( S \cap \mathcal{U} \):
\[ \Delta_{\omega'_{\varepsilon}} h \geq \operatorname{Tr}_{\omega'_{\varepsilon}}(\omega_{\varepsilon}) - A_2 \]
where \( A_2 \) depends only on \( A_1 \) and \( n \). Since \( \phi_{j,\varepsilon} \) is uniformly bounded and \( \psi \to -\infty \) near \( \partial (S \cap \mathcal{U}) \), \( h \) attains its maximum at some \(x_0 \in S \cap \mathcal{U} \). It follows from the maximum principle that
\[ 0 \geq \Delta_{\omega'_{\varepsilon}} h(x_0) \geq \operatorname{Tr}_{\omega'_{\varepsilon}}(\omega_{\varepsilon})(x_0) - A_2 \]
By elementary local computations, as in the compact Kähler case, we obtain the following inequality for two transverse Kähler forms:
\[ \operatorname{Tr}_{\omega_{\varepsilon}}(\omega'_{\varepsilon}) \leq n \frac{(\omega'_{\varepsilon})^{n}}{\omega_{\varepsilon}^{n}} (\operatorname{Tr}_{\omega'_{\varepsilon}}(\omega_{\varepsilon}))^{n} = n e^{(n+1)(\psi_{+,j} - \psi_{-,j})} (\operatorname{Tr}_{\omega'_{\varepsilon}}(\omega_{\varepsilon}))^{n} \]
Taking the logarithm of both sides gives
\[ \log (\operatorname{Tr}_{\omega_{\varepsilon}}\omega'_{\varepsilon}) \leq \log(n) + (n+1)(\psi_{+,j} - \psi_{-,j}) + n \log ( \operatorname{Tr}_{\omega'_{\varepsilon}} \omega_{\varepsilon} ) \]
hence by definition of \( h\),
\[ h \leq \log(n) + (n + 1) \psi_{+,j} + n \log( \operatorname{Tr}_{\omega'_{\varepsilon}} \omega_{\varepsilon} ) - A_1(\phi_{j,\varepsilon} - \psi) \]
Therefore
\[ \sup_{S \cap \mathcal{U}} h \leq h(x_0) \leq A_3 - A_1 \inf_{S \cap \mathcal{U} } (\phi_{j,\varepsilon} - \psi) \leq A_3 - A_1 \inf_{S \cap \mathcal{U} } \phi_{j,\varepsilon} \]
where \( A_3 \) is a uniform constant since \( \psi_{+,j} \) and \( \operatorname{Tr}_{\omega'_{\varepsilon}} \omega_{\varepsilon}(x_0) \) are both uniformly bounded.
As a consequence, there exists a uniform constant \( A_4 \) such that:
\[ h = \log ( \operatorname{Tr}_{\omega_{\varepsilon}} \omega_{\varepsilon}' ) + (n+1) \psi_{-,j} - A_1 ( \phi_{j, \varepsilon} - \psi) \leq A_4 \]
which leads to:
\[ \operatorname{Tr}_{\omega_{\varepsilon}} \omega_{\varepsilon}' \leq e^{-(n+1)\psi_{-,j}} e^{A_1(\phi_{j,\varepsilon} - \psi)} e^{A_4} \]
hence there exist uniform constants \( A_1, A_5 \), depending only on the constant \( C \) in inequality (\ref{regularization}), on \( \kappa \), and on the bound in the \( L^{\infty} \)-estimate of Prop. \ref{linfty_estimate_pluripotential}, such that:
\[ \operatorname{Tr}_{\omega_{\varepsilon}} \omega'_{\varepsilon} \leq A_5 e^{-A_1 \psi - (n+1)\psi_{-,j}} \leq A_5 e^{-A_1 \psi - (n+1)\psi_{-}} \quad \text{on } S \cap \mathcal{U} \]
For the estimate of \( \Delta_{\omega_B} \phi_{j, \varepsilon} \), we make the following remark. By compactness of \( S \), there exists a sufficiently large uniform constant \( \delta \) such that
\[ \omega_{\varepsilon} = \theta + \varepsilon \omega_B \leq \delta \omega_B \]
hence
\[ \operatorname{Tr}_{\omega_B}(\cdot) \leq \delta \operatorname{Tr}_{\omega_{\varepsilon}}(\cdot) \]
But since
\[ \operatorname{Tr}_{\omega_{\varepsilon} } ( \omega_{\varepsilon} + d_Bd_B^c \phi_{j, \varepsilon} ) = n + \Delta_{\omega_{ \varepsilon}} \phi_{j, \varepsilon} \leq A_5 e^{-A_1 \psi - (n+1)\psi_{-}} \quad \text{on } S \cap \mathcal{U}, \]
this completes our proof.
\end{proof}
\subsection{Conclusion}
\begin{proof}[Proof of the main Theorem]
By using the \( L^{\infty} \)-estimate in Prop. \ref{linfty_estimate_pluripotential} and the transverse Yau-Aubin inequality of Lem. \ref{transversal_yau_aubin_inequality}, we obtained in Prop. \ref{laplacian_estimate} the estimate of \( \Delta_{\omega_B} \phi_{j, \varepsilon} \).
As a consequence, \( \Delta_{\omega_B} \phi_{j, \varepsilon} \) is locally uniformly bounded on \( S \cap \mathcal{U} \), since \( \psi_{-} := \Psi_{-} + \phi_X \) is locally bounded by our assumption. It follows that there exists a subsequence \( \phi_{j, \varepsilon(j)} \) converging in \( C^{1} \) on \( S \cap \mathcal{U} \) to
\[ \phi_0 \in L^{\infty}(S \cap \mathcal{U}), \Delta_{\omega_B} \phi_0 \in L^{\infty}_{\text{loc}}(S \cap \mathcal{U}) \]
which is a solution of
\begin{equation}
(\theta + d_Bd_B^c \phi_0)^{n} \wedge \eta = e^{-(n+1) \phi_X } \pi^{*} dV_Y(-J\xi,.)
\end{equation}
on \( S \cap \mathcal{U} \). The equation admits a unique solution up to an additive constant (cf. Prop. \ref{uniqueness}), hence:
\[ \phi_0 = \phi_X + c \]
which implies that \( \Delta_{\omega_B} \phi_X \) is locally bounded. This allows us to obtain a \( C^{2,\alpha} \)-estimate of \( \phi_X \), as well as higher-order estimates, using Schauder estimates and the complex Evans-Krylov theory as in \cite[5.3, p.210]{Blo05}; hence \( \phi_X \) is smooth on \( S \cap \mathcal{U} \).
By definition,
\( r^2 = r_{\xi}^2 e^{\phi} \) and \( \phi_X = \pi^{*} \phi \). Using the invariance under the \( \mathbb{R}_{ > 0} \)-action generated by \( -J\xi \), we conclude that \( \phi_X = \phi \circ \pi \) is actually smooth on \( \mathcal{U} \), hence \( \phi \) is smooth on \( Y_{\text{reg}} \). In particular, \( r^2 \) is smooth on \( Y_{\text{reg}} \).
\end{proof}
\section{Appendix: Transverse Yau-Aubin inequality} \label{appendice_yau_aubin_transverse}
In the sequel, we will use the summation convention. Let \( \omega_B , \omega_B' \) be two transverse Kähler forms on \( S \). Let \( (z,x) \) be the coordinates on a foliation chart of \( S \) such that:
\[ \omega_B = g_{ j \ol{k}} \sqrt{-1} dz^{j} \wedge d \ol{z}^k, \quad \omega'_B = g'_{ j \ol{k}} \sqrt{-1} dz^{j} \wedge d \ol{z}^k \]
After choosing a normal transverse holomorphic chart, one can suppose that \( g_{j \ol{k}} = \delta_{jk} \) and that \( \omega'_B \) is diagonal. Let \( ( g^{j \ol{k}}) \) denote the inverse of \( ( g_{j \ol{k}} ) \). We have:
\[ \operatorname{Tr}_{\omega_B} \omega_B' = g^{j \ol{j}} g'_{j \ol{j}} = \sum_{j}g'_{j \ol{j}}, \quad \operatorname{Tr}_{\omega_B'} \omega_B = g'^{j \ol{j}} g_{j \ol{j}} = \sum_{j} g'^{j \ol{j}} \]
Denote
\[ \partial_j := \frac{\partial}{\partial z_j}, \; \overline{\partial}_k := \frac{\partial}{\partial \ol{z}_k}, \; \partial_j \overline{\partial}_k := \frac{\partial^2}{ \partial z_j \partial \ol{z}_k} \]
\begin{lem} We have the following inequality:
\[ g'^{p \ol{p}} (\partial_p g'_{a \ol{a}}) (\overline{\partial}_p g'_{b \ol{b}}) \leq (\operatorname{Tr}_{\omega_B} \omega_B') \sum_{p,a,j} g'^{p \ol{p}} g'^{a \ol{a}} \abs{\partial_p g'_{a \ol{j}} }^2\]
\end{lem}
\begin{proof}
The lemma follows from repeated applications of the Cauchy-Schwarz inequality:
\begin{align*}
\sum_{p,a,b} g'^{p \ol{p}} (\partial_p g'_{a \ol{a}})(\overline{\partial}_p g'_{b \ol{b}}) & \leq \sum_{a,b} ( g'^{p \ol{p}} \abs{\partial_p g'_{a \ol{a}} }^2 )^{1/2} ( g'^{p \ol{p}} \abs{ \overline{\partial}_p g'_{b \ol{b}}}^2 ) ^{1/2} \\
&= ( \sum_{a} (\sum_p g'^{p \ol{p}} \abs{\partial_p g'_{a \ol{a}}}^2 )^{1/2} )^2 \\
&= ( \sum_a \sqrt{g_{a \ol{a}}'} ( \sum_p g'^{p \ol{p}}g'^{a \ol{a}} \abs{ \partial_p g'_{a \ol{a}} }^2 )^{1/2} )^2 \\
&\leq (\sum_{a} g'_{a \ol{a}})( \sum_{p,a} g'^{p \ol{p}} g'^{a \ol{a}} \abs{\partial_p g'_{a \ol{a}}}^2 ) \\
& \leq (\operatorname{Tr}_{\omega_B} \omega_B') ( \sum_{p,a,j} g'^{p \ol{p}} g'^{a \ol{a}} \abs{\partial_p g'_{a \ol{j}}}^2 )
\end{align*}
\end{proof}
Recall the statement of the transverse Yau-Aubin inequality:
\begin{lem}
\[ \Delta_{\omega'_B} \log \operatorname{Tr}_{\omega_B} \omega_B' \geq - \kappa \operatorname{Tr}_{\omega_B'} \omega_B - \frac{\operatorname{Tr}_{\omega_B} \text{Ric} (\omega_B')}{\operatorname{Tr}_{\omega_B} \omega_B' }\]
\end{lem}
\begin{proof}
We have:
\begin{align*}
\Delta_{\omega'_B} \log \operatorname{Tr}_{\omega_B} \omega'_B &= \frac{\Delta_{\omega'_B} \operatorname{Tr}_{\omega_B} \omega_B'} {\operatorname{Tr}_{\omega_B} \omega'_B} - g'^{p \ol{q}} \frac{(\overline{\partial}_q \operatorname{Tr}_{\omega_B} \omega'_B) (\partial_p \operatorname{Tr}_{\omega_B} \omega'_B) }{ (\operatorname{Tr}_{\omega_B} \omega_B')^2 } \\
&= \frac{\Delta_{\omega'_B} \operatorname{Tr}_{\omega_B} \omega_B'} {\operatorname{Tr}_{\omega_B} \omega'_B} - \frac{ g'^{p \ol{p}} (\partial_p g'_{a \ol{a}}) (\overline{\partial}_p g'_{b \ol{b}})}{ (\operatorname{Tr}_{\omega_B} \omega_B')^2}
\end{align*}
By definition,
\begin{align*}
\Delta_{\omega'_B} \operatorname{Tr}_{\omega_B} \omega_B' &= g'^{p \ol{q}} ( \partial_p \overline{\partial}_q g^{j \ol{k}} ) g'_{j \ol{k}} + g'^{p \ol{q}} g^{j \ol{k}} \partial_p \overline{\partial}_q g'_{j \ol{k}} \\
&= g'^{p \ol{q}} ( \partial_p \overline{\partial}_q g^{j \ol{k}} ) g'_{j \ol{k}} - g'^{p \ol{q}} g^{j \ol{k}} R'_{j \ol{k} p \ol{q}} + g'^{p \ol{q}} g^{j \ol{k}} g'^{a \ol{b}} ( \partial_p g'_{j \ol{b}} )( \overline{\partial}_q g'_{a \ol{k}})
\end{align*}
where \( R'_{j \ol{k} p \ol{q}} \) is the local expression of the transverse curvature form of \( \omega_B' \). Let us estimate the three terms of the expression above.
\begin{itemize}
\item Since \( \omega_B \) and \( \omega_B' \) are diagonal, we have for the first term:
\[ g'^{p \ol{q}} ( \partial_p \overline{\partial}_q g^{j \ol{k}} ) g'_{j \ol{k}} = g'^{p \ol{p}} ( \partial_p \overline{\partial}_p g^{j \ol{j}} ) g'_{j \ol{j}} \geq - \kappa (\operatorname{Tr}_{\omega_B} {\omega'_B}) (\operatorname{Tr}_{\omega_B'} \omega_B) \]
where \( -\kappa \) is a lower bound for the transverse bisectional curvature of \( \omega_B \) (which exists since \( S \) is compact).
\item In the second term, \( g'^{p \ol{q}} R_{j \ol{k} p \ol{q}} = R'_{j \ol{k}} \), where \(R'_{j \ol{k}} \) is the local expression of the transverse Ricci-form \( \text{Ric}(\omega'_B) \).
\item
For the third term, we have:
\[ g'^{p \ol{q}} g^{j \ol{k}} g'^{a \ol{b}} ( \partial_p g'_{j \ol{b}} )( \overline{\partial}_q g'_{a \ol{k}}) = g'^{p \ol{p}} g'^{a \ol{a}} \abs{\partial_p g'_{a \ol{j}} }^2 \]
\end{itemize}
It follows that
\[ \Delta_{\omega'_B} \operatorname{Tr}_{\omega_B} \omega_B' \geq - \kappa \operatorname{Tr}_{\omega_B} {\omega'_B} \operatorname{Tr}_{\omega_B'} \omega_B - g^{j \ol{k}} R'_{j \ol{k}} + \sum_{p, a, j} g'^{p \ol{p}} g'^{a \ol{a}} \abs{\partial_p g'_{a \ol{j}} }^2 \]
hence
\begin{align*}
\Delta_{\omega'_B} \log \operatorname{Tr}_{\omega_B} \omega'_B & \geq - \kappa \operatorname{Tr}_{\omega'_B} \omega_B - \frac{\operatorname{Tr}_{\omega_B} \text{Ric} (\omega_B') } {\operatorname{Tr}_{\omega_B} \omega_B'} \\
&+ \frac{\sum_{p,a,j} g'^{p \ol{p}} g'^{a \ol{a}} \abs{\partial_p g'_{a \ol{j}} }^2}{\operatorname{Tr}_{\omega_B} \omega'_B} - \frac{g'^{p \ol{p}} (\partial_p g'_{a \ol{a}}) (\overline{\partial}_p g'_{b \ol{b}})}{(\operatorname{Tr}_{\omega_B} \omega'_B)^2} \\
& \geq - \kappa \operatorname{Tr}_{\omega'_B} \omega_B - \frac{\operatorname{Tr}_{\omega_B} \text{Ric} (\omega_B') } {\operatorname{Tr}_{\omega_B} \omega_B'}
\end{align*}
by the previous lemma.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
Let $n\in {\rm I\hspace{-0.2em}N}$ with $n\ge 2$ and suppose $\Omega$ is a bounded, open set in ${\rm I\hspace{-0.2em}R}^{n}$ with locally Lipschitz boundary $\partial\Omega.$
Fix $H\in C^{2}\left({\rm I\hspace{-0.2em}R}^{n}\times{\rm I\hspace{-0.2em}R}\right)$ such that $H$ is bounded and $H(x,t)$ is nondecreasing in $t$ for $x\in\Omega.$
Consider the prescribed mean curvature Dirichlet problem of finding a function $f\in C^{2}\left(\Omega\right)\cap C^{0}\left(\overline{\Omega}\right)$
which satisfies
\begin{eqnarray}
\label{ONE-A}
{\rm div}\left( Tf\right) & = & H(x,f) \ \ \ \ \ {\rm in} \ \ \Omega, \\
f & = & \phi \ \ \ \ \ {\rm on} \ \ \partial\Omega,
\label{ONE-B}
\end{eqnarray}
where $Tf= \frac{\nabla f}{\sqrt{1+\left|\nabla f\right|^{2}}}$ and $\phi\in C^{0}\left( \partial \Omega\right)$ is a prescribed function;
such a function $f,$ if it exists, is a classical solution of the Dirichlet problem.
It has long been known (e.g. Bernstein in 1912) that some type of boundary curvature condition (which depends on $H$) must be satisfied
in order to guarantee that a classical solution exists for each $\phi\in C^{0}\left( \partial \Omega\right)$
(e.g. \cite{JenkinsSerrin,Serrin}).
When $H\equiv 0$ and $\partial\Omega$ is smooth, this curvature condition is that $\partial\Omega$ must have nonnegative mean
curvature (with respect to the interior normal direction of $\Omega$) at each point (\cite{JenkinsSerrin}).
However, Leon Simon (\cite{Simon}) has shown that if $\Gamma_{0}\subset \partial\Omega$ is smooth (i.e. $C^{4}$), the mean curvature $\Lambda$
of $\partial\Omega$ is negative on $\Gamma_{0}$ and $\Gamma$ is a compact subset of $\Gamma_{0},$
then the minimal hypersurface $z=f(x),$ $x\in\Omega,$ extends to $\Omega\cup \Gamma$ as a continuous function, even though $f$ may not equal $\phi$ on $\Gamma.$
Since \cite{Simon} appeared, the requirement that $H\equiv 0$ has been eliminated and the conclusion remains similar to that which Simon reached
(see, for example, \cite{Bour, LauLin, Lin}).
How important is the role of boundary smoothness in the conclusions reached in \cite{Simon}?
We shall show, by constructing suitable domains $\Omega$ and Dirichlet data $\phi,$
that the existence of a ``nonconvex corner'' $P$ in $\Gamma$ can cause the unique generalized (e.g. variational) solution
to be discontinuous at $P$ even if $\Gamma\setminus \{P\}$ is smooth and the generalized mean curvature $\Lambda^{*}$
(i.e. \cite{Serrin}) of $\Gamma$ at $P$ is $-\infty$; this shows that some degree of smoothness of
$\Gamma$ is required to obtain the conclusions in \cite{Simon}.
We shall prove the following
\begin{thm}
\label{Main Theorem}
Let $n\in {\rm I\hspace{-0.2em}N},$ $n\ge 2,$ and assume there exists $\lambda>0$ such that $|H(x,t)|\le \lambda$ for $x\in{\rm I\hspace{-0.2em}R}^{n}$ and $t\in{\rm I\hspace{-0.2em}R}.$
Then there exist a domain $\Omega\subset{\rm I\hspace{-0.2em}R}^{n}$ and a point $P\in\partial\Omega$ such that
\begin{itemize}
\item[(i)] $\partial\Omega\setminus\{P\}$ is smooth ($C^{\infty}$),
\item[(ii)] there is a neighborhood ${\cal N}$ of $P$ such that $\Lambda(x)<0$ for $x\in {\cal N}\cap \partial\Omega\setminus\{P\},$ where $\Lambda$
is the mean curvature of $\partial\Omega,$ and
\item[(iii)] $\Lambda^{*}(P)=-\infty,$ where $\Lambda^{*}$ is the generalized mean curvature of $\partial\Omega,$
\end{itemize}
and there exists Dirichlet boundary data $\phi\in C^{\infty}\left({\rm I\hspace{-0.2em}R}^{n}\right)$ such that the minimizer $f\in BV(\Omega)$ of
\begin{equation}
\label{The_Functional}
J(u)=\int_{\Omega} |Du| + \int_{\Omega} \int_{0}^{u} H(x,t) dt \ dx +\int_{\partial\Omega} |u-\phi| d {\cal H}^{n-1}, \ \ \ \ u\in BV(\Omega),
\end{equation}
exists and satisfies (\ref{ONE-A}), $f\in C^{2}(\Omega)\cap C^{0}\left(\overline{\Omega}\setminus \{P\}\right) \cap L^{\infty}(\Omega),$
$f\notin C^{0}\left(\overline{\Omega}\right)$ and $f \neq \phi$ in a neighborhood of $P$ in $\partial\Omega.$
\end{thm}
\vspace{3mm}
Since there are certainly many examples of Dirichlet problems which have continuous solutions even though their domains fail to satisfy appropriate smoothness
or boundary curvature conditions (e.g. by restricting to a smaller domain a classical solution of a Dirichlet problem on a larger domain), the question of necessary or sufficient conditions
for the continuity at $P$ of a generalized solution of a particular Dirichlet problem is of interest and the examples here suggest (to us)
that a ``Concus-Finn'' type condition might yield necessary conditions for the continuity at $P$ of solutions; see \S \ref{CFcondition}.
We view this note as being analogous to other articles (e.g. \cite{FinnShi, HuffMcCuan06, HuffMcCuan09, Korevaar}) which enhance our knowledge of the behavior of solutions of boundary
value problems for prescribed mean curvature equations by constructing and analyzing specific examples.
One might also compare Theorem \ref{Main Theorem} with the behavior of generalized solutions of (\ref{ONE-A})-(\ref{ONE-B}) when $\partial\Omega\setminus\{P\}$
is smooth and $|H(x,\phi(x))|\le (n-1)\Lambda(x)$ for $x\in \partial\Omega\setminus\{P\}$ (e.g. \cite{EL1986, Lan1985, Lan1988}) and with capillary surfaces (e.g. \cite{LS1}).
\section{Nonparametric Minimal Surfaces in ${\rm I\hspace{-0.2em}R}^{3}$}
\label{BLUE}
In this section, we will assume $n=2$ and $H\equiv 0;$ this allows us to use explicit comparison functions and illustrate our general procedure.
Let $\Omega$ be a bounded, open set in ${\rm I\hspace{-0.2em}R}^{2}$ with locally Lipschitz boundary $\partial\Omega$ such that a point $P$
lies on $\partial\Omega$ and there exist distinct rays $l^{\pm}$ starting at $P$ such that $\partial\Omega$ is tangent to
$l^{+}\cup l^{-}$ at $P.$
By rotating and translating the domain, we may assume $P=(0,1)$ and there exists a $\sigma\in \left(-\frac{\pi}{2},\frac{\pi}{2}\right)$
such that
$l^{-}=\{\left(r\cos(\sigma),1+r\sin(\sigma)\right) : r\ge 0\},$
$l^{+}=\{\left(r\cos(\pi-\sigma),1+r\sin(\pi-\sigma)\right) : r\ge 0\}$ and
\begin{equation}
\label{PIZZA}
\Omega\cap B\left(P,\delta\right) = \{\left(r\cos(\theta),1+r\sin(\theta)\right) : 0<r<\delta, \theta^{-}(r)<\theta<\theta^{+}(r)\}
\end{equation}
for some $\delta>0$ and functions $\theta^{\pm}\in C^{0}(\left[0,\delta)\right)$ which satisfy $\theta^{-}<\theta^{+},$
$\theta^{-}(0)=\sigma$ and $\theta^{+}(0)=\pi-\sigma$; here $B\left(P,\delta\right)$ is the open ball in ${\rm I\hspace{-0.2em}R}^{2}$ centered at $P$
of radius $\delta.$ If we set $\alpha=\frac{\pi}{2}-\sigma,$ then $\alpha\in (0,\pi)$ and the angle at $P$ in $\Omega$ of
$\partial\Omega$ has size $2\alpha.$
As $\sigma<0$ goes to zero, $2\alpha>\pi$ goes to $\pi$ and the (upper) region
between $l^{-}$ and $l^{+}$ becomes ``less nonconvex'' and approaches a half-plane through $P.$
We will show that for each choice of $\sigma\in \left(-\frac{\pi}{2},0\right),$ there is a domain $\Omega$ as above and a choice of
Dirichlet data $\phi\in C^{\infty}\left(\partial\Omega\right)$ such that the solution of (\ref{ONE-A})-(\ref{ONE-B}) for $\Omega$
and $\phi$ is discontinuous at $P.$
Fix $\sigma\in \left(-\frac{\pi}{2},-\frac{\pi}{4}\right).$ Let $\epsilon$ be a small, fixed parameter, say $\epsilon\in (0,0.5),$ and let
$a=a(\sigma)\in (1,2)$ be a parameter to be determined.
Set $\tau=(1+\epsilon)\cot(-\sigma)$ and $r_{1}=\sqrt{\tau^{2}+(1+\epsilon)^{2}}.$
Define $h_{2/\pi}\in C^{2}((0,2)\times (-1,1))$ by
\[
h_{2/\pi}(x_1,x_2) =
\frac{2}{\pi}\ln\left(\frac{\cos\left(\frac{\pi x_2}{2}\right)}{\sin\left(\frac{\pi x_1}{2}\right)}
\right).
\]
Notice that the graph of $h_{2/\pi}$ is part of Scherk's first surface, so $\mathrm {div}(Th_{2/\pi})=0$ on $(0,2)\times (-1,1),$
and $h_{2/\pi}(t,t-1)=0$ for each $t\in (0,2).$
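Indeed, since $\sin\left(\frac{\pi x_{1}}{2}\right)=\cos\left(\frac{\pi (x_{1}-1)}{2}\right),$ we may write
\[
h_{2/\pi}(x_1,x_2)=\frac{2}{\pi}\left[\ln\left(\cos\left(\frac{\pi x_{2}}{2}\right)\right) - \ln\left(\cos\left(\frac{\pi (x_{1}-1)}{2}\right)\right)\right],
\]
which is the classical Scherk surface $z=\ln\left(\cos(u)\right)-\ln\left(\cos(v)\right)$ rescaled by the factor $\frac{2}{\pi}$ in all three coordinates; homotheties preserve minimality.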
A computation using L'Hospital's Rule shows
\begin{equation}
\label{ZipZoom}
\lim_{t\to 0^{+}} h_{2/\pi}((t\cos(\theta),1+t\sin(\theta))) = \frac{2}{\pi} \ln(-\tan(\theta)), \ \ \ \theta\in \left(-\frac{\pi}{2},0\right)
\end{equation}
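One way to see this: since $\cos\left(\frac{\pi}{2}(1+t\sin(\theta))\right)=-\sin\left(\frac{\pi t\sin(\theta)}{2}\right),$ we have
\[
h_{2/\pi}\left((t\cos(\theta),1+t\sin(\theta))\right)=\frac{2}{\pi}\ln\left(\frac{-\sin\left(\frac{\pi t\sin(\theta)}{2}\right)}{\sin\left(\frac{\pi t\cos(\theta)}{2}\right)}\right)
\longrightarrow \frac{2}{\pi}\ln\left(-\tan(\theta)\right) \ \ {\rm as} \ \ t\to 0^{+},
\]
since each sine is asymptotic to its argument; note that $-\tan(\theta)>0$ for $\theta\in\left(-\frac{\pi}{2},0\right).$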
Let $D=B\left({\cal O},1\right)\cap B\left((\tau,-\epsilon),r_{1}\right)\cap B\left((-\tau,-\epsilon),r_{1}\right)$ be the intersection
of three open disks and let $E\subset D$ be a strictly convex domain such that $\{x\in \partial E:x_{2}< 1\}$ is a $C^{\infty}$ curve,
$E\cap\{x_{2}\ge 0\} = D\cap \{x_{2}\ge 0\},$ $E$ is symmetric with respect to the $x_{2}-$axis and $(0,-1)\in \partial E;$
here ${\cal O}$ denotes $(0,0).$
Define
\[
\Omega = B\left({\cal O}, a\right)\setminus \overline{E}
\]
(see Figure \ref{FigureOne}); notice that $P\in \partial\Omega$ and (\ref{PIZZA}) holds with the choice of $\sigma$ above.
If we set $C=\{(x_{1},x_{2})\in {\rm I\hspace{-0.2em}R}^{2} : 0<x_{1}<1, x_{1}-1<x_{2}<1-x_{1} \},$ then (\ref{ZipZoom}) implies
$\sup_{x\in C\cap \partial E} h_{2/\pi}(x)<\infty.$
\begin{figure}
\centering
\includegraphics{FigureA05}
\caption{$\Omega$}
\label{FigureOne}
\end{figure}
Let $m>\max\{r_{1}\cosh^{-1}\left(\frac{2+\sqrt{\tau^{2}+\epsilon^{2}}}{r_{1}}\right),\sup_{x\in C\cap \partial E} h_{2/\pi}(x) \}.$
Notice that $m$ is independent of the parameter $a.$
Define $\phi\in C^{\infty}\left(\partial\Omega\right)$ by $\phi=0$ on $\partial B\left({\cal O},a\right)$ and $\phi=m$ on $\partial E.$
Let $f$ be the variational solution of (\ref{ONE-A})-(\ref{ONE-B}) with $\phi$ as given here (e.g. \cite{Ger2,Giu178}).
Since $\phi\ge 0$ on $\partial\Omega$ and $\phi>0$ on $\partial E,$ $f\ge 0$ in $\Omega$ (e.g. Lemma \ref{Four} (with $h\equiv 0$)) and
so $f>0$ in $\Omega$ (e.g. the Hopf boundary point lemma).
Notice that $h_{2/\pi}=0<f$ on $\Omega\cap\partial C$ and $h_{2/\pi}<\phi$ on $C\cap \partial E = C\cap \partial\Omega$ and therefore
$h_{2/\pi}<f$ on $\Omega\cap C$ (see Figure \ref{FigureTwo}).
Together with (\ref{ZipZoom}), this implies
\begin{equation}
\label{TRex}
\liminf_{\Omega\cap C\ni x\to P} f(x) \ge \frac{2}{\pi} \ln(\tan(-\sigma))>0.
\end{equation}
\begin{figure}
\centering
\includegraphics{FigureA07}
\caption{$\Omega\cap C,$ the domain of the comparison function for (\ref{TRex})}
\label{FigureTwo}
\end{figure}
Set $W=B\left({\cal O},a\right) \setminus \overline{B\left({\cal O},1\right)}$ (see Figure \ref{FigureThree}); then $W\subset \Omega$.
Define $g\in C^{\infty}(W)\cap C^{0}(\overline{W})$ by $g(x) = \cosh^{-1}\left(a\right)-\cosh^{-1}\left(|x|\right)$
and notice that the graph of $g$ is part of a catenoid, where $g=0$ on $\partial B\left({\cal O},a\right)$ and
$g=\cosh^{-1}\left(a\right)$ on $\partial B\left({\cal O},1\right).$
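(As a check: for a radial function $g=g(r),$ ${\rm div}\left(Tg\right)=\frac{1}{r}\frac{d}{dr}\left(\frac{rg'(r)}{\sqrt{1+g'(r)^{2}}}\right),$ and here $g'(r)=-\frac{1}{\sqrt{r^{2}-1}},$ so that $\frac{rg'(r)}{\sqrt{1+g'(r)^{2}}}\equiv -1$ and ${\rm div}\left(Tg\right)=0;$ moreover $g'(r)\to-\infty$ as $r\to 1^{+},$ so the graph of $g$ meets the cylinder $\partial B\left({\cal O},1\right)\times{\rm I\hspace{-0.2em}R}$ vertically.)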
It follows from the General Comparison Principle (e.g. \cite{FinnBook}, Theorem 5.1) that $f\le g$ on $W$ and therefore
\begin{equation}
\label{Raptor}
f\le \cosh^{-1}\left(a\right) \ \ \ \ \ {\rm on} \ \ W.
\end{equation}
If we select $a>1$ so that $\cosh^{-1}\left(a\right) < \frac{2}{\pi} \ln(\tan(-\sigma)),$ then (\ref{TRex}) and (\ref{Raptor}) imply
that $f$ cannot be continuous at $P.$
Notice that \cite{Simon} implies $f\in C^{0}\left(\overline{\Omega}\setminus \{P\}\right).$
\begin{figure}
\centering
\includegraphics{FigureA06}
\caption{$W,$ the domain of the comparison function for (\ref{Raptor})}
\label{FigureThree}
\end{figure}
This example illustrates the procedure we shall use in \S \ref{Happy}; a somewhat similar approach was used in \cite{FinnShi,Korevaar,LS1,Serrin}.
The case when $\sigma\in \left[-\frac{\pi}{4},0\right)$ has a similar proof with the changes that $D$ is the intersection of the open disk
$B\left({\cal O},1\right)$ with the interiors of two ellipses and a Scherk surface with rhomboidal domain (\cite{NitscheBook}, pp. 70-71) is used
as a comparison surface to obtain the analog of (\ref{TRex}); the details can be found in \cite{MelinThesis}.
\section{Lemmata}
\begin{lem}
\label{Three}
Let $\Omega$ be a bounded open set in ${\rm I\hspace{-0.2em}R}^{n},$ $n\ge 2,$ with locally Lipschitz boundary and let $\Gamma$ be an open, $C^{2}$
subset of $\partial\Omega.$ Let $\phi \in L^{\infty}(\partial\Omega)\cap C^{1,\beta}(\Gamma).$
Suppose $g\in C^{2}(\Omega)\cap L^{\infty}(\Omega)$ is the variational solution of (\ref{ONE-A})-(\ref{ONE-B}) and $g<\phi$ on $\Gamma.$
Then $\nu\equiv \frac{(\nabla g,-1)}{\sqrt{1+|\nabla g|^{2}}}\in C^{0}\left(\Omega\cup\Gamma \right)$ and
$\nu\cdot\left(\eta,0\right)=1$ on $\Gamma,$
where $\eta(x)\in S^{n-1}$ is the exterior unit normal to $\Gamma$ at $x.$
\end{lem}
\vspace{3mm}
\noindent {\bf Proof:} Since $g$ minimizes the functional $J$ in (\ref{The_Functional}) over $BV\left(\Omega\right),$ $g$ also minimizes the functional
$K(u)=J(u)-\int_{\Gamma} \phi\ d {\cal H}^{n-1}.$ Notice
\[
K(u)=\int_{\Omega} |Du| + \int_{\Omega} \int_{0}^{u} H(x,t) dt \ dx +\int_{\partial\Omega\setminus\Gamma} |u-\phi| d {\cal H}^{n-1} - \int_{\Gamma} u\ d {\cal H}^{n-1}
\]
for each $u\in BV\left(\Omega\right)$ with $tr(u)\le \phi$ on $\Gamma;$ in particular, this holds when $u=g.$
Therefore, for each $x\in\Gamma,$ there exists $\rho>0$ such that $\partial\Omega\cap B_{n}(x,\rho)\subset\Gamma$ and
the Lemma follows as in \cite{KorevaarSimon}. \qed
\vspace{3mm}
\begin{lem}
\label{Four}
Let $\Omega$ be a bounded open set in ${\rm I\hspace{-0.2em}R}^{n},$ $n\ge 2,$ with locally Lipschitz boundary, $\phi,\psi \in L^{\infty}(\partial\Omega)$
with $\psi\le\phi$ on $\partial\Omega,$ $H_{0}\in C^{2}\left(\Omega\times{\rm I\hspace{-0.2em}R}\right)$ with $H_{0}(x,t)$ nondecreasing in $t$ for $x\in\Omega,$
and $H_{0}\ge H$ on $\Omega\times{\rm I\hspace{-0.2em}R}.$
Consider the boundary value problem
\begin{eqnarray}
\label{ONE-D}
{\rm div}\left( Tf\right) & = & H_{0}(x,f) \ \ \ \ \ {\rm in} \ \ \Omega, \\
f & = & \psi \ \ \ \ \ {\rm on} \ \ \partial\Omega.
\label{ONE-E}
\end{eqnarray}
Suppose $g\in C^{2}(\Omega)\cap L^{\infty}(\Omega)$ is the variational solution of (\ref{ONE-A})-(\ref{ONE-B}) and either
(i) $h\in C^{2}(\Omega)\cap L^{\infty}(\Omega)$ is the variational solution of (\ref{ONE-D})-(\ref{ONE-E}) or
(ii) $\psi\in C^{0}(\partial{\Omega}),$ $h\in C^{2}(\Omega)\cap C^{0}(\overline{\Omega})$ and $h$ satisfies (\ref{ONE-D})-(\ref{ONE-E}).
Then $h\le g$ in $\Omega.$
\end{lem}
\vspace{3mm}
\noindent {\bf Proof:} Let $A=\{x\in\Omega : h(x)>g(x)\}.$ In case (i), let $f=hI_{\Omega\setminus A}+gI_{A},$ where $I_{B}$ is the characteristic
function of a set $B;$ then a simple calculation using $J(g)\le J(f)$ shows that $J_{1}(f)\le J_{1}(h)$ and therefore $f=h$ and $A=\emptyset,$
where $J_{1}(u)=\int_{\Omega} |Du| + \int_{\Omega} \int_{0}^{u} H_{0}(x,t) dt \ dx +\int_{\partial\Omega} |u-\psi| d {\cal H}^{n-1},$ $u\in BV(\Omega),$ is the
functional $h$ minimizes. Case (ii) follows from Lemma 1 of \cite{Williams}. \qed
\begin{lem}
\label{Mango}
Let $\Omega \subset \{x\in{\rm I\hspace{-0.2em}R}^{2}\ : \ x_{2}>0\}$ be a bounded open set, $n\in {\rm I\hspace{-0.2em}N}$ with $n\ge 2$ and $g\in C^{2}\left(\Omega\right).$
Set $\tilde \Omega = \{(x_{1},x_{2}\omega)\in {\rm I\hspace{-0.2em}R}^{n}\ : \ (x_{1},x_{2})\in \Omega, \omega\in S^{n-2} \}$ and
define $\tilde g\in C^{2}\left( \tilde \Omega \right)$ by
$\tilde g(x_{1},x_{2}\omega)=g(x_{1},x_{2})$ for $(x_{1},x_{2})\in \Omega, \omega\in S^{n-2}.$
Then, for $x=(x_{1},\dots,x_{n})=(x_{1},r\omega) \in \tilde \Omega$ with $r=\sqrt{x_{2}^{2}+\dots+x_{n}^{2}},$ $\omega=\frac{1}{r}(x_{2},\dots,x_{n})$
and $(x_{1},r)\in \Omega,$ we have
\[
{\rm div}\left( \frac{\nabla \tilde g}{\sqrt{1+|\nabla \tilde g|^{2}}}\right)(x)
= {\rm div}\left( \frac{\nabla g}{\sqrt{1+|\nabla g|^{2}}}\right)(x_{1},r)
+ \frac{n-2}{r}\frac{g_{x_{2}}(x_{1},r)}{\sqrt{1+|\nabla g(x_{1},r)|^{2}}}.
\]
In particular, if $H\ge 0,$ $R>0,$ $\Omega\subset \{x\in{\rm I\hspace{-0.2em}R}^{2}\ : \ x_{2}\ge R\}$ and
\[
{\rm div}\left( \frac{\nabla g}{\sqrt{1+|\nabla g|^{2}}}\right)\ge H + \frac{n-2}{R} \ \ \ \ \ {\rm on} \ \ \Omega,
\]
then ${\rm div}\left( \frac{\nabla \tilde g}{\sqrt{1+|\nabla \tilde g|^{2}}}\right)\ge H$ on $\tilde\Omega.$
\end{lem}
\vspace{3mm}
\noindent {\bf Proof:} Notice that $1+|\nabla\tilde g|^{2}=1+|\nabla g|^{2},$
\[
\left( 1+|\nabla \tilde g|^{2}\right) \triangle \tilde g =
\left( 1+|\nabla g|^{2}\right) \left( \triangle g + \frac{n-2}{r} g_{x_{2}} \right),
\]
\[
\sum_{i,j=1}^{n} \frac{\partial \tilde g}{\partial x_{i}} \frac{\partial \tilde g}{\partial x_{j}} \frac{\partial^{2} \tilde g}{\partial x_{i}\partial x_{j}}
= \left( \frac{\partial g}{\partial x_{1}}\right)^{2} \frac{\partial^{2} g}{\partial x_{1}^{2}}
+2\frac{\partial g}{\partial x_{1}}\frac{\partial g}{\partial x_{2}}\frac{\partial^{2} g}{\partial x_{1}\partial x_{2}}
+ \left( \frac{\partial g}{\partial x_{2}}\right)^{2} \frac{\partial^{2} g}{\partial x_{2}^{2}}
\]
and so
\[
\left( 1+|\nabla \tilde g|^{2}\right) \triangle \tilde g -
\sum_{i,j=1}^{n} \frac{\partial \tilde g}{\partial x_{i}} \frac{\partial \tilde g}{\partial x_{j}} \frac{\partial^{2} \tilde g}{\partial x_{i}\partial x_{j}}
\]
\[
= \left(1+g_{x_{2}}^{2}\right)g_{x_{1}x_{1}} - 2g_{x_{1}}g_{x_{2}}g_{x_{1}x_{2}}+\left(1+g_{x_{1}}^{2}\right)g_{x_{2}x_{2}}
+ \frac{n-2}{r} \left(1+g_{x_{1}}^{2}+g_{x_{2}}^{2} \right) g_{x_{2}}.
\]
Since \( {\rm div}\left( Tu\right) = \left( 1+|\nabla u|^{2}\right)^{-3/2}\left[ \left( 1+|\nabla u|^{2}\right) \triangle u - \sum_{i,j} u_{x_{i}} u_{x_{j}} u_{x_{i}x_{j}}\right] \) for any \( C^{2} \) function \( u \), the lemma follows from this. \qed
\section{The $n-$dimensional case}
\label{Happy}
Let $B_{k}\left(x,r\right)$ denote the open ball in ${\rm I\hspace{-0.2em}R}^{k}$ centered at $x\in {\rm I\hspace{-0.2em}R}^{k}$ with radius $r>0$ and ${\cal O}_{k}=(0,\dots,0)\in {\rm I\hspace{-0.2em}R}^{k},$ for $k\in{\rm I\hspace{-0.2em}N}.$
Now consider $n\ge 2$ and set
\[
\lambda=\sup_{(x,t)\in {\rm I\hspace{-0.2em}R}^{n}\times{\rm I\hspace{-0.2em}R}}|H(x,t)|;
\]
if $\lambda=0,$ replace it with a positive constant. For each $a\in \left(0,\frac{n}{\lambda}\right)$ and $Q\in {\rm I\hspace{-0.2em}R}^{n},$ we have
\begin{equation}
\label{BOOK}
\int_{B_{n}(Q,a)} \lambda^{n} dx < n^{n}\omega_{n}.
\end{equation}
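Here $\omega_{n}$ denotes the volume of the unit ball in ${\rm I\hspace{-0.2em}R}^{n};$ (\ref{BOOK}) holds because $\int_{B_{n}(Q,a)} \lambda^{n} dx = \lambda^{n}\omega_{n}a^{n}$ and $\lambda a<n.$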
By translating our problem in ${\rm I\hspace{-0.2em}R}^{n},$ we may (and will) assume $Q={\cal O}_{n}.$
By Proposition 1.1 and Theorem 2.1 of \cite{Giu176}, we see that if $\Omega$ is a bounded, connected, open set in ${\rm I\hspace{-0.2em}R}^{n}$ with Lipschitz-continuous boundary,
$\overline{\Omega}\subset B_{n}\left({\cal O}_{n},\frac{n}{\lambda}\right)$ and $\phi\in L^{1}\left(\partial\Omega\right),$ then the functional $J$ in (\ref{The_Functional}) has a minimizer
$f\in BV(\Omega),$ $f\in C^{2}(\Omega)$ and $f$ satisfies (\ref{ONE-A}).
The proof in \S \ref{CDO} consists of setting some parameters (e.g. $p,$ $r_{1},$ $r_{2},$ $m_{0},$ $b,$ $c,$ $\tau,$ $\sigma,$ $a$), determining the domain $\Omega,$
finding different comparison functions (e.g. $g_{1},$ $g^{[u]},$ $k_{\pm},$ $k_{2},$ $k_{3},$ $k_{4}$), and mimicking (\ref{TRex}) and (\ref{Raptor})
to show that the variational solution $f$ of (\ref{ONE-A})-(\ref{ONE-B}) is discontinuous at a nonconvex corner.
In particular, we use a torus (i.e. $j_{a}$) to obtain (\ref{Hopping}), unduloids (i.e. $k_{\pm},$ $k_{2}$) to obtain (\ref{CatX}) (an analog of (\ref{Raptor})) and
nodoids (i.e. $g_{1},$ $g^{[u]}$), unduloids (i.e. $k_{\pm},$ $k_{4}$) and a helicoidal function (i.e. $h_{2}$) to obtain (\ref{Cab}) (an analog of (\ref{TRex}))
and prove that $f$ is discontinuous at $P=(0,p,0,\dots,0)\in \partial\Omega.$
\subsection{Codimension $1$ singular set}
\label{CDO}
In this section, we will obtain a domain $\Omega$ as above and $\phi\in C^{\infty}({\rm I\hspace{-0.2em}R}^{n})$ such that $P\in\partial\Omega,$ the minimizer $f$ of
(\ref{The_Functional}) is discontinuous at $P,$ $\partial\Omega\setminus T$ is smooth ($C^{\infty}$)
and $f\in C^{2}(\Omega)\cap C^{0}\left(\overline{\Omega}\setminus T\right),$ where $T$ is a smooth set of dimension $n-2$ (i.e. $T$ has codimension $1$ in $\partial\Omega$).
We will use portions of nodoids, unduloids and helicoidal surfaces with constant mean curvature as comparison functions.
For the convenience of the reader, we will denote functions whose graphs are subsets of nodoids with the letter $g$
(e.g. $g_{1}(x_{1},x_{2})$), subsets of CMC helicoids with the letter $h$ and subsets of unduloids (or onduloids) with the letter $k.$
Let ${\cal N}_{1}\subset {\rm I\hspace{-0.2em}R}^{3}$ be a nodoid which is symmetric with respect to the $x_{3}$-axis and has mean curvature $1$ (when ${\cal N}_{1}$ is oriented ``inward'',
so that the unit normal $\vec N_{{\cal N}_{1}}$ to ${\cal N}_{1}$ points toward the $x_{3}$-axis at the points of ${\cal N}_{1}$ which are furthest from the $x_{3}$-axis).
Let $s_{1}=\inf_{(x,t)\in {\cal N}_{1}} |x|$ be the inner neck size of ${\cal N}_{1}$ and let $s_{3}$ satisfy the condition that the unit normal to ${\cal N}_{1}$ is vertical
(i.e. parallel to the $x_{3}$-axis) at each point $(x,t)\in {\rm I\hspace{-0.2em}R}^{2}\times{\rm I\hspace{-0.2em}R}$ of ${\cal N}_{1}$ at which $|x|=s_{3};$ then $s_{1}<s_{3}.$ Let $s_{2}\in (s_{1},s_{3}).$
(Notice that we can assume $s_{2}/s_{1}$ is close to $s_{3}/s_{1}$ if we wish.)
Let us fix $0<p<\frac{1}{\lambda}$ and set $w=(0,p)\in{\rm I\hspace{-0.2em}R}^{2},$ $P=(0,p,0,\dots,0)\in {\rm I\hspace{-0.2em}R}^{n}.$
Let $m_{0}=\lambda+2(n-2)/(p/3).$ We shall assume $r_{2}=s_{2}/m_{0}<p/3;$ if necessary, we increase $m_{0}$ to accomplish this.
Let $r_{1}=s_{1}/m_{0}$ and $r_{3}=s_{3}/m_{0}.$
Let ${\cal N}=\{(m_{0})^{-1}X\in {\rm I\hspace{-0.2em}R}^{3} : X\in {\cal N}_{1} \};$ then ${\cal N}$ is a nodoid with mean curvature $m_{0}.$
Set $\Delta_{1}=\{x\in{\rm I\hspace{-0.2em}R}^{2} : r_{1}<|x|< r_{2} \}.$
Fix $b\in \left(0,\frac{1}{4m_{0}}\left(1+2m_{0}p-\sqrt{1+4m_{0}^{2}p^{2}}\right)\right).$
Define $g_{1}\in C^{\infty}\left(\Delta_{1}\right)\cap C^{0}\left(\overline{\Delta_{1}}\right)$ to be a function whose graph is a
subset of ${\cal N}$ on which $\vec N_{\cal N}=(n_{1},n_{2},n_{3})$ satisfies $n_{3}\ge 0;$
then
\begin{equation}
\label{STAR}
{\rm div}\left( \frac{\nabla g_{1}}{\sqrt{1+|\nabla g_{1}|^{2}}}\right) = m_{0}\ge \lambda+\frac{2(n-2)}{p/3}.
\end{equation}
By moving ${\cal N}$ vertically, we may assume $g_{1}(x)=0$ when $|x|=r_{2};$ then $g_{1}>0$ in $\Delta_{1}.$
Notice that $\frac{\partial g_{1}}{\partial x_{1}}(r_{1},0)=-\infty$ and $\frac{\partial g_{1}}{\partial x_{1}}(r_{2},0)<0$;
then there exists a $\beta_{0}>0$ such that, for each $\theta\in {\rm I\hspace{-0.2em}R},$
\begin{equation}
\label{Dog}
\frac{\partial}{\partial r} \left( g_{1}(r\Theta) \right)<-\beta_{0} \ \ \ \ \ {\rm for} \ \ r_{1}<r<r_{2},
\end{equation}
where $\Theta=(\cos(\theta),\sin(\theta)).$ Fix $\beta\in (0,\beta_{0}).$
Let
\begin{equation}
\label{Shockers}
0<\tau < \min \left\{\frac{pr_{1}}{\sqrt{r_{2}^{2}-r_{1}^{2}}}, \frac{2(1-p\lambda)}{\lambda(2-p\lambda)}, \frac{b(4p-b)}{4(2p-b)} \right\}.
\end{equation}
Consider $\sigma\in \left(-\frac{\pi}{2},0\right).$
Notice that the distance between $L$ and the point $(0,p-r_{2})$ is $r_{2}\cos(\sigma),$ where $L$ is the closed sector given by
\[
L=\{ \left(r\cos(\theta),p+r\sin(\theta)\right) : r\ge 0, \sigma\le\theta\le\pi-\sigma \}.
\]
Define $r_{4}=\sqrt{p^{2}+\tau^{2}}$ and
\[
M= B_{2}\left((\tau,0),r_{4}\right) \cap B_{2}\left((-\tau,0),r_{4}\right).
\]
Notice that $\tau<\frac{b(4p-b)}{4(2p-b)}$ and therefore $B_{2}\left({\cal O}_{2},\frac{a+p}{2}-b \right)\subset M$ if $p<a<p+b.$
Set $\sigma=-\arctan(\tau/p);$ then $\cos(\sigma)>\frac{r_{1}}{r_{2}},$ since $\tau<\frac{p\sqrt{r_{2}^{2}-r_{1}^{2}}}{r_{1}},$
and $L\cap \overline{B_{2}}=\emptyset,$ where $B_{2}=B_{2} \left( (0,p-r_{2}) ,r_{1} \right).$ Therefore there exists a $\delta_{1}>0$ such that if
$u=(u_{1},u_{2})\in \partial B_{2}({\cal O}_{2},p)$ with $|u-w|<\delta_{1},$ then
\begin{equation}
\label{Cat2}
B_{2}\left(\frac{p-r_{2}}{p}u,r_{1}\right) \subset M.
\end{equation}
Since $\tau<\frac{2(1-p\lambda)}{\lambda(2-p\lambda)},$ we have $\tau-\left( \frac{2}{\lambda}-r_{4}\right)< -p$ and so
$B_{2}\left({\cal O}_{2},p\right)\subset B_{2}\left((\tau,0),\frac{2}{\lambda}-r_{4}\right)$ (see Figure \ref{FigureEight} (b)).
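For completeness, let us verify this implication: since the centers ${\cal O}_{2}$ and $(\tau,0)$ are at distance $\tau,$ the containment $B_{2}\left({\cal O}_{2},p\right)\subset B_{2}\left((\tau,0),\frac{2}{\lambda}-r_{4}\right)$ is equivalent to $\tau+p<\frac{2}{\lambda}-r_{4},$ i.e. $\sqrt{p^{2}+\tau^{2}}<\frac{2}{\lambda}-p-\tau;$ squaring both sides, this reduces to
\[
\tau < \frac{\frac{2}{\lambda}\left(\frac{2}{\lambda}-2p\right)}{2\left(\frac{2}{\lambda}-p\right)} = \frac{2(1-p\lambda)}{\lambda(2-p\lambda)},
\]
which is guaranteed by (\ref{Shockers}). The same computation applies to the ball centered at $(-\tau,0).$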
Notice that
\begin{equation}
\label{SALAD}
M\setminus \{(0,\pm p)\} = \{\left(r\cos(\theta),p+r\sin(\theta)\right) : 0<r<2p, \theta^{-}(r)<\theta<\theta^{+}(r)\}
\end{equation}
for some functions $\theta^{\pm}\in C^{0}\left([0,2p)\right)$ which satisfy $\theta^{-}<\theta^{+},$
$\theta^{-}(0)=-\pi-\sigma$ and $\theta^{+}(0)=\sigma.$
Let $a>p$ and set
\[
{\cal T} = \left\{ \left(\left(\frac{a+p}{2}+b\cos v\right)\cos u,\left(\frac{a+p}{2}+b\cos v\right)\sin u,b\sin v+c \right) : (u,v)\in R \right\},
\]
where $R=[0,2\pi]\times [-\pi,0]$ and $0<c<b;$ since $b< \frac{1}{4m_{0}}\left(1+2m_{0}p-\sqrt{1+4m_{0}^{2}p^{2}}\right),$ we see that
$\frac{(a+p)/2-2b}{4b((a+p)/2-b)}>m_{0}$ for all $a\ge p.$
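To verify this claim, set $\rho=\frac{a+p}{2};$ the inequality $\frac{\rho-2b}{4b(\rho-b)}>m_{0}$ is equivalent to
\[
4m_{0}b^{2}-\left(2+4m_{0}\rho\right)b+\rho>0,
\]
which holds whenever $b$ is less than the smaller root $\frac{1}{4m_{0}}\left(1+2m_{0}\rho-\sqrt{1+4m_{0}^{2}\rho^{2}}\right)$ of this quadratic. At $a=p$ (i.e. $\rho=p$) this is exactly the bound imposed on $b$ above, and since $\frac{\rho-2b}{4b(\rho-b)}=\frac{1}{4b}\left(1-\frac{b}{\rho-b}\right)$ is increasing in $\rho,$ the case $a>p$ follows.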
We shall assume
\begin{equation}
\label{Assumption_a}
a\in \left(p,\min\{p+b,1/\lambda\}\right)
\end{equation}
and $c=\sqrt{b^{2}-\left(\frac{a-p}{2}\right)^{2}}.$
Notice that ${\cal T}$ is the lower half of a torus whose mean curvature (i.e. one half of the trace of the shape operator) at each point is greater than $m_{0}.$
Let ${\cal T}$ be the graph of a function $j_{a}$ over $\Delta_{a}=\{x\in {\rm I\hspace{-0.2em}R}^{2} : \frac{a+p}{2}-b\le |x| \le \frac{a+p}{2}+b\};$ then $j_{a}(x)=0$ on $|x|=a$ and $|x|=p,$ $j_{a}(x)<0$ on $p<|x|<a$ and
$j_{a}(x)>0$ on $\frac{a+p}{2}-b\le |x|<p$ and $a<|x|\le \frac{a+p}{2}+b$ for $x\in {\rm I\hspace{-0.2em}R}^{2}.$
Notice that $|j_{a}(x)|< \frac{1}{2m_{0}}$ for all $x\in \Delta_{a}.$
\begin{figure}[h]
\centering
\input{Figure05ALT.pdf_t}
\caption{The domain of $j_{a}$}
\label{FigureFour}
\end{figure}
Set
\begin{equation}
\label{AAAA}
\Omega=B_{n}\left({\cal O}_{n},a\right)\setminus \overline{\cal M},
\end{equation}
where ${\cal M}=\tilde M = \{(x_{1},x_{2}\omega)\in {\rm I\hspace{-0.2em}R}^{n}\ : \ (x_{1},x_{2})\in M, \omega\in S^{n-2} \}.$
If we define $\Pi_{i,j}(A)=\{(x_{i},x_{j}):(x_{1},\dots,x_{n})\in A,\ x_{k}=0 \ \ {\rm for}\ \ k\neq i,j\}$ for $A\subset {\rm I\hspace{-0.2em}R}^{n}$ and $1\le i<j\le n,$ then
$\Pi_{1,j}(\Omega)=B_{2}\left({\cal O}_{2},a\right)\setminus \overline{M}$ for $2\le j\le n$ and
$\Pi_{i,j}(\Omega)=B_{2}\left({\cal O}_{2},a\right)\setminus \overline{B_{2}\left({\cal O}_{2},p\right)}$ for $2\le i<j\le n$ (see Figure \ref{FigureFive}).
\begin{figure}
\centering
\includegraphics{Figure01}
\caption{(a) $\Pi_{1,j}\left(\Omega\right)$ for $2\le j\le n$ (b) $\Pi_{i,j}\left(\Omega\right)$ for $2\le i<j\le n$}
\label{FigureFive}
\end{figure}
We wish to select a helicoidal surface in ${\rm I\hspace{-0.2em}R}^{3}$ (e.g. \cite{DC_MD}) with constant mean curvature $m_{0},$ axis $\{w\}\times{\rm I\hspace{-0.2em}R}$
and pitch $-\beta$ (recall $-\beta \in (-\beta_{0},0)$), which we will denote ${\cal S};$ then, for each $t\in{\rm I\hspace{-0.2em}R},$
$k_{t}\left({\cal S}\right)={\cal S},$ where $k_{t}:{\rm I\hspace{-0.2em}R}^{3}\to {\rm I\hspace{-0.2em}R}^{3}$ is the helicoidal motion given by
$
k_{t}(x_{1},x_{2},x_{3})= (l_{t}(x_{1},x_{2}),x_{3}-\beta t)
$
with $l_{t}:{\rm I\hspace{-0.2em}R}^{2}\to {\rm I\hspace{-0.2em}R}^{2}$ given by
\[
l_{t}(x_{1},x_{2})=(x_{1}\cos(t)+(x_{2}-p)\sin(t),p-x_{1}\sin(t)+(x_{2}-p)\cos(t)).
\]
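Note that $l_{t}(x)=w+R_{-t}(x-w),$ where $R_{-t}$ is the rotation of ${\rm I\hspace{-0.2em}R}^{2}$ by the angle $-t,$ so that $k_{t}$ indeed rotates about the axis $\{w\}\times{\rm I\hspace{-0.2em}R}$ while translating vertically by $-\beta t.$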
Set $c_{0}=\frac{1}{4}\beta \sigma <0;$ by vertically translating ${\cal S},$ we may assume that there is an open
$c_{0}-$level curve ${\cal L}_{0}$ of ${\cal S}$ with endpoints $w=(0,p)$ and $q=(q_{1},q_{2})$ such that
${\cal L}_{0}\subset (0,\infty)\times {\rm I\hspace{-0.2em}R},$
${\cal L}=\overline{{\cal L}_{0}}$ is tangent to the (horizontal) line ${\rm I\hspace{-0.2em}R}\times\{p\}$ at $w$ and the slope $m_{v}$ of the tangent line
to ${\cal L}$ at $v$ satisfies $|m_{v}|<\tan\left(-\sigma/5\right)$ for each $v\in {\cal L}_{0};$
then ${\cal L}\times \{c_{0}\}\subset {\cal S}$ and
the curves $l_{t}\left({\cal L}_{0}\right),$ $-\frac{7\pi}{8}<t<\frac{7\pi}{8},$ are mutually disjoint.
Notice that the set
\[
{\cal R}= \{ l_{t}\left({\cal L}_{0}\right) : -\frac{7\pi}{8}<t<\frac{7\pi}{8} \}
= \bigcup_{-\frac{7\pi}{8}<t<\frac{7\pi}{8}} l_{t}\left({\cal L}_{0}\right)
\]
is an open subset of ${\rm I\hspace{-0.2em}R}^{2}\setminus \left( (-\infty,0]\times \{p\}\right)$ (see Figure \ref{FigureSix}),
$w\in \overline{{\cal R}}$ and ${\cal S}$
implicitly defines the smooth function $h_{2}$ on ${\cal R}$ given by $h_{2}(x)= \frac{\beta}{4} (\sigma-4t)$ if
$x\in l_{t}\left({\cal L}_{0}\right)$ for some $t\in \left(-\frac{7\pi}{8},\frac{7\pi}{8}\right).$
Notice that $B_{2}\left(w,q_{1}\right) \cap \{x_{1}>0\} \subset {\cal R}.$
Now $l_{t}\left({\cal L}_{0}\right)\cap M = \emptyset$ for $t\in \left(3\sigma/4,\sigma/4\right)$ and, by making $q_{1}>0$
sufficiently small, we may assume that
\begin{equation}
\label{STARSHINE}
l_{t}\left({\cal L}_{0}\right)\subset B_{2}({\cal O}_{2},p)\setminus M \ \ \ \ \ {\rm for \ \ each} \ \ \ \ \ t\in \left(3\sigma/4,\sigma/4\right).
\end{equation}
Notice that $h_{2}<\frac{\beta(2\sigma^{2}-\pi)}{8\sigma}$ on $l_{t}\left({\cal L}_{0}\right)$ for $-\frac{\pi}{2}<t<\frac{7\pi}{8}.$
\begin{figure}
\centering
\includegraphics{Figure03AA}
\caption{${\cal R}$}
\label{FigureSix}
\end{figure}
Let us fix $u=(u_{1},u_{2})\in \partial B_{2}({\cal O}_{2},p)$ such that $|u-w|<\min\{\delta_{1},q_{1}\}$ and $u_{1}>0.$
Then there exists $\theta_{u}\in (0,\pi/2)$ such that $u=(p\cos(\theta_{u}),p\sin(\theta_{u})).$
Define $g^{[u]}(x)=g_{1}\left(x+\frac{r_{2}-p}{p}u\right)$ and notice that $g^{[u]}(u)=g_{1}\left(\frac{r_{2}}{p}u\right)=0,$
since $|\frac{r_{2}}{p}u|=r_{2}.$ Note that the domain
\[
{\cal D}^{[u]}=
\{x+\frac{p-r_{2}}{p}u : x\in \Delta_{1}\} = B_{2}\left(\frac{p-r_{2}}{p}u,r_{2}\right) \setminus \overline{ B_{2}\left(\frac{p-r_{2}}{p}u,r_{1}\right) }
\]
of $g^{[u]}$ is contained in $B_{2}({\cal O}_{2},p)$ since $\partial B_{2}\left(\frac{p-r_{2}}{p}u,r_{2}\right)$ and
$\partial B_{2}({\cal O}_{2},p)$ are tangent circles at $u$ and $r_{2}<p$ (see Figure \ref{FigureSeven}).
Notice that
\begin{equation}
\label{SPACESHIP}
h_{2}(r\cos(\theta_{u}),r\sin(\theta_{u})) < g^{[u]}(r\cos(\theta_{u}),r\sin(\theta_{u}))
\end{equation}
when $p-r_{2}+r_{1}\le r\le p,$ because $h_{2}(u) < 0 = g^{[u]}(u),$ $\beta<\beta_{0}$ and (\ref{Dog}) holds.
\begin{figure}
\centering
\includegraphics[trim = 0mm 22mm 0mm 0mm, clip, width=8cm]{Figure04T}
\caption{${\cal D}^{[u]};$ $\Omega\cap \tilde{\cal D}^{[u]}$ is the domain of the comparison function for (\ref{Bingo})}
\label{FigureSeven}
\end{figure}
Let
\[
{\cal N}_{\pm}\subset \{x\in{\rm I\hspace{-0.2em}R}^{2} : r_{4}\le |(x_{1} \pm \tau,x_{2})| \le \frac{2}{\lambda}-r_{4}\}\times{\rm I\hspace{-0.2em}R}
\]
be unduloids in ${\rm I\hspace{-0.2em}R}^{3}$ with mean curvature $\lambda/2$ such that $\{(\mp \tau,0)\}\times{\rm I\hspace{-0.2em}R}$ are the respective axes of symmetry;
the minimum and maximum radii (or ``neck'' and ``waist'' sizes) of both unduloids are $r_{4}$ and $\frac{2}{\lambda}-r_{4}$ respectively.
Set
\[
\Delta_{\pm}=B_{2}\left( (\mp \tau,0),\frac{2}{\lambda}-r_{4}\right) \setminus \overline{B_{2}\left( (\mp \tau,0),r_{4}\right)}
\]
and define
$k_{\pm}\in C^{\infty}\left(\Delta_{\pm}\right)$ so that the graphs of $k_{\pm}$ are subsets of ${\cal N}_{\pm}$ respectively,
\[
{\rm div}\left( Tk_{\pm}\right)=-\lambda \ \ \ \ \ {\rm in} \ \ \Delta_{\pm},
\]
$\frac{\partial}{\partial r} \left(k_{\pm}\left((\mp \tau,0)+r\Theta\right) \right)|_{r=r_{4}} = -\infty$ and
$\frac{\partial}{\partial r} \left(k_{\pm}\left((\mp \tau,0)+r\Theta\right) \right)|_{r=\frac{2}{\lambda}-r_{4}} = -\infty$
for each $\theta\in{\rm I\hspace{-0.2em}R},$ where $\Theta=(\cos(\theta),\sin(\theta)).$
We may vertically translate ${\cal N}_{\pm}$ so that $k_{\pm}(x)=0$ for $x\in{\rm I\hspace{-0.2em}R}^{2}$ with $|(x_{1}\pm \tau,x_{2})|=\frac{2}{\lambda}-r_{4}.$
Notice that $k_{+}\left(0,p\right)=k_{-}\left(0,p\right)=\sup_{\Delta_{+}} k_{+}=\sup_{\Delta_{-}} k_{-}.$
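(Indeed, $\left|(0,p)-(\mp\tau,0)\right|=\sqrt{\tau^{2}+p^{2}}=r_{4},$ so $(0,p)$ lies on the inner boundary circle of each $\Delta_{\pm},$ where $k_{\pm}$ attain their maxima.)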
\begin{figure}
\centering
\includegraphics{Figure07}
\caption{(a) $B_{2}\left({\cal O}_{2},p\right)\nsubseteq B_{2}\left((-\tau,0),\frac{2}{\lambda}-r_{4}\right)$
(b) $B_{2}\left({\cal O}_{2},p\right)\subset B_{2}\left((-\tau,0),\frac{2}{\lambda}-r_{4}\right)$ }
\label{FigureEight}
\end{figure}
Let ${\cal N}_{2}\subset \{x\in{\rm I\hspace{-0.2em}R}^{2} : p\le |x| \le \frac{2}{\lambda}-p\}\times{\rm I\hspace{-0.2em}R}$ be an unduloid with mean curvature $\lambda/2$
such that the $x_{3}-$axis is the axis of symmetry and the minimum and maximum radii (or ``neck'' and ``waist'' sizes) are $p$ and
$\frac{2}{\lambda}-p$ respectively.
Set $\Delta_{2}=B_{2}\left({\cal O}_{2},\frac{2}{\lambda}-p\right) \setminus \overline{B_{2}\left({\cal O}_{2},p\right)}$ and define
$k_{2}\in C^{\infty}\left(\Delta_{2}\right)$ so that the graph of $k_{2}$ is a subset of ${\cal N}_{2},$
${\rm div}\left( Tk_{2}\right)=-\lambda$ in $\Delta_{2},$
$\frac{\partial}{\partial r} \left(k_{2}\left(r\Theta\right) \right)|_{r=p} = -\infty$ and
$\frac{\partial}{\partial r} \left(k_{2}\left(r\Theta\right) \right)|_{r=\frac{2}{\lambda}-p} = -\infty$
for each $\theta\in{\rm I\hspace{-0.2em}R},$ where $\Theta=(\cos(\theta),\sin(\theta)).$
\begin{figure}
\centering
\includegraphics{Figure06}
\caption{$B_{2}\left({\cal O}_{2},a\right) \setminus \overline{B_{2}\left({\cal O}_{2},p\right)}:$ (\ref{Once upon a time})}
\label{FigureNine}
\end{figure}
Define $\phi\in C^{\infty}\left({\rm I\hspace{-0.2em}R}^{n}\right)$ so that $\phi=0$ on $\partial B_{n}\left({\cal O}_{n},a\right)$
and $\phi=m$ on $\partial {\cal M},$ where
\begin{equation}
\label{Early}
m>\max\{g_{1}(0,r_{1}),\frac{1}{2m_{0}},k_{+}(r_{4}-\tau,0)+ k_{2}(0,p)-k_{2}\left(0,\frac{2}{\lambda}-p\right) \};
\end{equation}
recall then that $m>j_{a}\left(\frac{a+p}{2}-b\right).$
Let $f$ be the variational solution of (\ref{ONE-A})-(\ref{ONE-B}) with $\Omega$ and $\phi$ as given here; that is, let $f$ minimize the
functional given in (\ref{The_Functional}) and notice that the existence of $f$ follows from (\ref{BOOK}), (\ref{Assumption_a}),
\S 1.D. of \cite{Giu176} and \cite{Ger2,Giu178}.
(Notice that there exists $v:B_{2}({\cal O}_{2},a)\setminus \overline{M} \to {\rm I\hspace{-0.2em}R}$ such that $f= \tilde v.$)
The comparison principle implies $\tilde j_{a}(x)\le f(x)$ for $x\in \Omega\cap\tilde\Delta_{a}$ and so $f(x)\ge \tilde j_{a}(x)\ge 0$ if $x\in \Omega$ with $|x|\le p$
(recall (\ref{Assumption_a}) holds). In particular,
\begin{equation}
\label{Hopping}
f(x)\ge 0 \ \ \ \ \ {\rm when} \ \ \ x\in \Omega \ \ {\rm with} \ \ |x| \le p.
\end{equation}
Set $W=\left(B_{2}\left({\cal O}_{2},a\right) \setminus \overline{B_{2}\left({\cal O}_{2},p\right)}\right)\times {\rm I\hspace{-0.2em}R}^{n-2}.$ Now
\[
\Omega\subset B_{2}\left({\cal O}_{2},a\right) \times {\rm I\hspace{-0.2em}R}^{n-2} \subset B_{2}\left({\cal O}_{2},\frac{2}{\lambda}-p\right) \times {\rm I\hspace{-0.2em}R}^{n-2}
\]
(see Figure \ref{FigureNine}). Define $k_{3}(x)=k_{2}\left(x_{1},x_{2}\right)-k_{2}(0,a)$ for $x=(x_{1},x_{2},\dots,x_{n})\in W.$
Notice that $f=0\le k_{3}$ on $\overline{W}\cap \partial B_{n}\left({\cal O}_{n},a\right),$
\[
{\rm div}\left( Tf\right)=H(x,f(x))\ge -\lambda= {\rm div}\left( Tk_{3}\right) \ \ \ \ \ {\rm in} \ \ \Omega\cap W
\]
and $\frac{\partial}{\partial r} \left(k_{2}\left(r\Theta\right) \right)|_{r=p} = -\infty$
(so that $\lim_{W\ni y\to x} Tk_{3}(y) \cdot \xi(x)=1$ for $x\in \partial B_{2}\left({\cal O}_{2},p\right)\times {\rm I\hspace{-0.2em}R}^{n-2},$
where $\xi$ is the unit exterior normal to $\partial W$).
The general comparison principle (e.g. \cite{FinnBook}, Theorem 5.1) then implies
\begin{equation}
\label{Once upon a time}
f\le k_{3} \ \ \ \ \ {\rm in} \ \ \ \ \ \Omega\cap W
\end{equation}
and, in particular,
\begin{equation}
\label{Bedtime_Stories}
\limsup_{\Omega\cap W\ni y\to x} f(y) \le k_{3}(x) \ \ \ \ \ {\rm for} \ \ x\in \partial \Omega \cap \overline{W}
\end{equation}
(see Figure \ref{FigureTen}).
By rotating the axis of symmetry of $W$ through all lines in ${\rm I\hspace{-0.2em}R}^{n}$ containing ${\cal O}_{n}$ (or, equivalently, keeping $W$
fixed and rotating $\Omega$ about ${\cal O}_{n}$), we see that
\begin{equation}
\label{CatX}
\sup \{f(x) : x\in B_{n}\left({\cal O}_{n},a\right) \setminus \overline{B_{n}\left({\cal O}_{n},p\right)} \} \le k_{2}(0,p)-k_{2}(0,a).
\end{equation}
\begin{figure}
\centering
\includegraphics{figure_10}
\caption{(\ref{Bedtime_Stories}): $W$ and $B_{n}\left({\cal O}_{n},a\right) \setminus \overline{B_{n}\left({\cal O}_{n},p\right)}$ when $n=3$}
\label{FigureTen}
\end{figure}
Now define $k_{4}\in C^{\infty}\left(\Delta_{+}\times {\rm I\hspace{-0.2em}R}^{n-2}\right)\cap C^{0}\left(\overline{\Delta_{+}}\times {\rm I\hspace{-0.2em}R}^{n-2}\right)$
by
\[
k_{4}(x)=k_{+}(x_{1},x_{2}) + k_{2}(0,p)-k_{2}(0,a), \ \ \ x=(x_{1},x_{2},\dots,x_{n})\in \overline{\Delta_{+}}\times {\rm I\hspace{-0.2em}R}^{n-2}.
\]
Combining (\ref{ONE-A}) and (\ref{CatX}) with the facts that ${\rm div}\left(Tk_{4}\right)=-\lambda$ in $\Delta_{+}\times {\rm I\hspace{-0.2em}R}^{n-2}$ and
$\lim_{\Delta_{+}\times {\rm I\hspace{-0.2em}R}^{n-2}\ni y\to x} Tk_{4}(y) \cdot \xi_{+}(x)=1$ for $x\in \partial B_{2}\left( (-\tau,0),r_{4}\right)\times {\rm I\hspace{-0.2em}R}^{n-2},$
where $\xi_{+}$ is the inward unit normal to $\partial B_{2}\left( (-\tau,0),r_{4}\right)\times {\rm I\hspace{-0.2em}R}^{n-2},$
we see that
\begin{equation}
\label{CatY}
f\le k_{4} \ \ \ {\rm in} \ \ \Omega \cap \left(\Delta_{+}\times {\rm I\hspace{-0.2em}R}^{n-2}\right).
\end{equation}
(If Figure \ref{FigureEight} (a) held, then (\ref{CatY}) would not be valid.)
Now let $L:{\rm I\hspace{-0.2em}R}^{n}\to {\rm I\hspace{-0.2em}R}^{n}$ be any rotation about ${\cal O}_{n}$ which satisfies $L(\Omega)=\Omega,$
notice that $f\circ L$ satisfies (\ref{ONE-A})-(\ref{ONE-B}) and apply the previous argument
to obtain $f\circ L \le k_{4}$ in $\Omega \cap \left(\Delta_{+}\times {\rm I\hspace{-0.2em}R}^{n-2}\right)$ and therefore
\begin{equation}
\label{Taxi}
\sup \{f(x) : x\in \partial {\cal M} \} \le k_{4}(r_{4}-\tau,0) <m.
\end{equation}
\begin{figure}
\centering
\includegraphics{Figure05X}
\caption{$A:$ (\ref{Jerry})}
\label{FigureEleven}
\end{figure}
From Lemma \ref{Three}, we see that the downward unit normal to the graph of $f,$ $N_{f},$ satisfies
$N_{f} = (\nu,0)$ on $\partial {\cal M} \setminus \{ (0,p\omega) : \omega\in S^{n-2} \},$ where $\nu(x)$ is the exterior unit normal to $\partial {\cal M}$ at $x,$ and
\begin{equation}
\label{Doggy}
\lim_{\Omega\ni y\to x} Tf(y)\cdot \nu(x) =1 \ \ \ \ \ {\rm for} \ \ x\in\partial {\cal M} \setminus \{ (0,p\omega) : \omega\in S^{n-2} \}.
\end{equation}
Let us write $B=B_{2}\left(\frac{p-r_{2}}{p}u,r_{2}\right);$ then $\tilde g^{[u]}=0 \le f$ on $\Omega\cap\partial \tilde B$ and
$\tilde g^{[u]}\le g_{1}(r_{1},0)<\phi$ on $\tilde B \cap \partial {\cal M}.$
It follows from (\ref{ONE-A}), (\ref{STAR}) and Lemma \ref{Mango} that
\begin{equation}
\label{Bingo}
\tilde g^{[u]}<f \ \ \ \ \ {\rm on} \ \ \ \ \ \Omega\cap \tilde {\cal D}^{[u]}=\Omega \cap \tilde B.
\end{equation}
Set $U=\{r\left(\cos(\theta),\sin(\theta)\omega\right)\in\Omega : r\in (0,p), \theta\in (0,\theta_{u}),\omega\in S^{n-2} \}.$
If we write $\partial_{1}U=\{ \left(p\cos(\theta),p\sin(\theta)\omega\right) : \theta\in (0,\theta_{u}],\omega\in S^{n-2} \}, $
$\partial_{2}U=\partial {\cal M} \cap \partial U$ and
$\partial_{3}U= \{ \left(r\cos(\theta_{u}),r\sin(\theta_{u})\omega\right)\in \overline{\Omega} : r\in [0,p],\omega\in S^{n-2} \}, $
then $\partial U = \partial_{1}U \cup \partial_{2}U \cup \partial_{3}U,$
$\tilde h_{2}\le 0\le f$ on $\partial_{1}U\setminus\{P\}$ and $\tilde h_{2}<\tilde g^{[u]}<f$ on $\partial_{3}U$ (see (\ref{SPACESHIP}));
then (\ref{Doggy}) and the general comparison principle imply
\begin{equation}
\label{Jerry}
\tilde h_{2}<f \ \ \ \ \ {\rm in} \ \ \ \ \ U=\tilde A,
\end{equation}
where $A=\{r\left(\cos(\theta),\sin(\theta)\right)\in B_{2}({\cal O}_{2},p)\setminus \overline{M} : r\in (0,p), \theta\in (0,\theta_{u})\}$
(see Figure \ref{FigureEleven}).
Set ${\cal R}_{2} = \bigcup_{t=3\sigma/4}^{2\sigma/4} l_{t}\left({\cal L}_{0}\right).$
Now (\ref{STARSHINE}) implies $\tilde {\cal R}_{2}\subset U$ and so
\begin{equation}
\label{Cab}
f > \tilde h_{2}\ge -\frac{\beta\sigma}{4} \ \ \ \ \ {\rm on} \ \ \ \tilde{\cal R}_{2}.
\end{equation}
Using (\ref{CatX}) and (\ref{Cab}), we see that if $a\in \left(p,\frac{2}{\lambda}-p\right)$ is close enough to $p,$ then
$k_{2}(0,p)-k_{2}(0,a)<-\frac{\beta\sigma}{4}$ and therefore $f$ cannot be continuous at $P$ or at any point of $T=\{(0,p\omega)\in {\rm I\hspace{-0.2em}R}^{n} : \omega\in S^{n-2} \}$.
Notice that $f\in C^{0}\left(\overline{\Omega}\setminus T\right)$ (e.g. \cite{Lin}).
\begin{figure}
\centering
\includegraphics{Test1}
\caption{An illustration of ${\cal R}_{2}$ (blue region) and $A$ (green and blue regions)}
\label{FigureFourteen}
\end{figure}
\subsection{One singular point}
\label{OSP}
In this section, we will obtain a domain $\Omega$ and $\phi\in C^{\infty}({\rm I\hspace{-0.2em}R}^{n})$ such that $P\in\partial\Omega,$ the minimizer $f$ of
(\ref{The_Functional}) is discontinuous at $P,$ $\partial\Omega\setminus \{P\}$ is smooth ($C^{\infty}$)
and $f\in C^{0}\left(\overline{\Omega}\setminus \{P\}\right).$
This is accomplished by replacing ${\cal M}$ by a convex set ${\cal G}$ such that $\partial{\cal G}\setminus \{P\}$ is smooth ($C^{\infty}$) and
${\cal G}\subset B_{n}\left({\cal O}_{n},p\right).$ We shall use the notation of \S \ref{CDO} throughout this section.
We assume $p\in \left(0,\frac{1}{\lambda}\right)$ and set $P=(0,p,0,\dots,0).$ (We will no longer require Figure \ref{FigureEight} (b) to hold.)
Let $\alpha>1,$ $n\ge 3,$ and $Y:\left[-\frac{\pi}{2\alpha},\frac{\pi}{2\alpha}\right] \times [0,\pi] \times S^{n-3}\to {\rm I\hspace{-0.2em}R}^{n}$ be defined by
\[
Y(\theta,\phi,\omega) = 2\cos(\alpha\theta)\sin(\phi)\left(\cos(\theta)\sin(\phi),\sin(\theta)\sin(\phi),\cos(\phi)\omega\right).
\]
Let $F:{\rm I\hspace{-0.2em}R}^{n}\to {\rm I\hspace{-0.2em}R}^{n}$ be given by $F\left(x_{1},\dots,x_{n}\right)=\left(px_{2},p\left(1-x_{1}\right),px_{3},\dots,px_{n}\right)$
and define $X(\theta,\phi,\omega) = F\left(Y(\theta,\phi,\omega)\right)$ for $-\frac{\pi}{2\alpha}\le\theta\le \frac{\pi}{2\alpha}, \ 0\le\phi\le\pi, \ \omega\in S^{n-3}$
(see Figures \ref{FigureTwelve} and \ref{FigureThirteen} with $n=3,$ $\alpha=2;$ the axes are labeled $x,y,z$ for $x_{1},x_{2},x_{3}$ respectively).
Let ${\cal G}$ be the open, convex set whose boundary is the image of $X;$ that is,
\[
\partial{\cal G}= \{X(\theta,\phi,\omega) : -\frac{\pi}{2\alpha}\le\theta\le \frac{\pi}{2\alpha}, \ 0\le\phi\le\pi, \
\omega\in S^{n-3} \}.
\]
Notice that $\partial{\cal G}\setminus \{P\}$ is a $C^{\infty}$ hypersurface in ${\rm I\hspace{-0.2em}R}^{n}$ and $\partial {\cal G}\subset {\overline{B_{n}\left({\cal O}_{n},p\right)}}.$
\begin{figure}
\centering
\includegraphics[width=4cm]{teardrop_A}
\includegraphics[width=4cm]{tearsectionC}
\caption{$X\left(\theta,\frac{\pi}{2},1\right),\ \ X\left(\theta,\frac{1}{2}\arccos\left(1-\sec(\theta)\sec(2\theta)\right),1\right)$ }
\label{FigureTwelve}
\end{figure}
Let $\tau$ satisfy
\[
0<\tau < \min \left\{\frac{pr_{1}}{\sqrt{r_{2}^{2}-r_{1}^{2}}}, \frac{b(4p-b)}{4(2p-b)} \right\}.
\]
Set $\sigma=-\arctan(\tau/p)$ and $\alpha=\frac{\pi}{\pi+2\sigma}.$
Then the tangent cones to $\partial {\cal G}$ and $\partial {\cal M}$ at $P$ are identical, $\cos(\sigma)>\frac{r_{1}}{r_{2}}$ and (\ref{Cat2}) holds
for $u=(u_{1},u_{2})\in \partial B_{2}({\cal O}_{2},p)$ with $|u-w|<\delta_{1}.$
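To see this, note that in the equatorial section $\phi=\frac{\pi}{2}$ we have
\[
Y\left(\theta,\tfrac{\pi}{2},\omega\right)=2\cos(\alpha\theta)\left(\cos(\theta),\sin(\theta),0,\dots,0\right),
\]
so this section of the image of $Y$ is the polar curve $r=2\cos(\alpha\theta),$ which reaches the origin along the rays $\theta=\pm\frac{\pi}{2\alpha};$ the region it bounds therefore has a tangent cone at the origin of opening angle $\frac{\pi}{\alpha}=\pi+2\sigma=\pi-2\arctan(\tau/p).$ Since $F$ is a similarity of ${\rm I\hspace{-0.2em}R}^{n}$ which carries the origin to $P,$ the same opening angle is obtained for $\partial {\cal G}$ at $P.$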
By making $\tau>0$ smaller if necessary, we may assume $B_{n}\left({\cal O}_{n},\frac{a+p}{2}-b \right)\subset {\cal G}$ if $p<a<p+b.$
Now pick $a\in \left(p,\min\{p+b,1/\lambda\}\right)$ such that $k_{2}(0,p)-k_{2}(0,a)<-\frac{\beta\sigma}{4},$ as in (\ref{Cab}), and define
\begin{equation}
\label{BBBB}
\Omega=B_{n}\left({\cal O}_{n},a\right)\setminus \overline{\cal G}.
\end{equation}
Let
\[
m>\max\{g_{1}(0,r_{1}), \frac{1}{2m_{0}}, \frac{\beta(2\sigma^{2}-\pi)}{8\sigma}\}
\]
and define $\phi\in C^{\infty}\left({\rm I\hspace{-0.2em}R}^{n}\right)$ so that $\phi=0$ on $\partial B_{n}\left({\cal O}_{n},a\right)$ and
$\phi=m$ on $\partial {\cal G}$ and let $f$ be the variational solution of (\ref{ONE-A})-(\ref{ONE-B}).
Notice that $f\in C^{2}(\Omega)$ satisfies (\ref{ONE-A}) and $f\in C^{0}\left(\overline{\Omega}\setminus \{P\}\right)$ (e.g. \cite{Lin}).
As in (\ref{Bingo}), let $B=B_{2}\left(\frac{p-r_{2}}{p}u,r_{2}\right).$
Set $U_{0}=\{x\in\Omega : x\in \tilde B, x_{1}>0\}$ and
$U=\{r\left(\cos(\theta),\sin(\theta)\omega\right)\in \Omega: r\in (0,p), \theta\in (0,\theta_{u}),\omega\in S^{n-2} \}.$
Now $\tilde g^{[u]}=0$ on $\partial U_{0}\cap \partial \tilde B$ and $\tilde g^{[u]}\le g_{1}(0,r_{1})<m$ on
$\partial U_{0}\cap \partial {\cal G}$ and so Lemma \ref{Four}, Lemma \ref{Mango} and (\ref{ONE-A}) imply $\tilde g^{[u]} \le f$ in $U_{0}$
since $f$ minimizes the functional in (\ref{The_Functional}).
As before, set $\partial_{1}U=\{ \left(p\cos(\theta),p\sin(\theta)\omega\right) : \theta\in [0,\theta_{u}],\omega\in S^{n-2} \},$
$\partial_{2}U=\partial {\cal G} \cap \partial U$ and
$\partial_{3}U= \{ \left(r\cos(\theta_{u}),r\sin(\theta_{u})\omega\right)\in \overline{\Omega} : r\in [0,p],\omega\in S^{n-2} \}.$
Then $f\ge 0$ on $\partial_{1}U\setminus\{P\},$ $\partial U = \partial_{1}U \cup \partial_{2}U \cup \partial_{3}U,$
$\tilde h_{2}\le 0\le f$ on $\partial_{1}U,$ $\tilde h_{2}< m=\phi$ on $\partial_{2}U$ and $\tilde h_{2}<\tilde g^{[u]}<f$
on $\partial_{3}U;$ Lemma \ref{Four} implies that (\ref{Cab}) continues to hold.
Then (\ref{CatX}) and (\ref{Cab}) imply $f$ is discontinuous at $P$ since $k_{2}(0,p)-k_{2}(0,a)<-\frac{\beta\sigma}{4}.$
\begin{figure}
\centering
\includegraphics[width=5cm]{teardropEXP_2}
\includegraphics[width=5cm]{tearsectionB}
\caption{(a) $\Pi_{1,2}\left(\Omega\right)$ \hspace{2cm} (b) $\Pi_{1,3}\left(\Omega\right)$ }
\label{FigureThirteen}
\end{figure}
\section{The Concus-Finn conjecture}
\label{CFcondition}
For the moment, assume $n=2.$
In approximately 1970, Paul Concus and Robert Finn conjectured that if $\kappa\ge 0,$ $\Omega\subset {\rm I\hspace{-0.2em}R}^{2}$ has a corner at $P\in\partial\Omega$
of (angular) size $2\alpha,$ $\alpha\in\left(0,\frac{\pi}{2}\right),$ $\gamma:\partial\Omega\setminus\{P\}\to [0,\pi]$ and
$|\frac{\pi}{2}-\gamma_{0}|>\alpha,$ where
\begin{equation}
\label{FROZEN}
\lim_{\partial\Omega\ni x\to P} \gamma(x)=\gamma_{0},
\end{equation}
then a function $f\in C^{2}(\Omega)\cap C^{1}\left(\overline{\Omega}\setminus \{P\}\right)$ which satisfies
\begin{eqnarray}
\label{TEN-A}
{\rm div}\left( Tf\right) & = & \kappa f \ \ \ \ \ {\rm in} \ \ \Omega, \\
Tf\cdot\eta & = & \cos(\gamma) \ \ \ \ \ {\rm on} \ \ \partial\Omega\setminus \{P\},
\label{Fletch}
\end{eqnarray}
must be discontinuous at $P;$ here $\eta(x)$ is the exterior unit normal to $\Omega$ at $x\in\partial\Omega\setminus \{P\}.$
A generalization (including the replacement of (\ref{TEN-A}) by (\ref{ONE-A})) of this conjecture in the case $\gamma_{0}\in \left(0,\pi\right)$
was proven in \cite{CFC}.
In the situation above with $\alpha\in\left(\frac{\pi}{2},\pi\right),$ the ``nonconvex Concus-Finn conjecture'' states that if
$|\frac{\pi}{2}-\gamma_{0}|>\pi-\alpha,$ then the capillary surface $f$ with contact angle $\gamma$ must be discontinuous at $P.$
A generalization (including the replacement of (\ref{TEN-A}) by (\ref{ONE-A})) of this extension of the Concus-Finn conjecture
in the case $\gamma_{0}\in \left(0,\pi\right)$ was proven in \cite{NCFC}.
Both \cite{CFC} and \cite{NCFC} include the possibility of differing limiting contact angles; that is, the following limits
\[
\lim_{\partial^{+}\Omega\ni x\to P} \gamma(x)=\gamma_{1} \ \ \ \ \ {\rm and} \ \ \ \ \ \lim_{\partial^{-}\Omega\ni x\to P} \gamma(x)=\gamma_{2}
\]
exist, $\gamma_{1}, \gamma_{2}\in (0,\pi)$ and $\gamma_{1}\neq \gamma_{2}.$
Here $\partial^{+}\Omega$ and $\partial^{-}\Omega$ are the two components of $\partial\Omega\setminus \{P,Q\},$
where $Q\in \partial\Omega\setminus \{P\}.$ When $\gamma_{1}\neq \gamma_{2},$ the necessary and sufficient (when $\alpha\le \frac{\pi}{2}$) or necessary
(when $\alpha>\frac{\pi}{2}$) conditions for the continuity of $f$ at $P$ become slightly more complicated.
The cases where $\gamma_{0}=0,$ $\gamma_{0}=\pi,$ $\min\{\gamma_{1},\gamma_{2}\}=0$ and $\max\{\gamma_{1},\gamma_{2}\}=\pi$ remain unresolved.
If we suppose for a moment that the nonconvex Concus-Finn conjecture with limiting contact angles of zero or $\pi$ is proven, then the discontinuity
of $f$ at $P$ in \S \ref{BLUE} follows immediately from the fact that $f<\phi$ in a neighborhood in $\partial\Omega\setminus \{P\}$ of $P$
since then Lemma \ref{Three} implies $\gamma_{0}=0$ and therefore $|\frac{\pi}{2}-\gamma_{0}|>\pi-\alpha.$
In this situation (i.e. the solution $f$ of a Dirichlet problem satisfies a zero (or $\pi$) contact angle boundary condition near $P$),
establishing the discontinuity of $f$ at $P$ would be much easier and a much larger class of domains $\Omega$ with a nonconvex corner
(i.e. $\alpha>\frac{\pi}{2}$) at $P$ would have this property.
For example, if $\Omega$ is a bounded locally Lipschitz domain in ${\rm I\hspace{-0.2em}R}^{2}$ for which (\ref{PIZZA}) holds, $f\in C^{2}(\Omega)$ is a generalized solution of
(\ref{ONE-A})-(\ref{ONE-B}) (and $H$ need not vanish) and $\phi$ is large enough near $P$ (depending on $H$ and the maximum of $\phi$ outside
some neighborhood of $P$) that $f<\phi$ on $\partial\Omega\setminus \{P\}$ near $P,$
then the fact that $\gamma_{0}=0$ (Lemma \ref{Three}) together with the nonconvex Concus-Finn conjecture would imply that $f$ is discontinuous at $P.$
Now consider $n\in{\rm I\hspace{-0.2em}N}$ with $n\ge 3.$
Formulating generalizations of the Concus-Finn conjecture in the ``convex corner case'' (i.e.
$\Omega\cap B_{n}(P,r)\subset \{X\in{\rm I\hspace{-0.2em}R}^{n}: (X-P)\cdot \mu>0\}$ for some $\mu\in S^{n-1},$ $P\in\partial\Omega$ and $r>0$) and in other
cases where $\partial\Omega$ is not smooth at a point $P\in\partial\Omega$ may be complicated because the geometry of
$\partial\Omega\setminus \{P\}$ is much more interesting when $n>2.$
Establishing the validity of a generalization of the Concus-Finn conjecture for solutions of (\ref{ONE-A}) \& (\ref{Fletch})
when $n>2$ is probably significantly harder than doing so when $n=2.$
Suppose we knew that a solution $f$ of (\ref{ONE-A}) \& (\ref{Fletch}) is necessarily discontinuous at a ``nonconvex corner'' $P\in\partial\Omega$
when $\gamma_{0}=0,$ where $\gamma_{0}$ is given by (\ref{FROZEN}).
In this case, a necessary condition for the continuity of $f$ at $P$ would be that $\limsup_{\partial\Omega\ni X\to P} \arccos\left(Tf\left(X\right)\cdot \eta(X)\right) >0$
and $\liminf_{\partial\Omega\ni X\to P} \arccos\left(Tf\left(X\right)\cdot \eta(X)\right)<\pi.$
Then the arguments in \S \ref{Happy} could be made more easily and the conclusion
that $f$ is discontinuous at $P$ would hold in a much larger class of domains $\Omega;$ here, of course, we use the ridge point $P$ in \S \ref{Happy}
as an example of a ``nonconvex corner'' of a domain in ${\rm I\hspace{-0.2em}R}^{n}.$
The primary difficulty in proving in \S \ref{Happy} that $f$ is discontinuous at $P$ is establishing (\ref{Cab}); a more ``natural'' generalization
of $\Omega\subset{\rm I\hspace{-0.2em}R}^{2}$ in \S \ref{BLUE} would be
\[
\Omega^{*} = \{(x,y\omega)\in {\rm I\hspace{-0.2em}R}^{n} : (x,y)\in B_{2}\left({\cal O}_{2},a\right)\setminus \overline{M}, \
\omega\in S^{n-2}\}.
\]
However, the use of Lemma \ref{Mango} to help establish (\ref{Cab}) in $\Omega^{*}$ is highly problematic.
On the other hand, an $n$-dimensional ``Concus-Finn theorem'' for a nonconvex conical point (e.g. $P\in\partial\Omega^{*}$) would only require an
inequality like (\ref{Taxi}) to prove that $f<\phi$ on $\partial\Omega\setminus\{P\}$ near $P$ and hence that $f$ is discontinuous at $P;$
the replacement of (\ref{AAAA}) by (\ref{BBBB}) in order to obtain a $\Omega$ such that $\partial\Omega\setminus\{P\}$ is $C^{\infty}$ would be
unnecessary.
\section{Introduction}
Dexterous robotic manipulation is essential in many industrial and domestic settings. Traditional robotic manipulation controllers often rely on solving inverse kinematic equations \cite{manipulation-review}. The goal of this approach is to find the robotic joint angle time course to move the end-effector of a robotic system (arm, gripper, fingers, etc.) to a desired pose \cite{inverse-kin}. Because the solution to this problem is not unique, motion primitives (i.e., a set of pre-computed movements that a robot can take in a given environment) are typically used \cite{motion-primitives,motion-primitives2}. These primitives can each have a defined cost, allowing the robot to avoid non-smooth or other undesirable transitions. However, these techniques have poor generalization ability and require complex control system structures. More precisely, they require significant bespoke tailoring for each novel manipulation task, leading to high implementation times and costs. Moreover, current state-of-the-art traditional robotic control strategies generally struggle in unstructured tasks, which require high degrees of dexterity.
Reinforcement learning (RL) is a data-driven learning approach in which models are trained by rewarding or punishing an agent acting in a particular environment, and it shows promise \cite{rubiks} as a replacement for traditional robotic control approaches. The goal of this learning paradigm is to maximize the cumulative sum of scalar or vector reward signals received by the agent in response to the actions that it takes \cite{rlbook}, so that it learns how to interact appropriately with the environment; that is, it learns how to act in order to maximize the rewards it can receive. However, traditional RL is unable to solve tasks with continuous action and state spaces, due to their high dimensionality. In other words, there are infinitely many combinations of states and actions in a continuous space, and therefore traditional RL algorithms cannot learn the required mappings between state, action, and reward for these problems. Hence, the field of deep reinforcement learning (DRL) \cite{dqn} has emerged: a discipline combining deep learning (DL) \cite{dl} and RL, which inherits the capacity to deal with high-dimensional continuous data from DL and its decision-making ability from RL. DRL methods have obtained outstanding results in robotic manipulation \cite{drl-review-1,drl-review-2}.
However, the data inefficiency of DRL is a major barrier to its application in real-world robotics: real robot data collection is time-consuming and expensive. Much DRL research to date has focused on improving these data-efficiency issues. Due to their generally improved sample complexity, off-policy DRL methods \cite{ddpg,sac} are often preferred to on-policy methods \cite{ppo,trpo}. Model-based DRL methods, which explicitly learn a model of the environment, have been used to further improve sample efficiency \cite{pilco,mbpo,dream}, and have seen success in real robot settings \cite{boadong}. Moreover, offline DRL techniques seek to leverage previously collected data to accelerate learning \cite{offline}, and are able to learn dexterous real-world skills such as opening a drawer \cite{awac}. Imitation learning methods provide the policy with expert demonstrations to learn from \cite{imitation,coarse}, enabling success in real robot tasks, such as peg insertion \cite{leverage imitation}. Finally, simulation-to-real (sim-to-real) transfer methods train a policy quickly and cheaply in simulation before deploying it on the real robot, where learning is significantly slower, and have notably been used to solve a Rubik's cube with a robot hand \cite{rubiks}. To account for simulator modelling errors, and to improve the policy's ability to generalize to the real robot, sim-to-real approaches often employ domain randomization \cite{sim2real,domain-randomization} or domain adaptation \cite{meta,adapt} techniques. Domain randomization, which has been particularly effective \cite{rubiks}, randomizes the physics parameters in simulation to learn a robust policy that can adapt to the partially unknown physics of the real system.
The costly nature of real-robot experimentation has limited research related to robotic dexterous manipulation. In light of these issues, the Real Robot Challenge (RRC) \cite{rrc} aims to advance the state-of-the-art in robotic manipulation by providing participants with remote access to a TriFinger robotic platform (see Figure \ref{sim-real}(b)) \cite{trifinger}, allowing for free and easy real-robot experimentation. To further support ease of experimentation, users are also provided with a simulation of this robotic system (see Figure \ref{sim-real}(a)). Full details can be found in the `Protocol' section of the RRC website\footnote{\url{https://real-robot-challenge.com}}.
This paper aims to extend the task of Phase 1 of the RRC 2021 to the more difficult task of moving a cube along a 3D positional trajectory while also maintaining a desired orientation. The task of Phase 1 of the RRC 2021 consisted of moving a cube along a defined trajectory; here we increase the difficulty of this task by requiring the cube orientation to match a target orientation throughout the trajectory. We formulated both tasks as pure RL problems. The movement-related skills of the robot are entirely learned in simulation, hence reducing human involvement compared to traditional real-world learning methods. We chose to use the Deep Deterministic Policy Gradient (DDPG) algorithm, as it has been shown to have excellent performance in handling robotic manipulation tasks with continuous action and observation spaces \cite{ddpg}. We used a reward function composed of goal-based sparse rewards in conjunction with Hindsight Experience Replay (HER) \cite{her} to teach the control policy to move the cube to the desired \textit{xy} coordinates (in the first task, controlling cube position) and then also the desired orientation (in the extended task). Simultaneously, a dense distance-based reward is employed to teach the policy to lift the cube to the desired $z$ coordinate. Finally, we use a novel concept that we call \textit{knowledge transfer} (KT) to facilitate learning during the task requiring both cube position and orientation to be controlled, building on the strategies previously learned during the task requiring only the cube position to be controlled. Compared with other methods used to solve a similar task \cite{gpu-paper,ppo}, our KT method has the advantage of being easier to implement, and can be extended to all actor-critic RL algorithms.
\section{Methods}
\subsection{Real Robot Challenge (RRC)}
We were provided with remote access to well-maintained TriFinger robotic platforms \cite{trifinger} (see Figure \ref{sim-real}(b)) by the Max Planck Institute for Intelligent Systems (Tübingen/Stuttgart, Germany)\footnote{\url{https://is.mpg.de/}}. To further support ease of experimentation, users are also provided with a simulated version of the robotic setup (see Figure \ref{sim-real}(a)). The 2021 RRC consists of an initial qualifying phase, performed purely in simulation, followed by independent Phases 1 and 2, both performed on the real robot. Full details can be found in the `Protocol' section of the RRC website\footnote{\url{https://real-robot-challenge.com}}.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{images/rc_sim.png}
\label{tri-sim}
\subcaption{Visualisation of the PyBullet simulation environment for the TriFinger robot used in the RRC 2021.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{images/rrc_real.jpg}
\label{tri-real}
\subcaption{The real TriFinger robot used in the RRC 2021. Remote access to this system was provided, allowing competitors from around the world to test the policies learned in simulation on the real robot. Competition results are generated from performance on this real-world system.}
\end{subfigure}
\caption{The TriFinger robot in: (a) the PyBullet \cite{pybullet} simulation, and; (b) the real world. Three identical robotic fingers are uniformly placed around the circular arena. The colored cube is the target object that must be moved.}
\label{sim-real}
\end{figure}
\subsubsection{Phase 1 task: Following a 3D position trajectory}
Competition participants are tasked with solving the \textit{`Move Cube on Trajectory'} task. In this task, a cube must be carried along a goal trajectory, which specifies the Cartesian coordinates at which the cube should be positioned at each time-step\footnote{Agents interact with the environment in (discrete) time-steps, which are incremented after the agent takes an action}. The goals are sampled discretely, and the time interval between two successive goals in the training phase can be defined manually. The robot should react as soon as a new goal is received in order to obtain a higher score. For the final evaluation, multiple previously-unseen goal trajectories must be followed. Each evaluation trajectory consists of 120,000 time-steps, with 10 different coordinates that must be visited by the cube. The structure of the trajectory is such that the next goal (of 10 in total) is introduced at each of the time-steps $t \in \left\{0,30000,40000,50000,...,110000 \right\}$. The obtained score, $s_{pos}$, is defined by a distance-based weighted criterion:
\begin{equation}
\label{eq:position critirion}
s_{pos} = -\dfrac{1}{2} \left(
\dfrac{\left\|g'_{xy} - g_{xy} \right\|}
{2d_r} +
\dfrac{\left\|g'_{z} - g_{z}\right\|}{d_h} \right)
\end{equation}
\noindent where ${g'}_{xy}$ and ${g'}_{z}$ are the actual $xy$ and $z$ coordinates of the cube, respectively, and $g_{xy}$ and ${g}_{z}$ are the desired $xy$ and $z$ coordinates of the cube. $d_r$ and $d_h$ are constants representing the radius of the arena floor and the maximum height fingertips can reach. A higher score, averaged across all tested trajectories, implies a better manipulation performance.
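For concreteness, the following minimal sketch (in Python, using NumPy) evaluates this criterion at a single time-step; the numerical defaults for $d_r$ and $d_h$ are placeholders, as the true constants are fixed by the challenge environment.
\begin{verbatim}
import numpy as np

def position_score(cube_pos, goal_pos, d_r=0.195, d_h=0.1):
    # Weighted position error: the xy error is scaled by the arena
    # diameter (2 * d_r), the z error by the reachable height d_h.
    # The values of d_r and d_h above are placeholders.
    e_xy = np.linalg.norm(np.asarray(cube_pos)[:2] - np.asarray(goal_pos)[:2])
    e_z = abs(cube_pos[2] - goal_pos[2])
    return -0.5 * (e_xy / (2.0 * d_r) + e_z / d_h)
\end{verbatim}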
Unlike end-to-end pick-and-place manipulation, the goals are continuously changing in this task. Various action strategies, such as pushing, cradling, or pinching, must be dynamically employed to move the cube in response to the changing positional goals.
\subsubsection{Extended task with increased difficulty: Position and orientation trajectory}
We extended the task described above by introducing an orientation goal trajectory in addition to the positional one; this was inspired by the most challenging task of the RRC 2020\footnote{\url{https://real-robot-challenge.com/2020}}, in which the cube must not only be moved to the desired position but also aligned with the desired orientation. The 2020 task is a single end-to-end movement rather than a trajectory. Hence, we integrated position, orientation, and trajectory to form a more challenging task that we named \textit{Move Cube on Trajectory Pro}. To evaluate performance during this task we used the criterion of the RRC 2020 (cf. Eq. (\ref{eq:position orientation critirion}))
\begin{equation}
\label{eq:position orientation critirion}
s_{pos+ori} = -{\left\| \left(R\left(g'_{o}\right)\right)^{-1} R\left(g_{o}\right) \right\|} + s_{pos}
\end{equation}
\noindent where ${g'}_{o}$ and ${g}_{o}$ are the achieved and the desired orientations, respectively, described using quaternions, \textit{R} is the rotation matrix derived from the quaternion representation, $^{-1}$ is the matrix inverse operator, and $\|\cdot\|$ applied to a rotation denotes the magnitude (in radians) of its rotation angle. Note, $s_{pos}$ from Eq. (\ref{eq:position critirion}) is incorporated in the calculation of $s_{pos+ori}$ to evaluate the cube's position.
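The orientation term can be computed as in the following sketch, which reuses \texttt{position\_score} from the previous sketch; it assumes, consistent with the $22^{\circ}$ ($0.384$ rad) threshold used later, that the error is the rotation angle of the relative rotation, and delegates quaternion handling to SciPy.
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def orientation_error(q_achieved, q_desired):
    # Relative rotation (R(g'_o))^{-1} R(g_o); the norm of its rotation
    # vector is the angle, in radians, separating the two orientations.
    # Quaternions are given in SciPy's (x, y, z, w) order.
    rel = Rotation.from_quat(q_achieved).inv() * Rotation.from_quat(q_desired)
    return np.linalg.norm(rel.as_rotvec())

def pose_score(cube_pos, goal_pos, q_achieved, q_desired):
    # Orientation error plus the positional score s_pos.
    return -orientation_error(q_achieved, q_desired) \
           + position_score(cube_pos, goal_pos)
\end{verbatim}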
\subsection{Simulated environment}
\subsubsection{Actions and observations}
Pure torque control of the robot arms is employed with an update frequency of 20 Hz (i.e., each time-step in the environment is 0.05 seconds). The robot has three arms, with three motorized joints in each arm; thus, the action space is 9-dimensional (and continuous). Observations include: (i) robot joint positions, velocities, and torques; (ii) the cube's current pose (i.e., its position and orientation), which in simulation is read directly from the simulator with no measurement error, and in the real-world arena is estimated using the provided computer vision object detection and segmentation methods, along with the difference between the poses at the current and previous time-steps; and (iii) the current goal pose. In total, the observation space has 44 dimensions for the \textit{Move Cube on Trajectory} task, and 48 for \textit{Move Cube on Trajectory Pro}.
In the \textit{Move Cube on Trajectory} task, there are three target variables (the three Cartesian position coordinates). In the \textit{Move Cube on Trajectory Pro} task, there are seven target variables: the three Cartesian coordinates representing 3D position and an additional four quaternion variables representing 3D orientation.
\subsubsection{Episodes}
In each simulated training and testing episode, the robot begins in its default position. The simulator instantiates the cube and the arena environment, with the cube's initial position randomly sampled from the arena floor. Each episode lasts 90 time-steps, and the goal trajectory contains three desired goals, which are randomly sampled from the arena 3D space. The intervals between goals are the same; i.e., the goal changes every 30 steps.
\subsubsection{Domain randomization}
To help the learned policy generalize from a potentially inaccurate simulation to the real environment, we used some basic domain randomization (DR) techniques\footnote{Our domain randomization implementation is based on the benchmark code from the 2020 RRC \cite{dr-code}.} (i.e., physics randomization). This includes uniformly sampling, from a specified range, parameters of the simulation physics (e.g., robot mass, restitution, damping, friction; see our code for more details\footnote{\url{https://github.com/RobertMcCarthy97/rrc_phase1}}) and cube properties (mass and width) for each episode. To account for noisy real robot actuation and observations, uncorrelated noise is added to actions and observations during simulated episodes.
DR is only applied in the \textit{Move Cube on Trajectory} task, since it requires sim-to-real transfer. In our final approach, we train the agent initially in the simulated non-DR environment for 300 epochs to allow it to learn the optimal policy quickly and easily. Afterward, the agent is tuned in the simulated DR environment for 100 epochs to enhance its robustness, before deploying to the real TriFinger robot.
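A minimal sketch of the per-episode sampling is given below; the parameter names and ranges are illustrative placeholders (the exact ranges are available in our published code).
\begin{verbatim}
import numpy as np

def sample_physics_params(rng):
    # One set of simulator parameters drawn uniformly per episode.
    return {
        "robot_mass_scale":    rng.uniform(0.8, 1.2),
        "lateral_friction":    rng.uniform(0.5, 1.5),
        "restitution":         rng.uniform(0.0, 0.4),
        "joint_damping_scale": rng.uniform(0.8, 1.2),
        "cube_mass":           rng.uniform(0.07, 0.12),  # kg
        "cube_width":          rng.uniform(0.06, 0.07),  # m
    }

def add_noise(x, rng, scale=0.01):
    # Uncorrelated noise applied to both actions and observations.
    return np.asarray(x) + scale * rng.normal(size=np.shape(x))
\end{verbatim}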
\subsection{Learning algorithm}
The goal-based nature of the \textit{Move Cube on Trajectory} and \textit{Move Cube on Trajectory Pro} tasks makes HER a good choice of learning algorithm; HER has excelled in similar goal-based robotic tasks \cite{her} and obviates the need for complex reward engineering. As such, we combine DDPG and HER to make our RL algorithm\footnote{Our DDPG + HER implementation is taken from \url{https://github.com/TianhongDai/hindsight-experience-replay}, and uses hyperparameters largely based on \cite{fetch results}.}. However, in our early experiments we observed that the standard DDPG plus HER learning algorithm was slow in learning to lift the cube. To resolve this issue, we slightly altered the HER process and incorporated an additional dense reward which encourages cube-lifting behaviors. We describe this amendment below.
\subsubsection{Rewards and HER}
In our approach, the reward function that guides the RL agent's learning consists of three components: (i) a sparse reward based on the cube's $xy$ coordinates, termed $r_{xy}$; (ii) a dense reward based on the cube's $z$ coordinate, termed $r_{z}$ (the coordinate frame can be seen in Figure \ref{sim-real}(a)); and (iii) a sparse reward based on the cube's 3D orientation (used in the \textit{Move Cube on Trajectory Pro} task only).
The sparse $xy$ reward, $r_{xy}$, is calculated as:
\begin{equation}
\label{eq:xyonly}
r_{xy} = \begin{cases}
0 & \textrm{if} \quad \left\| {g'}_{xy} - g_{xy} \right\|\leq 2 \textrm{ cm} \\
-1 & \textrm{otherwise}
\end{cases}
\end{equation}
\noindent where ${g'}_{xy}$ are the $xy$ coordinates of the \textit{achieved} goal (the actual $xy$ coordinates of the cube), and $g_{xy}$ are the $xy$ coordinates of the \textit{desired} goal.
The dense $z$ coordinate reward, $r_{z}$, is defined as:
\begin{equation}
\label{eq:z}
r_{z} = \begin{cases}
- a \left\| z_{cube} - z_{goal}\right\| & \textrm{if} \quad z_{cube} < z_{goal} \\
\\
\dfrac{-a}{2}\left\| z_{cube} - z_{goal}\right\| & \textrm{if } \quad z_{cube} > z_{goal}
\end{cases}
\end{equation}
\noindent where $z_{cube}$ and $z_{goal}$ are the $z$ coordinates of the cube and goal, respectively, and $a$ is a parameter which weights $r_{z}$ relative to $r_{xy}$; we use $a=20$.
The sparse reward for the orientation is defined as:
\begin{equation}
\label{eq:orientation}
r_{ori} = \begin{cases}
0 & \textrm{if} \quad \left\| \left(R\left(g'_{o}\right)\right)^{-1}
R\left(g_{o}\right) \right\|\leq 0.384 \textrm{ rad (i.e., $22^{\circ}$)} \\
-1 & \textrm{otherwise}
\end{cases}
\end{equation}
\noindent where ${g'}_{o}$ and ${g}_{o}$ are the \textit{achieved} and \textit{desired} orientations, represented as quaternions. \textit{R} indicates the rotation matrix calculation and $^{-1}$ is the matrix inverse operator. We set the position ($2$ cm) and orientation ($22^{\circ}$) thresholds as per Andrychowicz \textit{et al.} \cite{openai}.
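The three components can be assembled as in the following sketch, which reuses the \texttt{orientation\_error} routine from the earlier sketch; the thresholds and the weight $a=20$ are as stated above.
\begin{verbatim}
import numpy as np

def r_xy(achieved, desired, tol=0.02):
    # Sparse planar reward: zero within 2 cm of the goal, -1 otherwise.
    return 0.0 if np.linalg.norm(achieved[:2] - desired[:2]) <= tol else -1.0

def r_z(z_cube, z_goal, a=20.0):
    # Dense vertical reward; half as punishing above the goal height,
    # to further encourage lifting.
    dist = abs(z_cube - z_goal)
    return -a * dist if z_cube < z_goal else -0.5 * a * dist

def r_ori(q_achieved, q_desired, tol=np.deg2rad(22.0)):
    # Sparse orientation reward, using the rotation angle in radians.
    return 0.0 if orientation_error(q_achieved, q_desired) <= tol else -1.0
\end{verbatim}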
Another pure distance-based reward is also used as a comparator for our position-based reward scheme. This reward is popular in the field because of its clear geometric meaning and its computational efficiency:
\begin{equation}
\label{eq:pure-dis}
r_{dis} = -{\left\| g'_{xyz} - g_{xyz}\right\|}
\end{equation}
\noindent where ${g'}_{xyz}$ and ${g}_{xyz}$ are the \textit{achieved} $xyz$ coordinates and the \textit{desired} $xyz$ coordinates of the cube.
For the position-only \textit{Move Cube on Trajectory} task, we only apply HER to the $xy$ coordinates of the goal; i.e., the $xy$ coordinates of the goal can be altered in hindsight, but the $z$ coordinate remains unchanged. Thus, our HER altered goals are: $\hat{g} = (g'_{xy},g_z)$, meaning only $r_{xy}$ is recalculated after HER is applied to a transition sampled during policy updates. This reward system is motivated by the following:
\begin{enumerate}
\item Using $r_{xy}$ with HER allows the agent to learn to push the cube around in the early stages of training, even if it cannot yet lift the cube to reach the $z$ coordinate of the goal. As the agent learns to push the cube around in the $xy$ plane of the arena floor, it can then more easily stumble upon actions which lift it. Importantly, the approach of using $r_{xy}$ with HER requires no bespoke reward engineering.
\item $r_{z}$ aims to explicitly teach the agent to lift the cube by encouraging minimization of the vertical distance between the cube and the goal. It is less punishing when the cube is above the goal, serving to further encourage lifting behaviors.
\item In the early stages of training, the cube mostly remains on the floor. During these stages, most $g'$ sampled by HER will be on the floor. Thus, applying HER to $r_{z}$ could often lead to the agent being punished for briefly lifting the cube. Since we only apply HER to the $xy$ coordinates of the goal, our HER altered goals, $\hat{g}$, maintain their original $z$ height. This leaves more room for the agent to be rewarded by $r_{z}$ for any cube lifting it performs.
\end{enumerate}
In the \textit{Move Cube on Trajectory Pro} task, the reward of the positional components is the same as above. HER is also applied to the orientation, and thus the HER-altered goals are $\hat{g} = (g'_{xy},g'_{ori},g_z)$. Finally, all components are summed to form the final reward, $r=r_{xy}+r_{z}+r_{ori}$.
\subsubsection{Goal trajectories}
In each episode, the agent is faced with multiple goals; it must move the cube from one goal to the next along a given trajectory. To ensure the HER process remains meaningful in these multi-goal episodes, we only sample future achieved goals, $g'$, (to replace $g$) from the period of time in which $g$ was active.
In our implementation, the agent is unaware that it is dealing with trajectories: when updating the policy with transitions $(s_{t},g_{t},a_{t},r_{t},s_{t+1},g_{t+1})$ we always set $g_{t+1} = g_{t}$, even if in reality $g_{t+1}$ was different\footnote{Interestingly, we found that exposing the agent (during updates) to transitions in which $g_{t+1} \neq g_{t}$ hurt performance significantly, perhaps due to the extra uncertainty this introduces to the DDPG action-value estimates.}. Thus, the policy focuses solely on achieving the current active goal and is unconcerned with any future changes in the active goal.
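A minimal sketch of this modified relabelling for the position-only task is given below; the transition and episode containers are hypothetical, but the logic follows the rules above: hindsight goals keep the original $z$, substitutes are drawn only from the window in which the goal was active, and the next-state goal is forced to equal the current one. The functions \texttt{r\_xy} and \texttt{r\_z} are those sketched earlier.
\begin{verbatim}
import numpy as np

def relabel(transition, episode, rng, active_window):
    # Sample a future achieved goal from the window in which this
    # transition's goal was active, and alter only its xy coordinates.
    t = dict(transition)
    _, end = active_window
    future = int(rng.integers(t["step"], end))
    g_future = episode[future]["achieved_goal"]
    g_hat = np.array([g_future[0], g_future[1], t["goal"][2]])  # z kept
    t["goal"] = g_hat
    t["next_goal"] = g_hat  # the policy never sees a goal change
    ach = np.asarray(t["next_achieved_goal"])
    # Only r_xy needs recomputing; r_z is unaffected as z is unchanged.
    t["reward"] = r_xy(ach, g_hat) + r_z(ach[2], g_hat[2])
    return t
\end{verbatim}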
\subsubsection{Exploration vs exploitation}
We derive our DDPG-HER hyperparameters from Plappert \textit{et al.} \cite{fetch results}, who use a highly exploratory policy when collecting data in the environment: with probability 30\% a random action is sampled (uniformly) from the action-space, and when policy actions are chosen, Gaussian noise is applied. This is beneficial for exploration in the early stages of training; however, it can be limiting in the later stages when the policy must be fine-tuned: we found that the exploratory policy repeatedly dropped the cube due to the randomly sampled actions and the injected action noise. To resolve this issue, rather than slowly reducing the level of exploration each epoch (which would require a degree of hyperparameter tuning), we make efficient use of evaluation episodes (which are performed by the standard exploitation policy) by adding them to the replay buffer. Thus, 90\% of rollouts added to the buffer are collected with the exploratory policy, and the remaining 10\% with the exploitation policy. This addition was sufficient to boost final success rates in simulation from 70--80\% to above 90\% (where ``success rate'' is equivalent to that seen in Figure \ref{sim train}).
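A sketch of the resulting data-collection scheme is given below; the noise scale and action bound are placeholders.
\begin{verbatim}
import numpy as np

def explore_action(policy, obs, rng, eps=0.3, sigma=0.2, a_max=1.0):
    # With probability eps, a uniformly random torque command;
    # otherwise the policy action perturbed by Gaussian noise.
    if rng.random() < eps:
        return rng.uniform(-a_max, a_max, size=9)  # 9-D action space
    action = np.asarray(policy(obs)) + sigma * rng.normal(size=9)
    return np.clip(action, -a_max, a_max)
\end{verbatim}
Evaluation rollouts simply take \texttt{policy(obs)} without noise; adding them to the replay buffer yields the 90/10 mixture described above.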
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{images/multi_sparse_t.png}
\caption{Success rate vs experience collected during simulated training (1 day in simulator, $\approx$ 1.7 million environment steps). We took the average over 3 randomly selected seeds for this illustration. The dotted lines are the trend lines and the faded lines are the raw data. An episode is deemed successful if, when complete, the final goal of the trajectory has been achieved. We compare training with: (i) Our final method where HER is applied to $r_{xy}$ but not to $r_z$ (red); (ii) HER applied to both $r_{xy}$ and $r_z$ (green); (iii) HER applied to a standard sparse reward where $xyz$ are calculated together (blue); (iv) HER applied to pure distance reward, $r_{dis}$, where $xyz$ are calculated together (orange); (v) Pure distance reward, $r_{dis}$, without HER (purple); and (vi) Standard sparse reward without HER (black).
}
\label{sim train}
\end{figure}
\subsection{Knowledge transfer}
The signal from our computationally simple reward rarely guides the agent to focus on interacting with the cube in the early training stage; hence, the agent's learning relies heavily on pure exploration. However, arbitrary exploration is unlikely to discover actions that change the cube's orientation. Unlike in the position-only \textit{Move Cube on Trajectory} task, orientation exploration in the \textit{Move Cube on Trajectory Pro} task is comparatively more difficult, requiring the cube to be both lifted and rotated simultaneously. To this end, we introduce our KT approach, which transfers a trained agent's knowledge (the \textit{teacher}) to another untrained agent (the \textit{student}), or uses the \textit{teacher's} knowledge to assist the \textit{student} with learning. In our case, we train a \textit{teacher} on the position-only \textit{Move Cube on Trajectory} task, and train \textit{students} on the \textit{Move Cube on Trajectory Pro} task. The \textit{teacher's} transferred knowledge increases the likelihood of the \textit{student} discovering useful actions that control the orientation of the cube while exploring in the early stages of training, instead of exploring all possible actions at random, as the \textit{student} has already been given the knowledge required to control the cube's position; in this way, we expect the agent to learn how to control the cube's orientation more quickly. We use two strategies to ensure the \textit{teacher's} guidance is not too biased toward position-based exploration: (1) increasing the action noise in the early stages of exploration, and (2) choosing a \textit{teacher} with weaker performance on the \textit{Move Cube on Trajectory} task (i.e., a success rate of only 80\%).
We use the following three implementations of KT, together with a no-transfer baseline; we name them for ease of reference, and a minimal sketch of the weight-transfer variants is given after this list.
\begin{enumerate}
\item \textit{\textbf{ACTOR-CRITIC}}: Initialise the \textit{student's} actor and critic\footnote{The \textit{actor} is typically a policy function, which outputs actions to interacts with the environment. The \textit{critic} uses approximate architecture and simulations to learn a value function, which is then used to update the \textit{actors'} policy parameters \cite{ac}.} network weights by loading the \textit{teacher's} trained weights.
\item \textit{\textbf{ACTOR}}: Initialise the \textit{student}'s actor network weights by loading the \textit{teacher's} trained weights and randomly initialize the \textit{student's} critic.
\item \textit{\textbf{COLLECT}}: The \textit{student} does not inherit any network weights from the \textit{teacher}. The \textit{teacher} only helps the \textit{student} to collect experience in the early stages of training. Both the \textit{student's} actor and critic networks are randomly initialized. As the \textit{teacher's} knowledge is not updated in the new task, the collected experience lacks diversity. Hence, we force the \textit{teacher's} participation to decay as training progresses, allowing the \textit{student} to engage further in exploration, which makes the experience more beneficial to the \textit{student}.
\item \textit{\textbf{SCRATCH}}: An attempt to solve the \textit{Move Cube on Trajectory Pro} task `from scratch', without using KT. This is used for comparison with the KT approaches.
\end{enumerate}
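In an actor-critic implementation the weight-transfer variants amount to only a few lines. The PyTorch-flavoured sketch below is a minimal illustration; it assumes the teacher and student networks have compatible shapes (e.g., with the additional orientation-goal inputs handled by padding), and the decay schedule shown is a placeholder.
\begin{verbatim}
def init_student(teacher_actor, teacher_critic, make_actor, make_critic,
                 mode):
    # mode is one of "ACTOR-CRITIC", "ACTOR" or "COLLECT".
    actor, critic = make_actor(), make_critic()
    if mode in ("ACTOR-CRITIC", "ACTOR"):
        actor.load_state_dict(teacher_actor.state_dict())
    if mode == "ACTOR-CRITIC":
        critic.load_state_dict(teacher_critic.state_dict())
    # COLLECT: both networks stay randomly initialized; the teacher
    # instead acts on the student's behalf with a probability that
    # decays over training, e.g.:
    return actor, critic

def teacher_participation(epoch, p0=0.5, decay=0.99):
    # Placeholder decay schedule for the COLLECT variant.
    return p0 * decay ** epoch
\end{verbatim}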
\subsection{Train, test, and evaluate}
All training and testing were performed on the Sonic high-performance cluster at University College Dublin, Ireland. Training jobs are randomly assigned to Dell machines configured with R640 2x Intel Xeon Gold 6152 (2.1 GHz, 22 cores) or C6200 V2 2x Intel E5-2660 v2 (2.2 GHz, 10 cores). In each training run, eight RL agents are assigned to eight processors and run in parallel. At the end of each training step, the neural network weights of the eight agents are synchronized by averaging. The experiences collected by agents are stored in their respective experience replay buffers and are not shared. The global success rate and reward are calculated by averaging the local values from the eight agents.
\begin{itemize}
\item \textit{Training}: For the \textit{Move Cube on Trajectory} task, each job runs for 300 epochs, overall interacting with the simulated environment for 21.6 million time-steps (2.7 million time-steps for each RL agent). The neural network of each agent is updated 15,000 times and synchronization is performed after each update. For the \textit{Move Cube on Trajectory Pro} task, the number of training epochs is increased to 500, and other training parameters increased proportionally.
\item \textit{Testing}: Testing is performed after each training epoch. Each agent runs for 900 time-steps in the simulator. Experience collected from testing is stored in the replay buffer. The neural network losses, success rates and rewards of eight agents are averaged and saved.
\item \textit{Real robot evaluation of \textit{Move Cube on Trajectory}}: For the \textit{Move Cube on Trajectory} task performed on the real TriFinger robot, we adopt the evaluation criteria set by the RRC 2021. For each evaluation episode, a random goal trajectory is sampled. Each test run lasts 120,000 time-steps, with 10 different goals (the first goal at the first time-step, then 30,000 time-steps to the second goal, and 10,000 time-steps for each of the remaining 8 goals after that). The domain gap between simulation and reality was significant, and generally led to inferior scores on the real robot. Policies often struggled to gain control of the real cube, which appeared to slip from the fingers more easily than in simulation. Additionally, on the real robot, policies could become stuck with a fingertip pressing the cube into the wall. As a makeshift solution to this issue, we assumed the policy was stuck whenever the cube had not reached the goal's $xy$ coordinates for 50 consecutive steps; uniformly sampled random actions were then taken for 7 time-steps in an attempt to free the policy from its stuck state (see the sketch after this list).
\end{itemize}
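The recovery heuristic described in the final item can be sketched as follows; the environment interface and \texttt{info} keys are hypothetical.
\begin{verbatim}
import numpy as np

def run_with_recovery(env, policy, rng, n_steps=120000,
                      patience=50, n_random=7, xy_tol=0.02):
    # If the goal's xy coordinates have not been reached for
    # `patience` consecutive steps, take `n_random` random actions to
    # free a fingertip pinning the cube against the wall.
    obs, stuck = env.reset(), 0
    for _ in range(n_steps):
        if stuck >= patience:
            for _ in range(n_random):
                obs, _, _, info = env.step(rng.uniform(-1.0, 1.0, size=9))
            stuck = 0
        obs, _, _, info = env.step(policy(obs))
        at_goal = np.linalg.norm(info["cube_xy"] - info["goal_xy"]) <= xy_tol
        stuck = 0 if at_goal else stuck + 1
\end{verbatim}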
\section{Results}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{images/push_grasp.png}
\subcaption{Pushing}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{images/cupping_grasp.png}
\subcaption{Cradling}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pinching_grasp.png}
\subcaption{Pinching}
\end{subfigure}
\caption{The various manipulation strategies learned by the agent: (a) pushing; (b) cradling; (c) pinching.}
\label{fig:three graphs}
\end{figure}
\subsection{RRC 2021 Phase 1}
\subsubsection{Learning outcomes of RL agents}
Our method is highly effective in simulation. The algorithm can learn from scratch to proficiently grasp the cube and lift it along goal trajectories. Figure \ref{sim train} compares the training performance of our final algorithm (red curve) to that of other combinations\footnote{These runs did not use domain randomization. Generally we trained from scratch in standard simulation before fine-tuning in a domain-randomized simulation}. Our algorithm appears to converge stably and more quickly than either HER applied to a standard sparse reward or HER applied to both $r_{xy}$ and $r_{z}$. Furthermore, HER plays a crucial role in the success of our algorithm. The pure DDPG agent finds it difficult to learn from sparse rewards without HER, and the convergence speed is extremely slow for an agent guided by this traditional distance-based reward.
Throughout different training runs, our policies learned several different manipulation strategies, the most distinct of which included: (i) `\textit{pushing}' the cube along the arena floor; (ii) `\textit{cradling}' the cube with all three of its forearms (not fingertips); and (iii) `\textit{pinching}' the cube with two fingertips and supporting it with the third (see Figure \ref{fig:three graphs}).
Our final policies transferred to the real robot with reasonable success. Table \ref{scores} displays the self-reported scores of our best \textit{pinching} and \textit{cradling} policies under RRC Phase 1 evaluation conditions. As a baseline comparison, we trained a simple `\textit{pushing}' policy which ignores the height component of the goal and simply learns to push the cube along the floor to the goal's $xy$ coordinates. The \textit{pinching} policy performed best on the real robot, and is capable of carrying the cube along goal trajectories for extended periods of time, and of recovering the cube when it is dropped. This policy was submitted for the official RRC Phase 1 final evaluation round and obtained the winning score \cite{rrc-win} (see {\footnotesize \url{https://real-robot-challenge.com/leaderboard}}, username `thriftysnipe').
\begin{table}[]
\caption{Self-reported (i.e., not reported by competition organizers) evaluation scores of our learned \textit{pushing}, \textit{cradling}, and \textit{pinching} policies when deployed on the simulated and real robots (mean $\pm$ standard deviation (SD) score over 10 episodes). Scores are based on the cumulative position error of the cube during an episode: $ s_{pos} = -\frac{1}{2}\sum_{t=0}^{n} \left(\frac{||\textbf{e}_{xy}^{t}||}{d_{r}} + \frac{|e_{z}^{t}|}{d_{h}}\right) $, where $\textbf{e}^{t} = \left(e_x^t,e_y^t,e_z^t\right)$ is the error between the cube and goal position at time-step $t$, $d_{r}$ the arena floor radius, and $d_h$ the range on the z-axis.}
\vspace{1.5mm}
\label{scores}
\centering
\begin{tabular}{l|rlr|rlr|rlr}
\hline
& \multicolumn{3}{c|}{Pushing} & \multicolumn{3}{c|}{Cradling} & \multicolumn{3}{c}{Pinching} \\ \hline
Simulation & -20,399 & $\pm$ & 3,799 & -6,349 & $\pm$ & 1,039 & \textbf{-6,198} & $\pm$ & 1,840 \\ \hline
Real robot & -22,137 & $\pm$ & 3,671 & -14,207 & $\pm$ & 2,160 & \textbf{-11,489} & $\pm$ & 3,790 \\ \hline
\end{tabular}
\end{table}
\subsubsection{Agents trained from different random seeds behave differently}
We verified that discrepancies exist between agents' performances when trained from different random seeds (see Table \ref{real and sim eval different seeds}) \cite{rl-matters}. In our case, the random seed is used to randomly reset the environment and randomize actions so that agents can explore in the training phase. Interestingly, the best-performing agent in the simulator (Seed 0 in Table \ref{real and sim eval different seeds}) had relatively poor performance on the real robot. This agent overfit the simulator environment, and hence the acquired policy has low generalization capability. The agent trained under Seed 200 (see Table \ref{real and sim eval different seeds}) did not obtain the optimal policy and performed the worst on the real robot.
\begin{table*}[]
\renewcommand{\arraystretch}{1.2}
\caption{Evaluated scores of RL agents trained under different random seeds; three seeds are selected here for illustration. The RRC organizing institute has a small number of real robots available for remote use, with a small but expected variation in the physical parameters of each arena and in the performance of each robot. The evaluations are performed on three of these robots, chosen at random, named Roboch1, Roboch5 and Roboch6, to increase the generalizability of the results. Each evaluation is repeated 15 times (15 goal trajectories) with 120,000 time-steps in each repeat. The results from the simulator are also shown in the Sim subcolumns.}
\label{real and sim eval different seeds}
\centering
\resizebox{\textwidth}{!}{
\large
\begin{tabular}{c|cccc|cccc|cccc}
\hline
\multicolumn{1}{l|}{} &
\multicolumn{4}{c|}{\textbf{Seed 0}} &
\multicolumn{4}{c|}{\textbf{Seed 123}} &
\multicolumn{4}{c}{\textbf{Seed 200}} \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Robot\\ (Real/Sim)\end{tabular}} &
\multicolumn{1}{c|}{Robch1} &
\multicolumn{1}{c|}{Robch5} &
\multicolumn{1}{c|}{Robch6} &
Sim &
\multicolumn{1}{c|}{Robch1} &
\multicolumn{1}{c|}{Robch5} &
\multicolumn{1}{c|}{Robch6} &
Sim &
\multicolumn{1}{c|}{Robch1} &
\multicolumn{1}{c|}{Robch5} &
\multicolumn{1}{c|}{Robch6} &
Sim \\ \hline
\textbf{Reward} &
\multicolumn{1}{r|}{-12175} &
\multicolumn{1}{r|}{-11300} &
\multicolumn{1}{r|}{-9913} &
\multirow{2}{*}{\textit{\textbf{-5447}}} &
\multicolumn{1}{r|}{-9308} &
\multicolumn{1}{r|}{-8926} &
\multicolumn{1}{r|}{-10017} &
\multirow{2}{*}{\textit{\textbf{-6376}}} &
\multicolumn{1}{r|}{-16266} &
\multicolumn{1}{r|}{-13120} &
\multicolumn{1}{r|}{-15750} &
\multirow{2}{*}{\textit{\textbf{-7277}}} \\ \cline{1-4} \cline{6-8} \cline{10-12}
\textbf{Avg} &
\multicolumn{3}{c|}{\textit{\textbf{-11129}}} &
&
\multicolumn{3}{c|}{\textit{\textbf{-9417}}} &
&
\multicolumn{3}{c|}{\textit{\textbf{-15045}}} &
\\ \hline
\end{tabular}
}
\end{table*}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{images/dr_vs_ndr_4.png}
\caption{Comparison between agents trained from scratch with domain randomization (DR) and without.}
\label{dr-vs-ndr-sr}
\end{figure}
\subsubsection{Domain randomization improves real robot performance}
The performance of agents on the real robot significantly improved after tuning using DR (see Table \ref{dr-vs-ndr}). Among these agents, the agent trained from Seed 123 achieved simulator-level performance. Nonetheless, using DR from the beginning of training is not recommended (at least for our algorithm). In a domain-randomized simulation environment, the agent is hindered in learning the optimal policy, compared to the policy learned in the standard simulation environment without domain randomization (see Figure \ref{dr-vs-ndr-sr}), and hence will achieve poor performance on the real robots (see Table \ref{dr-vs-ndr}). A demonstration video of this trained network in action can be seen at \url{https://www.youtube.com/watch?v=0Lpod542T9k}.
\begin{table}[]
\centering
\caption{Evaluation rewards of different agents evaluated in simulation and on real robots; three seeds chosen at random to illustrate typical performance. Scratch(DR) represents the agent trained from scratch using DR from the beginning of training. Scratch(NDR) is the agent trained from scratch without using DR; Scratch(NDR) + Tune(DR) is the agent firstly trained from scratch without DR and then subsequently tuned using DR in a second training phase.}
\label{dr-vs-ndr}
\begin{tabular}{cccc}
\hline
\textbf{Seed} & \textbf{Type} & \textbf{Simulation} & \textbf{Real robot} \\ \hline
\multirow{3}{*}{\textbf{0}} & Scratch(DR) & -8808 & -16685 \\
& Scratch(NDR) & -5447 & -11129 \\
& \textbf{Scratch(NDR) + Tune(DR)} & -5825 & -8922 \\ \hline
\multirow{3}{*}{\textbf{123}} & Scratch(DR) & -9873 & -15726 \\
& Scratch(NDR) & -6376 & -9417 \\
& \textbf{Scratch(NDR) + Tune(DR)} & -6253 & \textit{-7030} \\ \hline
\multirow{3}{*}{\textbf{200}} & Scratch(DR) & -11779 & -15398 \\
& Scratch(NDR) & -7277 & -15045 \\
& \textbf{Scratch(NDR) + Tune(DR)} & -6147 & -8535 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{0.327\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pos_t.png}
\subcaption{Combined position and orientation (i.e., pose) reward.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.327\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pos_t.png}
\subcaption{Position reward vs training time.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ori_t.png}
\subcaption{Orientation reward vs training time.}
\end{subfigure}
\caption{Learning curves showing rewards achieved by agents using different learning algorithms.}
\label{st performance}
\end{figure}
\subsection{Move Cube on Trajectory Pro}
Figure \ref{st performance} compares the training performance of different approaches to learning the \textit{Move Cube on Trajectory Pro} task, including using our proposed KT scheme compared to not using it. Agents' overall rewards are decomposed into position and orientation components so that the learning outcomes of the two components can be compared.
Training using the combined DDPG and HER algorithm (without KT; see the \textit{SCRATCH} curve in Figure \ref{st performance}) completely fails in the \textit{Move Cube on Trajectory Pro} task, with rewards on all components staying at their minimum possible values. In contrast, agents trained with all three KT methods show learning trends. Among them, both \textit{COLLECT} and \textit{SCRATCH} use randomly initialized neural networks; however, with the help of the experience collected by the \textit{teacher}, \textit{COLLECT} shows a more successful learning performance.
By transferring the \textit{teacher's} knowledge of how to perform the position-only task to the \textit{ACTOR-CRITIC} \textit{student}, the \textit{ACTOR-CRITIC} excels in its ability to reach position targets from the beginning of training. This makes it more likely that the \textit{ACTOR-CRITIC} will discover actions which change the cube's orientation and are rewarded. All agents learn orientation from scratch, and the \textit{ACTOR-CRITIC} shows better learning performance than \textit{COLLECT}. The \textit{COLLECT} agent relies strongly on the experience collected by the \textit{teacher} in the early stages of training, even though we set the \textit{teacher's} participation to decay over time.
Although the \textit{ACTOR} agent inherits the knowledge of the \textit{teacher's} actor, it performs the worst among all KT strategies. This is because the randomly initialized critic is not compatible with the well-trained actor: the critic's feedback cannot meet the actor's expectations, and hence the actor's performance worsens.
The evaluation results are shown in Table \ref{average deviation}. All three methods enable the student agent to learn the \textit{Move Cube on Trajectory Pro} task, but behave differently. Among them, the \textit{ACTOR-CRITIC} agent can achieve comparable performance to the \textit{teacher} at reaching position targets, and learn to perform very well on reaching orientation targets. A demonstration video of this trained network in action can be seen here: \url{https://www.youtube.com/watch?v=GhkCqoMqxU4}.
\begin{table}[]
\centering
\caption{The average deviations between achieved and desired position/orientation over 15 evaluation episodes in the simulator; each episode lasts 120,000 time-steps. The deviation measures are our own, chosen to reflect the learning outcomes intuitively. For the position: $\textbf{dev}_\textbf{dis} = ||{g'}_{xyz} - {g}_{xyz}||$, which measures the distance between the actual position of the cube and the goal position. For the orientation: $\textbf{dev}_\textbf{angle} = {||(R(g'_{o}))^{-1} R(g_{o})||}$, which measures the angle between the actual and target orientations. The deviations over all time-steps are accumulated and finally averaged.}
\label{average deviation}
\begin{tabular}{lrrrrr}
\hline
& \textbf{ACTOR-CRITIC} & \textbf{ACTOR} & \textbf{COLLECT} & \textbf{SCRATCH} & \textbf{TEACHER} \\ \hline
\textbf{Average position deviation (m)} & \textit{\textbf{0.023}} & 0.066 & 0.031 & 0.134 & 0.024 \\
\textbf{Average orientation deviation ($^\circ$)} & \textit{\textbf{75.8}} & 98.6 & 84.9 & 142.2 & 126.2 \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
Our relatively simple reinforcement learning approach fully solves the \textit{`Move Cube on Trajectory'} task in simulation. Unlike the RRC 2020 benchmark solution \cite{dr-code}, this was achieved with minimal domain-specific knowledge. We have only tried our approach using the DDPG learning algorithm; more recently published DRL algorithms, such as SAC \cite{sac} or TD3 \cite{td3}, may have superior performance, and will be the focus of our future research.
Reproducing our method is highly feasible, since only one machine with eight processors is required; no GPUs are needed. The number of agents can be increased to speed up training, or decreased at the cost of training speed.
Although our results in simulation were very good, the algorithm is somewhat sample inefficient, taking roughly 10 million environment steps to converge (equivalent to 6 days of simulated experience). Thus, another important direction for future work would be to increase sample efficiency; perhaps achievable via a model-based reinforcement learning approach \cite{mbpo,I-HER}.
The reward function is the driver of DRL; a well-designed and well-shaped reward can improve the quality and speed of DRL \cite{gpu-paper}. A bespoke reward function might limit the utility of a method to a small range of tasks, and is likely costly to develop.
In contrast, goal-based sparse rewards can be easily created. Compared to shaped rewards, goal-based sparse rewards provide an extremely weak reward signal. The agent only receives the bonus when it succeeds in the task; otherwise, it is punished. However, the probability of success in some tasks is very low, and a dearth of positive feedback hinders improvement on the task. HER can effectively enhance sparse reward signals by modifying the experience data before giving it to the agent. Typically, HER replaces the goal with a state achieved later in the episode, so that the stored transition becomes one in which the action applied in the prior state successfully reaches the (relabeled) goal. HER has been proven to work wonders \cite{her} in sparse-reward-based tasks, as verified in our case.
In addition, our distance-based dense reward can effectively accelerate the agent's learning in the early stages of training, and its implementation is straightforward. It can be easily extended to other robotic manipulation tasks.
Moreover, our learned policies can successfully implement their sophisticated manipulation strategies on the real robot through our DR tuning approach, which bridges the large sim-to-real domain gap and allows near-simulator-level performance to be achieved on real robots. In the RRC 2021, which used the \textit{Move Cube on Trajectory} task, requiring only position targets to be reached, we outperformed all competing submissions, including those employing more classical robotic control techniques. Sim-to-real transfer is an essential means of bringing robotic learning into reality, as training an intelligent robotic controller in the real world is expensive and impractical. Consider a problem we faced: we tried to collect data from the real TriFinger robots to train an agent from scratch; however, the large number of interactions caused considerable friction between the fingers and the cube, resulting in wear debris that adhered to the surface of the cube and interfered with the machine vision estimation of the cube's pose. Indeed, the main limitation of our approach was the absence of any real-robot data. It is likely that some fine-tuning of the policy on real data would greatly increase its robustness in the real environment, and developing a technique which could do so efficiently is one direction for future work.
There is some variance between agents trained under different seeds. Some agents perform excellently in the simulator while performing poorly on real robots, as they overfit the simulated environment. In contrast, some agents cannot acquire the optimal strategies in the simulator, even when using the same algorithm, and ultimately perform poorly on the real robot. Hence, when deploying a DRL controller on a real robot through sim-to-real transfer, one should not blindly trust the agent that performs best in the simulator; it is wise to try multiple trained agents on the real robot and select the best.
Our novel KT approach can enable an agent to perform more useful interactions with the environment in the early stages of learning, helping to avoid exploratory actions that do not result in learning experiences. It allows the complex \textit{Move Cube on Trajectory Pro} task to be solved efficiently in the simulator. Additionally, KT increases resource utilization efficiency by avoiding repeatedly re-learning similar skills. It should be feasible to deploy this method in any actor-critic RL algorithm, and its implementation is straightforward. A similar but innovative future research direction would be to train a master \textit{teacher} with broad knowledge in complex environments and use it widely in various downstream tasks \cite{worldknowledge}.
The development of DRL-driven robotic manipulation is still an open problem, but one that is currently receiving a great deal of attention from the research community. Solving it would have a huge impact on industry and society in general. At present, DRL agents are mostly trained in simulators, using perfect feedback data which is largely vision-based. The current capabilities of simulators to render realistic contact mechanics, including the friction, texture, and deformations at the interfaces between robotic fingers and the objects they manipulate, are very basic. However, as such simulations improve, and as the ability of real-world robots to sense these same contact mechanics advances in tandem, we can expect significant improvements in the dexterity of DRL-based robotic manipulation systems. This ability to dexterously interact with and learn from the world is surely the path to developing systems with superior intelligence.
\section*{Acknowledgments}
This publication was supported by a Science Foundation Ireland President of Ireland Future Research Leaders Award (17/FRL/4832), by the Insight SFI Research Centre for Data Analytics (SFI/12/RC/2289\_P2) co-funded by the European Regional Development Fund, and by the China Scholarship Council (CSC). We acknowledge the Research IT HPC Service at University College Dublin for providing computational facilities and support that contributed to the research results reported in this paper.
The classical large sieve inequality asserts that
\begin{equation*}
\sum\limits_{q\le Q} \sum\limits_{\substack{a=1\\ (a,q)=1}}^{q} \left|\sum\limits_{n\le N} a_n \cdot e\left(n\cdot \frac{a}{q}\right)\right|^2 \ll \left(Q^2+N\right)\sum\limits_{n\le N} |a_n|^2
\end{equation*}
for any $Q,N\ge 1$ and any sequence $(a_n)_{n\in \mathbb{N}}$ of complex numbers. (Equivalently, the summation of $n$ over the interval $(0,N]$ can be replaced by a summation over any interval $(M,M+N]$.) The large sieve with square moduli was investigated by L. Zhao and the author of the present paper in a series of papers (see \cite{Baie}, \cite{BaZh}, \cite{Zhao}). To date, the best result is the following.
\begin{equation*}
\sum\limits_{q\le Q} \sum\limits_{\substack{a=1\\ (a,q)=1}}^{q^2} \left|\sum\limits_{n\le N} a_n \cdot e\left(n\cdot \frac{a}{q^2}\right)\right|^2 \ll (QN)^{\varepsilon}\left(Q^3+\min\left\{Q^2\sqrt{N},\sqrt{Q}N\right\}+N\right)\sum\limits_{n\le N} |a_n|^2,
\end{equation*}
where $\varepsilon$ is any positive constant, and the implied $\ll$-constant depends only on $\varepsilon$. The object of this paper is to establish a large sieve inequality for square norm moduli in $\mathbb{Z}[i]$.
M. Huxley \cite{Huxl} established a generalization of the large sieve for number fields, which we describe in the following. Let $K$ be an algebraic number field of degree $k$ over $\mathbb{Q}$ and let $(\xi_1,...,\xi_k)$ be an integral basis of $K$, so that every integer $\xi$ of $K$ is representable uniquely as
$$
\xi=n_1\xi_1+...+n_k\xi_k,
$$
where $n_1,...,n_k$ are rational integers. For any integral ideal $\mathfrak{a}$ of $K$, let $\mathcal{N}(\mathfrak{a})$ be its norm and $\sigma(\xi)$ an additive character modulo $\mathfrak{a}$. Such a character is called proper if it is not an additive character modulo an ideal $\mathfrak{b}$ which divides $\mathfrak{a}$ properly. Then for any $X,N_1,...,N_k\ge 1$, $M_1,...,M_k\in \mathbb{R}$ and complex sequence $(b_n)_{n\in \mathbb{Z}^k}$, we have
\begin{equation} \label{huxley}
\begin{split}
& \sum\limits_{\mathcal{N}(\mathfrak{a})\le X^k} \sum\limits_{\substack{\sigma \bmod{\mathfrak{a}}\\ \sigma\ \mbox{\scriptsize proper}}} \left|\sum\limits_{M_1<n_1\le M_1+N_1}\cdots
\sum\limits_{M_k<n_k\le M_k+N_k} b_{n_1,...,n_k} \cdot \sigma(n_1\xi_1+\cdots+n_k\xi_k)\right|^2\\
\ll & \prod\limits_{j=1}^k \left(N_j^{1/2}+X\right)^2 \cdot
\sum\limits_{M_1<n_1\le M_1+N_1}\cdots
\sum\limits_{M_k<n_k\le M_k+N_k} |b_{n_1,...,n_k}|^2,
\end{split}
\end{equation}
where the implied $\ll$-constant depends only on the field $K$. As demonstrated in \cite{Huxl}, using Gauss sums in the same way as in the case $K=\mathbb{Q}$, the above can be converted into a large sieve inequality for multiplicative characters $\chi$ of the form
\begin{equation*}
\begin{split}
& \sum\limits_{\mathcal{N}(\mathfrak{a})\le X^k} \frac{\mathcal{N}(\mathfrak{a})}{\Phi(\mathfrak{a})}\cdot \sum\limits_{\substack{\chi \bmod{\mathfrak{a}}\\ \chi\ \mbox{\scriptsize proper}}} \left|\sum\limits_{M_1<n_1\le M_1+N_1}\cdots
\sum\limits_{M_k<n_k\le M_k+N_k} b_{n_1,...,n_k} \cdot \chi(n_1\xi_1+\cdots+n_k\xi_k)\right|^2\\
\ll & \prod\limits_{j=1}^k \left(N_j^{1/2}+X\right)^2
\sum\limits_{M_1<n_1\le M_1+N_1}\cdots
\sum\limits_{M_k<n_k\le M_k+N_k} |b_{n_1,...,n_k}|^2,
\end{split}
\end{equation*}
where $\Phi(\mathfrak{a})$ is the generalized Euler totient function for ideals in $K$, and, similarly as in the case of additive characters, $\chi$ is called proper if it is not a multiplicative character modulo an ideal $\mathfrak{b}$ which divides $\mathfrak{a}$ properly.
In $\mathbb{Z}[i]$, which is a principal ideal domain, the proper additive characters for the ideal $(q)$ take the form
$$
\sigma(\xi)=e\left(\mbox{Tr}\left(\frac{\xi r}{2q}\right)\right)=e\left(\Re\left(\frac{\xi r}{q}\right)\right),
$$
where $r\in \mathbb{Z}[i]$ ranges over a reduced residue system modulo $q$ (in particular, $r$ and $q$ are coprime). In the above, $\mbox{Tr}(x)$ denotes the trace of $x\in \mathbb{Z}[i]$, given by $\mbox{Tr}(x)=x+\overline{x}=2\Re x$. Hence, setting $K:=\mathbb{Q}[i]$, $k=2$, $\xi_1=1$, $\xi_2=i$, $X:=\sqrt{Q}$, $N_j:=\sqrt{N}$ and $a_n:=b_{\Re n,\Im n}$ for $n\in \mathbb{Z}[i]$, where $b_{\Re n,\Im n}=0$ if $\mathcal{N}(n)>N$, we deduce the following version of the large sieve for $\mathbb{Z}[i]$ from \eqref{huxley}.
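Indeed, with these choices $X^k=Q$, and the factor on the right-hand side of \eqref{huxley} becomes
$$
\prod\limits_{j=1}^{2}\left(N_j^{1/2}+X\right)^{2}=\left(N^{1/4}+Q^{1/2}\right)^{4}\ll Q^{2}+N,
$$
which is precisely the bound in the theorem below.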
\begin{theorem} \label{theo1} Let $Q,N\ge 1$ and $(a_n)_{n\in \mathbb{Z}[i]}$ be any sequence of complex numbers. Then
$$
\sum\limits_{\substack{q\in \mathbb{Z}[i]\setminus\{0\}\\ \mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2 \ll \left(Q^2+N\right)\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} |a_n|^2.
$$
\end{theorem}
Here, as in the following, $\mathcal{N}(x)$ denotes the norm of $x\in \mathbb{Z}[i]$, given by $\mathcal{N}(x)=x\overline{x}=(\Re x)^2+(\Im x)^2$, and $r$ runs over a reduced residue system modulo $q$ in $\mathbb{Z}[i]$, $(r,q)=1$ indicating the coprimality of $r$ and $q$.
A version of the large sieve for $\mathbb{Z}[i]$ with moduli confined to natural numbers was proved by W. Schlackow in \cite[Theorem 4.2.1]{Schl} and may be reformulated as follows.
\begin{theorem} \label{theo3} Let $Q,N\ge 1$ and $(a_n)_{n\in \mathbb{Z}[i]}$ be any sequence of complex numbers. Then
$$
\sum\limits_{\substack{q\in \mathbb{N}\\ q\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2 \ll \left(Q^3+Q^2\sqrt{N}+N\right)\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} |a_n|^2,
$$
where $r$ runs over a reduced residue system modulo $q$ in $\mathbb{Z}[i]$.
\end{theorem}
In this paper, we shall establish the following version of the large sieve with square norm moduli for $\mathbb{Z}[i]$.
\begin{theorem} \label{theo5} Let $Q,N\ge 1$ and $(a_n)_{n\in \mathbb{Z}[i]}$ be any sequence of complex numbers. Then
$$
\sum\limits_{\substack{q\in \mathbb{Z}[i]\setminus\{0\}\\ \mathcal{N}(q)\le Q^2\\ \mathcal{N}(q)=\Box}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2 \ll (QN)^{\varepsilon}\left(Q^3+Q^2\sqrt{N}+\sqrt{Q}N\right)\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} |a_n|^2,
$$
where $\varepsilon$ is any positive constant, and the implied $\ll$-constant depends only on $\varepsilon$.
\end{theorem}
Here, as in the following, $\mathcal{N}(q)=\Box$ indicates that $\mathcal{N}(q)$ is a perfect square. The following analogue for multiplicative characters can be deduced by the standard procedure using Gauss sums mentioned above.
\begin{theorem} Let $Q,N\ge 1$ and $(a_n)_{n\in \mathbb{Z}[i]}$ be any sequence of complex numbers. Then
$$
\sum\limits_{\substack{q\in \mathbb{Z}[i]\setminus\{0\}\\ \mathcal{N}(q)\le Q^2\\ \mathcal{N}(q)=\Box}} \frac{\mathcal{N}(q)}{\Phi(q)} \cdot \sum\limits_{\substack{\chi \bmod{(q)}\\ \chi \ \mbox{\scriptsize \rm proper}}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n\cdot \chi(n)\right|^2 \ll (QN)^{\varepsilon}\left(Q^3+Q^2\sqrt{N}+\sqrt{Q}N\right)\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} |a_n|^2,
$$
where $\varepsilon$ is any positive constant, and the implied $\ll$-constant depends only on $\varepsilon$.
\end{theorem}
We start by considering sums over restricted sets of moduli of the form
\begin{equation} \label{Sigmadef}
\Sigma(Q,N;\mathcal{S}):=\sum\limits_{\substack{q\in \mathcal{S}\\ Q/2<\mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left| \sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}}a_{n}\cdot e\left(\mbox{Tr}\left(\frac{nr}{2q}\right)\right)\right|^2,
\end{equation}
where $\mathcal{S}$ is a subset of $\mathbb{Z}[i]$. The restriction to moduli norms in dyadic intervals will be of importance in our method.
To estimate the above sums, we first use the double large sieve due to H. Iwaniec and E. Bombieri \cite{IwBo}. This will lead us to a lattice point counting problem, which we reformulate as counting certain points in a disk. Considerations about the spacing of these points and the Poisson summation formula will enable us to recover slightly weakened versions of Theorems \ref{theo1} and \ref{theo3}. This weakening by a logarithmic factor comes from our restriction to dyadic intervals above. The main point of this paper is to prove Theorem \ref{theo5}, for which we shall, in addition to the above-mentioned spacing results and the Poisson summation formula, use a 2-dimensional Weyl shift similar to the 1-dimensional Weyl shift performed in \cite{Zhao}.
We note that Theorem \ref{theo5} is the precise analogue to the first known large sieve inequality for square moduli in $\mathbb{Z}$, proved by L. Zhao \cite{Zhao}, which asserts that
\begin{equation*}
\sum\limits_{q\le Q} \sum\limits_{\substack{a=1\\ (a,q)=1}}^{q^2} \left|\sum\limits_{n\le N} a_n \cdot e\left(n\cdot \frac{a}{q^2}\right)\right|^2 \ll (QN)^{\varepsilon}\left(Q^3+Q^2\sqrt{N}+\sqrt{Q}N\right)\sum\limits_{n\le N} |a_n|^2.
\end{equation*}
Throughout this paper, we follow the usual convention that $\varepsilon$ is an arbitrarily small positive number that can change from line to line, and $O$-constants may depend on $\varepsilon$. \\
{\bf Acknowledgement.} The author thanks the IISER Thiruvananthapuram for financial support and excellent working conditions.
\section{Initial transformations}
In this section, we start with some initial transformations of the sum $\Sigma(Q,N;\mathcal{S})$ defined in \eqref{Sigmadef}. Setting
$$
q=u+vi,\quad r=x+yi, \quad n=s+ti,
$$
a notation which we shall use throughout the sequel, we deduce that
\begin{equation} \label{1}
\begin{split}
\Sigma(Q,N;\mathcal{S}) = & \sum\limits_{\substack{q\in \mathcal{S}\\ Q/2<\mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}}
a_{n}\cdot e\left(\frac{(sx-ty)u+(sy+tx)v}{\mathcal{N}(q)}\right)\right|^2\\
= & \sum\limits_{\substack{q\in \mathcal{S}\\ Q/2<\mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n\cdot e\left(\left(\frac{xu+yv}{\mathcal{N}(q)},\frac{xv-yu}{\mathcal{N}(q)}\right)\cdot (s,t)\right)\right|^2\\
= & \sum\limits_{\substack{q\in \mathcal{S}\\ Q/2<\mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{|s|\le \sqrt{N}}\sum\limits_{|t|\le \sqrt{N}} a_{s,t}'\cdot e\left(\left(\frac{xu+yv}{\mathcal{N}(q)},\frac{xv-yu}{\mathcal{N}(q)}\right)\cdot (s,t)\right)\right|^2,
\end{split}
\end{equation}
where
\begin{equation} \label{ast'}
a_{s,t}':=
\begin{cases}
a_{s+it} & \mbox{ if } \mathcal{N}(s+it)\le N, \\
0 & \mbox{ if } \mathcal{N}(s+it)>N.
\end{cases}
\end{equation}
We set
$$
\mathcal{T}(q,r):=\sum\limits_{|s|\le \sqrt{N}}\sum\limits_{|t|\le \sqrt{N}} a_{s,t}'\cdot e\left(\left(\frac{xu+yv}{\mathcal{N}(q)},\frac{xv-yu}{\mathcal{N}(q)}\right)\cdot (s,t)\right)
$$
and
$$
b_{q,r}:= \begin{cases}
\left|\mathcal{T}(q,r)\right|^2/\mathcal{T}(q,r) & \mbox{ if } \mathcal{T}(q,r)\not=0, \\ 0 & \mbox{ otherwise.}
\end{cases}
$$
Then it follows that
\begin{equation} \label{from}
\Sigma(Q,N;\mathcal{S})= \sum\limits_{\substack{q\in \mathcal{S}\\ Q/2<\mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \sum\limits_{|s|\le \sqrt{N}} \sum\limits_{|t|\le \sqrt{N}} a_{s,t}'b_{q,r} \cdot e\left(\left(\frac{xu+yv}{\mathcal{N}(q)},\frac{xv-yu}{\mathcal{N}(q)}\right)\cdot (s,t)\right)
\end{equation}
and also
\begin{equation} \label{hallo}
\Sigma(Q,N;\mathcal{S})= \sum\limits_{\substack{q\in \mathcal{S}\\ Q/2<\mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}}\left| b_{q,r} \right|^2.
\end{equation}
\section{Application of the double large sieve}
Now we use the double large sieve due to Bombieri and Iwaniec \cite{IwBo} to further estimate $\Sigma(Q,N;\mathcal{S})$.
\begin{lemma}[Bombieri-Iwaniec] \label{dls} Let ${\bf X}$ and ${\bf Y}$ be two subsets of $\mathbb{R}^K$. Let $a({\bf x})$ and $b({\bf y})$ be arbitrary complex numbers for ${\bf x}\in {\bf X}$ and ${\bf y} \in {\bf Y}$. Let $X_1,...,X_K,Y_1,...,Y_K$ be positive numbers. Define the bilinear forms
\begin{equation*}
\begin{split}
B(b;{\bf X}):=&\mathop{\sum\limits_{{\bf y}\in {\bf Y}} \sum\limits_{{\bf y}'\in {\bf Y}}}_{|{\bf y}-{\bf y}'|< (2X_k)^{-1};\ k=1,...,K} |b({\bf y})b({\bf y}')|,\\
B(a;{\bf Y}):=&\mathop{\sum\limits_{{\bf x}\in {\bf X}} \sum\limits_{{\bf x}'\in {\bf X}}}_{|{\bf x}-{\bf x}'|< (2Y_k)^{-1};\ k=1,...,K} |a({\bf x})a({\bf x}')|,\\
B(a,b;{\bf X},{\bf Y}):= &\sum\limits_{\substack{{\bf x}\in {\bf X}\\ |x_k|< X_k}} \sum\limits_{\substack{{\bf y}\in {\bf Y}\\ |y_k|< Y_k}} a({\bf x})b({\bf y})e({\bf x}\cdot {\bf y}).
\end{split}
\end{equation*}
Then
$$
|B(a,b;{\bf X},{\bf Y})|^2\le \left(2\pi^2\right)^K \prod\limits_{k=1}^K \left(1+X_kY_k\right)B(b;{\bf X})B(a;{\bf Y}).
$$
\end{lemma}
We note that in the original statement of the above lemma, ``$< (2X_k)^{-1}$'', ``$< (2Y_k)^{-1}$'', ``$|x_k|< X_k$'' and ``$|y_k|< Y_k$'' were replaced by ``$\le (2X_k)^{-1}$'', ``$\le (2Y_k)^{-1}$'', ``$|x_k|\le X_k$'' and ``$|y_k|\le Y_k$'', respectively, but the proof in \cite{IwBo} applies to the above variant as well.
In the following, for any real number $z$, let $\{z\}=z-[z]$ denote its fractional part, $||z||$ its distance to the nearest integer, and
\begin{equation} \label{fdef}
f(z):=\left\{z+\frac{1}{2}\right\}-\frac{1}{2}.
\end{equation}
Then noting that
$$
e\left(\left(\frac{xu+yv}{\mathcal{N}(q)},\frac{xv-yu}{\mathcal{N}(q)}\right)\cdot (s,t)\right)=e\left(\left(f\left(\frac{xu+yv}{\mathcal{N}(q)}\right),f\left(\frac{xv-yu}{\mathcal{N}(q)}\right)\right)\cdot (s,t)\right)
$$
and using \eqref{ast'} and Lemma \ref{dls} with
\begin{equation*}
\begin{split}
{\bf X}:= & \left\{(s,t)\in \mathbb{Z}^2 \ : \ \mathcal{N}(s+it)\le N\right\},\\
{\bf Y}:= & \left\{\left(f\left(\frac{xu+yv}{\mathcal{N}(q)}\right),f\left(\frac{xv-yu}{\mathcal{N}(q)}\right)\right)\in \mathbb{R}^2\ :\ q\in \mathcal{S},\ Q/2<\mathcal{N}(q)\le Q,\ r \bmod{q},\ (r,q)=1\right\}, \\
X_1:= & \sqrt{N}, \quad X_2:=\sqrt{N}, \quad Y_1:=\frac{1}{2}, \quad Y_2:=\frac{1}{2},
\end{split}
\end{equation*}
we obtain
\begin{equation*}
\begin{split}
\left| \Sigma(Q,N;\mathcal{S})\right|^2 \ll N \cdot \mathop{\sum\limits_{\substack{q_1\in \mathcal{S}\\ Q/2<\mathcal{N}(q_1)\le Q}} \sum\limits_{\substack{r_1 \bmod{q_1}\\ (r_1,q_1)=1}} \sum\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q}} \sum\limits_{\substack{r_2 \bmod{q_2}\\ (r_2,q_2)=1}}}_{\substack{\left|\left|(x_1u_1+y_1v_1)/\mathcal{N}(q_1)-(x_2u_2+y_2v_2)/\mathcal{N}(q_2)\right|\right|\le 1/\sqrt{N}\\ \left|\left|(x_1v_1-y_1u_1)/\mathcal{N}(q_1)-(x_2v_2-y_2u_2)/\mathcal{N}(q_2)\right|\right|\le 1/\sqrt{N}}} \left|b_{q_1,r_1}b_{q_2,r_2}\right| \cdot \sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} \left| a_n\right|^2
\end{split}
\end{equation*}
from \eqref{from}, where we write
$$
q_j=u_j+v_ji\quad \mbox{and} \quad r_j=x_j+y_ji \quad \mbox{for } j=1,2.
$$
Since
\begin{equation*}
\left|b_{q_1,r_1}b_{q_2,r_2}\right|\le \left|b_{q_1,r_1}\right|^2+ \left|b_{q_2,r_2}\right|^2,
\end{equation*}
it follows that
\begin{equation*}
\begin{split}
& \left| \Sigma(Q,N;\mathcal{S})\right|^2\\ \ll & NZ \cdot \max\limits_{\substack{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}}} \sharp\Bigg\{(q_1,r_1)\ : \ q_1\in \mathcal{S}, \ Q/2<\mathcal{N}(q_1)\le Q, \ r_1 \bmod{q_1},\ (r_1,q_1)=1, \\ & \left|\left|\frac{x_1u_1+y_1v_1}{\mathcal{N}(q_1)}-\frac{k}{\mathcal{N}(q_2)}\right|\right|\le \frac{1}{\sqrt{N}}, \
\left|\left|\frac{x_1v_1-y_1u_1}{\mathcal{N}(q_1)}-\frac{l}{\mathcal{N}(q_2)}\right|\right|\le \frac{1}{\sqrt{N}} \Bigg\}\times\\ & \sum\limits_{\substack{q\in \mathcal{S}\\ Q/2<\mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|b_{q,r}\right|^2,
\end{split}
\end{equation*}
where we set
$$
Z:= \sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} \left| a_{n}\right|^2
$$
and
\begin{equation} \label{kl}
k:=x_2u_2+y_2v_2\quad \mbox{and} \quad l:=x_2v_2-y_2u_2
\end{equation}
throughout the sequel. Using \eqref{hallo} and dividing both sides by $|\Sigma(Q,N;\mathcal{S})|$, we deduce that
\begin{equation*}
\begin{split}
\Sigma(Q,N;\mathcal{S})
\ll & NZ \cdot \max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sharp\Bigg\{(q_1,r_1)\ :\ q_1\in \mathcal{S}, \ Q/2<\mathcal{N}(q_1)\le Q, \ r_1 \bmod{q_1},\ (r_1,q_1)=1,\\ &
\left|\left|\frac{x_1u_1+y_1v_1}{\mathcal{N}(q_1)}-\frac{k}{\mathcal{N}(q_2)}\right|\right|\le \frac{1}{\sqrt{N}}, \
\left|\left|\frac{x_1v_1-y_1u_1}{\mathcal{N}(q_1)}-\frac{l}{\mathcal{N}(q_2)}\right|\right|\le \frac{1}{\sqrt{N}} \Bigg\}.
\end{split}
\end{equation*}
\section{Reduction to a lattice point counting problem}
\subsection{Simplification of the problem} If
$$
\left|\left|\frac{x_1u_1+y_1v_1}{\mathcal{N}(q_1)}-\frac{k}{\mathcal{N}(q_2)}\right|\right|\le \frac{1}{\sqrt{N}}, \quad \mbox{and} \quad
\left|\left|\frac{x_1v_1-y_1u_1}{\mathcal{N}(q_1)}-\frac{l}{\mathcal{N}(q_2)}\right|\right|\le \frac{1}{\sqrt{N}},
$$
then we can find $(x_1',y_1')\in \mathbb{Z}^2$ such that
\begin{equation} \label{2}
\left|\left|\frac{x_1u_1+y_1v_1}{\mathcal{N}(q_1)}-\frac{k}{\mathcal{N}(q_2)}\right|\right|=\left|\frac{x_1'u_1+y_1'v_1}{\mathcal{N}(q_1)}-\frac{k}{\mathcal{N}(q_2)}\right|
\end{equation}
and
\begin{equation} \label{3}
\left|\left|\frac{x_1v_1-y_1u_1}{\mathcal{N}(q_1)}-\frac{l}{\mathcal{N}(q_2)}\right|\right|= \left|\frac{x_1'v_1-y_1'u_1}{\mathcal{N}(q_1)}-\frac{l}{\mathcal{N}(q_2)}\right|.
\end{equation}
Indeed, if
$$
x_1'=x_1+au_1+bv_1 \quad \mbox{and} \quad y_1'=y_1+av_1-bu_1,
$$
then
$$
\frac{x_1'u_1+y_1'v_1}{\mathcal{N}(q_1)}=a+\frac{x_1u_1+y_1v_1}{\mathcal{N}(q_1)} \quad \mbox{and} \quad \frac{x_1'v_1-y_1'u_1}{\mathcal{N}(q_1)}=b+\frac{x_1v_1-y_1u_1}{\mathcal{N}(q_1)},
$$
and thus for suitable $a,b\in \mathbb{Z}$, we get \eqref{2} and \eqref{3}. It follows that
\begin{equation*}
\begin{split}
\Sigma(Q,N;\mathcal{S})
\ll & NZ\cdot
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sharp\Bigg\{(q_1,x_1',y_1')\ : \ q_1\in \mathcal{S}, \ Q/2<\mathcal{N}(q_1)\le Q,\ \binom{x_1'}{y_1'}\in \mathbb{Z}^2,\\
& \left|\frac{x_1'u_1+y_1'v_1}{\mathcal{N}(q_1)}-\frac{k}{\mathcal{N}(q_2)}\right|\le \frac{1}{\sqrt{N}},\
\left|\frac{x_1'v_1-y_1'u_1}{\mathcal{N}(q_1)}-\frac{l}{\mathcal{N}(q_2)}\right|\le \frac{1}{\sqrt{N}} \Bigg\}.\end{split}
\end{equation*}
In the following, for brevity of notation, we shall replace $(q_1,u_1,v_1,x_1',y_1')$ by $(q,u,v,x,y)$.
\subsection{Rescaling and rotating}
We may interpret the above as a problem of counting points of orthogonal lattices in a closed square because the last estimate is equivalent to
\begin{equation*}
\begin{split}
\Sigma(Q,N;\mathcal{S}) \ll NZ\cdot
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} & \sharp\left\{ (q,x,y)\ :\ q\in \mathcal{S}, \ Q/2<\mathcal{N}(q)\le Q, \ \binom{x}{y}\in \mathbb{Z}^2,\right. \\ & \left.
x\binom{u/\mathcal{N}(q)}{v/\mathcal{N}(q)}+y\binom{v/\mathcal{N}(q)}{-u/\mathcal{N}(q)}\in S(k,l)\right\},
\end{split}
\end{equation*}
where $S(k,l)$ is the closed square defined by
$$
S(k,l):=\left[\frac{k}{\mathcal{N}(q_2)}-\frac{1}{\sqrt{N}},\frac{k}{\mathcal{N}(q_2)}+\frac{1}{\sqrt{N}}\right]\times \left[\frac{l}{\mathcal{N}(q_2)}-\frac{1}{\sqrt{N}},\frac{l}{\mathcal{N}(q_2)}+\frac{1}{\sqrt{N}}\right].
$$
Rescaling by a factor of $\sqrt{\mathcal{N}(q)}$, it follows that
\begin{equation} \label{ditte}
\begin{split}
\Sigma(Q,N;\mathcal{S}) \ll NZ\cdot
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} & \sharp\left\{(q,x,y)\ :\ q\in \mathcal{S}, \ Q/2<\mathcal{N}(q)\le Q, \ \binom{x}{y}\in \mathbb{Z}^2, \right.\\
& \left.
x\binom{u'}{v'}+y\binom{v'}{-u'}\in \sqrt{\mathcal{N}(q)}S(k,l)\right\},
\end{split}
\end{equation}
where $\sqrt{\mathcal{N}(q)}S(k,l)$ is the square
$$
\sqrt{\mathcal{N}(q)} S(k,l):=\left\{\sqrt{\mathcal{N}(q)}{\bf r} \ :\ {\bf r}\in S(k,l)\right\},
$$
and
$$
\binom{u'}{v'}=\frac{1}{\sqrt{\mathcal{N}(q)}}\binom{u}{v} \quad \mbox{and} \quad \binom{v'}{-u'}=\frac{1}{\sqrt{\mathcal{N}(q)}}\binom{v}{-u}
$$
are orthonormal vectors.
For two real numbers $\mu$ and $\nu$ set
$$
M(\mu,\nu):=\begin{pmatrix} \mu & \nu \\ -\nu & \mu \end{pmatrix}.
$$
Then rotating by applying the rotation matrix
$$
M(u',v')=\begin{pmatrix} u' & v' \\ -v' & u' \end{pmatrix}=\frac{1}{\sqrt{\mathcal{N}(q)}} \begin{pmatrix} u & v \\ -v & u \end{pmatrix}=\frac{1}{\sqrt{\mathcal{N}(q)}}M(u,v),
$$
and replacing $y$ by $-y$, we deduce from \eqref{ditte} that
\begin{equation*}
\begin{split}
& \Sigma(Q,N;\mathcal{S})
\ll NZ\times\\ & \max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sharp\left\{(q,x,y)\ :\ q\in \mathcal{S}, \ Q/2<\mathcal{N}(q)\le Q, \ \binom{x}{y}\in
\mathbb{Z}^2\cap M(u,v)S(k,l)\right\},
\end{split}
\end{equation*}
where
$$
M(u,v)S(k,l):=\left\{M(u,v){\bf s}\ :\ {\bf s}\in S(k,l)\right\}
$$
is the square $\sqrt{\mathcal{N}(q)}S(k,l)$, rotated by applying the matrix $M(u',v')$.
\subsection{Switching between lattices}
The closed square $M(u,v)S(k,l)$ is contained in the closed disk with radius
\begin{equation} \label{radius}
R:=\sqrt{\frac{4Q}{N}}
\end{equation}
and midpoint
$$
M(u,v)\binom{k/\mathcal{N}(q_2)}{l/\mathcal{N}(q_2)}=u\binom{k/\mathcal{N}(q_2)}{l/\mathcal{N}(q_2)}+v\binom{l/\mathcal{N}(q_2)}{-k/\mathcal{N}(q_2)}.
$$
Therefore, we have
\begin{equation*}
\begin{split}
\Sigma(Q,N;\mathcal{S})
\ll NZ\cdot
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} & \sharp\Bigg\{ (q,x,y)\ :\ q\in \mathcal{S}, \ Q/2<\mathcal{N}(q)\le Q, \\
& \binom{x}{y}\in \mathbb{Z}^2\cap D_{R}\left(u\binom{k/\mathcal{N}(q_2)}{l/\mathcal{N}(q_2)}+v\binom{l/\mathcal{N}(q_2)}{-k/\mathcal{N}(q_2)}\right)\Bigg\},
\end{split}
\end{equation*}
where $D_R({\bf r})$ is the closed disk with radius $R$ and midpoint ${\bf r}$.
Hence, we count points of the standard lattice $\mathbb{Z}^2$ contained in closed $R$-neighborhoods of points of the lattice
$$
\mathcal{L}:=\left(\tilde{x}\binom{k/\mathcal{N}(q_2)}{l/\mathcal{N}(q_2)}+\tilde{y}\binom{l/\mathcal{N}(q_2)}{-k/\mathcal{N}(q_2)}\right)_{\binom{\tilde{x}}{\tilde{y}}\in \mathbb{Z}^2}.
$$
If $R\ge 1/2$, then there are $O(R^2)$ $\mathbb{Z}^2$-points in the closed $R$-neighborhood of every $\mathcal{L}$-point, and using \eqref{radius}, it therefore follows that
\begin{equation} \label{Rlarge}
\Sigma(Q,N;\mathcal{S})
\ll QZ\cdot \sharp \left\{q\in \mathcal{S} \ :\ Q/2<\mathcal{N}(q)\le Q\right\}.
\end{equation}
In the following, we assume that $R<1/2$. We observe that counting $\mathbb{Z}^2$-points in closed $R$-neighborhoods of $\mathcal{L}$-points
amounts to the same as counting $\mathcal{L}$-points in closed $R$-neighborhoods of $\mathbb{Z}^2$-points. By this switch of lattices, we have
\begin{equation} \label{das}
\begin{split}
\Sigma(Q,N;\mathcal{S})
\ll NZ\cdot
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} & \sharp\Bigg\{ (q,x,y)\ :\ q\in \mathcal{S}, \ Q/2<\mathcal{N}(q)\le Q, \ \binom{x}{y}\in \mathbb{Z}^2,\\ &
u\binom{k/\mathcal{N}(q_2)}{l/\mathcal{N}(q_2)}+v\binom{l/\mathcal{N}(q_2)}{-k/\mathcal{N}(q_2)}\in D_R\binom{x}{y}\Bigg\}.
\end{split}
\end{equation}
Since $R<1/2$, \eqref{das} is equivalent to
\begin{equation} \label{smallR}
\begin{split}
& \Sigma(Q,N;\mathcal{S}) \ll NZ\times\\ &
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sharp\left\{q\in \mathcal{S} \ : \ Q/2<\mathcal{N}(q)\le Q, \
\binom{f\left((uk+vl)/\mathcal{N}(q_2)\right)}{f\left((-vk+ul)/\mathcal{N}(q_2)\right)}\in D_R({\bf 0})\right\},
\end{split}
\end{equation}
where $f(z)$ is defined as in \eqref{fdef}.
We combine \eqref{Rlarge} and \eqref{smallR} below.
\begin{proposition} \label{lab} Let ${\bf 1}_I$ be the indicator function of the interval $I$. Then we have
\begin{equation} \label{ex}
\begin{split}
& \Sigma(Q,N;\mathcal{S}) \ll QZ\cdot \sharp \left\{q\in \mathcal{S} \ :\ Q/2<\mathcal{N}(q)\le Q\right\}+NZ\cdot {\bf 1}_{(0,1/2)}(R) \times\\ &
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sharp\left\{q\in \mathcal{S} \ : \ Q/2<\mathcal{N}(q)\le Q, \
\binom{f\left((uk+vl)/\mathcal{N}(q_2)\right)}{f\left((-vk+ul)/\mathcal{N}(q_2)\right)}\in D_R({\bf 0})\right\}.
\end{split}
\end{equation}
\end{proposition}
\section{Reproof of a slightly weakened version of Theorem 1}
In this section, we reprove Theorem 1 in a slightly weakened form, namely we establish the inequality
\begin{equation} \label{ine}
\sum\limits_{\substack{q\in \mathbb{Z}[i]\setminus\{0\}\\ \mathcal{N}(q)\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2 \ll \left(Q^2+N\log(2Q)\right)Z.
\end{equation}
\subsection{Spacing modulo 1}
To estimate the maximum on the right-hand side of \eqref{ex}, we prove the following lemma on the spacing of the points
$$
\binom{f\left((uk+vl)/\mathcal{N}(q_2)\right)}{f\left((-vk+ul)/\mathcal{N}(q_2)\right)}.
$$
\begin{lemma} \label{spacinglemma} Assume that $u_2+v_2i=q_2\in \mathbb{Z}[i]\setminus\{0\}$, $x_2+y_2i=r_2\in \mathbb{Z}[i]$, $(r_2,q_2)=1$, $k,l$ are given as in \eqref{kl}, and $u,v,\tilde{u},\tilde{v}\in \mathbb{Z}$. Then
\begin{equation*}
\begin{split}
& \left|f\left(\frac{uk+vl}{\mathcal{N}(q_2)}\right)-f\left(\frac{\tilde{u}k+\tilde{v}l}{\mathcal{N}(q_2)}\right)\right|^2 +\left|f\left(\frac{-vk+ul}{\mathcal{N}(q_2)}\right)-f\left(\frac{-\tilde{v}k+\tilde{u}l}{\mathcal{N}(q_2)}\right)\right|^2 \\ &
\begin{cases} \ge 1/\mathcal{N}(q_2) & \mbox{ if } q_2\nmid\left((u-\tilde{u})+(v-\tilde{v})i\right),\\ 0 & \mbox{ if } q_2|\left((u-\tilde{u})+(v-\tilde{v})i\right).
\end{cases}
\end{split}
\end{equation*}
\end{lemma}
\begin{proof} Clearly, there exist integers $\mu$ and $\nu$ such that
$$
f\left(\frac{uk+vl}{\mathcal{N}(q_2)}\right)-f\left(\frac{\tilde{u}k+\tilde{v}l}{\mathcal{N}(q_2)}\right)=\frac{u_0k+v_0l}{\mathcal{N}(q_2)}-\mu
$$
and
$$
f\left(\frac{-vk+ul}{\mathcal{N}(q_2)}\right)-f\left(\frac{-\tilde{v}k+\tilde{u}l}{\mathcal{N}(q_2)}\right)= \frac{-v_0k+u_0l}{\mathcal{N}(q_2)}-\nu,
$$
where $u_0:=u-\tilde{u}$ and $v_0:=v-\tilde{v}$. It follows that
\begin{equation*}
\begin{split}
& \left|f\left(\frac{uk+vl}{\mathcal{N}(q_2)}\right)-f\left(\frac{\tilde{u}k+\tilde{v}l}{\mathcal{N}(q_2)}\right)\right|^2 +\left|f\left(\frac{-vk+ul}{\mathcal{N}(q_2)}\right)-f\left(\frac{-\tilde{v}k+\tilde{u}l}{\mathcal{N}(q_2)}\right)\right|^2\\
= & \left(\frac{u_0k+v_0l}{\mathcal{N}(q_2)}-\mu\right)^2+\left(\frac{-v_0k+u_0l}{\mathcal{N}(q_2)}-\nu\right)^2\\
= & \frac{(u_0^2+v_0^2)(k^2+l^2)}{\mathcal{N}(q_2)^2}-2\left(\mu\cdot \frac{u_0k+v_0l}{\mathcal{N}(q_2)}+\nu \cdot \frac{-v_0k+u_0l}{\mathcal{N}(q_2)}\right)+\mu^2+\nu^2.
\end{split}
\end{equation*}
From \eqref{kl}, we have
\begin{equation} \label{and}
k^2+l^2=(x_2u_2+y_2v_2)^2+(x_2v_2-y_2u_2)^2=(x_2^2+y_2^2)(u_2^2+v_2^2)=(x_2^2+y_2^2)\mathcal{N}(q_2).
\end{equation}
Hence,
\begin{equation*}
\begin{split}
& \left|f\left(\frac{uk+vl}{\mathcal{N}(q_2)}\right)-f\left(\frac{\tilde{u}k+\tilde{v}l}{\mathcal{N}(q_2)}\right)\right|^2 +\left|f\left(\frac{-vk+ul}{\mathcal{N}(q_2)}\right)-f\left(\frac{-\tilde{v}k+\tilde{u}l}{\mathcal{N}(q_2)}\right)\right|^2\\
= & \frac{(u_0^2+v_0^2)(x_2^2+y_2^2)-2\left(\mu(u_0k+v_0l)+\nu(-v_0k+u_0l)\right)}{\mathcal{N}(q_2)}+\mu^2+\nu^2.
\end{split}
\end{equation*}
The expression above is non-negative, and since $\mu^2+\nu^2$ is an integer, it is a rational number whose denominator divides $\mathcal{N}(q_2)$; hence it equals zero or is greater than or equal to $1/\mathcal{N}(q_2)$. Further, we have
\begin{equation*}
\left|f\left(\frac{uk+vl}{\mathcal{N}(q_2)}\right)-f\left(\frac{\tilde{u}k+\tilde{v}l}{\mathcal{N}(q_2)}\right)\right|^2 +\left|f\left(\frac{-vk+ul}{\mathcal{N}(q_2)}\right)-f\left(\frac{-\tilde{v}k+\tilde{u}l}{\mathcal{N}(q_2)}\right)\right|^2=0
\end{equation*}
if and only if
\begin{equation} \label{system}
\begin{split}
u_0k+v_0l\equiv & 0 \bmod{\mathcal{N}(q_2)},\\
-v_0k+u_0l\equiv & 0 \bmod{\mathcal{N}(q_2)}.
\end{split}
\end{equation}
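A direct expansion in $\mathbb{Z}[i]$ shows that
$$
(u_0+v_0i)(k-li)=(u_0k+v_0l)+(v_0k-u_0l)i,
$$
so \eqref{system} states precisely that $\mathcal{N}(q_2)$ divides both the real and the imaginary part of this product.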
This is equivalent to $\mathcal{N}(q_2)| (u_0+v_0i)(k-li)$. Further, we observe that
$$
k-li=(x_2+y_2i)(u_2-v_2i)=r_2\overline{q}_2.
$$
Since $(r_2,q_2)=1$, we deduce that
\eqref{system} is equivalent to $q_2|u_0+v_0i$ and hence $q_2|(u-\tilde{u})+(v-\tilde{v})i$. This completes the proof.
\end{proof}
\subsection{Counting points in disks} Let the conditions in Lemma \ref{spacinglemma} be satisfied. Then it follows from the same lemma that
\begin{equation} \label{oh}
\binom{f\left((uk+vl)/\mathcal{N}(q_2)\right)}{f\left((-vk+ul)/\mathcal{N}(q_2)\right)} =
\binom{f\left((\tilde{u}k+\tilde{v}l)/\mathcal{N}(q_2)\right)}{f\left((-\tilde{v}k+\tilde{u}l)/\mathcal{N}(q_2)\right)}
\end{equation}
if and only if $q_2|\left((u-\tilde{u})+(v-\tilde{v})i\right)$. If $Q/2<\mathcal{N}(q_2)\le Q$ and $L\ge 1$, then for every $q=u+vi$ with $\mathcal{N}(q)\le Q$, the number of $\tilde{q}=\tilde{u}+\tilde{v}i$ such that
$\mathcal{N}(\tilde{q})\le LQ$ and $q_2|\left((u-\tilde{u})+(v-\tilde{v})i\right)$ is $O(L)$. (We note that this is the point where our restriction to dyadic intervals is crucial.) Therefore, again by Lemma \ref{spacinglemma}, the number of $u+vi=q\in \mathcal{S}$ such that $\mathcal{N}(q)\le LQ$ and
$$
\binom{f\left((uk+vl)/\mathcal{N}(q_2)\right)}{f\left((-vk+ul)/\mathcal{N}(q_2)\right)}\in D_R({\bf 0})
$$
is bounded by a constant times $L$ times the maximum number of points in a disk with radius $R$ such that the distance between any two of them is greater than or equal to $1/\sqrt{\mathcal{N}(q_2)}$.
This maximum number is bounded by 1 plus the maximum number of open disks with radius $1/(2\sqrt{\mathcal{N}(q_2)})$ that can be packed into a disk of radius $2R$ without overlaps. The latter is bounded by the area of a disk with radius $2R$ divided by the area of a disk with radius $1/(2\sqrt{\mathcal{N}(q_2)})$. Altogether, we thus get the following.
\begin{proposition} \label{ballcount} Assume that $Q,L\ge 1$ and $R>0$. Then
\begin{equation*}
\max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sharp\left\{q\in \mathcal{S} \ : \ \mathcal{N}(q)\le LQ, \
\binom{f\left((uk+vl)/\mathcal{N}(q_2)\right)}{f\left((-vk+ul)/\mathcal{N}(q_2)\right)}\in D_R({\bf 0})\right\}
\ll \left(1+\frac{R^2}{1/Q}\right)L.
\end{equation*}
\end{proposition}
Now \eqref{radius}, Proposition \ref{lab} and Proposition \ref{ballcount} give
\begin{equation*}
\Sigma(Q,N;\mathcal{S})\ll \left(N+Q^2\right)Z.
\end{equation*}
This holds in particular for $\mathcal{S}=\mathbb{Z}[i]$, implying \eqref{ine} upon summing up the contributions of $O(\log 2Q)$ dyadic intervals containing the moduli norms $\mathcal{N}(q)$.
\section{Reproof of a slightly weakened version of Theorem 2}
\subsection{General Fourier analytic approach}
To get savings for sparse subsets $\mathcal{S}$ of $\mathbb{Z}[i]$, it may be useful to apply Fourier analysis to estimate the right-hand side of \eqref{ex}.
Let $\Phi\ :\ \mathbb{R}\rightarrow \mathbb{R}$ be a Schwartz class function which is positive in the interval $[1/2,1]$ and assume that $R<1/2$. Then we observe that
\begin{equation} \label{ri}
\begin{split}
& \sharp\left\{q\in \mathcal{S} \ : \ Q/2<\mathcal{N}(q)\le Q, \
\binom{f\left((uk+vl)/\mathcal{N}(q_2)\right)}{f\left((-vk+ul)/\mathcal{N}(q_2)\right)}\in D_R({\bf 0})\right\} \\
\ll & \sum\limits_{q\in \mathcal{S}} \Phi\left(\frac{\mathcal{N}(q)}{Q}\right)\cdot \sum\limits_{x\in \mathbb{Z}} \sum\limits_{y\in \mathbb{Z}} e^{-\pi\left(((uk+lv)/\mathcal{N}(q_2)-x)^2+((-vk+ul)/\mathcal{N}(q_2)-y)^2\right)/R^2}.
\end{split}
\end{equation}
Now the Poisson summation formula, applied to the sums over $x$ and $y$, transforms the right-hand side of \eqref{ri} into
\begin{equation} \label{4}
\begin{split}
& R^2 \cdot \sum\limits_{q\in \mathcal{S}} \Phi\left(\frac{\mathcal{N}(q)}{Q}\right)\cdot \sum\limits_{\alpha\in \mathbb{Z}} \sum\limits_{\beta\in \mathbb{Z}} e^{-\pi (\alpha^2 +\beta^2)R^2}
e\left(\frac{\alpha(uk+vl)+\beta(-vk+ul)}{\mathcal{N}(q_2)}\right)\\
= & R^2 \cdot \sum\limits_{\alpha\in \mathbb{Z}} \sum\limits_{\beta\in \mathbb{Z}} e^{-\pi (\alpha^2 +\beta^2)R^2} \cdot
\sum\limits_{q\in \mathcal{S}} \Phi\left(\frac{\mathcal{N}(q)}{Q}\right)\cdot e\left(\frac{u(\alpha k+\beta l)+v(-\beta k+\alpha l)}{\mathcal{N}(q_2)}\right).
\end{split}
\end{equation}
Using \eqref{radius}, Proposition \ref{lab} and the above, we obtain the following.
\begin{proposition} \label{Fou} We have
\begin{equation} \label{hop}
\begin{split}
& \Sigma(Q,N;\mathcal{S}) \ll QZ\cdot \sharp \left\{q\in \mathcal{S} \ :\ Q/2<\mathcal{N}(q)\le Q\right\}+QZ\times\\ & \max\limits_{\substack{q_2\in \mathcal{S}\\ Q/2<\mathcal{N}(q_2)\le Q\\ r_2 \bmod q_2\\ (r_2,q_2)=1}}
\sum\limits_{\alpha\in \mathbb{Z}} \sum\limits_{\beta\in \mathbb{Z}} e^{-4\pi (\alpha^2 +\beta^2)Q/N} \cdot
\sum\limits_{q\in \mathcal{S}} \Phi\left(\frac{\mathcal{N}(q)}{Q}\right)\cdot e\left(\frac{u(\alpha k+\beta l)+v(-\beta k+\alpha l)}{\mathcal{N}(q_2)}\right).
\end{split}
\end{equation}
\end{proposition}
For suitable sets $\mathcal{S}$, we may hope to be able to estimate the inner sum over $q$ on the right-hand side of \eqref{hop} non-trivially. We now consider the case when $\mathcal{S}=\mathbb{N}$, thus recovering a slightly weakened form of Theorem 2, namely the bound
\begin{equation}\label{iine}
\sum\limits_{\substack{q\in \mathbb{N}\\ q\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2 \ll \left(Q^3+Q^2\sqrt{N}+N\log 2Q\right)\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} |a_n|^2.
\end{equation}
We note that the above Fourier analytic approach also works if $\mathcal{S}$ equals the full set $\mathbb{Z}[i]$. In this case, the Poisson summation formula, applied to the sum over $q$, leads to a counting problem which is in a sense dual to that considered in subsection 5.2. The resulting estimate for $\Sigma(Q,N;\mathbb{Z}[i])$ will be the same, though. For this reason, we do not carry out this calculation here.
\subsection{Case of integer moduli}
In the situation of Theorem 2 we have $\mathcal{S}=\mathbb{N}$, and the contribution of $Q/\sqrt{2}<q\le Q$ equals
\begin{equation} \label{rupa}
\sum\limits_{\substack{q\in \mathbb{N}\\ Q/\sqrt{2}<q\le Q}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2= \Sigma\left(Q^2,N;\mathbb{N}\right).
\end{equation}
On choosing $\Phi$ in such a way that
$$
\Phi(z)=e^{-\pi z} \quad \mbox{ if } z\ge 0,
$$
\eqref{kl} and Proposition \ref{Fou} give
\begin{equation*}
\begin{split}
\Sigma\left(Q^2,N;\mathbb{N}\right) \ll & Q^3Z+Q^2Z\times\\ & \max\limits_{\substack{u_2\in \mathbb{Z}\\ Q/\sqrt{2}<u_2\le Q\\ x_2+y_2i \bmod u_2\\ (x_2+y_2i,u_2)=1}}
\sum\limits_{\alpha\in \mathbb{Z}} \sum\limits_{\beta\in \mathbb{Z}} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N} \cdot
\sum\limits_{u\in \mathbb{Z}} e^{-\pi u^2/Q^2} \cdot e\left(\frac{u(\alpha x_2-\beta y_2)}{u_2}\right)
\end{split}
\end{equation*}
upon noting that $v=0=v_2$ in this case. Applying the Poisson summation formula to the inner sum over $u$, we get
\begin{equation*}
\begin{split}
\sum\limits_{u\in \mathbb{Z}} e^{-\pi u^2/Q^2} \cdot e\left(\frac{u(\alpha x_2-\beta y_2)}{u_2}\right) = & \sum\limits_{h=1}^{u_2} e\left(\frac{h(\alpha x_2-\beta y_2)}{u_2}\right)
\cdot \sum\limits_{\substack{u\in \mathbb{Z}\\ u\equiv h \bmod{u_2}}} e^{-\pi u^2/Q^2}\\
= & \frac{Q}{u_2} \cdot \sum\limits_{\gamma\in \mathbb{Z}} e^{-\pi \gamma^2(Q/u_2)^2} \cdot \sum\limits_{h=1}^{u_2} e\left(\frac{h(\alpha x_2-\beta y_2-\gamma)}{u_2}\right)\\
= & Q \cdot \sum\limits_{\substack{\gamma\in \mathbb{Z}\\ \gamma\equiv \alpha x_2-\beta y_2 \bmod{u_2}}} e^{-\pi \gamma^2(Q/u_2)^2}\\
\le & Q \cdot \sum\limits_{\substack{\gamma\in \mathbb{Z}\\ \gamma\equiv \alpha x_2-\beta y_2 \bmod{u_2}}} e^{-\pi \gamma^2}
\end{split}
\end{equation*}
if $Q/\sqrt{2}<u_2\le Q$.
Therefore,
\begin{equation} \label{z1}
\Sigma\left(Q^2,N;\mathbb{N}\right) \ll Q^3Z+Q^3Z\cdot \max\limits_{\substack{u_2\in \mathbb{Z}\\ Q/\sqrt{2}<u_2\le Q\\ x_2+y_2i \bmod u_2\\ (x_2+y_2i,u_2)=1}}
\sum\limits_{\gamma\in \mathbb{Z}} e^{-\pi \gamma^2}\cdot \mathop{\sum\limits_{\alpha\in \mathbb{Z}} \sum\limits_{\beta\in \mathbb{Z}}}_{\alpha x_2 -\beta y_2 \equiv \gamma \bmod{u_2}} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N}.
\end{equation}
We now consider the truncated sum
\begin{equation*}
T_{\gamma}\left(U,V\right) :=
\mathop{\sum\limits_{|\alpha|\le U} \sum\limits_{|\beta|\le V}}_{\alpha x_2 -\beta y_2 \equiv \gamma \bmod{u_2}} 1.
\end{equation*}
Let $d=(x_2,u_2)$, $x_2':=x_2/d$ and $u_2'=u_2/d$. Let $\overline{x_2'}$ be a multiplicative inverse of $x_2'$ modulo $u_2'$, i.e. $\overline{x_2'}x_2' \equiv 1 \bmod{u_2'}$. Necessarily $(d,y_2)=1$ because otherwise $(x_2+iy_2,u_2)\not=1$. Let $\overline{y_2}$ be a multiplicative inverse of $y_2$ modulo $d$, i.e. $\overline{y_2}y_2\equiv 1\bmod{d}$. It follows that
$$
\alpha x_2 -\beta y_2 \equiv \gamma \bmod{u_2} \quad \Longleftrightarrow\quad \left(\beta\equiv - \overline{y_2}\gamma\bmod{d} \quad \mbox{and} \quad \alpha \equiv \overline{x_2'} \cdot \frac{\beta y_2+\gamma}{d} \bmod{u_2'}\right).
$$
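Indeed, since $d$ divides both $x_2$ and $u_2$, reducing the congruence modulo $d$ gives $-\beta y_2\equiv \gamma \bmod{d}$, that is, $\beta\equiv -\overline{y_2}\gamma\bmod{d}$. In this case $d$ divides $\beta y_2+\gamma$, and
$$
\alpha x_2 \equiv \beta y_2+\gamma \bmod{u_2} \quad \Longleftrightarrow\quad \alpha x_2'\equiv \frac{\beta y_2+\gamma}{d} \bmod{u_2'}\quad \Longleftrightarrow\quad \alpha\equiv \overline{x_2'}\cdot \frac{\beta y_2+\gamma}{d} \bmod{u_2'},
$$
where the last equivalence uses $\gcd(x_2',u_2')=1$.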
This implies
\begin{equation} \label{z2}
T_{\gamma}(U,V)
\ll \left(1+\frac{V}{d}\right)\cdot \left(1+\frac{U}{u_2'}\right)\ll 1+U+V+\frac{UV}{u_2}\ll 1+U+V+\frac{UV}{Q}
\end{equation}
if $Q/\sqrt{2}<u_2\le Q$.
Using partial summation for the sums over $\alpha$ and $\beta$ on the right-hand side of \eqref{z1} together with \eqref{z2} now gives
\begin{equation*}
\Sigma\left(Q^2,N;\mathbb{N}\right)\ll Q^3Z \cdot \sum\limits_{\gamma\in \mathbb{Z}} e^{-\pi \gamma^2}\cdot \left(1+\frac{N^{1/2}}{Q}+\frac{N}{Q^3}\right)\ll \left(Q^3+Q^2N^{1/2}+N\right)Z,
\end{equation*}
implying \eqref{iine} upon summing up the contributions of $O(\log 2Q)$ dyadic intervals containing the moduli norms $\mathcal{N}(q)$.
\section{Proof of Theorem 3}
Now we turn to the main point of this paper, a proof of Theorem 3. In the situation of this theorem, we have $\mathcal{S}=\{q\in \mathbb{Z}[i]\ :\ \mathcal{N}(q)=\Box\}$, and the contribution of $Q^2/2<\mathcal{N}(q)\le Q^2$ equals
\begin{equation} \label{with}
\sum\limits_{\substack{q\in \mathbb{Z}[i]\setminus\{0\}\\ Q^2/2<\mathcal{N}(q)\le Q^2\\ \mathcal{N}(q)=\Box}} \sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2 = \Sigma\left(Q^2,N;\mathcal{S}\right).
\end{equation}
\subsection{Case of large $Q$} We first deal with the case when $Q> N^{1/2-\varepsilon}$. For individual moduli $q\in \mathbb{Z}[i]\setminus \{0\}$, we have
\begin{equation} \label{individual}
\sum\limits_{\substack{r \bmod{q}\\ (r,q)=1}} \left|\sum\limits_{\substack{n\in \mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n \cdot e\left(\mbox{\rm Tr}\left(\frac{nr}{2q}\right)\right)\right|^2
\ll (\mathcal{N}(q)+N)Z,
\end{equation}
which can be proved in a way analogous to the corresponding bound
$$
\sum\limits_{\substack{r \bmod{q}\\ r\in \mathbb{Z}\\ (r,q)=1}} \left|\sum\limits_{n\le N} a_n \cdot e\left(\frac{nr}{q}\right)\right|^2
\ll (q+N)Z
$$
in the setting of rational integers. Summing up trivially over $q$ and using $Q> N^{1/2-\varepsilon}$ now gives
\begin{equation} \label{ulm}
\Sigma\left(Q^2,N;\mathcal{S}\right) \ll Q^{3}N^{\varepsilon}Z.
\end{equation}
\subsection{Case of small $Q$}
In the following, we assume that $Q\le N^{1/2-\varepsilon}$. We observe that $q\in \mathcal{S}$ if and only if $\left(u,v,\sqrt{\mathcal{N}(q)}\right)$ is a Pythagorean triple.
Therefore, one of the numbers $u$ and $v$ is odd, and the other one is even. Without loss of generality, we may assume that $u$ is odd and $v$ is even because the contribution of the modulus $iq=-v+iu$ is the same as that of $q$. The Pythagorean triples $\left(u,v,\sqrt{\mathcal{N}(q)}\right)$ with this property are parametrized by
$$
\left(u,v,\sqrt{\mathcal{N}(q)}\right)=\left(m^2-n^2,2mn,m^2+n^2\right), \quad (m,n)\in\mathbb{Z}^2.
$$
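For instance, $q=3+4i$ has $\mathcal{N}(q)=25=5^2$ and corresponds to $(m,n)=(2,1)$, since $(3,4,5)=\left(2^2-1^2,\, 2\cdot 2\cdot 1,\, 2^2+1^2\right)$.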
Thus, on choosing $\Phi$ in such a way that
$$
\Phi(z)=e^{-\sqrt{z}} \quad \mbox{ if } z\ge 0,
$$
Proposition \ref{Fou} gives
\begin{equation*}
\begin{split}
\Sigma\left(Q^2,N;\mathcal{S}\right) \ll & Q^3Z+
Q^2Z\times\\ & \max\limits_{\substack{\mathcal{N}(q_2)=\Box\\ Q^2/2<\mathcal{N}(q_2)\le Q^2\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N} \cdot
\sum\limits_{m}\sum\limits_{n} e^{-\pi \left(m^2+n^2\right)/Q} \times\\ & e\left(\frac{(m^2-n^2)(\alpha k+\beta l)+2mn(-\beta k+\alpha l)}{\mathcal{N}(q_2)}\right).
\end{split}
\end{equation*}
Applying the Cauchy-Schwarz inequality, we deduce that
\begin{equation} \label{5}
\begin{split}
\left|\Sigma\left(Q^2,N;\mathcal{S}\right)\right|^2
\ll & Q^6Z^2+Q^4Z^2\times\\ & \max\limits_{\substack{\mathcal{N}(q_2)=\Box\\ Q^2/2<\mathcal{N}(q_2)\le Q^2\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \left(\sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N}\right) \left(\sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N}\times \right. \\ & \left.
\left|
\sum\limits_{m}\sum\limits_{n} e^{-\pi (m^2+n^2)/Q} \cdot e\left(\frac{(m^2-n^2)(\alpha k+\beta l)+2mn(-\beta k+\alpha l)}{\mathcal{N}(q_2)}\right)\right|^2\right).\\
\end{split}
\end{equation}
Clearly,
\begin{equation} \label{6}
\sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N}\ll 1+\frac{N}{Q^2} \ll \frac{N}{Q^2}
\end{equation}
and
\begin{equation} \label{7}
\begin{split}
& \sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N}\times\\ & \left|
\sum\limits_{m}\sum\limits_{n} e^{-\pi (m^2+n^2)/Q} \cdot e\left(\frac{(m^2-n^2)(\alpha k+\beta l)+2mn(-\beta k+\alpha l)}{\mathcal{N}(q_2)}\right)\right|^2\\
= & \sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N} \cdot
\sum\limits_{m_1}\sum\limits_{n_1}\sum\limits_{m_2}\sum\limits_{n_2} e^{-\pi (m_1^2+n_1^2+m_2^2+n_2^2)/Q} \times\\ &
e\left(\frac{(m_1^2-m_2^2-n_1^2+n_2^2)(\alpha k+\beta l)+2(m_1n_1-m_2n_2)(-\beta k+\alpha l)}{\mathcal{N}(q_2)}\right).
\end{split}
\end{equation}
Setting
$$
h_1:=m_1+m_2, \quad h_2:=m_1-m_2, \quad j_1:=n_1+n_2,\quad j_2:=n_1-n_2,
$$
the right-hand side of \eqref{7} turns into
\begin{equation} \label{8}
\begin{split}
& \sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N} \cdot
\mathop{\sum\limits_{h_1}\sum\limits_{h_2}}_{h_1\equiv h_2\bmod{2}} \mathop{\sum\limits_{j_1}\sum\limits_{j_2}}_{j_1\equiv j_2\bmod{2}}
e^{-\pi (h_1^2+j_1^2+h_2^2+j_2^2)/(2Q)} \times\\ &
e\left(\frac{(h_1h_2-j_1j_2)(\alpha k+\beta l)+(h_1j_2+h_2j_1)(-\beta k+\alpha l)}{\mathcal{N}(q_2)}\right)\\
= &
\mathop{\sum\limits_{h_1}\sum\limits_{h_2}}_{h_1\equiv h_2\bmod{2}} \mathop{\sum\limits_{j_1}\sum\limits_{j_2}}_{j_1\equiv j_2\bmod{2}}
e^{-\pi (h_1^2+j_1^2+h_2^2+j_2^2)/(2Q)} \cdot \sum\limits_{\alpha} \sum\limits_{\beta} e^{-4\pi (\alpha^2 +\beta^2)Q^2/N} \times\\ &
e\left(\frac{\alpha\left((h_1h_2-j_1j_2)k+(h_1j_2+h_2j_1)l\right)+\beta\left(-(h_1j_2+h_2j_1)k+(h_1h_2-j_1j_2)l\right)}{\mathcal{N}(q_2)}\right).
\end{split}
\end{equation}
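For the record, the passage from \eqref{7} to \eqref{8} rests on the elementary identities
$$
m_1^2-m_2^2=h_1h_2,\quad n_1^2-n_2^2=j_1j_2,\quad 2(m_1n_1-m_2n_2)=h_1j_2+h_2j_1,\quad 2\left(m_1^2+n_1^2+m_2^2+n_2^2\right)=h_1^2+j_1^2+h_2^2+j_2^2.
$$
Similarly, for $a=h_1+ij_1$ and $b=h_2+ij_2$ we have $ab=(h_1h_2-j_1j_2)+(h_1j_2+h_2j_1)i$ and $\mathcal{N}(a)+\mathcal{N}(b)=h_1^2+j_1^2+h_2^2+j_2^2$, which underlies the change of variables in the next step.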
Applying the Poisson summation formula to the sums over $\alpha$ and $\beta$ and then changing variables to $a=h_1+ij_1$ and $b=h_2+ij_2$, the right-hand side of \eqref{8} transforms into
\begin{equation} \label{9}
\begin{split}
& \frac{N}{4Q^2}\cdot \mathop{\sum\limits_{h_1}\sum\limits_{h_2}}_{h_1\equiv h_2\bmod{2}} \mathop{\sum\limits_{j_1}\sum\limits_{j_2}}_{j_1\equiv j_2\bmod{2}}
e^{-\pi (h_1^2+j_1^2+h_2^2+j_2^2)/(2Q)} \cdot \sum\limits_{\gamma} \sum\limits_{\delta} \\ &
e^{-\pi \left(\left(((h_1h_2-j_1j_2)k+(h_1j_2+h_2j_1)l)/\mathcal{N}(q_2)-\gamma\right)^2+\left((-(h_1j_2+h_2j_1)k+(h_1h_2-j_1j_2)l)/\mathcal{N}(q_2)-\delta\right)^2\right)N/(4Q^2)}\\
= & \frac{N}{4Q^2} \cdot \mathop{\sum\limits_{a\in \mathbb{Z}[i]}\sum\limits_{b\in \mathbb{Z}[i]}}_{a\equiv b\bmod{2}}
e^{-\pi (\mathcal{N}(a)+\mathcal{N}(b))/(2Q)} \times\\ & \sum\limits_{\gamma} \sum\limits_{\delta}
e^{-\pi \left(\left((\Re(ab)k+\Im(ab)l)/\mathcal{N}(q_2)-\gamma\right)^2+
\left((-\Im(ab)k+\Re(ab)l)/\mathcal{N}(q_2)-\delta\right)^2\right)N/(4Q^2)}.
\end{split}
\end{equation}
Setting $ab=q'=u'+v'i$ and re-arranging summations, the last line turns into
\begin{equation} \label{10}
\begin{split}
& \frac{N}{4Q^2} \cdot \sum\limits_{q'\in \mathbb{Z}[i]} \sum\limits_{\gamma} \sum\limits_{\delta}
e^{-\pi \left(\left((u'k+v'l)/\mathcal{N}(q_2)-\gamma\right)^2+
\left((-v'k+u'l)/\mathcal{N}(q_2)-\delta\right)^2\right)N/(4Q^2)}\times\\ & \mathop{\sum\limits_{a\in \mathbb{Z}[i]}\sum\limits_{b\in \mathbb{Z}[i]}}_{\substack{a\equiv b\bmod{2}\\ ab=q'}}
e^{-\pi (\mathcal{N}(a)+\mathcal{N}(b))/(2Q)}\\
= & \frac{N}{4Q^2} \cdot \Bigg( \sum\limits_{q'\in \mathbb{Z}[i]\setminus\{0\}} \sum\limits_{\gamma} \sum\limits_{\delta}
e^{-\pi \left(\left((u'k+v'l)/\mathcal{N}(q_2)-\gamma\right)^2+
\left((-v'k+u'l)/\mathcal{N}(q_2)-\delta\right)^2\right)N/(4Q^2)}\times\\ & \mathop{\sum\limits_{a\in \mathbb{Z}[i]}\sum\limits_{b\in \mathbb{Z}[i]}}_{\substack{a\equiv b\bmod{2}\\ ab=q'}}
e^{-\pi (\mathcal{N}(a)+\mathcal{N}(b))/(2Q)} +
\sum\limits_{\gamma} \sum\limits_{\delta} e^{-\pi \left(\gamma^2+\delta^2\right)N/(4Q^2)} \cdot \Bigg(2 \sum\limits_{c\in \mathbb{Z}[i]} e^{-2\pi\mathcal{N}(c)/Q}-1\Bigg)\Bigg).
\end{split}
\end{equation}
Clearly,
\begin{equation} \label{11}
\sum\limits_{\gamma} \sum\limits_{\delta} e^{-\pi \left(\gamma^2+\delta^2\right)N/(4Q^2)} \cdot \left(2 \sum\limits_{c\in \mathbb{Z}[i]} e^{-2\pi\mathcal{N}(c)/Q}-1\right)
\ll \left(1+\frac{Q^2}{N}\right) \cdot Q \ll Q,
\end{equation}
and the arithmetic-geometric mean inequality gives
\begin{equation} \label{12}
\begin{split}
& \mathop{\sum\limits_{a\in \mathbb{Z}[i]}\sum\limits_{b\in \mathbb{Z}[i]}}_{\substack{a\equiv b\bmod{2}\\ ab=q'}}
e^{-\pi (\mathcal{N}(a)+\mathcal{N}(b))/(2Q)} \le \mathop{\sum\limits_{a\in \mathbb{Z}[i]}\sum\limits_{b\in \mathbb{Z}[i]}}_{\substack{a\equiv b\bmod{2}\\ ab=q'}}
e^{-\pi \sqrt{\mathcal{N}(a)\mathcal{N}(b)}/Q}\\
= & e^{-\pi \sqrt{\mathcal{N}(q')}/Q} \cdot \mathop{\sum\limits_{a\in \mathbb{Z}[i]}\sum\limits_{b\in \mathbb{Z}[i]}}_{\substack{a\equiv b\bmod{2}\\ ab=q'}} 1 \ll
\mathcal{N}(q')^{\varepsilon} \cdot e^{-\pi \sqrt{\mathcal{N}(q')}/Q},
\end{split}
\end{equation}
where for the last inequality, we have used the estimate
$$
\sum\limits_{\substack{d\in \mathbb{Z}[i]\\ d|z}} 1 \ll \mathcal{N}(z)^{\varepsilon}
$$
for the generalized divisor function in $\mathbb{Z}[i]$.
Combining \eqref{5}, \eqref{6}, \eqref{7}, \eqref{8}, \eqref{9}, \eqref{10}, \eqref{11} and \eqref{12}, and taking the square root, we obtain
\begin{equation*}
\begin{split}
\Sigma\left(Q^2,N;\mathcal{S}\right) \ll &
\left(Q^3+Q^{1/2}N\right)Z+NZ \cdot \max\limits_{\substack{\mathcal{N}(q_2)=\Box\\ Q^2/2<\mathcal{N}(q_2)\le Q^2\\ r_2 \bmod q_2\\ (r_2,q_2)=1}}\left|\sum\limits_{q'\in \mathbb{Z}[i]\setminus\{0\}} \mathcal{N}(q')^{\varepsilon} \cdot e^{-\pi \sqrt{\mathcal{N}(q')}/Q}\times\right. \\ & \left.
\sum\limits_{\gamma} \sum\limits_{\delta}
e^{-\pi \left(\left((u'k+v'l)/\mathcal{N}(q_2)-\gamma\right)^2+
\left((-v'k+u'l)/\mathcal{N}(q_2)-\delta\right)^2\right)N/(4Q^2)}\right|^{1/2}.
\end{split}
\end{equation*}
Now using $Q\le N^{1/2-\varepsilon}$ and the rapid decay of the function $e^{-x^2}$, we may cut summations at the cost of a small error, leading to
\begin{equation} \label{repl}
\begin{split}
& \Sigma\left(Q^2,N;\mathcal{S}\right) \ll
\left(Q^3+Q^{1/2}N\right)Z+NQ^{\varepsilon}Z \times\\ & \max\limits_{\substack{\mathcal{N}(q_2)=\Box\\ Q^2/2<\mathcal{N}(q_2)\le Q^2\\ r_2 \bmod q_2\\ (r_2,q_2)=1}} \sharp\left\{q'\in \mathbb{Z}[i] : \mathcal{N}(q')\le Q^2N^{\varepsilon}, \
\binom{f\left((u'k+v'l)/\mathcal{N}(q_2)\right)}{f\left((-v'k+u'l)/\mathcal{N}(q_2)\right)}\in D_{R'}({\bf 0})\right\}^{1/2},
\end{split}
\end{equation}
where $f(x)$ is defined as in \eqref{fdef} and
$$
R':=4Q^2N^{\varepsilon-1}.
$$
Using Proposition \ref{ballcount} with $Q$ replaced by $Q^2$ and $L=N^{\varepsilon}$, the maximum on the right-hand side of \eqref{repl} is bounded by
$$
\ll \left(N^{\varepsilon}\left(1+\frac{R'}{1/Q^2}\right)\right)^{1/2} =N^{\varepsilon/2}\left(1+4Q^{4}N^{\varepsilon-1}\right)^{1/2}.
$$
Hence, we have
\begin{equation} \label{ab}
\Sigma\left(Q^2,N;\mathcal{S}\right) \ll (QN)^{\varepsilon}
\left(Q^3+Q^{1/2}N+Q^2N^{1/2}\right)Z.
\end{equation}
Taking \eqref{ulm} into consideration, we deduce that \eqref{ab} holds
for all $Q,N\ge 1$. This together with \eqref{with} gives Theorem \ref{theo5} upon summing up the contributions of $O(\log 2Q)$ dyadic intervals containing the moduli norms $\mathcal{N}(q)$.
\section{Open problems}
The following problems appear naturally in connection with this work. \\ \\
(i) Can these results be extended to general number fields?\\ \\
(ii) What can be proved for more general sets of moduli such as moduli whose norms are represented by polynomials?\\ \\
(iii) Is it possible to improve the above large sieve inequality for square norm moduli along similar lines as in \cite{BaZh}?
In this section, we state some useful facts in both design theory and group theory. Recall that a group $G$ is called almost simple if $X\unlhd G\leq \mathrm{Aut}(X)$, where $X$ is a nonabelian simple group. If $H$ is a maximal subgroup not containing the socle $X$ of an almost simple group $G$, then $G=HX$, and since we may identify $X$ with $\mathrm{Inn}(X)$, the group of inner automorphisms of $X$, we also conclude that $|H|$ divides $|\mathrm{Out}(X)|\cdot |X\cap H|$. This implies the following elementary and useful fact:
\begin{lemma}\label{lem:New}{\rm \cite[Lemma 2.2]{a:ABD-PSL3}}
Let $G$ be an almost simple group with socle $X$, and let $H$ be maximal in $G$ not containing $X$. Then
\begin{enumerate}[{\rm \quad (a)}]
\item $G=HX$;
\item $|H|$ divides $|\mathrm{Out}(X)|\cdot |X\cap H|$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem:Tits}
Suppose that $\mathcal{D}$ is a symmetric $(v,k,\lambda)$ design admitting a flag-transitive and point-primitive almost simple automorphism group $G$ with socle $X$ of Lie type. Suppose also that the point-stabiliser $G_{\alpha}$, not containing $X$, is not a parabolic subgroup of $G$. Then $\gcd(p,v-1)=1$.
\end{lemma}
\begin{proof}
Since $G_{\alpha}$ is maximal in $G$ and is not parabolic, Tits' Lemma \cite[1.6]{a:tits} implies that $p$ divides $|G:G_{\alpha}|=v$, and so $\gcd(p,v-1)=1$.
\end{proof}
\begin{lemma}\label{lem:subdeg}{\rm \cite[3.9]{a:LSS1987}}
If $X$ is a group of Lie type in characteristic $p$, acting on the set
of cosets of a maximal parabolic subgroup, and $X$ is not $\mathrm{PSL}_n(q)$, $\mathrm{P \Omega}_{2m}^{+}(q)$
(with $m$ odd) and $E_{6}(q)$, then there is a unique subdegree which is a power of $p$.
\end{lemma}
\begin{lemma}\label{lem:six}{\rm \cite[Lemma 2.1]{a:ABD-PSL2}}
Let $\mathcal{D}$ be a symmetric $(v,k,\lambda)$ design, and let $G$ be a flag-transitive automorphism group of $\mathcal{D}$. If $\alpha$ is a point in $\mathcal{P}$ and $H:=G_{\alpha}$, then
\begin{enumerate}[\rm \quad (a)]
\item $k(k-1)=\lambda(v-1)$;
\item $4\lambda(v-1)+1$ is a square;
\item $k\mid |H|$ and $\lambda v<k^2$;
\item $k\mid \gcd(\lambda(v-1),|H|)$;
\item $k\mid \lambda d$, for all subdegrees $d$ of $G$.
\end{enumerate}
\end{lemma}
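As a quick illustration of conditions (a)--(c), consider the parameters $(v,k,\lambda)=(45,12,3)$ arising in Lemma~\ref{lem:su2} below: here $k(k-1)=132=\lambda(v-1)$, $4\lambda(v-1)+1=529=23^2$ is a square, and $\lambda v=135<144=k^2$.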
If a group $G$ acts primitively on a set $\mathcal{P}$ and $\alpha\in \mathcal{P}$ (with $|\mathcal{P}|\geq 2$), then the point-stabiliser $G_{\alpha}$ is maximal in $G$ \cite[Corollary 1.5A]{b:Dixon}. Therefore, in our study, we need a list of all maximal subgroups of an almost simple group $G$ with socle $X:=\mathrm{PSU}_{4}(q)$. Note that if $H$ is a maximal subgroup of $G$, then $H_{0}:=H\cap X$ is not necessarily maximal in $X$, in which case $H$ is called a \emph{novelty}. By~\cite[Tables 8.10 and 8.11]{b:BHR-Max-Low}, the complete list of maximal subgroups of an almost simple group $G$ with socle $\mathrm{PSU}_{4}(q)$ is known, and in this case, only three novelties arise.
\begin{lemma}\label{lem:maxes}
Let $G$ be a group such that $X:=\mathrm{PSU}_{4}(q)\lhd G \leq \mathrm{Aut}(X)$, let $H$ be a maximal subgroup of $G$ not containing $X$, and set $d=\gcd(4,q+1)$. Then $X\cap H$ is (isomorphic to) one of the subgroups listed in {\rm Table~\ref{tbl:maxes}}.
\end{lemma}
\begin{proof}
The maximal subgroups $H$ of $G$ can be read off from~\cite[Tables 8.10 and 8.11]{b:BHR-Max-Low}.
\end{proof}
\begin{table}[h]
\scriptsize
\centering
\caption{Maximal subgroups $H$ of almost simple groups with socle $X=\mathrm{PSU}_{4}(q)$.}\label{tbl:maxes}
\begin{tabular}{cp{4cm}p{5cm}}
\noalign{\smallskip}\hline\noalign{\smallskip}
Line& $H\cap X$ & Comments \\
\hline\noalign{\smallskip}
$1$ & $^{\hat{}}E_{q}^{1+4}:\mathrm{SU}_{2}(q):(q^{2}-1)$ &
\\
%
$2$ & $^{\hat{}}E_{q}^{4}:\mathrm{SL}_{2}(q^{2}):(q-1)$ & \\
%
$3$ & $^{\hat{}}\mathrm{GU}_{3}(q)$ &
\\
%
$4$ & $^{\hat{}}(q+1)^{3}:S_{4}$ & novelty if $q=3$
\\
%
$5$ & $^{\hat{}}\mathrm{SU}_{2}(q)^{2}:(q+1)\cdot 2$ & $q\geq 3$
\\
%
$6$ & $^{\hat{}}\mathrm{SL}_{2}(q^{2})\cdot (q-1)\cdot 2$ & $q\geq 4$, novelty if $q=3$
\\
%
$7$ & $^{\hat{}}\mathrm{SU}_{4}(q_{0})$ & $q=q_{0}^r$ and $r$ odd prime
\\
%
$8$ & $^{\hat{}}\mathrm{Sp}_{4}(q)\cdot \gcd(2,q+1)$ &
\\
%
$9$ & $^{\hat{}}{\mathrm{SO}_{4}^{+}}(q)\cdot d$ & $q\geq 5$ odd
\\
%
$10$ & $^{\hat{}}{\mathrm{SO}_{4}^{-}}( q)\cdot d$ & $q$ odd
\\
%
$11$ & $^{\hat{}}(4\circ 2^{1+4})^{\cdot}S_{6}$ & $p=q\equiv 7 \mod 8$ \\
%
$12$ & $^{\hat{}}(4\circ 2^{1+4})\cdot A_{6}$ & $p=q\equiv 3 \mod 8$ \\
%
$13$ & $^{\hat{}}d\circ 2^{\cdot}\mathrm{PSL}_{2}(7)$ & novelty, $q=p\equiv 3, 5, 6 \mod 7$, $q\neq 3$ \\
%
$14$ & $^{\hat{}}d\circ 2^{\cdot}A_{7}$ & $q=p\equiv 3, 5, 6 \mod 7$ \\
%
$15$ & $^{\hat{}}{4_{2}^{\cdot}}\mathrm{PSL}_{3}(4)$ & $q=3$\\
%
$16$ & $^{\hat{}}d\circ 2^{\cdot}\mathrm{PSU}_{4}(2)$ & $q=p\equiv 5 \mod 6$ \\
\hline\noalign{\smallskip}
\multicolumn{3}{l}{Note: $d:=\gcd(4,q+1)$}
\end{tabular}
\end{table}
\section{\bf Proof of the main result}\label{sec:proof}
In this section, suppose that $\mathcal{D}$ is a nontrivial symmetric $(v, k, \lambda)$ design and $G$ is an almost simple automorphism group with simple socle $X:=\mathrm{PSU}_{4}(q)$, where $q = p^a$ with $p$ prime, that is to say, $X\lhd G \leq \mathrm{Aut}(X)$. Suppose also that $V=\mathbb{F}_{q^2}^{4}$ is the underlying vector space of $X$ over the finite field $\mathbb{F}_{q^2}$.
Let now $G$ be a flag-transitive and point-primitive automorphism group of $\mathcal{D}$. Then the point-stabiliser $H := G_{\alpha}$ is maximal in $G$ \cite[Corollary 1.5A]{b:Dixon}. Set $H_{0}:= H\cap X$. Then by Lemma~\ref{lem:maxes}, the subgroup $H_{0} $ is (isomorphic to) one of the subgroups as in Table~\ref{tbl:maxes}. Moreover, by Lemma~\ref{lem:New},
\begin{align}
v=\frac{|X|}{|H_{0}|}=\frac{q^6(q^2-1)(q^3+1)(q^4-1)}{\gcd(4, q+1)\cdot |H_{0}|}.\label{eq:v}
\end{align}
Note that $|\mathrm{Out}(X)|=2a\cdot \gcd(4,q+1)$. Therefore, by Lemmas~\ref{lem:New}(b) and~\ref{lem:six}(c),
\begin{align}\label{eq:k-out}
k \mid 2a\cdot \gcd(4,q+1) \cdot |H_{0}|.
\end{align}
We now consider all possibilities for the subgroup $H_{0}$ as in Table~\ref{tbl:maxes}, and prove that the only possible cases are those listed in Table~\ref{tbl:main}.
\begin{lemma}\label{lem:su2}
If $H_{0} =$ $^{\hat{}}E_{q}^{1+4}:\mathrm{SU}_{2}(q):(q^{2}-1)$, then $q=2$ and $(v, k, \lambda)=(45, 12, 3)$.
\end{lemma}
\begin{proof}
In this case, $|H_0|=q^6(q^2-1)^2/\gcd(4,q+1)$, and so by~\eqref{eq:v}, we have that $v=q^5+q^3+q^2+1$. Then by Lemma \ref{lem:six}(a), $k$ divides $\lambda(v-1)=\lambda q^2(q^3+q+1)$. It follows from Lemma \ref{lem:subdeg} that $G$ has a subdegree which is a power $p^{b}$ of the prime $p$, and so by Lemma~\ref{lem:six}(e), we conclude that $k$ divides $\lambda p^b$. Hence $k$ divides $\lambda \gcd(p^b,v-1)=\lambda\gcd(p^{b},q^2(q^3+q+1))$, and since $p^b$ divides $q^6$, it follows that $k$ divides $\lambda q^2$. Let now $m$ be a positive integer such that $mk = \lambda q^2$. Since $\lambda<k$, we have that
\begin{align}\label{eq:case-1-m}
m<q^2.
\end{align}
By Lemma~\ref{lem:six}(a), $k(k-1)=\lambda(v-1)$, and so
\begin{align*}
\frac{\lambda q^2}{m}(k-1) = \lambda(q^5+q^3+q^2).
\end{align*}
Thus,
\begin{align}
k= m(q^3+q+1)+1 \quad \text{and} \quad
\lambda =m^2q+\frac{m^2(q+1)+m}{q^2}.\label{eq:case1-lam}
\end{align}
Since $\lambda$ is an integer, \eqref{eq:case1-lam} implies that
\begin{align}\label{eq:case1-2}
q^2\mid m^2(q+1)+m.
\end{align}
It is easy to see that $\gcd(q^{2},m)=1$, and so $q^{2}$ divides $m(q+1)+1$. Let $n$ be a positive integer such that $m(q+1)+1=nq^{2}$. Then
\begin{align*}
m=\frac{nq^2-1}{q+1}=n(q-1)+\frac{n-1}{q+1}.
\end{align*}
If $n\neq 1$, then $q+1$ would divide $n-1$, and so $n\geq q+2$. Note by~\eqref{eq:case-1-m} that $nq^2=m(q+1)+1<q^2(q+1)+1$ which implies that $n\leq q+1 $, which is a contradiction. Therefore, $n=1$, and hence $m=q-1$. It follows from~\eqref{eq:case1-lam} that $k=q^2(q^2-q+1)$. By~\eqref{eq:k-out}, $k$ divides $2aq^5(q-1)^2$. Therefore, $q^2-q+1$ must divide $2a(q^2-1)^2$. Since $\gcd(q^2-q+1, q-1)=1$ and $\gcd(q^2-q+1, q^2+2q+1)$ divides $3$, $q^2-q+1$ must divide $6a$.
This holds only when $q=2$, in which case $v=45$, $k=12$ and $\lambda=3$. By~\cite[Theorem 3.3]{a:Praeger-45-12-3}, this design is unique (up to isomorphism) with full automorphism group $\mathrm{PSU}_{4}(2):2$.
\end{proof}
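The final divisibility condition $q^{2}-q+1\mid 6a$ is also easy to scan by computer; a minimal Python sketch (ours, purely illustrative) confirms that $q=2$ is the only prime power $q=p^{a}$ that survives:
\begin{verbatim}
def is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

hits = []
for p in (n for n in range(2, 100) if is_prime(n)):
    a = 1
    while p ** a < 10 ** 6:
        q = p ** a
        if (6 * a) % (q * q - q + 1) == 0:
            hits.append((p, a))
        a += 1
print(hits)  # [(2, 1)], i.e. q = 2
\end{verbatim}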
\begin{lemma}\label{lem:sl2}
The subgroup $H_{0}$ cannot be $^{\hat{}}E_{q}^{4}:\mathrm{SL}_{2}(q^{2}):(q-1)$.
\end{lemma}
\begin{proof}
Here $|H_0|=d^{-1}q^6(q^4-1)(q-1)$, where $d=\gcd(4,q+1)$. According to~\eqref{eq:v}, we have that $v=q^4+q^3+q+1$. Note by Lemma \ref{lem:six}(a) that $k$ divides $\lambda(v-1)=\lambda q(q^3+q^2+1)$. Moreover, by Lemma \ref{lem:subdeg} and Lemma~\ref{lem:six}(e), $k$ divides $\lambda p^b$, where $p^b$ is a prime power subdegree of $G$. Therefore $k$ divides $\lambda \gcd(p^b,v-1)=\lambda\gcd(p^{b},q(q^3+q^2+1))$, and since $p^b$ divides $q^6$, it follows that $k$ divides $\lambda q$. If $m$ is a positive integer such that $mk = \lambda q$, then since $\lambda<k$, we have that
\begin{align}\label{eq:case-2-m}
m<q.
\end{align}
By Lemma~\ref{lem:six}(a), $k(k-1)=\lambda(v-1)$, and so
\begin{align*}
\frac{\lambda q}{m}(k-1) = \lambda(q^4+q^3+q).
\end{align*}
Thus,
\begin{align}
k= m(q^3+q^2+1)+1 \quad \text{and} \quad
\lambda = m^2(q^2+q)+\frac{m^2+m}{q}.\label{eq:case2-lam}
\end{align}
It follows from \eqref{eq:case2-lam} that $q\mid m^2+m$. It is easy to see that $\gcd(q,m)=1$, and so $q$ divides $m+1$. By \eqref{eq:case-2-m}, we conclude that $m=q-1$, and hence $k=q(q^3-q+1)$ and $\lambda=(q-1)(q^3-q+1)$ by \eqref{eq:case2-lam}. Note by Lemma~\ref{lem:six}(c) that $k$ divides $8aq^6(q^4-1)(q-1)$. Then $q^3-q+1$ must divide $8a(q^4-1)(q-1)$. Since $\gcd(q^3-q+1, q-1)=1$, $q^3-q+1$ must divide $8a(q^3+q^2+q+1)$, and hence $q^3-q+1$ divides $8a(q^2+2q)$, which is impossible.
\end{proof}
\begin{lemma}\label{lem:GU3}
If $H_{0}$ is $^{\hat{}}\mathrm{GU}_{3}(q)$, then $q=2$ and $(v, k, \lambda)=(40, 27, 18)$.
\end{lemma}
\begin{proof}
Let $\{u_1,u_2,u_3,u_4\}$ be a canonical basis for the underlying unitary space $V$. In this case, $H=G_U$, where $U$ is a $1$-dimensional non-degenerate subspace, say $U=\langle u_1\rangle $. Then $|H_0|=q^3(q^2-1)(q^3+1)(q+1)/\gcd(4,q+1)$ which implies by \eqref{eq:v} that $v=q^{3}(q-1)(q^{2}+1)$. Let now $W:=\langle u_{1}, u_{2} \rangle$. Then $G$ has a subdegree $|G_U:G_{U,W}|$ dividing $(q+1)(q^{3}+1)$ (see \cite[p. 549]{a:reg-classical} and \cite[p. 336]{a:Saxl2002}). Therefore Lemma~\ref{lem:six}(d) implies that $k$ must divide $\lambda (q+1)(q^{3}+1)$. On the other hand, $k$ divides $\lambda(v-1)=\lambda(q^{2}-q+1)(q^{4}-q-1)$. Therefore, $k$ divides $\lambda(q^{2}-q+1)$, and so $mk=\lambda(q^{2}-q+1)$, for some positive integer $m$. Then
\begin{align}\label{eq:GU3-m}
m<q^{2}-q+1.
\end{align}
By Lemma~\ref{lem:six}(a), we have that $k(k-1)=\lambda(v-1)$, and so
\begin{align}
k= m(q^4-q-1)+1.\label{eq:GU3-k}
\end{align}
We first show that $q^2$ does not divide $k$. If $q^2$ divided $k$, then by~\eqref{eq:GU3-k}, $q^2$ would divide $m(q+1)-1$. Let now $n$ be a positive integer such that $m(q+1)-1=nq^{2}$. Then
\begin{align*}
m=\frac{nq^2+1}{q+1}=n(q-1)+\frac{n+1}{q+1}.
\end{align*}
Therefore, $q+1$ must divide $n+1$, and so $n\geq q$. Note by \eqref{eq:GU3-m} that $nq^2=m(q+1)-1<(q^2-q+1)(q+1)-1=q^3$. Thus $n \leq q-1$, which is a contradiction. Therefore, $q^2$ does not divide $k$.\smallskip
Note by Lemma~\ref{lem:New}(b) that $k$ divides $2ag(q)$, where $g(q)=q^3(q+1)(q^2-1)(q^3+1)$.
Since $k$ is not a multiple of $q^2$, we must have $k\mid 2ag_{1}(q)$, where $g_{1}(q)=g(q)/q=q^2(q+1)(q^2-1)(q^3+1)$. Then, by~\eqref{eq:GU3-k}, we must have
\begin{align}\label{eq:GU3-q2}
m(q^4-q-1)+1\mid 2ag_{1}(q).
\end{align}
Let now $d(q)=q^3+q^2-4q-3$ and $h(q)=q^4+q^3-q^2+q+3$. Then $2ah(q)[m(q^4-q-1)+1]-2mag_{1}(q)=2mad(q)+2ah(q)$, and so~\eqref{eq:GU3-q2} implies that $m(q^4-q-1)+1$ divides $2mad(q)+2ah(q)$. Thus $m[q^4-q-1-2ad(q)]+1\leq 2ah(q)$. Note that $m\leq 2ah(q)/[q^4-q-1-2ad(q)]\leq 33$ for $q\neq 4$, while for $q=4$ the bound \eqref{eq:GU3-m} already gives $m\leq 12$. Therefore, $m\leq \min\{33, q^2-q+1\}$, for all $q=p^a$.\smallskip
We now show that $q$ does not divide $k$. If $q$ divided $k$, then by~\eqref{eq:GU3-k}, $q$ would divide $m-1$. As $m\leq \min\{33, q^2-q+1\}$, it follows that $q\leq 32$. Therefore,
\begin{align}\label{eq:GU3-ap-1}
\begin{array}{llll}
p =2, & \quad a\leq 5; \\
p =3, & \quad a\leq 3; \\
p =5, & \quad a\leq 2; \\
p =7, 11, 13, 17, 19, 23, 29, 31,& \quad a= 1.
\end{array}
\end{align}
For the pairs $(p,a)$ as in \eqref{eq:GU3-ap-1}, since $m\leq \min\{33, q^2-q+1\}$ and $m\equiv 1 \mod q$, the parameter $k= m(q^4-q-1)+1$ does not divide $2ag_{1}(q)$, which is a contradiction. Therefore, $k$ is not a multiple of $q$. Again applying Lemmas~\ref{lem:New}(b) and \eqref{eq:GU3-k}, we have that
\begin{align}\label{eq:GU3-1}
m(q^4-q-1)+1\mid 2ag_{2}(q),
\end{align}
where $g_{2}(q)=g(q)/q^2=q(q+1)(q^2-1)(q^3+1)$. If $d_{1}(q)=3q^3-q^2-q+1$ and $h_{1}(q)=q^3+q^2-q+1$, then $2ah_{1}(q)[m(q^4-q-1)+1]-2mag_{2}(q)=2a[md_{1}(q)-h_{1}(q)]$.
It follows from \eqref{eq:GU3-1} that $m(q^4-q-1)+1$ divides $2a[md_{1}(q)-h_{1}(q)]$, and so $q^4-q-1<2a[|d_{1}(q)|+|h_{1}(q)|]$. This inequality holds only for $q \in \{2, 3, 4, 5, 7, 8, 9, 16, 32\}$. For these values of $q$, as $k= m(q^4-q-1)+1$ divides $2ag_{2}(q)$, for $m\leq \min\{33, q^2-q+1\}$, we conclude that $q=2$, in which case $v=40$, $k=27$ and $\lambda=18$. It follows from~\cite{a:Braic-2500-nopower,a:rank3} that the design $\mathcal{D}$ is the complement of $\mathrm{PG}(3,3)$ with parameters $(40,13,4)$ and flag-transitive and point-primitive automorphism group $\mathrm{PSU}_{4}(2)$ or~$\mathrm{PSU}_{4}(2):2$.
\end{proof}
\begin{lemma}\label{lem:q+1:s4}
If $H_{0}$ is $^{\hat{}}(q+1)^3:S_{4}$, then $q=2$ and $(v, k, \lambda)=(40, 27, 18)$.
\end{lemma}
\begin{proof}
In this case, $|H_0|=24d^{-1}(q+1)^3$, where $d=\gcd(4,q+1)$. Then by~\eqref{eq:v}, we have $v=q^6(q-1)^2(q^2+1)(q^2-q+1)/24 $, and since $|\mathrm{Out} (X)|= 2a\cdot\gcd(4, q+1)$, it follows from (\ref{eq:k-out}) that $k$ divides $48a(q+1)^3$. By \cite{a:reg-classical,a:Zhou-lam3-classical} and Lemma~\ref{lem:six}(c), we may assume that $\lambda$ is at least $4$, and so
\begin{align*}
\frac{q^6(q-1)^2(q^2+1)(q^2-q+1)}{6}\leq \lambda v< k^2 \leq 48^2a^2(q+1)^6.
\end{align*}
This implies that $q^6(q-1)^2(q^2+1)(q^2-q+1)<13824 a^2 (q+1)^6$. Thus
\begin{align*}
\frac{q^6(q-1)^2(q^2+1)(q^2-q+1)}{(q+1)^6}<13824 a^2 .
\end{align*}
This inequality is true only when $q \in \{2,3,4,5,8\}$. Since $k$ is a divisor of $48a(q+1)^3$, for each such $q=p^a$, the possible values of $k$ and $v$ are listed in Table~\ref{tbl:(q+1)3:S4}.
\begin{table}[h]
\centering\scriptsize
\caption{Possible values of $k$ and $v$ when $q \in \{2,3,4,5,8\}$.}
\label{tbl:(q+1)3:S4}
\begin{tabular}{clllll}
\noalign{\smallskip}\hline\noalign{\smallskip}
$q$ & $2$ & $3$ & $4$ & $5$ & $8$ \\
\hline\noalign{\smallskip}
$v$ & $40$ & $8505$ & $339456$ & $5687500$ & $1982955520$ \\
$k$ divides & $1296$ & $3072$ & $12000$ & $10368$ & $104976$\\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
The only possible parameter set $(v,k,\lambda)$ satisfying $\lambda < k<v-1$ and $\lambda(v-1)=k(k-1)$ is $(v, k, \lambda)=(40, 27, 18)$, arising when $q=2$. By~\cite{a:Braic-2500-nopower,a:rank3}, the design $\mathcal{D}$ is the complement of the Higman design with parameters $(40,13,4)$, with flag-transitive and point-primitive automorphism group $\mathrm{PSU}_{4}(2)$ or $\mathrm{PSU}_{4}(2):2$.
\end{proof}
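The search behind Table~\ref{tbl:(q+1)3:S4} is easily reproduced; a short Python sketch (ours, illustrative) scans the divisors $k$ of the stated bounds:
\begin{verbatim}
cases = {2: (40, 1296), 3: (8505, 3072), 4: (339456, 12000),
         5: (5687500, 10368), 8: (1982955520, 104976)}
for q, (v, bound) in cases.items():
    for k in range(2, bound + 1):
        if bound % k or k >= v - 1 or k * (k - 1) % (v - 1):
            continue
        lam = k * (k - 1) // (v - 1)
        if 0 < lam < k:
            print(q, (v, k, lam))  # only q = 2: (40, 27, 18)
\end{verbatim}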
\begin{lemma}\label{lem:SU22:(q+1)}
The subgroup $H_{0}$ cannot be $^{\hat{}}\mathrm{SU}_{2}(q)^{2}:(q+1)\cdot 2$, for $q\geq 3$.
\end{lemma}
\begin{proof}
In this case, $H$ preserves a decomposition $V = V_{1}\oplus V_{2}$ into nonsingular subspaces $V_{1}=\langle u_1,u_2\rangle$ and $V_{2}=\langle u_3,u_4\rangle$. Take the partition $y:=\{\langle u_{1},u_{3} \rangle,\langle u_{2}, u_{4} \rangle\}$. Then the subdegree $|H:H_y|$ of $G$ divides $2(q^{2}-1)^2$ (see \cite[p. 550]{a:reg-classical} and \cite[pp. 336-337]{a:Saxl2002}). Thus by Lemma~\ref{lem:six}(e), we conclude that $k$ divides $2\lambda (q^{2}-1)^2$. Note in this case that $|H_0|=2d^{-1}q^2(q^2-1)^2(q+1)$, where $d=\gcd(4,q+1)$. By~\eqref{eq:v}, we have that $v=q^4(q^2-q+1)(q^2+1)/2$.
Since $2(v-1)=(q+1)(q^{7}-2q^{6}+4q^{5}-5q^{4}+6q^{3}-6q^{2}+6q-6)+4$, we conclude that $\gcd(v-1, q+1)$ divides $2$. Note also that $q-1$ divides $v-1$. Therefore, $\gcd(v-1,2(q^{2}-1)^2)$ divides $8(q-1)^2$. Since $k$ divides $\lambda\gcd(v-1,2(q^{2}-1)^2)$, we conclude that $k$ divides $\lambda f(q)$, where $f(q)=8(q-1)^2$. Thus $mk = \lambda f(q)$, for some positive integer $m$. Since $k(k-1)=\lambda(v-1)$ and $\lambda<k$, it follows that
\begin{align}\label{eq:case5-1}
k= \frac{m(v-1)}{f(q)}+1,
\end{align}
where $f(q)=8(q-1)^2$ and
\begin{align}\label{eq:case2-su2-m}
m<8(q-1)^2.
\end{align}
Note by (\ref{eq:k-out}) that $k\mid 4ag(q)$, where $g(q)=q^2(q-1)^2(q+1)^3$. Then, by~\eqref{eq:case5-1}, we must have
\begin{align}\label{eq:case5-2}
m(v-1)+f(q)\mid 4af(q)g(q).
\end{align}
Let now $d(q)=80q^7-64q^6-32q^5+48q^4+16q^3-16q^2-32q$ and $h(q)=32q$. Then
\begin{align*}
ah(q)[m(v-1)+f(q)]-4am f(q)g(q)= mad(q)+af(q)h(q).
\end{align*}
Therefore, (\ref{eq:case5-2}) implies that
\begin{align*}
m(v-1)+f(q)&\leq a(md(q)+f(q)h(q))
\end{align*}
Thus $q^4(q^2-q+1)(q^2+1)<2a[d(q)+f(q)h(q)]$. Since $d(q)+f(q)h(q)< 80q^{7}$ for all $q\geq 2$, it follows that $(q^2-q+1)(q^2+1)<160 a q^3$. This inequality holds only for pairs $(p,a)$ as in Table~\ref{tbl:case5-ap} below:
\begin{table}[h]
\centering\scriptsize
\caption{Some parameters for Lemma~\ref{lem:SU22:(q+1)}\label{tbl:case5-ap}}
\begin{tabular}{cccccccccc} \noalign{\smallskip}\hline\noalign{\smallskip}
$p$ &
$2$ &
$3$ &
$5$ &
$7$ &
$11$, $13$, $17$ &
$19$, \ldots, $157$ \\
\hline\noalign{\smallskip}
%
$a\leq $ &
$10$ &
$6$ &
$4$ &
$3$ &
$2$ &
$1$ \\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
For these values of $q=p^a$ and the parameter $m$ as in \eqref{eq:case2-su2-m}, there is no parameter $k$ satisfying \eqref{eq:case5-1} for which the fraction $k(k-1)/(v-1)$ is a positive integer, which is a contradiction.
\end{proof}
\begin{lemma}\label{lem:SL2:(q-1).2}
The subgroup $H_{0}$ cannot be $^{\hat{}}\mathrm{SL}_{2}(q^{2})\cdot(q-1)\cdot 2$.
\end{lemma}
\begin{proof}
Let $\{e_1,e_2,f_1,f_2\}$ be a standard basis for the underlying unitary space $V$. In this case, $H$ preserves a decomposition $V = V_{1}\oplus V_{2}$ into totally singular subspaces $V_{1}$ and $V_{2}$ of dimension $2$, say $V_1=\langle e_1,e_2\rangle$ and $V_2=\langle f_1,f_2\rangle$. Let now $y=\{\langle e_{1}, f_{2} \rangle,\langle e_{2}, f_{1} \rangle\}$. Then the subdegree $|H:H_{y}|$ of $G$ divides $2(q^{4}-1)$ (see \cite[p. 550]{a:reg-classical} and \cite[pp. 336-337]{a:Saxl2002}). Thus by Lemma~\ref{lem:six}(e), we conclude that $k$ divides $2\lambda (q^{4}-1)$. Here $|H_0|=2q^2(q^4-1)(q-1)/\gcd(4,q+1)$, and so \eqref{eq:v} implies that $v=q^4(q^3+1)(q+1)/2$.
Note that $2(v-1)=(q-1)(q^7+2q^6+2q^5+3q^4+4q^3+4q^2+4q+4)+2$ and $2(v-1)=(q+1)(q^7+q^4)-2$. Then $v-1$ is coprime to $q^{2}-1$. Note also that $q^{2}+1$ divides $2(v-1)$. Thus $\gcd(v-1,2(q^{4}-1))$ divides $2(q^{2}+1)$. Since $k$ divides $\lambda \gcd(v-1, 2(q^{4}-1))$, it follows that $k$ divides $\lambda f(q)$, where $f(q)=2(q^2+1)$, and hence $mk = \lambda f(q)$, for some positive integer $m$. Therefore,
\begin{align}\label{eq:case6-1}
k= \frac{m(v-1)}{f(q)}+1,
\end{align}
where
\begin{align}\label{eq:case2-sl2-m}
m<2(q^2+1).
\end{align}
Note by (\ref{eq:k-out}) that $k\mid 4ag(q)$, where $g(q)=q^2(q^4-1)(q-1)$. Then, by~\eqref{eq:case6-1}, we must have
\begin{align}\label{eq:case6-2}
m(v-1)+f(q)\mid 4af(q)g(q).
\end{align}
Let now $d(q)=48q^7-32q^6+48q^4-16q^3+16q^2+32q-64$ and $h(q)=32(q-2)$. Then
\begin{align*}
4am f(q)g(q)-ah(q)[m(v-1)+f(q)]= am d(q)-a f(q)h(q).
\end{align*}
Therefore, \eqref{eq:case6-2} implies that
$m(v-1)+f(q)\leq |am d(q)-a f(q)h(q)|< am [d(q)+f(q)h(q)]$.
So $q^4(q^3+1)(q+1)<2a[d(q)+f(q)h(q)]$. Since $d(q)+f(q)h(q)< 48q^{7}$ for all $q\geq 2$, we obtain $(q^3+1)(q+1)< 96a q^3$. This inequality holds only for pairs $(p,a)$ as in Table~\ref{tbl:case6-ap} below:
\begin{table}[h]
\centering\scriptsize
\caption{Some parameters for Lemma~\ref{lem:SL2:(q-1).2}\label{tbl:case6-ap}}
\begin{tabular}{c|ccccccccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
$p$ &
$2$ &
$3$ &
$5$ &
$7$, $11$, $13$ &
$17$, \ldots, $89$\\
\hline\noalign{\smallskip}
%
$a\leq $ &
$9$ &
$5$ &
$3$ &
$2$ &
$1$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
The only value of $q=p^a$ satisfying \eqref{eq:case6-1}, with $m$ as in \eqref{eq:case2-sl2-m}, for which the fraction $k(k-1)/(v-1)$ is a positive integer is $q=4$ with $m=2$. In this case, we obtain the parameters $(v, k, \lambda)=(41600, 2448, 144)$ with $X=\mathrm{PSU}_{4}(4)$. In what follows, we make use of the software \textsf{GAP}~\cite{GAP4} and show that no such design exists.
Let $G$ be one of the groups $X$, $X:2$ or $X:4$, and let $H$ be $H_0$, $H_0\cdot 2$ or $H_0\cdot 4$, respectively.
We note that the group $G$ has a unique conjugacy class of subgroups containing $H_0$. We use the command \verb|AtlasGroup("U4(4)")| to define the group $X$, and then we find all subgroups $G$ of $\mathrm{Aut}(X)$ containing $X$. Since the maximal subgroups $H$ of $G$ are not available in \textsf{GAP}, we need to construct $H$ as a subgroup of $G$. We first define the semidirect product $T:=\mathrm{PSL}_2(2^4)\cdot 3$ and then embed $T$ into $G$ as a subgroup via the command \verb|IsomorphicSubgroups(G,T)|. For each group $G$, there is only one such isomorphic subgroup $K$ in $G$, and then by \verb|IntermediateSubgroups(G,K)|, we find the overgroups of $K$. We can then choose those subgroups $H$ of index $41600$. Next we define the right coset action of $G$ on the set $\mathcal{P}:=\mathcal{R}_H$ of right cosets of $H$ in $G$, so that $G$ and $H$ can be viewed as subgroups of $S_{41600}$ by taking the image of the permutation representation of this action. We now obtain the $H$-orbits on $\mathcal{P}$ and the subdegrees of $G$, which are listed in Table~\ref{tbl:subdegs}. Since $G$ is flag-transitive, each $H$-orbit of size $2448$ (if any exists) would be a possible base block $B$ for $\mathcal{D}$. At this stage, we obtain two base blocks for each group $G$, see Table~\ref{tbl:subdegs}. Although the command \verb|BlockDesign( 41600, [B], G )| returns true for the obtained base blocks, these designs are not symmetric, as $|B^x\cap B|\neq 144$ for some $x\in G$.
\end{proof}
\begin{table}[h]
\centering\scriptsize
\caption{Some subdegrees of the almost simple groups $G$ with socle $\mathrm{PSU}_{4}(4)$.}\label{tbl:subdegs}
\begin{tabular}{lp{11cm}}
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{1}{c}{$G$} &
\multicolumn{1}{c}{Subdegrees} \\
\hline\noalign{\smallskip}
%
$\mathrm{PSU}_{4}(4)$&
$1$, $102$, $136$, $153$, $204$, $408$, $816$, $816$, $1224$, $1632$, $2040$, $2040$, $2448$, $2448$, $3060$, $ 4080$, $4080$, $4896$, $4896$, $6120$ \\
%
%
$\mathrm{PSU}_{4}(4):2$ &
$1$, $102$, $136$, $153$, $204$, $408$, $816$, $816$, $1224$, $1632$, $2040$, $2040$, $2448$, $2448$, $3060$, $ 4080$, $4080$, $
4896$, $4896$, $6120$
\\
%
%
$\mathrm{PSU}_{4}(4):4$&
$1$, $102$, $136$, $153$, $204$, $408$, $1224$, $1632$, $1632$, $ 2448$, $2448$, $3060$, $4080$, $6120$, $8160$, $9792$\\
\hline\noalign{\smallskip}
%
\end{tabular}
\end{table}
\begin{lemma}\label{lem:D-last}
The subgroup $H_{0}$ cannot be $^{\hat{}}\mathrm{SU}_{4}(q_{0})$, where $q=q_{0}^{r}$ with $r$ an odd prime.
\end{lemma}
\begin{proof}
In this case, $|H_{0}|=q_{0}^{6}(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)/\gcd(4,q_0^r+1)$. It follows from~\eqref{eq:v} that
\begin{align}\label{eq:lem-SU4-v}
v= \frac{q_{0}^{6r}(q_{0}^{4r}-1)(q_{0}^{3r}+1)(q_{0}^{2r}-1)}{q_{0}^{6}(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)}.
\end{align}
Note by (\ref{eq:k-out}) that $k$ divides $2aq_{0}^{6}(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)$. We may assume that $\lambda\geq 4$ by \cite{a:reg-classical,a:Zhou-lam3-classical}. Moreover, $a^2\leq q_{0}^r$ as $q=q_{0}^{r}$. Since $\lambda v<k^2$ by Lemma~\ref{lem:six}(b), we must have
\begin{align*}
\frac{4q_{0}^{6r}(q_{0}^{4r}-1)(q_{0}^{3r}+1)(q_{0}^{2r}-1)}{q_{0}^{6}(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)}
\leq \lambda v< k^2
&\leq4a^2\cdot q_{0}^{12}(q_{0}^{4}-1)^2(q_{0}^{3}+1)^2(q_{0}^{2}-1)^2 \\
&\leq 4\cdot q_{0}^{12+r}(q_{0}^{4}-1)^2(q_{0}^{3}+1)^2(q_{0}^{2}-1)^2
\end{align*}
and hence
\begin{align*}
q_{0}^{6r}(q_{0}^{4r}-1)(q_{0}^{3r}+1)(q_{0}^{2r}-1)< q_{0}^{18+r}(q_{0}^{4}-1)^3(q_{0}^{3}+1)^3(q_{0}^{2}-1)^3.
\end{align*}
Note that $q_{0}^{15r-1}\leq q_{0}^{6r}(q_{0}^{4r}-1)(q_{0}^{3r}+1)(q_{0}^{2r}-1)$ and $q_{0}^{18+r}(q_{0}^{4}-1)^3(q_{0}^{3}+1)^3(q_{0}^{2}-1)^3\leq q_{0}^{45+r}$. Then $q_{0}^{15r-1}< q_{0}^{45+r}$, and this implies that $r =2$ or $3$. Since $r$ is odd, we must have $r=3$. Therefore,
\begin{align}\label{lem:case-7-r2-v}
v=\frac{q_{0}^{12}(q_{0}^{12}-1)(q_{0}^{9}+1)(q_{0}^{6}-1)}{(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)}.
\end{align}
By (\ref{eq:k-out}), $k$ divides $2aq_{0}^{6}(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)$. Then by Lemma~\ref{lem:six}(c), we have that
\begin{align*}
\lambda \cdot\frac{q_{0}^{12}(q_{0}^{12}-1)(q_{0}^{9}+1)(q_{0}^{6}-1)}{(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)}< k^{2}\leq 4a^2q_{0}^{12}(q_{0}^{4}-1)^2(q_{0}^{3}+1)^2(q_{0}^{2}-1)^2.
\end{align*}
Therefore,
\begin{align}\label{eq:lem13-case1-lam}
\lambda< 4a^2\cdot \frac{(q_{0}^{4}-1)^3(q_{0}^{3}+1)^3(q_{0}^{2}-1)^3}{(q_{0}^{12}-1)(q_{0}^{9}+1)(q_{0}^{6}-1)}\leq 4a^2.
\end{align}
Since $k$ divides $2aq_{0}^{6}(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)$ and $v-1$ is coprime to $q_{0}$ by Lemma~\ref{lem:Tits}, $k$ must divide $2\lambda a (q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)$. Now Lemma~\ref{lem:six}(c) implies that
\begin{align*}
\lambda\cdot \frac{q_{0}^{12}(q_{0}^{12}-1)(q_{0}^{9}+1)(q_{0}^{6}-1)}{(q_{0}^{4}-1)(q_{0}^{3}+1)(q_{0}^{2}-1)}<k^{2} \leq 4\lambda^2 a^2(q_{0}^{4}-1)^2(q_{0}^{3}+1)^2(q_{0}^{2}-1)^2,
\end{align*}
and so
\begin{align}\label{eq:psl-3}
\frac{q_{0}^{12}(q_{0}^{12}-1)(q_{0}^{9}+1)(q_{0}^{6}-1)}{(q_{0}^{4}-1)^3(q_{0}^{3}+1)^3(q_{0}^{2}-1)^3}< 4 \lambda a^2.
\end{align}
Since $\lambda\leq 4 a^{2}$ by (\ref{eq:lem13-case1-lam}), it follows that
\begin{align*}
q_{0}^{12}<16 a^{4}.
\end{align*}
Since also $q_{0}=p^{a/3}\geq 2^{a/3}$, we obtain $2^{4a}<16\cdot a^{4}$, which is impossible.
\end{proof}
\begin{lemma}\label{lem:Sp4}
If $H_{0}$ is $^{\hat{}}\mathrm{Sp}_{4}(q)\cdot \gcd(2,q+1)$, then $q=2$ and $(v, k, \lambda)=(36, 15, 6)$.
\end{lemma}
\begin{proof}
Here $|H_0|=d^{-1}cq^4(q^2-1)(q^4-1)$, where $d=\gcd(4,q+1)$ and $c=\gcd(2,q+1)$. So by~\eqref{eq:v}, we have $v=q^2(q^3+1)/c$. It follows from \eqref{eq:k-out} that $k$ divides $2ag(q)$, where $g(q)=q^4(q^2-1)(q^4-1)$. We now consider the following two cases.\smallskip
\noindent \textbf{Case 1:}
Let $q$ be even. Then $c=\gcd(2,q+1)=1$. If $q=2$, then $v=36$. It follows from~\eqref{eq:k-out} that $k$ divides $1440$. We then easily observe that for each divisor $k$ of $1440$, the fraction $k(k-1)/(v-1)$ is not a positive integer unless $k=15$, in which case $v=36$ and $\lambda=6$. By~\cite{a:Braic-2500-nopower,a:rank3}, this design is a Menon design with parameters $(36, 15, 6)$ and flag-transitive automorphism group $\mathrm{PSU}_{4}(2)$ or $\mathrm{PSU}_{4}(2):2$.\smallskip
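This exhaustive check over the divisors of $1440$ can be spelled out in a few lines of Python (illustrative sketch):
\begin{verbatim}
v = 36
sols = [(k, k * (k - 1) // (v - 1)) for k in range(2, v - 1)
        if 1440 % k == 0 and k * (k - 1) % (v - 1) == 0
        and 0 < k * (k - 1) // (v - 1) < k]
print(sols)  # [(15, 6)]
\end{verbatim}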
Let now $q\geq 4$. Note that $v-1$ is coprime to $q(q^2-1)$. Moreover, since $v-1=(q^2+1)(q^3-q+1)+q-2$ and $q^2+1=(q-2)(q+2)+5$, it follows that $\gcd(v-1,q^2+1)$ divides $\gcd(5,q-2)$. Therefore, since $\gcd(v-1,(q^2-1)(q^4-1))$ divides $\gcd(5, q-2)$, we have that $k$ is a divisor of $\lambda e a$, where $e:=\gcd(5, q-2)$. Then there exists a positive integer $m$ such that $mk=\lambda e a$. Thus,
\begin{align}
k= \frac{m(v-1)}{ea}+1,\label{eq:case1-Sp4-k}
\end{align}
where
\begin{align}
m<ea=\gcd(q-2,5)a.\label{eq:case1-Sp4-m}
\end{align}
We first show that $q$ does not divide $k$. Suppose that $q$ divides $k$. Then~\eqref{eq:case1-Sp4-k} implies that $q$ divides $ea-m$, and so $q\leq ea-m\leq \gcd(q-2,5)a-1$, which is impossible.
Therefore, $q$ does not divide $k$, and so it follows from Lemma~\ref{lem:New}(b) and~\eqref{eq:case1-Sp4-k} that
\begin{align}\label{eq:case1-Sp4-k-2}
m(v-1)+ea\mid 2ea^{2}g_{1}(q),
\end{align}
where $g_{1}(q)=g(q)/q^3=q(q^2-1)(q^4-1)$. Let $d(q)=q^4+q^3-q^2-q$ and $h(q)=q^2-1$. Then
\begin{align}
2ea^{2}h(q)[m(v-1)+ea]-2mea^{2}g_{1}(q)=2ea^{2}[md(q)+eah(q)].
\end{align}
Therefore, by \eqref{eq:case1-Sp4-k-2}, we conclude that $v-1<2ea^{2}[|d(q)|+ea|h(q)|]$. This inequality holds only when $a\leq 9$. Then for each $q=2^a$ with $a\leq 9$, the possible values of $v$ are listed in Table~\ref{tbl:case1-Sp4-mv} below. By~\eqref{eq:case1-Sp4-m}, we can also find an upper bound for $m$ listed as in the third column of Table~\ref{tbl:case1-Sp4-mv}.
\begin{table}[h]
\centering
\scriptsize
\caption{Possible values of $m$ and $v$ when $q=2^{a}$ with $1<a\leq 9$.}
\label{tbl:case1-Sp4-mv}
\begin{tabular}{lll}
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{1}{c}{$q$} &
\multicolumn{1}{c}{$v$} &
\multicolumn{1}{c}{$m<$} \\
\hline\noalign{\smallskip}
$4$ & $1040$ &$2$\\
$8$ & $32832$ &$3$\\
$16$ & $1048832$ &$4$\\
$32$ & $33555456$ &$25$\\
$64$ & $1073745920$ &$6$\\
$128$ & $34359754752$ &$7$\\
$256$ & $1099511693312$ &$8$\\
$512$ & $35184372350976$ &$45$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
We now obtain by~\eqref{eq:case1-Sp4-k} the parameter $k$; however, for such $k$, we cannot find any possible parameter $\lambda$ satisfying Lemma~\ref{lem:six}(a), which is a contradiction.\smallskip
\noindent \textbf{Case 2:}
Let $q$ be odd. Then $c=\gcd(2,q+1)=2$ and $v=q^2(q^3+1)/2$. Note that $q-1$ divides $2(v-1)=q^2(q^3+1)-2$. Set $w(q):=2(v-1)/(q-1)=q^4+q^3+q^2+2q+2$. Then $q+1$ is coprime to $w(q)$. Moreover, $w(q)=(q-1)(q^3+2q^2+3q+5)+7$, $w(q)=(q^{2}+1)(q^{2}+q)+q+2$ and $q^{2}+1=(q+2)(q-2)+5$. Therefore $\gcd(v-1, 2(q^4-1)(q^2-1))$ divides $\gcd(q+2, 5)\gcd(q-1, 7)(q-1)$, and so Lemmas~\ref{lem:six}(a) and~\ref{lem:Tits} imply that $k$ divides $\lambda as f(q)$, where $f(q)=q-1$ and
\begin{align}
s= \gcd(q+2, 5)\gcd(q-1, 7).\label{eq:case2-Sp4-s}
\end{align}
Then $mk=\lambda a s f(q)$, for some positive integer $m$, and so \begin{align}
k= \frac{m(v-1)}{as f(q)}+1,\label{eq:case2-Sp4-k}
\end{align}
where $f(q)=q-1$, $s= \gcd(q+2, 5)\gcd(q-1, 7)$ and
\begin{align}\label{eq:case2-Sp4-m}
m< as (q-1).
\end{align}
As in Case 1, we first show that $q$ does not divide $k$. Assume the contrary. Then~\eqref{eq:case2-Sp4-k} implies that $q\mid m+as$,
and so $nq=m+sa$, for some positive integer $n$. Thus
\begin{align}\label{eq:case2-Sp4-m1}
m=nq-as.
\end{align}
Since $m<sa (q-1)$, we have that
\begin{align}\label{eq:case2-n}
n<as .
\end{align}
Since also $mk=\lambda as f(q)$ and $k(k-1)=\lambda(v-1)$, it follows that
\begin{align*}
2a^2s^2 \lambda = m^{2}(q^3+2q^2+3q+5)+\frac{7m^2+2mas}{q-1}
\end{align*}
and so $q-1$ divides $7m^2+2mas$. Therefore,
by~\eqref{eq:case2-Sp4-m1}, we conclude that
\begin{align}\label{eq:case2-Sp4-q-1-m-2}
q-1\mid 7(n^2q^2-2nasq+a^2s^2)+2(nasq-a^2s^2).
\end{align}
As
\begin{multline*}
7(n^2q^2-2nasq+a^2s^2)+2(nasq-a^2s^2)= \\
7n^2(q^2-1)-12nas(q-1)+7n^2-12nas+5a^2s^2,
\end{multline*}
$q-1$ must divide $7n^2-12nas+5a^2s^2$. Since now $n<as$, we conclude that $7n^2-12nas+5a^2s^2>0$, and so $q-1\leq 7n^2-12nas+5a^2s^2$. Moreover, $7n^2-12nas<0$. Therefore,
\begin{align}\label{eq:case2-Sp4-q-1-n}
q-1\leq 5a^2s^2.
\end{align}
If $a>1$, then the inequality~\eqref{eq:case2-Sp4-q-1-n} holds only for the pairs $(p,a)$ as below:
\begin{align}\label{eq:sp4-case2-ap-1}
\begin{array}{llll}
p =3, & \quad a=2,3,4,5,6; \\
p =7, 11, 37, & \quad a=3; \\
p =13, 29, & \quad a=2.
\end{array}
\end{align}
Recall that $n<as$ and $m=nq-as$. For the values of $(p,a)$ as in~\eqref{eq:sp4-case2-ap-1}, we can compute the parameter $k$ from \eqref{eq:case2-Sp4-k}, and we easily observe that for these values of $k$, the fraction $k(k-1)/(v-1)$ is not a positive integer, which is a contradiction. Therefore, $a=1$. In this case, $m=nq-s$ and $n<s$, where $s\in \{5,7,35\}$ by \eqref{eq:case2-Sp4-s} and \eqref{eq:case2-n}. Therefore, $n$ is at most $4$, $6$ or $34$, respectively, for $s=5$, $7$ or $35$. Moreover, for these values of $n$ and $s$, $q-1$ divides $7n^{2}-12ns+5s^{2}$. Therefore, $(s,q,n)$ is as in Table \ref{tbl:Sp4-snq}, for which, by \eqref{eq:case2-Sp4-k}, we cannot find any possible parameters $k$ and $\lambda$. Hence, $k$ is not a multiple of $q$.
\begin{table}[h]
\centering
\scriptsize
\caption{Possible values of $(s,q,n)$ in Lemma \ref{lem:Sp4}.}\label{tbl:Sp4-snq}
\begin{center}
\begin{tabular}{llll}
\noalign{\smallskip}\hline\noalign{\smallskip}
$s$ & $q$ & $n$ \\
\hline\noalign{\smallskip}
$5$ & $3$ & $1,3$ \\
$5$ & $13$ & $1$ \\
$5$ & $73$ & $1$ \\
$7$ & $29$ & $1,3$ \\
$35$ & $43$ & $1,5,7,11,13,17,19,23,29,31$\\
$35$ & $113$ & $1,3,9,11,17,19,27,33$\\
$35$ & $463$ & $13$\\
$35$ & $673$ & $19$\\
$35$ & $883$ & $7$\\
$35$ & $953$ & $1$\\
\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\end{table}
Therefore, by Lemma~\ref{lem:New}(b) and~\eqref{eq:case2-Sp4-k}, we conclude that
\begin{align}\label{eq:Sp4-case2-1}
m(v-1)+asf(q)\mid 2a^2sf(q)g_{1}(q),
\end{align}
where $g_{1}(q)=2g(q)/q^{3}=2q(q^2-1)(q^4-1)$ and $f(q)= q-1$. Let now $d(q)=8q^3-2q^2-6q$ and $h(q)=2q^3-2q^2-2q$. Then
$2a^2smf(q)g_{1}(q)-4a^2sh(q)[m(v-1)+asf(q)]=2a^2s[md(q)-2asf(q)h(q)]$.
It follows from \eqref{eq:Sp4-case2-1} that $v-1<2a^2s[|d(q)|+2as|f(q)h(q)|]$. This inequality holds only for pairs $(p,a)$ as in Table~\ref{tbl:Sp4-ap} below:
\begin{table}[h]
\centering\scriptsize
\caption{Some parameters for Lemma~\ref{lem:Sp4}\label{tbl:Sp4-ap}}
\begin{tabular}{ll}
\noalign{\smallskip}\hline\noalign{\smallskip}
$p$ & $a\leq $ \\
\hline\noalign{\smallskip}
$3$ & $12$\\
$5$ & $6$\\
$7$, $11$, $17$, $23$, $37$,$67$ & $3$\\
$13$ & $4$ \\
$29$, $41$, $43$, $71$ & $2$ \\
$53$, $73, \ldots,19433$ & $1$\\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Again these values of $q=p^a$ do not give rise to any possible parameters, which is a contradiction.
\end{proof}
\begin{lemma}\label{lem:So4+}
The subgroup $H_{0}$ cannot be $^{\hat{}}{\mathrm{SO}_{4}^{+}}(q)\cdot d$ with $q\geq 5$ odd.
\end{lemma}
\begin{proof}
In this case, $|H_0|=q^2(q^2-1)^2$. Then by~\eqref{eq:v}, we have that $v=q^4(q^3+1)(q^2+1)/d$, where $d=\gcd(q+1, 4)$. Note in this case that $d$ is either $2$ or $4$.\smallskip
Suppose first $d=2$. It follows from \eqref{eq:k-out} that $k$ divides $4ag(q)$, where $g(q)=q^2(q^2-1)^2$. Moreover, Lemma~\ref{lem:six}(a) implies that $k$ divides $\lambda(v-1)$. Note that $v-1=[q^4(q^3+1)(q^2+1)-2]/2$. Since $\gcd(v-1, 4q^2(q^2-1)^2)=1$, we have that $k$ is a divisor of $\lambda a$. Then there exists a positive integer $m$ such that $mk=\lambda a$. Since now $k(k-1)=\lambda(v-1)$, it follows that $k= [m(v-1)/a]+1$, and since $k\mid 4ag(q)$, we must have $m(v-1)+a\mid 4a^{2}g(q)$. As $m\geq 1$, $v <4a^{2}g(q)=4a^2q^2(q^2-1)^2$, for $q$ odd, and this does not hold for any $q$, which is a contradiction.\smallskip
Suppose now $d=4$. Then $v=q^4(q^3+1)(q^2+1)/4$. Since $\gcd(v-1,8q^2(q^2-1)^2)$ divides $(q-1)^2$, there is a positive integer $m$ with $mk=\lambda a f(q)$, where $f(q)=(q-1)^2$. Thus $k= [m(v-1)/af(q)]+1$. As $k\mid 8ag(q)$, we must have $m(v-1)+af(q)\mid 8a^{2}g(q)f(q)$, and so $v <8a^{2}g(q)f(q)=8a^2q^2(q^2-1)^2(q-1)^2$, for $q$ odd. Thus $q\in \{3,7,11,19,23,27,243\}$; however, for these values of $q$, we cannot find any possible parameters.
\end{proof}
\begin{lemma}\label{lem:So4-}
The subgroup $H_{0}$ cannot be $^{\hat{}}{\mathrm{SO}_{4}^{-}}( q)\cdot d$ with $q$ odd.
\end{lemma}
\begin{proof}
In this case, $|H_0|=q^2(q^2+1)(q^2-1)$, and so by~\eqref{eq:v}, we have that $v=q^4(q^3+1)(q^2-1)/d$, where $d=\gcd(4,q+1)$. It follows from \eqref{eq:k-out} that $k$ divides $2adg(q)$, where $g(q)=q^2(q^4-1)$. Moreover, Lemma~\ref{lem:six}(a) implies that $k$ divides $\lambda(v-1)$. As $q$ is odd, $d=2$ or $4$. Let $f(q)$ be $q-2$ if $d=2$, and $q-3$ if $d=4$. Then $\gcd(v-1,q^2(q^4-1))$ divides $f(q)$, and so $k$ is a divisor of $\lambda a f(q)$. Suppose that $m$ is a positive integer such that $mk=\lambda a f(q)$. Since now $k(k-1)=\lambda(v-1)$, it follows that $k= [m(v-1)/a f(q)]+1$, and since $k\mid 2adg(q)$, we must have $m(v-1)+a f(q)\mid 2a^{2}df(q)g(q)$. Therefore, $v <2a^{2}df(q)g(q)$, for $q$ odd, and this does not give rise to any possible parameters.
\end{proof}
\begin{lemma}\label{lem:S6}
The subgroup $H_{0}$ cannot be any of the subgroups in lines {\rm 11-16} of {\rm Table~\ref{tbl:maxes}}.
\end{lemma}
\begin{proof}
By Lemmas~\ref{lem:New}(b) and \ref{lem:six}(c), we have that $|X|\leq |\mathrm{Out}(X)|^{2}\cdot |H_{0}|^{3}$. Therefore, the lines {\rm 13-14} can be ruled out. For the remaining cases, this inequality holds only for $q$ listed as in Table~\ref{tbl:S6}. However, for such $q$, no divisor $k\geq 4$ of $|\mathrm{Out}(X)|\cdot |H_{0}|$ exists such that $k(k-1)/(v-1)$ is a positive integer, which is a contradiction.
\begin{table}[h]
\centering\scriptsize
\caption{Possible cases in Lemma~\ref{lem:S6}\label{tbl:S6}}
\begin{tabular}{ll}
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{1}{c}{$H_{0}$} &\multicolumn{1}{l}{$q$}\\
\hline\noalign{\smallskip}
$^{\hat{}}(4\circ 2^{1+4})^{\cdot}S_{6}$ & $7$\\
$^{\hat{}}(4\circ 2^{1+4})\cdot A_{6}$ &$3$ \\
$^{\hat{}}{4_{2}^{\cdot}}\mathrm{PSL}_{3}(4)$ & $3$ \\
$^{\hat{}}d\circ 2^{\cdot}\mathrm{PSU}_{4}(2)$ & $5,11$ \\
\hline\noalign{\smallskip}
\end{tabular}
\end{table}
\end{proof}
\begin{proof}[\rm \textbf{Proof of Theorem~\ref{thm:main}}]
The proof of the main result follows immediately from Lemmas \ref{lem:su2}--\ref{lem:S6}.
\end{proof}
\section*{Acknowledgements}
The authors would like to thank the anonymous referees for providing helpful and constructive comments and suggestions. They would also like to mention that their names have been written in alphabetical order.
\section*{References}
Many machine learning schemes have been used to link structures and properties~\cite{VonLilienfeld2013, fabe+17jctc}, including more or less sophisticated neural networks~\cite{bhol+07nimpr, behl11jcp, Chmiela2017, smit+17cs, Zhang2018}.
Based on the few comparative studies that have appeared in the literature~\cite{fabe+17jctc,nguy+18jcp,Qu2018}, it appears that, when it comes to predicting atomic-scale properties, simple regression techniques such as kernel ridge (Gaussian process) regression perform as well as or better than their more sophisticated counterparts.
Given our focus on structure representations, in this work we used kernel ridge regression (KRR), in which the properties of a structure $\mathcal{A}$ are written as a linear combination of non-linear kernel functions $K(\mathcal{X},\mathcal{X}')$ that evaluate the similarity between two structures, i.e.
\begin{equation}
y(\mathcal{A}) = \sum_M x_M K(\mathcal{A},\mathcal{X}_M),
\end{equation}
where $\mathcal{X}_M$ correspond to a set of reference atomic structures, and $x_M$ are weights that can be optimized by minimizing the discrepancy between the predictions $y(\mathcal{A})$ and the actual values computed on a set of training structures.
The details of KRR have been discussed at length elsewhere~\cite{rasm06book, Bishop2016} and so here we will focus instead on the definition and optimization of the kernel function.
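In practice the fit reduces to a single linear solve; a minimal Python sketch (the names and the ridge parameter are ours, for illustration only):
\begin{verbatim}
import numpy as np

def krr_fit(K, y, reg=1e-8):
    # Solve (K + reg*I) x = y for the weights x_M.
    return np.linalg.solve(K + reg * np.eye(len(K)), y)

def krr_predict(K_cross, x):
    # y(A) = sum_M x_M K(A, X_M); rows of K_cross hold the kernels
    # of new structures against the reference set {X_M}.
    return K_cross @ x

# toy usage with an explicit feature map, so that K = Phi Phi^T
rng = np.random.default_rng(0)
Phi = rng.normal(size=(40, 6))
y = rng.normal(size=40)
K = Phi @ Phi.T
x = krr_fit(K, y)
print(krr_predict(K[:3], x))
\end{verbatim}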
\subsection{SOAP in a bra-ket notation}
The representer theorem guarantees that every well-behaved kernel corresponds to a scalar product between feature vectors that associate each input to a point in a -- possibly infinite dimensional -- Hilbert space, $K(\mathcal{X},\mathcal{X}')=\boldsymbol{\Phi}(\mathcal{X})^T\boldsymbol{\Phi}(\mathcal{X}')$~\cite{Cuturi2010}.
The Dirac notation provides a convenient formalism to express vectors in a Hilbert space in an abstract way that is basis-set independent. This makes it very suitable to express results in quantum mechanics, and -- due to the similar algebraic structure -- in the context of machine-learning based on kernel ridge regression~\cite{rasm06book, arxiv_p2}, where one can write $K(\mathcal{X},\mathcal{X}')=\bra{\mathcal{X}}\ket{\mathcal{X}'}$.
In the SOAP framework~\cite{Bartok2013}, spherical environments centered on each atom in the system are expressed as densities, which are constructed by superimposing Gaussian functions $g(\mathbf{r})$ centered on each atom. We write such an atom density as a position representation of a ket $\ket{\mathcal{X}_{j}}$,
\begin{equation}
\bra{\mathbf{r}}\ket{\mathcal{X}_{j}} = \sum_i f_c(r_{ij})
g(\mathbf{r}-\mathbf{r}_{ij})\ket{\alpha_i},
\label{eq:dirac-rxj}
\end{equation}
where $\mathbf{r}_{ij}$ is the displacement vector between atoms $i$ and $j$ and we have introduced orthonormal element kets $\ket{\alpha}$ that represent the chemical nature of atoms, and a smooth cutoff function $f_c(r)$ that limits the density to a neighborhood of atom $j$.
We can collect together the density from all the atoms of the same species to define an element-specific density
\begin{equation}
\bra{\alpha \mathbf{r}}\ket{\mathcal{X}_{j}} \equiv \psi^\alpha_{\mathcal{X}_{j}}(\mathbf{r}) = \sum_{i \in \alpha} f_c(r_{ij})
g(\mathbf{r}-\mathbf{r}_{ij}),
\label{eq:dirac5}
\end{equation}
where we used the fact that element kets are taken to be orthogonal.
This makes it possible to write the position representation of $\ket{\mathcal{X}_{j}}$ as a sum over the elements
\begin{equation}
\bra{\mathbf{r}}\ket{\mathcal{X}_{j}} = \sum_\alpha \psi^\alpha_{\mathcal{X}_{j}}(\mathbf{r})\ket{\alpha}.
\label{eq:dirac4}
\end{equation}
In the original formulation of SOAP~\cite{Bartok2013}, the atom density is expressed by expanding the environmental density in a basis of orthogonal radial basis functions $R_{n}(r)$ and spherical harmonics $Y_{m}^{l}(\hat{\mathbf{r}})$,
\begin{equation}
\bra{\alpha nlm}\ket{\mathcal{X}_{j}} = \int \textrm{d}\mathbf{r} \, R_{n}(r)Y_{m}^{l}(\hat{\mathbf{r}})\bra{\alpha\mathbf{r}}\ket{\mathcal{X}_{j}}.
\label{eq:anlm-xj}
\end{equation}
This representation is invariant to permutations of atoms of the same kind, and to rigid translations. It is not, however, rotationally invariant, and so the kernel built as the overlap between two environments would not be consistent with one fundamental physical symmetry.
To remedy this shortcoming, one can average the kernel over the $SO(3)$ rotation group to obtain
\begin{equation}
K^{(\nu)}(\mathcal{X}_{j},\mathcal{X}_{k}) = \int \textrm{d}\hat{R} \bra{\mathcal{X}_{j}}\hat{R}\ket{\mathcal{X}_{k}}^\nu. \label{eq:soap-r-int}
\end{equation}
A remarkable result of the SOAP framework is that the representations that are associated with this kernel can be computed explicitly.
For the case with $\nu=2$, the SOAP representations correspond to the power spectrum,
\begin{equation}
\bra{\alpha n\alpha' n'l}\ket{\mathcal{X}_{j}^{(2)}} \propto \frac{1}{\sqrt{2l+1}} \sum_{m} \bra{\alpha nlm}\ket{\mathcal{X}_{j}}^{\star}
\bra{\alpha' n'lm}\ket{\mathcal{X}_{j}},
\end{equation}
where $^{\star}$ denotes complex conjugation.
One can show that
\begin{equation}
\bra{\mathcal{X}_{j}^{(2)}}\ket{\mathcal{X}_{k}^{(2)}}
=\sum_{\alpha n\alpha' n'l}\bra{\mathcal{X}_{j}^{(2)}}\ket{\alpha n\alpha' n'l}\bra{\alpha n\alpha' n'l}\ket{\mathcal{X}_{k}^{(2)}}
\end{equation}
is precisely the rotationally-averaged kernel \eqref{eq:soap-r-int} for $\nu=2$, and that it captures the 3-body correlations between atoms within the environment~\cite{Glielmo2018}.
To complete our summary of the SOAP framework, we should mention that in many applications thus far the $SO(3)$ vectors have been normalised,
\begin{equation}
\ket{\mathcal{X}_{j}^{(\nu)}}/\sqrt{\bra{\mathcal{X}_{j}^{(\nu)}}\ket{\mathcal{X}_{j}^{(\nu)}}} \to \ket{\mathcal{X}_{j}^{(\nu)}},
\end{equation}
and raised to an integer power $\zeta$,
\begin{equation}
\underbrace{\ket{\mathcal{X}_{j}^{(\nu)}}\otimes \ket{\mathcal{X}_{j}^{(\nu)}}\otimes \dots\otimes \ket{\mathcal{X}_{j}^{(\nu)}}}_{{\zeta}} \to \ket{\mathcal{X}_{j}^{(\nu)}},
\label{eq:soap-zeta}
\end{equation}
which makes it possible to go beyond the body order implied by $\nu$ in the definition of the environmental kernel, avoiding the complications of higher-order SOAP representations.
Having constructed the $SO(3)$ vectors, there are a variety of ways to obtain a global correlation measure between atomic configurations~\cite{De2016}. The simplest approach (which we follow in this work, and is appropriate to learn properties that can be decomposed in atom-centered contributions) is the average kernel
\begin{equation}
K^{(\nu)}(\mathcal{A},\mathcal{B}) = \frac{1}{N_{\mathcal{A}}N_{\mathcal{B}}} \sum_{j\in A}\sum_{k\in B}
\bra{\mathcal{X}_{j}^{(\nu)}}\ket{\mathcal{X}_{k}^{(\nu)}},
\label{eq:global1}
\end{equation}
where $N_{\mathcal{A}}$ ($N_{\mathcal{B}}$) is the number of environments in $\mathcal{A}$ ($\mathcal{B}$).
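As a concrete illustration, the normalization, the $\zeta$-exponentiation and the environment average can be composed in a few lines of Python (a sketch with our own names, acting on precomputed SOAP feature vectors stored as matrix rows):
\begin{verbatim}
import numpy as np

def normalize_rows(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def average_kernel(X_A, X_B, zeta=2):
    # Mean over all environment pairs of the normalized SOAP
    # dot product raised to the power zeta.
    K_env = (normalize_rows(X_A) @ normalize_rows(X_B).T) ** zeta
    return K_env.mean()

# toy usage: structures with 3 and 5 environments, 16 features each
rng = np.random.default_rng(1)
print(average_kernel(rng.random((3, 16)), rng.random((5, 16))))
\end{verbatim}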
\section{Generalising the SOAP environmental kernel}
The SOAP formalism provides an elegant framework to construct a rotationally-invariant representation of the atomic density that can be used for machine-learning purposes. While the formalism provides a complete representation of structural correlations of a given order within an atomic environment, in practice the accuracy of the regression scheme can be improved substantially, and its computational cost reduced, by modifying the representation so that it incorporates some degree of chemical intuition. For instance, the combination of multiple kernels corresponding to different interatomic distances has been shown to improve the quality of ML models\cite{Bartok2017}, and the use of an alchemical kernel matrix to describe the similarity of different elements has been shown to be beneficial as well~\cite{De2016,Bartok2017}.
\subsection{Radially-scaled kernels}
\label{sub:radial}
In a system with relatively uniform atom density, the overlap between environments $\bra{\mathcal{X}_{j}}\ket{\mathcal{X}_{k}}$ is dominated by the region farthest from the centre. This could be regarded as rather unphysical: the interactions between atoms decay with distance, so the closest atoms should give the most significant contribution to the properties. This is reflected in the observation that multi-scale kernels tend to perform best when very low weights are assigned to the long-range kernels~\cite{Bartok2017,paruzzo2018chemical}.
Likewise, a scaling of the weights of different atomic distances within an environment has been shown to be beneficial when using ML to predict atomic-scale properties using a different density-based representation~\cite{Faber2018}.
One could modify SOAP features to compensate for this effect by multiplying the atomic probability amplitude~\eqref{eq:dirac-rxj} with a radial scaling $u(r)$.\cite{Huang2016}
For ease of implementation, however, we apply the scaling directly in the definition of $\psi_{\mathcal{X}_{j}}(\mathbf{r})$, using $u(r)$ to determine weights associated with each atom in the environment,
\begin{equation}
\bra{\alpha\mathbf{r}}\ket{\mathcal{X}_{j}} = \sum_{i\in \alpha} f_c(r_{ij}) u (r_{ij})
g(\mathbf{r}-\mathbf{r}_{ij}). \label{eq:dirac-rxj-ur}
\end{equation}
While this construction is an accurate realisation of a density scaling only when the width of the atomic Gaussians is small compared to the variation of $u(r)$, it provides a simple way to test the general idea, requiring only minimal changes to existing SOAP code.\cite{QUIP}
One should also consider that the atom that sits at the centre of the environment has a special status in the SOAP framework. While atoms in the environment provide information on the structural correlations, the $j$-th atom sits at the centre of the environment by construction.
As a consequence, it is best to treat separately the weight $u_0$ associated with the central atom, i.e. to consider
\begin{equation}
\bra{\mathbf{r}}\ket{\mathcal{X}_{j}} = u_0 g(\mathbf{r}) \ket{\alpha_j}+ \sum_{i \ne j} f_c(r_{ij}) u (r_{ij})
g(\mathbf{r}-\mathbf{r}_{ij}) \ket{\alpha_i}. \label{eq:dirac-rxj-ur-c0}
\end{equation}
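To make this concrete, the per-atom weights entering Eq.~\eqref{eq:dirac-rxj-ur-c0} can be sketched in Python as follows; the specific decaying form $u(r)=c/[c+(r/r_0)^m]$ and the cosine cutoff standing in for $f_c$ are our own assumptions, since the construction only requires that $u(r)$ decays with distance:
\begin{verbatim}
import numpy as np

def radial_scaling(r, r0=2.0, m=7, c=1.0):
    # One convenient decaying choice of u(r) (our assumption).
    return c / (c + (r / r0) ** m)

def atom_weights(r_ij, u0=1.0, rc=5.0):
    # u0 for the central atom (r = 0), f_c(r) u(r) for the
    # neighbours; a smooth cosine cutoff stands in for f_c.
    fc = 0.5 * (1.0 + np.cos(np.pi * np.clip(r_ij / rc, 0, 1)))
    return np.where(r_ij == 0.0, u0, fc * radial_scaling(r_ij))

print(atom_weights(np.array([0.0, 1.0, 2.5, 4.9])))
\end{verbatim}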
\subsection{Alchemical kernels}
In the presence of multiple elements, the Dirac notation makes it evident that SOAP representations consider each element separately, and do not include a notion of similarity between different elements. This makes the computational cost grow steeply with chemical diversity, and makes it impossible to exploit the similar behavior of different elements across the periodic table. In Refs.~\cite{De2016,Bartok2017} it was shown that extending SOAP with an alchemical kernel $\kappa_{\alpha\alpha'}$ coupling different elements improved the learning efficiency. It however led to a large increase of the computational cost, as it required considering more terms in the scalar product between two representations,
\begin{equation}
\begin{split}
\bra{\mathcal{X}_{j}^{(2)}}\ket{\mathcal{X}_{k}^{(2)}}_\kappa = &
\sum_{\alpha \beta n\alpha' \beta' n' l}
\bra{\mathcal{X}_{j}^{(2)}}\ket{\alpha n \alpha' n' l} \\
\times &
\kappa_{\alpha\beta} \kappa_{\alpha'\beta'} \bra{\beta n \beta' n' l}\ket{\mathcal{X}_{k}^{(2)}}.
\end{split}
\label{eq:alchemy-kappa}
\end{equation}
The bra-ket notation suggests that $\kappa_{\alpha\alpha'}$ serves essentially the purpose of an operator coupling the elements $\ket{\alpha}$ and $\ket{\alpha'}$.
In this spirit, one can write a decomposition of $\kappa$,
\begin{equation}
\kappa_{\alpha\alpha'} = \sum_{J=1}^{d_J} u_{\alpha J } u_{\alpha' J},
\end{equation}
where the coefficients can be seen as the components of the element kets on an ``elemental feature'' $\ket{J}$, i.e. $u_{\alpha J}=\bra{J}\ket{\alpha}$.
One can then rewrite Eq.~\eqref{eq:alchemy-kappa} as
\begin{equation}
\label{eq:alchemy-kernel}
\bra{\mathcal{X}_{j}^{(2)}}\ket{\mathcal{X}_{k}^{(2)}}_\kappa =
\sum_{JnJ'n'l} \bra{\mathcal{X}_{j}^{(2)}}\ket{JnJ'n'l} \bra{JnJ'n'l}\ket{\mathcal{X}_{k}^{(2)}},
\end{equation}
in which we have introduced a partially contracted version of the original representation,
\begin{equation}
\begin{split}
\nonumber
\bra{JnJ'n'l}\ket{\mathcal{X}_{j}^{(2)}} =& \sum_{\alpha \alpha'} u_{J\alpha}u_{J'\alpha'} \bra{\alpha n \alpha' n'l}\ket{\mathcal{X}_{j}^{(2)}}.
\label{eq:alchemy-feat}
\end{split}
\end{equation}
The transformed SO(3) vector components can be written in terms of the components of $\ket{J}$ in the element basis, $u_{J\alpha} = \bra{J}\ket{\alpha}$. If the number of basis kets $d_J$ is smaller than the number of elements under consideration, then the effective SO(3) vectors occupy a smaller space than $\{\ket{\alpha n\alpha' n'l}\}$.
This low-dimensionality representation of the chemical space can help improve the accuracy of a ML model in the presence of a large number of elements, and can also translate into substantial savings in terms of memory usage and computational effort. It does, however, break the sparsity of the representations, which can negatively affect the computational efficiency for some systems.
Note that this transformation can also be expressed directly in terms of the atom density, i.e. one can write
\begin{equation}
\bra{J\mathbf{r}}\ket{\mathcal{X}_{j}} = \sum_\alpha u_{J\alpha} \psi_{\mathcal{X}_{j}}^\alpha(\mathbf{r}).
\end{equation}
In the case with $d_J=1$ this formulation is analogous to several recent attempts to mitigate the complexity of ML models including many chemical species~\cite{nong+17prb,gast+18jcp,Huo2017} by representing heterogeneous systems with a single density, and different weights assigned to various elements.
Rewriting the SOAP environmental kernel as Eq.~(\ref{eq:alchemy-kernel}) makes it possible to consider the $u_{J\alpha} = \bra{J}\ket{\alpha}$ as optimizable parameters, to improve the performance of the alchemical kernel $\kappa_{\alpha\alpha'} = \sum_{J} u_{\alpha J}u_{J \alpha'}$, or to force it to be low-rank.
Different strategies can be used to determine the optimal $u_{J\alpha}$. Here we propose a cross-validation scheme that exploits the scalar-product nature of the SOAP kernel to re-cast one part of the problem as linear regression,\cite{Bishop2016} which we discuss in detail in the Supporting Information.
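The contraction defining $\bra{JnJ'n'l}\ket{\mathcal{X}_{j}^{(2)}}$ above amounts to a tensor contraction over the two element axes of the power spectrum; a Python sketch (array shapes and random inputs are ours, purely for illustration):
\begin{verbatim}
import numpy as np

n_alpha, n_max, l_max, d_J = 4, 6, 5, 2
rng = np.random.default_rng(2)
# <alpha n alpha' n' l | X>, here random stand-in values
P = rng.random((n_alpha, n_max, n_alpha, n_max, l_max + 1))
u = rng.random((d_J, n_alpha))   # u[J, alpha] = <J|alpha>
# <J n J' n' l | X> = sum over alpha, alpha' of u u <alpha n alpha' n' l|X>
P_c = np.einsum('Ja,Kb,anbml->JnKml', u, u, P)
print(P.size, '->', P_c.size)    # reduced by (n_alpha/d_J)^2
\end{verbatim}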
\subsection{Multiple-kernel learning}
We have shown that by manipulating the form of the SOAP kernel, e.g. by including a radial scaling, by introducing correlations between elements, or by adjusting other hyperparameters, such as the cutoff radius or the shape of the atomic Gaussian functions, it is possible to obtain different perspectives of the structural correlations, and to tune them to give the best possible performance in a regression task.
Determining the most effective representation of a given input is typically what deep neural networks are thought to excel at~\cite{goodfellow2016deep}, and exploring this possibility will be the subject of future investigation.
Remaining in the context of kernel ridge regression, one can attempt a different approach to further improve the performance of the regression. As done in Ref.~\cite{Bartok2017}, one can build a composite kernel out of a selection of different models, i.e.
\begin{equation}
K(\mathcal{A},\mathcal{B}) = \sum_\aleph w_\aleph K_\aleph(\mathcal{A},\mathcal{B}).
\label{eq:multi-k}
\end{equation}
This multiple-kernel model makes it possible to find the best combination of different representations of the atomic environments, using short and long-range, 2 and 3-body, radially-scaled and alchemically-contracted terms.
In a Gaussian Process Regression language, each model is meant to contribute $\sqrt{w_\aleph}$ to the variance of the target property. The weights can be set manually based on an intuitive understanding of how they contribute to a property, or -- more simply -- optimized by cross-validation.
Note that such combined kernels can still be seen as an explicit inner product between representations. In other words, taking sums of multiple kernels can be interpreted equivalently as generalisations of kernels, or as generalisations of representations that take the form
\begin{align}
\ket{\mathcal{X}} = \sqrt{w}_{1} \ket{\mathcal{X}^{1}} \oplus \sqrt{w}_{2} \ket{\mathcal{X}^{2}} \oplus \dots,
\end{align}
where $\oplus$ denotes concatenation.
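The equivalence between the weighted kernel sum of Eq.~\eqref{eq:multi-k} and the concatenated representation can be verified numerically (Python sketch with toy random features, ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
F1, F2 = rng.random((5, 7)), rng.random((5, 9))
w1, w2 = 4.0, 0.25                       # kernel weights
K_multi = w1 * (F1 @ F1.T) + w2 * (F2 @ F2.T)
F_cat = np.hstack([np.sqrt(w1) * F1, np.sqrt(w2) * F2])
print(np.allclose(K_multi, F_cat @ F_cat.T))   # True
\end{verbatim}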
\section{Results and discussion}
Having discussed different ways in which SOAP representations can be modified to capture structure-property relations in complex data sets more efficiently, we now verify the practical implications of such modifications.
In order to put these ideas to the test, we chose two data sets: one contains geometrically diverse, isolated organic molecules, while the other contains elementally diverse inorganic crystals.
\noindent{\bf The QM9 data set} is a collection of about 134k DFT-optimized (B3LYP/6-31G) structures of small organic molecules.\cite{Ramakrishnan2014} Each molecule contains up to nine heavy atoms (C, N, O and F) in addition to H. While the data set comprises only five atomic elements, it encompasses 621 distinct stoichiometries and is therefore very diverse geometrically. We followed Ref.~\citenum{Ramakrishnan2014} by removing all the 3,054 structures that failed the SMILES consistency test.
The QM9 data set has been used in many pioneering studies of machine learning for molecules, notably for the demonstration of the predictive power of methods based on Coulomb matrices\cite{Ramakrishnan2015a, Huang2016}, radial distribution functions\cite{fabe+18jcp} and SOAP.\cite{Bartok2017} It has also been used together with deep-learning schemes, such as SchNet~\cite{schu+18jcp} and HIP-NN~\cite{lubb+18jcp}. QM9 is a very heterogeneous data set, with some stoichiometries being heavily represented, and some considerably less sampled (e.g. F-containing compounds). This, together with the fact that it has been thoroughly benchmarked with several different representations and regression strategies~\cite{fabe+17jctc}, makes it an ideal benchmark to demonstrate the improved learning that is made possible by the scheme we introduce here.
\noindent{\bf The elpasolite data set} comprises about 11k DFT-optimized quaternary structures with stoichiometry ABC\textsubscript{2}D\textsubscript{6} (elpasolite AlNaK\textsubscript{2}F\textsubscript{6} being the archetype). We have used the elpasolite data set of Faber \textit{et al.}\cite{Faber2016} in which the four elements constituting each structure were chosen from the 39 main group elements H to Bi. The DFT-relaxed geometries of each structure in the elpasolite crystals are almost identical, which means that the data set is geometrically uniform but elementally diverse.
\subsection{Training data selection}
For each data set, we randomly selected two subsets: an optimization set (A) to be used to determine the hyperparameters of the model by cross-validation, and the other (B) to be used for training and testing. All of the optimizations discussed in this article (radial scaling, alchemical kernel learning and multiple-kernel learning) were performed on the A set. Once each optimization was performed, we randomly shuffled and partitioned set B multiple times to produce training set and test set pairs. In order to account for the variability of the model accuracy with respect to the composition of the training and test sets, we averaged over the learning curves for each pair to create the figures presented here.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.50]{fig1.pdf}
\end{center}
\caption{Learning curves for the elpasolite crystals. The standard SOAP curve is shown in black, the best curve from Ref.~\cite{Faber2018} is shown in bright red and the optimized curves are shown in dark red ($d_J=1$), purple ($d_J=2$) and blue ($d_J=4$). For each of these models, the kernels were constructed with $r_{c}=5$\AA \, and $\zeta=1$. The multiple-kernel model (shown in grey) combines three standard SOAP kernels ($\zeta=1$, $r_{c}=4$; $\zeta=1$, $r_{c}=6$; $\zeta=4$, $r_{c}=6$) and one optimized kernel ($d_{J}=4$, $\zeta=1$, $r_{c}=5$) in the ratio $4:3:1:220$. All of the kernels were constructed with $\nu=2$, $n_\text{max}=12$ radial basis functions and $l_\text{max}=9$ non-degenerate spherical harmonics. Error bars are omitted because they are as small as the data point markers.
}
\label{fig:elpasolites-lc}
\end{figure}
\subsection{Reduced-dimensionality alchemical kernels}
For the elpasolite crystals, our optimization set contained 2k structures and the remainder were used to construct five training and test set pairs at random (6k and 2k structures respectively). Figure \ref{fig:elpasolites-lc} shows the averaged learning curves. The reference curve (bright red line) was taken from Ref.~\cite{Faber2018} and corresponds to recently-proposed density-based representations.
The dark red, purple and blue curves show the result of optimizing the alchemical kernel, which we did by initializing low-dimensional $u_{J\alpha}$ based on the $d_J$ principal components of the alchemical kernel,
\begin{equation}
\label{eq:kappa-er}
\kappa_{\alpha\beta}=e^{-(\epsilon_\alpha-\epsilon_\beta)^2/2\sigma_\epsilon^2 - (r_\alpha-r_\beta)^2/2\sigma_r^2 },
\end{equation}
where $\epsilon_\alpha$ and $r_\alpha$ correspond to Pauling atomic electronegativity and van der Waals radius for the element $\alpha$.
The values of $u_{J\alpha}$ were then optimized with an iterative scheme working in the primal formulation of ridge regression for $\zeta=1$ (see SI).
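For illustration, the initialization of $u_{J\alpha}$ from the principal components of the kernel in Eq.~\eqref{eq:kappa-er} can be sketched as follows (Python; the toy electronegativity and radius values are made up, and $\sigma_\epsilon=\sigma_r=1$ is an arbitrary choice of ours):
\begin{verbatim}
import numpy as np

def kappa(eps, r, sig_e=1.0, sig_r=1.0):
    de = eps[:, None] - eps[None, :]
    dr = r[:, None] - r[None, :]
    return np.exp(-de**2 / (2 * sig_e**2) - dr**2 / (2 * sig_r**2))

def init_u(kap, d_J):
    # Top-d_J eigenvectors of kappa, scaled so that
    # sum_J u[J, a] u[J, b] approximates kappa[a, b].
    w, V = np.linalg.eigh(kap)
    idx = np.argsort(w)[::-1][:d_J]
    return (V[:, idx] * np.sqrt(np.maximum(w[idx], 0))).T

eps = np.array([2.2, 2.5, 3.0, 3.4, 4.0])   # made-up values
r = np.array([1.2, 1.7, 1.55, 1.52, 1.47])
print(init_u(kappa(eps, r), d_J=2).shape)   # (2, 5)
\end{verbatim}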
Reducing the dimensionality of the SOAP representations by three orders of magnitude with $d_{J}=1$ leads to a poor learning rate (dark red line). The learning behaviour is much improved with $d_J = 2$ (purple line), which corresponds to a reduction in the dimensionality of the SOAP representations by a factor of 380. For fewer than 2k structures, the performance is better than standard SOAP (black line), but the learning rate gradually decreases (saturation) as the number of training structures increases. This suggests that the $d_{J}=2$ representation is unable to represent diversity adequately in large sets of structures because of its low dimensionality, in much the same way as reducing $\zeta$ has been found to lead to saturation in SOAP models trained on the QM9 data set.\cite{Bartok2013a}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig2.pdf}
\end{center}
\caption{Data-driven representations of the chemical space. (a) A 2D map of the elements contained in the elpasolite data set, with the coordinates corresponding to $u_{1\alpha}$ and $u_{2\alpha}$, for the case $d_J=2$. Points are colored according to the group. (b) A periodic table colored according to the coordinates in the 2D chemical space. $u_{1\alpha}$ corresponds to the red channel and $u_{2\alpha}$ to the blue channel. (c) A periodic table colored according to $u_{1\alpha}$ (red channel) for a 1D chemical space. (d) A periodic table colored according to 4D chemical coordinates ($u_{1\alpha}$: red channel, $u_{2\alpha}$: green channel, $u_{3\alpha}$: blue channel, $u_{4\alpha}$: hatches opacity)}
\label{fig:periodictable}
\end{figure}
By increasing $d_J$ to 4 (blue line), which corresponds to a reduction in the dimensionality of the SOAP representations by 99\%, the resulting model outperforms both the reference (bright red line) and standard SOAP models. There is still, however, a reduction in the learning rate as the number of training structures increases. Again, this is likely an indication that the low dimensionality of the representation is unable to represent diversity adequately in large sets of structures (in contrast to the higher-dimensional standard SOAP representation).
To test this idea, we combined multiple kernels linearly, including full-dimensionality standard SOAP kernels for $r_{c}=4, 5, 6$ and $\zeta=1, 2, 3, 4$, and the optimal alchemical kernels for $d_J = 1, 2, 4$. This multiple-kernel model (grey line) combines the optimized element correlations of the alchemical representation with the resistance to saturation of the standard SOAP representation, leading to an improvement in performance over standard SOAP and over the state of the art by some 30\% on the full training set.
It is worth noting that our regression model also outperforms by a factor of two a recently-proposed scheme to determine similarities between elements based on artificial intelligence techniques~\cite{pnas}.
The performance of the model for different levels of compression of the chemical space reflects the tradeoff between the available data and the complexity of the representation.
Training of the extended model entails non-linear optimization of $d_J\times n_\text{elements}$ weights, combined with KRR in a SOAP representation that contains $d_J^2$ ``element channels''. A low-dimensional model can extrapolate more reliably to combinations of elements that are not present in the train set, but may not have sufficient flexibility to maintain high learning rates when larger amounts of data are available.
This tradeoff is evident when considering the apparent contradiction between the fact that we observed little improvement in model performance when increasing $d_{J}$ beyond four, and the fact that a multi-kernel that includes full SOAP models does significantly improve the prediction accuracy. We attribute this to the fact that the number of free parameters grows steeply with $d_{J}$, which leads to the failure of the cross-validation scheme to extract meaningful information from the relatively small optimization set. Conversely, the multi-kernel model provides an approach to include full element information, with only a small number of hyperparameters defining how much weight this information should be given in comparison to more coarse-grained descriptions.
\subsection{A data-driven periodic table of the elements}
The eigenvectors of the alchemical kernel $\kappa_{\alpha\alpha'}$ lend themselves naturally to be interpreted as spanning a continuous alchemical space in which the element kets $\ket{\alpha}$ are embedded.
In other words, they make it possible to obtain a low-dimensional representation of the elements, in which elements that behave in a similar way with respect to the target property lie close to each other.
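A minimal sketch of this construction (our own illustration; the toy kernel below is a placeholder) reads the coordinates $u_{J\alpha}$ off the leading eigenvectors of $\kappa_{\alpha\alpha'}$:
\begin{verbatim}
import numpy as np

def element_embedding(kappa, d_J=2):
    """Coordinates u[J, alpha] such that kappa is approximated by u.T @ u."""
    vals, vecs = np.linalg.eigh(kappa)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:d_J]        # keep the d_J largest
    return (vecs[:, idx] * np.sqrt(np.abs(vals[idx]))).T

kappa = np.array([[1.0, 0.8, 0.1],
                  [0.8, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
u = element_embedding(kappa)                  # shape (d_J, n_elements)
\end{verbatim}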
Figure \ref{fig:periodictable} (a) shows the optimized distribution of the elements $u_{J\alpha}$ in the two-dimensional space spanned by $\ket{1}$ and $\ket{2}$ for $d_J=2$. Elements within different groups of the periodic table are colored differently.
It is immediately apparent from this coloring scheme that optimization of the alchemical kernel leads to clustering of elements that is reminiscent of their position in the periodic table.
The correlation between the data-driven element representations and the position in the periodic table is perhaps even more apparent in Fig.~\ref{fig:periodictable} (b), in which the periodic table is color-coded according to the values of $u_{J\alpha}$.
This fascinating observation suggests that one could in principle construct a reasonable alchemical kernel using chemical intuition alone. However, there are two significant advantages to the approach presented here. First, the optimization is performed automatically on the data set under consideration. Second, the optimization can be performed just as well in a lower or higher-dimensional space (e.g. $d_J = 1$ or $d_J=4$, Fig.~\ref{fig:periodictable} (c) and (d)), where intuition based on the (two-dimensional) periodic table is likely to hinder the performance of the model.
It should also be noted that the elpasolite data set consists of configurations that share the same structure, and span a space that is dominated by element correlations, making an optimization that ignores geometric correlations particularly effective.
More structurally diverse data sets will imply stronger coupling between geometry and composition, making it advisable to consider more general extensions of the SOAP representations to extract comparable insight.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig3.pdf}
\end{center}
\caption{Learning curves for the QM9 data set. Four of the lines show the MAE on the test set for various standard SOAP kernels ($\zeta = 2$) with different cutoff radii (dashed lines graduating from red to blue). The other lines show the MAE on the test set for the optimal radially-scaled (RS) and multiple-kernel (MK) SOAP models (black and grey lines respectively). In every model, the kernels were constructed with $\nu=2$, $n_\text{max}=12$ radial basis functions and $l_\text{max}=9$ non-degenerate spherical harmonics. The inset shows the radial-scaling function $u(r)$ from $r = 0$\AA \, to $r = 5$\AA \, with the parameters that were found to minimize the ten-fold cross validation MAE on the optimization set through a grid search, $r_{0} = 2$\AA \, and $m=7$. The multiple-kernel model combines the $r_{c}=2, 3, 4$ and RS kernels in the ratio $100,000:1:2:10,000$, and the learning curve agrees with the RS result to within graphical accuracy. Error bars are omitted because they are as small as the data point markers.
Note that errors are expressed on a per-atom basis. Error per molecule expressed in kcal/mol can be obtained approximately by multiplying the scale by 0.4147, a factor computed from the average size of a molecule in the QM9 database.
}
\label{fig:QM9-lc-rs}
\end{figure}
\subsection{Radial scaling in the QM9 data set}
Molecular databases such as the QM9 are less elementally diverse (containing only 5 elements), but contain a broad variety of structures.
It has been shown that SOAP kernels can predict with great accuracy the stability of these molecules. However, reaching the best accuracy requires a combination of kernels, as in Eq.~\eqref{eq:multi-k}, with different cutoff radii.
The combination of kernels with different length scales has been interpreted in terms of the need for encoding in the kernel the notion of multiple length scales in molecular interactions~\cite{Bartok2017}.
The same argument can be applied to the optimization of a radial scaling function $u(r)$ (see Section~\ref{sub:radial}), so it should be possible to obtain similar accuracy to a multi-scale kernel by simply optimizing a suitable parameterization of such scaling.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.50]{fig4.pdf}
\end{center}
\caption{Learning curves for the QM9 data set after inclusion of radially-scaled and alchemically-optimized SOAP kernels. Standard SOAP kernels with different cutoff radii are compared with the result of optimizing alchemical correlations using the scheme presented previously for the elpasolite crystal data set (blue and red lines). The learning curve of the optimized radially-scaled kernel (dashed black line with circles) is improved through inclusion of a Gaussian alchemical kernel (dashed black line with squares), which was optimized specifically for $\zeta=2$ using a grid search. The combined optimization of the radial scaling and alchemical correlations leads to a model that matches the accuracy of the state of the art curve (dashed red line), which corresponds to the representations from Ref.~\cite{Faber2018}, with the errors normalized by the average size of a molecule in the QM9 database. In every SOAP-based model, the kernels were constructed with $\nu=2$, $n_\text{max}=12$ radial basis functions and $l_\text{max}=9$ non-degenerate spherical harmonics. Error bars are omitted because they are as small as the data point markers.
}
\label{fig:QM9-lc-alchemy}
\end{figure}
Following Eq.~\eqref{eq:dirac-rxj-ur-c0}, we consider a simple functional form with a long-range algebraic decay and smooth behavior at $r\rightarrow 0$,
\begin{equation}
u(r) = \begin{cases}
\frac{1}{(r/r_0)^m} &\text{if } c=0,\\
1 &\text{if } m=0, \\
\frac{c}{c+(r/r_0)^m} &\text{otherwise}.
\end{cases} \label{eq:radial-u}
\end{equation}
We optimized $r_0$ and $m$ using a grid search and 10-fold cross validation over an optimization set of 5,000 randomly-selected molecules with $c=1$. The full set of parameters that we tested is given in the SI.
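A sketch of the scaling function (our own, in Python; the function name is ours, and the quoted optimum $r_0 = 2$~\AA, $m = 7$ follows the values reported below) is
\begin{verbatim}
import numpy as np

def radial_scaling(r, r0=2.0, m=7, c=1.0):
    """u(r) of Eq. (radial-u): smooth at r -> 0, algebraic decay at large r."""
    r = np.asarray(r, dtype=float)
    if c == 0:
        return (r / r0)**(-m)
    if m == 0:
        return np.ones_like(r)
    return c / (c + (r / r0)**m)

print(radial_scaling(np.linspace(0.0, 5.0, 6)))  # decays smoothly from 1
\end{verbatim}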
Figure~\ref{fig:QM9-lc-rs} compares the learning curves of conventional SOAP for different cutoff radii with the best radial scaling determined on the optimization set.
Radial scaling leads to a substantial ($\sim$25\%) improvement in the performance of the model.\footnote{It is important to stress that the results we report here are about 20\% better than those in Ref.~\cite{Bartok2017}, because we removed the 3,054 structures that failed the SMILES consistency test, as is done by other papers using this data set as a benchmark, including Ref.~\cite{Faber2018}.} The learning rate does not decrease when the training is extended to larger fractions of the QM9. At the level of 100k reference configurations, the radially-scaled kernel achieves a MAE as low as 0.34 meV/atom, corresponding to 0.14 kcal/mol.
When considering state-of-the-art results achieved in the past year using more generally-applicable representations, our optimized model achieves an improvement which is between 25 and 60\%{}. Multi-kernel SOAP~\cite{Bartok2017} yields 0.18 kcal/mol MAE, and two different neural network models reach 0.26~\cite{lubb+18jcp} or 0.32~\cite{schu+18jcp} kcal/mol MAE.
We also attempted to build a multi-kernel model including both conventional SOAP kernels and the best radially-scaled kernels. The improvement we could achieve is marginal, which reinforces the notion that an optimal radial scaling of the representation is essentially equivalent to an optimized combination of representations with different scales.
Although the QM9 data set exhibits a low degree of composition diversity, one can attempt to further improve the performance of the model by introducing correlations between chemical species.
In this case it is necessary to use a $\zeta=2$ exponent to incorporate many-body interactions in the regression, which makes the application of the primal-based optimization scheme we used for elpasolites impractical\footnote{Note that the $u_{J\alpha}$ optimized for the $\zeta=1$ representations lead to a degradation of the accuracy when used for the $\zeta=2$ case.}.
For this reason, and inspired by previous results that used a heuristic determination of $\kappa_{\alpha\beta}$ from the Pauling electronegativity of the atoms~\cite{Bartok2017}, we just used Eq.~\eqref{eq:kappa-er} and performed a grid search to find the optimal values of $\sigma_\epsilon$ and $\sigma_r$ (see the SI for more details).
Fig.~\ref{fig:QM9-lc-alchemy} shows that this simple ansatz improves by a further 10\%{} the performance of a SOAP-based KRR model, and also combines with the optimized radial scaling to yield a model which is essentially equivalent in performance to the optimized representations of Ref.~\cite{Faber2018}.
The success of the rather primitive form of this feature optimization protocol suggests that a more general strategy in which structural and chemical correlations are tuned simultaneously could improve even further beyond the state of the art.
\section{Conclusions}
Thanks to their mathematically sound, unbiased constructions, SOAP representations are particularly well-suited to be extended,
incorporating information on correlations between structure, composition and properties.
We have given two examples of such extensions, representing the behavior of different chemical species as low-dimensional vectors, and modulating the information content of the representations with a radial scaling function.
These optimizations improve significantly the performance of SOAP representations, matching or surpassing the state of the art on two very different data sets -- a chemically diverse set of quaternary solid compounds, and a collection of small organic molecules.
The framework we use to simplify the description of atomic species can reduce dramatically the complexity and computational costs of machine-learning models for multi-component systems, and could also be applied to coarse-grained models, in which beads correspond to functional groups, and a reduced-dimensionality description could identify features such as polarity or hydrophobicity.
The exercise of optimizing SOAP representations does not only lead to more effective machine learning of molecular and materials stability.
As we have demonstrated by re-discovering the periodic table of the elements, and extending it to one and four dimensions, it also makes it possible to extract useful insights from the inspection of the optimal combinations of features.
When it comes to the applications of machine learning to chemistry, physics and materials science, accuracy and understanding go hand in hand.
\section*{Acknowledgements}
The authors would like to thank G\'abor Cs\'anyi for insightful discussion and comments on an early version of the manuscript. MC and MJW
were supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 677013-HBMAP). FM was supported by the NCCR MARVEL, funded by the Swiss National Science Foundation.
\bibliographystyle{bibliography/aip}
\section{Introduction}
Over the past two decades, the model space of WIMP-like dark matter (DM) has expanded dramatically. While canonical WIMPs with TeV-scale masses remain an attractive class of models, great attention has turned to sectors with a broader range of interactions and lighter states. The cosmological evolution and experimental signatures often differ significantly from traditional WIMP scenarios.
One such difference is the presence of inelastic scattering. The existence of an excited state $\chi^*$ of the $\chi$ DM particle can dramatically alter the energy spectrum in direct detection experiments through endothermic \cite{TuckerSmith:2001hy} or exothermic scattering processes from long-lived states \cite{Finkbeiner:2009mi,Batell:2009vb,Lang:2010cd,Graham:2010ca}. In particular, the introduction of a new scale, $\delta = m_{\chi^*}-m_\chi$, changes the kinematics of processes at or below the mass splitting scale. Decays of the excited state can have profound implications for direct \cite{Finkbeiner:2009ug,Chang:2010en,Feldstein:2010su}, indirect, and accelerator signals~\cite{Morrissey:2014yma,Izaguirre:2015zva,Izaguirre:2017bqb,Berlin:2018jbm,Izaguirre:2015yja}.
Another important change has been the introduction of new, light mediators between the dark sector and the Standard Model (SM) \cite{Alexander:2016aln}. In models with dark photons, in particular, very light ($\sim \text{MeV}$) thermal DM is allowed.
These two modifications are connected. Light Dirac fermion DM is in severe conflict with measurements of the CMB if it can annihilate at the time of recombination \cite{Padmanabhan:2005es,Slatyer:2009yq,Madhavacheril:2013cna}. However, a pseudo-Dirac fermion escapes the CMB constraints if only the ground state $\chi$ has a significant abundance. Thus, a broad class of light DM models necessarily introduces both of these components.
If the splitting is present, the elastic scattering process is highly suppressed in light DM models with dark photon mediators. As a consequence, understanding the possible presence of these excited states and their signals is imperative.
For MeV-mass DM particles, there are two natural scales to consider. The first is an $\mathcal{O}(\text{MeV})$ splitting in the dark sector, which decouples the excited states from questions of direct detection. However, the splitting breaks a symmetry of the theory, and has a scale generally smaller than the other scales in the theory. Thus a second possibility is splittings in the $\alpha/4\pi \times \mathcal{O}(\text{MeV}) \sim \mathcal{O}(\text{keV})$ range, which change the signatures of direct detection experiments. While the nuclear signals of these excited states have been studied \cite{Graham:2010ca,Fox:2013pia,Frandsen:2014ima}, the electronic signals are relatively unexplored.
While MeV-scale excitations typically decay promptly into $e^+e^-$ pairs, keV excitations are potentially long-lived and of the appropriate energy scale to impact direct detection. It is this latter case that we consider in this paper, with a particular emphasis on electronic signals of $\chi^*$ down-scattering in direct detection experiments. A systematic study of this parameter space and cosmological history, including signals of nuclear recoils from primordial states, is expected to appear soon in~\cite{Natalia}.
Elastic WIMP recoils of light particles typically deposit $\sim v^2 m_\chi \sim 10 \,\text{eV}$ of energy \cite{Essig:2011nj,Essig:2015cda,Essig:2017kqs}. Such small energies are currently observable with precision experiments with small target masses \cite{Barak:2020fql,Aguilar-Arevalo:2019wdi} or large targets at the expense of less background rejection \cite{Akerib:2017uem,Ren:2018gyx,Aprile:2019xxb}. Inelastic scatters depositing $\textrm{keV}$ energies allow one to consider limits in energy ranges where the experiment can be properly fiducialized, including in the current world-leading xenon experiments \cite{Fu:2017lfc,Akerib:2017uem,Aprile:2020tmw}. In this paper, we shall show that ongoing experiments are sensitive to thermal relics through their inelastic scattering. Recently, the XENON1T collaboration has reported an excess of electron recoil events \cite{Aprile:2020tmw}. We shall see that the scenarios we consider provide possible explanations of this excess and predict testable experimental consequences.
The layout of this paper is as follows: in Sec.~\ref{sec:modelspace}, we present the parameter space of the model in question. In Secs.~\ref{sec:universe}--\ref{sec:earth}, we consider various possible sources of excited states, including the early universe, the Sun, and the Earth, and the resulting implications of these states for direct detection experiments. We then consider other terrestrial and astrophysical signatures, and make a connection to the 3.5 keV line observed from nearby galaxies and galaxy clusters as well as to future laboratory experiments. Finally, we conclude in Sec.~\ref{sec:discussion}.
\section{Model Space}
\label{sec:modelspace}
We consider models of DM with excited states. Such models are common in particle physics, and span a wide range of masses. Since we will be interested in the energy deposition of these excited states into direct detection experiments, we will need a means of both producing and de-exciting these states. We focus on thermal relics, which can make up all of the DM, or can be a subcomponent.
We shall operate within a specific framework of thermal relics with interactions mediated by a dark photon.
The model in question combines ingredients discussed broadly in the literature over more than a decade \cite{Holdom:1985ag,Boehm:2003hm,Finkbeiner:2007kk,ArkaniHamed:2008qn,Pospelov:2008jd,Hooper:2008im,Knapen:2017xzo,Cohen:2010kn}. We consider a light DM particle, coupled to a massive dark photon which is kinetically mixed with the Standard Model photon. We consider the DM to be a pseudo-Dirac fermion with states $\chi$ and \ensuremath{\chi^*}\, split by an amount $\delta$. We further consider a dipole moment allowing transitions between the DM states. The interaction Lagrangian is
\begin{eqnarray}\begin{aligned}
{\cal L} \supset \frac{\epsilon}{2} \, F_{\mu\nu} F^{\prime \mu\nu} + i e_D \,A^\prime_\mu \, \bar{\chi}^* \gamma^\mu \chi + \frac{1}{\Lambda_d} ~ \bar{\chi}^* \sigma^{\mu \nu} \chi \, F_{\mu\nu}
\, ,
\label{eq:lagrangian}
\end{aligned}\end{eqnarray}
where $e_D = \sqrt{4 \pi \alpha_D}$ is the dark photon gauge coupling. The dark photon and the dipole can each mediate transitions between the ground and excited DM states, since both the vector interaction and the dipole operator are off-diagonal between the mass eigenstates of the pseudo-Dirac fermion.
We first consider the processes arising from the presence of the dark photon. The up- and down-scattering cross section is parametrized by the cross section in the elastic limit. For a target $T$ with charge $Z_T$ this is simply
\begin{equation}
\sigma _T=\frac{16 \pi \, Z_T^2 \, \alpha \, \alpha_D \, \mu^2 \, \epsilon ^2}{m_{A^\prime}^4}=\frac{16 \pi \, Z_T^2 \, \alpha \, \mu^2 \, y}{m_\chi^4}
\, ,
\label{eq:DPxsec}
\end{equation}
where $\mu$ is the reduced mass of the target and DM particles and the variable
\begin{equation}
y \equiv \epsilon^2 \, \alpha_D \, (m_\chi/m_{A^\prime})^4
\end{equation}
is a commonly used parameterization of the DM couplings~\cite{Izaguirre:2015yja}. $\mu=\mu_{\chi T}$ is the reduced mass of the $\chi$-target system, and subscripts are suppressed when the meaning is clear.
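Before turning to the kinematic corrections below, a direct numerical evaluation of Eq.~\eqref{eq:DPxsec} is straightforward. The following sketch is our own illustration (the function name and benchmark inputs are our assumptions, with the inputs chosen to match the thermal-target values discussed later in the text), with natural units converted via $\hbar c$:
\begin{verbatim}
import numpy as np

ALPHA = 1 / 137.036
HBARC_CM = 1.973e-11           # MeV cm, so that MeV^-2 -> cm^2

def sigma_T(Z_T, m_chi, m_T, m_Ap, eps, alpha_D):
    """Elastic-limit cross section of Eq. (DPxsec); all masses in MeV."""
    mu = m_chi * m_T / (m_chi + m_T)    # DM-target reduced mass
    sig = 16 * np.pi * Z_T**2 * ALPHA * alpha_D * mu**2 * eps**2 / m_Ap**4
    return sig * HBARC_CM**2            # cm^2

# chi-electron scattering for m_chi = 100 MeV, m_A' = 3 m_chi, eps = 1e-4
print(sigma_T(Z_T=1, m_chi=100.0, m_T=0.511, m_Ap=300.0,
              eps=1e-4, alpha_D=0.5))
\end{verbatim}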
When the kinematics are unimportant, this characterizes the overall process for up- and down-scattering. When the splitting is significant compared to the overall kinematics, the width of the recoil energy distribution is corrected compared to the elastic case
\begin{equation}
\Delta E_R = \frac{2 \mu^2 v^2 \sqrt{1\pm\frac{2 \delta}{\mu v^2}}}{M_T}=\Delta E_{\rm R,elastic} \sqrt{1\pm\frac{2 \delta}{\mu v^2}}
\, ,
\end{equation}
where $+$ is for exothermic and $-$ for endothermic inelastic scattering and $v$ is the DM velocity. This corrects the overall scattering cross section
\begin{equation}
\sigma_{T, \text{inel}}=\sigma_{T, \text{elastic}} \, \sqrt{1\pm\frac{2 \delta}{\mu v^2}}
~,
\label{eq:kinenhance}
\end{equation}
(see, e.g., Refs.~\cite{TuckerSmith:2001hy,Lang:2010cd,Finkbeiner:2009mi,Graham:2010ca}).
Decays of excited states can occur through the dark photon; in the parameter range that we consider, the lifetime is longer than the age of the universe \cite{Finkbeiner:2009mi,Batell:2009vb}. Only once the splitting is $\mathcal{O}(\text{MeV})$ and de-excitations into $e^+e^-$ pairs are allowed does the lifetime become short.
The higher-dimensional dipole operator in Eq.~\eqref{eq:lagrangian} can also result in up- and down-scattering, and scenarios involving this have been discussed previously \cite{Chang:2010en,Feldstein:2010su}. For our studies, the dipole will be essential for scenarios with short decay lifetimes.
The excited state decays at a rate of
\begin{equation}\tau^{-1} \sim \pi \delta^3 /\Lambda_d^2 \sim \text{sec}^{-1} \times \left(\frac{\delta}{\textrm{keV}}\right)^3\left(\frac{\text{TeV}}{\Lambda_d}\right)^2.
\end{equation}
The scale of the dipole moment is important in determining the possible source of the excited states. In principle, a dark photon-interacting thermal relic need have no dipole operator with electromagnetism at all. A Planck-suppressed dipole operator (i.e., $\Lambda_d \sim M_{\text{pl}}$) does not mediate a decay over the age of the universe for $\delta \lesssim \text{MeV}$. Thus, a natural starting point would be to consider excited states which are produced primordially and are stable on cosmological timescales, e.g.~\cite{Finkbeiner:2009mi,Batell:2009vb}.
In the presence of a larger dipole moment (or another means of decay), the primordial excited abundance is depleted and more local production mechanisms become important. For instance, inelastic up-scattering of DM on targets near the Earth can regenerate a local flux of excited states. The Sun is a natural possibility, given its (relatively) high temperature. The Earth is also a possibility, if the kinetic energy of the DM is large enough. For the Sun to facilitate a signal, the $\chi^*$ lifetime must be at least approximately $1 \ \text{AU}/ v_0 \sim 10^6 \ \text{sec}$ in order to make it to the Earth, where $v_0$ is the DM velocity dispersion. If $\chi^*$ is accelerated by a solar scattering, as we shall consider, lifetimes can be a bit shorter, but not in a way that qualitatively changes the picture. For up-scattering in the Earth, the lifetime can be much shorter. Indeed, scenarios with very short ($\sim 100 \ \text{$\mu$sec}$) lifetimes have been considered \cite{Chang:2010en}. We will focus on scenarios where the entire Earth is a source of excited states, and thus lifetimes greater than $r_{\text{earth}}/v_0 \sim 100 \ \text{sec}$. These requirements are broadly satisfied for $\Lambda_d \gtrsim (10 - 100) \ \text{TeV}$. Given that radiative dipoles are typically suppressed by $e\, m_f$, this is a relatively mild constraint.
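These lifetime benchmarks are easy to check against the parametric estimate above; a minimal numerical sketch (our own, with natural units converted via $\hbar$):
\begin{verbatim}
import numpy as np

HBAR_S = 6.582e-22    # MeV s

def lifetime_sec(delta_keV, Lambda_TeV):
    """tau ~ Lambda_d^2 / (pi delta^3), converted to seconds."""
    delta = delta_keV * 1e-3          # keV -> MeV
    Lam = Lambda_TeV * 1e6            # TeV -> MeV
    return HBAR_S / (np.pi * delta**3 / Lam**2)

# Compare with the ~100 sec (Earth-sourced) and ~1e6 sec (Sun-sourced)
# benchmarks quoted in the text.
print(lifetime_sec(3.5, 100.0))
\end{verbatim}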
Since we will be focused on xenon signals of such de-excitations and decays, let us take a moment to describe our event rates. Given the ongoing searches for light DM in electron recoils, there has been incredible progress in understanding the ionization rates from DM scattering, e.g., Refs.~\cite{Essig:2011nj,Essig:2012yx,Essig:2015cda, Essig:2017kqs,Catena:2019gfa}. For ionization of electrons from the elastic scattering of slow-moving DM heavier than an MeV, the momentum transfer is $q \sim m_e v_e$, where the typical electron velocity is $v_e \sim \alpha$. Thus, in order to achieve $\gtrsim \text{keV}$ electron recoil energies above threshold in xenon detectors, one must go well away from the peak of the electron form factor in order to take advantage of the large electron momenta close to the nucleus. In contrast, the scenarios we shall consider in this work involve down-scattering with sizable splittings $\delta \gtrsim \text{keV} \gg \alpha^2 m_e$, such that $q \sim ( m_e \, \delta)^{1/2}$ and the recoil energy is peaked at $\delta$. In other words, the primary support of the process comes from the peak of the electron form factor, and hence, the electron momentum does not play a major role in the scatterings we consider. We estimate the event rates in a xenon detector by assuming the $n=4, 5$ orbitals are accessible and populated by ``free'' electrons, and ignore the $n=1,2,3$ orbitals, which are more tightly bound. We expect this to be a reasonable approximation for mass splittings $\delta \gtrsim \text{keV}$ and anticipate $\mathcal{O}(1)$ corrections; a more precise study is warranted in future work.
Having laid out the basic tools for studying the electromagnetic signals of inelastic DM scatters, we are now in a position to consider the actual signals. We will consider three distinct sources of excited states: primordial abundances, excitations from solar reflection, and up-scatterings in the Earth. The signals of each of these differ and provide complementary constraints.
\section{Excited States from the Early Universe}
\label{sec:universe}
The high densities and temperatures of the early universe provide an efficient means of generating a cosmologically stable $\chi^*$ abundance. If DM was part of a thermal bath in the primordial universe, chemical equilibrium drives the relative abundances of $\chi$ and $\chi^*$ to comparable values. This is indeed the case in standard cosmologies of thermal relics in which DM was once in equilibrium with ordinary matter. At much later times, typically once the temperature is much smaller than the mass splitting $\delta$, the relative abundance of $\chi^*$ (compared to $\chi$) is exponentially suppressed and freezes out. Thus, estimating the primordial fraction of $\chi^*$ at late times requires tracking the cosmological evolution across the periods of $\chi , \chi^* \leftrightarrow \text{SM}$ chemical and kinetic decoupling, as well as the period of $\chi \leftrightarrow \chi^*$ decoupling. In the following discussion, we give simple analytic expressions for the rates of these various processes.
Within the context of Eq.~(\ref{eq:lagrangian}), DM can maintain chemical equilibrium with the SM bath through coannihilations to electromagnetically charged SM particles $f$, as mediated by the dark photon, $\chi \chi^* \leftrightarrow A^\prime \leftrightarrow f f$. For temperatures $T \ll m_{A^\prime}$, this process is dominated by the exchange of an off-shell $A^\prime$. The total comoving $\chi + \chi^*$ density is dictated by the temperature at which these coannihilations decouple. If this occurs at a temperature much greater than the mass splitting $\delta$, the rate for such processes scales as $\sigma v \sim \alpha \, y / m_\chi^2$. The conserved $\chi + \chi^*$ comoving density is then consistent with observations of the DM energy density provided that $\sigma v \sim 1 / (T_{\text{eq}} \, m_{\text{pl}})$, where $T_{\text{eq}} \sim 0.8 \ \text{eV}$ is the temperature at matter-radiation equality and $m_{\text{pl}}$ is the Planck mass, which is equivalent to
\begin{equation}
\label{eq:yfreezeout}
y \sim 10^{-10} \times \left( \frac{m_\chi}{100 \ \text{MeV}} \right)^2
~.
\end{equation}
After chemically decoupling from the SM, $\chi$ and $\chi^*$ remain chemically coupled to each other through $\chi^* \chi^* \leftrightarrow \chi \chi$ and $\chi^* f \leftrightarrow \chi f$, where the latter process also enforces kinetic equilibrium between the dark sector and the SM. Neither process alters the total $\chi + \chi^*$ number, but each drives the relative number density to the equilibrium value $n_\chi^* / n_{\chi} \sim e^{- \delta / T_\chi}$, where $T_\chi$ is the temperature of the $\chi + \chi^*$ bath. Once $\chi$ and $\chi^*$ chemically decouple from each other, the primordial comoving abundance of the excited state $\chi^*$ is no longer depleted by annihilation or scattering processes.
The DM temperature $T_\chi$ is governed by the temperature of kinetic decoupling $T_{\text{kin}} \ll m_\chi$, which is in turn dictated by DM-electron down-scattering $\chi^* e \leftrightarrow \chi e$ for $m_\chi \lesssim \text{GeV}$. For $m_\chi \gg \text{MeV}$, kinetic decoupling occurs well after $\chi, \chi^*$ become non-relativistic and chemically decouple from the SM, due to the enhanced abundance of electrons compared to DM particles at early times. In the limit that $m_e \ll T \ll m_{A^\prime}$, the thermally-averaged rate for $\chi^* e \leftrightarrow \chi e$ is
\begin{equation}
\Gamma_{\chi e} \simeq \frac{360 \, \zeta(5)}{\pi} ~ \frac{\alpha \, \alpha_D \, \epsilon^2 \, T^5}{m_{A^\prime}^4}
~.
\end{equation}
At much lower temperatures, $T \ll m_e$, $\Gamma_{\chi e}$ is exponentially suppressed, due to the dwindling electron abundance. We estimate $T_{\text{kin}}$ as the temperature at which $\Gamma_{\chi e}$ drops below the rate of Hubble expansion $H$. For $T \lesssim T_{\text{kin}}$, the DM temperature evolves independently of the SM plasma as $T_\chi \sim T^2 / T_{\text{kin}}$. In most of the parameter space that we investigate, kinetic decoupling occurs near or slightly below the electron mass threshold.
\begin{figure}\begin{center}
\begin{adjustwidth}{-0.65in}{-0.75in}
\includegraphics[scale=0.64]{iDM_f2_delta3.pdf}
\includegraphics[scale=0.64]{iDM_f2_delta100.pdf}
\end{adjustwidth}
\caption{The fraction $f_*$ of dark matter that is composed of excited states (shaded green) as a function of dark matter mass $m_\chi$ and dark sector coupling $\alpha_D$ for $m_{A^\prime} = 3 \, m_\chi$ and various values of the $\chi^* - \chi$ mass splitting, $\delta = 3 \ \text{ keV}$ (left) and $100 \text{ keV}$ (right). For each point in parameter space, we fix the kinetic mixing parameter $\epsilon$ such that the abundance of $\chi$ agrees with the observed dark matter energy density (cyan). Shown in gray are regions excluded by elastic self-scattering of dark matter~\cite{Berlin:2018jbm,Tulin:2017ara} and distortions of the CMB from late-time annihilations~\cite{Aghanim:2018eyx}.}\label{fig:f2eps}\end{center}
\end{figure}
\begin{figure}\begin{center}
\begin{adjustwidth}{-0.58in}{-0.75in}
\includegraphics[scale=0.64]{iDM_delta3.pdf}
\includegraphics[scale=0.64]{iDM_delta100.pdf}
\end{adjustwidth}
\caption{In blue, the event yield at XENON1T from down-scattering of a primordial excited pseudo-Dirac dark matter subcomponent as a function of $\epsilon$ and $m_\chi$, for $\alpha_D = 0.5$, $m_{A^\prime} / m_\chi = 3$, and various choices of the mass splitting, $\delta = 3 \ \text{keV}$ (left) and $\delta = 100 \ \text{keV}$ (right). Throughout, we assume that $\chi$ makes up the entirety of the dark matter abundance; along the black contours, the thermal abundance of $\chi$ is consistent with the observed dark matter energy density. Also shown are regions excluded by recent missing energy/momentum searches at NA64~\cite{NA64:2019imj} and BaBar~\cite{Lees:2017lec} (solid gray), as well as the projected sensitivities of searches for similar signals at LDMX and Belle II (dashed)~\cite{Izaguirre:2014bca,Battaglieri:2017aum,Akesson:2018vlm,Berlin:2018bsc}. Exclusions derived from distortions of the CMB anisotropies are also shown (solid gray)~\cite{Aghanim:2018eyx}.}\label{fig:epsmx}\end{center}
\end{figure}
Even at temperatures well below the electron threshold, $\chi$ and $\chi^*$ can remain in chemical equilibrium through DM-DM scattering $\chi^* \chi^* \leftrightarrow \chi \chi$, which is independent of $\epsilon$. Assuming that $\chi$ and $\chi^*$ are chemically coupled, $n_{\chi^*} \sim e^{-\delta / T_\chi} \, n_\chi$. The corresponding thermally-averaged rate is roughly
\begin{equation}
\Gamma_{\chi^* \chi} \simeq \, e^{- \delta / T_\chi} \, n_\chi \, \frac{2^{5/2} \pi \, \alpha_D^2 \, m_\chi^{3/2}}{m_{A^\prime}^4} ~ \text{max} \left(\frac{2}{\pi} \, T_\chi \, , \, \delta \right)^{1/2}
~.
\end{equation}
We denote the DM temperature at which $\Gamma_{\chi^* \chi} \sim H$ as $T_{\chi \chi^*}$, which we evaluate numerically. Since $\chi^* e \leftrightarrow \chi e$ also enforces $\chi - \chi^*$ chemical equilibrium, the DM temperature of $\chi - \chi^*$ chemical decoupling is $T_{\chi, \text{chem}} \sim \text{min}(T_{\text{kin}}, T_{\chi \chi^*})$ and is thus controlled by whichever process, $\chi^* e \leftrightarrow \chi e$ or $\chi^* \chi^* \leftrightarrow \chi \chi$, decouples last. Assuming that $\chi$ makes up the dominant component of the DM abundance at late times, the number density $n_\chi$ in the expression above corresponds to $n_\chi \sim T_{\text{eq}} \, T^3 / m_\chi$. If $\chi^*$ is cosmologically stable, its late-time fractional abundance is then approximated by
\begin{equation}
f_* \equiv \frac{n_{\chi^*}}{n_\chi + n_{\chi^*}} \simeq e^{-\delta / T_{ \chi, \text{chem}}}
~.
\end{equation}
Ignoring the $m_\chi$-dependence of $T_{\text{kin}} \sim m_e$ and taking $T_{\chi, \text{chem}} \lesssim \delta$, $f_*$ then scales as
\begin{align}
f_* &\sim \frac{m_\chi^{7/2}}{\alpha_D^2 \, (T_{\chi, \text{chem}} \, \delta)^{1/2}} ~ \frac{(m_{A^\prime} / m_\chi)^4}{m_e^{1/2} \, T_{\text{eq}} \, m_{\text{pl}}}
\nonumber \\
&\sim \text{few} \times 10^{-4} \times \left( \frac{m_\chi}{100 \ \text{MeV}} \right)^{7/2} \left( \frac{\alpha_D}{0.5} \right)^{-2} \left( \frac{\delta}{\text{keV}} \right)^{-1} \left( \frac{T_{\chi, \text{chem}}}{\delta} \right)^{-1/2} \left( \frac{m_{A^\prime} / m_\chi}{3} \right)^{4}
\end{align}
for $m_\chi \sim \mathcal{O}(\text{MeV})$, where the ratio $T_{\chi, \text{chem}} / \delta \lesssim 1$ grows logarithmically with increasing $m_\chi$.
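The decoupling condition $\Gamma_{\chi^* \chi} \sim H$ is also simple to scan numerically. The following rough sketch is our own; for simplicity it assumes kinetic equilibrium ($T_\chi = T$) throughout, neglecting the $T_\chi \sim T^2/T_{\text{kin}}$ cooling after kinetic decoupling, and it therefore overestimates $T_{\chi,\text{chem}}$ and hence $f_*$:
\begin{verbatim}
import numpy as np

M_PL, T_EQ = 1.22e22, 0.8e-6   # Planck mass, equality temperature (MeV)

def hubble(T, gstar=3.4):
    return 1.66 * np.sqrt(gstar) * T**2 / M_PL

def gamma_chichi(T, m_chi, delta, alpha_D, m_Ap):
    n_chi = T_EQ * T**3 / m_chi                  # n_chi ~ T_eq T^3 / m_chi
    vterm = np.maximum(2 * T / np.pi, delta)**0.5
    return (np.exp(-delta / T) * n_chi * 2**2.5 * np.pi
            * alpha_D**2 * m_chi**1.5 / m_Ap**4 * vterm)

m_chi, delta, alpha_D = 100.0, 1e-3, 0.5         # 100 MeV, 1 keV splitting
T = np.logspace(-6, 2, 20000)
ok = gamma_chichi(T, m_chi, delta, alpha_D, 3 * m_chi) > hubble(T)
T_chem = T[np.argmax(ok)]                        # first T where Gamma > H
print(np.exp(-delta / T_chem))                   # rough upper estimate of f_*
\end{verbatim}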
For DM masses well below the GeV-scale, the remaining fractional abundance of excited states $\chi^*$ is typically very small, $f_* \ll 1$. The dependence of $f_*$ on various parameters is shown in Fig.~\ref{fig:f2eps}, in which we vary $\epsilon$ as a function of $\alpha_D$ and $m_\chi$ by fixing the late-time abundance of $\chi$ to the observed DM energy density. From Eq.~(\ref{eq:yfreezeout}), this corresponds to
\begin{equation}
\epsilon \sim 10^{-4} \times \left( \frac{m_\chi}{100 \ \text{MeV}} \right) \left( \frac{m_{A^\prime} / m_\chi}{3} \right)^{2} \left( \frac{\alpha_D}{0.5} \right)^{-1/2}
~.
\end{equation}
This ``thermal target'' is also shown as the black contours in the $\epsilon-m_\chi$ parameter space of Fig.~\ref{fig:epsmx}. As discussed above, this is driven by $\chi \chi^* \leftrightarrow ff$ freeze-out, and in our numerical analysis, we include the effect of hadronic resonances and final states~\cite{Izaguirre:2015zva}. For smaller $\alpha_D$ or larger $m_\chi$, the ability to deplete the primordial $\chi^*$ abundance diminishes, leading to an increased primordial excited state fraction $f_*$. For $m_\chi \gtrsim \text{few} \times \text{GeV}$, $\chi^*$ constitutes an $\mathcal{O}(1)$ fraction of the DM density.
\begin{figure}\begin{center}
\begin{adjustwidth}{-0.6in}{-0.75in}
\includegraphics[scale=0.64]{thermal_iDM_delta3.pdf}
\includegraphics[scale=0.64]{thermal_iDM_delta100.pdf}
\end{adjustwidth}
\caption{As in Fig.~\ref{fig:epsmx}, but now in the $\alpha_D - m_\chi$ plane. At each point in parameter space, the value of $\epsilon$ is fixed such that $\chi$ freezes out with an abundance that is consistent with the observed dark matter energy density, as in Fig.~\ref{fig:f2eps}.}\label{fig:alphamx}\end{center}
\end{figure}
For mass splittings $\delta \gtrsim 2 m_e$, the dark-photon-induced decay $\chi^* \to \chi \, e^+ e^-$ may deplete the remaining $\chi^*$ abundance to completely negligible levels~\cite{Finkbeiner:2009mi,Batell:2009vb}. However, for $\delta \ll m_e$, in the absence of an additional dipole-type interaction, the only kinematically allowed decays are $\chi^* \to \chi + 3 \gamma$ and $\chi^* \to \chi + 2 \nu$, with a corresponding lifetime that is longer than the age of the universe. In this case, the primordial $\chi^*$ fraction generically survives to late times, potentially giving rise to detectable signatures in cosmological and terrestrial observations.
Near the time of recombination, the primordial abundance of $\chi^*$ facilitates late time coannihilations to SM particles, depositing energy into the SM plasma and leading to small distortions in the CMB anisotropies. This process is suppressed by the small residual fraction $f_*$, but is compensated by the large number density of $\chi$ for $m_\chi \ll \text{GeV}$. The resulting energy injected into the SM plasma is strongly constrained by Planck observations, leading to $f_*\, \sigma v \lesssim \text{pb} \times (m_\chi / 60 \ \text{GeV})$ for electromagnetic final states~\cite{Aghanim:2018eyx}. The corresponding cross section for coannihilations to leptonic final states is
\begin{equation}
\sigma v (\chi \chi^* \to \ell \ell) \simeq \frac{16 \pi \, \alpha \, \alpha_D \, \epsilon^2 \, m_\chi^2}{m_{A^\prime}^4}
~.
\end{equation}
The resulting Planck bound is shown in gray in Figs.~\ref{fig:f2eps}-\ref{fig:alphamx}. As shown explicitly in Fig.~\ref{fig:f2eps}, this is most constraining for $m_\chi \sim \ \text{GeV}$, in which case $f_* \gtrsim \mathcal{O}(10^{-1})$. For much larger masses, $f_*$ saturates at $f_* \sim \mathcal{O}(1)$, while the DM number density falls as $\sim 1 / m_\chi$, leading to a weakening of the bounds.
The consideration of DM halo shapes and merging galaxy clusters constrain the rate for DM elastic scattering to be $\sigma (\chi \chi \to \chi \chi ) / m_\chi \lesssim 10 \ \text{cm}^2 / \text{g}$~\cite{Tulin:2017ara}. Such limits therefore restrict large values of $\alpha_D$ and are especially relevant at small DM masses, as shown in gray in Figs.~\ref{fig:f2eps} and \ref{fig:alphamx}. In scenarios involving $f_* \ll 1$ and mass splittings greater than the typical DM kinetic energy, the dominant process at small masses arises from elastic scattering $\chi \chi \to \chi \chi$ that is radiatively induced by $A^\prime$ exchange (see, e.g., Ref.~\cite{Berlin:2018jbm}).
The presence of a long-lived primordial $\chi^*$ component can also lead to signals in terrestrial direct detection experiments. In particular, if $\chi^*$ makes up a subcomponent of the galaxy's DM halo, down-scattering off of electrons $\chi^* e \to \chi e$ leads to a mono-energetic recoil energy of approximately $E_R \simeq \mu \, \delta/ m_e$ (provided that the mass splitting is greater than the typical kinetic energy, $\delta \gtrsim \mu \, v^2$), where $v$ is the $\chi^*$ velocity. In the limit that $\delta \ll m_\chi \ll m_{A^\prime}$, the differential cross section for down-scattering is
\begin{equation}
\frac{d \sigma}{d E_R} \simeq \frac{8 \pi \, \alpha \, \alpha_D \, \epsilon^2 \, m_e}{m_{A^\prime}^4 v^2}
~.
\end{equation}
At the level of our ``free'' electron approximation, the expected signal rate $R$ is then given by
\begin{equation}
\label{eq:PrimRate1}
R \equiv \frac{dN_{\text{sig}}}{dt \, dM_{\text{det}}} \simeq \text{effic.} \times \frac{N_A \, Z_{\text{free}}}{A \ \text{g}} \, \frac{f_* \, \rho_\chi}{m_\chi} \, \, \frac{8 \pi \, \alpha \, \alpha_D \, \epsilon^2 \, m_e}{m_{A^\prime}^4} \, \int_0^\infty dE_R \, \int_{v_\min}^\infty dv ~ \frac{f_{\text{halo}}(v)}{v}
~,
\end{equation}
where ``effic.'' accounts for the detector efficiency, $\rho_\chi \simeq 0.4 \ \text{GeV} / \text{cm}^3$ is the local DM energy density, $v_\min \simeq |m_e E_R - \mu \delta| / (\mu \sqrt{2 m_e E_R})$ is the minimum kinematically allowed $\chi^*$ velocity, $M_{\text{det}}$ is the detector mass, $N_A$ is Avogadro's number, $Z_{\text{free}}$ is the number of electrons in the $n = 4, 5$ orbitals of xenon, and $A$ is the atomic mass. Approximating the halo velocity distribution $f(v)$ as Maxwellian with dispersion $v_0 \ll \sqrt{\delta / \mu}$, the recoil energy and velocity integrals reduce to
\begin{equation}
\int_0^\infty dE_R \, \int_{v_\min}^\infty dv ~ \frac{f_{\text{halo}}(v)}{v} \simeq \frac{(2 \mu)^{3/2} \delta^{1/2}}{m_e}
~.
\end{equation}
Using this in Eq.~(\ref{eq:PrimRate1}) then leads to
\begin{equation}
R \sim 10^6 \ \left( \text{tonne-year} \right)^{-1} \times f_* \, \Big( \frac{\delta}{\text{keV}} \Big)^{1/2} \Big( \frac{y}{10^{-10}} \Big) \Big( \frac{m_\chi}{100 \ \text{MeV}} \Big)^{-5}
~.
\end{equation}
Hence, even a very subdominant primordial fraction $f_* \ll 1$ can potentially lead to detectable rates.
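In numerical form, the scaling estimate above can be packaged as follows (our own sketch; the function name and benchmark inputs are ours):
\begin{verbatim}
def primordial_rate(f_star, delta_keV, y, m_chi_MeV):
    """Down-scattering events per tonne-year, following the estimate above."""
    return (1e6 * f_star * delta_keV**0.5
            * (y / 1e-10) * (m_chi_MeV / 100.0)**-5)

# Even f_* ~ 1e-4 gives O(100) events per tonne-year at this benchmark.
print(primordial_rate(f_star=1e-4, delta_keV=1.0, y=1e-10, m_chi_MeV=100.0))
\end{verbatim}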
In Figs.~\ref{fig:epsmx} and \ref{fig:alphamx}, we highlight regions of parameter space in which an excited component of the DM energy density leads to electron down-scattering event rates at XENON1T, ranging from $(1-100) / (\text{tonne-year})$. As shown in Fig.~\ref{fig:f2eps}, larger $\alpha_D$ leads to a smaller primordial $\chi^*$ abundance, thus suppressing the down-scattering rate in terrestrial detectors. Also shown are regions excluded from recent missing energy/momentum searches at the low-energy accelerator experiments NA64 and BaBar~\cite{Lees:2017lec,NA64:2019imj}, as well as the projected sensitivities of a search for similar signals at LDMX and Belle II~\cite{Izaguirre:2014bca,Battaglieri:2017aum,Akesson:2018vlm,Berlin:2018bsc}.
Fig.~\ref{fig:epsmx} focuses on the $\epsilon-m_\chi$ parameter space. In a standard cosmology, $\chi$ freezes out via $\chi \chi^* \leftrightarrow f f$ with an abundance consistent with the observed DM energy density along the black contours. Above or below these contours, $\chi$ is a subdominant DM component or is overabundant assuming a standard cosmology. For concreteness, when calculating the signal event rate we take $\chi$ to make up all of the DM throughout all of the parameter space shown. In Fig.~\ref{fig:alphamx}, $\epsilon$ is varied consistently in the $\alpha_D - m_\chi$ plane such that $\chi$ makes up all of the DM energy density.
Regions in excess of $100 / \text{tonne-year}$ are constrained by a recently reported search for electron recoils in XENON1T~\cite{Aprile:2020tmw}.
Assuming that thermal decoupling of $\chi \chi^* \leftrightarrow f f$ sets the late-time $\chi$ abundance, scenarios in which the ground state $\chi$ makes up a subdominant component of the DM lead to increasingly larger signal rates for $\chi^*$ down-scattering in terrestrial detectors. To see this, note that if $\chi \chi^* \leftrightarrow f f$ decouples at temperatures much greater than $\delta$, then $f_\chi \propto 1/(\alpha_D \, \epsilon^2)$, where $f_\chi \equiv n_\chi / n_{_{\text{DM}}} \leq 1$ is the DM fraction composed of $\chi$. If the decoupling of $\chi^* \chi^* \leftrightarrow \chi \chi$ is responsible for setting the $\chi^*$ abundance at much later times, then $f_* \propto 1/(\alpha_D^2 \, f_\chi)$. The down-scattering signal rate at direct detection experiments is then controlled by the product $f_* f_\chi \alpha_D \epsilon^2 \propto f_* \propto 1/(\alpha_D^2 f_\chi)$. Hence, smaller $\chi$ abundances imply larger signals in such cosmologies.
\section{Excited States from the Sun}
\label{sec:sun}
When the excited state $\chi^*$ is no longer stable due to the existence of, e.g., an electromagnetic dipole transition, the primordial abundance of $\chi^*$ can be severely depleted if the decay lifetime is much shorter than the age of the universe. In this case, the production and successful detection of $\chi^*$ at direct detection experiments are only possible with a source of up-scattering.
For a decay lifetime that is much longer than 1 AU divided by the dark matter velocity, the Sun can act as a source of $\chi^*$. The Sun has a high internal temperature, which we take to be $T_\odot = 1.1\, \textrm{keV}$, and is capable of up-scattering the DM particles that pass through it to $\sim \textrm{keV}$ energies. Gravitational focusing by the Sun's deep potential well also enhances the flux of DM particles incident on the solar core.
The idea to use ``reflected'' DM from the Sun was proposed in Ref.~\cite{An:2017ojc} in the context of elastic scattering. However, the reflected rates and energies are typically sufficiently low that terrestrial experiments are more sensitive to the background primordial flux. This is not the case for inelastic WIMPs. For light WIMPs, even $\delta \sim \, 100 \ \text{eV}$ can be inaccessible for up-scattering in a terrestrial experiment from the standard DM flux. Thus, {\em any} production in the Sun of an excited state which is suitably long-lived can produce a signal in a terrestrial experiment which would otherwise be absent.
To calculate the rate, we consider the problem as follows. Infalling DM particles in the core of the Sun have velocities $v \sim v_{\text{esc}} = 5 \times 10^{-3} \, c = 1500 \text{ km sec}^{-1}$, the escape velocity at the surface of the core. This is a high velocity compared to typical halo DM, $v_0 \sim 10^{-3} \, c$. However, the electrons in the Sun are moving with velocity $v_e \sim \sqrt{2T/m_e}\sim 0.05 \, c = 1.5 \times 10^4\text{ km sec}^{-1}$. Since $v_{\text{esc}} \ll v_e$, we should think of solar up-scattering as thermal electrons bombarding DM particles that are essentially at rest. The quantity of interest is therefore the steady-state density of DM in the Sun, and not the flux of DM on the Sun.
The number density $n_{\chi,\odot}$ of $\chi$ in the core of the Sun is simple to determine. Infalling DM is gravitationally focused, enhancing the solar cross section by a factor $1+v_{\text{esc}}^2/v_{0}^2$. Inside the Sun, this appears as an enhancement of the overall DM number density. On the other hand, the higher velocity spreads the DM particles more thinly due to conservation of flux, suppressing the density by $v_{\text{esc}}/v_0$. Thus, we have $n_{\chi,\odot} \simeq n_0 \times v_{\text{esc}}/v_0$.\footnote{Precisely, the focusing is true for a $1/r$ potential. Inside the Sun, this is no longer the case. However, approximately 50\% of the mass of the Sun is contained inside of $r<r_\odot/4$. Thus, we consider the $1/r$ potential to be reasonable down to these distances at the level of accuracy we have here.} The flux $\Phi$ of $\chi^*$ on Earth is then given by
\begin{equation}
\Phi= n_e\vev{\sigma_{\chi \to \chi^*} v_e}\times \frac{n_{\chi,\odot} V_\odot}{4 \pi (1 \text{ AU})^2} \,,
\end{equation}
where $\langle \sigma_{\chi \to \chi^*} v_e \rangle$ is the velocity-averaged cross section for $\chi e^- \to \chi^* e^-$, and $V_\odot$ is the volume of the Sun's core. We also have the derivative of the flux with respect to kinetic energy $K_{\chi^*}$ of the up-scattered $\chi^*$, written explicitly as
\begin{equation}
\frac{d\Phi}{dK_{\chi^*}} = n_e\left<\frac{d\sigma_{\chi \to \chi^*}}{dK_{\chi^*}} v_e\right> \times \frac{n_{\chi,\odot} V_\odot}{4 \pi (1 \text{ AU})^2} \, .
\label{eq:flux_per_energy}
\end{equation}
We take the solar parameters to be $V_\odot = 2.2 \times 10^{31} \text{ cm}^3$ and $n_e = 2 \times 10^{25} \text{ cm}^{-3}$, which is approximately the mean electron density in the solar core \cite{Bahcall:2000nu}. To get a sense of how large this flux is, we can compare it to the background flux of DM particles in the halo, $\Phi_0 = n_0 v_0$:
\begin{multline}
\frac{\Phi}{\Phi_0} \simeq 5 \times 10^{-8} \left( \frac{n_e}{2 \times 10^{25} \text{ cm}^{-3}} \right) \left( \frac{220 \text{ km/s}}{v_0} \right) \\
\times \left( \frac{\langle \sigma_{\chi \to \chi^*} v_e \rangle}{10^{-30} \text{ cm}^3 \text{ s}^{-1}} \right) \left( \frac{V_\odot}{2.2 \times 10^{31} \text{ cm}^3} \right) \left( \frac{v_{\text{esc}}/v_0}{7.0} \right) \,.
\label{eq:solar_flux_ratio}
\end{multline}
A simple expression for $\langle \sigma_{\chi \to \chi^*} v_e \rangle$ can be found in the nonrelativistic limit and with $\delta \ll m_e, m_\chi$, since the electron velocity distribution is Maxwellian. The differential scattering cross section in this limit is
\begin{equation}
\frac{d \sigma_{\chi \to \chi^*}}{dK_{\chi^*}} = \frac{\overline{\sigma}_e m_\chi}{2 \mu_{\chi e}^2 v_e^2} \,,
\end{equation}
where $K_{\chi^*}$ is the kinetic energy of the up-scattered $\chi^*$, and $\overline{\sigma}_e$ is defined in Eq.~\eqref{eq:DPxsec}. The velocity-averaged cross section is then
\begin{equation}
\langle \sigma_{\chi \to \chi^*} v_e \rangle = \int_0^\infty dK_{\chi^*} \int_{v_\min}^\infty dv_e \, f_{\text{MB}}(v_e) \frac{d\sigma}{dK_{\chi^*}} v_e \,,
\label{eq:sigmave}
\end{equation}
where $v_{\text{min}}$ is the minimum electron velocity at fixed $K_{\chi^*}$ given by the kinematics of the up-scattering,
\begin{equation}
v_{\text{min}} = \frac{1}{\sqrt{2 m_\chi K_{\chi^*}}} \left(\frac{m_\chi K_{\chi^*}}{\mu_{\chi e}} + \delta \right) \,,
\end{equation}
and $f_{\text{MB}}(v_e)$ is the Maxwell-Boltzmann velocity distribution,
\begin{equation}
f_{\text{MB}}(v_e) = 4 \pi v_e^2 \left(\frac{m_e}{2 \pi T_\odot}\right)^{3/2} \exp \left(- \frac{m_e v_e^2}{2 T_\odot}\right) \,.
\end{equation}
The integrals in Eq.~(\ref{eq:sigmave}) can be evaluated analytically, giving
\begin{align}
\langle \sigma_{\chi \to \chi^*}v_e \rangle &= \overline{\sigma}_e \sqrt{\frac{2 m_e}{\pi T_\odot}} \frac{\delta}{\mu_{\chi e}} \exp \left( -\frac{m_e \delta}{2 \mu_{\chi e} T_\odot} \right) K_1 \left( \frac{m_e \delta}{2 \mu_{\chi e} T_\odot} \right) \\
&\simeq \overline{\sigma}_e \begin{cases}
\sqrt{\frac{8 T_\odot}{\pi m_e}} \,, & 2 \delta / \mu_{\chi e} \ll 2T_\odot / m_e \,,\\
\sqrt{\frac{2\delta}{\mu_{\chi e}}} \exp \left(- \frac{m_e \delta}{\mu_{\chi e} T_\odot}\right) \,, & 2 \delta / \mu_{\chi e} \gg 2T_\odot / m_e \, ,
\end{cases}
\label{eq:sigmave_analytic}
\end{align}
where we have expanded the Bessel function $K_1$ assuming a large argument for the final approximation.
The factor of $\sqrt{2 \delta/\mu_{\chi e}}$ is a characteristic velocity of the up- and down-scattering process, with the exponential suppression coming from the fact that only electrons with $v_e \gtrsim \sqrt{2 \delta/\mu_{\chi e}}$ are capable of up-scattering $\chi$. Consequently, the cross section is not suppressed exponentially compared to the elastic cross section if $\delta \sim T_\odot$. A somewhat surprising fact, however, is that inelasticity benefits the signal tremendously. Ordinarily, the DM can only carry away a fraction $\sim \mu^2/(m_\chi m_e)\sim m_e/m_\chi$ of the energy. However, because of the inelasticity, $\chi^*$'s can exit the Sun with a substantial amount of energy to deposit in the detector. Thus, although the scattering rate is not significantly changed from the elastic case, the {\em detectable} signal is enhanced dramatically.
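The closed form above is convenient to evaluate directly. A minimal sketch (our own; the function name is ours, results are in units of $\overline{\sigma}_e$ and of $c$, and all energies are in keV):
\begin{verbatim}
import numpy as np
from scipy.special import k1

def sigma_v_avg(m_chi, delta, T_sun=1.1, m_e=511.0):
    """<sigma v>/sigma_e_bar, with Bessel argument x = m_e delta/(2 mu T)."""
    mu = m_chi * m_e / (m_chi + m_e)
    x = m_e * delta / (2 * mu * T_sun)
    return (np.sqrt(2 * m_e / (np.pi * T_sun)) * (delta / mu)
            * np.exp(-x) * k1(x))

# m_chi = 4 MeV, delta = 3.5 keV: a few x 10^-3, close to the asymptotic
# value used in the numerical benchmark below.
print(sigma_v_avg(4000.0, 3.5))
\end{verbatim}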
While primordial down-scatters yield a narrow electron recoil spectrum in direct detection experiments centered on the splitting $\delta$, the $\chi^*$ flux from the Sun is broadened by the scattering off the thermal distribution of electrons; the rate per energy of $\chi^*$ particles produced for $m_\chi = 4 \, \text{MeV}$ thermal DM with a splitting of $\delta = 3.5 \, \textrm{keV}$ is shown in Fig.~\ref{fig:solarspectrum}.
\begin{figure}\begin{center}
\includegraphics[width=0.6\textwidth]{solar_DM_spectrum.pdf}
\caption{The flux $\Phi$ of $\chi^*$ particles up-scattered by electrons per energy $K$, assuming $m_\chi = 4 \, \text{MeV}$, $\delta = 3.5 \, \textrm{keV}$ and a thermal annihilation cross section.}\label{fig:solarspectrum}\end{center}
\end{figure}
With the DM flux per energy $d \Phi/ dK_{\chi^*}$ in Eq.~\eqref{eq:flux_per_energy}, we can write the electron recoil spectrum per detector mass per time $dR/dE_R$ observed at a direct detection experiment as
\begin{equation}
\frac{dR}{dE_R} = \frac{N_T}{M_{\text{det}}} \int dK_{\chi^*} \, \frac{d\Phi}{dK_{\chi^*}} \frac{d \sigma_{\chi^* \to \chi}}{dE_R} \,,
\end{equation}
where $E_R$ is the electron recoil energy, $\sigma_{\chi^* \to \chi}$ is the down-scattering cross section, $N_T$ is the number of targets in the detector, and $M_{\text{det}}$ is the detector mass. This expression can be evaluated numerically, but we can gain significant analytic understanding of $R$, the expected number of events per detector mass per time, at a direct detection experiment by assuming the nonrelativistic limit and $\delta \ll m_e,m_\chi$ once again. In this limit, the down-scattering cross section can be written in a particularly simple form:
\begin{equation}
\frac{d \sigma_{\chi^* \to \chi}}{dE_R} \simeq \frac{\overline{\sigma}_e m_e}{2 \mu_{\chi e}^2 v_{\chi^*}^2} \,.
\end{equation}
The cross section of scattering for a given DM velocity $v_{\chi^*}$ can be obtained by integrating this expression up to the kinematic limit. In the elastic limit, this simply gives $\overline{\sigma}_e$; however, the existence of the splitting $\delta$ can extend this kinematic limit significantly, giving $\sigma_{\chi^* \to \chi} \simeq F \overline{\sigma}_e$, where following Eq.~\eqref{eq:kinenhance} we have
\begin{equation}
F \equiv \sqrt{1 + \frac{2 \delta}{\mu_{\chi e} \langle v_{\chi^*}^2 \rangle}} \,,
\end{equation}
with $\langle v_{\chi^*}^2 \rangle$ the mean square velocity of $\chi^*$ from the sun; examining the kinematics of the $\chi$-electron scattering shows that
\begin{equation}
\langle v_{\chi^*}^2 \rangle = \frac{8 \mu_{\chi e}^2 T_\odot}{m_\chi^2 m_e} \,.
\end{equation}
$F$ represents an enhancement with respect to the elastic scattering cross section, which is significant whenever the velocity scale $2\delta/\mu_{\chi e} \gg v_{\chi^*}^2$. With this result, we can write $R$ as
\begin{equation}
R \simeq \frac{N_T}{M_{\text{det}}} \Phi F \overline{\sigma}_e \,.
\end{equation}
Combining this with the expression for the ratio of the solar flux to the DM halo flux in Eq.~\eqref{eq:solar_flux_ratio} and the analytic estimate for $\langle \sigma_{\chi \to \chi^*} v_e \rangle$ in Eq.~\eqref{eq:sigmave_analytic}, we obtain the following numerical estimate of $R$ for a xenon experiment in the solar inelastic DM model:
\begin{multline}
R \simeq 23\, \left(\text{tonne-year}\right)^{-1}\left( \frac{n_e}{2 \times 10^{25} \text{cm}^{-3}} \right) \left( \frac{\overline{\sigma}_e}{10^{-38} \text{ cm}^2 } \right)^2 \left( \frac{V_\odot}{2.2 \times 10^{31} \text{ cm}^3} \right) \left( \frac{v_{\text{esc}} / v_0}{7.0}\right) \\
\times \left( \frac{\rho_0}{0.3 \text{ GeV cm}^{-3}} \right) \left(\frac{4\text{ MeV}}{m_\chi} \right) \left( \frac{F}{8.0} \right) \left( \frac{\sqrt{2\delta/\mu_{\chi e}} \exp[-m_e \delta/\mu_{\chi e} T_\odot]}{3 \times 10^{-3}} \right) \,,
\label{eq:solar_rate_estimate}
\end{multline}
where $\rho_0$ is the local DM mass density. The values shown for comparison are either exactly the solar parameters adopted for our calculations, or are close to the actual values of these parameters when $m_\chi = 4 \, \text{MeV}$.
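For convenience, the parametric estimate of Eq.~\eqref{eq:solar_rate_estimate} can be packaged as follows (our own sketch; the function name and default inputs are ours, with each factor normalized to the benchmark values quoted above):
\begin{verbatim}
def solar_rate(n_e=2e25, sigma_e=1e-38, V_sun=2.2e31, vesc_over_v0=7.0,
               rho0=0.3, m_chi_MeV=4.0, F=8.0, boltz=3e-3):
    """Events per tonne-year from Eq. (solar_rate_estimate)."""
    return (23.0 * (n_e / 2e25) * (sigma_e / 1e-38)**2 * (V_sun / 2.2e31)
            * (vesc_over_v0 / 7.0) * (rho0 / 0.3) * (4.0 / m_chi_MeV)
            * (F / 8.0) * (boltz / 3e-3))

print(solar_rate())   # ~23 per tonne-year at the benchmark point
\end{verbatim}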
Armed with this analytic understanding, we are now ready to examine the numerical results. In Fig.~\ref{fig:xesolarspec}, we show an expected solar inelastic DM spectrum $dR/dE_R$ at XENON1T, together with the latest measurement of the event rate in the $(0-30) \ \text{keV}$ range and the experiment's background model~\cite{Aprile:2020tmw}. Here, we have chosen parameters that are consistent with a thermal inelastic DM model, with $m_\chi = 4 \, \text{MeV}$, $m_{A^\prime}/m_\chi = 3$ and $\delta = 3.5 \, \textrm{keV}$; these parameters lead to approximately 60 events per tonne-year at a xenon detector. Because the DM flux is generated by scattering with thermal electrons in the solar core, the expected spectrum detected is significantly broader than the line-like signal expected from the primordial model.
\begin{figure}\begin{center}
\includegraphics[width=0.6\textwidth]{xesolarspec}
\caption{Detected electron recoil spectrum in the XENON1T experiment. We show the background model $B_0$ (gray) provided by Ref.~\cite{Aprile:2020tmw}, together with $B_0$+signal for thermal inelastic DM in the solar up-scattering scenario with $m_\chi = 3.7\, \text{MeV}$, $\delta = 3.5 \, \textrm{keV}$ (blue), and for the primordial excited-state scenario with $m_\chi = 30 \, \text{MeV}$, $\delta = 3 \, \textrm{keV}$ (red).}
\end{figure}
Fig.~\ref{fig:solar_thermal_delta_fchi} (left) shows the expected rate $R$ at XENON1T as a function of the DM mass $m_\chi$ and the splitting $\delta$. For small splittings $\delta \ll T_\odot$, the enhancement in the down-scattering rate encoded in $F$ is close to 1, leading to a small rate. As the splitting increases to $\delta \sim \, \textrm{keV}$, the enhancement becomes significant, and event rates of 100 per tonne-year can be expected for $m_\chi \lesssim 5$ MeV. Once $\delta \gg T_\odot$, however, few electrons in the solar core have sufficient energies to up-scatter $\chi$, leading to the exponential suppression shown in Eq.~\eqref{eq:solar_rate_estimate}. For a thermal model, since $\overline{\sigma}_e \propto \langle \sigma v \rangle_{\text{ann}} \mu_{\chi e}^2/m_\chi^2$ and for a sufficiently large splitting, $F \propto m_\chi$, we obtain $R \propto \delta e^{-m_e \delta/(\mu_{\chi e} T_\odot)} m_\chi^{-4} $, leading to a power law drop in $R$ as $m_\chi$ increases, and an exponential decrease as $\delta$ increases. There are currently no other experimental constraints in this range of parameters, but LDMX~\cite{Akesson:2018vlm,Berlin:2018bsc} will be sensitive to this entire space.
\begin{figure*}[!htbp]
\begin{adjustwidth}{-0.7in}{-0.45in}
\centering
\includegraphics[scale=0.58]{thermal_iDM_delta}
\includegraphics[scale=0.58]{thermal_iDM_fchi}
\end{adjustwidth}
\caption{Expected event rate at XENON1T for the solar thermal inelastic DM model (blue), as a function of the DM mass $m_\chi$ and the splitting $\delta$ (left) and as a function of $m_\chi$ and the thermal DM mass abundance $f_\chi$ (right). Current limits from NA64~\cite{NA64:2019imj} (gray) as well as the future reach of LDMX~\cite{Akesson:2018vlm,Berlin:2018bsc} (red, dashed) are also shown. Note that the entire $m_\chi$--$\delta$ parameter space will be probed by LDMX.}
\label{fig:solar_thermal_delta_fchi}
\end{figure*}
Fig.~\ref{fig:solar_thermal_delta_fchi} (right) shows a similar result but in the $m_\chi$--$f_\chi$ plane, where $f_\chi$ is the fractional mass abundance of $\chi$, which we assume to be thermally produced. Under this assumption, $\rho_\chi \propto f_\chi$ and $\langle \sigma v \rangle \propto 1/f_\chi$, and so the overall rate at a direct detection experiment grows as $1/f_\chi$, making subdominant components easier to detect. A similar argument as before gives $R \propto f_\chi^{-1} m_\chi^{-4}$, so that lines of constant event rate on the $m_\chi$--$f_\chi$ plane follow $f_\chi \propto m_\chi^{-4}$. XENON1T can probe thermal iDM through solar scattering at all abundances for masses below 10 MeV when $\delta \sim 3 \, \textrm{keV}$. Other constraints on this plane include the NA64 experiment~\cite{NA64:2019imj}, which has ruled out all thermal dark matter below $100\,\text{MeV}$ with $f_\chi \lesssim 0.01$, and the future LDMX experiment~\cite{Akesson:2018vlm,Berlin:2018bsc}, which probes a similar parameter space to XENON1T.
\begin{figure*}[!htbp]
\begin{adjustwidth}{-0.8in}{-0.4in}
\centering
\includegraphics[scale=0.58]{solar_iDM_alpha_5e-1}
\includegraphics[scale=0.58]{solar_iDM_thermal_alphaD_mchi}
\end{adjustwidth}
\caption{Expected event rate at XENON1T for the solar inelastic DM model (blue), as a function of $m_\chi$ and $\epsilon$ \textit{without} assuming thermal production (left), and as a function of $m_\chi$ and $\alpha_D$ \textit{with} thermal production (right). Current constraints from NA64~\cite{NA64:2019imj}, BaBar~\cite{Lees:2017lec} and DM self-interactions~\cite{Tulin:2017ara} are shown in gray, with the future reaches of Belle II~\cite{Izaguirre:2015zva,Battaglieri:2017aum} (green, dashed) and LDMX~\cite{Akesson:2018vlm,Berlin:2018bsc} (red, dashed) displayed in both plots.}
\label{fig:solar_eps_alpha}
\end{figure*}
In Fig.~\ref{fig:solar_eps_alpha} (right), we consider the $m_\chi$--$\alpha_D$ plane for thermally produced dark matter with $f_\chi = 1$. In this plane, the rate does not depend on $\alpha_D$, since $\epsilon$ is always chosen to hold $\langle \sigma v \rangle_{\text{ann}}$ approximately constant. Once again, we see the relation $R \propto m_\chi^{-4}$. At higher DM masses $m_\chi \gtrsim 30 \, \text{MeV}$, the leading constraints come from current and future $B$-factories, namely BaBar~\cite{Lees:2017lec} and Belle II~\cite{Izaguirre:2015zva,Battaglieri:2017aum}. Self-interaction limits of 10 cm$^2$ g$^{-1}$~\cite{Tulin:2017ara} also place constraints at large values of $\alpha_D$ and small values of $m_\chi$.
Finally, in Fig.~\ref{fig:solar_eps_alpha} (left), we lift the assumption of thermally produced dark matter, but still assume that $\chi$ makes up all of the dark matter through some nonthermal production mechanism that we do not specify. After fixing the coupling $\alpha_D = 0.5$, the mixing parameter $\epsilon$ remains a free parameter; we show the region of parameter space where we expect 1--100 events per tonne-year in XENON1T, as well as existing and future beam dump constraints. Without the thermal dark matter assumption, the choice of $m_{A'} / m_\chi = 3$ means that $\overline{\sigma}_e \propto m_\chi^{-8}$, so that overall the event rate at XENON1T scales as $R \propto \epsilon^4 m_\chi^{-8}$. For this particular choice of mass splittings, we can see that xenon direct detection experiments have the potential to probe the thermal target line up to $m_\chi \sim 20 \, \text{MeV}$, with a reach comparable to that of the future LDMX.
Note that constraints from indirect detection and the CMB power spectrum do not apply to the solar inelastic DM, since we assume that the excited state is completely depleted over cosmological timescales, making the annihilation of $\chi_1$ into Standard Model particles negligible.
It has been argued that there are signs of astrophysical excesses in X-ray spectra at 3.5 keV \cite{Bulbul:2014sua,Boyarsky:2014jta}. DM up-scattering followed by a decay through a dipole has been proposed as an explanation \cite{Finkbeiner:2014sja,DEramo:2016gqz}. In particular, it was argued that a very large dipole could mediate electron-DM scattering in hot gas, yielding an excited state \cite{DEramo:2016gqz}. Here, we revisit the idea of excitation, but mediated by the dark photon interaction.
In Ref.~\cite{DEramo:2016gqz}, the photon flux from the Perseus cluster due to DM excitation is estimated to be
\begin{equation}
\Phi \simeq 10^{-5}\ {\rm sec^{-1}\ cm^{-2}}\times\left(\frac{\text{MeV}}{m_\chi}\right)\left(\frac{\vev{\sigma v}}{10^{-24}\ \rm cm^3 \ sec^{-1}}\right).
\end{equation}
This should be compared to the observed flux from Perseus of around $10^{-5}{\rm \ sec^{-1} \ cm^{-2}}$. A calculation similar to that for the Sun, but using $T=6.8\ \textrm{keV}$, yields $\vev{\sigma v} \simeq 4 \times 10^{-29}\ {\rm cm^3 \ sec^{-1}}$ for $m_\chi = 4\,\text{MeV}$ and $\delta=3.5 \, \textrm{keV}$, several orders of magnitude too low to account for the putative signal. Moreover, for a thermal relic, increasing the cross section decreases the relic abundance, so the expected rate is unchanged even for a subdominant component of DM. Nonetheless, the possibility is intriguing and we leave a detailed study for future work.
\section{Excited States from the Earth}
\label{sec:earth}
Finally, we consider excited states with the shortest lifetimes, which can be populated locally by dark-photon-mediated up-scattering in the Earth. Subsequent electromagnetic decays can yield a detectable signal in parts of parameter space where the scattering process itself is currently unobservable. These classes of `luminous' models have been considered in the context of dipole up-scattering in the Earth \cite{Feldstein:2010su,Eby:2019mgs}, in material near the target \cite{Pospelov:2013nea}, or even in the detector itself \cite{Chang:2010en,Lin:2010sb}.
Unlike in the hot environments of the Sun and the early universe, the relative velocities on Earth are too low for DM-electron scattering to populate splittings on the scale of a keV, so we focus here on nuclear scatterings. Furthermore, the DM must have sufficient mass to kinematically up-scatter at velocities $\sim 10^{-3}\ c$: for $\delta \sim \textrm{keV}$ one must have $m_\chi \gtrsim \text{GeV}$ to scatter without kinematic suppression. At the same time, for $m_\chi \gtrsim 10\ \text{GeV}$, the energy deposited in the detector directly through nuclear recoil can become observable. Thus the range of primary interest for up-scattering followed by electromagnetic de-excitation in the models we are considering is $m_\chi \sim \mathcal{O}(\text{GeV})$.
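The kinematic statement above is easy to check: the minimum DM speed for up-scattering by $\delta$ off a nucleus at rest is $v_{\min} = \sqrt{2\delta/\mu_{\chi N}}$, with $\mu_{\chi N}$ the DM--nucleus reduced mass. The sketch below, with an approximate silicon nucleus mass as an assumed target, shows why the window opens near $m_\chi \sim \text{GeV}$ for $\delta \sim \textrm{keV}$.
\begin{verbatim}
import math

def v_min_over_c(m_chi_gev, m_nuc_gev, delta_kev):
    mu = m_chi_gev * m_nuc_gev / (m_chi_gev + m_nuc_gev)  # GeV
    return math.sqrt(2.0 * delta_kev * 1e-6 / mu)         # keV -> GeV

m_si = 26.2  # approximate silicon nucleus mass in GeV
for m_chi in (0.3, 1.0, 3.0):
    print(m_chi, v_min_over_c(m_chi, m_si, 1.0))
# 0.3 GeV -> v_min ~ 2.6e-3 c  (suppressed for typical halo speeds)
# 1.0 GeV -> v_min ~ 1.4e-3 c  (marginal)
# 3.0 GeV -> v_min ~ 8.6e-4 c  (kinematically open)
\end{verbatim}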
In this mass range, the thermal relic makes up the full DM abundance for $y\sim 10^{-8}$, with a DM-proton scattering cross section of $\sigma_{p} \simeq 2 \times 10^{-37}{\rm cm^2}\left( \frac{\rm GeV}{m_\chi}\right)^2$
in the elastic limit. While an elastic cross section of this magnitude is excluded by CRESST \cite{Abdelhameed:2019hmk}, a splitting $\delta$ produces a kinematic suppression $f_{\text{inel}}$ in the scattering rate of $f_{\text{inel}}\sim10^{-2}-10^{-5}$ for $\textrm{keV}\lesssim\delta \lesssim 2\ \textrm{keV}$ at $m_{\chi}\simeq\text{GeV}$, and $f_{\text{inel}}\sim10^{-1}-10^{-7}$ for $\textrm{keV}\lesssim\delta \lesssim 2.5 \ \textrm{keV}$ at $m_{\chi}\simeq1.2\ \text{GeV}$, with steep sensitivity to the masses and halo parameters. This suppression is important because it also naturally suppresses nuclear-recoil signals in direct detection experiments. For $f_{\text{inel}}\sigma_p \sim 10^{-42}\rm \ cm^2$, for instance, particles of $m_\chi = 3 \, \text{GeV}$ would evade current limits, whereas without the inelastic suppression they would already have been excluded.
Each volume unit in the Earth acts as a source of up-scattered states and generates a local flux at a detector. Considering the Earth as composed of crust, mantle and core, and the scatterings dominated by silicon and iron densities as in Ref.~\cite{Eby:2019mgs}, we find the resulting flux of up-scattered excited states relative to the DM flux is given by
\begin{equation}
\Phi_{*}/\Phi_{\rm DM} \sim \frac{f_{\rm inel} \,\sigma_p}{5 \times10^{-34}\ {\rm cm^2}}\,.
\end{equation}
At splittings close to the kinematic threshold, the flux is further enhanced because the up-scattered states have lower average velocity than that of the DM.
Because the cross sections are proportional to the reduced mass of the system, the electron down-scattering cross section is suppressed by a factor $\sim m_e^2/m_\chi^2 \sim 10^{-6}$. However, for lifetimes long compared to the time $R_E/v$ to traverse the Earth, the decays can produce significant event rates in direct detection experiments. As previously discussed, a lifetime of $100\,\mathrm{s}$ requires the dipole to be suppressed by $\Lambda_d \gtrsim 100\,\text{TeV}$.
The rate per unit mass in XENON1T is approximately,
\begin{equation}
R\sim\frac{60}{{\rm tonne-year}}\left(\frac{f_{\rm inel} }{10^{-5}}\right)\left(\frac{\sigma_p}{10^{-37}\rm cm^2}\right)\left(\frac{\text{GeV}}{m_\chi}\right)\left(\frac{\rm 100\, sec}{\tau}\right)\,.
\end{equation}
Here, the lifetime of 100 seconds allows the entire Earth to act as a source.
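Two quick numerical checks, again only rescaling the benchmark values: the rate formula above, and the Earth-crossing time $R_E/v$ that the lifetime must exceed for the whole Earth to act as a source (taking $v \sim 10^{-3}c$ as a typical halo speed; both reference numbers below are our assumptions).
\begin{verbatim}
R_EARTH = 6.4e6   # Earth radius in m
V_DM    = 3.0e5   # m/s, roughly 1e-3 c (assumed typical halo speed)

def earth_rate(f_inel=1e-5, sigma_p=1e-37, m_chi_gev=1.0, tau_sec=100.0):
    """Events per tonne-year, rescaling the benchmark above."""
    return (60.0 * (f_inel / 1e-5) * (sigma_p / 1e-37)
            * (1.0 / m_chi_gev) * (100.0 / tau_sec))

print(earth_rate())    # ~60 events per tonne-year at the benchmark point
print(R_EARTH / V_DM)  # ~21 s crossing time, well below tau = 100 s
\end{verbatim}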
While the inelastic suppression and unknown lifetime make quantitative expectations very challenging, it is still noteworthy that a thermal relic can naturally give a detectable rate in the $\text{GeV}$ mass range. Moreover, as before, if this is a subdominant component, the increased cross section and decreased abundance will cancel, leaving the overall rate intact. We thus conclude that a thermal relic is quite capable of yielding a photon signal in direct detection experiments, although with no real handle on the precise rate.
The excited states propagate out of the Earth and can decay outside, producing a diffuse X-ray background peaked at the energy of the splitting. As the states are generated in the Earth, the flux of decaying states falls off as $1/r^2$, and the X-ray signal is dominated by the excited DM particles closest to the Earth. The rate is directly proportional to the volumetric rate ${dN}/{dt\,dV}$ in a DM direct detection experiment, yielding a flux of
\begin{equation}
\Phi_{\rm diffuse} \sim 0.1\, \frac{\rm photons}{\rm \ sr \ cm^2 \, sec}\left(\frac{dN/dt\,dV}{10^4\ {\rm m^{-3} \ year^{-1}}}\right)\,.
\end{equation}
The limits are $O(0.1) {\rm \ sr^{-1} \ cm^{-2} \ sec^{-1}}$ in this energy range \cite{moretti2012spectrum}, making current terrestrial detectors more sensitive than X-ray satellites, as long as the decay length is large compared to the Earth radius.
In addition, DM-DM scattering can give rise to a population of excited states which then decay, again giving rise to a potential excess in X-ray spectra \cite{Finkbeiner:2014sja}, with flux comparable to that of Perseus for large enough cross sections,
\begin{equation}
\Phi_{\rm Perseus} \simeq 10^{-5}{\rm \ sec^{-1}\ cm^{-2}}\times\left(\frac{\text{GeV}}{m_\chi}\right)^2\left(\frac{\vev{\sigma v}}{10^{-21}\rm \ cm^3 \ sec^{-1}}\right).
\end{equation}
For GeV-mass DM considered here, we find $\vev{\sigma v} \sim 10^{-21} {\rm \ cm^3 \ sec^{-1}}$, potentially of the right order to source the tentative signal.
\section{Discussion}
\label{sec:discussion}
Models of light dark matter are simple and viable and result in a new class of experimental signatures. For light fermions coupled to a dark photon, CMB constraints naturally point to a pseudo-Dirac class of models. These models come with an excited state that is often swept aside in the discussion of the DM phenomenology. In this paper, we have considered these excited states and their implications.
We have found that far from being a side note, these excited states can offer powerful signatures of this class of DM models. The fact that the scenarios we have considered may explain the putative excess at XENON1T adds to the excitement.
We have investigated three separate scenarios: primordial excitations, excitations in the Sun, and excitations in the Earth. Each of them probes different regions of parameter space and carries different implications for future data and experiments.
For primordial excitations, we have found that the abundance of the excited state typically becomes exponentially suppressed, with excited fractions as small as $f_* \sim 10^{-9}$ arising for light thermal dark matter. We find this is true for splittings over a wide range of $\sim\textrm{keV} - 100\ \textrm{keV}$. Heavier particles with $m_\chi \sim \text{GeV}$ show a less pronounced, but still present, suppression of the excited state abundance. Note that this is quite unlike previous scenarios with heavy DM particles, where often $\chi$ and $\ensuremath{\chi^*}$ are present in roughly equal abundances.
The suppression of the excited state abundance naturally changes the signal rate. Nonetheless, we find that existing and upcoming liquid xenon experiments exclude some regions of parameter space for light dark matter, and are sensitive to much of the remainder. In particular, we find that for thermal relic DM and low values of $\alpha_D$, these scenarios predict in excess of 100 events/tonne/year at a xenon experiment. For larger values of $\alpha_D$, lower rates are possible, but a rate in excess of 1 event/tonne/year is still expected, making future xenon experiments capable of testing much of this remaining parameter space. Much, but not all, of the parameter space will be tested by LDMX and Belle II as well. Remarkably, subdominant components are even {\em more} constrained by experimental searches, as they typically retain a larger primordial excited fraction. A signal here would show up as a fairly narrow line.
For cases where the primordial states are unstable, this will not be a signal. However, local up-scatterings offer promise over a narrow, yet interesting, parameter space.
The Sun is capable of up-scattering light dark matter into the excited state in cases where $\delta\sim T_\odot$. This allows dark matter to carry energy out of the Sun and then deposit it into terrestrial experiments at energies above threshold. For a thermal relic making up all of the dark matter, one can expect detectable rates up to splittings as large as $10 \ \textrm{keV}$ and masses up to $13 \ \text{MeV}$. For subdominant components, the up-scattering rate in the Sun remains constant as the couplings are increased to compensate for the reduced abundance, so the flux of excited states at Earth does not decrease (until the Sun becomes opaque to them). However, the scattering cross section in the detector goes up, and thus again we find direct detection experiments are more sensitive to subdominant components of dark matter, i.e., when $\rho_\chi<\rho_{\text{DM}}$. We find interesting signal rates up to masses of $\sim 50\ \text{MeV}$. This entire parameter space should be tested by LDMX.
An important difference between these two cases is that solar scatters naturally have a measurable line width. With sufficient data, this should be a means of distinguishing these two cases.
For $\sim \text{GeV}$ masses, DM can up-scatter via the dark photon in the Earth. These up-scattered states can then decay via photon emission as in the Luminous Dark Matter proposal. There is a narrow window at the $\text{GeV}$ scale where one can up-scatter at a detectable rate without conflicting with existing nuclear recoil experiments. Such a rate would likely be detectable by future X-ray satellites.
Given the recent claim of an excess of electron events at XENON1T, it is exciting to consider these three scenarios as possible sources. From our results, we believe all three scenarios are capable of producing an excess. All three make concrete predictions for future experiments. Future datasets from liquid Xenon experiments will be able to differentiate the energy spectra of the line shape predicted by the primordial abundance and the Earth up-scattering scenario versus the broader signal expected from solar up-scattering.
All three scenarios require an excited state near 3.5 keV to explain the XENON1T data. Intriguingly, there have been claims of excess X-ray emission from a variety of astrophysical sources in that range. It is interesting to consider if these could be related.
For up-scatters in the Earth from a dark photon, the natural size of the cross section is adequate to explain the Perseus excess. For the lighter, solar up-scattered model, the rate is far short of what is needed. Nonetheless, it remains an intriguing possibility. Conversely, some models that might explain the 3.5 keV line can be constrained by our analyses here.
In summary, we have considered the electromagnetic signals arising from the excited states that are almost inevitable in models of light fermionic dark matter. We find that the presence of these excited states leads to signals which already constrain the parameter space today and provide exciting possibilities for discovery in the future. Any signal from these models is testable in the future, by direct detection and by experiments such as LDMX. With adequate data, the solar up-scatter scenario can be distinguished by spectrum alone.
\acknowledgments{}
We thank Rouven Essig for being Rouven Essig and Ken Van Tilburg for helpful discussions and comments on the manuscript. We are also grateful to Natalia Toro for discussions and insight arising from not-yet published work. AB and MB are supported by the James Arthur Fellowship. HL is supported by the DOE under contract DESC0007968 and the NSF under award PHY-1915409. NW is supported by NSF under award PHY-1915409 and the Simons Foundation. This research made use of the \texttt{IPython}~\cite{PER-GRA:2007}, \texttt{Jupyter}~\cite{Kluyver2016JupyterN}, \texttt{matplotlib}~\cite{Hunter:2007}, \texttt{NumPy}~\cite{numpy:2011}, \texttt{seaborn}~\cite{seaborn}, \texttt{SciPy}~\cite{2020SciPy-NMeth}, and \texttt{tqdm}~\cite{da2019tqdm} software packages.
\usepackage[colorlinks=true,linkcolor=blue, citecolor=blue]{hyperref}
\usepackage{url}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\usepackage{xcolor}
\usepackage{mathtools}
\usepackage{graphicx}
\graphicspath{{./figures/}}
\usepackage[linewidth=1pt]{mdframed}
\usepackage{times}
\usepackage{epsfig}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{amsthm,amsmath,amssymb}
\usepackage{array}
\usepackage{bm,bbm}
\usepackage{enumerate}
\usepackage{multirow}
\usepackage{makecell}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argsup}{arg\,sup}
\newcommand{\mathbb{CP}}{\mathbb{CP}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\pi_{\mathrm{\widetilde{P}}}}{\pi_{\mathrm{\widetilde{P}}}}
\newcommand{\pi_{\mathrm{\widetilde{N}}}}{\pi_{\mathrm{\widetilde{N}}}}
\newcommand{\mathop{\mathbb{E}}\limits_{(\boldsymbol{x},y) \sim \mathcal{D}}}{\mathop{\mathbb{E}}\limits_{(\boldsymbol{x},y) \sim \mathcal{D}}}
\newcommand{\mathbb{E}_\mathrm{P}}{\mathbb{E}_\mathrm{P}}
\newcommand{\mathbb{E}_\mathrm{\widetilde{P}}}{\mathbb{E}_\mathrm{\widetilde{P}}}
\newcommand{\mathbb{E}_\mathrm{N}}{\mathbb{E}_\mathrm{N}}
\newcommand{\mathbb{E}_\mathrm{\widetilde{N}}}{\mathbb{E}_\mathrm{\widetilde{N}}}
\newcommand{\mathbb{E}_{\mathrm{u}}}{\mathbb{E}_{\mathrm{u}}}
\newcommand{\ell_{0\text{-}1}}{\ell_{0\text{-}1}}
\newcommand{\ell_{0\text{-}1\text{-}c}}{\ell_{0\text{-}1\text{-}c}}
\newcommand{X_\mathrm{P}}{X_\mathrm{P}}
\newcommand{X_\mathrm{-}}{X_\mathrm{-}}
\newcommand{X_\mathrm{\widetilde{P}}}{X_\mathrm{\widetilde{P}}}
\newcommand{X_\mathrm{\widetilde{N}}}{X_\mathrm{\widetilde{N}}}
\newcommand{n_\mathrm{p}}{n_\mathrm{p}}
\newcommand{n_\mathrm{u}}{n_\mathrm{u}}
\newcommand{\textregistered}{\textregistered}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}}
\newcommand{p_\mathrm{t}}{p_\mathrm{t}}
\newcommand{p_\mathrm{u}}{p_\mathrm{u}}
\newcommand{\alpha^*_{\mathrm{unif}}}{\alpha^*_{\mathrm{unif}}}
\newcommand{\boldsymbol{x}}{\boldsymbol{x}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\ell_\mathrm{sym}}{\ell_\mathrm{sym}}
\newcommand{\crisk}[1]{{R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}(#1) }}
\newcommand{\cemprisk}[1]{{R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}(#1) }}
\newcommand{\corrrisk}[1]{{R^{\ell_\mathrm{sym}}_{\mathrm{\widetilde{AUC}}}(#1) }}
\newcommand{\corremprisk}[1]{{\widehat{R}^{\ell_\mathrm{sym}}_{\mathrm{\widetilde{AUC}}}(#1) }}
\newcolumntype{L}{>{$}l<{$}}
\newcolumntype{C}{>{$}c<{$}}
\begin{document}
\title{A Symmetric Loss Perspective of Reliable Machine Learning}
\author{Nontawat Charoenphakdee$^{1,2}$ \and Jongyeong Lee$^{1,2}$ \and Masashi Sugiyama$^{2,1}$}
\date{
$^1$ The University of Tokyo
$^2$ RIKEN AIP
}
\maketitle
\abstract{
When minimizing the empirical risk in binary classification, it is a common practice to replace the zero-one loss with a surrogate loss to make the learning objective feasible to optimize.
Examples of well-known surrogate losses for binary classification include the logistic loss, hinge loss, and sigmoid loss.
It is known that the choice of a surrogate loss can highly influence the performance of the trained classifier and therefore it should be carefully chosen.
Recently, surrogate losses that satisfy a certain symmetric condition (known as symmetric losses) have demonstrated their usefulness in learning from corrupted labels.
In this article, we provide an overview of symmetric losses and their applications.
First, we review how a symmetric loss can yield robust classification from corrupted labels in balanced error rate (BER) minimization and area under the receiver operating characteristic curve (AUC) maximization.
Then, we demonstrate how the robust AUC maximization method can benefit natural language processing in the problem where we want to learn only from relevant keywords and unlabeled documents.
Finally, we conclude this article by discussing future directions, including potential applications of symmetric losses for reliable machine learning and the design of non-symmetric losses that can benefit from the symmetric condition.
}
\section{Introduction}
\label{intro}
Modern machine learning methods such as deep learning typically require a large amount of data to achieve desirable performance~\citep{schmidhuber2015deep,lecun2015deep,goodfellow2016deep}.
However, it is often the case that the labeling process is costly and time-consuming.
To mitigate this problem, one may consider collecting training labels through crowdsourcing~\citep{dawid1979maximum,kittur2008crowdsourcing}, which is a popular approach and has become more convenient in recent years~\citep{deng2009imagenet,crowston2012amazon,sun2014chimera,vaughan2017making,pandey2020crowdsourcing,vermicelli2020can, washington2020precision}.
For example, crowdsourcing has been used for tackling the COVID-19 pandemic to accelerate research and drug discovery~\citep{vermicelli2020can,chodera2020crowdsourcing}.
However, a big challenge of crowdsourcing is that the collected labels can be unreliable because non-expert annotators may fail to provide correct information~\citep{lease2011quality,zhang2014spectral,gao2016exact,imamura2018analysis}.
Moreover, not only non-experts but even expert annotators can make mistakes.
As a result, it is unrealistic to expect that the collected training data are always reliable.
It is well-known that training from data with noisy labels can give an inaccurate classifier~\citep{noise1, noise2, noise3,frenay2013classification,natarajan2013learning}.
Interestingly, it has been shown that the trained classifier may only perform slightly better than random guessing even under a simple noise assumption~\citep{long2010random}.
Since learning from noisy labels is challenging and highly relevant in the real-world, this problem has been studied extensively in both theoretical and practical aspects~\citep{van2017theory,jiang2018mentornet,algan2019image,liu2020peer,wei2020optimizing,karimi2020deep,han2018co,han2020survey}.
Recently, a loss function that satisfies a certain symmetric condition has demonstrated its usefulness in learning from noisy labels.
A pioneering work in this direction is that of~\citet{manwani2013noise}, who showed that using symmetric losses can be robust under random classification noise~(see also~\citet{ghosh2015making} and~\citet{van2015learning}).
However, the assumption of random classification noise can be restrictive since it assumes that each training label is flipped independently with a fixed probability regardless of its original label.
As a result, it is important to investigate a more realistic noise model to reflect a real-world situation more accurately.
In this article, we review the robustness result of symmetric losses in the setting of mutually contaminated noise~\citep{scott2013classification,menon2015}.
This noise model has been proven to be quite general since
it encompasses well-known noise assumptions such as the random classification noise and class-conditional noise~\citep{menon2015,lu2018minimal}.
Furthermore, many instances of weakly-supervised learning problems can also be formulated into the setting of mutually contaminated noise~\citep{kiryo2017,baosu,lu2018minimal,shimada2019classification}.
In this article, we will discuss how using a symmetric loss can be advantageous in BER and AUC optimization under mutually contaminated noise.
Interestingly, with a symmetric loss, one does not need the knowledge of the noise rate to learn effectively with a theoretical guarantee~\citep{charoenphakdee2019symmetric}.
This article also demonstrates how to use a symmetric loss in a real-world problem in the context of natural language processing.
We discuss an application of symmetric losses for learning a reliable classifier from only relevant keywords and unlabeled documents~\citep{jin2017combining,charoenphakdee2019learning}.
In this problem, we first collect unlabeled documents.
Then, we collect relevant keywords that are useful for determining the target class of interest.
Unlike collecting labels for every training document, collecting keywords can be much cheaper and the number of keywords does not necessarily scale with the number of unlabeled training documents~\citep{chang2008importance,song2014dataless,chen2015dataless,li2018pseudo,jin2017combining,jin2020learning}.
We will discuss how this problem can be formulated into the framework of learning under mutually contaminated noise and how using a symmetric loss can be highly useful for solving this problem~\citep{charoenphakdee2019learning}.
\section{Preliminaries}
\label{sec:prelim}
In this section, we review the standard formulation of binary classification based on empirical risk minimization~\citep{vapnik1998statistical} and well-known evaluation metrics.
Then, we review the definition of symmetric losses and the problem of learning from corrupted labels.
\subsection{Binary classification}
\label{sec:binclass}
Here, we review the problem of binary classification, where the goal is to learn an accurate classifier from labeled training data.
Let $\boldsymbol{x} \in \mathcal{X}$ denote a pattern in an input space $\mathcal{X}$.
For example, an input space $\mathcal{X}$ could be a space of $d$-dimensional real-valued vectors in $\mathbb{R}^d$.
Also, let $y \in \{-1, +1\}$ denote a class label and $g\colon\mathcal{X} \to \mathbb{R}$ denote a prediction function that we want to learn from data.
In binary classification, we use the function sign$(g(\boldsymbol{x}))$ to determine the predicted label of a prediction function, where sign$(g(\boldsymbol{x}))=1$ if $g(\boldsymbol{x})>0$, $-1$ if $g(\boldsymbol{x})<0$, and $0$ otherwise\footnote{$\mathrm{sign}(g(\boldsymbol{x}))=0$ indicates that $g$ suggests to random guess a label. In practice, one may force a prediction function $g$ to not output $0$ when training $g$. For simplicity of explanation, we assume that $g$ does not output zero.} .
In ordinary binary classification, we are given $n$ labeled training examples, $\{\boldsymbol{x}_i, y_i\}_{i=1}^{n}$,
which are assumed to be drawn independently from a joint distribution $\mathcal{D}$ with density $p(\boldsymbol{x}, y)$.
Next, to evaluate the prediction function, we define the zero-one loss as follows:
\begin{align}
\ell_{0\text{-}1}(z)=-\frac{1}{2} \, \mathrm{sign}(z) + \frac{1}{2}.
\end{align}
Given an input-output pair $(\boldsymbol{x},y)$, if the signs of both $y$ and $g(\boldsymbol{x})$ are identical, we have zero penalty, i.e., $\ell_{0\text{-}1}(yg(\boldsymbol{x}))=0$.
On the other hand, we have $\ell_{0\text{-}1}(yg(\boldsymbol{x}))=1$ if the signs of $y$ and $g(\boldsymbol{x})$ are different, which indicates incorrect prediction.
The goal of binary classification is to find a prediction function $g$ that minimizes the following misclassification risk, i.e., the risk corresponding to the classification error rate (CER):
\begin{align}\label{pnrisk}
R^{\ell_{0\text{-}1}}_{\mathrm{CER}}(g) = \mathop{\mathbb{E}}\limits_{(\boldsymbol{x},y) \sim \mathcal{D}} \left[ \ell_{0\text{-}1}(y g(\boldsymbol{x}) )\right].
\end{align}
As suggested by Eq.~\eqref{pnrisk}, our goal is to find a prediction function that performs well on average over the whole distribution; that is, $g$ should accurately classify unseen examples drawn from the same data distribution, not merely perform well on the observed training examples.
In practice, the misclassification risk $R^{\ell_{0\text{-}1}}_{\mathrm{CER}}$ cannot be directly minimized because we only observe finite training examples, not the whole probability distribution.
By using training examples, the empirical risk minimization framework suggests to find~$g$ by minimizing the following empirical risk~\citep{vapnik1998statistical}:
\begin{align}\label{pnriskemp}
\widehat{R}^{\ell_{0\text{-}1}}_{\mathrm{CER}}(g) = \frac{1}{n} \sum_{i=1}^{n} \ell_{0\text{-}1}(y_ig(\boldsymbol{x}_i)) \text{.}
\end{align}
Although the risk estimator in Eq.~\eqref{pnriskemp} is an unbiased and consistent estimator of the misclassification risk~\citep{vapnik1998statistical}, it is not straightforward to directly minimize it.
Indeed, with the zero-one loss in the empirical risk, the minimization problem is known to be computationally infeasible: it is NP-hard even when the function class for $g$ is the class of linear hyperplanes~\citep{zeroonenphard2, zeroonenphard1}.
Moreover, the gradient of the zero-one loss is zero almost everywhere and therefore hinders the use of a gradient-based optimization method.
To mitigate this problem, it is a common practice to replace the zero-one loss with a different loss function $\ell$ that is easier to minimize, which is called a \emph{surrogate loss}~\citep{zhang2004statistical,bartlett2006}.
As a result, we minimize the following empirical surrogate risk:
\begin{align}
\widehat{R}_{\mathrm{CER}}^\ell(g) = \frac{1}{n} \sum_{i=1}^{n} \ell(y_ig(\boldsymbol{x}_i)),
\end{align}
where regularization techniques can also be employed to avoid overfitting.
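As a concrete illustration of empirical surrogate risk minimization, the following Python sketch trains a linear model by full-batch gradient descent on the logistic loss with an $\ell_2$ regularizer. The toy data and all hyperparameters are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, lr = 200, 2, 1e-3, 0.1

y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] + rng.normal(size=(n, d))   # linearly separable-ish data

w = np.zeros(d)
for _ in range(500):
    z = y * (X @ w)                        # margins y_i g(x_i)
    # d/dz log(1 + exp(-z)) = -1 / (1 + exp(z))
    grad = ((-1.0 / (1.0 + np.exp(z))) * y) @ X / n + lam * w
    w -= lr * grad

print("training error:", np.mean(np.sign(X @ w) != y))
\end{verbatim}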
\begin{table*}[t]
\centering
\caption{Classification-calibrated losses and their properties including the convexity and whether they satisfy the symmetric condition $\ell(z)+\ell(-z)=K$, where $K$ is a constant.} \label{table:optimal-loss}
\begin{tabular}{|C|C|C | C |}
\hline
\text{Loss} & \ell(z) & \text{Convex} & \text{Symmetric} \\ \hline
\text{Zero-one} & -\frac{1}{2} \, \mathrm{sign}(z) + \frac{1}{2}& \times &\checkmark\\
\text{Squared} & (1-z)^{2} & \checkmark &\times\\
\text{Hinge} & \max(0, 1-z) & \checkmark &\times \\
\text{Squared hinge} & \max(0, 1-z)^{2} & \checkmark &\times \\
\text{Exponential} & \exp(-z)& \checkmark &\times\\
\text{Logistic} & \mathrm{log}(1+\exp(-z)) & \checkmark &\times \\
\text{Savage} & \left[(1+\exp(2z))^{2}\right]^{-1} &\times & \times \\
\text{Tangent} & (2\mathrm{arctan}(z)-1)^{2} &\times & \times\\
\text{Ramp}& \mathrm{max}(0, \mathrm{min}(1, (1-z)/2)) & \times & \checkmark \\
\text{Sigmoid} & \left[1+\exp(z)\right]^{-1} &\times & \checkmark \\
\text{Unhinged} & 1-z &\checkmark & \checkmark \\
\hline
\end{tabular}
\end{table*}
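The symmetric condition in Table~\ref{table:optimal-loss} can be verified numerically. The sketch below implements a subset of the listed losses with NumPy and checks whether $\ell(z)+\ell(-z)$ is constant over a grid of margins.
\begin{verbatim}
import numpy as np

losses = {
    "zero-one": lambda z: 0.5 - 0.5 * np.sign(z),
    "squared":  lambda z: (1.0 - z) ** 2,
    "hinge":    lambda z: np.maximum(0.0, 1.0 - z),
    "logistic": lambda z: np.log1p(np.exp(-z)),
    "ramp":     lambda z: np.maximum(0.0, np.minimum(1.0, (1.0 - z) / 2.0)),
    "sigmoid":  lambda z: 1.0 / (1.0 + np.exp(z)),
    "unhinged": lambda z: 1.0 - z,
}

z = np.linspace(-3.0, 3.0, 7)
for name, ell in losses.items():
    s = ell(z) + ell(-z)   # constant iff the loss is symmetric
    print(name, np.allclose(s, s[0]))
# -> True for zero-one, ramp, sigmoid, unhinged; False otherwise.
\end{verbatim}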
The choice of a surrogate loss $\ell$ is highly crucial for training a good classifier and should be carefully designed.
This is because the ultimate goal is still to minimize the misclassification risk $R_{\mathrm{CER}}^{\ell_{0\text{-}1}}(g)$, not the surrogate risk $R_{\mathrm{CER}}^{\ell}(g)$.
To ensure that minimizing the surrogate risk $R_{\mathrm{CER}}^\ell(g)$ yields a meaningful solution for the misclassification risk $R_{\mathrm{CER}}^{\ell_{0\text{-}1}}(g)$, a surrogate loss should satisfy a \emph{classification-calibration} condition, which is known to be a minimum requirement for binary classification~(see~\citet{bartlett2006} for more details).
Many well-known surrogate losses in binary classification satisfy this property.
Table~\ref{table:optimal-loss} provides examples of classification-calibrated losses and their properties~\citep{bartlett2006,masnadi2009,masnadi2010design,van2015learning}.
\subsection{Beyond classification error rate}
Although CER has been used extensively, one should be aware that using this evaluation metric may not be informative when the test labels are highly imbalanced~\citep{menon2013statistical}.
For example, consider a trivial prediction function $g_\mathrm{pos}$ such that $g_\mathrm{pos}(\boldsymbol{x}) >0$ for any $\boldsymbol{x}$, that is, $g_\mathrm{pos}$ only predicts a positive label.
If $99\%$ of the test labels are positive, CER of $g_\mathrm{pos}$ is $0.01$, which may indicate very good performance.
However, $g_\mathrm{pos}$ does not give any meaningful information since it always predicts a positive label regardless of an input $\boldsymbol{x}$.
Thus, low CER may mislead someone into thinking that $g_\mathrm{pos}$ is a good classifier.
Here, we review evaluation metrics that can be used as an alternative to CER to prevent such a problem.
\subsubsection{Balanced error rate (BER)}
Let $\mathbb{E}_\mathrm{P}[\cdot] $ and $\mathbb{E}_\mathrm{N}[\cdot]$ be the expectations of $\boldsymbol{x}$ over $p(\boldsymbol{x}|y=+1)$ and $p(\boldsymbol{x}|y=-1)$, respectively.
Then, the BER risk is defined as follows:
\begin{align*}
R^{\ell_{0\text{-}1}}_{\mathrm{BER}}(g) &= \frac{1}{2} \bigg[ \mathbb{E}_\mathrm{P}\left[{\ell_{0\text{-}1}}(g(\boldsymbol{x}))\right] + \mathbb{E}_\mathrm{N}\left[{\ell_{0\text{-}1}}(-g(\boldsymbol{x}))\right] \bigg] \text{.}
\end{align*}
It is insightful to note that CER can also be expressed as
\begin{align*}
R^{\ell_{0\text{-}1}}_{\mathrm{CER}}(g) &= p(y=+1)\mathbb{E}_\mathrm{P}\left[{\ell_{0\text{-}1}}(g(\boldsymbol{x}))\right] + (1-p(y=+1))\mathbb{E}_\mathrm{N}\left[{\ell_{0\text{-}1}}(-g(\boldsymbol{x}))\right] \text{,}
\end{align*}
where $p(y=+1)$ is the class prior given by $p(y=+1)= \int p(\boldsymbol{x}, y=+1) \, \mathrm{d}\boldsymbol{x}$. We can see that the BER minimization problem is equivalent to the CER minimization problem if the class prior is balanced, i.e., $p(y=+1)=\frac{1}{2}$.
Furthermore, unlike $R^{\ell_{0\text{-}1}}_{\mathrm{CER}}$, any trivial prediction function that predicts only one label cannot have $R^{\ell_{0\text{-}1}}_{\mathrm{BER}}$ lower than $\frac{1}{2}$ regardless of the class prior.
As a result, the prediction function $g$ has an incentive to predict both classes to obtain a low balanced error risk $R^{\ell_{0\text{-}1}}_{\mathrm{BER}}$.
Therefore, BER is known to be useful to evaluate the prediction function $g$ under class imbalance~\citep{ber2,ber1,brodersen2010balanced}.
In addition, it is also worth noting that BER can be interpreted as an arithmetic mean of the false positive and false negative rates~\citep{menon2013statistical}.
Similarly to the CER minimization problem, we can minimize the empirical surrogate risk using training data and a classification-calibrated loss as follows:
\begin{align}
\widehat{R}^{\ell}_{\mathrm{BER}}(g) = \frac{1}{2} \left[\frac{1}{n_\mathrm{P}}\sum_{i: y_i=+1}\ell(g(\boldsymbol{x}_i)) + \frac{1}{n_\mathrm{N}}\sum_{j: y_j=-1}\ell(-g(\boldsymbol{x}_j)) \right],
\end{align}
where $n_\mathrm{P}$ and $n_\mathrm{N}$ are the numbers of positive and negative examples, respectively.
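As a concrete example, the empirical surrogate BER risk can be computed as follows, here with the sigmoid loss as the surrogate; the toy scores and labels are our own construction.
\begin{verbatim}
import numpy as np

def sigmoid_loss(z):
    return 1.0 / (1.0 + np.exp(z))

def empirical_ber_risk(g_scores, y, loss=sigmoid_loss):
    """g_scores: array of g(x_i); y: labels in {-1, +1}."""
    pos, neg = g_scores[y == +1], g_scores[y == -1]
    return 0.5 * (loss(pos).mean() + loss(-neg).mean())

rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=1000, p=[0.9, 0.1])  # imbalanced labels
g = y * rng.uniform(0.0, 2.0, size=1000)          # a reasonably good scorer
print(empirical_ber_risk(g, y))
\end{verbatim}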
\subsubsection{Area under the receiver operating characteristic curve (AUC)}
In binary classification, a receiver operating characteristic (ROC) curve plots the true positive rate against the false positive rate at various decision thresholds.
The area under the ROC curve is called the AUC score, which can be used to evaluate the performance of a prediction function over all possible decision thresholds on average.
The AUC score can also be interpreted as the probability that the prediction function outputs a higher score for a random positive example than a random negative example~\citep{fawcett2006introduction}.
Let us consider the following AUC risk~\citep{narasimhan2013relationship}:
\begin{align} \label{aucrisk}
R^{\ell_{0\text{-}1}}_{\mathrm{AUC}}(g) = \mathbb{E}_\mathrm{P} [ \mathbb{E}_\mathrm{N}[{\ell_{0\text{-}1}}( g(\boldsymbol{x}_\mathrm{P})-g(\boldsymbol{x}_\mathrm{N}))]] \text{,}
\end{align}
which is the complement of the AUC score, since the expected AUC score is $1-R^{\ell_{0\text{-}1}}_{\mathrm{AUC}}(g)$.
Therefore, maximizing the AUC score is equivalent to minimizing the AUC risk. Intuitively, a high AUC score indicates that $g$ outputs higher values for positive examples than for negative examples on average.
Unlike CER and BER where the function $\mathrm{sign}(g(\boldsymbol{x}))$ is crucial for the evaluation, in AUC, the sign function is evaluated on the difference between the outputs of $g$ on positive and negative data.
As a result, an evaluation based on AUC is highly related to the bipartite ranking problem~\citep{narasimhan2013relationship,menon2016bipartite}, where the goal is to find a function $g$ that can rank positive examples over negative examples.
It is also worth noting that AUC is highly related to the Wilcoxon-Mann-Whitney
statistic~\citep{mann1947test,hanley1982meaning}.
Similarly to BER, AUC is also known to be a useful evaluation metric under class imbalance~\citep{ber2, ber1}.
Given training data, the empirical surrogate AUC risk can be defined as follows:
\begin{align}
\widehat{R}^{\ell}_{\mathrm{AUC}}(g) = \frac{1}{n_\mathrm{P}\times n_\mathrm{N}}\sum_{i:y_i=+1} \sum_{j:y_j=-1}\ell(g(\boldsymbol{x}_i) - g(\boldsymbol{x}_j)).
\end{align}
However, unlike the CER and BER minimization problems, a loss requirement for AUC optimization should be \emph{AUC-consistent} to guarantee that the optimal solution of a surrogate AUC risk is also optimal for the AUC risk~\citep{gao2015consistency,menon2016bipartite}.
Note that this condition is not equivalent to classification-calibration.
For example, the hinge loss is known to be classification-calibrated but not AUC-consistent~\citep{gao2015consistency,uematsu2017theoretically}.
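The empirical AUC risk is a pairwise average and can be computed by broadcasting over all positive-negative pairs. The sketch below evaluates both the zero-one version, whose complement is the empirical AUC score, and the AUC-consistent sigmoid surrogate, on toy scores of our own construction.
\begin{verbatim}
import numpy as np

def zero_one(z):
    return 0.5 - 0.5 * np.sign(z)

def sigmoid_loss(z):
    return 1.0 / (1.0 + np.exp(z))

def empirical_auc_risk(g_scores, y, loss=zero_one):
    diff = g_scores[y == +1][:, None] - g_scores[y == -1][None, :]
    return loss(diff).mean()   # mean over all n_P x n_N pairs

rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=500)
g = y + rng.normal(scale=1.0, size=500)  # noisy but informative scores
print(1.0 - empirical_auc_risk(g, y))    # empirical AUC score
print(empirical_auc_risk(g, y, loss=sigmoid_loss))
\end{verbatim}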
\subsection{Symmetric losses}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{img/Figure_1.pdf}
\vspace{-1in}
\caption{Examples of loss functions.
All losses in this figure except the logistic loss are symmetric.}
\label{fig:all-loss-ex}
\end{figure}
The term symmetric loss in this article refers to a loss function $\ell_\mathrm{sym}\colon \mathbb{R} \to \mathbb{R}$ such that $\ell_\mathrm{sym}(z)+\ell_\mathrm{sym}(-z) = K$, where $K$ is a constant\footnote{Other definitions of symmetric losses also exist; see, e.g., \citet{reid2010} and \citet{natarajan2013learning}.}.
It is known that if a loss function is symmetric and non-negative, it must be non-convex~\citep{du2014}.
Figure~\ref{fig:all-loss-ex} illustrates three symmetric losses, which are the zero-one loss, sigmoid loss, and ramp loss.
It is worth noting that both the sigmoid loss and the ramp loss are classification-calibrated and AUC-consistent~\citep{charoenphakdee2019symmetric}.
\subsection{Learning from corrupted labels}~\label{sec:learncor}
The standard formulation of binary classification in Section~\ref{sec:binclass} does not take into account that training labels can be corrupted.
To extend the standard formulation to support such a situation, learning from corrupted labels under mutually contaminated noise assumes that the training data are given as follows~\citep{menon2015,lu2018minimal,charoenphakdee2019symmetric}:
\begin{align*}
X_\mathrm{\widetilde{P}}&:= \{\boldsymbol{x}^\mathrm{\widetilde{P}}_i\}_{i=1}^{n_\mathrm{\widetilde{P}}} \stackrel{\mathrm{i.i.d.}}{\sim} p_{\pi_{\mathrm{\widetilde{P}}}}(\boldsymbol{x}) \text{,}\\
X_{\mathrm{\widetilde{N}}}&:= \{\boldsymbol{x}^\mathrm{\widetilde{N}}_j\}_{j=1}^{n_\mathrm{\widetilde{N}}}\stackrel{\mathrm{i.i.d.}}{\sim} p_{\pi_{\mathrm{\widetilde{N}}}}(\boldsymbol{x}) \text{,}
\end{align*}
where, for $0<\pi_{\mathrm{\widetilde{N}}}<\pi_{\mathrm{\widetilde{P}}}<1$,
\begin{align*}
p_{\pi_{\mathrm{\widetilde{P}}}}(\boldsymbol{x})&= \pi_{\mathrm{\widetilde{P}}} p(\boldsymbol{x}|y=+1)+(1-\pi_{\mathrm{\widetilde{P}}}) p(\boldsymbol{x}|y=-1) \text{,}\\
p_{\pi_{\mathrm{\widetilde{N}}}}(\boldsymbol{x})&= \pi_{\mathrm{\widetilde{N}}} p(\boldsymbol{x}|y=+1)+(1-\pi_{\mathrm{\widetilde{N}}}) p(\boldsymbol{x}|y=-1) \text{.}
\end{align*}
Concretely, the formulation assumes that corrupted positive examples $X_\mathrm{\widetilde{P}}$ are drawn from a distribution whose density is a mixture of the class-conditional positive density $p(\boldsymbol{x}|y=+1)$ and the class-conditional negative density $p(\boldsymbol{x}|y=-1)$, where $\pi_{\mathrm{\widetilde{P}}}$ controls the mixture proportion between the two densities.
Corrupted negative examples $X_\mathrm{\widetilde{N}}$ are also assumed to be drawn similarly but with a different mixture proportion $\pi_{\mathrm{\widetilde{N}}}$.
We can interpret the assumption of the data generating process as follows.
The given training data is clean if $\pi_{\mathrm{\widetilde{P}}}=1$ and $\pi_{\mathrm{\widetilde{N}}}=0$, i.e., the data of each class is drawn from the class-conditional distribution w.r.t. one class.
On the other hand, we can see that the training data can be highly noisy if $\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}$ is small, i.e., the corrupted positive data and corrupted negative data have similar distributions and are therefore difficult to distinguish.
Note that in this framework, it is reasonable to assume that $\pi_{\mathrm{\widetilde{P}}} > \pi_{\mathrm{\widetilde{N}}}$ because the corrupted positive distribution should still have more information about positive data than the corrupted negative distribution.
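For intuition, the data generating process above can be simulated with any pair of class-conditional densities; the sketch below uses one-dimensional Gaussians purely as an illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_corrupted(n, pi, loc_pos=+1.0, loc_neg=-1.0):
    """Draw n points from pi * p(x|y=+1) + (1 - pi) * p(x|y=-1)."""
    from_pos = rng.random(n) < pi
    return np.where(from_pos,
                    rng.normal(loc_pos, 1.0, n),   # clean positive component
                    rng.normal(loc_neg, 1.0, n))   # clean negative component

pi_p, pi_n = 0.8, 0.3   # noise levels; note pi_n < pi_p
X_ptilde = sample_corrupted(1000, pi_p)
X_ntilde = sample_corrupted(1000, pi_n)
print(X_ptilde.mean(), X_ntilde.mean())  # means reflect the mixture proportions
\end{verbatim}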
\begin{table*}[t]
\centering
\caption{References of related work on evaluation metrics and problem settings mentioned in this article.
Section~\ref{sec:symadvantage} of this article focuses on the problem of BER and AUC optimization from corrupted labels under mutually contaminated noise, which is based on~\citet{charoenphakdee2019symmetric}.}
\label{table:references}
\begin{tabular}{|c|p{0.35\linewidth} | p{0.35\linewidth} |}
\hline
& \multicolumn{1}{c|}{\centering Clean labels} & \multicolumn{1}{c|}{\centering Corrupted labels} \\ \hline
CER & \citet{zhang2004statistical,bartlett2006,ben2012minimizing} & \citet{lu2018minimal,lu2020mitigating} \\ \hline
BER & \citet{brodersen2010balanced,menon2013statistical} & \multicolumn{1}{c|}{\multirow{2}{*}{\centering This article}} \\ \cline{1-2}
AUC & \citet{ling1998data,agarwal2005roc,menon2016bipartite} & \\ \hline
\end{tabular}
\end{table*}
In learning from corrupted labels, it has been shown that CER minimization based on empirical risk minimization is possible if the knowledge of $\pi_{\mathrm{\widetilde{P}}}$, $\pi_{\mathrm{\widetilde{N}}}$, and the class prior of the test distribution is given~\citep{lu2018minimal,lu2020mitigating}.
On the other hand, the problem of estimating $\pi_{\mathrm{\widetilde{P}}}$ and $\pi_{\mathrm{\widetilde{N}}}$ from corrupted labeled data is known to be an unidentifiable problem unless a restrictive condition is applied~\citep{blanchard2010semi, menon2015, scott2015rate}.
Furthermore, the class prior of the test distribution has no relationship with $\pi_{\mathrm{\widetilde{P}}}$ and $\pi_{\mathrm{\widetilde{N}}}$ and it has to be specified if the goal is to minimize the misclassification risk~\citep{lu2018minimal}.
For these reasons, CER minimization can be infeasible if one has access only to corrupted labeled examples without any additional information.
As a result, it is important to explore other evaluation metrics that could be useful in learning from corrupted labels that can be optimized \emph{without requiring the knowledge of $\pi_{\mathrm{\widetilde{P}}}$ and $\pi_{\mathrm{\widetilde{N}}}$}.
In Section~\ref{sec:symadvantage}, we will show that BER and AUC can be effectively optimized in learning from corrupted labels without having to estimate $\pi_{\mathrm{\widetilde{P}}}$ and~$\pi_{\mathrm{\widetilde{N}}}$.
In Table~\ref{table:references}, we show related work on learning from clean and corrupted labels with different evaluation metrics that could be of interest to readers.
\section{A Symmetric Loss Approach to BER and AUC Optimization from Corrupted Labels}
\label{sec:symadvantage}
In this section, we begin by describing the related work in BER/AUC optimization from corrupted labels and then show that using a symmetric loss can be advantageous for BER and AUC optimization from corrupted labels.
\subsection{Background}
In learning from corrupted labels, \citet{menon2015} proved that both BER and AUC are robust w.r.t. the zero-one loss.
More precisely, without using a surrogate loss, the minimizer of the BER/AUC risk w.r.t. the clean distribution and that of the BER/AUC risk w.r.t. the corrupted distribution are identical.
In their experiments, they used the squared loss as the surrogate loss and did not compare it with other surrogate losses.
Next, \citet{van2015average} generalized the theoretical result of \citet{menon2015} for BER minimization from corrupted labels from the zero-one loss to any symmetric loss.
Then, \citet{charoenphakdee2019symmetric} proved the relationship between the clean and corrupted BER/AUC risks for a general loss function, which elucidates that using a symmetric loss can be advantageous for both BER and AUC optimization from corrupted labels.
Furthermore, \citet{charoenphakdee2019symmetric} also conducted extensive experiments to verify that using symmetric losses can perform significantly better than using non-symmetric losses.
\subsection{BER minimization from corrupted labels}
To verify the robustness of BER minimization from corrupted labels, we investigate the relationship between the clean risk and the corrupted risk for an arbitrary surrogate loss~$\ell$.
First, let us define the following surrogate risk for BER from corrupted labels:
\begin{align*}
R^\ell_{\mathrm{\widetilde{BER}}}(g) = \frac{1}{2} \big[ R_{\mathrm{\widetilde{P}}}^{\ell}(g) + R_{\mathrm{\widetilde{N}}}^{\ell}(g) \big] \text{,}
\end{align*}
where
\begin{align*}
R_{\mathrm{\widetilde{P}}}^{\ell}(g) &= \mathbb{E}_\mathrm{\widetilde{P}} [\ell(g(\boldsymbol{x}))] =\pi_{\mathrm{\widetilde{P}}} \mathbb{E}_\mathrm{P}[\ell(g(\boldsymbol{x}))] + (1-\pi_{\mathrm{\widetilde{P}}})\mathbb{E}_\mathrm{N}[\ell(g(\boldsymbol{x}))] \text{,} \\
R_{\mathrm{\widetilde{N}}}^{\ell}(g) &= \mathbb{E}_\mathrm{\widetilde{N}} [\ell(-g(\boldsymbol{x}))] = \pi_{\mathrm{\widetilde{N}}} \mathbb{E}_\mathrm{P}[\ell(-g(\boldsymbol{x}))] + (1-\pi_{\mathrm{\widetilde{N}}})\mathbb{E}_\mathrm{N}[\ell(-g(\boldsymbol{x}))]\text{,}
\end{align*}
where $\mathbb{E}_\mathrm{\widetilde{P}}$ and $\mathbb{E}_\mathrm{\widetilde{N}}$ denote the expectations over $p_{\pi_{\mathrm{\widetilde{P}}}}$ and $p_{\pi_{\mathrm{\widetilde{N}}}}$, respectively.
Since we have samples $X_{\mathrm{\widetilde{P}}}$ and $X_{\mathrm{\widetilde{N}}}$ following $p_{\pi_{\mathrm{\widetilde{P}}}}$ and $p_{\pi_{\mathrm{\widetilde{N}}}}$, respectively (see Section~\ref{sec:learncor}), the following empirical risk $\widehat{R}^\ell_{\mathrm{\widetilde{BER}}}$ can be minimized in practice:
\begin{align}
\label{eq:ber-emp}
\widehat{R}^\ell_{\mathrm{\widetilde{BER}}}(g) = \frac{1}{2}\left[ \frac{1}{n_\mathrm{\widetilde{P}}}\sum_{\boldsymbol{x} \in X_{\mathrm{\widetilde{P}}}} \ell(g(\boldsymbol{x})) +
\frac{1}{n_\mathrm{\widetilde{N}}}\sum_{\boldsymbol{x} \in X_{\mathrm{\widetilde{N}}}} \ell(-g(\boldsymbol{x})) \right].
\end{align}
Note that $\widehat{R}^\ell_{\mathrm{\widetilde{BER}}}$ can be minimized without the knowledge of $\pi_{\mathrm{\widetilde{P}}}$ and $\pi_{\mathrm{\widetilde{N}}}$.
Next, the following equation illustrates the relationship between the clean BER risk $R^\ell_{\mathrm{BER}}$ and the corrupted BER risk $R^\ell_{\mathrm{\widetilde{BER}}}$ w.r.t. any loss function $\ell$~\citep{charoenphakdee2019symmetric}:
\begin{align}
\label{eq:ber-general}
R^\ell_{\mathrm{\widetilde{BER}}}(g) &= (\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}) {R^\ell_{\mathrm{BER}}(g)} + \underbrace{\frac{\pi_{\mathrm{\widetilde{N}}} \mathbb{E}_\mathrm{P}[\gamma^\ell(g(\boldsymbol{x}))] + (1-\pi_{\mathrm{\widetilde{P}}})\mathbb{E}_\mathrm{N}[\gamma^\ell(g(\boldsymbol{x}))]}{2}}_\textup{Excessive term} \text{,}
\end{align}
where $\gamma^\ell(z) = \ell(z) + \ell(-z)$.
From Eq.~(\ref{eq:ber-general}), we can see that $g$ which minimizes $R^\ell_{\mathrm{\widetilde{BER}}}$ should also perform reasonably well for $R^\ell_{\mathrm{BER}}$ for any loss function.
However, a prediction function $g$ that minimizes $R^\ell_{\mathrm{\widetilde{BER}}}$ must also take into account the excessive term, which is unrelated to our goal.
As a result, the minimizer of $R^\ell_{\mathrm{\widetilde{BER}}}$ is not guaranteed to be the minimizer of $R^\ell_{\mathrm{BER}}$ because of the non-constant excessive term.
Next, let us consider a symmetric loss $\ell_\mathrm{sym}$ such that $\ell_\mathrm{sym}(z)+\ell_\mathrm{sym}(-z)=K$, where $K$ is a constant regardless of $z$.
With a symmetric loss, we can rewrite Eq.~\eqref{eq:ber-general} as
\begin{align*}
R^{\ell_\mathrm{sym}}_{\mathrm{\widetilde{BER}}}(g) &= (\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}) {R^{\ell_\mathrm{sym}}_{\mathrm{BER}}(g)} + \frac{\pi_{\mathrm{\widetilde{N}}} \mathbb{E}_\mathrm{P}[K] + (1-\pi_{\mathrm{\widetilde{P}}})\mathbb{E}_\mathrm{N}[K]}{2}\\
&= (\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}) R^{{\ell_\mathrm{sym}}}_{\mathrm{BER}}(g) + K\left(\frac{1-\pi_{\mathrm{\widetilde{P}}}+\pi_{\mathrm{\widetilde{N}}}}{2}\right) \text{.}
\end{align*}
We can see that if a loss is symmetric, then the excessive term will be a constant and the minimizers of $R^\ell_{\mathrm{\widetilde{BER}}}$ and $R^\ell_{\mathrm{BER}}$ must be identical.
This suggests that $g$ can ignore the excessive term when using a symmetric loss.
As a result, BER minimization from corrupted labels can be done effectively without the knowledge of $\pi_{\mathrm{\widetilde{P}}}$ and $\pi_{\mathrm{\widetilde{N}}}$ by minimizing Eq.~\eqref{eq:ber-emp} using a symmetric loss.
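The affine relation derived above can be verified by Monte Carlo: with the sigmoid loss ($K=1$) and a fixed scorer, the corrupted BER risk should match $(\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}) R^{\ell_\mathrm{sym}}_{\mathrm{BER}}(g) + K(1-\pi_{\mathrm{\widetilde{P}}}+\pi_{\mathrm{\widetilde{N}}})/2$ up to sampling error. The Gaussian class-conditionals and the scorer $g(x)=x$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
pi_p, pi_n = 0.8, 0.3

sig = lambda z: 1.0 / (1.0 + np.exp(z))  # sigmoid loss, K = 1
x_pos = rng.normal(+1.0, 1.0, n)         # ~ p(x | y = +1)
x_neg = rng.normal(-1.0, 1.0, n)         # ~ p(x | y = -1)

def mix(pi):  # corrupted sample with mixture proportion pi
    return np.where(rng.random(n) < pi, x_pos, x_neg)

# fixed toy scorer g(x) = x
ber_clean = 0.5 * (sig(x_pos).mean() + sig(-x_neg).mean())
ber_corr  = 0.5 * (sig(mix(pi_p)).mean() + sig(-mix(pi_n)).mean())

print(ber_corr)                                               # Monte Carlo
print((pi_p - pi_n) * ber_clean + (1.0 - pi_p + pi_n) / 2.0)  # affine prediction
\end{verbatim}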
\subsection{AUC maximization from corrupted labels}\label{sec:aucmax}
Let us consider a corrupted AUC risk with a surrogate loss $\ell$ that treats $X_\mathrm{\widetilde{P}}$ as being positive and $X_{\mathrm{\widetilde{N}}}$ as being negative:
\begin{align*}
R^\ell_{\mathrm{\widetilde{AUC}}}(g) = \mathbb{E}_\mathrm{\widetilde{P}}[\mathbb{E}_{\mathrm{\widetilde{N}}}[\ell( g(\boldsymbol{x}_\mathrm{\widetilde{P}})-g(\boldsymbol{x}_\mathrm{\widetilde{N}}))]]\text{,}
\end{align*}
which can be empirically approximated using training data as
\begin{align}
\label{eq:auc-emp}
\widehat{R}^\ell_{\mathrm{\widetilde{AUC}}}(g) = \frac{1}{n_{\mathrm{\widetilde{P}}}\times n_{\mathrm{\widetilde{N}}} }\sum_{\boldsymbol{x} \in X_{\mathrm{\widetilde{P}}}} \sum_{\boldsymbol{x}^\prime \in X_{\mathrm{\widetilde{N}}}} \ell(g(\boldsymbol{x})-g(\boldsymbol{x}^\prime))\text{.}
\end{align}
\citet{charoenphakdee2019symmetric} showed that the relationship between $R^\ell_{\mathrm{\widetilde{AUC}}}(g)$ and $R^\ell_{\mathrm{AUC}}(g)$ can be expressed as follows:
\begin{align*}
\begin{split}
R^\ell_{\mathrm{\widetilde{AUC}}}(g) ={}& (\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}})R^\ell_{\mathrm{AUC}}(g) + \underbrace{(1-\pi_{\mathrm{\widetilde{P}}})\pi_{\mathrm{\widetilde{N}}} \mathbb{E}_\mathrm{P}[\mathbb{E}_\mathrm{N}[\gamma^\ell(g(\boldsymbol{x}_\mathrm{P}),g(\boldsymbol{x}_\mathrm{N}))]]}_\textup{Excessive term} \\ &+ \underbrace{\frac{\pi_{\mathrm{\widetilde{P}}}\pi_{\mathrm{\widetilde{N}}}}{2} \mathbb{E}_{\mathrm{P}}[\mathbb{E}_\mathrm{P}[\gamma^\ell(g(\boldsymbol{x}'_{\mathrm{\mathrm{P}}}),g(\boldsymbol{x}_\mathrm{P}))]]}_\textup{Excessive term} \\ &+ \underbrace{\frac{(1-\pi_{\mathrm{\widetilde{P}}})(1-\pi_{\mathrm{\widetilde{N}}})}{2} \mathbb{E}_{\mathrm{N}}[\mathbb{E}_\mathrm{N}[\gamma^\ell(g(\boldsymbol{x}'_{\mathrm{N}}),g(\boldsymbol{x}_\mathrm{N}))]]}_\textup{Excessive term}\text{,}
\end{split}
\end{align*}
where $\gamma^\ell(z,z') = \ell(z-z') + \ell(z'-z)$.
Next, with a symmetric loss $\ell_\mathrm{sym}$, we have
\begin{align*}
R^{\ell_\mathrm{sym}}_{\mathrm{\widetilde{AUC}}}(g) &= (\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}})R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}(g) + (1-\pi_{\mathrm{\widetilde{P}}})\pi_{\mathrm{\widetilde{N}}} \mathbb{E}_\mathrm{P}[\mathbb{E}_\mathrm{N}[K]] \\ &\quad + \frac{\pi_{\mathrm{\widetilde{P}}}\pi_{\mathrm{\widetilde{N}}}}{2} \mathbb{E}_\mathrm{P}[\mathbb{E}_\mathrm{P}[K]] + \frac{(1-\pi_{\mathrm{\widetilde{P}}})(1-\pi_{\mathrm{\widetilde{N}}})}{2} \mathbb{E}_\mathrm{N}[\mathbb{E}_\mathrm{N}[K]]\text{,}
\\
&= (\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}})R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}(g) + K\left(\frac{1 -\pi_{\mathrm{\widetilde{P}}} + \pi_{\mathrm{\widetilde{N}}}}{2}\right) \text{.}
\end{align*}
Similarly to BER minimization from corrupted labels, the result suggests that the excessive terms become a constant when using a symmetric loss and it is guaranteed that the minimizers of $R^\ell_{\mathrm{\widetilde{AUC}}}(g)$ and $R^\ell_{\mathrm{AUC}}(g)$ are identical.
On the other hand, if a loss is non-symmetric, then we may suffer from the excessive terms and the minimizers of both risks may differ.
We can see that in both BER and AUC optimization from corrupted labels, by using a symmetric loss, the knowledge of $\pi_{\mathrm{\widetilde{P}}}$ and $\pi_{\mathrm{\widetilde{N}}}$ is not required and we can treat the corrupted labels as if they were clean to learn in this setting.
We refer the readers to~\citet{charoenphakdee2019symmetric} for more details on experimental results, where symmetric losses are shown to be preferable over non-symmetric losses.
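As a minimal end-to-end sketch, one can minimize the empirical corrupted AUC risk in Eq.~\eqref{eq:auc-emp} by full-batch gradient descent with the sigmoid loss and a linear scorer $g(\boldsymbol{x}) = \boldsymbol{w}^\top\boldsymbol{x}$. The toy data, step size, and iteration count below are arbitrary choices, not the experimental setup of \citet{charoenphakdee2019symmetric}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 2
pi_p, pi_n = 0.8, 0.3

def sample_corrupted(n, pi):
    m = (rng.random(n) < pi)[:, None]
    pos = rng.normal(+1.0, 1.0, (n, d))  # clean positive component
    neg = rng.normal(-1.0, 1.0, (n, d))  # clean negative component
    return np.where(m, pos, neg)

Xp, Xn = sample_corrupted(n, pi_p), sample_corrupted(n, pi_n)

w = np.zeros(d)
for _ in range(200):
    z = (Xp @ w)[:, None] - (Xn @ w)[None, :]   # pairwise margins
    s = 1.0 / (1.0 + np.exp(-z))                # logistic sigma(z)
    grad_z = -(s * (1.0 - s))                   # d sigmoid-loss / d z
    diff = Xp[:, None, :] - Xn[None, :, :]      # pairwise x - x'
    w -= 0.5 * np.einsum('ij,ijk->k', grad_z, diff) / (n * n)

# evaluate on clean data: w should rank positives above negatives
Xp_c = rng.normal(+1.0, 1.0, (n, d))
Xn_c = rng.normal(-1.0, 1.0, (n, d))
print("clean AUC:", ((Xp_c @ w)[:, None] > (Xn_c @ w)[None, :]).mean())
\end{verbatim}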
\subsection{BER and AUC optimization for weakly-supervised learning}
Here, we give two examples where the formulation of BER and AUC optimization from corrupted labels is useful for weakly-supervised learning: $(i)$ learning from positive and unlabeled data and $(ii)$ learning from two sets of unlabeled data.
\subsubsection{Learning from positive and unlabeled data}
Here, let us consider the problem of binary classification from positive and unlabeled data.
We consider the case-control setting where the training data are given as follows~\citep{ward2sample,du2014,du2015convex,niu2016,kiryo2017,charoenphakdee2019positive,xu2019revisiting}:
\begin{align*}
X_\mathrm{P}&:= \{\boldsymbol{x}^\mathrm{P}_i\}_{i=1}^{n_\mathrm{P}} \stackrel{\mathrm{i.i.d.}}{\sim} p(\boldsymbol{x}|y=+1) \text{,}\\
X_{\mathrm{U}}&:= \{\boldsymbol{x}^\mathrm{U}_j\}_{j=1}^{n_\mathrm{U}}\stackrel{\mathrm{i.i.d.}}{\sim} \pi_\mathrm{U} p(\boldsymbol{x}|y=+1)+(1-\pi_\mathrm{U}) p(\boldsymbol{x}|y=-1) \text{.}
\end{align*}
With $\pi_{\mathrm{\widetilde{P}}}=1$ and $\pi_{\mathrm{\widetilde{N}}}=\pi_\mathrm{U}$, we can relate the training data of learning from positive and unlabeled data to learning from corrupted labels.
In this setting,~\citet{sakai2018} showed that for AUC maximization, a convex surrogate loss can be applied but the class prior $\pi_\mathrm{U}$ needs to be estimated to construct an unbiased risk estimator.
By using a symmetric loss, we can safely perform both BER and AUC optimization without estimating the class prior $\pi_\mathrm{U}$ with a theoretical guarantee.
Concretely, with a symmetric loss $\ell_\mathrm{sym}$, BER minimization from positive and unlabeled data can be done effectively by minimizing the following empirical risk:
\begin{align*}
\widehat{R}^{\ell_\mathrm{sym}}_{\mathrm{BER}\text{-}\mathrm{PU}}(g) = \frac{1}{2}\left[ \frac{1}{n_\mathrm{P}}\sum_{\boldsymbol{x} \in X_{\mathrm{P}}} \ell_\mathrm{sym}(g(\boldsymbol{x})) +
\frac{1}{n_\mathrm{U}}\sum_{\boldsymbol{x} \in X_{\mathrm{U}}} \ell_\mathrm{sym}(-g(\boldsymbol{x})) \right],
\end{align*}
and AUC maximization can be done effectively by minimizing the following empirical risk:
\begin{align*}
\widehat{R}^{\ell_\mathrm{sym}}_{\mathrm{AUC}\text{-}\mathrm{PU}}(g) = \frac{1}{n_{\mathrm{P}}\times n_{\mathrm{U}} }\sum_{\boldsymbol{x} \in X_{\mathrm{P}}} \sum_{\boldsymbol{x}^\prime \in X_{\mathrm{U}}} \ell_\mathrm{sym}(g(\boldsymbol{x})-g(\boldsymbol{x}^\prime))\text{.}
\end{align*}
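To make the two objectives concrete, the following Python sketch evaluates both empirical risks with the sigmoid loss; the score vectors are hypothetical stand-ins for the outputs of a model $g$ on $X_\mathrm{P}$ and $X_\mathrm{U}$:
\begin{verbatim}
import numpy as np

def sigmoid_loss(z):
    return 1.0 / (1.0 + np.exp(z))   # symmetric loss with K = 1

def ber_risk(scores_a, scores_b):
    """Empirical BER-type risk: set A treated as positive, set B as negative."""
    return 0.5 * (np.mean(sigmoid_loss(scores_a))
                  + np.mean(sigmoid_loss(-scores_b)))

def auc_risk(scores_a, scores_b):
    """Empirical pairwise AUC risk over all (a, b) pairs via broadcasting."""
    return np.mean(sigmoid_loss(scores_a[:, None] - scores_b[None, :]))

g_pos = np.random.randn(100) + 1.0   # hypothetical scores g(x) on X_P
g_unl = np.random.randn(200)         # hypothetical scores g(x) on X_U
print(ber_risk(g_pos, g_unl), auc_risk(g_pos, g_unl))
\end{verbatim}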
\subsubsection{Learning from two sets of unlabeled data with different class priors}
Here, let us consider the problem of binary classification from two sets of unlabeled data, where the training data are given as follows~\citep{lu2018minimal,lu2020mitigating}:
\begin{align*}
X_\mathrm{U}&:= \{\boldsymbol{x}^\mathrm{U}_i\}_{i=1}^{n_\mathrm{U}} \stackrel{\mathrm{i.i.d.}}{\sim} \pi_\mathrm{U} p(\boldsymbol{x}|y=+1)+(1-\pi_\mathrm{U}) p(\boldsymbol{x}|y=-1) \text{,}\\
X_{\mathrm{U'}}&:= \{\boldsymbol{x}^\mathrm{U'}_j\}_{j=1}^{n_\mathrm{U'}}\stackrel{\mathrm{i.i.d.}}{\sim} \pi_\mathrm{U'} p(\boldsymbol{x}|y=+1)+(1-\pi_\mathrm{U'}) p(\boldsymbol{x}|y=-1) \text{,}
\end{align*}
where $\pi_\mathrm{U} > \pi_\mathrm{U'}$.
We can relate given training data of this problem to learning from corrupted labels by having $\pi_{\mathrm{\widetilde{P}}}=\pi_\mathrm{U}$ and $\pi_{\mathrm{\widetilde{N}}}=\pi_\mathrm{U'}$.
Therefore, BER and AUC optimization from two sets of unlabeled data with different class priors can also be carried out effectively with a symmetric loss without knowing class priors $\pi_\mathrm{U}$ and $\pi_\mathrm{U'}$.
It is interesting to see that although the data collection procedure of learning from corrupted labels and learning from two sets of unlabeled data are very different in practice, the assumptions of the data generating process can be highly related.
Concretely, with a symmetric loss $\ell_\mathrm{sym}$, BER minimization from two sets of unlabeled data can be done effectively by minimizing the following empirical risk:
\begin{align*}
\widehat{R}^{\ell_\mathrm{sym}}_{\mathrm{BER}\text{-}\mathrm{UU}}(g) = \frac{1}{2}\left[ \frac{1}{n_\mathrm{U}}\sum_{\boldsymbol{x} \in X_{\mathrm{U}}} \ell_\mathrm{sym}(g(\boldsymbol{x})) +
\frac{1}{n_\mathrm{U'}}\sum_{\boldsymbol{x} \in X_{\mathrm{U'}}} \ell_\mathrm{sym}(-g(\boldsymbol{x})) \right],
\end{align*}
and AUC maximization can be done effectively by minimizing the following empirical risk:
\begin{align*}
\widehat{R}^{\ell_\mathrm{sym}}_{\mathrm{AUC}\text{-}\mathrm{UU}}(g) = \frac{1}{n_{\mathrm{U}}\times n_{\mathrm{U'}} }\sum_{\boldsymbol{x} \in X_{\mathrm{U}}} \sum_{\boldsymbol{x}^\prime \in X_{\mathrm{U'}}} \ell_\mathrm{sym}(g(\boldsymbol{x})-g(\boldsymbol{x}^\prime))\text{.}
\end{align*}
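Continuing the sketch above, the UU objectives simply reuse the same functions, with $X_\mathrm{U}$ (the set with the larger class prior) placed on the pseudo-positive side:
\begin{verbatim}
# UU case: reuse ber_risk and auc_risk from the PU sketch above,
# treating X_U (larger class prior) as the pseudo-positive side.
g_u1 = np.random.randn(150) + 0.5   # hypothetical scores g(x) on X_U
g_u2 = np.random.randn(150) - 0.5   # hypothetical scores g(x) on X_U'
print(ber_risk(g_u1, g_u2), auc_risk(g_u1, g_u2))
\end{verbatim}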
\section{A Symmetric Loss Approach to Learning Only from Relevant Keywords and Unlabeled Documents}\label{sec:application}
In this section, we demonstrate how to apply the robustness result of symmetric losses to tackle a weakly-supervised natural language processing task, namely learning only from relevant keywords and unlabeled documents~\citep{charoenphakdee2019learning}.
\subsection{Background}
To reduce the labeling costs, weakly-supervised text classification has been studied extensively in various settings, e.g., positive and unlabeled text classification~\citep{li2003learning,li2005learning}, zero-shot text classification~\citep{zhang2019integrating}, cross-lingual text classification~\citep{dong2019robust}, and dataless classification~\citep{chang2008importance,song2014dataless,chen2015dataless,jin2017combining,jin2020learning,li2018pseudo}.
Our target problem can be categorized as a variant of dataless classification, where we are given a set of $n_\mathrm{W}$ relevant keywords:
\begin{align*}
W:= \{w_j\}_{j=1}^{n_\mathrm{W}},
\end{align*}
which are keywords that provide characteristics of positive documents.
Also, unlabeled documents drawn from the following distribution are provided:
\begin{align*}
X_{\mathrm{U}}:= \{\boldsymbol{x}^\mathrm{U}_i\}_{i=1}^{n_\mathrm{U}}\stackrel{\mathrm{i.i.d.}}{\sim} p_{\pi_\mathrm{U}}(\boldsymbol{x}),
\end{align*}
where, for $\pi_\mathrm{U} \in (0,1)$,
\begin{align*}
p_{\pi_\mathrm{U}}(\boldsymbol{x}) = \pi_\mathrm{U} p(\boldsymbol{x}|y=+1) +(1-\pi_\mathrm{U})\, p(\boldsymbol{x}|y=-1).
\end{align*}
Note that unlike ordinary dataless classification, where we need keywords for every class~\citep{chang2008importance,song2014dataless,chen2015dataless,li2018pseudo}, in this problem, only keywords for the positive class are provided.
Therefore, this problem setting could be more practical in a situation where negative documents are too diverse to collect representative keywords for the negative class~\citep{hsieh2019classification}.
It is worth noting that our problem is also called \emph{lightly-supervised learning}~\citep{jin2017combining}, where the supervision comes from the relevant keywords.
To solve this problem, \citet{jin2017combining} proposed to use a method based on ensemble learning.
The bottleneck of the method proposed by~\citet{jin2017combining} is its lack of flexibility in model choices and optimization algorithms.
This makes it difficult to bring many useful models and techniques from a more well-studied problem such as supervised text classification to help solve this problem.
Moreover, the theoretical understanding of this problem was limited.
To alleviate these limitations, \citet{charoenphakdee2019learning} proposed a theoretically justifiable framework that allows practitioners to choose their preferred models to maximize the performance, e.g., convolutional neural networks~\citep{zhang2015character} or recurrent neural networks~\citep{lai2015recurrent}.
Moreover, this framework does not limit the choice of optimization methods.
One may use any optimization algorithm for their model, e.g., Adam~\citep{adam}.
In learning only from relevant keywords and unlabeled documents, the choice of evaluation metrics depends on the desired behavior of the prediction function $g$ we want to learn.
For example, AUC is appropriate if the goal is simply to learn a bipartite ranking function to rank a positive document over a negative document.
On the other hand, if the goal is document classification, one may use CER or the $\mathrm{F}_{1}$-measure, i.e., the harmonic mean of precision and recall, which has been widely used in text classification~\citep{li2005learning,jin2017combining,jin2020learning,lertvittayakumjorn2019human,lertvittayakumjorn2020find,mekala2020contextualized,he2020towards}.
\subsection{A flexible framework for learning only from relevant keywords and unlabeled documents}
Here, we discuss a flexible framework for learning only from relevant keywords and unlabeled documents proposed by~\citet{charoenphakdee2019learning}.
Figure~\ref{fig:ete-network} illustrates an overview of the framework.
First, pseudo-labeling is carried out to split unlabeled documents into two sets.
Next, by using pseudo-labeled documents, AUC maximization is conducted, where the choice of the surrogate loss is a symmetric loss.
Finally, after obtaining a bipartite ranking function by AUC maximization, a threshold selection procedure is performed to convert the ranking function to a binary classifier.
\begin{figure}[t]
\vspace{0.1in}
\begin{center}
\centerline{\includegraphics[width = \textwidth]{img/baking3.png}}
\caption{An overview of the framework for learning only from relevant keywords and unlabeled documents~\citep{charoenphakdee2019learning}.
Blue documents indicate positive documents and red documents denote negative documents in the two sets of documents divided by a pseudo-labeling algorithm.
Note that clean labels are not observed by the framework.}
\label{fig:ete-network}
\end{center}
\vspace{-0.35in}
\end{figure}
\subsubsection{Pseudo-labeling: from learning with keywords and unlabeled documents to learning from corrupted labels}\label{sec:pseudo-label}
The first step is to utilize relevant keywords to perform pseudo-labeling on unlabeled documents.
Concretely, given relevant keywords $W$ and unlabeled documents~$X_\mathrm{U}$, the pseudo-labeling algorithm $A(W,X_\mathrm{U})$ splits $X_\mathrm{U}$ into $X_\mathrm{\widetilde{P}}$ and $X_\mathrm{\widetilde{N}}$.
The key idea of this step is to use pseudo-labeling to bridge learning only from relevant keywords and unlabeled documents to learning from corrupted labels.
More precisely, we assume that the pseudo-labeling algorithm $A(W,X_\mathrm{U})$ returns the following data:
\begin{align*}
X_\mathrm{\widetilde{P}}&:= \{\boldsymbol{x}^\mathrm{\widetilde{P}}_i\}_{i=1}^{n_\mathrm{\widetilde{P}}} \stackrel{\mathrm{i.i.d.}}{\sim} p_{\pi_{\mathrm{\widetilde{P}}}}(\boldsymbol{x}) \text{,}\\
X_{\mathrm{\widetilde{N}}}&:= \{\boldsymbol{x}^\mathrm{\widetilde{N}}_j\}_{j=1}^{n_\mathrm{\widetilde{N}}}\stackrel{\mathrm{i.i.d.}}{\sim} p_{\pi_{\mathrm{\widetilde{N}}}}(\boldsymbol{x}) \text{,}
\end{align*}
where the assumption of the data generating process is identical to that of the setting of learning from corrupted labels (see Section~\ref{sec:learncor}).
It is important to note that the pseudo-labeling algorithm employed here is not expected to split the documents perfectly into clean positive and negative documents.
For the choice of the pseudo-labeling algorithm, \citet{charoenphakdee2019learning} simply used a cosine similarity score between keywords and documents and compared the score with a pre-defined threshold to split unlabeled documents into two sets.
To further improve the pseudo-labeling accuracy, one may utilize domain-specific knowledge or a keyword mining method to collect more relevant keywords.
Examples of such keyword mining methods are Textrank~\citep{mihalcea2004textrank} and Topicrank~\citep{bougouin2013topicrank}.
Moreover, one may also incorporate an unsupervised learning method~\citep{ko2000automatic, ko2009text} or apply a pre-trained model such as BERT~\citep{devlin2018bert}.
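As an illustration of this step, the following Python sketch implements cosine-similarity pseudo-labeling over bag-of-words vectors; the threshold \texttt{tau} is a hypothetical pre-defined constant, and the document representation is only one possible choice:
\begin{verbatim}
import numpy as np

def pseudo_label(doc_vecs, keyword_vec, tau=0.1):
    """Split unlabeled documents by cosine similarity to the keywords.

    doc_vecs:    (n_docs, vocab) term-frequency vectors of unlabeled docs
    keyword_vec: (vocab,) frequency vector built from the keyword set W
    tau:         hypothetical pre-defined similarity threshold
    """
    sims = (doc_vecs @ keyword_vec) / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(keyword_vec)
        + 1e-12)                        # guard against zero-norm documents
    pseudo_pos = doc_vecs[sims >= tau]  # plays the role of X_tilde_P
    pseudo_neg = doc_vecs[sims < tau]   # plays the role of X_tilde_N
    return pseudo_pos, pseudo_neg
\end{verbatim}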
\subsubsection{AUC maximization from pseudo-labeled documents: obtaining a reliable ranking function from unreliable data}
After pseudo-labeling, we can conduct AUC maximization using a symmetric loss to learn a good ranking function $g$ from pseudo-labeled documents.
Recall that, with any symmetric loss, the AUC risk minimizers of the corrupted risk and the clean risk are identical, as the following equation suggests:
\begin{align}
\label{eq:same-minimizer}
R^{\ell_\mathrm{sym}}_{\mathrm{\widetilde{AUC}}}(g) = (\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}})R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}(g) + K\left(\frac{1 -\pi_{\mathrm{\widetilde{P}}} + \pi_{\mathrm{\widetilde{N}}}}{2}\right) \text{.}
\end{align}
Eq.~\eqref{eq:same-minimizer} indicates that as long as the pseudo-labeling algorithm succeeds in splitting the documents into two sets such that $\pi_{\mathrm{\widetilde{P}}} > \pi_{\mathrm{\widetilde{N}}}$, we can always guarantee that~$g$ can be effectively learned from pseudo-labeled documents.
More precisely, the minimizers of the risk w.r.t. the pseudo-labeled document distribution and the clean document distribution are identical.
However, since any pseudo-labeling algorithm that gives $\pi_{\mathrm{\widetilde{P}}} > \pi_{\mathrm{\widetilde{N}}}$ guarantees that a good ranking function can be learned, one important question is: how does the quality of the pseudo-labeling method impact the performance of the trained prediction function $g$ in this framework?
Intuitively, a good pseudo-labeling algorithm should give a high proportion of positive documents in the pseudo-positive set and a high proportion of negative documents in the pseudo-negative set.
Mathematically, a good algorithm should return two sets of documents with a large $\pi_{\mathrm{\widetilde{P}}}$ and a small $\pi_{\mathrm{\widetilde{N}}}$, that is, $\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}$ is large.
To elucidate the usefulness of a good pseudo-labeling algorithm, it is insightful to analyze an estimation error bound of AUC maximization from corrupted labels.
Let~$\hat{g} \in \mathcal{G}$ be a minimizer of the empirical corrupted AUC risk $\widehat{R}^{\ell_\mathrm{sym}}_{\mathrm{\widetilde{AUC}}}$ in the hypothesis class $\mathcal{G}$ and $g^* \in \mathcal{G}$ be a minimizer of the clean AUC risk $R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}$.
Then, the following theorem can be obtained.
\begin{theorem}[Estimation error bound~\citep{charoenphakdee2019learning}]\label{est-bound}
Let $\mathcal{Q}_\mathcal{G}^{\ell_\mathrm{sym}}$ be a class of functions mapping $\mathcal{X}^2$ to $[0,K]$ such that $\mathcal{Q}_\mathcal{G}^{\ell_\mathrm{sym}}= \{Q: (\boldsymbol{x}, \boldsymbol{x}') \to \ell_\mathrm{sym}(g(\boldsymbol{x})-g(\boldsymbol{x}')), g \in \mathcal{G}\}$.
$\widetilde{\mathfrak{R}}_{n_\mathrm{\widetilde{P}}, n_\mathrm{\widetilde{N}}}(\mathcal{Q}_\mathcal{G}^{\ell_\mathrm{sym}})$ denotes the bipartite Rademacher complexity of the function class $\mathcal{Q}_\mathcal{G}^{\ell_\mathrm{sym}}$ (see \citet{usunier2005data} for more details).
For all $Q \in \mathcal{Q}_\mathcal{G}^{\ell_\mathrm{sym}}$ and $\delta \in (0,1)$, with probability at least $1-\delta$, we have
\begin{align*}
R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}(\hat{g})-R^{\ell_\mathrm{sym}}_{\mathrm{AUC}}(g^*) \leq \frac{1}{\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}} \bigg[ K \sqrt{\frac{2(n_\mathrm{\widetilde{P}}+n_\mathrm{\widetilde{N}})\log\frac{1}{\delta}}{n_\mathrm{\widetilde{P}}n_\mathrm{\widetilde{N}}}} + 4\widetilde{\mathfrak{R}}_{n_\mathrm{\widetilde{P}}, n_\mathrm{\widetilde{N}}}(\mathcal{Q}_\mathcal{G}^{\ell_\mathrm{sym}})\bigg],
\end{align*}
where the probability is over repeated sampling of $X_\mathrm{\widetilde{P}}$ and $X_\mathrm{\widetilde{N}}$.
\end{theorem}
This theorem explains how the degree of corruption $\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}$ affects the tightness of the bound and therefore the speed of convergence.
When $\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}$ is small, i.e., $\pi_{\mathrm{\widetilde{P}}}$ and $\pi_{\mathrm{\widetilde{N}}}$ are similar, the bound becomes loose.
This illustrates the difficulty of AUC maximization when the pseudo-labeling algorithm performs poorly, in which case a large amount of data may be needed.
On the other hand, a good pseudo-labeling that gives a large $\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}$ can give a smaller constant $\frac{1}{\pi_{\mathrm{\widetilde{P}}}-\pi_{\mathrm{\widetilde{N}}}}$, which can lead to a tighter bound.
Nevertheless, it is noteworthy that as long as $\pi_{\mathrm{\widetilde{P}}} > \pi_{\mathrm{\widetilde{N}}}$, with all parametric models having a bounded norm such as neural networks with weight decay or kernel models, this learning framework is \textit{statistically consistent}, i.e., the estimation error converges to zero as $n_\mathrm{\widetilde{P}},n_\mathrm{\widetilde{N}} \to \infty$.
\par
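To illustrate numerically how the degree of corruption enters the bound, the following sketch evaluates the first term of Theorem~\ref{est-bound} (ignoring the Rademacher term) with $K=1$:
\begin{verbatim}
import numpy as np

# First term of the bound in Theorem (est-bound); Rademacher term ignored.
def bound_term(gap, n_p, n_n, delta=0.05, K=1.0):
    return (K / gap) * np.sqrt(
        2 * (n_p + n_n) * np.log(1 / delta) / (n_p * n_n))

for gap in (0.8, 0.4, 0.1):   # degree of corruption pi_P~ - pi_N~
    print(gap, bound_term(gap, n_p=5000, n_n=5000))
# Halving the gap doubles the term; gap 0.1 inflates it eightfold vs 0.8.
\end{verbatim}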
\subsubsection{Threshold selection: from a ranking function to a binary classifier}
After obtaining a good ranking function $g$, an important question is how to convert the ranking function to a binary classifier.
Here, we discuss how to decide the classification threshold for optimizing evaluation metrics such as the $\mathrm{F}_{1}$-measure.
It is known that many evaluation metrics can be optimized if a suitable threshold and $p(y=+1|\boldsymbol{x})$ are known~\citep{yan2018binary}.
For example, $\mathrm{sign}[p(y=+1|\boldsymbol{x})-\frac{1}{2}]$ is the Bayes-optimal solution for the classification accuracy, where $\frac{1}{2}$ is the threshold.
Moreover, it has been proven that the Bayes-optimal solution of AUC maximization with an AUC-consistent surrogate loss is any function that has a strictly monotonic relationship with $p(y=+1|\boldsymbol{x})$~\citep{clemenccon2009tree,gao2015consistency,menon2016bipartite}.
Therefore, finding an appropriate threshold to convert a bipartite ranking function to a binary classifier can give a reasonable classifier~\citep{narasimhan2013relationship}.
In learning from relevant keywords and unlabeled documents, no information about a proper threshold can be obtained from the training data since all given data are unlabeled.
For this reason, we may not be able to draw an optimal threshold for optimizing the accuracy and $\mathrm{F}_{1}$-measure without additional assumptions.
On the other hand, as shown in Section~\ref{sec:aucmax}, a ranking function can be learned reliably with a theoretical guarantee based on AUC maximization.
This is the main reason why \citet{charoenphakdee2019learning} proposed to first learn a reliable ranking function instead of directly learning a binary classifier in this problem.
Suppose the class prior $p(y=+1)$ of the unlabeled documents is known; then a reasonable threshold~$\beta \in \mathbb{R}$ can be obtained from the following equation:
\begin{align}\label{eq:thr}
p(y=+1) = \int \mathrm{sign}(g(\boldsymbol{x})-\beta) p_{\pi_\mathrm{U}}(\boldsymbol{x}) \mathrm{d}\boldsymbol{x}.
\end{align}
\begin{table}[t]
\centering
\caption{Mean value and standard error of 20 trials with different thresholds on the Subjectivity dataset ($\mathsf{Subj}$)~\citep{Pang+Lee:04a} and the 20 Newsgroups dataset ($\mathsf{20NG}$)~\citep{Lang95} with the \emph{baseball} and \emph{hockey} groups as positive.
ACC denotes the classification accuracy and $\mathrm{F}_{1}$ denotes the $\mathrm{F}_{1}$-measure.
``Sigmoid'' is the framework using the sigmoid loss~\citep{charoenphakdee2019learning}, which employs a recurrent convolutional neural network (RCNN) model~\citep{lai2015recurrent} with a two-layer long short-term memory (LSTM)~\citep{hochreiter1997long}.
The ``heuristic threshold'' is the ratio between pseudo-positive documents over unlabeled documents
and the ``default threshold'' is a baseline choice (see~\citet{charoenphakdee2019learning} for details).
It can be seen that given the same ranking function, the classification performance can drastically change depending on the choice of the threshold.} \label{tab:failure}
\resizebox{\columnwidth}{!}{\begin{tabular} { |c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Method}& \multirow{2}{*}{Dataset}& \multirow{2}{*}{AUC}& \multicolumn{2}{c|}{Default threshold} & \multicolumn{2}{c|}{Heuristic threshold} & \multicolumn{2}{c|}{True threshold}\\
& & & $\mathrm{F}_{1}$ & ACC & $\mathrm{F}_{1}$ & ACC & $\mathrm{F}_{1}$ & ACC \\
\hline
\multirow{2}{*}{Sigmoid}& $\mathsf{Subj}$ & 88.1 (0.35) & 69.0 (1.75) & 71.4 (1.33) & 80.1 (0.35) & 80.2 (0.35) & 80.1 (0.38) & 80.1 (0.38) \\
& $\mathsf{20NG}$ & 96.4 (0.12) & 59.0 (0.92) & 68.4 (1.19) & 83.8 (0.24) & 95.1 (0.06) & 90.8 (0.20) & 96.5 (0.08) \\
\hline
\multirow{2}{*}{Maxent}& $\mathsf{Subj}$ & 84.6 (0.20) & 63.4 (0.31) & 66.9 (0.23) & 76.8 (0.21) & 76.8 (0.21) & 76.3 (0.24) & 76.3 (0.24) \\
& $\mathsf{20NG}$ & 57.6 (0.32) & 47.4 (0.05) & 89.2 (0.02) & 51.8 (0.22) & 85.1 (0.09) & 52.4 (0.25) & 81.6 (0.11) \\
\hline
\multirow{2}{*}{RandomForest}& $\mathsf{Subj}$ & 82.4 (0.27) & 33.3 (0.00) & 50.0 (0.00) & 54.8 (0.13) & 60.9 (0.09) & 75.1 (0.27) & 75.1 (0.27) \\
& $\mathsf{20NG}$ & 96.8 (0.16) & 47.2 (0.00) & 89.4 (0.00) & 83.2 (0.32) & 95.0 (0.08) & 89.6 (0.28) & 96.1 (0.10) \\
\hline
\end{tabular}}
\end{table}
Intuitively, the threshold $\beta$ allows $g$ to classify the top-ranked proportion of unlabeled documents w.r.t. $p(y=+1|\boldsymbol{x})$ as positive, and the rest as negative.
With unlabeled documents and the known proportion of positive documents, one can decide $\beta$ that satisfies the empirical version of Eq.~\eqref{eq:thr}.
Concretely, given unlabeled documents $\mathcal{X}_{\mathrm{val}}$ for validation, the threshold can be decided by finding $\beta$ such that
\begin{align*}
p(y=+1) \approx \frac{1}{|\mathcal{X}_{\mathrm{val}}|}\sum_{\boldsymbol{x} \in \mathcal{X}_{\mathrm{val}}} \mathrm{sign}(g(\boldsymbol{x})-\beta).
\end{align*}
This threshold is known as the precision-recall breakeven point, i.e., the point at which the precision equals the recall (see~\citet{kato2018learning} for a proof).
Therefore, this choice is arguably a reasonable threshold for the $\mathrm{F}_{1}$-measure, since this evaluation metric is the harmonic mean of precision and recall.
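In code, this amounts to taking the $(1-p(y=+1))$-quantile of the validation scores, so that the fraction of documents predicted positive matches the class prior; a minimal sketch, assuming the prior is given:
\begin{verbatim}
import numpy as np

def select_threshold(val_scores, class_prior):
    """Pick beta so that the fraction of validation documents predicted
    positive matches the known class prior p(y=+1), i.e. the empirical
    precision-recall breakeven point."""
    return np.quantile(val_scores, 1.0 - class_prior)

val_scores = np.random.randn(1000)        # hypothetical g(x) on X_val
beta = select_threshold(val_scores, 0.3)  # assumed prior p(y=+1) = 0.3
preds = np.where(val_scores > beta, 1, -1)
print(beta, np.mean(preds == 1))          # ~0.3 predicted positive
\end{verbatim}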
In practice, it may not be possible to know the proportion $p(y=+1)$, yet we still want a classifier.
Without knowing the proportion of positive documents, it is likely that we learn a wrong threshold, which leads to low performance.
For example, as shown in Table~\ref{tab:failure}, the performance degrades dramatically with a wrong choice of the threshold.
More details on a heuristic for threshold selection and the performance w.r.t. different thresholds can be found in~\citet{charoenphakdee2019learning}.
It is also worth noting that if we cannot guarantee that $\pi_\mathrm{U}\approx p(y=+1)$, i.e., the proportion of positive documents in the unlabeled training documents is similar to that of the test data, then using a metric such as the $\mathrm{F}_{1}$-measure or classification accuracy is highly biased towards the training distribution~\citep{scott2012calibrated,narasimhan2014statistical}.
This problem is known as the class prior shift phenomenon and it might occur in real-world applications~\citep{saerens2002adjusting, datashift,tasche2017fisher}.
For example, by collecting unlabeled documents from the internet, the proportion of positive documents can be different from that of the test distribution where we want to deploy the trained classifier.
Note that the class prior shift phenomenon can dramatically degrade the performance of a classifier~\citep{saerens2002adjusting, datashift, charoenphakdee2019positive}.
Therefore, if it is impossible to estimate the class prior $p(y=+1)$ or the test environment is susceptible to the class prior shift phenomenon, we suggest using other evaluation metrics such as BER or the AUC score.
\section{Conclusions}
\label{sec:discussion}
In this article, we have reviewed recent advances in reliable machine learning from a symmetric loss perspective.
We showed in Section~\ref{sec:symadvantage} that if a symmetric loss is used for BER and AUC optimization from corrupted labels, the corrupted and clean risks have the same minimizer regardless of the model.
Furthermore, we demonstrated in Section~\ref{sec:application} that the theoretical advantage of symmetric losses is practically also valuable in learning only from relevant keywords and unlabeled documents.
In this section, we conclude this article by discussing two future directions for the symmetric loss perspective of reliable machine learning.
The first direction is exploring more applications of symmetric losses for reliable machine learning.
Here, we provide two examples of this direction.
First, it has been recently shown that using a symmetric loss can also be beneficial in imitation learning from noisy demonstrations, where the goal is to teach an agent to imitate expert demonstrations although the training data contain both expert and non-expert demonstrations.
\citet{tangkaratt2020robust} showed that imitation learning with a symmetric loss can enable an agent to successfully imitate expert demonstrations without requiring a strong noise assumption, such as Gaussian noise~\citep{tangkaratt2020variational}, or additional confidence scores for the given demonstrations~\citep{wu2019imitation,brown2019extrapolating,brown2020safe}.
Another example is to use a symmetric loss in classification with rejection, where a classifier is allowed to refrain from making a prediction if the prediction is uncertain~\citep{chow1970,yuan2010,cortes2016learning,ni2019calibration,mozannar2020consistent}.
Although well-known symmetric losses have favorable properties such as classification-calibration and AUC-consistency, it is important to note that learning with a symmetric loss cannot guarantee to give a classifier with reliable prediction confidence~\citep{charoenphakdee2019symmetric}.
Recently, \citet{charoenphakdee2020classification} proposed an approach based on cost-sensitive classification, which enables any classification-calibrated loss including symmetric losses to be applied for solving classification with rejection.
In the experiments, the sigmoid loss was shown to be highly preferable in classification with rejection from noisy labels and classification with rejection from positive and unlabeled data.
These examples emphasize the potential of symmetric losses for reliable machine learning in addition to what we have introduced in Sections~\ref{sec:symadvantage} and \ref{sec:application}.
Therefore, it could be useful to explore the use of symmetric losses for a wider range of problems, e.g., domain adaptation~\citep{sugiyama2012machine,ben2012minimizing,kuroki2019unsupervised,lee2019domain,redko2020survey}, open-set classification~\citep{saito2018open,ruff2020unifying,fang2020open,geng2020recent}, and learning from aggregate observations~\citep{maron1997framework,hsu2019multi,cui2020classification,zhang2020learning}.
Although symmetric losses are useful in learning from corrupted labels, using them can sometimes lead to undesirable performance because training with a symmetric loss can be computationally hard~\citep{ghosh2017robust,wang2019symmetric,ma2020normalized}.
Thus, it is interesting to explore non-symmetric losses that can benefit from the symmetric condition.
This is the second future direction we discuss in this section.
Here, we provide two examples to demonstrate the potential of this research direction.
The first example is motivated by the fact that a nonnegative symmetric loss must be non-convex~\citep{du2014}.
To explore a robust convex loss,~\citet{charoenphakdee2019symmetric} proposed the barrier hinge loss, which is a convex loss that satisfies a symmetric condition on a subset of the domain, but not everywhere.
The barrier hinge loss was shown to be highly robust in BER and AUC optimization although it is not symmetric.
This suggests that one can design a non-symmetric loss that benefits from the symmetric condition.
Another example is an approach to combine a symmetric loss with a non-symmetric loss.
Recently,~\citet{wang2019symmetric} proposed the reverse cross-entropy loss, which is a symmetric loss.
Then, they proposed to combine the reverse cross-entropy loss with the ordinary cross-entropy loss by linear combination.
Their experiments showed that the classification performance of the combined loss can be better than using only the reverse cross-entropy loss or other symmetric losses such as the mean absolute error loss~\citep{ghosh2017robust}.
Based on these examples, we can see that it could be beneficial to design a loss function that enjoys the advantages of both symmetric and non-symmetric losses.
\section*{Acknowledgement}
We would like to thank our collaborators: Dittaya Wanvarie, Yiping Jin, Zhenghang Cui, Yivan Zhang, and Voot Tangkaratt.
NC was supported by MEXT scholarship and Google PhD Fellowship program.
MS was supported by JST CREST Grant Number JPMJCR18A2.
\section{Introduction}
Recommender systems (RS) have been applied successfully in many domains to help users find interesting items and avoid the problem of information overload. Examples of RS include: Amazon, where customers can use recommendations from the system to find desirable products more easily; Netflix, where the system helps users find interesting movies to watch; and Pandora and Spotify, which help users navigate vast catalogs of content to listen to their favorite artists.
{\let\thefootnote\relax\footnote{Included in the 2017 Workshop on Value-Aware and Multistakeholder Recommendation held in conjunction with RecSys 2017.}}
For the most part, the research on RS has been focused on situations where the system has no preference about which product owners the content shown to the user originates from, just as long as the user is happy with the current recommendation. While this may be acceptable in some applications, there are examples where the recommendations from different stakeholders (e.g. product owners) produce different utilities both to the system and users. For example, if we consider advertisers as stakeholders who want their ads to be shown to the users along with the traditional recommended products, then it is obvious that recommending ads gives different utility to the system than recommending products. The system may make more money from showing a paid ad rather than recommending consumable content. However, from the users' perspective the situation may be different. Users do not really care about ads as much as they want to see their personalized recommendations.
A music recommendation business is an example of managing several different stakeholders that are involved in delivering content. Each stakeholder may have their own incentive to get their content in front of the end user and they may compete or collaborate with each other depending upon the context. In this paper, we discuss how a music recommender service may be viewed as a multi-stakeholder environment where each stakeholder has their own utility function and goal. These stakeholders could include various artists, advertisers or other content providers.
\section{Stakeholders in Music Recommender Systems}
Pandora is one of the world's largest music recommendation companies in terms of active users and songs recommended. The platform consists of a variety of different stakeholders working together to provide users with the best experience, while simultaneously keeping the business alive by generating revenue. Some stakeholders are more user-experience focused, so their utility function is closer to what the end user actually wants. Other stakeholders have utility functions which are more revenue-focused and business-oriented. For example, the stakeholder responsible for recommending songs (i.e. the music recommender) is focused on delivering the best possible song at the right time to the right end user. Therefore, its objective is closely related to what end users want from the system. On the other hand, the stakeholder that is responsible for advertising, the \textit{ad service}, is mainly interested in delivering the right ad to the right user at the right time. Generally, users have less interest in ads than in songs. Therefore, we can say that the ad service stakeholder's utility is oriented mainly towards revenue, which is not a primary consideration of the music recommender.
\begin{figure*}[t]
\centering
\includegraphics[width=5in]{figure1.png}
\caption{The architecture of a music recommendation platform consisting of multiple stakeholders.}
\label{fig:ms}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=5in]{figure2.png}
\caption{A sequence of various content served to the user. The recommender system should decide what to play for the current slot, indicated by the rightmost box.}
\label{fig:sequence}
\end{figure*}
Figure~\ref{fig:ms} shows the architecture of the recommendation platform consisting of different stakeholders. While there may be more fine-grained stakeholders, we only show four of them for simplicity. For example, the \textit{music recommender} stakeholder is responsible for coming up with the best possible song to play for the user. As we mentioned earlier, this stakeholder is mainly concerned about the needs and interests of the end user. On the other hand, the \textit{ad service} is responsible for finding the best time to interrupt the song recommendation to show an ad to the user. In fact, this stakeholder does this calculation based on many different factors, such as the contract rules the ad service has with the advertisers, the company's financial agenda, etc. There are also a couple of other stakeholders in the platform, such as \textit{ticketing} and \textit{artist messages}. The ticketing works similarly to the ad service, but instead of ads, there are tickets for which the system tries to find customers. It is important to note that the business rules and utility function for the ticketing stakeholder might be completely different from those of the ad service, even though they both do a similar task of promoting secondary content (i.e. content other than music). The artist messages are recorded audio messages from artists containing information regarding an upcoming concert, album or merchandise. Finally, there is the \textit{content service} that sits on top of the individual stakeholders. This is responsible for managing the different stakeholders and for making the final decision about which content to show to a particular user at a certain time.
In this paper, our focus is mainly on sequential music recommendation where content is recommended one after another. Figure ~\ref{fig:sequence} shows a sequence of different content played to the user in the current session. The sequence contains a variety of content such as songs, ads, artist messages and tickets. Given what has been served to the user thus far, the RS should decide the best content to play for the next slot in the user's session. This slot is shown by the box containing a question mark on the rightmost side of the diagram. All stakeholders try to push their content through the system so that they could be shown to the user. However, since this is a sequential recommendation platform where only a single piece of content can be served at a given time, the content service manager decides which content from one of the stakeholders can occupy this spot.
Certain types of content are more beneficial to the system than others and this is a factor taken into account in the final decision. For example, as shown in figure ~\ref{fig:sequence}, ads, artist messages and tickets bring money to the system, while songs cost the system money (i.e. there is a royalty per each song played). The balance between the long-term and short-term goals of the system must be considered. Greedy decisions such as playing content based purely on maximizing revenue, while ignoring the end user needs will hurt the system in the long term (i.e. users may not return to the system resulting in fewer users and fewer opportunities to make money via ads). On the contrary, playing only songs will not result in any revenue and the system would not be able to continue service. Therefore, both short-term (make users happy) and long-term goals (run a healthy business) should be considered when deciding what content to play for a user at any given time.
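As a purely hypothetical illustration (not the platform's actual logic), the content service's decision can be sketched as scoring each stakeholder's candidate by a weighted mix of estimated user utility and system revenue, where the weight \texttt{alpha} trades off long-term user happiness against short-term revenue:
\begin{verbatim}
# Illustrative sketch only; all stakeholders, utilities and the
# weighting knob alpha are hypothetical values, not real system data.
CANDIDATES = [
    # (stakeholder, user_utility, system_revenue)
    ("music",          0.9, -0.01),   # songs cost royalties
    ("ad_service",     0.2,  0.50),
    ("ticketing",      0.4,  0.30),
    ("artist_message", 0.5,  0.10),
]

def pick_next_content(candidates, alpha=0.7):
    """alpha weights user utility against revenue."""
    return max(candidates, key=lambda c: alpha * c[1] + (1 - alpha) * c[2])

print(pick_next_content(CANDIDATES))   # music wins for alpha = 0.7
\end{verbatim}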
Different artists could also be viewed as stakeholders in terms of what they want from the system. Artists may have differing goals in putting their songs on a music recommendation platform such as Pandora. For example, some artists might be interested in targeting a certain age-group in certain locations at certain times. Perhaps they have an upcoming concert and want to raise awareness among their targeted audience. In addition, the system might want to promote songs from certain artists to give them a boost in the number of listeners and help them achieve more popularity. However, it is important to take fairness factors into account in promoting artists so that the system does not over-promote some artists at the price of ignoring others.
\section{Related work}
The concept of multiple stakeholders has been discussed in some prior research. For example, in reciprocal recommendation, a successful recommendation is not one that is acceptable only to the receiver of the recommendation. In fact, the needs and interests of the other party of the recommendation (i.e. the person who is being recommended) are also important for having a successful match \cite{reciprocal}. Researchers working on multi-objective optimization have also investigated the problem of having different objectives and constraints when generating recommendations. These objectives are often different evaluation metrics such as accuracy, popularity bias, novelty and diversity \cite{jugovac2017efficient, biasRecSys2017}, or they are some sort of constraint imposing certain limitations on the solution \cite{jambor2010optimizing}. The most explicit definition of having multiple stakeholders in a recommendation setting has been discussed in \cite{umapHimanMS}, where the authors re-defined recommender systems as multi-stakeholder environments in which multiple stakeholders are involved in different stages of the recommendation generation and each benefits from the recommendation delivery in various ways. For example, in~\cite{DBLP:conf/um/BurkeAMG16}, the authors discuss a scenario which seeks 'fairness' across multiple product suppliers. Moreover, in \cite{vamsBurkeHiman} the authors identify patterns of stakeholder utility that characterize different multi-stakeholder recommendation applications, and provide a taxonomy of the different possible systems. There are many other domains in which multiple stakeholders are involved in the recommendation process. For example, online dating \cite{reciprocal}, educational recommenders \cite{burke2016educational} and loan recommendation in micro-finance platforms \cite{lee2014fairness} are all instances of recommendations where different stakeholders are involved in receiving or delivering the recommendations.
\begin{comment}
\section{Our goal in participating the workshop}
As one of the largest companies in the music recommendation domain, we think the value-Aware and Multi-stakeholder recommender systems could be a great venue to discuss the latest findings in this trending area of research and intend to have fruitful discussions during the workshop regarding potential solutions for similar problems we are facing in our company and also collaboration with other researchers participating in the workshop.
\end{comment}
\section{Introduction}
Tungsten (W) is one of the most important constituents of tokamak reactor walls \cite{putt}. Additionally, it radiates strongly over almost all ionisation stages. For example, the most intense emission lines of W ions \cite{putt} are from W~XXII to W~L in the VUV to the soft x-ray region, covering an
electron temperature range from about 0.5 to 5.0 keV. Similarly, P{\"u}tterich et al.~\cite{putt} have predicted emission features from W~LXI to W~LXIX in the
0.1--0.15 nm, 1.8--4.0 nm and around 8 nm ranges. However, to assess radiation loss and for modelling plasmas, atomic data (including energy levels and oscillator strengths or radiative decay rates) are required for many of the W ions. Their need for atomic data for several ions, including those of W, has increased significantly due to the developing ITER project. Therefore, several groups of people are actively engaged in producing atomic data.
Early calculations for a number of W ions (W~XXXVIII to W~XLVIII) were performed by Fournier \cite{kbf}. He adopted a relativistic atomic structure code, but reported only limited results for energy levels and oscillator strengths ($f$-values). A thorough critical compilation of experimental, theoretical and analytical energy levels of W ions (W~III through W~LXXIV) has been undertaken by Kramida and Shirai \cite{ks1} and has been further reviewed by Kramida \cite{ks2}. These energy levels, along with some spectral lines, are also available on the NIST (National Institute of Standards and Technology) website at {\tt http://www.nist.gov/pml/data/asd.cfm}. Recently, spectra in the EUV wavelength range (4--20 nm) have been measured by Ralchenko et al. \cite{yuri}, for a number of W ions, namely W~LV to W~LXIV. Similarly, Clementson et al. \cite{ll1} have discussed spectroscopy of many W ions (W~XLVII to W~LXXII). On the other side, calculations have been performed for several W ions, such as by Quinet \cite{pq} for W~XLVIII to W~LXII. Although he adopted the {\sc grasp} code for the calculations, his reported results for energy levels and radiative rates ($A$-values) are confined to forbidden lines within the 3p$^k$ and 3d$^k$ configurations. However, for the modelling of plasmas, atomic data among a wider range of levels/transitions are preferred. Therefore, we have already reported such data for two W ions, namely W~XL \cite{w40a,w40b} and W~LVIII \cite{w58a,w58b}. In this paper, we extend our work to eight other W ions, S-like (W~LIX) to F-like (W~LXVI).
As in our earlier research \cite{w40a,w40b,w58a,w58b} and those of others \cite{pq,ajm}, we have adopted the fully relativistic multi-configuration Dirac-Fock (MCDF) atomic structure code \cite{grasp0}, better known as the general-purpose relativistic atomic structure package ({\sc grasp}) \cite{grasp}. This code is based on the $jj$ coupling scheme, includes higher-order relativistic corrections arising from the Breit interaction and QED (quantum electrodynamics) effects, and is suitable for the heavy ions considered here. However, this original version \cite{grasp0} has undergone several revisions, such as by \cite{grasp,grasp2k, grasp2kk}, and the one employed here (and by many other workers) has been revised by Dr. P. H. Norrington, and is freely available at {\tt http://web.am.qub.ac.uk/DARC/}.
\section{Energy levels}
Extensive configuration interaction (CI) has been incorporated in GRASP, as described below for each ion, and for the optimisation of the orbitals the option of `extended average level' (EAL), in which a weighted (proportional to 2$j$+1) trace of the Hamiltonian matrix is minimised, has been adopted. The GRASP code has a few other choices for optimisation, such as average level (AL) and extended optimal level (EOL). However, in general, the results obtained with the AL option are comparable with those of EAL as already discussed and demonstrated by us for several other ions, such as those of Kr \cite{kr} and Xe \cite{xe}. Similarly, the EOL option may provide slightly more accurate data for a few predefined levels, but is only useful if the experimental energies are known, which is not the case for a majority of the levels of the ions studied here.
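For reference, the EAL functional minimised in this option can be written as the $(2J+1)$-weighted average of the diagonal elements of the Hamiltonian matrix, i.e.
\begin{equation*}
\bar{E}_{\mathrm{EAL}} = \frac{\sum_{i} (2J_i+1)\, H_{ii}}{\sum_{i} (2J_i+1)},
\end{equation*}
where $H_{ii}$ are the diagonal matrix elements of the Dirac-Coulomb Hamiltonian and $J_i$ are the total angular momenta of the atomic state functions.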
\subsection{S-like W~LIX}
Clementson and Beiersdorfer \cite{cb} have measured wavelengths for 3 lines of W~LIX. They also calculated these with two different codes, i.e. {\sc grasp} and {\sc fac} (flexible atomic code), and there is no (major) discrepancy among the results. For modelling purposes, Feldman et al. \cite{uf1} calculated atomic data for many W ions, including W~LIX, but did not report the data. Furthermore, they used a simple model consisting of the 3s$^2$3p$^4$, 3s3p$^5$, 3s$^2$3p$^3$3d, and 3p$^6$ configurations, generating 48 levels in total.
For our work, we have performed two sets of calculations using the {\sc grasp} code. In the first (GRASP1) we have included 2762 levels of all possible combinations of the $n$ = 3 orbitals, i.e. 18 configurations in number. The second (GRASP2) involves an additional 28 configurations, which are [3s$^2$3p$^3$, 3s$^2$3p$^2$3d, 3s3p$^4$, 3s3p$^3$3d, 3s$^2$3p3d$^2$, 3s3p$^2$3d$^2$, and 3p$^5$]4$\ell$. These 46 configurations generate 12~652 levels in total. In Table~A we compare the energies obtained from both models, but for only the lowest 20 levels. Differences between the two sets of energies are less than 0.025 Ryd and the inclusion of larger CI in the GRASP2 calculations has lowered the energies for most of the levels. Therefore, it is necessary to assess the effect of further CI on the energy levels. For this we have adopted the {\sc fac} code of Gu \cite{fac}, which is also fully relativistic and is available from the website {\tt https://www-amdis.iaea.org/FAC/}. This code is comparatively more efficient to run and generally yields results similar to those obtained with other atomic structure codes, as has already been demonstrated in several of our earlier papers -- see for example Aggarwal et al. \cite{fe15}. With FAC we have also performed two sets of calculations, i.e. FAC1: includes the same 2762 levels as in GRASP1, and FAC2: also includes levels of the 3$\ell^5$4$\ell$ configurations, generating 38~694 levels in total. Energies obtained from both these models are also listed in Table~A for comparison.
Discrepancies between the GRASP1 and FAC1 energies are up to 0.15 Ryd (see level 13), in spite of including the {\em same} CI. This is because of the differences in the algorithms of the codes and also in calculating the central potentials. Additionally, the energies obtained from {\sc fac} are generally lower for most levels. However, inclusion of additional CI in the FAC2 calculations further lowers the energies, but only up to 0.02 Ryd for some of the levels. Therefore, it may be reasonable to say that the inclusion of CI in our GRASP2 calculations is sufficient to calculate accurate results, but differences with FAC2 remain of up to 0.15 Ryd. The NIST compilation is only for a few levels of W~LIX, which are mostly based on the experimental and theoretical work of Clementson et al. \cite{ll1}. However, these energies are not very accurate as indicated on their website, and many levels are also missing from the compilation. Nevertheless, in Table~A we have included their energies for comparison. Unfortunately, differences between their compiled energies and our (any of the) calculations are up to 0.4 Ryd for some of the levels, such as 18--20. Therefore, there may be scope to improve upon our calculated energies but the (in)accuracy cannot definitely be determined by the limited comparison shown in Table~A.
Our calculated energies from GRASP2 are listed in Table~1 along with those from FAC2 for the lowest 220 levels, which belong to the $n \le$ 3 configurations. Beyond these, the levels of the $n$ = 4 configurations start mixing. Discrepancies between the two sets of energies are smaller than 0.4 Ryd ($<$ 0.5\%) for a majority of levels and the orderings are different only in a few instances, such as 70/71 and 151/152. We also note that some differences may be because of a mismatch between the two sets of energies, as it is not always possible to perfectly match these due to their different notations. Also note that the $LSJ$ designations of the levels listed in Table~1 are not always unambiguous, and a few of these can be (inter)changed with varying amounts of CI, codes, and authors preferences. This is inevitable in any calculation because of the strong mixing among some of the levels. As examples, we list the lowest 20 levels in Table~B. For some, such as 1, 2, 10, and 12, there is a clear dominance of one vector (level) and hence there is no scope for ambiguity. However, for others, such as 3--9, several vectors (levels) dominate and therefore it is not straightforward to designate such levels. For example, the eigenvector for level 19 is dominant in 19 but is also significant in 4. However, the eigenvector for level 105 is dominant in both levels 4 and 105 (not listed in Table~B). Finally, it may be noted that the degeneracy among the levels of W~LIX is very large -- see for example levels 3, 5, 9, 19, and 32 of 3s$^2$3p$^3$3d $^5$D$^o$, which are separated by up to $\sim$30 Ryd. For the ground state energy the Breit and QED contributions are 28.7 and 21.7 Ryd, respectively, although they amount to only $\sim$0.1\%.
\subsection{P-like W~LX}
For this ion we have also performed two calculations with {\sc grasp} using different levels of CI, i.e. GRASP1: includes 1313 levels of the 15 $n$ = 3 configurations, which are 3s$^2$3p$^3$, 3s$^2$3p$^2$3d, 3s3p$^4$, 3s$^2$3p3d$^2$, 3s3p$^3$3d, 3s3p$^2$3d$^2$, 3p$^5$, 3p$^4$3d, 3s$^2$3d$^3$, 3p$^3$3d$^2$, 3s3p3d$^3$, 3p$^2$3d$^3$, 3s3d$^4$, 3p3d$^4$, and 3d$^5$. In the other calculation (GRASP2), a further 20 configurations of [3s$^2$3p$^2$, 3s3p$^3$, 3s$^2$3p3d, 3s3p$^2$3d, and 3s$^2$3d$^2$]4$\ell$ are included, generating in total 3533 levels. Similarly, two calculations with {\sc fac} are performed, i.e. FAC1 with the same CI as in GRASP2, and FAC2, which also includes all possible combinations of 3$\ell^4$4$\ell$, generating 14~608 levels in total. Energies for the lowest 220 levels from both GRASP2 and FAC2 are listed in Table~2. These levels belong to the first 8 configurations listed above. For the higher-lying levels, those of $n$ = 4 intermix with $n$ = 3.
In Table~C we compare our energies for the lowest 25 levels of W~LX from GRASP1, GRASP2, FAC1, and FAC2 with the NIST compilation. CI for W~LX is not as important as for W~LIX, because differences between our GRASP1 and GRASP2 energies are smaller than 0.02 Ryd. Similarly, discrepancies between the FAC1 and FAC2 energies are less than 0.03 Ryd. However, differences between the GRASP2 and FAC2 energies are up to 0.3 Ryd for some levels, for reasons already explained in section 2.1. The NIST compilation is only for the lowest 25 levels, listed in Table~C, and our GRASP2 energies are (generally) lower by up to 0.3 Ryd -- see for example, levels 13, 17 and 22. Similar differences remain between the NIST and FAC2 energies, and therefore are not due to a lack of CI. However, it is worth emphasising that the compiled energies of NIST are mostly based on interpolation/extrapolation and hence are likely not very accurate. More importantly, there are differences in the designations of a few levels, particularly the ground state, which is (3s$^2$3p$^3$)~$^2$D$^o_{3/2}$ in our work, but $^2$P$^o_{3/2}$ in NIST. This is a highly mixed level and the eigenvector for $^2$P$^o_{3/2}$ dominates in both levels 1 and 25 -- see Table~D in which eigenvectors for the lowest 25 are listed. However, we have preferred to designate the lower (ground) level as $^2$D$^o_{3/2}$, because the placings of $^2$D$^o_{5/2}$ and $^2$P$^o_{1/2}$ (levels 5 and 6) are unambiguous. There may be similar differences in designations with other calculations because of the very high mixing among some of the levels of W~LX.
\subsection{Si-like W~LXI}
As for other W ions, we have performed two calculations each with the {\sc grasp} and {\sc fac} codes to assess the effect of CI. These are GRASP1: 518 levels of 12 configurations [3s$^2$3p$^2$, 3s3p$^3$, 3s$^2$3p3d, 3s3p$^2$3d, 3p$^4$, 3s$^2$3d$^2$, 3p$^3$3d, 3s3p3d$^2$, 3p$^2$3d$^2$, 3s3d$^3$, 3p3d$^3$, and 3d$^4$]; GRASP2: 4364 levels of 48 configurations, the additional 36 are [3s$^2$3p, 3s3p$^2$, 3s$^2$3d, 3s3p3d, 3p$^3$, 3p$^2$3d, 3s3d$^2$, 3p3d$^2$, and 3d$^3$]4$\ell$; FAC1: 9798 levels of 3*4, 3*3 4*1 and 3*4 5*1; and finally FAC2: which includes 27~122 levels in total, the additional ones arising from 3*3 6*1 and 3*2 4*2 configurations. Energies obtained from these calculations are compared in Table~E with the NIST compilation for the lowest 21 levels of W~LXI, which are the only ones in common. As for other ions, the CI is not very important for this ion, because the GRASP1 and GRASP2 energies agree within to 0.02 Ryd, and the FAC1 and FAC2 energies show no appreciable differences. Similarly, the agreement between our GRASP2 and FAC2 energies is better than 0.2 Ryd -- see levels 12--15. However, as for other ions, the differences with the NIST compilation are larger, up to 0.4 Ryd -- see level 9 for example. Again, the NIST energies are not very accurate and therefore such differences are not surprising. An important difference between our calculations and the NIST compilation is the designation for level 4, i.e. (3s3p$^3$)~$^5$S$^o_2$ which is $^3$P$^o_2$ (64) in the latter. Both these levels are highly mixed, as may be seen from the eigenvectors listed in Table~F for the lowest 21 levels {\em plus} the remaining two of the 3s3p$^3$ configuration, i.e. $^3$P$^o_2$ and $^1$P$^o_1$.
Our recommended energies for the lowest 215 levels of W~LXI are listed in Table~3 from the GRASP2 and FAC2 calculations. These levels belong to the $n$ = 3 configurations and beyond these those of $n$ = 4 intermix. Finally, there are no major differences in the orderings of the two sets of level energies.
\subsection{Al-like W~LXII}
For W~LXII the experimental energies are also as sparse as for other W ions. However, two sets of theoretical energy levels \cite{ajm,saf} are available in the literature. Safronova and Safronova \cite{saf} adopted a relativistic many-body perturbation theory (RMBPT) and reported energies for the lowest 40 levels belonging to the 3s$^2$3p, 3s3p$^2$, 3s$^2$3d, 3s3p3d, 3p$^3$, and 3p$^2$3d configurations. In addition, S.~Aggarwal et al. \cite{ajm} have calculated energies for the lowest 148 levels of the 3s$^2$3p, 3s3p$^2$, 3s$^2$3d, 3s3p3d, 3p$^3$, 3p$^2$3d, 3s3d$^2$, 3p3d$^2$, and 3d$^3$ (nine) configurations, adopting the same version of the {\sc grasp} code as in the present work. The RMBPT energies \cite{saf} are closer to the NIST compilation and in general are lower than those of S.~Aggarwal et al. by up to 0.4 Ryd -- see Table~2 of \cite{ajm}.
We have performed several sets of calculations with the {\sc grasp} code but mention only three here, namely: GRASP1, which includes the basic 148 levels of the 9 configurations listed above; GRASP2, which considers an additional 776 (total 924) levels of the [3s3p, 3s3d, 3p3d, 3s$^2$, 3p$^2$, and 3d$^2$]4$\ell$ (24) configurations; and finally GRASP3 which includes a further 1079 levels (total 2003) of the 30 additional configurations, i.e. [3s3p, 3s3d, 3p3d, 3s$^2$, 3p$^2$, and 3d$^2$]5$\ell$. S.~Aggarwal et al. \cite{ajm} included CI among 35 configurations, which are the basic 9 of GRASP1 {\em plus} another 26, i.e. 3s3p4$\ell$, 3s3d4$\ell$, 3p3d4$\ell$, 3s$^2$4$\ell$, 3p$^2$4$\ell$ (except 3p$^2$4d), 3p4$\ell^2$ (except 3p4p$^2$), and 3d4$\ell^2$. It is not clear why they overlooked configurations such as: 3p$^2$4d, 3p4p$^2$, 3s4$\ell^2$, and 3$\ell$4$\ell\ell'$. In addition, their 35 configurations generate 1007 levels in total (see Table~1 of \cite{km}) whereas they mention only 894, and therefore there is an anomaly of 113 levels. However, we stress that (particularly) the omission of the 3p$^2$4d and 3p4p$^2$ configurations does not affect the energies or the corresponding lifetimes, as already discussed by one of us \cite{km}. More importantly, levels of the 3$\ell$4$\ell^2$ configurations lie at energies well above those of our GRASP3 calculations, and hence are omitted from our work. This has been confirmed by our larger calculation with 75 configurations and 2393 levels. For the same reason we preferred not to include the 4$\ell^2$ configurations for the calculations of energy levels for other W ions. A complete set of energies for all 148 levels (of the GRASP1 calculations) are listed in Table~4 from GRASP3 and FAC2 (see below). We note that levels from all other configurations clearly lie {\em above} these 148 and hence there is no intermixing.
As with {\sc grasp}, we have also performed several calculations with {\sc fac}, but focus on only two, i.e. FAC1: includes the same 2003 levels as in GRASP3, and FAC2: contains 12~139 levels in total, the additional ones arising from the 3*2 6*1, 3*1 4*2, 3*1 5*2 and 3*1 6*2 configurations. In Table~G we compare our energies from GRASP2, GRASP3, FAC1, and FAC2 with those of NIST for the lowest 21 levels, which are in common. Also included in this table are the results of Safronova and Safronova \cite{saf} from RMBPT. The corresponding data of S.~Aggarwal et al. \cite{ajm} are not considered because they are similar to our GRASP2 calculations and have already been discussed previously \cite{km}. Although a considerably large CI has been included in our calculations, it does not appear to be too important for W~LXII, because the GRASP2 and GRASP3 (and FAC1 and FAC2) energies are practically identical. Therefore, the discrepancies between the GRASP and FAC energies (up to 0.4 Ryd, particularly for level 21) are not due to different levels of CI but because of the computational and theoretical dissimilarities in the codes. Nevertheless, although the NIST energies are not claimed to be very accurate, their agreements with those from FAC and RMBPT are better (within 0.1 Ryd) than with GRASP. Regarding all the 148 levels in Table~4, the differences between the GRASP and FAC energies are up to 0.4 Ryd for some (see levels 77 upwards in the table).
Finally, as for other W ions, configuration mixing is strong for W~LXII also and therefore there is always a possibility of (inter)change of level designations listed in Table~4. For the 21 levels listed in Table~G, their designations and orderings are the same between NIST and our calculations, but differ from those of S.~Aggarwal et al. \cite{ajm} for some, such as levels 10 and 68, i.e. (3p$^3$) $^2$D$^o_{3/2}$ and $^2$P$^o_{3/2}$, which are reversed by them. These two levels (and many more) have strong mixing, as may be seen from Table~H in which we list the eigenvectors for the lowest 21 levels plus 68, i.e. 3p$^3$~$^2$P$^o_{3/2}$. Similarly, there is a {\em disagreement} for most level designations between our work and NIST on the one hand, and those of Safronova and Safronova \cite{saf} on the other.
\subsection{Mg-like W~LXIII}
For this ion, earlier calculations for energy levels are by Safronova and Safronova \cite{saf} using the RMBPT method for the lowest 35 levels of the 3s$^2$, 3s3p, 3p$^2$, 3s3d, 3p3d, and 3d$^2$ configurations, whereas the NIST compilation is only for 9 levels -- see Table~I. As for other ions we have performed several sets of calculations with {\sc grasp} and {\sc fac} and here we only state our final results. For the GRASP calculations we have considered 58 configurations, which are 3$\ell^2$, 3s3p, 3s3d, 3p3d, 3$\ell$4$\ell$, 4$\ell^2$, 4$\ell\ell'$, 3$\ell$5$\ell$, and 3$\ell$6$\ell$ (except 6h), while for FAC we include 991 levels, the additional ones arising from 3$\ell$7$\ell$ and 4$\ell$5$\ell$. However, levels of the 4$\ell^2$, 4$\ell\ell'$ and 4$\ell$5$\ell$ configurations mostly lie above those of 3$\ell$7$\ell$ and can therefore be neglected. Energy levels from both calculations are listed in Table~5 for the lowest 210 levels. In Table~I a comparison is shown for the lowest 35 levels with the NIST compilation and the RMBPT calculations \cite{saf}. As for W~LXII, the FAC and RMBPT energies agree closely with each other as well as with NIST, but our GRASP energies are higher by up to 0.3 Ryd for many levels. Similarly, mixing for the levels is strong for a few as shown in Table~J for the lowest 35 -- see in particular levels 22, 25 and 34.
\subsection{Na-like W~LXIV}
For this ion we have gradually increased the number of orbitals to perform {\sc grasp} calculations for up to 1235 levels. The configurations included are 2p$^6$$n\ell$ with $n \le$ 7 and $\ell \le$ 4, 2p$^5$3$\ell\ell'$, 2p$^5$3$\ell^2$, 2p$^5$4$\ell\ell'$, 2p$^5$4$\ell^2$, and 2p$^5$3$\ell$4$\ell$. However, we note that the levels of 2p$^6$$n\ell$ lie {\em below} those of the other configurations. For this reason we only list the lowest 30 levels in Table~K, all belonging to 2p$^6$$n\ell$. However, with {\sc fac} we have performed comparatively larger calculations for up to $n$ = 20 and all possible values of $\ell$, i.e. 1592 levels in total. These results are also listed in Table~K along with those of NIST, which are confined to the $n \le$ 5 levels. The NIST energies differ from those of FAC by up to 0.26 Ryd for some levels (see 20), but discrepancies are smaller than 0.15 Ryd with those from {\sc grasp}. Again, the differences between the GRASP and FAC energies are not because of different levels of CI, but due to methodological variations. It has not been possible to include higher 2p$^6$$n\ell$ configurations in our {\sc grasp} calculations, but since the {\sc fac} energies have been obtained for up to $n$ = 20 (as stated above), in Table~6 we list these for the lowest 396 levels, all belonging to 2p$^6$$n\ell$ with $n \le$ 20. This will be helpful for future comparisons. Finally, unlike the other W ions discussed above, there is no (strong) mixing and/or ambiguity for the designation of the 2p$^6$$n\ell$ levels listed in Tables~K and 6.
Safronova et al. \cite{saf2} have reported energies for 242 levels of W~LXIV from three independent codes, namely RMBPT, HULLAC (Hebrew University Lawrence Livermore Atomic Code \cite{hullac}) and the atomic structure code of R.D.~Cowan available at {\tt http://das101.isan.troitsk.ru/cowan.htm}. Although NIST energies for this ion are only available for a few levels, as already seen in Table~K, their RMBPT results are closest to the measurements. Additionally, based on the comparisons made for other W ions, their RMBPT energies should be the most accurate. Nevertheless, the RMBPT energy for level 2 (2p$^5$3s $^3$P$^o_2$) differs by 1.3\% and 6.4\% from those of HULLAC and Cowan, respectively. Corresponding differences for the remaining levels are up to 0.3\% and 1\%, respectively. Only the lowest 5 levels of Table~K are common with their work, as the remaining 237 belong to the 2p$^5$3$\ell\ell'$ configurations. Therefore, our listed energies in Table~6 supplement their data.
\subsection{Ne-like W~LXV}
The NIST compilation of energies for this ion is limited to only 10 levels of the 2p$^5$3$\ell$ configurations. However, Vilkas et al. \cite{mrmp} have reported energies for 141 levels of the 2p$^6$, (2s2p$^6$)3$\ell$, 4$\ell$, 5$\ell$ (except 5g), and (2p$^5$) 3$\ell$, 4$\ell$, 5$\ell$ (except 5g) configurations. For their calculations they adopted the relativistic multi-reference many-body M{\o}ller-Plesset (MRMP) perturbation theory, and included CI up to the $n$ = 5 orbitals. We have included the same configurations for our calculations with {\sc grasp}, which generate 157 levels in total because we have also considered the 5g orbital. However, in Table~7 we list energies for only the lowest 121, because beyond this the levels of the 2s2p$^6$6$\ell$ configurations start mixing in the same way as of 2s2p$^6$5g with those of 2s2p$^6$4$\ell$ -- see levels 92--99 in the table. Additionally, we have performed larger calculations with {\sc fac} with up to 1147 levels, belonging to the 2*8, (2*7) 3*1, 4*1, 5*1, 6*1, 7*1, and 2*6 3*2 configurations. These results are also listed in Table~7 for comparison. Differences between the GRASP and FAC energies are up to 0.5 Ryd (0.07\%) for some levels, but the level orderings are almost identical. Similarly, there is no difference in level orderings with the MRMP calculations \cite{mrmp} and the energies differ only by less than 0.6 Ryd (0.06\%) with GRASP -- see levels 63 and 77--83. Therefore, overall there is no (significant) discrepancy between the three independent calculations. However, in general the FAC energies are lower than those from GRASP for a majority of levels, whereas those of MRMP are higher.
In Table~L, we compare energies with the NIST compilation for only the {\em common} levels. There is no uniform pattern for (dis)agreement between the theoretical and experimental energies. In general, the MRMP energies are closer to those of NIST whereas those from FAC differ the most. Unfortunately, these comparisons are not sufficient for accuracy determination, particularly when the NIST energies are not based on direct measurements. Finally, as for most W ions, for W~LXV also there is a strong mixing for some levels and therefore the level designations listed in Table~7 can vary, although the MRMP calculations \cite{mrmp} have the same labels as in our work. Nevertheless, in Table~M we list the eigenvectors for the lowest 33 levels, which include all of the NIST compilation. Note particularly the mixing for levels 24, 25 and 31.
\subsection{F-like W~LXVI}
For this ion we have performed a series of calculations with {\sc grasp} with gradually increasing CI and our final set includes 501 levels of 38 configurations, which are: 2s$^2$2p$^5$, 2s2p$^6$, (2s$^2$2p$^4$, 2s2p$^5$, 2p$^6$)3$\ell$, 4$\ell$, 5$\ell$. Similarly, calculations with {\sc fac} have been performed for up to 1113 levels from the 2*7 and (2*6) 3*1, 4*1, 5*1, 6*1, 7*1 configurations. These levels span an energy range of up to 1360 Ryd. Opening the 1s shell gives rise to levels above 5000 Ryd and therefore has not been included in the calculations. Energies from both of these calculations are listed in Table~8 for the lowest 150 levels, because beyond this the levels of the $n$ = 5 configurations start mixing. However, the listed levels include all of the $n$ = 3 configurations. Differences between the two sets of energies are up to 0.5 Ryd for some levels, except three (145--147) for which the discrepancies are slightly larger, up to 0.7 Ryd. The level orderings are also the same for a majority of levels, but slightly differ in a few instances, such as for 93--112. NIST listings are available for only two levels, namely 2s$^2$2p$^5$~$^2$P$^o_{1/2}$ and 2s2p$^6$~$^2$S$_{1/2}$, and the energy for the latter is lower by 0.5 Ryd than the theoretical results. No other similar theoretical energies are available for this ion for comparison purposes. Finally, this ion is no exception for level mixing and examples of this are listed in Table~N for the lowest 48 levels -- see in particular 13, 15, 40, and 42.
\section{Radiative rates}\label{sec.eqs}
Apart from energy levels, calculations have been made for absorption oscillator strengths ($f$-values, dimensionless), radiative rates ($A$-values, s$^{-1}$) and line strengths ($S$-values, in atomic units, 1 a.u. = 6.460$\times$10$^{-36}$ cm$^2$ esu$^2$). The $f$- and $A$-values for any type of transition ($i \to j$) are connected by the following expression:
\begin{equation}
f_{ij} = \frac{mc}{8{\pi}^2{e^2}}{\lambda^2_{ji}} \frac{{\omega}_j}{{\omega}_i}A_{ji}
= 1.49 \times 10^{-16} \lambda^2_{ji} \frac{{\omega}_j}{{\omega}_i} A_{ji}
\end{equation}
where $m$ and $e$ are the electron mass and charge, respectively, $c$ the velocity of light, $\lambda_{ji}$ the transition wavelength in $\rm \AA$, and $\omega_i$ and $\omega_j$ the statistical weights of the lower $i$ and upper $j$ levels, respectively. Similarly, $f$- and $A$-values are related to $S$ by the standard equations given in \cite{w40b}.
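As an illustration of Eq. (1), the following minimal Python sketch (a hedged example, not part of the {\sc grasp} or {\sc fac} codes; the transition data in it are hypothetical) converts a radiative rate into the corresponding absorption oscillator strength:
\begin{verbatim}
def f_value(a_ji, wavelength_angstrom, omega_i, omega_j):
    """Absorption oscillator strength f_ij from the radiative rate
    A_ji (s^-1), the transition wavelength lambda_ji (Angstrom) and
    the statistical weights of the lower (i) and upper (j) levels."""
    return 1.49e-16 * wavelength_angstrom**2 * (omega_j / omega_i) * a_ji

# hypothetical transition: A = 1.0e12 s^-1, lambda = 50 A, w_i = 2, w_j = 4
print(f_value(1.0e12, 50.0, 2, 4))   # 0.745
\end{verbatim}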
In Tables 9--16 we present results for energies (wavelengths, $\lambda_{ji}$ in ${\rm \AA}$), $A$-, $f$- and $S$-values for electric dipole (E1) transitions in W ions, which have been obtained with the {\sc grasp} code. For other types of transitions, namely magnetic dipole (M1), electric quadrupole (E2), and magnetic quadrupole (M2), only the $A$-values are listed, because the corresponding results for $f$- or $S$-values can be obtained using Eqs. (1-5) given in \cite{w40b}. Additionally, we have also listed the ratio (R) of the velocity (Coulomb gauge) and length (Babushkin gauge) forms, which often (but not always) gives an indication of the accuracy. The {\em indices} used to represent the lower and upper levels of a transition are defined in Tables 1--8. Furthermore, only a limited range of transitions is listed in Tables 9--16, but full tables are available online in the electronic version.
For the W ions considered here, existing $A$- (or $f$-) values are available mostly for three ions, i.e. Al-like W~LXII \cite{saf}, Mg-like W~LXIII \cite{saf} and Ne-like W~LXV \cite{mrmp}. Therefore, we confine our comparisons to these three ions. In Table~O we compare the $f$-values for common E1 transitions with the results of Safronova and Safronova \cite{saf}. Both sets of data agree very well for all transitions. However, for a few weak transitions ($f$ $\sim$ 10$^{-4}$), such as 1--22, 2--3 and 14--19, the ratio R is up to 1.7, while it is closer to unity for the comparatively strong transitions. A similar comparison with their results for transitions in W~LXIII is shown in Table~P. For the common transitions listed here, R is unity for all, and the $f$-values agree closely for most, with only a few exceptions, such as 20--32, 21--30 and 26--34, for which the discrepancies are a factor of two. However, we note that the $f$- (or $A$-) values of \cite{saf} are only for a small number of transitions whereas our results listed in Tables 12 and 13 cover a much wider range.
Vilkas et al. \cite{mrmp} have listed $A$-values for some (not all) transitions of W~LXV and in Table~Q we compare their results with our calculations with {\sc grasp}, but only from the lowest three to higher excited levels. Additionally we have listed the $f$-values to indicate the strength of transitions. As for other W ions, R is also listed for these transitions and is within a few percent of unity, irrespective of the $f$-value. There are no appreciable differences between the two sets of $A$-values and discrepancies, if any, are (generally) within $\sim$20\%.
The comparisons of $A$- ($f$-) values discussed above are only for a subset of transitions. Considering a wider range, for a majority of strong transitions ($f$ $\ge$ 0.01) R is often within 20\% of unity, as already seen in Tables~O, P and Q. However, there are (as always) some exceptions. For example, there are only six transitions of W~LXIII with $f$ $>$ 0.01 for which R is up to 1.6, namely 148--166 ($f$ = 0.011, R = 1.3), 158--173 ($f$ = 0.021, R = 1.3), 160--174 ($f$ = 0.028, R = 1.6), 161--175 ($f$ = 0.025, R = 1.4), 162--176 ($f$ = 0.027, R = 1.4), and 163--177 ($f$ = 0.029, R = 1.6). Therefore, based on this and other comparisons already discussed, our assessment of accuracy for the $f$-values for a majority of strong transitions is $\sim$20\%. Finally, for much weaker transitions (often with $f$ $\le$ 10$^{-4}$), R can be several orders of magnitude and it is very difficult to assess the accuracy of the $f$-values because results are often much more variable with CI and/or codes. Generally, such transitions do not make an appreciable contribution to plasma modelling and their results are mostly required for completeness.
\section{Lifetimes}
The lifetime of a level $j$ is given by $\tau_j = 1/\sum_{i}A_{ji}$, where the summation includes the $A$-values from all types of transitions, i.e. E1, E2, M1, and M2. Since this is a measurable quantity it helps to assess the accuracy of $A$-values, particularly when a single (type of) transition dominates. Unfortunately, to our knowledge no measurements of $\tau$ are available for the levels of the W ions considered here, but in Tables 1--8 we list our calculated results. Previous theoretical results are available for two ions, i.e. W~LXII \cite{ajm} and W~LXV \cite{mrmp}. Unfortunately, the $\tau$ of S.~Aggarwal et al. \cite{ajm} contain large errors, by up to 14 orders of magnitude, for over 90\% of the levels of W~LXII and bear no relationship to the $A$-values, as already discussed \cite{km}. For W~LXV, the reported $\tau$ of Vilkas et al. \cite{mrmp} are included in Table~7, and there is no significant discrepancy for any level.
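The definition above amounts to the following elementary computation, shown here as a minimal Python sketch with hypothetical $A$-values:
\begin{verbatim}
def lifetime(a_values):
    """Lifetime (s) of a level: the reciprocal of the sum of the
    A-values (s^-1) of all E1, E2, M1 and M2 transitions
    depopulating it."""
    return 1.0 / sum(a_values)

print(lifetime([3.2e11, 5.0e9, 1.1e8]))   # about 3.1e-12 s
\end{verbatim}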
\section{Conclusions}
Energy levels and radiative rates for E1, E2, M1, and M2 transitions are reported for eight W ions (W~LIX to W~LXVI). A large number of levels are considered for each ion and the data sets reported here are significantly larger than available in the literature. For our calculations the {\sc grasp} code has been adopted, although {\sc fac} has also been utilised for the determination of energy levels to assess the importance of CI, larger than that considered in {\sc grasp}. It is concluded that CI beyond a certain level does not appreciably improve the level energies. Differences between the GRASP and FAC energies, and the available experimental and theoretical values, are often smaller than 0.5 Ryd, or equivalently the listed energy levels for all W ions are assessed to be accurate to better than 1\%, but scope remains for improvement. A similar assessment of accuracy for the corresponding $A$-values is not feasible, mainly because of the paucity of other comparable results. However, for strong transitions (with large $f$-values), the accuracy for $A$-values and lifetimes may be $\sim$20\%.
Lifetimes for these levels are also listed although no measurements are currently available in the literature. However, previous theoretical values are available for most levels of W~LXV and there is no discrepancy with our work.
\ack
KMA is thankful to AWE Aldermaston for financial support.
\begin{appendix}
\def\thesection{}
\section{Appendix A. Supplementary data}
Owing to space limitations, only parts of Tables 9--16 are presented here, the full tables being made available as supplemental material in conjunction with the electronic
publication of this work. Supplementary data associated with this article can be found, in the online version, at doi:nn.nnnn/j.adt.2016.nn.nnn.
\end{appendix}
\subsection{The notion of viscosity solution}
\subsubsection{Viscosity solutions in Jet spaces}
The cortical model previously discussed associates to each 2D curve $\gamma_{2D}$ its orientation. This procedure can be considered as a lifting of the initial image $I(x,y)$ to a
new function $u$ in the Jet-space $\mathbb{R}^2 \times \mathit{S}^1$ of position and orientation. We refer to Petitot and Tondut, who first described the analogous cortical process as a lifting in a jet space \cite{A21}.
In this section, we will sometimes denote $\xi =(x,y,\theta)$ as an element of $\mathbb{R}^2 \times \mathit{S}^1$.
It is then natural to lift the function $u$ into another Jet-space which contains the formal analogue of its sub-Riemannian gradient $\nabla_0u$ and the formal analogue of its
second derivatives $X^0_iX^0_j u$. The definition of viscosity solution in Jet-spaces is based on the Taylor expansion, expressed in terms of these differential objects. Since the analogue of the increment in the direction of the gradient $p$ is expressed through the exponential map, the increment from a point $\xi$ in the direction $\sum_{i=1}^2\eta_i X^0_i$ is expressed as
$$u\Big(\exp(\sum_{i=1}^2\eta_i X^0_i)(\xi),t+s\Big) - u(\xi,t).$$
At non regular points, such as kinks, there is neither a unique vector $p$ which identifies the horizontal gradient nor a unique matrix $(r_{ij})$ which identifies the horizontal Hessian. Hence we need to give a more general notion: a triple $(q,p,r)$, where $q \in \mathbb{R}$, $p=(p_i)_{i=1,2}$ denotes a horizontal vector and $r=(r_{ij})$ a $2 \times 2$ symmetric matrix, belongs to the superjet $\mathcal{J}^+$ of $u$ if it satisfies the following formal analogue of the Taylor expansion:
\begin{equation}
u\Big(\exp(\sum_{i=1}^2\eta_i X^0_i)(\xi),t+s\Big)- u(\xi,t)
\leq \sum_{i=1}^{2}p_i\eta_i + \frac12 \sum_{i,j=1}^2r_{ij}\eta_i\eta_j + qs + o(|\eta|^2+s^2).
\end{equation}
Let us note that if the superjet exists it can be used in place of the derivatives; furthermore a function $u$ is a Jet-space viscosity solution if the differential equation in which the derivatives are replaced with the elements of the superjet is satisfied. More precisely:
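As an example, if $u$ is smooth at $(\xi,t)$, then the choice $q=\partial_t u$, $p_i=X^0_i u$ and $r_{ij}=\frac12\big(X^0_iX^0_j u+X^0_jX^0_i u\big)$, all evaluated at $(\xi,t)$, satisfies the expansion above, so that the superjet of a smooth function always contains the element built with its horizontal derivatives.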
\begin{definition}
A function $u \in \mathit{C}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty)) \cap \mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty))$ is a jet-space viscosity subsolution of equation (\ref{Smcf}) if for every $(q, p_H, r_{ij})$ in the superjet we have:
\begin{equation}
q \leq
\left\{\begin{array}{ll}
\sum_{i,j=1}^{2} A^0_{ij}(p_H) r_{ij} & \mbox{ if }|p_H| \neq 0 \\
\sum_{i,j=1}^{2} A^0_{ij}(\tilde p) r_{ij} & \mbox{ for some }|\tilde p|\leq 1, \mbox{if } |p_H|=0.
\end{array}\right.
\end{equation}
\end{definition}
An analogous definition is provided for a viscosity supersolution. Then a viscosity solution is a function which is both a subsolution and a supersolution.
\subsubsection{Viscosity solutions via test functions}
The definition of viscosity solution in the Jet-space of a second order equation can be interpreted as the approximation of the solution $u$ via a second order polynomial, whose coefficients are exactly the elements $(p_i, r_{ij})$ of the Jet space. More generally, we assume that the function $u$ is locally approximated by a smooth test function $\phi$. This definition imposes the behavior of the function $u$ at points where $u-\phi$ attains a maximum. At such points $u$ and $\phi$ have the same first derivatives, so that $\nabla_0 \phi$ provides an exact evaluation of $\nabla_0 u$. Looking at second derivatives, it follows that for every $i$ we have: $$X_i X_i(u-\phi)\leq 0,$$
so that the curvature of $\phi$ is an upper bound for the curvature of $u$. Due to these observations we can give the following definition:
\begin{definition}
A function $u \in \mathit{C}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty))$ is a viscosity subsolution of (\ref{Smcf}) in $\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty)$ if for any $(\xi,t)$ in $\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty)$ and any function $\phi \in \mathit{C}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty))$ such that $u- \phi$ has a local maximum at $(\xi,t)$ it satisfies:
\begin{equation}
\partial_t \phi \leq
\left\{\begin{array}{ll}
\sum_{i,j=1}^{2}A^0_{ij}(\nabla_0 \phi)X_iX_j \phi \mbox{,}& \mbox{ if } |\nabla_0 \phi| \neq 0 \\
\sum_{i,j=1}^{2}A^0_{ij}(\tilde p)X_iX_j\phi, & \mbox{ for some } \tilde p \in \mathbb{R}^2 \mbox{, } |\tilde p| \leq 1 \mbox{, if } |\nabla_0 \phi| = 0
\end{array}\right.
\end{equation}
A function $u \in \mathit{C}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty))$ is a viscosity supersolution of (\ref{Smcf}) if:
\begin{equation}
\partial_t \phi \geq
\left\{\begin{array}{ll} \sum_{i,j=1}^{2}A^0_{ij}(\nabla_0 \phi)X_iX_j \phi & \mbox{if \,\,} |\nabla_0 \phi| \neq 0 \\
\sum_{i,j=1}^{2}A^0_{ij}(\tilde p)X_iX_j\phi & \mbox{for some\,\,} \tilde p \in \mathbb{R}^2 \mbox{,\,} |\tilde p| \leq 1 \mbox{, if\,} |\nabla_0 \phi| = 0 \\
\end{array}\right.
\end{equation}
\end{definition}
\begin{definition}
A \textit{viscosity solution} of (\ref{Smcf}) is a function $u$ which is both a viscosity subsolution and a viscosity supersolution.
\end{definition}
\begin{theorem}
The two definitions of jet spaces viscosity solution and viscosity solution are equivalent.
\end{theorem}
\subsubsection{Vanishing viscosity solutions}
A vanishing viscosity solution is the limit of the solutions of approximating regular problems.
Let us first explicitly note that the coefficients $A_{ij}$ are degenerate: when the gradient vanishes, they are not defined. Hence we will apply the regularization procedure proposed by Evans and Spruck in \cite{A6} to handle the singularity, which consists in replacing the coefficients with the following ones:
$$A_{ij}^{\tau}(p)= \left(\delta_{ij} -\frac{p_ip_j}{|p|^2+\tau} \right).$$
This approximation has a clear geometric interpretation, already provided by Evans and Spruck. In equation (\ref{Smcf}) each level set of $u$ evolves by mean curvature. What we obtain by adding the new parameter is the evolution of the graph of $u$
$$\Gamma_t^{\tau}=\{ (\xi,\xi_{n+1}) \in \mathbb{R}^{n+1}|\xi_{n+1}=u(\xi, t) \}$$
and the introduction in the space of a metric depending on $\tau$.
In this approximation equation (\ref{Smcf}) reads as:
\begin{equation}\label{mcfgraph}
\left\{\begin{array}{lll}
u_t = \sum\limits_{i,j=1}^{2} A^\tau_{ij}(\nabla_0 u) X_iX_ju & \mbox{ in } & \Omega \subset \mathbb{R}^2\times S^1 \\
u(\cdot ,0)=u_0. & &
\end{array}\right.
\end{equation}
We will now introduce a Riemannian approximation of the mean curvature flow in the graph approximation we made before.
We extend $g_0$ to the whole space $SE(2)$ as a metric $g_\epsilon$ which makes the vectors $X_1$, $X_2$, $\epsilon X_3$ orthonormal. Let us note that $g_\epsilon $ is the Riemannian completion of the horizontal metric. From now on, in order to simplify notations, we will always denote
\begin{equation}\label{campie}X_1^\epsilon = X_1, \quad X_2^\epsilon = X_2, \quad X_3^\epsilon = \epsilon X_3.\end{equation}
The Riemannian gradient associated to the metric $g_\epsilon$ will be represented as:
$$\nabla_\epsilon u = X_1^\epsilon u X_1^\epsilon + X_2^\epsilon u X^\epsilon_2 + X_3^\epsilon u X^\epsilon_3 $$
and, using the fact that $X^\epsilon$ are orthonormal, we get:
\begin{equation}
\lvert \nabla _\epsilon u\rvert = \sqrt{(X_1u)^2 + (X_2u)^2 + \epsilon^2 (X_3 u)^2}.
\end{equation}
In the Riemannian setting equation (\ref{mcfgraph}) reads as:
\begin{equation}\label{mcf}
\left\{\begin{array}{lll}
u_t = \sum\limits_{i,j=1}^{3} A^{\epsilon, \tau}_{ij}(\nabla_\epsilon u) X^\epsilon_iX^\epsilon_ju & \text{ in } & \Omega \subset \mathbb{R}^2\times S^1 \\
u(\cdot ,0)=u_0 & &
\end{array}\right.
\end{equation}
where
$$A^{\epsilon, \tau}_{ij}(\nabla_\epsilon u)=\left( \delta_{i,j} - \frac{X^\epsilon_iuX^\epsilon_ju}{|\nabla_\epsilon u|^2+ \tau } \right).$$
In order to prove the existence of a solution we apply another regularization, also introduced by Evans and Spruck. It consists in adding a Laplacian, ensuring that the matrix of the coefficients has a strictly positive smallest eigenvalue. Then the approximated coefficients will be:
$$A_{ij}^{\epsilon,\tau,\sigma}(p)=A_{ij}^{\epsilon,\tau}(p)+\sigma\delta_{ij}$$
and the associated equation becomes:
\begin{equation}\label{mcfest}
\left\{\begin{array}{lll}
u_t = \sum\limits_{i,j=1}^{3} A_{ij}^{\epsilon,\tau,\sigma}(\nabla_\epsilon u) X^\epsilon_iX^\epsilon_ju & \text{ in } & \Omega \subset \mathbb{R}^2\times S^1 \\
u(\cdot ,0)=u_0. & &
\end{array}\right.
\end{equation}
This condition makes the coefficients satisfy the coercivity condition and allows us to apply the standard theory of uniformly parabolic equations.
We are now in a position to give a third definition, that of vanishing viscosity solution:
\begin{definition}
A function $u$ is a vanishing viscosity solution of (\ref{Smcf}) if it is limit of a sequence of solutions
$u^{\epsilon_k,\tau_k,\sigma_k}$ of equation (\ref{mcfest}).
\end{definition}
We will see that any Lipschitz continuous vanishing viscosity solution is a viscosity solution of the same equation
(see Theorem \ref{vivi} below).
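Although the scheme actually used for the experiments is the one described in Section \ref{sec4}, we conclude this subsection with a self-contained numerical sketch of one explicit Euler step for the fully regularized equation (\ref{mcfest}). The grid, the time step and the values of the parameters $\epsilon$, $\tau$, $\sigma$ below are purely illustrative assumptions:
\begin{verbatim}
import numpy as np

nx, ny, nth = 64, 64, 32
hx = hy = 1.0 / nx
hth = 2.0 * np.pi / nth
theta = np.arange(nth) * hth
cos_t = np.cos(theta)[None, None, :]
sin_t = np.sin(theta)[None, None, :]

def d_central(f, axis, h):
    # periodic central difference (theta is genuinely periodic; periodicity
    # in x, y is harmless when u is constant outside a cylinder)
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * h)

def X(i, f, eps):
    # the orthonormal frame X_1, X_2, eps*X_3
    if i == 2:
        return d_central(f, 2, hth)
    fx = d_central(f, 0, hx)
    fy = d_central(f, 1, hy)
    if i == 1:
        return cos_t * fx + sin_t * fy
    return eps * (-sin_t * fx + cos_t * fy)

def euler_step(u, dt, eps, tau, sigma):
    Xu = {i: X(i, u, eps) for i in (1, 2, 3)}
    grad2 = Xu[1]**2 + Xu[2]**2 + Xu[3]**2      # |nabla_eps u|^2
    rhs = np.zeros_like(u)
    for i in (1, 2, 3):
        for j in (1, 2, 3):
            delta = 1.0 if i == j else 0.0
            a_ij = delta + sigma * delta - Xu[i] * Xu[j] / (grad2 + tau)
            rhs += a_ij * X(i, Xu[j], eps)      # A_ij X_i X_j u
    return u + dt * rhs

u = np.random.rand(nx, ny, nth)                 # hypothetical initial datum
for _ in range(10):
    u = euler_step(u, dt=1.0e-5, eps=0.1, tau=0.1, sigma=0.01)
\end{verbatim}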
\subsection{Solution of the approximating equations}
The aim of this sub-section is to prove the existence of solutions for the approximating equation (\ref{mcfest}) and, even more important, to establish estimates independent of all parameters, which hold true also for the limit equation (\ref{Smcf}).
We will study solutions on the whole space, since this is the assumption needed for the enhancing algorithm.
\begin{theorem}
\label{First part of existence}
Assume that $u_0 \in \mathit{C}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)$ and that it is constant on the exterior of a cylinder, i.e. there exists $S>0$ such that:
\begin{equation}\label{constantS}
u_0 \mbox{ is constant on } \{(x,y,\theta) \in \mathbb{R}^2 \times \mathit{S}^1 : x^2 + y^2 \geq S\}.
\end{equation}
Then there exists a unique solution $u^{\epsilon,\tau, \sigma} \in \mathit{C}^{2,\alpha}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty))$ of the initial value problem (\ref{mcfest}).
Moreover, for all $t>0$ one has:
\begin{equation}
\lVert u^{\epsilon, \tau, \sigma}(\cdot, t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\leq \lVert u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)} \label{stima0}
\end{equation}
\begin{equation} \label{stima1}
\lVert {\nabla}_{E}u^{\epsilon,\tau, \sigma}(\cdot, t)\rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\leq C\lVert {\nabla}_{E} u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}. \end{equation}
where $\nabla_{E}(\cdot)$ denotes the Euclidean gradient and $C>0$ is a constant independent of $\epsilon$, $\tau$ and $\sigma$. \end{theorem}
Since the previous estimates do not depend on $\sigma$ and $\epsilon$, they will be stable
when these parameters go to $0$:
\begin{corollary}
The solution $u^\tau$ of the equation (\ref{mcfgraph}) satisfies conditions (\ref{stima0}) and (\ref{stima1}).
\end{corollary}
\bigskip
This result generalizes to $SE(2)$ the previous results of \cite{A6} and \cite{A3}.
The main difficulty in the extension is the fact that the vector fields $X^\epsilon_i$ do not commute; hence it is not easy to find a nice equation satisfied by the gradient. Following the approach proposed by Mumford in \cite{K1}, we will take the derivatives along the directions of a family of vector fields $\{Y_i\}_{i=1,\ldots,3}$ which are right invariant with respect to the group law. It is a general fact that these vector fields commute with the left invariant ones. We recall this result in the special case of our vector fields, for the reader's convenience:
\begin{lemma}\label{Vector fields commute}
A right invariant basis of the tangent space can
be defined as follows:
$$\begin{array}{l}
Y_1=\partial_x \\
Y_2=X_2 + (x\cos\theta+y\sin\theta)X_3+(x\sin\theta - y\cos\theta)X_1 = \partial_\theta - y \partial_x + x \partial_y\\
Y_3=\partial_y.\end{array}
$$
Then the vector fields $\{X^\epsilon_i\}_{i=1,2,3}$ defined in (\ref{campie}) commute with $\{Y_i\}_{i=1,2,3}$.
\end{lemma}
\begin{proof}
We will calculate their Lie bracket:
\begin{eqnarray*}
[X_1,Y_1]&=&(\cos\theta \partial_x + \sin\theta \partial_y)\partial_x-\partial_x(\cos\theta \partial_x + \sin\theta \partial_y)\\ & = & \cos\theta \partial_{xx}+\sin\theta \partial_{yx} -\cos\theta \partial_{xx}-\sin\theta \partial_{xy} = 0.
\end{eqnarray*}
Since the coefficients of $Y_2$ do not depend on $\theta$ it is clear that
\begin{eqnarray*}
[X_2,Y_2]&=&0.
\end{eqnarray*}
Finally
\begin{eqnarray*}
[X_3,Y_3]&=&(\sin\theta \partial_x - \cos\theta \partial_y)\partial_y-\partial_y(\sin\theta \partial_x - \cos\theta \partial_y)\\&=& \sin\theta \partial_{xy}-\cos\theta \partial_{yy} -\sin\theta \partial_{xy}+\cos\theta \partial_{yy} = 0
\end{eqnarray*}
The remaining brackets vanish analogously.
\end{proof}
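The statement of Lemma \ref{Vector fields commute} can also be checked symbolically. The following short script (an illustrative verification using the SymPy library, not a part of the proof) confirms that all nine brackets vanish:
\begin{verbatim}
import sympy as sp

x, y, th = sp.symbols('x y theta')
f = sp.Function('f')(x, y, th)

def apply_field(coeffs, g):
    cx, cy, cth = coeffs
    return cx * sp.diff(g, x) + cy * sp.diff(g, y) + cth * sp.diff(g, th)

# left invariant and right invariant frames in the coordinates (x, y, theta)
X1 = (sp.cos(th), sp.sin(th), 0)
X2 = (0, 0, 1)
X3 = (-sp.sin(th), sp.cos(th), 0)
Y1 = (1, 0, 0)
Y2 = (-y, x, 1)
Y3 = (0, 1, 0)

for Xi in (X1, X2, X3):
    for Yj in (Y1, Y2, Y3):
        bracket = (apply_field(Xi, apply_field(Yj, f))
                   - apply_field(Yj, apply_field(Xi, f)))
        assert sp.simplify(bracket) == 0
print("all nine brackets [X_i, Y_j] vanish")
\end{verbatim}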
The first step of the proof of Theorem \ref{First part of existence} is the existence of the solution $u^{\epsilon,\tau,\sigma}$ and its $L^\infty$ bound:
\begin{theorem} \label{linfti} Under the assumption of Theorem \ref{First part of existence} on the initial datum, the initial value problem (\ref{mcfest}) has a unique solution $u^{\epsilon,\tau, \sigma} \in \mathit{C}^{2,\alpha}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty))$
such that
\begin{equation}
\lVert u^{\epsilon, \tau, \sigma}(\cdot, t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\leq \lVert u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\end{equation}
\end{theorem}
\begin{proof}
For $\sigma>0$, consider the problem associated to equation (\ref{mcfest}) on a cylinder
$B(0,r) \times [0,T]$,
with initial data
\begin{equation}
u_r^{\epsilon,\tau,\sigma}(\cdot,0)=u_0,
\end{equation}
and constant value on the lateral boundary of the cylinder. Note that the coefficients $A_{ij}^{\epsilon,\tau,\sigma}$ satisfy the uniform parabolicity condition:
\begin{equation}
\sigma|p|^2 \leq A_{ij}^{\epsilon, \tau, \sigma}(\tilde p)p_ip_j \label{Coercivity condition}
\end{equation}
for each $\tilde p, p \in \mathbb{R}^3$. Hence the theory of parabolic equations on bounded cylinders ensures that for every fixed value of the parameters there exists a unique smooth solution $u^{\epsilon,\tau,\sigma}_r$ (see for example Ladyzenskaja, Solonnikov, Ural'tseva \cite{A12}). By the maximum principle we have
\begin{equation}
\lVert u_r^{\epsilon, \tau, \sigma}(\cdot,t)\rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)} \leq \lVert u_0\rVert_{\mathcal{L}^{\infty}( \mathbb{R}^2 \times \mathit{S}^1)}. \label{estimate0}
\end{equation}
Letting $r$ tend to $\infty$, we obtain a solution $u^{\epsilon,\tau,\sigma}$ defined on the whole $\mathbb{R}^2 \times \mathit{S}^1 \times [0,T]$ such that $$\lVert u^{\epsilon,\tau,\sigma} \rVert_{\infty} \leq \lVert u_0\rVert_{\mathcal{L}^{\infty}( \mathbb{R}^2 \times \mathit{S}^1)}. $$
\end{proof}
We can now complete the second part of the proof of Theorem \ref{First part of existence} which involves the estimate of the gradient:
\begin{theorem}\label{stimagrad}
Under the assumptions of Theorems \ref{First part of existence} and \ref{linfti}, the solution of the initial value problem (\ref{mcfest}) satisfies, for a constant $C>0$ independent of $\epsilon$, $\tau$ and $\sigma$,
\begin{equation}
\lVert {\nabla}_{E}u^{\epsilon,\tau, \sigma}(\cdot, t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\leq C\lVert {\nabla}_{E} u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}.
\end{equation}
\end{theorem}
\begin{proof}
From Theorem \ref{linfti} we know that there exists a unique smooth solution $u^{\epsilon, \tau, \sigma}$ of equation (\ref{mcfest}) and we only have to estimate its gradient. To this end, we can differentiate equation (\ref{mcfest}) along the directions $\{Y_i\}_{i=1,2,3}$, and using Lemma \ref{Vector fields commute}, we obtain the following equation for
$w_l=Y_lu^{\epsilon, \tau, \sigma}$, $l=1,2,3$, as well as for $w_4= Y_2 u^{\epsilon, \tau, \sigma} + (y_0 \partial_x - x_0 \partial_y)u^{\epsilon, \tau, \sigma}$, where $(x_0,y_0)$ is an arbitrarily fixed point:
\begin{equation}
\frac{\partial}{\partial t}w_l=\sum_{i,j=1}^{3}\Big( A_{ij}^{\epsilon,\tau,\sigma}(\nabla_\epsilon u^{\epsilon, \tau, \sigma})X_i^{\epsilon}X_j^{\epsilon}w_l + \sum_{k=1}^{3}(\partial_{p_k}A_{ij}^{\epsilon,\tau,\sigma})(\nabla_\epsilon u^{\epsilon, \tau, \sigma})X_i^{\epsilon}X_j^{\epsilon}u^{\epsilon, \tau, \sigma}\,X_k^{\epsilon} w_l\Big). \label{Differentiate2}
\end{equation}
The parabolic maximum principle applied to the previous equation yields:
\begin{equation}
\lVert w_l(\cdot,t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}\leq \lVert w_l(\cdot,0) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}, \quad l=1,\ldots,4. \label{estimate1}
\end{equation}
This implies that
$$\lVert \partial_x u^{\epsilon, \tau, \sigma}(\cdot,t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
+ \lVert \partial_y u^{\epsilon, \tau, \sigma}(\cdot,t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\leq C\lVert \nabla_E u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}.$$
We have now to establish the estimate of the derivative $\partial_\theta$.
For every fixed value of $(x_0, y_0)$ we have
$$|\partial_\theta u^{\epsilon, \tau, \sigma} (x_0, y_0, \theta)| \leq \max_{|y-y_0|^2+ |x_0-x|^2\leq 1} |Y_2 u^{\epsilon, \tau, \sigma} + (y_0 \partial_x - x_0 \partial_y)u^{\epsilon, \tau, \sigma} |\leq $$
$$\leq \max_{|y-y_0|^2+ |x_0-x|^2\leq 1} |Y_2 u_0 + (y_0 \partial_x - x_0 \partial_y)u_0 |\leq $$
$$\leq \max_{|y-y_0|^2+ |x_0-x|^2\leq 1} |\partial_\theta u_0| + \max_{|y-y_0|^2+ |x_0-x|^2\leq 1} |y_0-y| |\partial_x u_0| + \max_{|y-y_0|^2+ |x_0-x|^2\leq 1} |x_0-x| |\partial_y u_0 |\leq C\lVert {\nabla}_{E} u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}.$$
\end{proof}
Let us conclude this section remarking that the proof of Theorem \ref{First part of existence} is a direct consequence of the two Theorems \ref{linfti} and \ref{stimagrad}.
\subsection{Existence result}
In order to extend to our setting the argument of Evans and Spruck in the proof of \cite{A6}, as well as the proof of \cite{A3}, we need to let the three approximating parameters go to $0$: $\sigma \rightarrow 0$, $\tau \rightarrow 0$ and $\epsilon \rightarrow 0$.
Since the estimates we have established are uniform in all parameters, we immediately have the
existence of a vanishing viscosity solution:
\begin{theorem}\label{vanishvisc}
Assume that $u_0 \in \mathit{C}(\mathbb{R}^2 \times \mathit{S}^1)$ is Lipschitz continuous and satisfies (\ref{constantS}). Then there exists a vanishing viscosity solution $u \in \mathit{C}^{1,0}$ of (\ref{Smcf}), which satisfies the following properties:
\begin{equation}
\lVert u(\cdot, t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\leq C\lVert u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\end{equation}
\begin{equation}
\lVert {\nabla}_{E} u(\cdot, t) \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}
\leq C\lVert {\nabla}_{E} u_0 \rVert_{\mathcal{L}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1)}.
\end{equation}
\end{theorem}
\begin{proof}
Since $u_0$ is Lipschitz continuous and constant at infinity, the Euclidean gradient $\nabla_E u_0$ is bounded. Employing estimates (\ref{stima0}), (\ref{stima1})
and the Ascoli--Arzel\`{a} theorem, we can extract sequences $\{\sigma_k\}, \{\epsilon_k\},\{\tau_k\} \rightarrow 0$ of positive numbers such that $\frac{\epsilon_k}{\tau_k}\rightarrow 0$ and such that the corresponding solutions $\{u^k=u^{\epsilon_k,\tau_k, \sigma_k}\}_{k \in \mathbb{N}}$
converge uniformly on compact sets to a Lipschitz continuous function. Then, by definition, the limit is a Lipschitz continuous vanishing viscosity solution.
\end{proof}
We will now prove that this vanishing viscosity solution is indeed a viscosity solution:
\begin{theorem}\label{vivi}
Assume that $u_0 \in \mathit{C}(\mathbb{R}^2 \times \mathit{S}^1)$ is Lipschitz continuous and satisfies (\ref{constantS}). Then the vanishing viscosity solution detected in Theorem \ref{vanishvisc} is a viscosity solution $u \in \mathit{C}^{1,0}$ of (\ref{Smcf}).
\end{theorem}
\begin{proof}
In order to prove that $u$ is a viscosity solution we consider a function $\phi \in \mathit{C}^{\infty}(\mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty))$ and we suppose that $u-\phi$ has a strict local maximum at a point $(\xi_0,t_0)\in \mathbb{R}^2 \times \mathit{S}^1 \times [0,\infty)$.
Since $u$ is a Lipschitz continuous vanishing viscosity solution, it can be uniformly approximated
by solutions $(u^k)$ of the approximating Riemannian problems (see Theorem \ref{vanishvisc}).
As $u^k \rightarrow u$ uniformly near $(\xi_0,t_0)$, $u^k-\phi$ has a local maximum at a point $(\xi_k,t_k)$, with
\begin{equation}
(\xi_k,t_k)\rightarrow (\xi_0,t_0) \mbox{\quad as \quad} k\rightarrow \infty \label{sequenza punti}
\end{equation}
Since $u^{k}$ and $\phi$ are smooth, we have
$$\nabla_E u^k=\nabla_E \phi \mbox{\, , \,} \partial_t u^k=\partial_t \phi \mbox{\, and \,} \mathit{D}^2_E(u^k-\phi) \leq 0 \mbox{\, at \,} (\xi_k,t_k)$$
where $\mathit{D}^2_E$ is the Euclidean Hessian.
Thus
\begin{equation}
\partial_t \phi - \big(\delta_{ij} - \frac{X_i^{\epsilon_k}\phi X_j^{\epsilon_k}\phi}{|\nabla_{\epsilon_k} \phi|^2 + \tau_k^2} \big)X_i^{\epsilon_k}X_j^{\epsilon_k}\phi \leq 0 \mbox{\, at \,} (\xi_k,t_k)
\end{equation}
This inequality can be equivalently expressed in terms of the coefficients $A_{i,j}^{\epsilon,\tau}$ as follows. At the point $(\xi_k,t_k)$
\begin{eqnarray}
\partial_t \phi &-& A_{i,j}^{\epsilon_k,\tau_k}(\nabla_{\epsilon_k}\phi)X_i^{\epsilon_k}X_j^{\epsilon_k}\phi\\ &\leq& \partial_t u^k - A_{i,j}^{\epsilon_k,\tau_k}(\nabla_{\epsilon_k}u^k)X_i^{\epsilon_k}X_j^{\epsilon_k}(u^k+\phi-u^k) \leq 0 \label{equation in which we pass to limit}
\end{eqnarray}
If $\nabla_0{\phi}(\xi_0,t_0) \neq 0$, then also $\nabla_0{\phi}(\xi_k,t_k) \neq 0$ for sufficiently large $k$. Letting $k\rightarrow \infty$ we obtain from (\ref{equation in which we pass to limit}):
\begin{equation}
\partial_t \phi \leq \sum_{i,j=1}^{2}\big(\delta_{ij} - \frac{X_i\phi X_j\phi}{{|\nabla_0 \phi|}^2} \big) X_iX_j \phi \mbox{\, at \,} (\xi_0,t_0)
\label{important}\end{equation}
which implies that $u$ is a viscosity subsolution.\\
If $\nabla_0{\phi}(\xi_0,t_0) = 0$ then we set
$$\eta^{k}= \frac{\nabla_{\epsilon_k}\phi(\xi_k,t_k)}{\sqrt{|\nabla_{\epsilon_k}\phi(\xi_k,t_k)|^2+\tau_k^2}}$$
Up to a subsequence, there exists $\eta \in \mathbb{R}^3$, with $|\eta| \leq 1$, such that $\eta^k \rightarrow \eta$. Note that
$$|(\eta^k)_3|=\frac{\epsilon_k|X_3 \phi(\xi_k,t_k)|}{\sqrt{|\nabla_{\epsilon_k}\phi(\xi_k,t_k)|^2+\tau_k^2}}\leq \frac{(\epsilon_k/\tau_k)|X_3 \phi(\xi_k,t_k)|}{\sqrt{(\epsilon_k/\tau_k)^2\sum_{i=1}^{2}(X_i\phi(\xi_k,t_k))^2+1}}$$
Since the expression vanishes as $k \rightarrow \infty$ we have $\eta_3=0$. The PDE (\ref{equation in which we pass to limit}) now reads as:
$$\partial_t \phi(\xi_k,t_k) - \sum_{i,j=1}^{3}(\delta_{ij}-\eta_i^k \eta_j^k)X_i^{\epsilon_k}X_j^{\epsilon_k}\phi(\xi_k,t_k) \leq 0$$
so as $k \rightarrow \infty$ we obtain
\begin{equation}
\partial_t \phi(\xi_0,t_0) \leq \sum_{i,j=1}^{2}(\delta_{ij}-\eta_i \eta_j)X_iX_j\phi(\xi_0,t_0)
\label{important2}\end{equation}
concluding the proof for the case in which $u- \phi$ has a strict local maximum at the point $(\xi_0,t_0)$. If $u- \phi$ has a local maximum, but not necessarily a strict local maximum at $(\xi_0,t_0)$, we can repeat the argument above replacing $\phi(\xi,t)$ with $$\tilde{\phi}(\xi,t)= \phi(\xi,t)+|\xi-\xi_0|^4+(t-t_0)^4$$
and obtain again (\ref{important}) and (\ref{important2}). Consequently $u$ is a viscosity subsolution. That $u$ is a viscosity supersolution follows analogously.
\end{proof}
From the above result we can only say that there is a subsequence of $u^{\epsilon,\tau, \sigma}$ which is convergent to the vanishing viscosity solution $u$. In order to prove the uniqueness of the vanishing viscosity solution, we would need the sub-Riemannian analogue of the estimate established by Deckelnick and Dziuk in \cite{Deck}:
\begin{proposition}\label{1} There exist a constant $C>0$ and an exponent $\alpha \in (0,1)$, both independent of $\sigma, \tau$ and $\epsilon$,
such that:
\begin{equation}
\lVert u^{\epsilon,\tau, \sigma} - u\rVert_\infty \leq C \tau^\alpha
\end{equation}
\end{proposition}
Letting $\epsilon$ and $\sigma$ go to $0$ we also get:
\begin{equation}
\lVert u^{\tau}- u\rVert_\infty \leq C \tau^\alpha
\end{equation}
where $u^\tau$ is a solution of (\ref{mcfgraph}).
\section{Sub-Riemannian operators in image processing}\label{sec2}
In this section we first recall the cortical model of image completion proposed by Citti and Sarti in \cite{A4} and \cite{A5}. For simplicity we will focus only on the
image processing aspects of the problem, neglecting all the cortical ones.
Then we show that this mechanism
can be adapted to perform contour enhancement.
\subsection{Processing of an image in a sub-Riemannian structure}
\subsubsection{Lifting of the level lines of an image} \label{limage}
The cortical-based model of completion, proposed by Citti and Sarti in \cite{A4} and \cite{A5}, lifts each level line of a 2D image $I(x,y)$ in the retinal plane to a new curve in the group $SE(2)=\mathbb{R}^2 \times \textit{S}^1$ of rigid motions of the Euclidean plane, used to model the functional architecture of the primary visual cortex (V1). Precisely, if a level line of $I$ is represented as a parametrized 2-dimensional curve $\gamma_{2D}=(x(t),y(t))$ and the vector $\vec{X}_1=(\cos(\bar \theta(t)), \sin(\bar \theta(t)))$ is the unit tangent to the curve $\gamma_{2D}$ at the point $t$, we say that $\bar \theta(t)$ is the orientation of $\gamma_{2D}$ at the point $t$. Then the choice of this orientation lifts the 2D curve $\gamma_{2D}$ to a new curve: \begin{equation}(x(t),y(t)) \rightarrow (x(t),y(t),\bar \theta(t)) \in \mathbb{R}^2 \times \mathit{S}^1 \label{liftingeq} \end{equation}
as it is shown in figure \ref{liftfigure1}. By construction the tangent vector to any lifted curve $\gamma_{3D}=(x(t),y(t),\bar \theta(t)) $ in $SE(2)$ can be represented as a linear combination of the vector fields: $$X_1 =\cos(\theta) \partial_x + \sin(\theta) \partial_y$$
$$X_2=\partial_\theta.$$
In other words, we have associated to every point of $\mathbb{R}^2 \times S^1$ the vector space spanned by the vectors $X_1$ and $X_2$.
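In the cortical model the orientation $\bar \theta$ is selected by the maximal response of the simple cells; in the following sketch we approximate this selection, as a purely illustrative assumption, by the direction orthogonal to the smoothed image gradient (axis 0 of the array is read as $x$, and orientations are taken modulo $\pi$):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def lift_orientation(image, sigma=1.0):
    """Orientation theta_bar(x, y) of the level lines of a grayscale
    image, computed from the smoothed gradient: the tangent to a level
    line is orthogonal to the gradient (I_x, I_y)."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    Ix, Iy = np.gradient(smoothed)
    return np.arctan2(-Ix, Iy) % np.pi

# usage on a hypothetical image I: the lifted surface is the graph
# {(x, y, lift_orientation(I)[x, y])} in R^2 x S^1
\end{verbatim}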
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{lifting1.jpg}
\caption{A contour represented by the curve $\gamma_{2D}(t)$ is lifted into the roto-translation group, obtaining the red curve $\gamma_{3D}(t)$. The tangent space of the roto-translation group is spanned by the vectors $X_1$ and $X_2$.}
\label{liftfigure1}
\end{figure}
\subsubsection{The sub-Riemannian structure}
We will call horizontal plane and denote it as $HM$ the tangent plane generated by $X_1$ and $X_2$ at every point.
Let us also note that no lifted curve has a non-vanishing component in the orthogonal direction $$X_3= -\sin(\theta) \partial_x + \cos(\theta)\partial_y.$$
However derivations in the direction $X_3$ can be recovered by means of the commutator:
$$X_3= X_2 X_1 - X_1 X_2 = [X_2,X_1]= -\sin(\theta) \partial_x + \cos(\theta)\partial_y.$$
This condition ensures that $X_1$, $X_2$ and their commutators of any order span the Euclidean tangent space of $SE(2)$ at every given point, i.e. they satisfy the H\"{o}rmander condition, see \cite{Horm2}. Then the structure obtained with this lifting process is sub-Riemannian. On the plane $HM$ we define the metric $g_0$ which makes $X_1$ and $X_2$ orthonormal. Hence, if a vector $a= a_1 X_1 + a_2 X_2 \in HM$, its horizontal norm is:
\begin{equation}
\lvert a \rvert_0 = \sqrt{(a_1)^2 + (a_2)^2 }.
\end{equation}
The first classical properties of the distance in these spaces have been established by Nagel, Stein and Wainger (see \cite{Nagel}), and Gromov (see \cite{Gromov}). We refer to Hladky (see \cite{A8}) and the references therein for recent contributions.
In this setting the vector fields play the same role as derivatives in the standard setting.
Hence we will say that a function $u: \mathbb{R}^2 \times \mathit{S}^1 \rightarrow \mathbb{R}$ is of class $C^1$ in the sub-Riemannian sense (we will denote it as $u\in C^1_{SR}$) if $X_1 u$ and $X_2 u$ exist and are continuous. In this case we will call horizontal gradient the vector $\nabla _0 u$:
$$\nabla_0 u = (X_1 u) X_1+ (X_2 u) X_2.$$
From the definition stated before it follows that the norm of the horizontal gradient is: \begin{equation} |\nabla _0 u| = \sqrt{(X_1u)^2 + (X_2u)^2 }.\end{equation}
In other words the horizontal gradient is the projection of the standard Euclidean gradient of $u$ on the horizontal plane $HM$.
\subsubsection{Lifting of the image to a regular surface}\label{liftimage}
Since each level line of the image $I$ is lifted to a curve in the 3D cortical space,
the whole image is lifted to a graph $$(x,y)\rightarrow (x,y, \bar \theta(x,y)).$$
Clearly the graph of $\bar \theta$ can be interpreted as the zero level set of the function $u$
$$u(x,y,\theta)= \theta - \bar \theta(x,y),$$
and it can be identified as a regular surface in the
sub-Riemannian structure.
The notion of regular surface $S$ was first introduced by Franchi, Serapioni and Serra Cassano in \cite{A24}:
\begin{equation}
S = \{(x,y,\theta): u(x,y,\theta)=0 \mbox{\, and \,} \nabla_0 u(x,y,\theta) \not =0\}.
\end{equation}
Since the sub-Riemannian surface $S$ is a union of horizontal curves, we say that it is foliated by horizontal curves. The horizontal normal of $S$ is defined as $$\nu_0 = \frac{\nabla_0 u}{|\nabla_0 u|} .$$
Note that on
a smooth surface there can be points where the Riemannian gradient is not $0$, but its projection on the plane $HM$ vanishes:
$$\nabla_0 u =0.$$
Points with this property are called \textit{characteristic} points, and the horizontal normal is not defined at them. However, such points are not present in lifted surfaces.
\subsubsection{Diffusion and concentration algorithm}
We have seen in subsection \ref{liftimage} how to lift an image $I(x,y)$ to a surface $S$.
After that, we lift the gray levels of the image $I(x,y)$ to the function $$v(x,y,\bar\theta(x,y))=I(x,y)$$
defined on the surface. The surface $S$ and the function $v$ defined on $S$ will be processed through differential operators defined on $SE(2)$, which model the propagation of information in the cortex. More precisely two mechanisms operate on the lifted surface $S$:
\begin{enumerate}
\item[(a)] a sub-Riemannian diffusion along the vector fields $X_1$ and $X_2$, which models the propagation of information through the cortical lateral connectivity. This operator can be expressed as
$$\partial_t - X_1^2 - X_2 ^2$$
where $X_1^2$ expresses the second derivative in the direction $X_1$.
The operator is formally degenerate, in the sense that the quadratic form associated with its second order part has vanishing determinant at every point. It has been deeply studied starting from the classical works of H\"ormander in \cite{Hormy}, Rothschild and Stein in \cite{Roth} and Jerison \cite{Jerison}, and it is known that it is hypoelliptic. After that, a large literature has been produced on this type of operators, and we refer to \cite{Cap2} for a recent presentation of the state of the art.
\item[(b)] a concentration on the surface of maxima to model the non-maximal suppression mechanism and the orientation tuning.
\end{enumerate}
In the Euclidean setting Merriman, Bence and Osher proved in \cite{Merriman} the convergence of a similar two-step algorithm to the motion by curvature.
In Citti and Sarti \cite{A4} and \cite{A5} the authors studied the motion when (a) and (b) are
applied iteratively and proved that at each step the surface performs an increment in the normal direction
with speed equal to the sub-Riemannian mean curvature.
\subsubsection{Mean curvature flow}
The notion of curvature of a $\mathit{C}^2$ surface at non characteristic points is already well understood, see \cite{Garofalo}, \cite{Hladky}, \cite{Cheng}, \cite{Ritore}, \cite{Cap2}. It can be defined either as the first variation of the area functional, or as the limit of the mean curvatures of the Riemannian approximations, or as the horizontal divergence of the horizontal normal:
$$K_0= \mbox{div}_0 (\nu_0) = \mbox{div}_0 \bigg(\frac{\nabla_0 u}{|\nabla_0 u|} \bigg). $$
If each point of the surface evolves in the direction of the normal vector with speed equal to the mean curvature, we say that the surface evolves by mean curvature. From the previous expression of the curvature we formally get the following equation for the flow, which we call horizontal (or sub-Riemannian) mean curvature flow:
\begin{equation}\label{Smcf}
\left\{\begin{array}{lll}
u_t = \sum\limits_{i,j=1}^{2} \left( \delta_{i,j} - \frac{X^0_iuX^0_ju}{|\nabla_0 u|^2} \right) X^0_iX^0_ju & \mbox{ in } & \Omega \subset \mathbb{R}^2\times S^1 \\
u(\cdot ,0)=u_0 & &
\end{array}\right.
\end{equation}
where $\delta_{ij}$ is the Kronecker function.
An existence result for this equation was not known. In the next section we will provide an existence theorem, expressed in terms of viscosity solutions, which also allows us to handle characteristic points.
\subsubsection{Laplace-Beltrami flow}
Citti and Sarti also conjectured that, as a result of the previous mechanisms, the function $v(x,y,\bar\theta(x,y))$, which contains the gray-level values, evolves through the flow described by the Laplace--Beltrami operator $\Delta_{LB}$:
\begin{equation}\label{Smcl}
\left\{\begin{array}{lll}
v_t = \sum\limits_{i,j=1}^{2} \left( \delta_{i,j} - \frac{X^0_iuX^0_ju}{|\nabla_0 u|^2} \right)X^0_iX^0_jv & \mbox{ in } & \Omega \subset \mathbb{R}^2\times S^1 \\
v(\cdot ,0)=v_0. & &
\end{array}\right.
\end{equation}
From now on in order to simplify notations we will denote:
\begin{equation}\label{aij}A_{ij}^{0}(\nabla_0 u) = \delta_{i,j} - \frac{X^0_iuX^0_ju}{|\nabla_0 u|^2} , \quad i,j=1, 2.\end{equation}
Let us note that the described equations become degenerate and the solutions are regular only along the directions of the foliation.
\subsection{Enhancement and Inpainting in Sub-Riemannian geometry} \label{section algorithm}
\subsubsection{Inpainting of missing parts of the image}
In the previous section we described an algorithm proposed in \cite{A4} for restoring damaged portions of an image, where the corrupted set $\omega$ is known a priori.
\begin{itemize}
\item[1]{The image $I(x,y)$ is lifted to a surface $S = \{(x, y, \bar \theta(x,y))\}$ in the Lie group $SE(2)$ of rotations and translations, and the gray levels of the image $I(x,y)$ to a function $v(x,y,\bar\theta(x,y))$ defined on $S$. In the lifting the corrupted part of the image becomes $\Omega= \omega\times S^1$, where no surface is defined.}
\item[2]{The surface $S$ and $v(x,y,\bar\theta(x,y))$ are processed via the algorithm of diffusion and concentration in the corrupted region $\Omega$, where we impose Dirichlet boundary conditions. This leads to the motion by mean curvature of the surface $S$ and to a Laplace Beltrami flow for $v(x,y,\bar\theta(x,y))$.}
\item[3]{The final result is obtained by re-projecting onto the plane of the image the value of the intensity $v(x,y,\bar\theta(x,y))$.}
\end{itemize}
The algorithm has been implemented in \cite{GCS} via a diffusion and concentration method,
while it has been implemented via the curvature equation in \cite{A4}.
\subsubsection{Enhancement of boundaries}
One of the aims of this paper is to extend the previous completion algorithm to the problem of contour enhancement. The aim of this technique is to provide a regularization in the direction of the boundaries, making them clearer and brighter and eliminating noise. We refer to the papers of Duits \cite{DFI},\cite{DFII} for some results of image enhancement in this space. Precisely, he lifts the image $I$ to the 3D feature space, using an invertible map defined through
Fourier analysis. The lifted version of the image $I$ is processed in the 3D space
and then reprojected on the 2D plane to recover an enhanced version of the image $I$. In particular he also provides results of enhancement in the presence of bifurcations or crossings.
In this paper we face the same problem adapting the algorithm recalled in the previous section.
\begin{itemize}
\item[1]{First we lift the level lines of an image $I(x,y)$ to a surface $S = \{(x, y, \bar \theta(x,y))\}$ and we lift the gray levels of $I(x,y)$ to a function $v(x, y, \bar \theta(x,y))$ always defined on $S$. }
\item[2]{Then we process the surface $S$ via a mean curvature flow and $v$ via a Laplace-Beltrami flow. In order to perform enhancement we propose here to let equations (\ref{Smcf}) and (\ref{Smcl}) evolve in the full domain $\mathbb{R}^2 \times S^1$. Let us remark that lifting the image to the 3D group allows us to solve the problem of crossing elongated structures.
Indeed if two lines cross in the 2D space and have different orientations, they are lifted to the 3D space to two different planes, allowing completion and enhancement. The directional diffusion will give place to a regularization only in the direction of contours.}
\item[3]{Finally we project into the plane of the image the value of the gray intensity $v(x,y,\bar\theta(x,y))$; a minimal sketch of this re-projection step is shown right after this list.}
\end{itemize}
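The sketch of the re-projection of step 3 is the following; the array layout and the discretization of the orientation axis over $[0,\pi)$ are illustrative assumptions:
\begin{verbatim}
import numpy as np

def reproject(v, theta_bar):
    """Enhanced 2D image obtained by sampling the processed intensity
    v (shape (nx, ny, n_theta), uniform grid on [0, pi)) at the lifted
    orientation theta_bar (shape (nx, ny), values in [0, pi))."""
    n_theta = v.shape[2]
    k = np.rint(theta_bar / np.pi * n_theta).astype(int) % n_theta
    ii, jj = np.indices(theta_bar.shape)
    return v[ii, jj, k]
\end{verbatim}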
\section{Introduction}
\input{introductionandsubriemannian}
\section{Existence of viscosity solutions}\label{sec3}
\input{existence}
\section{Numerical scheme}\label{sec4}
\input{numerical}
\section{Results}\label{sec5}
\input{resultsandconclusions}
\section{Acknowledgements}
The research leading to these results has received funding from the People Programme (Marie
Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA
grant agreement n607643. Author GS has received
funding from the European Research Council under the ECs 7th Framework
Programme (FP7/2007 2014)/ERC grant agreement No. 335555.
\subsection{Inpainting results}
Since the lifting procedure is based on the selection of the orientation of level lines at every point, the algorithm performs particularly well for completing gray level images which have non-vanishing gradient at every point. Hence we will start with a simple artificial image of this type. The
algorithm performs very well for completion of curved level lines.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{mac1.png}\hspace{.02\textwidth}
\includegraphics[width=0.3\textwidth]{mac2.png}\hspace{.02\textwidth}
\caption{An example of completion performed by the algorithm. In this artificial image the image gradient is lifted in the $\mathbb{R}^2 \times S^1$ space and the black hole is completed by mean curvature flow. Since the level lines of the image are approximately circular, the algorithm performs very well.}
\end{figure}
We will now test the algorithm on natural images. In the first image a black hole is present,
and the algorithm correctly reconstructs the missing part of the image:
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{campanile.png}\hspace{.2\textwidth}\\
\caption{Completion result on a real image through sub-Riemannian mean curvature flow in $\mathbb{R}^2 \times S^1$, as described in the paper.}
\end{figure}
The model of \cite{A4} studied here performs
completion via the curvature flow. Very recently Boscain et al. \cite{Boscain}
tried to replace this nonlinear equation by simple diffusion.
In figure \ref{fig:isokernels1} left we consider an image courtesy of U. Boscain \cite{Boscain},
partially occluded by a grid and show the results of completion performed in \cite{Boscain} (second image from left), by the heat equation on the 2D space (third image from left) and by the Citti and Sarti model (right).
A detail is shown in figure \ref{detail1}.
Since the considered image is a painting, extremely smooth, with low contrast,
the 2D heat equation is already able to perform a simple version of completion.
In this case the implementation of sub-Riemannian diffusion \cite{Boscain}
provides a worse result, while the curvature model correctly reconstructs
the missing contours and level lines.
\begin{figure}
\centering
\includegraphics[width=.22\textwidth]{Fig002c.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{creatoreBoscain.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{Fig002c_heat.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{Fig002c_inpainted.png}\\
\caption{Left: an occluded image (courtesy of U. Boscain \cite{Boscain}); second image from left: the image processed in \cite{Boscain}; third image from left: the same image processed through the heat equation; right: the image inpainted using the Citti-Sarti algorithm.}
\label{fig:isokernels1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.22\textwidth]{det_orig_creat.png}\hspace{.02\textwidth}
\includegraphics[width=0.22\textwidth]{det_bosc_creat.png}\hspace{.02\textwidth}
\includegraphics[width=0.22\textwidth]{det_heat_creat.png}\hspace{.02\textwidth}
\includegraphics[width=0.22\textwidth]{det_inpaint_creat.png}
\caption{A detail of the previous image. Left: the original image (courtesy of U. Boscain \cite{Boscain}); second image from left: the image processed in \cite{Boscain}; third image from left: the image processed through the heat equation; right: the image inpainted using the proposed algorithm.}
\label{detail1}
\end{figure}
In figure \ref{fig6} (and in the detail taken from it in figure \ref{fig7})
we consider another example taken from the same paper.
In this image the grid of missing points is larger,
and the previous effect is even more evident.
\begin{figure}
\centering\includegraphics[width=.22\textwidth]{FigE7c.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{FigE7b.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{FigE7c_heat.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{FigE7c_inpainted.png}
\caption{Left: the original image (courtesy of U. Boscain \cite{Boscain}); second image from left: the image processed in \cite{Boscain}; third image from left: the image processed through the heat equation; right: the image inpainted using the proposed algorithm.}
\label{fig6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.22\textwidth]{det_eye_orig1.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{det_eye_bosc1.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{det_eye_heat1.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{det_eye_inpaint1.png}\\
\caption{Detail of the previous image. The result of our algorithm (rightmost) preserves the circular level lines of the iris, while simple sub-Riemannian diffusion (second from left) destroys them; simple Euclidean diffusion (third from left) performs at an intermediate level.}
\label{fig7}
\end{figure}
In a more recent paper Boscain et al. introduced a linear diffusion
with coefficients depending
on the gradient of the initial image (see \cite{Boscain2}), which they call heuristic.
In figure \ref{boscainprandi} we compare the results obtained with this model,
with the heat equation on the image plane, and with the strongly geometric model
of Sarti and Citti.
\begin{figure}
\includegraphics[width=.22\textwidth]{FigE5c.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{prandi.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{FigE5c_heat.png}\hspace{.02\textwidth}
\includegraphics[width=.22\textwidth]{FigE5c_inpainted.png}\\
\caption{Left: the occluded image. Then, from left to right: the results from \cite{Boscain2}, from the 2D heat equation, and from our model.}
\label{boscainprandi}
\end{figure}
We then test our implementation on piecewise constant images. Since the gradient is $0$ in a large
part of the image, the lifted gradient is not defined there.
On the other hand, since the lifting mimics the behavior of the simple cells of the V1 cortical layer,
the Citti and Sarti algorithm is always applied to a smoothed version of the image. We have applied it to a classical toy problem proposed, for example, by Bertalmio, Sapiro, Caselles and Ballester in \cite{Bertalmio}. Results are shown in figure \ref{berta}.
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{bert_occ.png}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{bert_inpainted.png}\hspace{.02\textwidth}
\caption{Inpainting a piecewise constant image with the Citti--Sarti algorithm.}\label{berta}
\end{figure}
In figure \ref{farfalla} we test our method
on an image taken from the survey \cite{inpainting}.
The present reconstruction is correct in the parts of the image characterized by strong boundaries,
but the results of \cite{inpainting}, obtained with the model of Masnou and Morel (see \cite{Morel}), seem to be better. The main point
is boundary detection, which is very accurate in the model of Masnou and Morel, while here the boundaries are detected with a gradient, after smoothing the image.
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{farfalla.png}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{farfallaMM.png}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{farf.jpg}
\caption{Left: the occluded image. Center: the result from \cite{inpainting} with the model of \cite{Morel}. Right: the result with our model.}\label{farfalla}
\end{figure}
\subsection{Enhancement results}
We show in this section results of the application of the enhancement method introduced in Section 2.2.2. Let us recall that enhancement consists of an image filtering that emphasizes directionally coherent structures. In contrast with the completion problem, there is no part of the image to be disoccluded, and all parts of the initial datum are evolved.
Figure \ref{fig:isokernels11} shows a medical image of blood vessels to be filtered in order to reconstruct the fragmented vessels (courtesy of R. Duits \cite{DFII}). The second image from left shows the enhancement computed using CED-OS, see \cite{DFII}, while the third image shows the result obtained using the proposed method.
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{chicken1_orig.pdf}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{chicken1_cedos.pdf}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{chickenNostro.png}
\caption{From left to right: the original image, courtesy of R. Duits \cite{DFII}; the enhanced image using CED-OS, see \cite{DFII}; and the enhanced image obtained using the proposed method.}
\label{fig:isokernels11}
\end{figure}
Finally, we show an example of the combination of the completion and enhancement techniques. We see in this case that enhancement homogenizes the original non-occluded part with the reconstructed one.
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{eye_original.png}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{eye_inpainted.png}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{eye_subR.png}
\caption{Left: the original image (courtesy of U. Boscain \cite{Boscain}); center: the image inpainted using the proposed algorithm; right: the image inpainted and enhanced with this algorithm.}
\label{fig:isokernels21}
\end{figure}
We then show a detail of the previous image in order to highlight the effects of the discussed techniques.
\begin{figure}
\centering
\includegraphics[width=.3\textwidth]{eye_original1.png}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{eye_inpainted1.png}\hspace{.02\textwidth}
\includegraphics[width=.3\textwidth]{eye_subR1.png}
\caption{Left: a detail of the original image (courtesy of U. Boscain \cite{Boscain}); center: the same detail of the image inpainted using the proposed algorithm; right: the same detail of the image inpainted and enhanced with this algorithm.}
\label{fig:detail}
\end{figure}
\section{Conclusions}
In this paper we have proved the existence of viscosity solutions of the mean curvature flow PDE in $\mathbb{R}^2 \times \mathit{S}^1$ with a sub-Riemannian metric. The flow has been approximated with the Osher and Sethian technique, and a sketch of the proof of convergence of the numerical scheme is provided. Results of completion and enhancement are obtained both on artificial and natural images. We also provide comparisons with other existing algorithms. In particular,
we have illustrated how the method can be used to perform enhancement, and how it leads to results comparable with the classical ones of Bertalmio, Sapiro, Caselles and Ballester in \cite{Bertalmio} and of Masnou and Morel in \cite{Morel}.
In the case of image completion we compared the technique with the recent results of Boscain, Chertovskih, Gauthier and Remizov in \cite{Boscain}. Furthermore, the method can be applied not only to inpainting problems but also in the presence of crossing edges; hence it can handle configurations which cannot be treated using the method proposed by Bertalmio, Sapiro, Caselles and Ballester in \cite{Bertalmio}.
\section{Introduction}
Historically, the theory of error-correcting codes has benefited greatly
from the use of techniques from algebra and algebraic geometry. Combinatorial
and graph-theoretic methods have also proven to be useful in the
construction and analysis of codes, especially within the last ten
years since the re-discovery of low-density parity-check codes. However,
one area of mathematics that has, surprisingly, only played a
relatively minor role in the development of coding theory is the
field of matroid theory. The reason this is surprising is that,
as anyone with even a basic understanding of the two fields would
realize, coding theory and matroid theory are very closely related.
The former deals with matrices over a finite field ${\mathbb F}$, and these objects
are also of fundamental importance in the latter, where they go
under the name of ${\mathbb F}$-representable matroids.
The connection with matroid theory has not gone unnoticed among
coding theorists. Indeed, as far back as 1976, Greene \cite{greene}
noticed that the Tutte polynomial of a matroid,
when specialized to a linear code ${\mathcal C}$,
was equivalent (in a certain sense) to the homogeneous weight-enumerator
polynomial $W_{{\mathcal C}}(x,y) = \sum_i A_i x^i y^{n-i}$, where $A_i$
is the number of words of weight $i$ in ${\mathcal C}$, and $n$ is
the length of ${\mathcal C}$. The MacWilliams identities are then a special
case of a known identity for the Tutte polynomial
\cite[Chapter~4]{cameron}. The connection with matroids was also exploited
by Barg \cite{barg} to derive MacWilliams-type identities for
generalized-Hamming-weight enumerators of a code.
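As a small illustration (ours, and not needed for what follows), the weight enumerator of a short code can be computed directly from a generator matrix by enumerating all $2^k$ codewords; the following Python sketch does exactly this.
\begin{verbatim}
import itertools
import numpy as np

def weight_enumerator(G):
    # coefficients A_0, ..., A_n of W(x, y) = sum_i A_i x^i y^(n-i)
    # for the binary code generated by the 0/1 numpy array G
    k, n = G.shape
    A = [0] * (n + 1)
    for m in itertools.product([0, 1], repeat=k):
        c = np.mod(np.dot(m, G), 2)   # the codeword mG over GF(2)
        A[int(c.sum())] += 1          # tally its Hamming weight
    return A

# For the [7,4] Hamming code this returns [1,0,0,7,7,0,0,1],
# i.e., W(x,y) = y^7 + 7 x^3 y^4 + 7 x^4 y^3 + x^7.
\end{verbatim}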
However, aside from such use of tools from matroid theory to re-derive
results in coding theory that had already been proved by other means,
each field seems to have had little impact on the other. Matroid theory
has derived its inspiration largely from graph theory, and its most
successful applications have traditionally been in the areas of
combinatorial optimization and network flows. Very recently, matroid
theory has found new applications in the rapidly evolving field
of network coding \cite{dougherty}.
On the other hand, coding theory (by which we mean the theory of
error-correcting codes, in contrast to the theory of network coding)
has been largely unconcerned with developments in
combinatorial optimization, as the fundamental problems
in the former seemed to be of a different nature from those in the latter.
However, the recent re-formulation of the maximum-likelihood (ML)
decoding problem, for a binary linear code over a discrete memoryless
channel, as a linear programming problem \cite{FWK} has opened a channel
through which matroid-theoretic results in combinatorial optimization can be
applied to coding theory. The key tool in these results is the use
of the decomposition theory of matroids initiated by Seymour \cite{Sey80},
\cite{Sey81}. Based on Seymour's seminal work, Gr\"otschel and Truemper
\cite{GT} showed that the minimization of a linear functional over
the cycle polytope of a binary matroid could be solved in polynomial
time for certain classes of matroids. This immediately implies
that for the corresponding families of codes, the ML decoding problem
can be solved in time polynomial in the length of the code. Given the
fact that the ML decoding problem is known to be NP-hard in
general \cite{BMvT}, the existence of ``non-trivial'' classes of codes
for which ML decoding can be implemented in polynomial time is
obviously a significant result. However, as we will show in this paper,
for a code family to which the Gr\"otschel-Truemper result applies,
either the dimension or minimum distance of the codes in the family
grows sub-linearly with codelength. Thus, such code families
are not good from a coding-theoretic perspective. However, they do illustrate
the important point that polynomial-time ML decoding is possible. Moreover,
the matroid-theoretic arguments used by Gr\"otschel and Truemper
do not rule out the possibility that there may exist other
code families for which polynomial-time ML decoding algorithms exist,
which are also good in terms of rate and minimum distance.
The primary goal of this paper is to provide an exposition of the ideas
needed to understand and apply the work of Gr\"otschel and Truemper.
As mentioned earlier, their work relies upon the machinery provided
by Seymour's matroid decomposition theory, and so we will first present
that theory in a coding-theoretic setting. Our presentation of this
decomposition theory will be of a tutorial nature. We have attempted
to keep the presentation self-contained to the extent possible;
we do not provide complete proofs of some of the difficult theorems
that form the basis of the theory.
We provide the relevant definitions and background from matroid theory
in Section~\ref{matroid_section} of this paper. As explained in that
section, binary matroids and binary linear codes are essentially the same
objects. So, techniques applicable to binary matroids are
directly applicable to binary linear codes as well. In particular,
matroid decomposition techniques can be specialized to codes.
Of central importance in matroid theory is the notion of matroid minors.
In the context of codes, a minor of a code ${\mathcal C}$ is any code that can
be obtained from ${\mathcal C}$ by a sequence of shortening and puncturing
operations. Minors have received little (if any) attention in coding theory,
and this seems to be a remarkable oversight
given the fact that they sometimes capture
important structural properties of a code. For example, the
presence or absence of certain minors (as stated precisely in
Theorem~\ref{graphic_code_thm}) decides whether or not a
code is graphic, \emph{i.e.}, has a parity-check matrix that is
the vertex-edge incidence matrix of some graph. Graphic codes
have been studied previously in the information theory literature
\cite{HB68},\cite{jungnickel}, but the excluded-minor
characterization of these codes appears to have been overlooked
in these studies.
In Section~\ref{conn_section}, we introduce a notion of
$k$-connectedness for codes, which is again a specialization of
the corresponding notion for matroids. This is closely related to
$k$-connectedness in graphs, and interestingly enough, is also
related to the trellis complexity of a code \cite{For2}.
We do not explore the latter relationship in any detail in this
paper, instead referring the reader to \cite{kashyap_SIAM},
\cite{kashyap_ITW}, where matroid methods are used to study the
structure of codes with low trellis complexity.
The notion of $k$-connectedness plays an important role in
Seymour's decomposition theory. An idea of why this is so can
be obtained from the simple case of 2-connectedness: it follows from
the relevant definitions that a code is not 2-connected if and only if
it is the direct sum of smaller codes. Similar statements can be made
for codes that are not 3- or 4-connected (or more precisely, not
internally 4-connected --- see Definition~\ref{int_4conn_def})
via the code-composition operations of 2-sum and 3-sum introduced by
Seymour \cite{Sey80}. These operations, as well as the $\overline{3}$-sum which
is in a sense the dual operation to the 3-sum, are explained in
detail in Section~\ref{decomp_section}.
The operations of 2-, 3- and $\overline{3}$-sum have the non-trivial property
that when two codes ${\mathcal C}_1$ and ${\mathcal C}_2$ are composed using one of
these sums to form a code ${\mathcal C}$, then ${\mathcal C}_1$ and ${\mathcal C}_2$ are
(up to code equivalence) minors of ${\mathcal C}$. The relationship between
$k$-connectedness and these sums can then be summarized as follows:
a binary linear code is 2-connected but not 3-connected
(resp.\ 3-connected, but not internally 4-connected) iff
it can be expressed as the 2-sum (resp.\ 3- or $\overline{3}$-sum) of
codes ${\mathcal C}_1$ and ${\mathcal C}_2$, both of which are equivalent to minors of ${\mathcal C}$.
It follows immediately from the above facts that
any binary linear code is either 3-connected and internally 4-connected,
or can be constructed from 3-connected, internally 4-connected minors
of it by a sequence of operations of coordinate permutation,
direct sum, 2-sum, 3-sum and $\overline{3}$-sum. In fact, given any code,
such a decomposition of the code can be obtained in time polynomial in
the length of the code.
This code decomposition theory has immediate applications to families
of codes that are minor-closed in the sense that for each code ${\mathcal C}$
in such a family ${\mathfrak C}$, any code equivalent to a minor of ${\mathcal C}$
is also in ${\mathfrak C}$. Indeed, a code ${\mathcal C}$ is in a minor-closed
family ${\mathfrak C}$ only if the indecomposable (\emph{i.e.}, 3-connected,
internally 4-connected) pieces obtained in the aforementioned
decomposition of ${\mathcal C}$ are in ${\mathfrak C}$.
The above necessary condition is also sufficient if the code family ${\mathfrak C}$
is additionally closed under the operations of direct sum, 2-sum, 3-sum and
$\overline{3}$-sum. Thus, membership of an arbitrary code in such a
family ${\mathfrak C}$ can be decided in polynomial time iff the membership in
${\mathfrak C}$ of 3-connected, internally 4-connected codes can be decided in
polynomial time. A formal statement of these and other related facts
can be found in Section~\ref{minor_closed_section} of our paper.
As an illustrative example, we also outline in
Section~\ref{minor_closed_section} one of the major applications
of Seymour's decomposition theory. This concerns the family of regular
codes which are codes that do not contain as a minor
any code equivalent to the $[7,4]$ Hamming code or its dual. Regular
codes are also characterized by the property that given any
parity-check matrix $H$ of such a code, the 1's in $H$ can be replaced
by $\pm 1$'s in such a way that the resulting $0/\pm 1$ matrix
is totally unimodular. A totally unimodular matrix is a real matrix all
of whose square submatrices have determinants in $\{0,1,-1\}$.
These matrices are of fundamental importance in integer linear
programming problems \cite{HK}. Seymour \cite{Sey80} proved that a
binary linear code is regular iff it can be decomposed into codes
that are either graphic, or duals of graphic codes,
or equivalent to a special $[10,5,4]$ code he called $R_{10}$.
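As an aside, total unimodularity of a small matrix can be checked directly from the definition; the following Python sketch (ours, exponential in the matrix size and intended only for small illustrative examples) does this.
\begin{verbatim}
import itertools
import numpy as np

def is_totally_unimodular(A):
    # check that every square submatrix of A has
    # determinant in {0, 1, -1}
    A = np.array(A)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True
\end{verbatim}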
The application of the decomposition theory to linear programming,
and in particular to ML decoding, is the subject of Section~\ref{LP_section}.
Feldman \emph{et al.}\ showed that the ML decoding problem for a
length-$n$ binary linear code ${\mathcal C}$ over a discrete memoryless channel
can be formulated as a minimization problem
$\min \sum_i \gamma_i c_i$, where $\gamma = (\gamma_1,\ldots,\gamma_n) \in {\mathbb R}^n$ is a
certain cost vector derived from the received word and
the channel transition probabilities, and the minimum is taken over
all codewords ${\mathbf c} = (c_1,\ldots,c_n)$ in ${\mathcal C}$. Now, if ${\mathcal C}$ is a
graphic code, then standard graph-theoretic techniques from combinatorial
optimization can be used to find the minimizing codeword in time
polynomial in $n$; a sketch of such an algorithm can be found in
Appendix~\ref{opt_app}. Gr\"otschel and Truemper \cite{GT} additionally showed
that this minimization could also be performed in polynomial time for
certain minor-closed code families that are ``almost-graphic'' in a certain
sense. Such a code family ${\mathfrak C}$ is characterized by the property that
there exists a \emph{finite} list of codes ${\mathfrak D}$ such that
each ${\mathcal C} \in {\mathfrak C}$ can be decomposed in polynomial time in such a way
that at each step of the decomposition, one of the pieces is either graphic
or in ${\mathfrak D}$.
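To fix ideas, the following Python sketch (ours; it enumerates all $2^k$ codewords and is therefore exponential, unlike the polynomial-time algorithms discussed in the text) implements this formulation directly: for a memoryless channel one may take $\gamma_i = \log\left( \Pr(y_i \,|\, c_i = 0)/\Pr(y_i \,|\, c_i = 1)\right)$, and ML decoding returns the codeword minimizing $\sum_i \gamma_i c_i$.
\begin{verbatim}
import itertools
import math
import numpy as np

def ml_decode_bruteforce(G, gamma):
    # gamma[i] = log( P(y_i | c_i = 0) / P(y_i | c_i = 1) );
    # return the codeword of the code generated by G that
    # minimizes sum_i gamma[i] * c_i (for illustration only)
    k = len(G)
    best, best_cost = None, math.inf
    for m in itertools.product([0, 1], repeat=k):
        c = np.mod(np.dot(m, G), 2)
        cost = float(np.dot(gamma, c))
        if cost < best_cost:
            best, best_cost = c, cost
    return best
\end{verbatim}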
Gr\"otschel and Truemper gave a polynomial-time algorithm
that takes as input a length-$n$ code ${\mathcal C}$ from an almost-graphic family
${\mathfrak C}$ and a cost vector $\gamma \in {\mathbb R}^n$, and constructs a
codeword ${\mathbf c} \in {\mathcal C}$ achieving $\min \sum_i \gamma_i c_i$ by
solving related minimization problems over the
pieces of the decomposition that are graphic or in ${\mathfrak D}$. This algorithm
is also outlined in Appendix~\ref{opt_app}. Thus, the ML decoding
problem can be solved in polynomial time for almost-graphic codes.
Gr\"otschel and Truemper also gave several examples of almost-graphic
families. Interestingly enough, one of these families is that
consisting of codes ${\mathcal C}$ for which the codeword polytope
(\emph{i.e.}, the convex hull in ${\mathbb R}^n$ of the codewords in the
length-$n$ code ${\mathcal C}$) is identical to the Koetter-Vontobel
fundamental polytope \cite{VK05} derived from the entire dual code
${\mathcal C}^\perp$.
Unfortunately, the one truly original result in this paper is a
negative result. We show that for codes in an almost-graphic
family, either their dimension or their minimum distance grows
sub-linearly with codelength. One important implication of this
is that decoding by linear programming, when applied
to any good error-correcting code, must inevitably hit upon
the occasional pseudocodeword, thus resulting in decoding failure.
We make some concluding remarks in Section~\ref{conclusion}.
Some of the lengthier or more technical proofs of results
from Sections~\ref{decomp_section} and \ref{LP_section} are
given in appendices to preserve the flow of the presentation.
\section{Matroids and Codes\label{matroid_section}}
We shall assume familiarity with coding theory; for relevant definitions,
see \cite{sloane}. We will mainly concern ourselves with binary linear
codes, and use standard coding-theoretic notation throughout this paper.
Thus, an $[n,k]$ code is a code of length $n$ and dimension $k$,
and an $[n,k,d]$ code is an $[n,k]$ code that has minimum distance $d$.
Given a code ${\mathcal C}$, $\dim({\mathcal C})$ denotes the dimension of ${\mathcal C}$,
and ${\mathcal C}^\perp$ denotes the dual code of ${\mathcal C}$.
The main purpose of this section is to introduce concepts from
matroid theory that are applicable to coding theory. We will largely follow
the definitions and notation of Oxley \cite{oxley}.
We begin with a definition of matroids.
\begin{definition}
A \emph{matroid} $M$ is an ordered pair $(E,{\mathcal I})$ consisting of a finite
set $E$ and a collection ${\mathcal I}$ of subsets of $E$ satisfying the following
three conditions:
\begin{itemize}
\item[(i)] $\emptyset \in {\mathcal I}$;
\item[(ii)] if $I \in {\mathcal I}$ and $J \subset I$, then $J \in {\mathcal I}$; and
\item[(iii)] if $I_1,I_2$ are in ${\mathcal I}$ and $|I_1| < |I_2|$, then there
exists\footnote{In this paper, we will use $A-B$ to denote the set difference
$A \cap B^c$. The more usual notation $A \setminus B$
has been reserved for the matroid operation of ``deletion''.}
$e \in I_2 - I_1$ such that $I_1 \cup \{e\} \in {\mathcal I}$.
\end{itemize}
\label{matroid_def}
\end{definition}
The set $E$ above is called the \emph{ground set} of the matroid $M$,
and the members of ${\mathcal I}$ are the \emph{independent sets} of $M$.
A maximal independent set, \emph{i.e.}, a set $B \in {\mathcal I}$ such that
$B \cup \{e\} \notin {\mathcal I}$ for any $e \in E - B$, is called
a \emph{basis} of $M$. It is a simple consequence of (iii) in
Definition~\ref{matroid_def} that all bases of $M$ have the same cardinality.
The cardinality of any basis of $M$ is defined to be the \emph{rank} of $M$,
denoted by $r(M)$.
A subset of $E$ that is not in ${\mathcal I}$ is called a \emph{dependent set}.
Minimal dependent sets, \emph{i.e.,} dependent sets all of whose
proper subsets are in ${\mathcal I}$, are called \emph{circuits}. It easily
follows from the definitions that a subset of $E$ is a dependent set
if and only if it contains a circuit.
A dependent set that can be expressed as a disjoint union of
circuits is called a \emph{cycle}.
The above definitions of independent and dependent sets, bases and rank
simply try to abstract the notion of independence and dependence, bases
and dimension, respectively, in a vector space over a field.
Indeed, the most important
class of matroids for our purposes is the class of binary matroids,
which are simply vector spaces over the binary field,
or to put it another way, binary linear codes.
Let $H$ be a binary $m \times n$ matrix, and let
${\mathbf v}_1,{\mathbf v}_2,\ldots,{\mathbf v}_n$ denote the column vectors of $H$.
Set $E = \{1,2,\ldots,n\}$ and take ${\mathcal I}$ to be the collection of
subsets $I = \{i_1,i_2,\ldots,i_s\} \subset E$
such that the sequence of vectors ${\mathbf v}_{i_1}, {\mathbf v}_{i_2}, \ldots, {\mathbf v}_{i_s}$
is linearly independent over the binary field ${\mathbb F}_2$.
It follows from elementary linear algebra that $(E,{\mathcal I})$
satisfies the definition of a matroid given above,
and thus defines a matroid which we shall denote by
$M[H]$. Note that $r(M[H])$ equals the rank (over ${\mathbb F}_2$) of the matrix $H$.
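To make this concrete, independence in $M[H]$ can be tested mechanically: a set $I$ of column indices is independent precisely when the submatrix of $H$ formed by those columns has ${\mathbb F}_2$-rank $|I|$. The following Python sketch (ours) implements this test via Gaussian elimination over ${\mathbb F}_2$.
\begin{verbatim}
def gf2_rank(rows):
    # F_2-rank of a 0/1 matrix given as a list of rows
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows))
                    if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2
                           for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def is_independent(H, I):
    # is the column set I independent in the binary matroid M[H]?
    sub = [[row[j] for j in I] for row in H]
    return gf2_rank(sub) == len(I)
\end{verbatim}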
A matroid $M = (E,{\mathcal I})$ is called \emph{binary}
if it is isomorphic to $M[H]$ for some binary matrix $H$. Here, we
say that two matroids $M = (E,{\mathcal I})$ and $M'= (E', {\mathcal I}')$ are
\emph{isomorphic}, denoted by $M \cong M'$, if there is a bijection
$\psi: E \rightarrow E'$ such that for all $J \subset E$,
it is the case that $J \in {\mathcal I}$ if and only if $\psi(J) \in {\mathcal I}'$.
A binary matrix $H$ is also the parity-check matrix of some binary
linear code ${\mathcal C}$. Note that $r(M[H]) = n - \dim({\mathcal C})$.
The code ${\mathcal C}$ and the binary matroid $M[H]$ are
very closely related. Recall from coding theory that a codeword
${\mathbf c} = (c_1 c_2 \ldots c_n) \in {\mathcal C}$, ${\mathbf c} \neq {\mathbf 0}$,
is called \emph{minimal} if its support $\mbox{\textsf{supp}}({\mathbf c}) = \{i: c_i = 1\}$
does not contain as a subset the support of
any other nonzero codeword in ${\mathcal C}$. It is easily seen that ${\mathbf c}$
is a minimal codeword of ${\mathcal C}$ iff its support
is a circuit of $M[H]$. It follows from this that
for any ${\mathbf c} \in \{0,1\}^n$, ${\mathbf c} \neq {\mathbf 0}$, we have
${\mathbf c} \in {\mathcal C}$ iff $\mbox{\textsf{supp}}({\mathbf c})$ is a cycle of $M[H]$.
Furthermore, a routine verification shows that for binary matrices
$H$ and $H'$, $M[H] = M[H']$ iff $H$ and $H'$ are parity-check matrices
of the same code ${\mathcal C}$. This allows us to associate a unique binary matroid
with each binary linear code ${\mathcal C}$, and vice versa.
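This correspondence is easy to check by machine for short codes: the following Python sketch (ours; brute force over all $2^n$ vectors, so suitable only for small $n$) extracts the circuits of $M[H]$ as the supports of the minimal codewords.
\begin{verbatim}
import itertools
import numpy as np

def circuits(H):
    # supports of the minimal codewords of the code with
    # parity-check matrix H; by the correspondence above,
    # these are exactly the circuits of M[H]
    H = np.array(H)
    n = H.shape[1]
    supports = [frozenset(i for i, b in enumerate(c) if b)
                for c in itertools.product([0, 1], repeat=n)
                if any(c) and not np.mod(H.dot(c), 2).any()]
    return [S for S in supports
            if not any(T < S for T in supports)]  # strict subset
\end{verbatim}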
Thus, binary matroids and binary linear codes are essentially the
same objects. In particular, two codes are
equivalent\footnote{In coding theory, two binary linear codes
are defined to be \emph{equivalent} if one can be obtained from
the other by a permutation of coordinates. In this paper, we will
use the notation $\pi({\mathcal C})$ to denote the code obtained by applying the
coordinate permutation $\pi$ to the code ${\mathcal C}$.} if and only if their
associated binary matroids are isomorphic.
This association between codes and binary matroids
allows us to use tools from matroid theory to study binary linear codes.
While many of the tools used to study matroids have their roots
in linear algebra, there is another source that matroid theory draws from,
namely, graph theory. Indeed, Whitney's founding paper on matroid theory
\cite{whitney} was an attempt to capture the fundamental properties of
independence that are common to graphs and matrices.
Let ${\mathcal G}$ be a finite undirected graph (henceforth simply ``graph'')
with edge set $E$. Define \textsf{cyc} to
be the collection of edge sets of cycles (\emph{i.e.}, closed walks) in ${\mathcal G}$.
Define $I \subset E$ to be independent if $I$ does not contain any
member of \textsf{cyc} as a subset. Equivalently, $I$ is independent if
the subgraph of ${\mathcal G}$ induced by $I$ is a forest. Setting ${\mathcal I}$ to be the
collection of independent subsets of $E$, it turns out that $(E,{\mathcal I})$
is a matroid \cite[Proposition~1.1.7]{oxley}. This matroid is
called the \emph{cycle matroid} of ${\mathcal G}$, and is denoted by $M({\mathcal G})$.
A matroid that is isomorphic to the cycle matroid of some graph is called
\emph{graphic}.
Clearly, the circuits of $M({\mathcal G})$ are the edge sets of
simple cycles (\emph{i.e.}, closed walks in which no intermediate
vertex is visited twice) in ${\mathcal G}$.
The nomenclature ``cycle'' for the disjoint union of circuits in a
matroid actually stems from its use in the context of graphs.
The bases of $M({\mathcal G})$ are the unions
of edge sets of spanning trees of the connected components
of ${\mathcal G}$. Hence, $r(M({\mathcal G})) = |V({\mathcal G})| - t$,
where $V({\mathcal G})$ is the set of vertices of ${\mathcal G}$,
and $t$ is the number of connected components of ${\mathcal G}$.
It is not hard to show that a graphic matroid is binary. Indeed, let $A$ be
the vertex-edge incidence matrix of ${\mathcal G}$. This is the matrix $[a_{i,j}]$
whose rows and columns are indexed by the vertices and edges, respectively,
of ${\mathcal G}$, where $a_{i,j} = 1$ if the $j$th edge is incident with the
$i$th vertex, and $a_{i,j} = 0$ otherwise. It may be verified that
$M({\mathcal G}) \cong M[A]$ (see, \emph{e.g.}, \cite[Proposition~5.1.2]{oxley}).
Given a graph ${\mathcal G}$, we will denote by ${\mathcal C}({\mathcal G})$ the
code associated (or identified) with the binary matroid $M({\mathcal G})$.
In other words, ${\mathcal C}({\mathcal G})$ is the binary linear code that has
the vertex-edge incidence matrix of ${\mathcal G}$ as
a parity-check matrix. We will refer to such codes as \emph{graphic codes},
and denote by $\Gamma$ the set of all graphic codes.
Graphic codes have made their appearance previously in the
information theory literature \cite{BH67,HB68} (also see \cite{jungnickel}
and the references therein).
The repetition code of length $n$ is a graphic code; it is the code
obtained from the $n$-cycle $C_n$, the graph consisting of a
single cycle on $n$ vertices.
However, not all binary codes are graphic. For example, it
can be shown that the [7,4] Hamming code is not graphic.
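As a quick sanity check (ours), one can construct the vertex-edge incidence matrix of the $n$-cycle $C_n$ and observe directly that the associated code is the length-$n$ repetition code.
\begin{verbatim}
import numpy as np

def cycle_incidence(n):
    # vertex-edge incidence matrix of the n-cycle C_n:
    # edge j joins vertices j and (j+1) mod n
    A = np.zeros((n, n), dtype=int)
    for j in range(n):
        A[j, j] = 1
        A[(j + 1) % n, j] = 1
    return A

# Used as a parity-check matrix, each row of cycle_incidence(n)
# forces two cyclically adjacent bits to be equal, so the only
# codewords are the all-zeros and all-ones words: the length-n
# repetition code, as claimed above.
\end{verbatim}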
It is possible to give a precise characterization of
the codes that are graphic in terms of excluded minors,
a notion we need to first define.
There are two well-known ways of obtaining codes of shorter length from
a given parent code. One is via the operation of \emph{puncturing}, in which
one or more columns are deleted from a generator matrix of the parent code
\cite[p.\ 28]{sloane}. The second method is called
\emph{shortening}, and involves one or more columns being
deleted from a parity-check matrix of the parent code \cite[p.\ 29]{sloane}.
Given a code ${\mathcal C}$ of length $n$ with generator matrix $G$,
and a subset $J \subset \{1,2,\ldots,n\}$, we will denote by
${\mathcal C}/J$ the code obtained by puncturing the columns of $G$
with indices in $J$, and by ${\mathcal C} \setminus\! J$ the code obtained
by shortening at the columns of $G$ with indices in $J$.
Note that ${\mathcal C}/J$ is simply the restriction
of the code ${\mathcal C}$ onto the coordinates not in $J$, and
${\mathcal C} \setminus\! J = {({\mathcal C}^\perp / J)}^\perp$.
The notation, though potentially confusing, has been
retained from matroid theory, where the analogues of puncturing and
shortening are called \emph{contraction\/} and \emph{deletion}, respectively.
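For small codes both operations are easy to realize directly on the set of codewords; the following Python sketch (ours, exponential in the dimension) does so, with shortening realized as the restriction to the unshortened coordinates of the codewords vanishing on $J$.
\begin{verbatim}
import itertools
import numpy as np

def codewords(G):
    # all codewords of the binary code generated by the rows of G
    k = len(G)
    return {tuple(np.mod(np.dot(m, G), 2))
            for m in itertools.product([0, 1], repeat=k)}

def puncture(C, J):
    # C / J : delete the coordinates indexed by J
    return {tuple(c[i] for i in range(len(c)) if i not in J)
            for c in C}

def shorten(C, J):
    # keep the codewords that vanish on J, then delete J
    return puncture({c for c in C
                     if all(c[i] == 0 for i in J)}, J)
\end{verbatim}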
\begin{definition}
A \emph{minor} of a code ${\mathcal C}$ is any code obtained from ${\mathcal C}$ via
a (possibly empty) sequence of shortening and puncturing operations.
\label{minor_def}
\end{definition}
It may easily be verified that the precise order in which the
shortening and puncturing operations are performed is irrelevant. Hence, any
minor of ${\mathcal C}$ may be unambiguously specified using notation of
the form ${\mathcal C}/ X \setminus\! Y$ (or equivalently, ${\mathcal C} \setminus\! Y / X$)
for disjoint subsets $X,Y \subset \{1,2,\ldots,n\}$; this notation
indicates that ${\mathcal C}$ has been punctured at the coordinates indexed by
$X$ and shortened at the coordinates indexed by $Y$.
The above definition allows a code to be a minor of itself. A minor
of ${\mathcal C}$ that is not ${\mathcal C}$ itself is called a \emph{proper minor} of
${\mathcal C}$. Minors have not received much attention in classical coding theory,
but they play a central role in matroid theory. We will not
touch upon the subject of minors of general matroids, leaving the reader to
refer to \cite[Chapter~3]{oxley} instead. However, we will briefly
mention how the matroid operations of deletion and contraction specialize
to the cycle matroids of graphs.
Let ${\mathcal G}$ be some graph, with edge set $E$. Given $e \in E$, define the graph
${\mathcal G} \shorten e$ to be the graph obtained by deleting the edge $e$
along with any vertices that get isolated as a result of deleting $e$.
Also, define ${\mathcal G} / e$ to be the graph obtained by
contracting $e$, \emph{i.e.}, deleting $e$ and identifying the two
vertices incident with $e$. The process of obtaining ${\mathcal G} / e$
from ${\mathcal G}$ is called \emph{edge contraction}, and that of obtaining
${\mathcal G} \setminus\! e$ from ${\mathcal G}$ is of course called \emph{edge deletion}.
These operations are inductively extended to define ${\mathcal G} \shorten J$ and
${\mathcal G} / J$ for any $J \subset E$. A minor of a graph ${\mathcal G}$ is any graph obtained
from ${\mathcal G}$ via a (possibly empty) sequence of edge deletions and contractions.
The operations of edge deletion and contraction are
the graphic analogues of code shortening and puncturing, respectively.
A mathematically precise statement of this is as follows:
given a graph ${\mathcal G}$ with edge set $E$,
and any $J \subset E$, we have
\cite[Equation 3.1.2 and Proposition~3.2.1]{oxley}
\begin{equation*}
{\mathcal C}({\mathcal G}) / J = {\mathcal C}({\mathcal G} / J) \ \ \text{ and } \ \
{\mathcal C}({\mathcal G}) \setminus\! J = {\mathcal C}({\mathcal G} \shorten J).
\end{equation*}
It follows that any minor of a graphic code is graphic.
Returning to the question of determining which codes are graphic, the
answer can be succinctly given in terms of a list of forbidden minors
by the following result of Tutte \cite{Tut59}.
\begin{theorem}[\cite{Tut59}]
A code is graphic if and only if it does not contain as a minor
any code equivalent to the [7,4] Hamming code or its dual,
or one of the codes ${\mathcal C}(K_5)^\perp$ and ${\mathcal C}(K_{3,3})^\perp$.
\label{graphic_code_thm}
\end{theorem}
In the statement of the above theorem, $K_5$ is the complete
graph on five vertices, while $K_{3,3}$ is the complete bipartite graph
with three vertices on each side. ${\mathcal C}(K_5)^\perp$ is the $[10,4,4]$
code with generator matrix
\begin{equation*}
\left[
\begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1
\end{array}
\right],
\end{equation*}
while ${\mathcal C}(K_{3,3})^\perp$ is the $[9,5,3]$ code with generator matrix
\begin{equation*}
\left[
\begin{array}{ccccccccc}
1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1
\end{array}
\right].
\end{equation*}
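The stated parameters of these two codes can be verified mechanically; the following brute-force Python sketch (ours) computes $[n,k,d]$ from a generator matrix with full row rank, such as the two matrices above.
\begin{verbatim}
import itertools
import numpy as np

def parameters(G):
    # [n, k, d] of the binary code spanned by the rows of G,
    # assuming G has full row rank (true for the matrices above)
    G = np.array(G)
    k, n = G.shape
    d = min(int(np.mod(np.dot(m, G), 2).sum())
            for m in itertools.product([0, 1], repeat=k)
            if any(m))
    return n, k, d

# For the two generator matrices above this returns
# (10, 4, 4) and (9, 5, 3), matching the quoted parameters.
\end{verbatim}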
A proof of the theorem can be found in \cite[Section~13.3]{oxley}. On a
related note, several authors have given algorithms for
deciding whether or not a given code is graphic \cite{BC80,BW88,Fuj80,Tut60},
\cite[Section~10.6]{truemper}. These algorithms run in time polynomial
in the size of the input, which can be a parity-check matrix for the code.
We will use this fact later in the paper.
\section{Connectedness\label{conn_section}}
As mentioned previously, matroid theory draws upon ideas from graph theory.
A key concept in graph theory is the notion of $k$-connectedness for graphs
\cite[Section~III.2]{bollobas}.
Given a graph ${\mathcal G}$, we will let $V({\mathcal G})$ denote
the set of its vertices. A graph is \emph{connected} if any pair of
its vertices can be joined by a path; otherwise, it is \emph{disconnected}.
A maximal connected subgraph of ${\mathcal G}$ is a \emph{connected component},
or simply \emph{component}, of ${\mathcal G}$. Let ${\mathcal G}-W$ denote
the graph obtained from ${\mathcal G}$ by deleting the vertices in
$W$ and all incident edges. If ${\mathcal G}$ is connected and, for some subset
$W \subset V({\mathcal G})$, ${\mathcal G} - W$ is disconnected, then we say that
$W$ is a \emph{vertex cut} of ${\mathcal G}$, or that $W$ \emph{separates} ${\mathcal G}$.
If ${\mathcal G}$ is a connected graph that has at least one pair of distinct
non-adjacent vertices, the \emph{connectivity} $\kappa({\mathcal G})$ of ${\mathcal G}$ is defined to
be the smallest integer $j$ for which ${\mathcal G}$ has a vertex cut $W$ with
$|W| = j$. If ${\mathcal G}$ is connected, but has no pair of distinct non-adjacent
vertices, $\kappa({\mathcal G})$ is defined to be $|V({\mathcal G})| - 1$. Finally, if ${\mathcal G}$
is disconnected, then we set $\kappa({\mathcal G}) = 0$. For an integer $k > 0$,
${\mathcal G}$ is said to be \emph{$k$-connected} if $\kappa({\mathcal G}) \geq k$. Thus, a
graph ${\mathcal G}$ with $|V({\mathcal G})| \geq 2$ is connected if and only if it is
1-connected.
The notion of $k$-connectedness of graphs can be extended to matroids,
but it has to be done carefully. One of the problems encountered when
attempting to do so is that 1-connectedness of graphs does not extend
directly to matroids. The reason for this is that for any
disconnected graph ${\mathcal G}_1$, there is a connected graph ${\mathcal G}_2$ such that
$M({\mathcal G}_1) \cong M({\mathcal G}_2)$ \cite[Proposition~1.2.8]{oxley}. So, the
link between $k$-connectedness in graphs and that defined below
for matroids begins with the case $k = 2$.
The definition we present of $k$-connectedness for matroids was formulated
by Tutte \cite{Tut66}. We will once again restrict our attention
to the case of binary matroids (\emph{i.e.}, codes) only. Let
${\mathcal C}$ be a binary linear code of length $n$. We will hereafter use $[n]$
to denote the set of integers $\{1,2,\ldots,n\}$, and for
$J \subset [n]$, we set $J^c = \{i \in [n]: i \notin J\}$.
To further alleviate notational confusion, for $J \subset [n]$, we will define
${\mathcal C} |_J$ to be the restriction of ${\mathcal C}$ onto its
coordinates indexed by $J$. Equivalently, ${\mathcal C} |_J = {\mathcal C} / J^c$, the
latter being the code obtained from ${\mathcal C}$
by puncturing the coordinates not in $J$.
\begin{definition}
For a positive integer $k$, a partition $(J,J^c)$ of $[n]$ is called
a \emph{$k$-separation of ${\mathcal C}$} if
\begin{equation}
\min\{|J|,|J^c|\} \geq k
\label{ksep_eq1}
\end{equation}
and
\begin{equation}
\dim({\mathcal C}|_J) + \dim({\mathcal C}|_{J^c}) - \dim({\mathcal C}) \leq k-1.
\label{ksep_eq2}
\end{equation}
If ${\mathcal C}$ has a $k$-separation, then ${\mathcal C}$ is said to be \emph{$k$-separated}.
\label{ksep_def}
\end{definition}
When equality occurs in (\ref{ksep_eq1}), $(J,J^c)$ is called a
\emph{minimal $k$-separation}. When equality occurs in (\ref{ksep_eq2}),
$(J,J^c)$ is called an \emph{exact $k$-separation}. Note that the
expression on the left-hand side of (\ref{ksep_eq2}) is always
non-negative, since $\dim({\mathcal C} |_J) + \dim({\mathcal C} |_{J^c}) \geq \dim({\mathcal C})$
for any $J \subset [n]$. This fact easily yields the following result.
\begin{lemma}
${\mathcal C}$ is 1-separated iff it is the direct sum of non-empty codes.
\label{1sep_lemma}
\end{lemma}
\begin{proof}
Since $\dim({\mathcal C} |_J) + \dim({\mathcal C} |_{J^c}) \geq \dim({\mathcal C})$ for any
$J \subset [n]$, we see from Definition~\ref{ksep_def} that
$(J,J^c)$ is a 1-separation of ${\mathcal C}$ iff $J,J^c$ are non-empty,
and $\dim({\mathcal C} |_J) + \dim({\mathcal C} |_{J^c}) = \dim({\mathcal C})$.
Hence, $(J,J^c)$ is a 1-separation of ${\mathcal C}$ iff $J,J^c$ are non-empty,
and ${\mathcal C}$ is the direct sum of ${\mathcal C}|_J$ and ${\mathcal C}|_{J^c}$.
\end{proof}
We now give the definition of $k$-connectedness of codes. Note that
this definition starts with $k = 2$.
\begin{definition}
For $k \geq 2$, a code ${\mathcal C}$ is defined to be \emph{$k$-connected}
if it has no $k'$-separation for any $k' < k$.
\label{conn_def}
\end{definition}
\begin{example}
Let ${\mathcal C}$ be the [7,3,4] simplex code with generator matrix
$$
G = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 0 & 1
\end{array}
\right].
$$
Setting $J = \{1,2,3,7\}$, we see that $(J,J^c)$ forms a 3-separation
of ${\mathcal C}$. Indeed, the rank of the submatrix of $G$ formed by the
columns indexed by $J$ is 3, while the rank of the submatrix formed by the
columns indexed by $J^c$ is 2. In other words,
$\dim({\mathcal C} |_J) = 3$ and $\dim({\mathcal C} |_{J^c}) = 2$.
Thus, both (\ref{ksep_eq1}) and
(\ref{ksep_eq2}) are satisfied with equality, which makes
$(J,J^c)$ a minimal as well as an exact separation.
It may be verified (for example, by exhaustive search) that there are
no 1- or 2-separations of ${\mathcal C}$, and that all 3-separations are minimal
and exact. In particular, ${\mathcal C}$ is 2- and 3-connected, but not 4-connected.
\label{simplex_example}
\end{example}
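For concreteness, the claims made in Example~\ref{simplex_example} can be
checked by the following exhaustive search. This is purely an illustrative
sketch (in Python), not part of the formal development: codes are represented
by $0/1$ generator matrices, and the function names are ours.
\begin{verbatim}
import itertools

G = [[1,0,0,0,1,1,1],
     [0,1,0,1,0,1,1],
     [0,0,1,1,1,0,1]]   # the [7,3,4] simplex code of the example
n, k = 7, 3

def rank_gf2(rows):
    # Gaussian elimination over GF(2); vectors are 0/1 lists
    rows = [r[:] for r in rows if any(r)]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def dim_restriction(G, J):
    # dim(C|_J) = rank of the columns of G indexed by J
    return rank_gf2([[row[j] for j in J] for row in G])

for kk in (1, 2, 3):
    seps = [J for size in range(kk, n - kk + 1)
              for J in itertools.combinations(range(n), size)
              if dim_restriction(G, J)
                 + dim_restriction(G, [j for j in range(n) if j not in J])
                 - k <= kk - 1]
    # each separation is counted twice (once as J, once as its complement);
    # the output shows no 1- or 2-separations, and only 3-separations
    print(kk, len(seps))
\end{verbatim}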
The quantity, $\dim({\mathcal C}|_J) + \dim({\mathcal C}|_{J^c}) - \dim({\mathcal C})$, appearing
on the left-hand side of (\ref{ksep_eq2}) in Definition~\ref{ksep_def}
also arises as part of the definition of the state-complexity profile
of a minimal trellis representation of a code
\cite{For1}--\cite{For3},\cite{horn},\cite{Ksch}.
To be precise, given a length-$n$ code ${\mathcal C}$, the \emph{state-complexity
profile} of ${\mathcal C}$ \cite[Equation (1)]{horn} is defined to be the vector
${\mathbf s}({\mathcal C}) = (s_0({\mathcal C}),\ldots,s_n({\mathcal C}))$, where $s_i({\mathcal C})
= \dim({\mathcal C}|_J) + \dim({\mathcal C}|_{J^c}) - \dim({\mathcal C})$ for $J = [i] \subset [n]$.
Here, $[0]$ is defined to be the empty set $\emptyset$. It is known
\cite{For2} that ${\mathbf s}({\mathcal C}) = {\mathbf s}({\mathcal C}^\perp)$, or equivalently,
for any $J \subset [n]$,
$$
\dim({\mathcal C}|_J) + \dim({\mathcal C}|_{J^c}) - \dim({\mathcal C}) =
\dim({\mathcal C}^\perp |_J) + \dim({\mathcal C}^\perp |_{J^c}) - \dim({\mathcal C}^\perp).
$$
As a result, we obtain the interesting and useful fact,
stated in the proposition below, that $k$-connectedness
is a property that is invariant under the operation
of taking code duals.
\begin{proposition}
Let ${\mathcal C}$ be a binary linear code of length $n$.
For any $k \geq 1$, a partition $(J,J^c)$ of $[n]$ is a $k$-separation
of ${\mathcal C}$ iff it is a $k$-separation of ${\mathcal C}^\perp$.
Therefore, for any $k \geq 2$,
${\mathcal C}$ is $k$-connected iff ${\mathcal C}^\perp$ is $k$-connected.
\label{conn_prop}
\end{proposition}
Consider again the simplex code of length 7 from
Example~\ref{simplex_example}. By Proposition~\ref{conn_prop} above,
its dual --- the $[7,4]$ Hamming code --- is also
2- and 3-connected, but not 4-connected.
The link between graph and code $k$-connectedness is
strong, but they are not equivalent notions. The closest relation
between the two occurs when $k=2$.
If ${\mathcal G}$ is a loopless graph without isolated vertices and $|V({\mathcal G})| \geq 3$,
then ${\mathcal C}({\mathcal G})$ is 2-connected iff ${\mathcal G}$ is a 2-connected graph
\cite[Proposition~4.1.8]{oxley}.
To describe the relation between graph and code connectedness in general,
we define the \emph{connectivity}, $\lambda({\mathcal C})$, of a code ${\mathcal C}$
to be the least positive integer $k$ for which there is a
$k$-separation of ${\mathcal C}$, if some $k$-separation exists for ${\mathcal C}$;
$\lambda({\mathcal C})$ is defined to be $\infty$ otherwise.
Note that ${\mathcal C}$ is $k$-connected iff $\lambda({\mathcal C}) \geq k$, and
by Proposition~\ref{conn_prop}, $\lambda({\mathcal C}) = \lambda({\mathcal C}^\perp)$.
It can be shown \cite[Corollary~8.2.7]{oxley}
that for a connected graph ${\mathcal G} \neq K_3$
having at least three vertices,
$$
\lambda({\mathcal C}({\mathcal G})) = \min\{\kappa({\mathcal G}),g({\mathcal G})\},
$$
where $g({\mathcal G})$ denotes the girth (length of shortest cycle) of ${\mathcal G}$.
Now, our reason for presenting a notion of connectedness for codes
is not just that it extends an idea from graph theory.
Certain methods of code composition
have been developed in matroid theory that relate to 2- and 3-separations.
These code composition methods can be considered to be generalizations of
direct sums, and they allow the result of Lemma~\ref{1sep_lemma}
to be extended in a non-trivial manner, paving the way for
the powerful decomposition theory of binary matroids initiated
by Paul Seymour \cite{Sey80}. This decomposition theory allows
one to decompose a binary linear code into smaller codes in
a reversible manner, in such a way that the smaller codes
are equivalent to minors of the original code.
As we shall describe in detail in the next section, to find such
a decomposition of a code, we need to find 1-, 2- or 3-separations
in the code, if such separations exist.
For any fixed positive integer $k$, there are polynomial-time algorithms
known (see \cite[Section 8.4]{truemper}) that, given a binary linear code
${\mathcal C}$, either find a $k$-separation of ${\mathcal C}$,
or conclude that no such separation exists. Here, by ``polynomial-time
algorithm,'' we mean an algorithm that runs in time polynomial
in the length of ${\mathcal C}$. For instance, the problem of deciding the
existence of 1-separations in a code ${\mathcal C}$
is almost trivial. To do so, one takes a matrix $A$ that is either
a generator matrix or a parity-check matrix of ${\mathcal C}$,
brings $A$ to reduced row-echelon form (rref),
removes all-zero rows if they exist, and finally constructs
a certain bipartite graph $BG(A)$. For an $m \times n$ matrix $A$,
the graph $BG(A)$ is defined as follows\footnote{If $A$ is a parity-check
matrix of the code, then $BG(A)$ is simply the corresponding Tanner graph.}:
the vertex set of $BG(A)$ consists of a set of $n$ left vertices
$\{l_1,\ldots,l_n\}$ and a set of $m$ right vertices
$\{r_1,\ldots,r_m\}$; an edge connects
the vertices $l_j$ and $r_i$ iff the $(i,j)$th entry of $A$ is 1.
The code ${\mathcal C}$ is 2-connected (\emph{i.e.},
has no 1-separation) iff $BG(A)$ is connected
\cite[Lemma~3.3.19]{truemper}.
If ${\mathcal C}$ is not 2-connected, the connected components of $BG(A)$
induce the required 1-separation.
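A sketch of this test, under the same illustrative conventions as before
(the function name is ours, and the input matrix is assumed to be in rref
with all-zero rows removed), is the following:
\begin{verbatim}
from collections import deque

def one_separation(A):
    # Returns a 1-separation (J, Jc) of the code with matrix A, or
    # None if BG(A) is connected (i.e., the code is 2-connected).
    m, n = len(A), len(A[0])
    adj = {v: [] for v in range(n + m)}   # left: 0..n-1, right: n..n+m-1
    for i in range(m):
        for j in range(n):
            if A[i][j]:
                adj[j].append(n + i)
                adj[n + i].append(j)
    seen, queue = {0}, deque([0])         # BFS from the first left vertex
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    J = sorted(v for v in seen if v < n)  # coordinates in this component
    if len(J) == n:
        return None
    return J, [j for j in range(n) if j not in J]
\end{verbatim}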
In general, for fixed integers $k,l$ with $l \geq k$,
the problem of finding a $k$-separation $(J,J^c)$ of a code,
with $\min\{|J|,|J^c|\} \geq l$, if it exists,
can be solved in time polynomial in the length of the code, by
an algorithm due to Cunningham and Edmonds (in \cite{cunningham}).
We sketch the idea here. The algorithm is based on the fact that
the following problem can be solved in time polynomial in $n$:
for codes ${\mathcal C}_1$ and ${\mathcal C}_2$ each of length $n$, find
a partition of $[n]$ that achieves
\begin{equation}
\min\{\dim({\mathcal C}_1 |_{J_1}) + \dim({\mathcal C}_2 |_{J_2}):\ (J_1,J_2)
\text{ is a partition of } [n] \}.
\label{mat_int_problem}
\end{equation}
The above problem is solved using the \emph{matroid intersection
algorithm} \cite{E1}, \cite[Section~5.3]{truemper},
which we do not describe here.
The $k$-separation problem of interest to us is equivalent to the
following problem for a fixed integer $l \geq k$:
given a code ${\mathcal C}$ of length $n$, find a partition of $[n]$ that achieves
\begin{equation}
\min\{\dim({\mathcal C} |_{J_1}) + \dim({\mathcal C} |_{J_2}):\ (J_1,J_2)
\text{ is a partition of } [n],\ \min\{|J_1|,|J_2|\} \geq l \}.
\label{ksep_min1}
\end{equation}
Indeed, a $k$-separation $(J,J^c)$ of ${\mathcal C}$, with
$\min\{|J|,|J^c|\} \geq l$, exists iff the minimum in (\ref{ksep_min1})
is at most $\dim({\mathcal C}) + k-1$. Now, the minimization in (\ref{ksep_min1})
can be solved by finding, for each pair of disjoint $l$-element subsets
$E_1,E_2 \subset [n]$, the partition $(J_1,J_2)$ of $[n]$ that achieves
\begin{equation}
\min\{\dim({\mathcal C} |_{J_1}) + \dim({\mathcal C} |_{J_2}):\ (J_1,J_2)
\text{ is a partition of } [n],\ J_1 \supset E_1,\ J_2 \supset E_2 \}.
\label{ksep_min2}
\end{equation}
If (\ref{ksep_min2}) can be solved in time polynomial in $n$ for each
pair of disjoint $l$-element subsets $E_1,E_2 \subset [n]$, then
(\ref{ksep_min1}) can also be solved in time polynomial in $n$,
since there are $O(n^{2l})$ pairs $(E_1,E_2)$.
It turns out that (\ref{ksep_min2}) can be solved in time
polynomial in $n$ by converting it to a minimization of the form in
(\ref{mat_int_problem}), which, as mentioned above, can be solved in
polynomial time. The trick is to set ${\mathcal C}_1 = {\mathcal C} / E_2 \setminus\! E_1$
and ${\mathcal C}_2 = {\mathcal C} /E_1\setminus\! E_2$, which are both codes
of length $n - |E_1 \cup E_2|$. For notational convenience, we will
let the coordinates of ${\mathcal C}_1$ and ${\mathcal C}_2$ retain their
indices from ${\mathcal C}$, \emph{i.e.}, the set of coordinate indices for
${\mathcal C}_1$, as well as for ${\mathcal C}_2$, is $[n] - (E_1 \cup E_2)$.
It may easily be verified that for $J \subset [n]-(E_1 \cup E_2)$,
$\dim({\mathcal C}_i |_J) = \dim({\mathcal C} |_{J\cup E_i}) - \dim({\mathcal C}|_{E_i})$, $i = 1,2$.
Therefore, for any partition $(J_1,J_2)$ of $[n] - (E_1 \cup E_2)$,
setting $\overline{J_i} = J_i \cup E_i$, $i=1,2$, we have
\begin{equation}
\dim({\mathcal C}_1 |_{J_1}) + \dim({\mathcal C}_2 |_{J_2}) =
\dim({\mathcal C} |_{\overline{J}_1}) + \dim({\mathcal C} |_{\overline{J}_2}) -
\dim({\mathcal C}|_{E_1}) - \dim({\mathcal C}|_{E_2}),
\label{convert_eq}
\end{equation}
and $(\overline{J_1},\overline{J_2})$ is a partition of $[n]$. Conversely,
for any partition $(\overline{J_1},\overline{J_2})$ of $[n]$, setting
$J_i = \overline{J_i} - E_i$, $i=1,2$, we see that
$(J_1,J_2)$ forms a partition of $[n] - (E_1 \cup E_2)$, and
(\ref{convert_eq}) is once again satisfied.
Thus, we see that for a fixed pair of disjoint $l$-element subsets
$E_1, E_2 \subset [n]$, given a code ${\mathcal C}$ of length $n$,
if we set
${\mathcal C}_1 = {\mathcal C} / E_2 \setminus\! E_1$ and ${\mathcal C}_2 = {\mathcal C} /E_1\setminus\! E_2$,
then the minimum in (\ref{ksep_min2}) is equal to
$$
\min\{\dim({\mathcal C}_1 |_{J_1}) + \dim({\mathcal C}_2 |_{J_2}):\ (J_1,J_2)
\text{ is a partition of } [n] \}
+ \dim({\mathcal C} |_{E_1}) + \dim({\mathcal C} |_{E_2}).
$$
Since the minima in (\ref{mat_int_problem}) and (\ref{ksep_min2}) just
differ by a constant, the minimization problem (\ref{ksep_min2})
for given ${\mathcal C}$ and $(E_1,E_2)$
is equivalent to the minimization problem (\ref{mat_int_problem})
with ${\mathcal C}_1 = {\mathcal C} / E_2 \setminus\! E_1$ and ${\mathcal C}_2 = {\mathcal C} /E_1\setminus\! E_2$.
Therefore, since (\ref{mat_int_problem}) can be solved in time
polynomial in $n$, so can (\ref{ksep_min2}).
The above sketch does indeed give a polynomial-time algorithm for
determining $k$-separations $(J,J^c)$ with $\min\{|J|,|J^c|\} \geq l$,
but the complexity of the algorithm is $O(n^{2l+\alpha_l})$ for some
constant $\alpha_l$ that arises from the matroid intersection algorithm.
Clearly, this is not very practical even for $l = 3$ or 4, and as we
shall see in the next section, these values of $l$ come up in the
implementation of Seymour's decomposition theory.
A more efficient, albeit more involved,
algorithm for finding 2- and 3-separations is
described in \cite[Section~8.4]{truemper}.
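To fix ideas, the outer loop of the search just described can be organized
as in the following skeleton (illustrative Python; the inner minimization
over partitions is deliberately left as a black box standing in for the
matroid intersection algorithm, and all names are ours):
\begin{verbatim}
import itertools

def has_k_separation(n, dim_C, dim_restr, inner_min, k, l):
    # dim_restr(J): returns dim(C|_J); inner_min(E1, E2): black box
    #   returning the minimum over partitions (J1, J2) of
    #   dim(C1|_J1) + dim(C2|_J2), where C1 = C / E2 \ E1 and
    #   C2 = C / E1 \ E2 -- the matroid intersection step
    best = None
    for E1 in itertools.combinations(range(n), l):
        rest = [i for i in range(n) if i not in E1]
        for E2 in itertools.combinations(rest, l):
            val = (inner_min(E1, E2)
                   + dim_restr(E1) + dim_restr(E2))
            best = val if best is None else min(best, val)
    # a k-separation (J, Jc) with min{|J|, |Jc|} >= l exists iff the
    # overall minimum is at most dim(C) + k - 1
    return best is not None and best <= dim_C + k - 1
\end{verbatim}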
As a final remark in this section, we mention that the fact that there
exist algorithms for solving the minimization problems
(\ref{mat_int_problem})--(\ref{ksep_min2}) that run in time polynomial in $n$
neither contradicts nor sheds any further light on the NP-completeness
results for the closely related problems considered in \cite{horn}.
\section{Code Composition and Decomposition\label{decomp_section}}
The code composition/decomposition methods described in this section
were developed by Seymour in close analogy with a
method of composing/decomposing graphs called \emph{clique-sum}.
In a clique-sum, two graphs, each containing a $K_k$ subgraph
($k$-clique), are glued together
by first picking a $k$-clique from each graph, sticking the two
cliques together so as to form a single $k$-clique in the composite graph,
and then deleting some or all of the edges from this clique.
A formal description of clique-sum can be found in
\cite{Sey80} or in \cite[p.\ 420]{oxley}.
Our exposition of these code composition/decomposition techniques
is based on Seymour's paper \cite{Sey80}. Let ${\mathcal C}$ and ${\mathcal C}'$ be
binary linear codes of length $n$ and $n'$, respectively, and let
$m$ be an integer satisfying $0 \leq 2m < \min\{n,n'\}$. We first
define a code ${\mathcal C} \parallel_m {\mathcal C}'$ as follows:
if $G = [{\mathbf g}_1 \ {\mathbf g}_2 \ \ldots \ {\mathbf g}_n]$ and
$G' = [{\mathbf g}'_1 \ {\mathbf g}'_2 \ \ldots \ {\mathbf g}'_{n'}]$
are generator matrices of ${\mathcal C}$ and ${\mathcal C}'$, respectively,
then ${\mathcal C} \parallel_m {\mathcal C}'$ is the code with generator matrix
$$
\left[
\begin{array}{ccccccccc}
{\mathbf g}_1 & \ldots & {\mathbf g}_{n-m} & {\mathbf g}_{n-m+1} & \ldots & {\mathbf g}_n & {\mathbf 0} & \ldots & {\mathbf 0} \\
{\mathbf 0} & \ldots & {\mathbf 0} & {\mathbf g}'_1 & \ldots & {\mathbf g}'_m & {\mathbf g}'_{m+1} & \ldots & {\mathbf g}'_{n'}
\end{array}
\right].
$$
Thus, ${\mathcal C} \parallel_m {\mathcal C}'$ is a binary linear code of length $n+n'-m$.
This code is almost like a direct sum of ${\mathcal C}$ and
${\mathcal C}'$ except that the two component codes overlap in $m$ positions.
Indeed, when $m = 0$, ${\mathcal C} \parallel_m {\mathcal C}'$ is the direct sum of ${\mathcal C}$ and ${\mathcal C}'$.
Codewords of ${\mathcal C} \parallel_m {\mathcal C}'$ are of the form ${\mathbf c} \parallel_m {\mathbf c}'$, for
${\mathbf c} = c_1c_2\ldots c_n \in {\mathcal C}$ and ${\mathbf c}' = c'_1c'_2\ldots c'_{n'} \in {\mathcal C}'$,
where ${\mathbf c} \parallel_m {\mathbf c}' = \hat{c}_1\hat{c}_2\ldots\hat{c}_{n+n'-m}$ is defined to be
$$
\hat{c}_i = \left\{
\begin{array}{cl}
c_i & \text{for $1 \leq i \leq n-m$} \\
c_i+c'_{i-n+m} & \text{for $n-m+1 \leq i \leq n$} \\
c'_{i-n+m} & \text{for $n+1 \leq i \leq n+n'-m$}
\end{array}
\right.
$$
In other words, ${\mathbf c} \parallel_m {\mathbf c}'$ is the binary word of length $n + n' - m$
composed as follows: the first $n-m$ symbols of ${\mathbf c} \parallel_m {\mathbf c}'$
are equal to the first $n-m$ symbols of ${\mathbf c}$, the next $m$ symbols of
${\mathbf c} \parallel_m {\mathbf c}'$ are equal to the coordinatewise modulo-2 sum
of the last $m$ symbols of ${\mathbf c}$ and the first $m$ symbols of ${\mathbf c}'$,
and the last $n'-m$ symbols of ${\mathbf c} \parallel_m {\mathbf c}'$
are equal to the last $n'- m$ symbols of ${\mathbf c}'$.
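In code, under the same illustrative conventions, the composition reads:
\begin{verbatim}
def parallel_sum(c, cp, m):
    # c ||_m cp: the first n-m symbols of c, the mod-2 sum on the m
    # overlapping positions, then the last n'-m symbols of cp
    n = len(c)
    overlap = [c[n - m + i] ^ cp[i] for i in range(m)]
    return c[:n - m] + overlap + cp[m:]

# m = 0 recovers plain concatenation, i.e., the direct-sum case:
assert parallel_sum([1, 0, 1], [1, 1, 0, 1], 0) == [1, 0, 1, 1, 1, 0, 1]
\end{verbatim}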
From ${\mathcal C} \parallel_m {\mathcal C}'$, we derive a new code, which we temporarily denote
by ${\mathcal S}_m({\mathcal C},{\mathcal C}')$, by shortening at the $m$ positions where
${\mathcal C}$ and ${\mathcal C}'$ are made to overlap. To be precise, let
$J = \{n-m+1,n-m+2,\ldots,n\}$, and set ${\mathcal S}_m({\mathcal C},{\mathcal C}') =
({\mathcal C} \parallel_m {\mathcal C}') \setminus\! J$. Thus, ${\mathcal S}_m({\mathcal C},{\mathcal C}')$ is a code
of length $n+n'-2m$ which, by choice of $m$, is greater than $n$ and $n'$.
Once again, note that ${\mathcal S}_0({\mathcal C},{\mathcal C}') = {\mathcal C} \parallel_0 {\mathcal C}' = {\mathcal C} \oplus {\mathcal C}'$,
where $\oplus$ denotes direct sum.
We will actually only be interested in two instances of the above
construction (other than the direct-sum case of $m=0$), one of which is
presented next, and the other is introduced
later in Section~\ref{3sum_section}.
\subsection{2-Sums\label{2sum_section}} The ${\mathcal S}_m({\mathcal C},{\mathcal C}')$
construction with $m=1$ is called a 2-sum in certain special cases.
\begin{definition}
Let ${\mathcal C}$, ${\mathcal C}'$ be codes of length at least three, such that
\begin{itemize}
\item[(P1)] $0 \ldots 01$ is not a codeword of ${\mathcal C}$ or ${\mathcal C}^\perp$;
\item[(P2)] $10 \ldots 0$ is not a codeword of ${\mathcal C}'$ or ${{\mathcal C}'}^\perp$.
\end{itemize}
Then, ${\mathcal S}_1({\mathcal C},{\mathcal C}')$ is called the \emph{2-sum} of ${\mathcal C}$ and ${\mathcal C}'$,
and is denoted by ${\mathcal C} \oplus_2 {\mathcal C}'$.
\label{2sum_def}
\end{definition}
Note that 2-sums are only defined for codes having the
properties (P1) and (P2) listed in Definition~\ref{2sum_def}.
These properties can be equivalently stated as follows:
\begin{itemize}
\item[(P1$'$)] $0 \ldots 0 1$ is not a codeword of ${\mathcal C}$,
and the last coordinate of ${\mathcal C}$ is not identically zero;
\item[(P2$'$)] $1 0 \ldots 0$ is not a codeword of ${\mathcal C}'$,
and the first coordinate of ${\mathcal C}'$ is not identically zero.
\end{itemize}
As we shall see below, (P1$'$) and (P2$'$) are more directly
relevant to an analysis of the 2-sum construction.
\begin{example}
Let ${\mathcal C}$ be the $[7,3,4]$ simplex code
with the generator matrix given in Example~\ref{simplex_example}.
As this code satisfies both (P1) and (P2) in Definition~\ref{2sum_def},
we can define ${\mathcal C} \oplus_2 {\mathcal C}$.
Carrying out the 2-sum construction yields
the $[12,5,4]$ code ${\mathcal C} \oplus_2 {\mathcal C}$ with generator matrix
$$
\overline{G} = \left[
\begin{array}{cccccccccccc}
1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1
\end{array}
\right].
$$
The minimal trellis for this code is shown in Figure~\ref{1254_trellis}.
It is easily seen (from the state-complexity profile of the
minimal trellis, for example) that, with $J = \{1,2,3,4,5,6\}$,
$(J,J^c)$ is a 2-separation of ${\mathcal C} \oplus_2 {\mathcal C}$.
It may further be verified that ${\mathcal C} \oplus_2 {\mathcal C}$
has no 1-separation, meaning that it is 2-connected.
Finally, we note that by shortening
${\mathcal C} \oplus_2 {\mathcal C}$ at the 7th and 8th coordinates, and puncturing
the 9th, 11th and 12th coordinates, we obtain
the simplex code ${\mathcal C}$ again. In other words,
${\mathcal C}$ is a minor of ${\mathcal C} \oplus_2 {\mathcal C}$. These observations are not
mere coincidences, as we shall see below.
\label{2sum_example}
\end{example}
\begin{figure}[t]
\centering{\epsfig{file=1254_trellis.eps, height=3cm}}
\caption{The minimal trellis of the code ${\mathcal C} \oplus_2 {\mathcal C}$ of
Example~\ref{2sum_example}.}
\label{1254_trellis}
\end{figure}
The dimension and minimum distance of a 2-sum ${\mathcal C} \oplus_2 {\mathcal C}'$
can be related to the corresponding parameters of the
component codes ${\mathcal C}$ and ${\mathcal C}'$. Given a binary word ${\mathbf x}$,
we will use $w({\mathbf x})$ to denote its Hamming weight, and for a code ${\mathcal C}$,
we will let $d({\mathcal C})$ denote the minimum distance of ${\mathcal C}$.
\begin{proposition} Let ${\mathcal C}$ and ${\mathcal C}'$ be codes for which
${\mathcal C} \oplus_2 {\mathcal C}'$ can be defined. \\
\emph{(a)} $\dim({\mathcal C} \oplus_2 {\mathcal C}') = \dim({\mathcal C}) + \dim({\mathcal C}') - 1$. \\
\emph{(b)} $d({\mathcal C} \oplus_2 {\mathcal C}') \leq
\min\left\{d({\mathcal C} \setminus\! \{n\}),\ d({\mathcal C}' \setminus\! \{1\})\right\}$,
where $n$ is the length of ${\mathcal C}$. \\
\label{2sum_prop1}
\end{proposition}
\begin{proof}
Throughout this proof, $n$ and $n'$ denote the lengths of
${\mathcal C}$ and ${\mathcal C}'$, respectively. Also, we shall let ${\mathbf x} \parallel {\mathbf x}'$
denote the concatenation of binary words ${\mathbf x}$ and ${\mathbf x}'$.
(a) By definition, ${\mathcal C} \oplus_2 {\mathcal C}' = ({\mathcal C} \parallel_1 {\mathcal C}') \setminus\! \{n\}$,
and so, the 2-sum is isomorphic to the subcode, ${\mathcal E}$,
of ${\mathcal C} \parallel_1 {\mathcal C}'$ consisting of those codewords
$\hat{c}_1\hat{c}_2 \ldots \hat{c}_{n+n'-1}$ such that $\hat{c}_n = 0$. Since
the last coordinate of ${\mathcal C}$ is not identically zero,
${\mathcal E}$ is a proper subcode of ${\mathcal C} \parallel_1 {\mathcal C}'$,
and hence, $\dim({\mathcal C} \oplus_2 {\mathcal C}') = \dim({\mathcal E}) = \dim({\mathcal C} \parallel_1 {\mathcal C}') - 1$.
We claim that the direct sum ${\mathcal C} \oplus {\mathcal C}'$ is in fact isomorphic
(as a vector space over ${\mathbb F}_2$) to ${\mathcal C} \parallel_1 {\mathcal C}'$.
Indeed, consider the map $\phi:\ {\mathcal C} \oplus {\mathcal C}' \rightarrow {\mathcal C} \parallel_1 {\mathcal C}'$
defined via $\phi({\mathbf c} \parallel {\mathbf c}') = {\mathbf c} \parallel_1 {\mathbf c}'$, for
${\mathbf c} \in {\mathcal C}$ and ${\mathbf c}' \in {\mathcal C}'$. This
is a homomorphism onto ${\mathcal C} \parallel_1 {\mathcal C}'$, but since $0 \ldots 0 1 \notin {\mathcal C}$
and $10\ldots 0 \notin {\mathcal C}'$, we have $\ker(\phi) = \{{\mathbf 0}\}$, which
shows that $\phi$ is in fact an isomorphism.
Therefore, $\dim({\mathcal C} \oplus_2 {\mathcal C}') = \dim({\mathcal C} \parallel_1 {\mathcal C}') - 1
= \dim({\mathcal C} \oplus {\mathcal C}') - 1$, which proves the result.
(b) Let $\hat{{\mathbf c}}$ be a minimum-weight codeword in ${\mathcal C} \setminus\! \{n\}$.
Since $\hat{{\mathbf c}} 0 \in {\mathcal C}$, we have, by construction of ${\mathcal C} \oplus_2 {\mathcal C}'$,
$\hat{{\mathbf c}} \parallel_1 {\mathbf 0} \in {\mathcal C} \oplus_2 {\mathcal C}'$. Thus, $d({\mathcal C} \oplus_2 {\mathcal C}')
\leq w(\hat{{\mathbf c}}) = d({\mathcal C} \setminus\! \{n\})$. A similar argument shows that
$d({\mathcal C} \oplus_2 {\mathcal C}') \leq d({\mathcal C}' \setminus\! \{1\})$.
\end{proof}
An interesting property of 2-sums is that they behave just like
direct sums under the operation of taking code duals. Note that
by virtue of (P1) and (P2) in Definition~\ref{2sum_def}, the
2-sum of ${\mathcal C}$ and ${\mathcal C}'$ can be defined if and only if the 2-sum
of their duals, ${\mathcal C}^\perp$ and ${{\mathcal C}'}^\perp$, can be defined.
\begin{proposition}[\cite{oxley}, Proposition~7.1.20]
Let ${\mathcal C}$ and ${\mathcal C}'$ be codes for which
${\mathcal C} \oplus_2 {\mathcal C}'$ can be defined. Then,
$$
{({\mathcal C} \oplus_2 {\mathcal C}')}^\perp = {\mathcal C}^\perp \oplus_2 {{\mathcal C}'}^\perp.
$$
\label{2sum_prop2}
\end{proposition}
This result can be trivially derived from Theorem~7.3 in \cite{For01},
but for completeness, we give a simple algebraic proof in
Appendix~\ref{dual_props_app}.
As an example, the above result implies that the matrix
$\overline{G}$ given in Example~\ref{2sum_example} is the
parity-check matrix of the 2-sum of two copies of a [7,4] Hamming code.
While the properties of 2-sums presented above are interesting, the
usefulness of 2-sums actually stems from the following theorem of Seymour
\cite{Sey80}, which is a result analogous to Lemma~\ref{1sep_lemma}.
\begin{theorem}[\cite{Sey80}, Theorem~2.6]
If ${\mathcal C}_1$ and ${\mathcal C}_2$ are codes of length $n_1$ and
$n_2$, respectively, such that ${\mathcal C} = {\mathcal C}_1 \oplus_2 {\mathcal C}_2$, then
$(J,J^c)$ is an exact 2-separation of ${\mathcal C}$, for $J = \{1,2,\ldots,n_1-1\}$.
Furthermore, ${\mathcal C}_1$ and ${\mathcal C}_2$ are equivalent to minors of ${\mathcal C}$.
Conversely, if $(J,J^c)$ is an exact 2-separation of a code ${\mathcal C}$, then
there are codes ${\mathcal C}_1$ and ${\mathcal C}_2$ of length $|J|+1$ and $|J^c| + 1$,
respectively, such that ${\mathcal C}$ is equivalent to ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$.
\label{2sum_thm}
\end{theorem}
The following corollary is a more concise statement of the above theorem,
and is more in the spirit of Lemma~\ref{1sep_lemma}.
\begin{corollary}
A code ${\mathcal C}$ has an exact 2-separation
iff there exist codes ${\mathcal C}_1$ and ${\mathcal C}_2$,
both equivalent to proper minors of ${\mathcal C}$, such that ${\mathcal C}$ is equivalent
to ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$.
\label{2sum_cor1}
\end{corollary}
Another corollary \cite[Theorem~8.3.1]{oxley}, stated next, is a
consequence of the fact that if ${\mathcal C}$ is a 2-connected code,
then any 2-separation of ${\mathcal C}$
must be exact; if not, the 2-separation would be a 1-separation as well,
which is impossible as ${\mathcal C}$ is 2-connected.
\begin{corollary}
A 2-connected code ${\mathcal C}$ is not 3-connected
iff there exist codes ${\mathcal C}_1$ and ${\mathcal C}_2$,
both equivalent to proper minors of ${\mathcal C}$, such that ${\mathcal C}$ is equivalent
to ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$.
\label{2sum_cor2}
\end{corollary}
We will not prove Theorem~\ref{2sum_thm} in its entirety,
referring the reader instead to Seymour's original proof,
or the proof given in Oxley \cite[Section~8.3]{oxley}.
However, we will describe an efficient construction of the components of the
2-sum when an exact 2-separation $(J,J^c)$ of ${\mathcal C}$ is given,
as it is a useful tool in code decomposition. This construction effectively
proves the converse part of the theorem. Our description
is based on the construction given in \cite[Section~8.2]{truemper}.
Let ${\mathcal C}$ be a code of length $n$ and dimension $k$, specified by
a $k \times n$ generator matrix $G$, and let
$(J,J^c)$ be an exact 2-separation of ${\mathcal C}$. By permuting coordinates
if necessary, we may assume that $J = \{1,2,\ldots,m\}$ for some $m < n$.
Let $G|_J$ and $G|_{J^c}$ denote the restrictions of $G$ to the
columns indexed by $J$ and $J^c$, respectively; thus,
$G = [ G |_J \ \ G|_{J^c}]$. Let $\mbox{\textsf{rank}}(G|_J) = k_1$ and
$\mbox{\textsf{rank}}(G|_{J^c}) = k_2$; since $(J,J^c)$ is an exact 2-separation of ${\mathcal C}$,
we have $k_1 + k_2 = k + 1$. Bring $G$ into reduced row-echelon form (rref)
over ${\mathbb F}_2$. Permuting coordinates within $J$ and within $J^c$ if necessary,
$\mbox{\textsf{rref}}(G)$ may be assumed to be of the form
\begin{equation}
\overline{G} =
\left[
\begin{array}{cccc}
I_{k_1} & A & {\mathbf O} & B \\
{\mathbf O} & {\mathbf O} & I_{k_2-1} & C
\end{array}
\right],
\label{rref_eq1}
\end{equation}
where $I_j$, for $j = k_1,k_2-1$, denotes the $j \times j$ identity matrix,
$A$ is a $k_1 \times (|J| - k_1)$ matrix, $B$ is a
$k_1 \times (|J^c| - k_2 + 1)$ matrix,
$C$ is a $(k_2-1) \times (|J^c| - k_2 + 1)$
matrix, and the ${\mathbf O}$'s denote all-zeros matrices of appropriate sizes.
As a concrete example, consider the matrix $\overline{G}$ given in
Example~\ref{2sum_example}, which is indeed of the above form, with
$|J| = |J^c| = 6$, $k_1 = k_2 = 3$,
$$
A = \left[
\begin{array}{ccc}
0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0
\end{array}
\right],\ \
B = \left[
\begin{array}{cccc}
0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1
\end{array}
\right] \ \ \text{and} \ \
C = \left[
\begin{array}{cccc}
1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1
\end{array}
\right].
$$
The fact that the submatrix
$
\left[
\begin{array}{cc}
{\mathbf O} & B \\ I_{k_2-1} & C
\end{array}
\right]
$
must have rank equal to $\mbox{\textsf{rank}}(G |_{J^c}) = k_2$ implies
that $B$ must have rank 1. Hence, $B$ is actually a matrix with
at least one non-zero row, call it ${\mathbf b}$, and at least one
non-zero column, call it $\widetilde{{\mathbf b}}$. Also, each row of $B$
is either ${\mathbf 0}$ or identical to ${\mathbf b}$, and $\widetilde{{\mathbf b}}$
is the length-$k_1$ column vector whose $i$th component
is a 1 if the $i$th row of $B$ is equal to ${\mathbf b}$, and is 0 otherwise.
Now, define
$$
G_1 = [I_{k_1} \ A \ \ \widetilde{{\mathbf b}}]
$$
and
$$
G_2 = \left[\begin{array}{ccc} 1 & {\mathbf 0}^t & {\mathbf b} \\
{\mathbf 0} & I_{k_2-1} & C \end{array}\right]
= [I_{k_2} \ C'],
$$
where ${\mathbf 0}$ denotes an all-zeros column-vector,
and $C' = \left[\begin{array}{c} {\mathbf b} \\ C \end{array}\right]$.
It is not hard to show that if ${\mathcal C}_1$ and ${\mathcal C}_2$ are the codes
generated by $G_1$ and $G_2$, respectively, then ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$
is the code generated by the matrix $\overline{G}$ in (\ref{rref_eq1}).
Indeed, carefully going through the construction, it may be verified
that all the rows of $\overline{G}$ are in ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$. Hence,
$\dim({\mathcal C}_1 \oplus_2 {\mathcal C}_2) \geq \mbox{\textsf{rank}}(\overline{G}) = k_1 + k_2 - 1$.
However, by Proposition~\ref{2sum_prop1}(a), we have that
$\dim({\mathcal C}_1 \oplus_2 {\mathcal C}_2) = \dim({\mathcal C}_1) + \dim({\mathcal C}_2) - 1 =
k_1 + k_2 - 1$. Hence,
$\dim({\mathcal C}_1 \oplus_2 {\mathcal C}_2) = \mbox{\textsf{rank}}(\overline{G})$, implying that
$\overline{G}$ must be a generator matrix for ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$.
\begin{example}
For the matrix $\overline{G}$ in Example~\ref{2sum_example},
we find the matrices $G_1$ and $G_2$ to be
$$
G_1 = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 0 & 1
\end{array}
\right],\ \ \ \text{and} \ \ \
G_2 = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 0 & 1
\end{array}
\right],
$$
which are indeed the generator matrices of the
two simplex codes whose 2-sum is represented by
$\overline{G}$.
\end{example}
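The block extraction just illustrated is easy to mechanize. The following
sketch (illustrative Python; it assumes its input is already in the block
form (\ref{rref_eq1}), with $k_1$, $k_2$ and $|J|$ known, and all names
are ours) returns generator matrices of the two components:
\begin{verbatim}
def two_sum_components(Gbar, k1, k2, Jsize):
    # Gbar assumed to be in the block form [I A O B; O O I C], with
    # rank(G|_J) = k1, rank(G|_Jc) = k2 and |J| = Jsize
    s = Jsize + k2 - 1                       # first column of B (and of C)
    B = [row[s:] for row in Gbar[:k1]]
    b = next(row for row in B if any(row))   # the nonzero row b of B
    btilde = [1 if any(row) else 0 for row in B]
    G1 = [Gbar[i][:Jsize] + [btilde[i]] for i in range(k1)]
    G2 = ([[1] + [0] * (k2 - 1) + b]
          + [[0] + Gbar[k1 + i][Jsize:] for i in range(k2 - 1)])
    return G1, G2
\end{verbatim}
Applied to the matrix $\overline{G}$ of Example~\ref{2sum_example} with
$k_1 = k_2 = 3$ and $|J| = 6$, this reproduces the matrices $G_1$ and $G_2$
displayed above.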
It can also be observed that ${\mathcal C}_1$ and ${\mathcal C}_2$ are minors
of ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$. Indeed, to obtain ${\mathcal C}_1$ as a minor of
${\mathcal C}_1 \oplus_2 {\mathcal C}_2$, we proceed as follows. Let $j$ be the
index of a column of $\overline{G}$ in which the submatrix $B$
has a nonzero column. For the matrix of Example~\ref{2sum_example},
$j$ could be either 10, 11 or 12. Define
$J_1 = \{|J|+1,|J|+2,\ldots,|J|+k_2-1\}$,
and $J_2 = J^c - (J_1 \cup \{j\})$. Then,
${\mathcal C}_1 = ({\mathcal C}_1 \oplus_2 {\mathcal C}_2) \setminus\! J_1 / J_2$.
To obtain ${\mathcal C}_2$ as a minor, let $j'$ be the index of a row
of $\overline{G}$ in which the submatrix $B$ has a nonzero row.
For the matrix of Example~\ref{2sum_example},
$j'$ could be either 1, 2 or 3.
The $j'$th column of $\overline{G}$ is of the form
$[0 \ldots 0\ 1\ 0 \ldots 0]^t$, the single 1 being the $j'$th entry.
Define $J_1' = \{1,2,\ldots,k_1\} - \{j'\}$ and
$J_2' = \{k_1+1,k_1+2,\ldots,|J|\}$.
Then, ${\mathcal C}_2 = ({\mathcal C}_1 \oplus_2 {\mathcal C}_2) \setminus\! J_1' / J_2'$.
Proofs of these statements involve only routine consistency checking, so
they are left as an easy exercise.
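Both extractions reduce to shortening and puncturing, so we record the
generic minor operation as an illustrative sketch (operating, for
simplicity, on the full codeword set rather than on a generator matrix;
the function name is ours):
\begin{verbatim}
def minor(words, shorten, puncture):
    # words: the full codeword set as 0/1 tuples; shortening (denoted \)
    # keeps the codewords vanishing on `shorten` and deletes those
    # coordinates; puncturing (denoted /) just deletes coordinates
    drop = set(shorten) | set(puncture)
    return {tuple(c[i] for i in range(len(c)) if i not in drop)
            for c in words
            if all(c[i] == 0 for i in shorten)}
\end{verbatim}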
In summary, the procedure described above takes as input a $k \times n$
generator matrix $G$ for ${\mathcal C}$, and an exact 2-separation $(J,J^c)$ of it,
and produces as output a permutation $\pi$ of the
coordinates of ${\mathcal C}$, and the generator matrices of
two codes ${\mathcal C}_1$ and ${\mathcal C}_2$, such that ${\mathcal C} = \pi({\mathcal C}_1 \oplus_2 {\mathcal C}_2)$.
The codes ${\mathcal C}_1$ and ${\mathcal C}_2$ are both equivalent to proper minors of ${\mathcal C}$.
The entire procedure can be carried out in $O(k^2n)$ time, which is the
run-time complexity of bringing a $k \times n$ matrix to reduced
row-echelon form via elementary row operations. All other parts
of the procedure can be performed in $O(n)$ time; for example,
since $(J,J^c)$ is given, it only takes $O(n)$ time
to find the permutation, $\pi^{-1}$, that takes $\mbox{\textsf{rref}}(G)$
to the matrix $\overline{G}$ in (\ref{rref_eq1}).
A straightforward combination of
Lemma~\ref{1sep_lemma} and Corollary~\ref{2sum_cor2} yields
the following theorem, which illustrates the utility of the
matroid-theoretic tools presented so far.
\begin{theorem}[\cite{oxley}, Corollary~8.3.4]
Every code that is not 3-connected can be constructed from 3-connected
proper minors of it by a sequence of operations of coordinate
permutation, direct sum and 2-sum.
\label{decomp_thm1}
\end{theorem}
The decomposition of a code via direct sums and 2-sums implicit in the above
theorem can be carried out in time polynomial in the length of the code.
This is due to the following two facts:
\begin{itemize}
\item[(a)] as described in Section~\ref{conn_section},
there are polynomial-time algorithms for finding 1- and 2- separations
in a code, if they exist; and
\item[(b)] given an exact 2-separation of a code ${\mathcal C}$,
there is a polynomial-time procedure that produces
codes ${\mathcal C}_1$ and ${\mathcal C}_2$, both equivalent to proper minors of ${\mathcal C}$,
and a permutation $\pi$ of the coordinate set of ${\mathcal C}$,
such that ${\mathcal C} = \pi({\mathcal C}_1 \oplus_2 {\mathcal C}_2)$.
\end{itemize}
However, direct sums and 2-sums are not enough for our purposes,
nor were they enough for Seymour's theory of matroid decomposition.
Seymour also had to extend the graph-theoretic technique of
3-clique-sum to matroids (in fact, to binary matroids only). The
corresponding operation on binary matroids is called 3-sum.
\subsection{3-Sums\label{3sum_section}}
The special case of the ${\mathcal S}_3({\mathcal C},{\mathcal C}')$ construction called 3-sum is
somewhat more complex in definition than the 2-sum.
Recall that for a binary word ${\mathbf x}$, $w({\mathbf x})$ denotes its Hamming weight.
\begin{definition}
Let ${\mathcal C}$, ${\mathcal C}'$ be codes of length at least seven, such that
\begin{itemize}
\item[(A1)] no codeword of ${\mathcal C}$ or ${{\mathcal C}}^\perp$ is of the form
$0 \ldots 0\, {\mathbf x}$, where ${\mathbf x}$ is a length-3 word with $w({\mathbf x}) \in \{1,2\}$;
\item[(A2)] no codeword of ${\mathcal C}'$ or ${{\mathcal C}'}^\perp$ is of the form
${\mathbf x}\, 0 \ldots 0$, where ${\mathbf x}$ is a length-3 word with $w({\mathbf x}) \in \{1,2\}$; and
\item[(A3)] $0 \ldots 0 111 \in {\mathcal C}$ and $111 0 \ldots 0 \in {\mathcal C}'$.
\end{itemize}
Then, ${\mathcal S}_3({\mathcal C},{\mathcal C}')$ is called
the \emph{3-sum} of ${\mathcal C}$ and ${\mathcal C}'$, and is denoted by
${\mathcal C} \oplus_3 {\mathcal C}'$.
\label{3sum_def}
\end{definition}
It is perhaps worth commenting upon the use of the terms ``2-sum''
and ``3-sum'' to denote codes of the form ${\mathcal S}_m({\mathcal C},{\mathcal C}')$
for $m = 1$ and $m=3$, respectively. The nomenclature
stems from the analogy with the $k$-clique-sum of graphs, wherein
two graphs are glued along a $k$-clique. Note that a
2-clique is a single edge (hence $m=1$) and 3-clique is a triangle
of three edges (hence $m=3$). This also explains why we do not
consider an operation of the form ${\mathcal S}_2({\mathcal C},{\mathcal C}')$.
3-sums are only defined for codes having the properties (A1)--(A3)
listed in the above definition. It is obvious that an equivalent statement
of (A1)--(A3) is the following:
\begin{itemize}
\item[(B1)] $0 \ldots 0 111$ is a minimal codeword of ${\mathcal C}$,
and ${\mathcal C}^\perp$ has no nonzero codeword supported entirely within the
last three coordinates of ${\mathcal C}^\perp$; and
\item[(B2)] $111 0 \ldots 0$ is a minimal codeword of ${\mathcal C}'$,
and ${{\mathcal C}'}^\perp$ has no nonzero codeword supported entirely within the
first three coordinates of ${{\mathcal C}'}^\perp$.
\end{itemize}
In fact, (B1) and (B2) above
are exact translations of the matroid-theoretic
language used by Seymour in his definition of 3-sum \cite{Sey80}.
Another equivalent way of expressing these conditions is the following:
\begin{itemize}
\item[(B1$'$)] $0 \ldots 0 111$ is a minimal codeword of ${\mathcal C}$, and the
restriction of ${\mathcal C}$ onto its last three coordinates is $\{0,1\}^3$; and
\item[(B2$'$)] $111 0 \ldots 0$ is a minimal codeword of ${\mathcal C}'$, and the
restriction of ${\mathcal C}'$ onto its first three coordinates is $\{0,1\}^3$.
\end{itemize}
The equivalence of (B1) and (B1$'$) is a consequence of the easily verifiable
fact that if ${0 \ldots 0111} \in {\mathcal C}$, then
${\mathcal C}^\perp$ has no nonzero codeword supported entirely within the
last three coordinates of ${\mathcal C}^\perp$ if and only if all possible 3-bit
words appear in the last three coordinates of ${\mathcal C}$. The equivalence of
(B2) and (B2$'$) is analogous. It follows immediately from
(B1$'$) and (B2$'$) that ${\mathcal C} {\oplus}_3 {\mathcal C}'$ can be defined only if
$\min\{\dim({\mathcal C}),\dim({\mathcal C}')\} \geq 3$ and $\max\{d({\mathcal C}), d({\mathcal C}')\} \leq 3$.
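Conditions (B1$'$) and (B2$'$) are convenient for machine verification,
since they refer to ${\mathcal C}$ and ${\mathcal C}'$ alone and not to their duals. The
following illustrative sketch (Python; it enumerates all codewords, so it
is suitable only for small dimensions, and the function names are ours)
tests (B1$'$):
\begin{verbatim}
from itertools import product

def codewords(G):
    # all codewords of the code generated over GF(2) by the rows of G
    words = set()
    for coeffs in product([0, 1], repeat=len(G)):
        w = [0] * len(G[0])
        for a, row in zip(coeffs, G):
            if a:
                w = [x ^ y for x, y in zip(w, row)]
        words.add(tuple(w))
    return words

def satisfies_B1prime(G):
    C, n = codewords(G), len(G[0])
    if tuple([0] * (n - 3) + [1, 1, 1]) not in C:
        return False
    # minimality of 0...0111: no nonzero codeword of weight 1 or 2
    # supported within the last three coordinates
    if any(not any(c[:n - 3]) and sum(c[n - 3:]) in (1, 2) for c in C):
        return False
    return len({c[n - 3:] for c in C}) == 8   # restriction is {0,1}^3
\end{verbatim}
The test for (B2$'$) is the mirror image, reading the first three
coordinates instead.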
\begin{example}
Let ${\mathcal C}$ and ${\mathcal C}'$ be the [7,4] Hamming codes given by the generator
matrices $G$ and $G'$, respectively, below.
$$
G = \left[
\begin{array}{ccccccc}
1 & 1 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0
\end{array}
\right], \ \ \ \
G' = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{array}
\right].
$$
${\mathcal C}$ and ${\mathcal C}'$ satisfy the conditions in Definition~\ref{3sum_def},
so their 3-sum can be defined. The code ${\mathcal C} \oplus_3 {\mathcal C}'$ works out to
be the $[8,4,4]$ extended Hamming code with generator matrix
$$
\overline{G} = \left[
\begin{array}{cccccccc}
1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1
\end{array}
\right].
$$
The minimal trellis of this code is shown in Figure~\ref{844_trellis}.
With $J = \{1,2,3,4\}$, $(J,J^c)$ is an exact
3-separation of ${\mathcal C} \oplus_3 {\mathcal C}'$. It may be verified that, in fact,
$\lambda({\mathcal C} \oplus_3 {\mathcal C}') = 3$. Furthermore, puncturing any coordinate
of ${\mathcal C} \oplus_3 {\mathcal C}'$ yields a $[7,4]$ Hamming code. Thus, ${\mathcal C}$ and ${\mathcal C}'$
are (up to code equivalence) minors of ${\mathcal C} \oplus_3 {\mathcal C}'$.
\label{3sum_example}
\end{example}
\begin{figure}[t]
\centering{\epsfig{file=844_trellis.eps, height=3cm}}
\caption{The minimal trellis of the code ${\mathcal C} \oplus_3 {\mathcal C}'$ of
Example~\ref{3sum_example}.}
\label{844_trellis}
\end{figure}
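For small codes, the 3-sum itself can be computed by direct enumeration:
one glues together the pairs of codewords that agree on the three
overlapping coordinates, so that the shortened positions carry $000$. As
an illustrative sketch (reusing the \texttt{codewords} enumeration from
the previous sketch):
\begin{verbatim}
def three_sum(G, Gp):
    # all codewords of the 3-sum of the codes generated by G and Gp;
    # matching overlaps means the three shortened positions of
    # C ||_3 C' carry 000
    n = len(G[0])
    return {c[:n - 3] + cp[3:]
            for c in codewords(G) for cp in codewords(Gp)
            if c[n - 3:] == cp[:3]}
\end{verbatim}
Applied to the generator matrices of Example~\ref{3sum_example}, this
returns the 16 codewords of the $[8,4,4]$ extended Hamming code.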
The dimension and minimum distance of ${\mathcal C} \oplus_3 {\mathcal C}'$
can be related to ${\mathcal C}$ and ${\mathcal C}'$ in a manner analogous to
Proposition~\ref{2sum_prop1} for 2-sums.
\begin{proposition}
For codes ${\mathcal C}$ and ${\mathcal C}'$ of length $n$ and $n'$
for which $\,{\mathcal C} {\oplus}_3 {\mathcal C}'$ can be defined, we have
\begin{itemize}
\item[(a)] $\dim({\mathcal C} {\oplus}_3 {\mathcal C}') = \dim({\mathcal C}) + \dim({\mathcal C}') - 4$.
\item[(b)] $d({\mathcal C} {\oplus}_3 {\mathcal C}') \leq \min\{d({\mathcal C} \setminus\! \{n-2,n-1,n\}),
\ d({\mathcal C}' \setminus\! \{1,2,3\})\}$.
\end{itemize}
\label{3sum_prop1}
\end{proposition}
\begin{proof}
We will only prove (a) as the proof of (b) is analogous to the
proof of Proposition~\ref{2sum_prop1}(b).
Let ${\mathbf x} \parallel {\mathbf x}'$ denote the concatenation of binary sequences
${\mathbf x}$ and ${\mathbf x}'$.
We prove the proposition by first showing that
$\dim({\mathcal C} \parallel_3 {\mathcal C}') = \dim({\mathcal C}) + \dim({\mathcal C}') - 1$, and then showing that
$\dim({\mathcal C} \oplus_3 {\mathcal C}') = \dim({\mathcal C} \parallel_3 {\mathcal C}') - 3$. The first of these
equalities follows directly from the observation that the mapping
$$
\phi({\mathbf c} \parallel {\mathbf c}') = {\mathbf c} \parallel_3 {\mathbf c}',\ \ \ {\mathbf c} \in {\mathcal C}, {\mathbf c}' \in {\mathcal C}',
$$
defines a homomorphism from the direct sum
${\mathcal C} \oplus {\mathcal C}'$ onto ${\mathcal C} \parallel_3 {\mathcal C}'$,
with $\dim(\ker(\phi)) = 1$; indeed, $\ker(\phi)$ consists
of the two words ${\mathbf 0}$ and $0 \ldots 0 111 \parallel 111 0 \ldots 0$.
To prove that $\dim({\mathcal C} \oplus_3 {\mathcal C}') = \dim({\mathcal C} \parallel_3 {\mathcal C}') - 3$, we
observe that ${\mathcal C} \oplus_3 {\mathcal C}'$ is isomorphic to the subcode, ${\mathcal E}$, of
${\mathcal C} \parallel_3 {\mathcal C}'$ consisting of those codewords
$\hat{c}_1\hat{c}_2 \ldots \hat{c}_{n+n'-3}$ such that $\hat{c}_{n-2}\hat{c}_{n-1}\hat{c}_n = 000$.
Therefore, $\dim({\mathcal C} \oplus_3 {\mathcal C}') = \dim({\mathcal E})$.
Since the restriction of ${\mathcal C}$ onto its last three coordinates is $\{0,1\}^3$
(property (B1$'$)), and the restriction of ${\mathcal C}'$ onto its first three
coordinates is $\{0,1\}^3$ (property (B2$'$)), the restriction of
${\mathcal C} \parallel_3 {\mathcal C}'$ onto its $(n-2)$th, $(n-1)$th and $n$th coordinates is
also $\{0,1\}^3$. Therefore, $\dim({\mathcal E}) = \dim({\mathcal C} \parallel_3 {\mathcal C}') - 3$,
and hence, $\dim({\mathcal C} \oplus_3 {\mathcal C}') = \dim({\mathcal C} \parallel_3 {\mathcal C}') - 3$, which
completes the proof of the proposition. \end{proof}
An important difference between 2-sums and 3-sums is that the result
of Proposition~\ref{2sum_prop2} does not directly extend to 3-sums. The
reason for this is that for codes ${\mathcal C}$ and ${\mathcal C}'$ satisfying (A1)--(A3)
in Definition~\ref{3sum_def}, the 3-sum ${\mathcal C}^\perp {\oplus}_3 {{\mathcal C}'}^\perp$ cannot
even be defined. Indeed, while (A1) and (A2) are invariant under the operation
of taking duals, (A3) is not --- if $0 \ldots 0 111 \in {\mathcal C}$, then
$0 \ldots 0 111 \notin {\mathcal C}^\perp$. To determine the dual of a 3-sum, we need
to define a ``dual'' operation, namely the $\overline{3}$-sum.
\begin{definition}
Let ${\mathcal C}$, ${\mathcal C}'$ be codes of length at least seven, such that
\begin{itemize}
\item[(A1$'$)] no codeword of ${\mathcal C}$ or ${{\mathcal C}}^\perp$ is of the form
$0 \ldots 0\, {\mathbf x}$, where ${\mathbf x}$ is a length-3 word with $w({\mathbf x}) \in \{1,2\}$;
\item[(A2$'$)] no codeword of ${\mathcal C}'$ or ${{\mathcal C}'}^\perp$ is of the form
${\mathbf x}\, 0 \ldots 0$, where ${\mathbf x}$ is a length-3 word with $w({\mathbf x}) \in \{1,2\}$; and
\item[(A3$'$)] $0 \ldots 0 111 \in {\mathcal C}^\perp$ and
$111 0 \ldots 0 \in {{\mathcal C}'}^\perp$.
\end{itemize}
The \emph{$\overline{3}$-sum} of ${\mathcal C}$ and ${\mathcal C}'$, denoted by ${\mathcal C} \,\overline{\oplus}_3\, {\mathcal C}'$,
is defined as
$$
{\mathcal C} \,\overline{\oplus}_3\, {\mathcal C}' = \overline{{\mathcal C}}\, {\oplus}_3\, \overline{{\mathcal C}'},
$$
where $\overline{{\mathcal C}} = {\mathcal C} \, \bigcup \, (0 \ldots 0 111 + {\mathcal C})$ and
$\overline{{\mathcal C}'} = {\mathcal C}' \, \bigcup \, (111 0 \ldots 0 + {\mathcal C}')$.
\label{3barsum_def}
\end{definition}
Note that (A1$'$) and (A2$'$) are identical to (A1) and (A2), respectively.
To ensure that the above definition can in fact be made, it must be verified
that the 3-sum $\overline{{\mathcal C}}\, {\oplus}_3\, \overline{{\mathcal C}'}$ can actually be defined
for codes ${\mathcal C}$ and ${\mathcal C}'$ satisfying (A1$'$)--(A3$'$). So, let ${\mathcal C}$ and
${\mathcal C}'$ be codes satisfying (A1$'$)--(A3$'$). We need to verify that
$\overline{{\mathcal C}}$ and $\overline{{\mathcal C}'}$ satisfy (A1)--(A3) in Definition~\ref{3sum_def}.
By their very definition, $\overline{{\mathcal C}}$ and $\overline{{\mathcal C}'}$ satisfy (A3).
Furthermore, since ${\overline{{\mathcal C}}}^\perp \subset {\mathcal C}^\perp$ and
${\overline{{\mathcal C}'}}^\perp \subset {{\mathcal C}'}^\perp$, we see that
no codeword of ${\overline{{\mathcal C}}}^\perp$ is of the form $0 \ldots 0\, {\mathbf x}$
as in (A1), and no codeword of ${\overline{{\mathcal C}'}}^\perp$ is of the form
${\mathbf x}\, 0 \ldots 0$ as in (A2). Finally, if $0 \ldots 0\, {\mathbf x} \in \overline{{\mathcal C}}$
for some length-3 word ${\mathbf x}$ with $w({\mathbf x}) \in \{1,2\}$, then since
$0 \ldots 0\, {\mathbf x} \notin {\mathcal C}$ by (A1$'$), it must be that
$0 \ldots 0\, {\mathbf x}$ is in $0 \ldots 0 111 + {\mathcal C}$. But in this case,
$0 \ldots 0\, \overline{{\mathbf x}} \in {\mathcal C}$, where $\overline{{\mathbf x}} = 111 + {\mathbf x}$. So,
$w(\overline{{\mathbf x}}) \in \{1,2\}$ as well, which is impossible by (A1$'$).
Therefore, $\overline{{\mathcal C}}$ cannot contain any word of the form
$0 \ldots 0\, {\mathbf x}$ as in (A1). By analogous reasoning,
$\overline{{\mathcal C}'}$ cannot contain any word of the form
${\mathbf x}\,0 \ldots 0$ as in (A2). We have thus verified that
$\overline{{\mathcal C}}$ and $\overline{{\mathcal C}'}$ satisfy (A1)--(A3), and so
$\overline{{\mathcal C}}\, {\oplus}_3\, \overline{{\mathcal C}'}$ can be defined.
Note that (A3$'$) implies that $0 \ldots 0 111 \notin {\mathcal C}$ and
$111 0 \ldots 0 \notin {\mathcal C}'$ (a word of odd weight cannot lie in both a code
and its dual), and hence,
$\dim(\overline{{\mathcal C}}) = \dim({\mathcal C}) + 1$ and $\dim(\overline{{\mathcal C}'}) = \dim({\mathcal C}') + 1$.
Therefore, by virtue of Proposition~\ref{3sum_prop1}, we have the
following result.
\begin{lemma}
$\dim({\mathcal C} \,\overline{\oplus}_3\, {\mathcal C}') = \dim({\mathcal C}) + \dim({\mathcal C}') - 2$.
\label{3barsum_lemma}
\end{lemma}
The $\overline{3}$-sum is the dual operation to 3-sum, in a sense made
precise by the proposition below, the proof of which we defer to
Appendix~\ref{dual_props_app}. This result is stated, without
explicit proof, in \cite[p.\ 316]{GT} and \cite[p.\ 184]{truemper};
Truemper \cite{GT,truemper} refers to 3-sum and $\overline{3}$-sum as
$\Delta$-sum and $Y$-sum, respectively. The result can also be derived
from Theorem~7.3 in \cite{For01}.
\begin{proposition}
Let ${\mathcal C}$ and ${\mathcal C}'$ be codes for which
${\mathcal C} \oplus_3 {\mathcal C}'$ can be defined. Then,
$$
{({\mathcal C} {\oplus}_3 {\mathcal C}')}^\perp = {\mathcal C}^\perp \,\overline{\oplus}_3\, {{\mathcal C}'}^\perp.
$$
\label{3sum_prop2}
\end{proposition}
It follows from the last result that a code that is
expressible as a 3-sum can also be expressed as a
$\overline{3}$-sum. Indeed, if ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$, then
by the above proposition, ${\mathcal C}^\perp = {\mathcal C}_1^\perp \,\overline{\oplus}_3\, {\mathcal C}_2^\perp$.
The latter, by definition, is $\overline{{\mathcal C}_1^\perp} {\oplus}_3 \overline{{\mathcal C}_2^\perp}$,
and so again taking duals and using the above proposition, we obtain
$$
{\mathcal C} = {\left(\overline{{\mathcal C}_1^\perp} {\oplus}_3 \overline{{\mathcal C}_2^\perp}\right)}^\perp
= \left(\overline{{\mathcal C}_1^\perp}\right)^\perp \,\overline{\oplus}_3\,
\left(\overline{{\mathcal C}_2^\perp}\right)^\perp.
$$
We record this as a corollary to Proposition~\ref{3sum_prop2}.
\begin{corollary}
If ${\mathcal C}_1 {\oplus}_3 {\mathcal C}_2$ can be defined, then
$$
{\mathcal C}_1 {\oplus}_3 {\mathcal C}_2 = \left(\overline{{\mathcal C}_1^\perp}\right)^\perp \,\overline{\oplus}_3\,
\left(\overline{{\mathcal C}_2^\perp}\right)^\perp.
$$
\label{3sum_cor}
\end{corollary}
Having presented some of the simpler properties of 3-sums, we next state
a highly non-trivial result of Seymour that illustrates how 3-sums
are to be used. The statement of this result is the 3-sum analogue of
Theorem~\ref{2sum_thm}, but there are some important differences between
the two that we will point out after stating the result.
\begin{theorem}[\cite{Sey80}, Theorems~2.9 and 4.1]
If ${\mathcal C}_1$ and ${\mathcal C}_2$ are codes of length $n_1$ and
$n_2$, respectively, such that ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$, then
$(J,J^c)$ is an exact 3-separation of ${\mathcal C}$
for $J = \{1,2,\ldots,n_1-3\}$. Furthermore, if ${\mathcal C}$ is 3-connected,
then ${\mathcal C}_1$ and ${\mathcal C}_2$ are equivalent to proper minors of ${\mathcal C}$.
Conversely, if $(J,J^c)$ is an exact 3-separation of a code ${\mathcal C}$, with
$\min\{|J|,|J^c|\} \geq 4$, then there are codes
${\mathcal C}_1$ and ${\mathcal C}_2$ of length $|J|+3$ and $|J^c| + 3$,
respectively, such that ${\mathcal C}$ is equivalent to ${\mathcal C}_1 \oplus_3 {\mathcal C}_2$.
\label{3sum_thm}
\end{theorem}
A couple of key differences between the statements of
Theorems~\ref{2sum_thm} and \ref{3sum_thm} must be stressed.
For a code to be expressible as a 2-sum, it is sufficient that
there exist an exact 2-separation.
However, to make the analogous conclusion about 3-sums,
Theorem~\ref{3sum_thm} not only asks for the existence of an
exact 3-separation $(J,J^c)$, but also adds the additional hypothesis
that $\min\{|J|,|J^c|\} \geq 4$. We will have more
to say about this a little later.
There is a second major difference between the statements of the
two theorems. Theorem~\ref{2sum_thm} states that if
${\mathcal C} = {\mathcal C}_1 \oplus_2 {\mathcal C}_2$, then ${\mathcal C}_1$ and ${\mathcal C}_2$ are
always minors of ${\mathcal C}$, up to coordinate permutation. However,
when ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$, Theorem~\ref{3sum_thm}
imposes the condition that ${\mathcal C}$ be 3-connected
in order to conclude that ${\mathcal C}_1$ and ${\mathcal C}_2$ are equivalent to
minors of ${\mathcal C}$. If the 3-connectedness requirement for ${\mathcal C}$ is dropped,
the conclusion does not hold in general, as the following example shows.
\begin{example}
Take ${\mathcal C}_1$ to be the $[7,4,1]$ code with generator matrix
$$
G_1 = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 0 & 1 & 0 & 0
\end{array}
\right],
$$
and let ${\mathcal C}_2$ be the $[7,4,3]$ Hamming code with generator matrix
$$
G_2 = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{array}
\right].
$$
These codes satisfy (A1)--(A3) of Definition~\ref{3sum_def}, and
their 3-sum, ${\mathcal C}_1 \oplus_3 {\mathcal C}_2$,
is the $[8,4,1]$ code ${\mathcal C}$ generated by
$$
G = \left[
\begin{array}{cccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1
\end{array}
\right].
$$
The minimal trellis of this code is shown in Figure~\ref{841_trellis}.
Note that ${\mathcal C}$ is not 3-connected. In fact, it is not
even 2-connected --- it has a 1-separation $(J,J^c)$ with $J = \{1\}$.
Now, ${\mathcal C}_1$ can be obtained as a minor of ${\mathcal C}$
by puncturing ${\mathcal C}$ at the last coordinate. However, ${\mathcal C}_2$ is not a
minor of ${\mathcal C}$, since puncturing ${\mathcal C}$ at any coordinate does not
yield a $[7,4,3]$ code, and shortening always yields
a code of dimension less than 4.
\label{not_3conn_example}
\end{example}
\begin{figure}[t]
\centering{\epsfig{file=841_trellis.eps, height=3cm}}
\caption{The minimal trellis of the code ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$ of
Example~\ref{not_3conn_example}.}
\label{841_trellis}
\end{figure}
We mention in passing that if ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$,
then the fact that ${\mathcal C}_1$ and ${\mathcal C}_2$ are equivalent to minors of ${\mathcal C}$
whenever ${\mathcal C}$ is 3-connected is far more difficult to prove
(see \cite[Section 4]{Sey80}) than the corresponding part of
Theorem~\ref{2sum_thm}.
We observed above that the mere existence
of an exact 3-separation is not enough for Theorem~\ref{3sum_thm}
to conclude that a code ${\mathcal C}$ is expressible as a 3-sum;
the 3-separation $(J,J^c)$ must also satisfy
$\min\{|J|,|J^c|\} \geq 4$, \emph{i.e.}, must
not be minimal\footnote{The definition of a minimal
$k$-separation is given immediately after Definition~\ref{ksep_def}.}.
It is also implicit in the statement of Theorem~\ref{3sum_thm}
that the existence of a non-minimal 3-separation
is a necessary condition for a code to be a 3-sum.
Indeed, if ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$, then ${\mathcal C}_1$ and ${\mathcal C}_2$ must each
have length at least 7, as per Definition~\ref{3sum_def}. So, with
$J = \{1,2,\ldots,n_1-3\}$ as in the statement of Theorem~\ref{3sum_thm},
it must be true that $|J| \geq 4$, and similarly, $|J^c| \geq 4$.
The following definition allows for a compact statement of a corollary to
Theorem~\ref{3sum_thm} along the lines of Corollary~\ref{2sum_cor2}.
\begin{definition}
A 3-connected code is \emph{internally 4-connected} if
all its 3-separations are minimal.
\label{int_4conn_def}
\end{definition}
Internal 4-connectedness is a notion that lies properly between
3-connectedness and 4-connectedness --- a 3-connected code that is
not 4-connected can be internally 4-connected. Note that any
3-separation in a 3-connected code must be exact, and so we can state
the following corollary to Theorem~\ref{3sum_thm}.
\begin{corollary}
A 3-connected code ${\mathcal C}$ is not internally 4-connected
iff there exist codes ${\mathcal C}_1$ and ${\mathcal C}_2$,
both equivalent to proper minors of ${\mathcal C}$, such that ${\mathcal C}$ is equivalent
to ${\mathcal C}_1 \oplus_3 {\mathcal C}_2$.
\label{3sum_cor1}
\end{corollary}
As in the case of 2-sums, we provide an efficient construction of the
components of the 3-sum when an exact, non-minimal
3-separation $(J,J^c)$ of ${\mathcal C}$ is given. This construction
furnishes a proof of the converse part of Theorem~\ref{3sum_thm}.
Our description is based loosely on the constructions given
in \cite[Section~8.3]{truemper} and \cite{GT}.
Let ${\mathcal C}$ be a code of length $n$ and dimension $k$, specified by
a $k \times n$ generator matrix $G$, and let
$(J,J^c)$ be an exact 3-separation of ${\mathcal C}$, with $|J| \geq 4$
and $|J^c| \geq 4$. By permuting coordinates
if necessary, we may assume that $J = \{1,2,\ldots,m\}$ for some $m$
such that $4 \leq m \leq n-4$.
Let $G|_J$ and $G|_{J^c}$ denote the restrictions of $G$ to
the columns indexed by $J$ and $J^c$, respectively;
thus, $G = [ G |_J \ \ G|_{J^c}]$. Let $\mbox{\textsf{rank}}(G|_J) = k_1$ and
$\mbox{\textsf{rank}}(G|_{J^c}) = k_2$; since $(J,J^c)$ is an exact 3-separation of ${\mathcal C}$,
we have $k_1 + k_2 = k + 2$. Note that $k_1 \leq k$ implies that
$k + k_2 \geq k_1 + k_2 = k + 2$, so that $k_2 \geq 2$; similarly,
$k_1 \geq 2$.
Bring $G$ into reduced row-echelon form (rref)
over ${\mathbb F}_2$. Permuting coordinates within $J$ and within $J^c$ if necessary,
$\mbox{\textsf{rref}}(G)$ may be assumed to be of the form
\begin{equation}
\overline{G} =
\left[
\begin{array}{cccc}
I_{k_1} & A & {\mathbf O} & B \\
{\mathbf O} & {\mathbf O} & I_{k_2-2} & C
\end{array}
\right],
\label{rref_eq2}
\end{equation}
where $I_j$, for $j = k_1,k_2-2$, denotes the $j \times j$ identity matrix,
$A$ is a $k_1 \times (|J| - k_1)$ matrix, $B$ is a
$k_1 \times (|J^c| - k_2 + 2)$ matrix,
$C$ is a $(k_2-2) \times (|J^c| - k_2 + 2)$
matrix, and the ${\mathbf O}$'s denote all-zeros matrices of appropriate sizes.
As a concrete example, consider the matrix $\overline{G}$ given in
Example~\ref{3sum_example}, which is indeed of the above form, with
$|J| = |J^c| = 4$, $k_1 = k_2 = 3$,
$$
A = \left[
\begin{array}{c}
1 \\ 1 \\ 1
\end{array}
\right],\ \
B = \left[
\begin{array}{ccc}
0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0
\end{array}
\right] \ \ \text{and} \ \
C = \left[
\begin{array}{ccc}
1 & 1 & 1
\end{array}
\right].
$$
The fact that the submatrix
$
\left[
\begin{array}{cc}
{\mathbf O} & B \\ I_{k_2-2} & C
\end{array}
\right]
$
must have rank equal to $\mbox{\textsf{rank}}(G |_{J^c}) = k_2$ implies
that $B$ must have rank 2. Hence, $B$ has two
linearly independent rows, call them ${\mathbf x}$ and ${\mathbf y}$, which form
a basis of the row-space of $B$. In particular,
each row of $B$ is either ${\mathbf 0}$, ${\mathbf x}$, ${\mathbf y}$ or ${\mathbf x}+{\mathbf y}$.
Now, define the $(k_1+1) \times (|J|+3)$ matrix
$$
G_1 = \left[\begin{array}{ccc}
I_{k_1} & A & D \\
{\mathbf 0} & {\mathbf 0} & {\mathbf 1}
\end{array}
\right],
$$
where $D$ is a $k_1 \times 3$ matrix whose $i$th row is defined as
$$
\text{$i$th row of $D$} =
\left\{
\begin{array}{cl}
000 & \text{if $i$th row of $B$ is ${\mathbf 0}$} \\
001 & \text{if $i$th row of $B$ is ${\mathbf x}$} \\
010 & \text{if $i$th row of $B$ is ${\mathbf y}$} \\
100 & \text{if $i$th row of $B$ is ${\mathbf x} + {\mathbf y}$};
\end{array}
\right.
$$
and the bottom row of $G_1$, represented by $[{\mathbf 0}\ \ {\mathbf 0}\ \ {\mathbf 1}]$,
is simply $0 \ldots 0 1 1 1$.
Next, define the $(k_2+1) \times (|J^c|+3)$ matrix
$$
G_2 = \left[\begin{array}{ccc} I_3 & {\mathbf O} & X \\
{\mathbf O} & I_{k_2-2} & C \end{array}\right]
= [I_{k_2+1} \ C''],
$$
where
$$
X = \left[\begin{array}{c}
{\mathbf x}+{\mathbf y} \\ {\mathbf y} \\ {\mathbf x}
\end{array}\right]
\ \ \ \text{and} \ \ \
C'' = \left[\begin{array}{c} X \\ C \end{array}\right].
$$
A straightforward verification yields that if
${\mathcal C}_1$ and ${\mathcal C}_2$ are the codes generated by $G_1$ and $G_2$,
respectively, then $\dim({\mathcal C}_1) = k_1+1$,
$\dim({\mathcal C}_2) = k_2 + 1$, and ${\mathcal C}_1,{\mathcal C}_2$ satisfy properties (A1)--(A3)
in Definition~\ref{3sum_def}, so ${\mathcal C}_1 \oplus_3 {\mathcal C}_2$ can be defined.
The construction of $G_1$ and $G_2$ above is carefully crafted to ensure that
all the rows of $\overline{G}$ are in ${\mathcal C}_1 \oplus_3 {\mathcal C}_2$.
Hence, $\dim({\mathcal C}_1 \oplus_3 {\mathcal C}_2) \geq \mbox{\textsf{rank}}(\overline{G}) = k_1 + k_2 - 2$.
However, by Proposition~\ref{3sum_prop1}, we have that
$\dim({\mathcal C}_1 \oplus_3 {\mathcal C}_2) = \dim({\mathcal C}_1) + \dim({\mathcal C}_2) - 4 =
k_1 + k_2 - 2$. Hence,
$\dim({\mathcal C}_1 \oplus_3 {\mathcal C}_2) = \mbox{\textsf{rank}}(\overline{G})$, implying that
$\overline{G}$ must be a generator matrix for ${\mathcal C}_1 \oplus_3 {\mathcal C}_2$.
Note that according to Theorem~\ref{3sum_thm}, if ${\mathcal C}$ is 3-connected,
then ${\mathcal C}_1$ and ${\mathcal C}_2$ are equivalent to proper minors of ${\mathcal C}$.
The procedure described above can be formalized into an algorithm that
takes as input a $k \times n$ generator matrix $G$ for ${\mathcal C}$,
and an exact, non-minimal 3-separation $(J,J^c)$ of it,
and produces as output a permutation $\pi$ of the
coordinates of ${\mathcal C}$, and the generator matrices of
two codes ${\mathcal C}_1$ and ${\mathcal C}_2$, such that ${\mathcal C} = \pi({\mathcal C}_1 \oplus_3 {\mathcal C}_2)$.
The codes ${\mathcal C}_1$ and ${\mathcal C}_2$ are both equivalent to proper minors of ${\mathcal C}$.
This procedure can be carried out
in $O(k^2n)$ time for the same reasons as in the 2-sum case.
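To make the construction above concrete, we give a short Python
sketch (the function name and the use of \texttt{numpy} are ours;
the blocks $A$, $B$, $C$ of $\mbox{\textsf{rref}}(G)$ are assumed to be given as
$0$-$1$ integer arrays with $\mbox{\textsf{rank}}(B) = 2$) that assembles the
matrices $G_1$ and $G_2$ defined above.
\begin{verbatim}
import numpy as np

def three_sum_components(A, B, C):
    # Sketch: build G1 and G2 from the blocks of rref(G).
    # Assumes A, B, C are 0/1 numpy arrays with rank(B) = 2 over GF(2).
    k1, k2m2 = B.shape[0], C.shape[0]
    # Choose a basis {x, y} of the row space of B: x is the first
    # nonzero row, y the first row independent of x.
    nonzero = [r for r in B if r.any()]
    x = nonzero[0]
    y = next(r for r in nonzero if (r != x).any())
    # Map each row of B to its 3-bit label, as in the definition of D.
    labels = {tuple(np.zeros_like(x)): [0, 0, 0],
              tuple(x): [0, 0, 1],
              tuple(y): [0, 1, 0],
              tuple((x + y) % 2): [1, 0, 0]}
    D = np.array([labels[tuple(r)] for r in B])
    bottom = np.hstack([np.zeros(k1 + A.shape[1], dtype=int),
                        np.ones(3, dtype=int)])
    G1 = np.vstack([np.hstack([np.eye(k1, dtype=int), A, D]), bottom])
    X = np.array([(x + y) % 2, y, x])
    G2 = np.hstack([np.eye(3 + k2m2, dtype=int), np.vstack([X, C])])
    return G1, G2
\end{verbatim}
Note that the basis $\{{\mathbf x},{\mathbf y}\}$ of the row space of $B$ is not
unique, so the sketch may produce generator matrices differing from,
though generating the same codes as, those obtained by hand.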
For the matrix $\overline{G}$ in Example~\ref{3sum_example},
we find the matrices $G_1$ and $G_2$ to be
$$
G_1 = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 & 0 & 0 & 0
\end{array}
\right],\ \ \ \text{and} \ \ \
G_2 = \left[
\begin{array}{ccccccc}
1 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{array}
\right],
$$
which are indeed generator matrices of the
two Hamming codes whose 3-sum is represented by
$\overline{G}$.
It must be pointed out that Theorem~\ref{3sum_thm} also holds when
the 3-sums in its statement are replaced by $\overline{3}$-sums. This is
a consequence of Proposition~\ref{conn_prop}, which shows that $(J,J^c)$
is a 3-separation of a code ${\mathcal C}$ iff it is a 3-separation of the
dual code ${\mathcal C}^\perp$. Hence, applying Theorem~\ref{3sum_thm} to ${\mathcal C}^\perp$,
and dualizing via Proposition~\ref{3sum_prop2}, we see that the
3-sums in the statement of Theorem~\ref{3sum_thm} can be
replaced by $\overline{3}$-sums. In particular, we also have the
following corollary to Theorem~\ref{3sum_thm}.
\begin{corollary}
A 3-connected code ${\mathcal C}$ is not internally 4-connected
iff there exist codes ${\mathcal C}_1$ and ${\mathcal C}_2$,
both equivalent to proper minors of ${\mathcal C}$, such that ${\mathcal C}$ is equivalent
to ${\mathcal C}_1 \,\overline{\oplus}_3\, {\mathcal C}_2$.
\label{3sum_cor2}
\end{corollary}
Putting together Theorem~\ref{decomp_thm1} with
Corollaries~\ref{3sum_cor1} and \ref{3sum_cor2}, we obtain the
following theorem, which summarizes the code decomposition
theory presented up to this point.
\begin{theorem}
A binary linear code either is 3-connected and internally 4-connected,
or can be constructed from 3-connected, internally 4-connected
proper minors of it by a sequence of operations of coordinate
permutation, direct sum, 2-sum and 3-sum (or $\overline{3}$-sum).
\label{decomp_thm2}
\end{theorem}
The decomposition of Theorem~\ref{decomp_thm2}
can be carried out in time polynomial in the length of the code, since
\begin{itemize}
\item[(a)]
the decomposition of Theorem~\ref{decomp_thm1} can be carried out in
polynomial time;
\item[(b)] as mentioned in Section~\ref{conn_section},
there are polynomial-time algorithms for finding non-minimal
3-separations (\emph{i.e.}, 3-separations $(J,J^c)$ with
$\min\{|J|,|J^c|\} \geq 4$) in a code, if they exist; and
\item[(c)] there is a polynomial-time procedure that, given an exact,
non-minimal 3-separation of a 3-connected code ${\mathcal C}$, produces codes
${\mathcal C}_1$ and ${\mathcal C}_2$, both equivalent to proper minors of ${\mathcal C}$,
and a permutation $\pi$ of the coordinate set of ${\mathcal C}$,
such that ${\mathcal C} = \pi({\mathcal C}_1 \oplus_3 {\mathcal C}_2)$ or, via Corollary~\ref{3sum_cor},
${\mathcal C} = \pi({\mathcal C}_1 \,\overline{\oplus}_3\, {\mathcal C}_2)$.
\end{itemize}
Note that the theorem does not guarantee uniqueness of
the code decomposition.
\subsection{Code-Decomposition Trees}
A binary tree is a convenient data structure for storing a
decomposition of a code via direct sums, 2-sums, and 3-sums.
Recall that a proper (or full) binary tree is a rooted tree such
that every node of the tree has either zero or two children.
We will drop the adjective ``proper'' as proper binary trees
are the only kind of binary trees we are interested in.
A node without any children is called a \emph{leaf}. Each
non-leaf node has two children, and we will distinguish between
the two, calling one the \emph{left} child and the other
the \emph{right} child.
Let ${\mathcal C}$ be a binary linear code. A \emph{code-decomposition tree}
for ${\mathcal C}$ is a binary tree ${\mathcal T}$ defined as follows.
Each node {\sf v } of ${\mathcal T}$ stores a triple
(\textsf{v.code},\textsf{v.perm},\textsf{v.sum}), where
\textsf{v.code} is a binary linear code, \textsf{v.perm} is either
NULL or a permutation of the coordinate set of \textsf{v.code}, and
\textsf{v.sum} $\in \{\odot,\oplus,\oplus_2,\oplus_3,\,\overline{\oplus}_3\,\}$.
For each node {\sf v } of ${\mathcal T}$, the triple
(\textsf{v.code},\textsf{v.perm},\textsf{v.sum})
must adhere to the following rules:
\begin{itemize}
\item[(R1)] if {\sf v } is the root node, then \textsf{v.code} is the code
${\mathcal C}$ itself;
\item[(R2)] \textsf{v.perm} $=$ NULL iff {\sf v } is a leaf;
\item[(R3)] \textsf{v.sum} $= \odot$ iff {\sf v } is a leaf;
\item[(R4)] if {\sf v } is a non-leaf node, then
\begin{itemize}
\item[(i)] \textsf{lchild.code} and
\textsf{rchild.code} are proper minors of \textsf{v.code}; and
\item[(ii)] the permutation \textsf{v.perm} applied to the sum
$$
(\text{\sf lchild.code})\ \
\text{\sf v.sum}\ \
(\text{\sf rchild.code})
$$
yields \textsf{v.code},
\end{itemize}
where \textsf{lchild} and \textsf{rchild} above respectively refer to
the left and right children of \textsf{v}.
\end{itemize}
In particular, (R4) ensures that for any node \textsf{v}
other than the root node in the tree, \textsf{v.code}
is a proper minor of ${\mathcal C}$.
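A minimal Python sketch of a node of such a tree follows; the class
and field names are ours, and \texttt{BinaryCode} stands for a
hypothetical wrapper around a generator matrix.
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class DecompNode:
    # Mirrors the triple (v.code, v.perm, v.sum) together with
    # pointers to the left and right children.
    code: "BinaryCode"             # hypothetical generator-matrix wrapper
    perm: Optional[Sequence[int]]  # None exactly when the node is a leaf
    sum_op: str                    # 'leaf', 'dsum', '2sum', '3sum' or '3bar'
    left: Optional["DecompNode"] = None
    right: Optional["DecompNode"] = None

    def is_leaf(self):
        return self.left is None and self.right is None
\end{verbatim}
Rules (R1)--(R4) then become invariants that any routine building
such a tree must maintain.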
We will identify the leaves of any code-decomposition tree
with the codes that they store. Theorem~\ref{decomp_thm2}
guarantees that each code ${\mathcal C}$ has a code-decomposition tree
in which each leaf is 3-connected and internally 4-connected. Such
a code-decomposition tree will be called \emph{complete}.
A complete code-decomposition tree for ${\mathcal C}$ can be constructed
in time polynomial in the length of the code ${\mathcal C}$, since
the decomposition of Theorem~\ref{decomp_thm2}
can be carried out in polynomial time. Note that if ${\mathcal C}$
itself is 3-connected and internally 4-connected, then there is
exactly one code-decomposition tree for it, which is the tree
consisting of the single node ${\mathcal C}$ --- or more precisely,
the single node that stores $({\mathcal C},\text{NULL},\odot)$.
Finally, a code-decomposition tree is called \emph{3-homogeneous}
(resp.\ \emph{$\overline{3}$-homogeneous}) if for each non-leaf node
{\sf v } in the tree, we have \textsf{v.sum} $\in \{\oplus,{\oplus}_2,{\oplus}_3\}$
(resp.\ \textsf{v.sum} $\in \{\oplus,{\oplus}_2,\,\overline{\oplus}_3\,\}$).
Thus, in a 3-homogeneous (resp.\ $\overline{3}$-homogeneous) tree,
no $\overline{3}$-sums (resp.\ 3-sums) are used. It follows from
Corollaries~\ref{3sum_cor1} and \ref{3sum_cor2} that if
a code-decomposition tree has a node of the form {\sf v } $= ({\mathcal C},\pi,\,\overline{\oplus}_3\,)$,
with ${\mathcal C}$ being 3-connected, then the subtree with {\sf v } as a
root can be replaced with a different subtree in which the root node is
\textsf{v'} $= ({\mathcal C},\pi',{\oplus}_3)$. Therefore, every code has
a complete, 3-homogeneous (or $\overline{3}$-homogeneous) code-decomposition
tree, which again can be constructed in time polynomial in the
length of the code.
Having described in detail Seymour's decomposition theory in the context
of binary linear codes, we now turn to some of its applications.
This theory mainly derives its applications from families of codes
that are minor-closed, and such families form the subject of the next section.
\section{Minor-Closed Families of Codes\label{minor_closed_section}}
A family ${\mathfrak C}$ of binary linear codes is defined to be
\emph{minor-closed} if for each ${\mathcal C} \in {\mathfrak C}$, every code equivalent
to a minor of ${\mathcal C}$ is also in ${\mathfrak C}$. Note that this definition
automatically implies that a minor-closed family, ${\mathfrak C}$, of codes is
closed under code equivalence, \emph{i.e.}, if ${\mathcal C} \in {\mathfrak C}$, then
all codes equivalent to ${\mathcal C}$ are also in ${\mathfrak C}$.
A non-trivial example of a minor-closed family is the set of all
graphic codes, since any minor of a graphic code is graphic.
We will encounter other examples of minor-closed families
(regular codes and geometrically perfect codes)
further on in this paper. We mention in passing another
interesting example of such a family --- codes of bounded
trellis state-complexity. Recall from Section~\ref{conn_section}
that the state-complexity profile of a length-$n$ code ${\mathcal C}$
is defined to be the vector ${\mathbf s}({\mathcal C}) = (s_0({\mathcal C}),\ldots,s_n({\mathcal C}))$,
where $s_i({\mathcal C}) = \dim({\mathcal C}|_J) + \dim({\mathcal C}|_{J^c}) - \dim({\mathcal C})$
for $J = [i] \subset [n]$. Define $s_{\max}({\mathcal C}) =
\max_{i \in [n]} s_i({\mathcal C})$. For a fixed integer $w > 0$, let $TC_w$ denote
the family of codes ${\mathcal C}$ such that there exists a code ${\mathcal C}'$ equivalent
to ${\mathcal C}$ with $s_{\max}({\mathcal C}') \leq w$. Then, $TC_w$ is minor-closed
\cite{kashyap_SIAM}, \cite{kashyap_ITW}. A similar statement holds
for the family of codes that have a cycle-free normal realization
(cf.\ \cite{For03}) whose state-complexity is bounded by $w$.
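Since $\dim({\mathcal C}|_J)$ is simply the rank over ${\mathbb F}_2$ of the columns
of a generator matrix indexed by $J$, the state-complexity profile is
easy to compute from a generator matrix; a Python sketch (our own
naming) follows.
\begin{verbatim}
import numpy as np

def gf2_rank(A):
    # Rank of a 0/1 matrix over GF(2), by Gaussian elimination.
    A = A.copy() % 2
    r = 0
    for c in range(A.shape[1]):
        piv = [i for i in range(r, A.shape[0]) if A[i, c]]
        if not piv:
            continue
        A[[r, piv[0]]] = A[[piv[0], r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] = (A[i] + A[r]) % 2
        r += 1
    return r

def state_complexity_profile(G):
    # s_i(C) = dim(C|_J) + dim(C|_Jc) - dim(C) for J = {1,...,i}.
    k, n = G.shape
    dim = gf2_rank(G)
    return [gf2_rank(G[:, :i]) + gf2_rank(G[:, i:]) - dim
            for i in range(n + 1)]
\end{verbatim}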
A general construction of minor-closed families is obtained
by fixing a collection ${\mathcal F}$ of codes, and defining
${\mathfrak C}_{\mathcal F}$ to be the set of all codes ${\mathcal C}$ such that no minor
of ${\mathcal C}$ is equivalent to any ${\mathcal C}' \in {\mathcal F}$. As an example, let
${\mathcal F} = \{{\mathcal H}_7,{\mathcal H}_7^\perp,{\mathcal C}(K_5)^\perp,{\mathcal C}(K_{3,3})^\perp\}$,
where ${\mathcal H}_7$ is the $[7,4]$ Hamming code\footnote{From now on,
${\mathcal H}_7$ will always denote the $[7,4]$ Hamming code.}.
By Theorem~\ref{graphic_code_thm}, ${\mathfrak C}_{\mathcal F}$ in this
case is precisely the family of graphic codes.
It is clear that ${\mathfrak C}_{\mathcal F}$ is a minor-closed family for any fixed ${\mathcal F}$.
In fact, every minor-closed family can be obtained in this manner.
Indeed, let ${\mathfrak C}$ be a minor-closed family of codes.
A code ${\mathcal D}$ is said to be an \emph{excluded minor} of
${\mathfrak C}$ if ${\mathcal D} \notin {\mathfrak C}$, but every proper minor of ${\mathcal D}$
is in ${\mathfrak C}$. It is not hard to verify that a code ${\mathcal C}$
is in ${\mathfrak C}$ iff no minor of ${\mathcal C}$ is an excluded minor of ${\mathfrak C}$.
Theorem~\ref{graphic_code_thm} is an example of such an
\emph{excluded-minor characterization}, and we will see
more such examples (Theorems~\ref{regular_thm1} and \ref{P_Q_thm})
further below. Thus, taking ${\mathcal F}$ to be the collection of
all excluded minors of ${\mathfrak C}$, we have that ${\mathfrak C} = {\mathfrak C}_{{\mathcal F}}$.
A tantalizing conjecture of Robertson and Seymour asserts that any
minor-closed family ${\mathfrak C}$ of binary linear codes
has only \emph{finitely many} excluded minors \cite[Conjecture~1.2]{GGW_survey}.
Let ${\mathfrak C}$ be a minor-closed family of codes. By Theorem~\ref{decomp_thm2},
every ${\mathcal C} \in {\mathfrak C}$ can be constructed
from 3-connected, internally 4-connected codes in ${\mathfrak C}$ using
direct sums, 2-sums, and 3- or $\overline{3}$-sums.
The converse need not always be true, \emph{i.e.}, it is not
necessarily true that if a code ${\mathcal C}$ has a decomposition via
direct sums, 2-sums, and 3- or $\overline{3}$-sums into codes in ${\mathfrak C}$,
then ${\mathcal C} \in {\mathfrak C}$. Of course, the converse does hold if ${\mathfrak C}$
is also closed under the operations of direct sum, 2-sum, 3-sum
and $\overline{3}$-sum. As usual, ${\mathfrak C}$ is defined to be closed
under direct sum (resp.\ 2-sum, 3-sum, $\overline{3}$-sum) if
for any pair of codes in ${\mathfrak C}$, their direct sum
(resp.\ 2-sum, 3-sum, $\overline{3}$-sum, if it can be defined)
is also in ${\mathfrak C}$. We summarize this in the following proposition.
\begin{proposition}
Let ${\mathfrak C}$ be a minor-closed family of codes that is also
closed under the operations of direct sum, 2-sum, 3-sum and
$\overline{3}$-sum. Then, the following are equivalent for a code ${\mathcal C}$.
\begin{itemize}
\item[(i)] ${\mathcal C}$ is in ${\mathfrak C}$.
\item[(ii)] The leaves of some code-decomposition tree
for ${\mathcal C}$ are in ${\mathfrak C}$.
\item[(iii)] The leaves of some complete, 3-homogeneous
or $\overline{3}$-homogeneous code-decomposition tree for ${\mathcal C}$
are in ${\mathfrak C}$.
\item[(iv)] The leaves of every code-decomposition tree
for ${\mathcal C}$ are in ${\mathfrak C}$.
\end{itemize}
\label{minor_closed_prop}
\end{proposition}
\begin{proof}
(i) implies (iv) since the leaves of any code-decomposition tree
of ${\mathcal C}$ are minors of ${\mathcal C}$, and ${\mathfrak C}$ is minor-closed.
The implications (iv) $\Rightarrow$ (iii) and (iii) $\Rightarrow$ (ii)
are trivial. (ii) implies (i) since ${\mathfrak C}$ is closed under
direct-sums, 2-sums, 3-sums and $\overline{3}$-sums.
\end{proof}
Since a complete code-decomposition tree of any code ${\mathcal C}$ can be constructed
in time polynomial in the length of ${\mathcal C}$, we have the following corollary
to the above result.
\begin{corollary}
Let ${\mathfrak C}$ be a minor-closed family of codes that is also
closed under the operations of direct sum, 2-sum, 3-sum
and $\overline{3}$-sum. Then the following are equivalent statements.
\begin{itemize}
\item[(i)] It can be decided in polynomial time whether
or not a given code ${\mathcal C}$ is in ${\mathfrak C}$.
\item[(ii)] It can be decided in polynomial time whether
or not a given 3-connected, internally 4-connected code ${\mathcal C}$
is in ${\mathfrak C}$.
\end{itemize}
\label{minor_closed_cor}
\end{corollary}
The first major application of results such as the above ---
the application which was in fact the motivation for Seymour's
matroid decomposition theory --- relates to totally unimodular matrices.
A real matrix $A$ is said to be \emph{totally unimodular}
if the determinant of every square submatrix of $A$ is in $\{0,1,-1\}$.
In particular, each entry of a totally unimodular matrix is in
$\{0,1,-1\}$. Such matrices are of fundamental importance in
combinatorial optimization and network flow problems, because
total unimodularity is closely related to integer linear programming
\cite{HK}.
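Total unimodularity can be checked directly from the definition for
small matrices; the following brute-force Python sketch (exponential
in the matrix dimensions, and intended only to illustrate the
definition) examines every square submatrix. A polynomial-time test
follows instead from the decomposition theory described below.
\begin{verbatim}
import numpy as np
from itertools import combinations

def is_totally_unimodular(A):
    # Check that every square submatrix of A has determinant
    # in {-1, 0, 1}. Exponential time; small matrices only.
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True
\end{verbatim}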
A binary matrix is defined to be \emph{regular} if its 1's can
be replaced by $\pm 1$'s in such a way that the resulting
matrix is totally unimodular. Consequently, a binary linear code
is defined to be \emph{regular} if it has a regular parity-check matrix.
It turns out that for a regular code, \emph{every} parity-check matrix
is regular \cite[Corollary~9.2.11]{truemper}. Furthermore, given
a regular binary matrix $B$, there is a polynomial-time algorithm
that converts $B$ to a totally unimodular matrix by assigning
signs to the 1's in $B$ \cite[Corollary~9.2.7]{truemper}.
Thus, regular codes form the key to understanding total unimodularity.
The following theorem, due to Tutte \cite{Tut58}, provides an elegant
excluded-minor characterization of regular codes.
\begin{theorem}
A binary linear code is regular iff it does not contain as a minor
any code equivalent to the $[7,4]$ Hamming code or its dual.
\label{regular_thm1}
\end{theorem}
It follows from the theorem that the family of regular codes,
which we will denote by ${\mathfrak R}$, is
minor-closed, since it is of the form ${\mathfrak C}_{\mathcal F}$ for
${\mathcal F} = \{{\mathcal H}_7,{\mathcal H}_7^\perp\}$. Furthermore, ${\mathfrak R}$ is
closed under the taking of code duals, \emph{i.e.}, the dual of a
regular code is also regular. This is because a code ${\mathcal C}$ contains
${\mathcal H}_7$ as a minor iff its dual ${\mathcal C}^\perp$ contains ${\mathcal H}_7^\perp$
as a minor. It can further be shown \cite[p.\ 437]{oxley}
that ${\mathfrak R}$ is closed under the operations of direct sum,
2-sum, 3-sum and $\overline{3}$-sum.
Note that by Theorem~\ref{graphic_code_thm},
${\mathfrak R}$ contains the family of graphic codes, and hence, the family of
\emph{co-graphic} codes as well, which are codes whose duals are graphic.
Using a long and difficult argument, Seymour \cite{Sey80} proved that
the 3-connected, internally 4-connected codes in ${\mathfrak R}$ are either graphic,
co-graphic, or equivalent to a particular isodual
code that he called $R_{10}$, which is neither graphic nor co-graphic.
\begin{theorem}[\cite{oxley}, Corollary~13.2.6]
If ${\mathcal C}$ is a 3-connected, internally 4-connected regular code, then
${\mathcal C}$ is either graphic, co-graphic, or equivalent to $R_{10}$, which
is the $[10,5,4]$ code with parity-check matrix
$$
\left[
\begin{array}{cccccccccc}
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1
\end{array}
\right].
$$
\label{regular_thm2}
\end{theorem}
Thus, Seymour's decomposition theory shows that any regular code
(and so by assignment of signs, any totally unimodular matrix)
can be constructed by piecing together --- via direct sums, 2-sums,
and 3-sums or $\overline{3}$-sums --- graphic codes, co-graphic
codes, and codes equivalent to $R_{10}$. Also, membership
in the family of regular codes can be decided in polynomial time.
Indeed, as mentioned at the end of Section~\ref{matroid_section}, there are
polynomial-time algorithms for deciding whether or not a given code
is graphic. Given an $m \times n$ parity-check matrix $H$ for a
code, a generator matrix for the code can be computed using elementary
row operations on $H$ in $O(m^2n)$ time. Thus, the dual of a code
can be determined in polynomial time, and hence it can be decided
in polynomial time whether or not a given code is co-graphic.
Hence, from Corollary~\ref{minor_closed_cor} and
Theorem~\ref{regular_thm2}, it follows that there is a polynomial-time
algorithm for determining whether or not a given code is regular.
The best such algorithm known is due to Truemper \cite{T90},
which runs in $O((m+n)^3)$ time. Truemper's algorithm is also based
on Seymour's decomposition theory, but it implements a highly
efficient procedure for carrying out the decomposition.
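The row-reduction step mentioned above is easy to make explicit; the
following Python sketch computes a generator matrix as a basis of
the null space of $H$ over ${\mathbb F}_2$, in $O(m^2 n)$ time.
\begin{verbatim}
import numpy as np

def generator_from_parity_check(H):
    # Gaussian elimination over GF(2): bring H to reduced
    # row-echelon form, then read off a null-space basis.
    H = H.copy() % 2
    m, n = H.shape
    pivots, r = [], 0
    for c in range(n):
        rows = [i for i in range(r, m) if H[i, c]]
        if not rows:
            continue
        H[[r, rows[0]]] = H[[rows[0], r]]
        for i in range(m):
            if i != r and H[i, c]:
                H[i] = (H[i] + H[r]) % 2
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    G = np.zeros((len(free), n), dtype=int)
    for row, f in enumerate(free):
        G[row, f] = 1
        for i, p in enumerate(pivots):
            G[row, p] = H[i, f]  # x_p = sum over free f of H[i,f] x_f
    return G
\end{verbatim}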
While the application of Seymour's decomposition theory to regular codes
is interesting, it is not very useful, perhaps, from a coding-theoretic
perspective. However, in the next section, we give an application that
should be of some interest to a coding theorist.
\section{Application: ML Decoding\label{LP_section}}
The recent work of Feldman, Wainwright and Karger \cite{FWK} shows
that ML decoding of a binary linear code ${\mathcal C}$
over a discrete memoryless channel can be formulated as a
linear program (LP). Recall that the ML decoding problem
is: given a received word ${\mathbf y}$ at the channel output,
find a codeword ${\mathbf x}\in{\mathcal C}$ that maximizes the probability,
$\Pr[{\mathbf y}|{\mathbf x}]$, of receiving ${\mathbf y}$ conditioned on the event
that ${\mathbf x}$ was transmitted.
As observed by Feldman \emph{et al.}, under the assumption
of a discrete memoryless channel, given a received word
${\mathbf y}=y_1 y_2 \ldots y_n$, the
problem of determining ${\arg\max}_{{\mathbf x}\in{\mathcal C}} \Pr[{\mathbf y}|{\mathbf x}]$ is
equivalent to the problem of finding
${\arg\min}_{{\mathbf x}\in{\mathcal C}} \langle\mathbf{\gamma},{\mathbf x}\rangle$,
where $\mathbf{\gamma} = (\gamma_1,\gamma_2,\ldots,\gamma_n)$ is given by
\begin{equation}
\gamma_i = \log \left(\frac{\Pr[y_i|x_i=0]}{\Pr[y_i|x_i=1]}\right)
\label{gamma_def}
\end{equation}
and $\langle\cdot\, , \cdot\rangle$ is the standard inner product on ${\mathbb R}^n$.
Here, for the inner product $\langle\mathbf{\gamma},{\mathbf x}\rangle$ to make sense,
a binary codeword ${\mathbf x}=x_1 x_2 \ldots x_n \in{\mathcal C}$ is
identified with the real vector $(x_1,x_2,\ldots,x_n) \in \{0,1\}^n
\subset {\mathbb R}^n$.
The above formulation shows ML decoding to be
equivalent to the minimization of a linear function
over a finite set ${\mathcal C} \subset \{0,1\}^n$. Let $P({\mathcal C})$
be the \emph{codeword polytope} of ${\mathcal C}$, \emph{i.e.},
the convex hull in ${\mathbb R}^n$ of the finite set ${\mathcal C}$.
It can be shown that the set of vertices of $P({\mathcal C})$ coincides with ${\mathcal C}$.
The key point now is that over a polytope $P$, a linear function
$\phi$ attains its minimum value $\phi_{\min} = \min \{\phi({\mathbf x}): {\mathbf x} \in P\}$
at a vertex of $P$. In particular,
$
{\min}_{{\mathbf x}\in {\mathcal C}} \langle\mathbf{\gamma},{\mathbf x}\rangle
= {\min}_{{\mathbf x}\in P({\mathcal C})} \langle\mathbf{\gamma},{\mathbf x}\rangle.
$
Thus, ML decoding is equivalent to finding a vertex of the polytope
$P({\mathcal C})$ that achieves ${\min}_{{\mathbf x}\in P({\mathcal C})} \langle\mathbf{\gamma},{\mathbf x}\rangle$,
which is a classic LP.
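To illustrate the cost formulation, the following Python sketch
(assuming, for concreteness, a binary symmetric channel with
crossover probability $p$, and an explicit list of codewords, so the
search is exponential in general) computes the vector
$\mathbf{\gamma}$ of (\ref{gamma_def}) and minimizes
$\langle\mathbf{\gamma},{\mathbf x}\rangle$ exhaustively.
\begin{verbatim}
import numpy as np

def ml_decode_bruteforce(codewords, y, p):
    # For a BSC(p): gamma_i = log((1-p)/p) if y_i = 0,
    # and -log((1-p)/p) if y_i = 1.
    llr = np.log((1 - p) / p)
    gamma = np.where(np.asarray(y) == 0, llr, -llr)
    return min(codewords, key=lambda c: float(gamma @ c))
\end{verbatim}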
However, ML decoding of an arbitrary code is known to be NP-hard
\cite{BMvT}. So, in general, solving the above LP over the codeword polytope
is also NP-hard. A strategy often followed in such a situation
is to ``relax'' the problem. The idea is to
look for a polytope that contains the code as a subset of its vertex set,
but which has some property that allows an LP defined over it
to be solved more easily. Such a polytope is called a \emph{relaxation}
of the codeword polytope $P({\mathcal C})$.
A certain relaxation of the codeword polytope has received much
recent attention \cite{FWK},\cite{VK05},\cite{ST06}. This is
the polytope which, given a code ${\mathcal C}$ of length $n$,
and a subset $H \subset {\mathcal C}^\perp$, is defined as
$$
Q(H) = \bigcap_{{\mathbf h} \in H} P({\mathbf h}^\perp),
$$
where $P({\mathbf h}^\perp)$ is the codeword polytope of the code
${\mathbf h}^\perp = \{{\mathbf c} \in {\mathbb F}_2^n:\ \langle {\mathbf h},{\mathbf c} \rangle \equiv 0 \pmod 2\}$.
Note that since ${\mathcal C} \subset {\mathbf h}^\perp$ for any ${\mathbf h} \in {\mathcal C}^\perp$,
we have that $P({\mathcal C}) \subset \bigcap_{{\mathbf h} \in H} P({\mathbf h}^\perp) = Q(H)$
for any $H \subset {\mathcal C}^\perp$.
In particular, $P({\mathcal C}) \subset Q({\mathcal C}^\perp)$ for any code ${\mathcal C}$.
For any $H \subset {\mathcal C}^\perp$, the polytope $Q(H)$ contains ${\mathcal C}$
as a subset of its vertex set, ${\mathcal V}(H)$. This is because
${\mathcal C} \subset Q(H) \cap \{0,1\}^n$, and since $Q(H)$ is
contained within the $n$-cube $[0,1]^n$, we also have
$Q(H) \cap \{0,1\}^n \subset {\mathcal V}(H)$. Thus, $Q(H)$ is indeed
a relaxation of $P({\mathcal C})$. Consequently, the LP
$\min_{{\mathbf x} \in Q(H)} \langle \gamma, {\mathbf x}\rangle$, where $\gamma$ is the vector defined
via (\ref{gamma_def}), constitutes a relaxation of the LP
that represents ML decoding.
Now, any standard LP-solving algorithm requires that the
constraints of the LP to be solved be represented via
linear inequalities. The advantage of using
the relaxation $Q(H)$ is that there is a convenient such
representation of the constraint ${\mathbf x} \in Q(H)$.
The polytope $Q(H)$ can also be expressed as
(see \emph{e.g.}\ \cite[Theorem~4]{FWK} or \cite[Lemma~26]{VK05}),
\begin{equation}
Q(H) = \bigcap_{{\mathbf h} \in H} \Pi(\mbox{\textsf{supp}}({\mathbf h})),
\label{Q_eq1}
\end{equation}
where for $S \subset [n]$, $\Pi(S)$ denotes the polyhedron
\begin{equation}
\Pi(S) = \bigcap_{\stackrel{J \subset S}{\mbox{\tiny $|J|$ odd}}}
\left\{(x_1,\ldots, x_n) \in [0,1]^n:\ \sum_{j \in J} x_j -
\sum_{i \in S \setminus J} x_i
\leq |J| - 1\right\}.
\label{Q_eq2}
\end{equation}
The efficiency of a practical LP solver (like, say, the simplex or ellipsoid
algorithm) depends on the size of the LP representation, which is
proportional to the number of variables and linear inequalities forming
the constraints. If $H$ above consists of the rows of a parity-check
matrix of a low-density parity-check (LDPC) code, then the representation
of $Q(H)$ given by (\ref{Q_eq1})--(\ref{Q_eq2}) has size linear in
the codelength $n$. So, the ellipsoid algorithm, for example,
would be guaranteed to solve the LP
$\min_{{\mathbf x} \in Q(H)} \langle \gamma, {\mathbf x}\rangle$ in time polynomial in $n$.
However, as we explain next, this LP is no longer equivalent to
ML decoding in general.
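For concreteness, the following Python sketch enumerates the
inequalities defining $\Pi(S)$ in (\ref{Q_eq2}); their number is
$2^{|S|-1}$, which is why the representation
(\ref{Q_eq1})--(\ref{Q_eq2}) is small only when every
${\mathbf h} \in H$ has low weight.
\begin{verbatim}
from itertools import combinations

def pi_inequalities(S, n):
    # For each odd-size J inside S, yield (a, b) such that the
    # constraint is  a . x <= b, i.e.,
    # sum_{j in J} x_j - sum_{i in S \ J} x_i <= |J| - 1.
    # The box constraints 0 <= x_i <= 1 are omitted.
    S = sorted(S)
    for k in range(1, len(S) + 1, 2):
        for J in combinations(S, k):
            a = [0.0] * n
            for j in J:
                a[j] = 1.0
            for i in S:
                if i not in J:
                    a[i] = -1.0
            yield a, len(J) - 1
\end{verbatim}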
Let ${\mathcal J}(H) = {\mathcal V}(H) \cap \{0,1\}^n$
denote the set of \emph{integral vertices} (\emph{i.e.}, vertices all of
whose coordinates are integers) of $Q(H)$. We noted above that
${\mathcal C} \subset {\mathcal J}(H)$. If $H$ is a spanning subset (over ${\mathbb F}_2$)
of ${\mathcal C}^\perp$, so that the vectors in $H$ form (the rows of)
a parity-check matrix of ${\mathcal C}$, then we in fact have ${\mathcal C} = {\mathcal J}(H)$.
This is because if ${\mathbf x} \in \{0,1\}^n$ is not in ${\mathcal C}$,
then ${\mathbf x} \notin {\mathbf h}^\perp$ for some ${\mathbf h} \in H$,
and hence, ${\mathbf x} \notin P({\mathbf h}^\perp) \supset Q(H)$.
The polytope $Q(H)$ in this case is the ``fundamental polytope''
of Vontobel and Koetter \cite{VK05} (or equivalently,
the ``projected polytope'' $\overline{Q}$
of Feldman \emph{et al}.\ \cite[p.\ 958]{FWK}). The fact
that ${\mathcal C} = {\mathcal J}(H)$ for such a polytope $Q(H)$ implies that
the polytope has the following ``ML certificate''
property \cite[Proposition~2]{FWK}: if the LP
$\min_{{\mathbf x} \in Q(H)} \langle \gamma, {\mathbf x}\rangle$, where $\gamma$ is the vector defined
via (\ref{gamma_def}), attains its minimum at some ${\mathbf x} \in {\mathcal J}(H)$,
then ${\mathbf x}$ is guaranteed to be the ML codeword. However, it is
possible that the above LP attains its minimum at some
non-integral vertex ${\mathbf x} \in {\mathcal V}(H) - {\mathcal J}(H)$, in which case decoding
via linear programming over $Q(H)$ fails. The non-integral vertices
of $Q(H)$ are called ``pseudocodewords''.
It is naturally of interest to know when a code ${\mathcal C}$ has
a fundamental polytope $Q(H)$ (for some spanning subset $H$ of
${\mathcal C}^\perp$) without pseudocodewords. For such codes, ML
decoding can be exactly implemented as an LP over $Q(H)$. Clearly,
$Q(H)$ has no pseudocodewords iff ${\mathcal C} = {\mathcal V}(H)$, or equivalently,
$P({\mathcal C}) = Q(H)$. But since $P({\mathcal C}) \subset Q({\mathcal C}^\perp) \subset Q(H)$,
this obviously implies that we must have $P({\mathcal C}) = Q({\mathcal C}^\perp)$.
Conversely, if $P({\mathcal C}) = Q({\mathcal C}^\perp)$, then we may simply take
$H = {\mathcal C}^\perp$ to obtain a fundamental polytope $Q(H)$ without
pseudocodewords. We record this observation as a lemma.
\begin{lemma}
Let ${\mathcal C}$ be a binary linear code. There exists a
spanning subset, $H$, of ${\mathcal C}^\perp$ such that the polytope
$Q(H)$ has no pseudocodewords iff $P({\mathcal C}) = Q({\mathcal C}^\perp)$.
\label{pseudocodeword_lemma}
\end{lemma}
A code ${\mathcal C}$ for which $P({\mathcal C}) = Q({\mathcal C}^\perp)$ holds will be called
\emph{geometrically perfect}, and we will denote by
${\mathfrak G}$ the family of all such codes. So the question then is:
which codes are geometrically perfect?
An answer to this was provided by Barahona and Gr\"otschel \cite{BG},
who showed that the relationship $P({\mathcal C}) = Q({\mathcal C}^\perp)$ is equivalent
to Seymour's ``sums-of-circuits'' property for binary matroids
\cite{Sey81}. The following theorem is thus equivalent to
Seymour's characterization of binary matroids with the sums-of-circuits
property.
\begin{theorem}[\cite{BG}, Theorem~3.5]
A binary linear code ${\mathcal C}$ is geometrically perfect iff
${\mathcal C}$ does not contain as a minor any code equivalent to
${\mathcal H}_7^\perp$, $R_{10}$ or ${\mathcal C}(K_5)^\perp$.
\label{P_Q_thm}
\end{theorem}
By the above theorem, ${\mathfrak G}$
is minor-closed. Moreover, since none of ${\mathcal H}_7^\perp$, $R_{10}$ or
${\mathcal C}(K_5)^\perp$ is graphic, no graphic code can contain any of
them as a minor, and so graphic codes are geometrically perfect.
Gr\"otschel and Truemper \cite[Section~4]{GT} showed that ${\mathfrak G}$ is closed
under the operations of direct sum, 2-sum and $\overline{3}$-sum,
but is not closed under 3-sum. They also observed \cite[p.\ 326]{GT}
that any code in ${\mathfrak G}$ can be constructed via direct sums,
2-sums and $\overline{3}$-sums from graphic codes and copies
of the codes ${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$ and ${\mathcal C}(V_8)^\perp$, where
$V_8$ is the graph in Figure~\ref{V8_fig}. Indeed, this result
is implied by Theorems~6.4, 6.9 and 6.10 in \cite{Sey81}. It is not
hard to verify that the codes ${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$ and
${\mathcal C}(V_8)^\perp$ are in fact in ${\mathfrak G}$. Putting these facts together,
we obtain the following theorem.
\begin{figure}[t]
\centerline{\epsfig{file=V8.eps, width=3.5cm}}
\caption{The graph $V_8$.}
\label{V8_fig}
\end{figure}
\begin{theorem}
For a binary linear code ${\mathcal C}$, the following are equivalent statements.
\begin{itemize}
\item[(i)] ${\mathcal C}$ is geometrically perfect, \emph{i.e.},
$P({\mathcal C}) = Q({\mathcal C}^\perp)$.
\item[(ii)] Each leaf in some complete, $\overline{3}$-homogeneous
code-decomposition tree for ${\mathcal C}$ is either graphic, or equivalent
to one of the codes ${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$ and ${\mathcal C}(V_8)^\perp$.
\item[(iii)] Each leaf in every complete, $\overline{3}$-homogeneous
code-decomposition tree for ${\mathcal C}$ is either graphic, or equivalent
to one of the codes ${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$ and ${\mathcal C}(V_8)^\perp$.
\end{itemize}
\label{mfG_decomp_thm}
\end{theorem}
\begin{proof}
(i) implies (iii) follows directly from the fact that
any code in ${\mathfrak G}$ can be constructed via direct sums,
2-sums and $\overline{3}$-sums from graphic codes and copies (up
to equivalence) of the codes ${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$
and ${\mathcal C}(V_8)^\perp$. The implication
(iii) $\Rightarrow$ (ii) is trivial. Finally,
(ii) $\Rightarrow$ (i) holds since graphic codes and
the codes ${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$ and ${\mathcal C}(V_8)^\perp$ are
all in ${\mathfrak G}$, and ${\mathfrak G}$ is closed under direct sum, 2-sum
and $\overline{3}$-sum.
\end{proof}
Since a complete, $\overline{3}$-homogeneous code-decomposition tree for a code
can be constructed in polynomial time, and testing for
graphicness or equivalence to
${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$ and ${\mathcal C}(V_8)^\perp$ can also be
carried out in polynomial time, we have the following corollary to
the above theorem.
\begin{corollary}
It can be decided in polynomial time whether or not a given code
${\mathcal C}$ is geometrically perfect, \emph{i.e.}, has the property
$P({\mathcal C}) = Q({\mathcal C}^\perp)$.
\label{mfG_cor}
\end{corollary}
However, this is only half the story. If ${\mathcal C}$ is a geometrically
perfect code, the algorithm guaranteed by the above result will determine
this to be the case, but will not produce a ``small'' subset
$H \subset {\mathcal C}^\perp$ such that $P({\mathcal C}) = Q(H)$. The only information
we would have is that $H$ can be taken to be the \emph{entire} dual code
${\mathcal C}^\perp$. While it would then be true that the LP
$\min_{{\mathbf x} \in Q({\mathcal C}^\perp)} \langle\gamma,{\mathbf x}\rangle$ is equivalent to ML decoding,
no known LP-solving algorithm could be guaranteed to efficiently solve
this LP. This is because the representation of $Q({\mathcal C}^\perp)$
given by (\ref{Q_eq1})--(\ref{Q_eq2}) has size exponential in
the codelength $n$. Fortunately, as we shall describe next, for the
family, ${\mathfrak G}$, of geometrically perfect codes, ML decoding can always be
implemented in time polynomial in codelength, \emph{not} using
an LP-solving algorithm, but by means of a combinatorial optimization
algorithm that uses code decompositions. \\[-6pt]
In a series of papers \cite{T85}--\cite{T90} (see also \cite{truemper}),
Truemper carried out a careful examination of matroid decompositions,
from which a particularly interesting observation concerning
geometrically perfect codes could be inferred.
Let ${\mathfrak G}_0$ be the sub-family of ${\mathfrak G}$
that consists of all graphic codes and codes equivalent to one of
${\mathcal H}_7$, ${\mathcal C}(K_{3,3})^\perp$ and ${\mathcal C}(V_8)^\perp$.
As observed in the proof of Corollary~6.6 in \cite{GT}, Truemper's analysis of
matroid decompositions could be used to show that a 2-connected
code ${\mathcal C} \in {\mathfrak G} - {\mathfrak G}_0$ is equivalent to
either ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$ or ${\mathcal C}_1 \,\overline{\oplus}_3\, {\mathcal C}_2$, for some
${\mathcal C}_1 \in {\mathfrak G}_0$ and some 2-connected code ${\mathcal C}_2 \in {\mathfrak G}$,
both of which are equivalent to minors of ${\mathcal C}$.
The decomposition of ${\mathcal C}$ into ${\mathcal C}_1$ and ${\mathcal C}_2$, along with the
coordinate permutation that takes ${\mathcal C}_1 \oplus_2 {\mathcal C}_2$ or
${\mathcal C}_1 \,\overline{\oplus}_3\, {\mathcal C}_2$ (as the case may be) to ${\mathcal C}$,
can be determined in polynomial time. These facts have some significant
consequences, one of which is that any code in ${\mathfrak G}$ is ML-decodable
in polynomial time. However, rather than state these results just for
the class of geometrically perfect codes, we will state and prove them
more generally for codes that are ``almost graphic'' in the sense
that they can be composed
from graphic codes and finitely many other codes.
Recall that $\Gamma$ denotes the family of graphic codes.
\begin{definition}
A minor-closed family of codes ${\mathfrak C}$ is defined to be
\emph{almost-graphic} if there exists a finite
sub-family ${\mathfrak D} \subset {\mathfrak C}$ such that any 2-connected code ${\mathcal C} \in {\mathfrak C}$
is either in $\Gamma \cup {\mathfrak D}$, or is of the form $\pi({\mathcal C}_1 \oplus_2 {\mathcal C}_2)$
or $\pi({\mathcal C}_1 \,\overline{\oplus}_3\, {\mathcal C}_2)$,
for some permutation $\pi$ of the coordinate set of ${\mathcal C}$,
and some codes ${\mathcal C}_1$ and ${\mathcal C}_2$ such that
\begin{itemize}
\item[(a)] ${\mathcal C}_1$ and ${\mathcal C}_2$ are equivalent to minors of ${\mathcal C}$, and
\item[(b)] ${\mathcal C}_1 \in \Gamma \cup {\mathfrak D}$ and ${\mathcal C}_2$ is 2-connected.
\end{itemize}
If there exists a constant $l > 0$ such that
for any length-$n$ code ${\mathcal C} \in {\mathfrak C} - (\Gamma \cup {\mathfrak D})$, the
components ${\mathcal C}_1$, ${\mathcal C}_2$ and $\pi$ of the above decomposition can
be determined in time $O(n^l)$, then the family of codes ${\mathfrak C}$ is said to be
\emph{polynomially almost-graphic (PAG)}.
\label{almost_graphic_def}
\end{definition}
Note that the 3-sum is conspicuous by its absence from the above definition.
We will give an explanation of this at the end of this section.
Definition~\ref{almost_graphic_def} clearly implies
that any code in an almost-graphic family
${\mathfrak C}$ has a $\overline{3}$-homogeneous code-decomposition tree. The definition
in fact implies that a 2-connected code in ${\mathfrak C}$ has a decomposition
tree with the property that each leaf is in $\Gamma \cup {\mathfrak D}$, and
for each non-leaf node {\sf v } in the tree,
\textsf{lchild.code} is a leaf (and hence, is in $\Gamma \cup {\mathfrak D}$),
where \textsf{lchild} is the left child of \textsf{v}.
Such a code-decomposition tree will be called \emph{$(\Gamma \cup {\mathfrak D})$-unary}.
If ${\mathfrak C}$ is PAG, a $\overline{3}$-homogeneous, $(\Gamma \cup {\mathfrak D})$-unary decomposition
tree can be constructed for any 2-connected code ${\mathcal C} \in {\mathfrak C}$ in time
polynomial in the length of ${\mathcal C}$.
The family, $\Gamma$, of graphic codes is trivially PAG. From the
discussion prior to the above definition,
the family, ${\mathfrak G}$, of geometrically perfect codes
is also PAG. Other examples of PAG families are the
code families ${\mathfrak C}_{\mathcal F}$ (cf.\ Section~\ref{minor_closed_section})
for ${\mathcal F} = \{{\mathcal H}_7,{\mathcal C}(K_5)^\perp\}$ and
${\mathcal F} = \{{\mathcal H}_7^\perp,{\mathcal C}(K_5)^\perp\}$, and the
family of co-graphic codes without a ${\mathcal C}(K_5)^\perp$ minor
\cite[Corollary~6.6]{GT}.
PAG codes inherit some of the properties of graphic codes.
For example, it is known \cite{NH81},\cite{jungnickel}
that the ML decoding problem over a memoryless binary symmetric channel
can be solved in polynomial time for the family of graphic codes using
Edmonds' matching algorithm \cite{E2},\cite{EJ73}. A much stronger
decoding result can in fact be proved for graphic codes, and more generally
for PAG codes. This is based on the following optimization result
proved in \cite{GT}, an argument for which is sketched
in Appendix~\ref{opt_app}.
\begin{theorem}[\cite{GT}, Theorem~6.5]
Let ${\mathfrak C}$ be a PAG family of codes. There exists a constant $l > 0$
such that given any length-$n$ code ${\mathcal C}$ in ${\mathfrak C}$ and
any $\mathbf{\gamma} \in {\mathbb R}^n$, a codeword
${\mathbf c}_{\min} \in {\mathcal C}$ achieving $\min_{{\mathbf c} \in {\mathcal C}} \langle\mathbf{\gamma},{\mathbf c}\rangle$
(or equivalently, $\min_{{\mathbf x} \in P({\mathcal C})} \langle\mathbf{\gamma},{\mathbf x}\rangle$)
can be determined in $O(n^l)$ time.
\label{PAG_opt_thm}
\end{theorem}
It should be noted that an actual implementation of the polynomial-time
algorithm implicit in Theorem~\ref{PAG_opt_thm} (and outlined in
Appendix~\ref{opt_app}) requires arithmetic over
the real numbers, unless the vector $\gamma$ has only rational coordinates.
So, in practice, finite-precision arithmetic used in any computer
implementation of the algorithm could only approximate the linear
cost function $\langle\mathbf{\gamma},{\mathbf c}\rangle$ for an arbitrary $\gamma \in {\mathbb R}^n$.
As mentioned earlier, ML decoding over a discrete memoryless channel can
be formulated as a linear program \cite{FWK}. Therefore, we have the
following corollary to the above theorem, again with the caveat
that a true implementation of a polynomial-time algorithm for ML decoding
would require real-number arithmetic.
\begin{corollary}
The maximum-likelihood decoding problem over a discrete memoryless channel
can be solved in polynomial time for a PAG family of codes.
In particular, geometrically perfect codes are ML-decodable in polynomial time.
\label{PAG_cor1}
\end{corollary}
Another problem that is known to be NP-hard in general is the problem
of determining the minimum distance of a code \cite{vardy97}. For
graphic codes, this is equivalent to the problem of finding the girth of
a graph, which can be solved in time polynomial in the number of edges
of the graph --- one of the earliest such algorithms published \cite{IR78}
runs in $O(n^{3/2})$ time in the worst case, where $n$ is the number of edges.
As a consequence of Theorem~\ref{PAG_opt_thm}, we also have that
the minimum distance problem for a PAG family of codes can be solved
in polynomial time.
\begin{corollary}
The minimum distance of any code in a PAG family can be determined
in polynomial time.
\label{PAG_cor2}
\end{corollary}
\begin{proof}
Let ${\mathcal C}$ be a code of length $n$ containing at least one nonzero codeword.
For $i = 1,2,\ldots,n$, define $\mathbf{\gamma}^{(i)} =
(1,\ldots,1,-n,1,\ldots,1)$, with the $-n$ appearing
in the $i$th coordinate of $\mathbf{\gamma}^{(i)}$.
Note that $\min_{{\mathbf c} \in {\mathcal C}} \langle\mathbf{\gamma}^{(i)},{\mathbf c}\rangle$ is always
achieved by a codeword in ${\mathcal C}$ with $i$th coordinate equal to 1, if such
a codeword exists. Indeed, the minimum-achieving codeword ${\mathbf c}^{(i)}$
is the codeword of least Hamming weight among codewords in ${\mathcal C}$
that have a 1 in the $i$th coordinate; if there is no codeword in ${\mathcal C}$
with a 1 in the $i$th coordinate, then ${\mathbf c}^{(i)} = {\mathbf 0}$. Note that
if ${\mathbf c}^{(i)} \neq {\mathbf 0}$, then $\langle\mathbf{\gamma}^{(i)},{\mathbf c}^{(i)}\rangle =
w({\mathbf c}^{(i)}) - 1 - n$. Therefore, the minimum distance of ${\mathcal C}$ is given by
$$
d = n+1 + \min_{i \in [n]} \min_{{\mathbf c} \in {\mathcal C}} \langle\mathbf{\gamma}^{(i)},{\mathbf c}\rangle.
$$
For a PAG family of codes ${\mathfrak C}$, given any code ${\mathcal C} \in {\mathfrak C}$,
each of the minimization problems
$\min_{{\mathbf c} \in {\mathcal C}} \langle\mathbf{\gamma}^{(i)},{\mathbf c}\rangle$ can be solved in polynomial
time, and hence the minimum distance of ${\mathcal C}$
can be determined in polynomial time.
\end{proof}
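The reduction used in the proof is easy to render in code. In the
following Python sketch the optimization oracle of
Theorem~\ref{PAG_opt_thm} is replaced, for illustration only, by a
brute-force minimization over an explicit list of codewords.
\begin{verbatim}
import numpy as np

def min_distance(codewords, n, minimize=None):
    # d = n + 1 + min_i min_c <gamma^(i), c>, where gamma^(i) is the
    # all-ones vector with -n in coordinate i. `minimize` stands for
    # the polynomial-time oracle of the theorem; the default is a
    # brute-force stand-in.
    if minimize is None:
        minimize = lambda g: min(float(g @ c) for c in codewords)
    best = min(minimize(np.array([1.0] * i + [-float(n)]
                                 + [1.0] * (n - i - 1)))
               for i in range(n))
    return int(round(n + 1 + best))
\end{verbatim}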
We point out that in all the minimization problems that must be
solved (recursively going down the code-decomposition tree, as explained
in Appendix~\ref{opt_app})
to determine the minimum distance of a length-$n$ code,
the cost vectors have integer coefficients of magnitude at most
$n^2$. So, the minimum distance of a code from a PAG family can be
determined in polynomial time using finite-precision arithmetic.
The downside of PAG (and more generally, almost-graphic)
code families is that they are not very good from
a coding-theoretic perspective. Recall from coding theory that a code
family ${\mathfrak C}$ is called \emph{asymptotically good} if there exists a
sequence of $[n_i,k_i,d_i]$ codes ${\mathcal C}_i \in {\mathfrak C}$, with
$\lim_i n_i = \infty$, such that $\liminf_i k_i/n_i$ and
$\liminf_i d_i/n_i$ are both strictly positive.
\begin{theorem}
An almost-graphic family of codes cannot be asymptotically good.
\label{AG_bad_thm}
\end{theorem}
In particular, the family of geometrically perfect codes is not
asymptotically good. The theorem is proved in Appendix~\ref{AG_bad_app}.
In view of Lemma~\ref{pseudocodeword_lemma}, the above result
has the following very interesting corollary.
\begin{corollary}
Let ${\mathfrak C}$ be a family of binary linear codes with the following
property: for each ${\mathcal C} \in {\mathfrak C}$, there exists a parity-check matrix
$H$ for ${\mathcal C}$, such that the corresponding fundamental polytope $Q(H)$
has no non-integral vertices (pseudocodewords). Then, ${\mathfrak C}$
is not an asymptotically good code family.
\end{corollary}
Loosely speaking, this means that linear-programming decoding, when applied
to a ``good'' code, must suffer on occasion from decoding failure
due to the presence of pseudocodewords, even if \emph{all}
possible parity checks (dual codewords) are used in the constraints
(\ref{Q_eq1})--(\ref{Q_eq2}) of the LP. Given the close
relationship between linear-programming decoding and iterative decoding
using the min-sum algorithm \cite{VK04},
a similar result is likely to hold for iterative decoding as well.
\medskip
We end this section with an explanation of why 3-sums
were left out of Definition~\ref{almost_graphic_def}. The proof of
Theorem~\ref{PAG_opt_thm} given in Appendix~\ref{opt_app} relies crucially
on the fact that a $\overline{3}$-homogeneous, $(\Gamma \cup {\mathfrak D})$-unary
code-decomposition tree can be constructed in polynomial time for
a code from a PAG family. So, the result is actually
true for any code family for which such trees can be constructed in
polynomial time. Now, if 3-sums were allowed in
Definition~\ref{almost_graphic_def}, it is no longer obvious that
codes from the resulting code family would still have
$\overline{3}$-homogeneous, $(\Gamma \cup {\mathfrak D})$-unary code-decomposition trees.
There is good reason to think that this could still be true,
especially in light of Corollary~\ref{3sum_cor},
which states that a code has a 3-sum decomposition only if it has
a $\overline{3}$-sum decomposition. Indeed, that result may lead us to believe
that if a code ${\mathcal C}$ has a $(\Gamma \cup {\mathfrak D})$-unary code-decomposition tree
which contains 3-sums, then replacing the 3-sums in the tree
with $\overline{3}$-sums in the manner prescribed by Corollary~\ref{3sum_cor}
should result in a $\overline{3}$-homogeneous, $(\Gamma \cup {\mathfrak D})$-unary
code-decomposition tree. However, to show that this is the case,
it would have to be verified that if ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$, with
${\mathcal C}_1$, ${\mathcal C}_2$ satisfying (a) and (b) of Definition~\ref{almost_graphic_def},
then (in the notation of Corollary~\ref{3sum_cor}) $\overline{{\mathcal C}_1}$ and
$\overline{{\mathcal C}_2}$ also satisfy (a) and (b). Now, it is not hard to check that
if ${\mathcal C}_1$ is graphic, then so is $\overline{{\mathcal C}_1}$. However, unless
${\mathcal C}$ is 3-connected, there is no guarantee that $\overline{{\mathcal C}_1}$ and
$\overline{{\mathcal C}_2}$ are equivalent to minors of ${\mathcal C}$, even though
${\mathcal C}_1$ and ${\mathcal C}_2$ are given to be equivalent to minors of ${\mathcal C}$.
It is in fact quite likely to be true that if ${\mathcal C} = {\mathcal C}_1 \oplus_3 {\mathcal C}_2$,
with ${\mathcal C}_1$ and ${\mathcal C}_2$ equivalent to minors of ${\mathcal C}$,
then $\overline{{\mathcal C}_1}$ and $\overline{{\mathcal C}_2}$ are also equivalent to minors of ${\mathcal C}$.
So, it is quite possible that if Definition~\ref{almost_graphic_def}
were to include 3-sums as well, then the resulting code families
would still have $\overline{3}$-homogeneous, $(\Gamma \cup {\mathfrak D})$-unary
code-decomposition trees, and hence Theorem~\ref{PAG_opt_thm} would
continue to hold. However, a rigorous proof of this would take us far
outside the main theme of our paper, and Definition~\ref{almost_graphic_def}
as it stands is good enough for our purposes.
\section{Concluding Remarks\label{conclusion}}
A natural question to ask upon studying the decomposition theory
presented in this paper is whether one can define $k$-sums for $k \geq 4$
that have the same attractive properties as 2-, 3- and $\overline{3}$-sums.
Ideally, such a $k$-sum, denoted by $\oplus_k$, would have the
following property for some fixed integer $l \geq k$:
\begin{quote}
a $k$-connected code ${\mathcal C}$ has a $k$-separation $(J,J^c)$
with $\min\{|J|,|J^c|\} \geq l$ iff
${\mathcal C} = \pi({\mathcal C}_1 \oplus_k {\mathcal C}_2)$ for some permutation $\pi$ of
the coordinates of ${\mathcal C}$, and codes ${\mathcal C}_1$ and ${\mathcal C}_2$ equivalent to
minors of ${\mathcal C}$.
\end{quote}
It is indeed possible to define $k$-sums in such a way that we have
${\mathcal C} = \pi({\mathcal C}_1 \oplus_k {\mathcal C}_2)$ iff ${\mathcal C}$ has a $k$-separation $(J,J^c)$
with $\min\{|J|,|J^c|\} \geq l$. The tricky part is ensuring that
the component codes ${\mathcal C}_1$ and ${\mathcal C}_2$ are retained as minors of ${\mathcal C}$,
and this appears to be difficult in general.
However, we have some preliminary results that indicate
that even without the last property, such $k$-sums can be
used as the building blocks of a decomposition theory that ties in
beautifully with Forney's theory of cycle-free realizations
of linear codes \cite{For03}. This theory would make
further deep connections with matroid theory,
particularly with the notions of matroid branchwidth
and treewidth \cite{GGW02}, \cite{hlineny}.
An exposition of this theory will be given in a future paper.
While the decomposition theory in this paper has been presented
mainly in the context of binary linear codes,
it is possible to extend some of it
to linear codes over arbitrary finite fields as well.
The definitions of minors and $k$-connectedness can be
obviously extended to nonbinary codes.
Also, a 2-sum operation can be defined for codes over an
arbitrary finite field ${\mathbb F}$, and the entire theory
outlined in Section~\ref{2sum_section} does carry over.
However, there is no known 3-sum operation defined for
nonbinary codes that has a property analogous to that
stated in Theorem~\ref{3sum_thm}. Again, it seems that
if we are prepared to give up the requirement that the
components of a $k$-sum be retained as minors of the composite code,
then it is possible to develop a powerful decomposition theory for
nonbinary codes just like that for binary codes.
We end with a pointer to a very interesting direction of current
research in matroid theory. This involves the resolution
of two conjectures whose statements (in the context of codes)
we give below. Recall from Section~\ref{minor_closed_section}
that the notation ${\mathfrak C}_{\mathcal F}$, for some fixed collection ${\mathcal F}$ of codes,
refers to the set of all binary linear codes ${\mathcal C}$ such that no
minor of ${\mathcal C}$ is equivalent to any ${\mathcal C}' \in {\mathcal F}$. We extend that notation
to codes over an arbitrary finite field ${\mathbb F}$ as well.
\begin{conjecture}[\cite{GGW_survey}, Conjecture~1.2]
If ${\mathfrak C}$ is a minor-closed class of codes over a finite field ${\mathbb F}$,
then ${\mathfrak C} = {\mathfrak C}_{\mathcal F}$ for some finite collection of codes ${\mathcal F}$.
\end{conjecture}
Informally, the above conjecture states that any minor-closed class
of codes is characterized by a finite list of excluded minors.
\begin{conjecture}[\cite{GGW_survey}, Conjecture~1.3]
Let ${\mathcal M}$ be a fixed code. Given a length-$n$ code ${\mathcal C}$, it is decidable
in time polynomial in $n$ whether or not ${\mathcal C}$ contains ${\mathcal M}$ as a minor.
\end{conjecture}
The two conjectures together imply that the membership of a code in a
minor-closed class can always be decided in polynomial time. To put it
another way, if a property of codes is preserved under the action
of taking minors, then it should be decidable in polynomial time
whether or not a given code has that property. It should be pointed
out that both conjectures have been shown to be true in the context
of graphic codes, as part of the celebrated Graph Minor Project
of Robertson and Seymour \cite{RS13},\cite{RS20}.
The Graph Minor Project has had a profound impact on modern graph theory
\cite{lovasz}, and its extension to ${\mathbb F}$-representable matroids
(equivalently, codes over ${\mathbb F}$) is bound to have a similar influence
on matroid theory and, as a consequence, on coding theory.
\section{Introduction}
\medskip
In recent years we have witnessed significant theoretical and some
encouraging experimental results in the area of quantum
computing. In $1994$, Peter Shor found a polynomial time algorithm
for the factorization of $n$-bit numbers on quantum computers
\cite{Shor94}. His discovery generated a wave of enthusiasm for
quantum computing, for two major reasons: the intrinsic
intellectual beauty of the algorithm and the fact that efficient
integer factorization is a very important practical problem. The
security of widely used cryptographic protocols is based upon the
conjectured difficulty of the factorization of large integers.
Shor's algorithm reduces the factorization problem to the problem
of finding the period of a function, and uses quantum parallelism
to compute a superposition of all values of the function in one step.
Then the algorithm calculates the Quantum Fourier Transform of the
function, which concentrates the amplitudes at multiples of the
fundamental frequency, the reciprocal of the period. To factor an
integer, Shor's algorithm measures the period of the
function\footnote{A powerful version of the technique used by Shor
is the {\it phase-estimation algorithm} of Kitaev
\cite{Kitaev95}}.
In $1996$, Grover described a quantum algorithm for searching an
unsorted database containing $N$ items in a time of order
$\sqrt{N} $ while on a classical computer the search requires a
time of order $N$ \cite{Grover96}. The critical aspect of a
quantum algorithm is to create a superposition of all possible
states and amplify the ``solution''. The speedup of Grover's
algorithm is achieved by exploiting both quantum parallelism and
the fact that in quantum theory a probability is the square of an
amplitude. Bennett and his co-workers \cite{bennett97} and Zalka
\cite{Zalka99} showed that Grover's algorithm is optimal. No
classical or quantum algorithm can solve this problem faster than
time of order $\sqrt{N} $.
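For reference, the optimal number of Grover iterations needed to
amplify a single marked item among $N$ is about $(\pi/4)\sqrt{N}$,
a standard fact we record in a one-line Python sketch.
\begin{verbatim}
import math

def grover_iterations(N):
    # Iterations to amplify one marked item out of N.
    return max(1, round(math.pi / 4 * math.sqrt(N)))
\end{verbatim}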
Grover's search can be applied to a large number of unstructured
problems and leads to a square-root speedup over the corresponding
classical algorithms. For well-structured problems, classical
algorithms exist that produce an approximate solution and perform
faster.
Recently, Furrow discussed applications based upon Grover-type
algorithms \cite{Furrow06}. He considered three classes of
applications: a) Graph algorithms, e.g. Breadth-First Search
(BFS), Depth-First Search (DFS), bipartite matching. b)
Computational Geometry algorithms, e.g. Maximum points on a line.
c) Dynamic programming algorithms, such as coin changer. The
author reports the quantum versus classical complexity for BFS and
DFS, $\mathcal{O}(\sqrt{VE \lg V})$, versus $\mathcal{O}(E)$, with
$V$ the number of vertices and $E$ the number of edges of the
graph; for bipartite matching, $\mathcal{O}(V \sqrt{(E+V) \lg V
})$, versus $\mathcal{O}((E+V) \sqrt{V}))$; for maximum points
that lie on a line in $\mathbb{R}^{2}$, out of N points,
$\mathcal{O}(N^{3/2}\lg N)$ versus $\mathcal{O}(N^{2}\lg N)$, and
so on.
Most of the problems discussed in \cite{Furrow06} are
intrinsically search problems and the idea of applying Grover's
search comes naturally to mind. There is an even larger class of
problems which, at first sight, do not seem directly related
to Grover's search. Applications such as scheduling and resource
allocation are not naturally search problems; nevertheless they
share some common properties, can be reformulated to take
advantage of quantum parallelism and entanglement, and lead to
algorithms which show polynomial speedups over their classical
counterparts.
A scheduling problem is characterized by a tuple $(\alpha \mid
\beta \mid \gamma)$ where $\alpha$ denotes the machine
environment, $\beta$ summarizes the set of constraints, and
$\gamma$ denotes the optimality criterion. The makespan of a
schedule, $C_{max}$ is the maximum completion time of any job in
the schedule. For example, $P||C_{max}$ and $R||C_{max}$ require
the shortest makespan and apply to an identical machine environment
and a non-homogeneous one, respectively.
When we turn our attention to problems where a deadline is imposed,
or where we wish to find a schedule within a given range of possible
average completion times, we discover that a full range of
scheduling problems have quantum counterparts which can take
advantage of Grover's search.
We illustrate these ideas with an example: given a deadline, or a
range of deadlines, the algorithm presented in this paper allows
us to determine if a solution to an $R||C_{max}$ problem with $N$
jobs and $M$ machines exists, and if so, it provides the schedule.
The time complexity of the quantum scheduling algorithm is
$\mathcal{O}(\sqrt{M^N})$ while the complexity of its classical
counterpart is $\mathcal{O}(M^N)$.
Real-time systems are subject to deadlines, and the Quality of
Service (QoS) constraints imposed on many systems require a given
range of average completion times. Thus, the classes of scheduling
algorithms we discuss in this paper are of significant practical
importance. Such algorithms have quantum counterparts that enjoy
a square root speedup.
\medskip
\section{Scheduling Algorithms}
\label{SchedulingAlgorithms}
\medskip
Scheduling is the problem of assigning tasks to a set of resources
subject to a set of constraints, over time, in an ``optimal''
manner.
We are given a set of $N=2^{n}$ jobs, $\mathcal{J} =\{
\mathcal{J}_{1}, \mathcal{J}_{2}, \ldots \mathcal{J}_{N} \} $ and
a set of $M=2^{m}$ machines, $\mathcal{M} =\{ \mathcal{M}_{1},
\mathcal{M}_{2}, \ldots \mathcal{M}_{M} \} $. A {\it schedule}
$\mathcal{S}$ for the sets $(\mathcal{J}, \mathcal{M})$ specifies
which machine processes each job; machine $\mathcal{M}_{j}$ uses
$T_{ij}$ units of time to process job $\mathcal{J}_{i}$. We call
$C_{i}^{\mathcal{S}}$ the
{\it completion time} of job $\mathcal{J}_{i}$ under the schedule
$\mathcal{S}$. The {\it makespan} of the schedule $\mathcal{S}$ is
the maximum completion time of any job in schedule $\mathcal{S}$:
$$
C_{max}^{\mathcal{S}} = \max_{i} C_{i}^{\mathcal{S}}.
$$
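For concreteness, the following classical sketch (ours, not from
the scheduling literature) computes the makespan of a busy
schedule in which each machine processes its assigned jobs back to
back starting at time $0$; under that assumption
$C_{max}^{\mathcal{S}}$ is simply the largest total machine load.
The names \texttt{makespan}, \texttt{assignment} and \texttt{T}
are ours.
\begin{verbatim}
def makespan(assignment, T):
    """C_max of a schedule: assignment[i] is the (0-based) machine
    chosen for job i, and T[i][j] is the processing time of job i
    on machine j."""
    loads = [0] * len(T[0])
    for job, machine in enumerate(assignment):
        loads[machine] += T[job][machine]
    # Each machine runs its jobs back to back from time 0, so the
    # latest completion time equals the largest total load.
    return max(loads)
\end{verbatim}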
We are often interested in scheduling problems involving {\it
multiple} machines. We distinguish three cases in the {\it
parallel machine environment}:
\begin{enumerate}
\item
{\it Identical parallel environment.} All the machines are
identical and job $\mathcal{J}_{i}$ requires $T_{i}$ units of
time on any machine,
\item {\it Uniformly related parallel environment.} Each machine
$\mathcal{M}_{j}$ has a speed $s_{j}> 0$, and job
$\mathcal{J}_{i}$, if processed entirely on machine
$\mathcal{M}_{j}$, would take $T_{i}/s_{j}$ units of time, and
\item
{\it Unrelated parallel machine environment.} Different machines
have different capabilities and the speed of machine
$\mathcal{M}_{j}$ on job $\mathcal{J}_{i}$ is $s_{ij}$. Then the
processing time of job $\mathcal{J}_{i}$ on machine
$\mathcal{M}_{j}$ is $T_{ij} = T_{i}/s_{ij}$, $0 \le T_{ij} \le Q$
and $Q=2^{q}$.
\end{enumerate}
\noindent These machine environments are denoted by $P$, $Q$, and
$R$, respectively.
\medskip
Examples of scheduling constraints include deadlines (e.g., job
$i$ must be completed by time $t$), resource capacities (e.g.,
there are only $5$ machines), precedence constraints on the order
of tasks (e.g., one must be done before another), and priorities
on tasks (e.g., finish job $j$ as soon as possible while meeting
the other deadlines). A {\it priority rule} assigns to job
$\mathcal{J}_{i}$ a priority $\pi_{i}$. A {\it busy schedule}
describes the situation in which, whenever a machine becomes
available, it starts processing the job with the highest priority.
\medskip
Each scheduling problem could have problem specific optimization
criteria. The one which requires the minimization of the makespan
is referred to in scheduling theory as $C_{max}$. Other
optimization criteria can be considered. For example, we may wish
to optimize the average completion time of all jobs:
$$
{1 \over N} \sum_{i=1}^{N} C_{i}^{\mathcal{S}}
$$
an optimization criterion denoted as $\sum_{i=1}^{N}
C_{i}^{\mathcal{S}}$.
Many scheduling problems are $\mathcal{N}\mathcal{P}$-hard on
classical computers \cite{Brucker06} and have proved to be very
difficult even for relatively small instances. For example, a
10-job-10-machine job-shop scheduling problem posed in $1963$
remained unsolved until $1989$ \cite{Carlier89}. Most classical
scheduling algorithms gain speedup only in special cases with
relatively small instances, or they try to find good schedules
instead of the optimal one. Furrow \cite{Furrow06} also gave a
special case of the $P||C_{max}$ problem that can be solved by
dynamic programming; it requires that the number of distinct job
processing times be bounded by a constant. It turns out that
there are polynomial-time approximation algorithms for some
$P||C_{max}$ problems.
We consider an $R||C_{max}$ scheduling problem in which all
machines are unrelated. All jobs are available at the beginning
time and there are no precedence constraints. As no preemption is
allowed, once job $\mathcal{J}_{i}$ started processing on machine
$\mathcal{M}_{j}$ it must complete its execution before another
job, $\mathcal{J}_{k}$ can be processed on $\mathcal{M}_{j}$.
Given a set of $N=2^n$ jobs and $M=2^m$ machines we can construct
$2^{m2^n}$ different schedules. This $R||C_{max}$ scheduling
problem is $\mathcal{N}\mathcal{P}$-hard and is difficult for
classical algorithms even for small instances. Up to now, most
classical algorithms use linear programming based rounding
techniques (LP-rounding) to search for an approximate schedule
instead of the optimal one \cite{Grigoriev05,Schulz02}. If the
number of machines $M$ is part of the input, the best
approximation algorithm to date is a 2-approximation by Lenstra,
Shmoys and Tardos \cite{Lenstra90}, which can find a schedule with
$C_{max}<2\,C^{opt}_{max}$. Moreover, the problem cannot be
approximated within a factor strictly smaller than $3/2$, unless
P=NP \cite{Lenstra90}. No classical algorithm addresses the
general case of searching for the optimal schedule except the
exhaustive search, which has time complexity $\mathcal{O}(M^N)$ on
a classical computer. This paper suggests a reformulation of such
problems to take advantage of Grover-type search and gain a square
root speedup in searching for the optimal schedules over their
classical counterparts.
\medskip
\section{Information Encoding}
\label{InformationEncoding}
\medskip
Consider a set of $N=2^n$ jobs running on $M=2^m$ unrelated
machines. We assume that the jobs could have different processing
times on different machines and that the processing times are
integers in the range $0 \le T_{ij} < Q=2^q, ~~~ 1 \le i \le
2^{n}, ~~~1 \le j \le 2^{m}$. We also assume that we have a
quantum system with $r=m+q$ qubits for each job.
Given job $\mathcal{J}_{i}$ running on machine $\mathcal{M}_{j}$,
we encode the {\it job-machine} information as a vector $\mid
e_i^j \rangle$ obtained as the tensor product of the machine
index, $j$, and the processing time, $T_{ij}$:
$$
\mid e_i^j \rangle = \mid j \rangle \otimes \mid T_{ij} \rangle.
$$
Then we define {\it job state vectors} for job $i$, $\mid J_{i}
\rangle$ as any superposition of its job-machine vectors. First,
we consider an equal superposition of job-machine vectors as:
$$
\mid J_{i} \rangle = { 1 \over 2^{m/2} } \sum_{j=1}^{2^m}
\mid e_{i}^{j} \rangle.
$$
A {\it schedule} is a tensor product of job-machine vectors:
$$
\mid \mathcal{S}_{k}\rangle = \otimes_{i=1}^{2^n} \mid e_{i}^{j_i}
\rangle,~~~ 1 \le j_i \le 2^m,
$$
which includes one job-machine vector for each job. A specific
machine may be present in multiple job-machine vectors, when the
schedule requires multiple jobs to be executed on the same
machine, or may not appear in any job-machine vector of the
schedule $\mid \mathcal{S}_{k}\rangle$ if none of the jobs is
scheduled on that machine.
The equal superposition of all schedules is:
$$
\mid \mathcal{S} \rangle = {1 \over \sqrt{\sigma}}
\sum_{k=1}^{\sigma} \mid \mathcal{S}_{k} \rangle ,~~~\sigma= 2^{m
2^{n}}
$$
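As a quick sanity check of this count (a sketch of ours), the
number of schedules computed as $M^N$ agrees with $2^{m2^{n}}$:
\begin{verbatim}
m, n = 2, 3                      # M = 4 machines, N = 8 jobs
sigma = (2 ** m) ** (2 ** n)     # M**N distinct schedules
print(sigma, 2 ** (m * 2 ** n))  # 65536 65536
\end{verbatim}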
Let $\Omega$ be an operator which given a schedule constructs the
running time on each machine for that schedule. When applied to
the equal superposition of all schedules, it produces a
superposition of the running time $\mid \mathcal{T}\rangle$ on
each machine for all schedules:
$$
\mid \overbrace{\mathcal{S}\mathcal{T}} \rangle = \Omega (\mid
\mathcal{S} \rangle\mid 0\rangle),
$$
where $\mid \overbrace{\mathcal{S}\mathcal{T}}\rangle$ denotes
the entangled state of $\mathcal{S}$ and $\mathcal{T}$, while the
tensor product of $\mathcal{S}$ and $\mathcal{T}$ is $\mid
\mathcal{S} \mathcal{T}\rangle$.
Let $\Delta$ be an operator which computes a superposition of the
makespan of all schedules:
$$
\mid \overbrace{\mathcal{S}\mathcal{T}C_{max}} \rangle = \Delta
(\mid \overbrace{\mathcal{S}\mathcal{T}}\rangle \mid 0\rangle)=
\Delta ~ \Omega (\mid \mathcal{S}\rangle\mid 0\rangle\mid
0\rangle) .
$$
Figure \ref{makespan} outlines our procedure to produce an equal
superposition of the makespans of all schedules. We now turn our
attention to the quantum circuits to carry out the transformations
discussed in this section and to the question how to obtain from
the superposition of all makespans the optimal one.
\begin{figure}[h]
\begin{center}
\includegraphics[width=8cm]{figures/makespan.eps}
\end{center}
\caption{Outline of our algorithm to prepare the makespan vector.
First we prepare job vectors in an equal superposition of all
possible schedules $\mid \mathcal{S}^{(N(m+q))} \rangle$. Using a
quantum circuit $\Omega$ with inputs $\mid \mathcal{S}^{(N(m+q))}
\rangle$ and a register of $M(n+q)$ qubits in state $\mid 0
\rangle$, we construct the superposition of the running time on
each machine for every schedule. Then we construct the makespan of
each schedule using the operation $\Delta$. Superscripts indicate
the number of qubits for vectors. Note that the number of jobs is
$N=2^{n}$, the number of machines is $M=2^{m}$, and the maximum
execution time of a job on any machine is $Q=2^{q}$. }
\label{makespan}
\end{figure}
\medskip
First, we need to prepare the job vector $ \mid J_{i} \rangle $ in
an equal superposition state which includes the processing times
of job $\mathcal{J}_{i}$ on all machines, as shown in Figure
\ref{pre}. We use $m$ index qubits to control the job-machine
information encoding. As each index qubit is prepared in state $
{1 /\sqrt{2}}( \mid 0\rangle+\mid 1\rangle)$, the target qubits
will be prepared in superpositions of all possible job-machine
states, $e_{i}^{j},~~~ 1 \le i \le 2^{n}, ~ 1 \le j \le 2^{m}$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.5cm]{figures/pre.eps}
\end{center}
\caption{A quantum circuit to prepare the job state vectors with
$m=4$. In this case $\mid x \rangle$ is a set of two control
qubits. Each control qubit is set to $ {1 / \sqrt{2}}(\mid 0
\rangle+ \mid 1 \rangle)$. The target qubits will be prepared in a
superposition of all possible job-machine states.} \label{pre}
\end{figure}
\medskip
\noindent {\bf Example:} Table \ref{executionTime} summarizes the
processing time of $8$ jobs on $4$ machines, where $0 < T_{ij} <
2^{4}=16$. Thus $n=3$, $m=2$, and $q=4$. The running time of
$\mathcal{J}_{1}$ on machines $\mathcal{M}_{1}, \mathcal{M}_{2},
\mathcal{M}_{3}$ and $ \mathcal{M}_{4}$ are respectively, $1, 3,
7,$ and $15$ units of time.
\begin{table}[h]
\caption{The processing time of $8$ jobs on $4$ machines}
\begin{center}
\begin{tabular} {c||cccc||c||cccc}
\hline
Job/Machine & $\mathcal{M}_1$ & $\mathcal{M}_2$ & $\mathcal{M}_3$ & $\mathcal{M}_4$ &Job/Machine & $\mathcal{M}_1$ & $\mathcal{M}_2$ & $\mathcal{M}_3$ & $\mathcal{M}_4$\\
\hline \hline
$\mathcal{J}_1$ & 1 & 3 & 7 & 15 & $\mathcal{J}_5$ & 15 & 12 & 3 & 10 \\
$\mathcal{J}_2$ & 2 & 1 & 9 & 3 & $\mathcal{J}_6$ & 10 & 7 & 8 & 14 \\
$\mathcal{J}_3$ & 6 & 2 & 5 & 8 & $\mathcal{J}_7$ & 5 & 2 & 3 & 9 \\
$\mathcal{J}_4$ & 11 & 13 & 7 & 4 & $\mathcal{J}_8$ & 1 & 10 &11 &
13\\
\hline
\end{tabular}
\end{center}
\label{executionTime}
\end{table}
The four vectors used to encode the processing time of job
$\mathcal{J}_{1}$ on machines $\mathcal{M}_{1}, \mathcal{M}_{2},
\mathcal{M}_{3}$ and $ \mathcal{M}_{4}$ are respectively:
$$
\mid e_1^1 \rangle = \mid 000001 \rangle, ~~~~\mid e_1^2 \rangle = \mid 010011
\rangle, ~~~~
\mid e_1^3 \rangle = \mid 100111 \rangle, ~~~~\text{and}~~~~\mid e_1^4 \rangle =\mid 111111 \rangle.
$$
For $\mathcal{J}_{2}$ the basis vectors are $\mid 000010 \rangle$,
$\mid 010001 \rangle$, $\mid 101001 \rangle$, and
$\mid 110011 \rangle$, and so on for the remaining jobs.
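These bit strings can be reproduced with a short classical sketch
of the labeling convention (the function name \texttt{encode} is
ours): the first $m$ bits store the machine index and the last $q$
bits store the processing time.
\begin{verbatim}
def encode(j, t, m=2, q=4):
    """Label of |e_i^j> = |j>|T_ij>; machine indices are 1-based
    in the text, so machine j is stored as the m-bit value j - 1."""
    return format(j - 1, '0%db' % m) + format(t, '0%db' % q)

print(encode(1, 1))    # 000001 -> |e_1^1>
print(encode(2, 3))    # 010011 -> |e_1^2>
print(encode(4, 15))   # 111111 -> |e_1^4>
\end{verbatim}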
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.5cm]{figures/pre1.eps}
\end{center}
\caption{A quantum circuit to prepare $\mid J_{1} \rangle$. The
execution times of $J_{1}$ are 1, 3, 7, and 15 units of time
respectively. $\mid x \rangle$ are control qubits which control
the selection of the four job-machine states, $e_{1}^{j},~~ 1 \le
j \le 4$. The first two qubits of $ \mid J_1\rangle$ record the
machine index, and the remaining four qubits record the time if
$J_1$ is assigned to the corresponding machine.}
\label{pre1}
\end{figure}
Figure \ref{pre1} shows the circuit used to prepare the job vector
$\mid J_{1} \rangle$ for our example. The job state $\mid J_{1}
\rangle$ is prepared in an equal superposition of basis states:
$$
\mid J_{1} \rangle= {{1} \over {2}} ( \mid 000001 \rangle + \mid
010011 \rangle + \mid 100111 \rangle + \mid 111111 \rangle).
$$
We can prepare other job vectors in the same way, for example:
$$
\mid J_{2} \rangle= {{1} \over {2}} ( \mid 000010 \rangle + \mid
010001 \rangle + \mid 101001 \rangle + \mid 110011 \rangle).
$$
\medskip
A schedule vector $\mid \mathcal{S} \rangle$ is the tensor product
of all job vectors. As each job vector is prepared as an equal
superposition, the schedule vector is in the equal superposition
of all possible schedules.
We now provide two examples of schedule vectors:
\medskip
\noindent {\bf (i)} Schedule $\mathcal{S}_1$:
$$
[
\mathcal{J}_{1} \mapsto \mathcal{M}_{1},~~
\mathcal{J}_{2} \mapsto \mathcal{M}_{2},~~
\mathcal{J}_{3} \mapsto \mathcal{M}_{1},~~
\mathcal{J}_{4} \mapsto \mathcal{M}_{4},~~
\mathcal{J}_{5} \mapsto \mathcal{M}_{3},~~
\mathcal{J}_{6} \mapsto \mathcal{M}_{2},~~
\mathcal{J}_{7} \mapsto \mathcal{M}_{3},~~
\mathcal{J}_{8} \mapsto \mathcal{M}_{1}]
$$
Schedule $\mathcal{S}_{1}$ corresponds to the following job state
vectors:
\medskip
$\mid J_1 \rangle= \mid \underline{00}0001 \rangle$ \hspace{0.3in}
$\mid J_2 \rangle= \mid \underline{01}0001 \rangle$\hspace{0.3in}
$\mid J_3 \rangle= \mid \underline{00}0110 \rangle$\hspace{0.3in}
$\mid J_4 \rangle= \mid \underline{11}0100 \rangle$
$\mid J_5 \rangle= \mid \underline{10}0011 \rangle$ \hspace{0.3in}
$\mid J_6 \rangle= \mid \underline{01}0111 \rangle$\hspace{0.3in}
$\mid J_7 \rangle= \mid \underline{10}0011 \rangle$\hspace{0.3in}
$\mid J_8 \rangle= \mid \underline{00}0001 \rangle$\medskip
\medskip
\noindent The schedule vector for schedule $\mathcal{S}_{1}$ is:
$\mid \mathcal{S}_{1}\rangle = \mid \underline{00}0001
\rangle\otimes \mid \underline{01}0001 \rangle\otimes \mid
\underline{00}0110 \rangle\otimes \mid \underline{11}0100 \rangle$
\hspace{1.0in}$\otimes \mid \underline{10}0011 \rangle\otimes \mid
\underline{01}0111 \rangle\otimes \mid \underline{10}0011
\rangle\otimes \mid \underline{00}0001 \rangle$
\noindent The completion times on all machines are:\medskip
$
C^{\mathcal{S}_{1}}(\mathcal{M}_{1}) =1+6+1=8,
~~~C^{\mathcal{S}_{1}}(\mathcal{M}_{2}) =1+7=8,
~~~C^{\mathcal{S}_{1}}(\mathcal{M}_{3}) =3+3=6,
~~~C^{\mathcal{S}_{1}}(\mathcal{M}_{4}) =4.
$\medskip
The makespan of schedule $\mathcal{S}_{1}$ is equal to the largest
completion time over all machines:
$$
C_{max}^{\mathcal{S}_{1}} =8.
$$
\medskip
\noindent {\bf (ii)} Schedule $\mathcal{S}_{2}$:
$$
[
\mathcal{J}_{1} \mapsto \mathcal{M}_{2},~~
\mathcal{J}_{2} \mapsto \mathcal{M}_{1},~~
\mathcal{J}_{3} \mapsto \mathcal{M}_{3},~~
\mathcal{J}_{4} \mapsto \mathcal{M}_{4},~~
\mathcal{J}_{5} \mapsto \mathcal{M}_{3},~~
\mathcal{J}_{6} \mapsto \mathcal{M}_{2},~~
\mathcal{J}_{7} \mapsto \mathcal{M}_{1},~~
\mathcal{J}_{8} \mapsto \mathcal{M}_{2}]
$$
\medskip
\noindent The schedule vector is:\medskip
$\mid \mathcal{S}_{2}\rangle =
\mid \underline{01}0011 \rangle\otimes
\mid \underline{00}0010 \rangle\otimes
\mid \underline{10}0101 \rangle\otimes
\mid \underline{11}0100 \rangle$
\hspace{1.0in}
$\otimes \mid \underline{10}0011 \rangle
\otimes \mid \underline{01}0111 \rangle
\otimes \mid \underline{00}0101 \rangle
\otimes \mid \underline{01}1010 \rangle$
\bigskip
The schedule vector can also be in a superposition of some basic
states, for example, the schedule vector could be in an equal
superposition of the two schedules, $\mathcal{S}_1$ and
$\mathcal{S}_2$:\medskip
$\mid \mathcal{S}\rangle= 1/\sqrt{2}(\mid
\mathcal{S}_1\rangle+\mid \mathcal{S}_2 \rangle)$
\hspace{0.3in}$=1/\sqrt{2}
( \mid \underline{00}0001 \rangle \otimes
\mid \underline{01}0001 \rangle \otimes
\mid \underline{00}0110 \rangle \otimes
\mid \underline{11}0100 \rangle$
\hspace{1.0in}$\otimes
\mid \underline{10}0011 \rangle \otimes
\mid \underline{01}0111 \rangle \otimes
\mid \underline{10}0011 \rangle \otimes
\mid \underline{00}0001 \rangle$
\hspace{0.3in}$+
\mid \underline{01}0011 \rangle \otimes
\mid \underline{00}0010 \rangle \otimes
\mid \underline{10}0101 \rangle \otimes
\mid \underline{11}0100 \rangle$
\hspace{1.0in}$\otimes
\mid \underline{10}0011 \rangle \otimes
\mid \underline{01}0111 \rangle \otimes
\mid \underline{00}0101 \rangle \otimes
\mid \underline{01}1010 \rangle).$
\medskip
\section{The Running Time of Machine $\mathcal{M}_{j}$ under Schedule $\mathcal{S}_{k}$}
\label{RunningTime}
\medskip
Computing the total running time of machine $\mathcal{M}_{j}$
under schedule $\mathcal{S}_{k}$ requires summation of the running
times of all jobs assigned to $\mathcal{M}_{j}$. A quantum adder
similar to a classic adder, e.g. $\mid 5\rangle \hat{+}\mid
6\rangle = \mid 11\rangle$, has:
\begin{itemize}
\item
$N=2^{n}$ inputs $a_1, a_2, \cdots, a_N$, each of which is a
register of $q$ qubits,
\item
one carry-in $c$, and
\item
one output $S=\sum_{i=1}^{N} a_i+c$, a register of $q+n$ qubits.
\end{itemize}
\medskip
To obtain the total running time of machine $\mathcal{M}_j$ under
schedule $\mathcal{S}_{k}$ we add the execution times, (in our
examples those qubits of a job-machine vector which are not
underlined) for all jobs assigned to $\mathcal{M}_j$ and create a
running time vector for machine $\mathcal{M}_j$, $\mid T_j
\rangle$. The {\it running time vector for schedule}
$\mathcal{S}_{k}$ is the tensor product of the execution time on
individual machines under schedule $\mathcal{S}_{k}$:
$$
\mid \mathcal{T}^{\mathcal{S}_{k}} \rangle=
\mid T_1^{\mathcal{S}_{k}} \rangle \otimes
\mid T_2^{\mathcal{S}_{k}} \rangle \cdots\otimes
\mid T_M^{\mathcal{S}_{k}} \rangle.
$$
For example, for machine $\mathcal{M}_1$ under schedule
$\mathcal{S}_1$ (see Section \ref{InformationEncoding}):
$$
\mid T_1^{\mathcal{S}_1}\rangle:~~~~~
\mid 0001\rangle \hat{+} \mid 0000\rangle \hat{+}\mid 0110\rangle \hat{+}
\mid 0000\rangle \hat{+} \mid 0000\rangle \hat{+} \mid 0000\rangle \hat{+} \mid 0000\rangle
\hat{+} \mid 0001\rangle,
$$
or
$$
\mid T_1^{\mathcal{S}_1}\rangle:~~~~~ \mid 0001000\rangle.
$$
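These per-machine sums are easy to check classically; the sketch
below (ours) reproduces $\mid T_1^{\mathcal{S}_1}\rangle$ and the
other machine loads for schedule $\mathcal{S}_1$ from Table
\ref{executionTime}.
\begin{verbatim}
T = [[1, 3, 7, 15], [2, 1, 9, 3], [6, 2, 5, 8], [11, 13, 7, 4],
     [15, 12, 3, 10], [10, 7, 8, 14], [5, 2, 3, 9], [1, 10, 11, 13]]
S1 = [0, 1, 0, 3, 2, 1, 2, 0]   # J1->M1, J2->M2, ... (0-based)
loads = [0, 0, 0, 0]
for job, machine in enumerate(S1):
    loads[machine] += T[job][machine]
print(loads)       # [8, 8, 6, 4]; T_1 = 8, i.e. |0001000> on 7 qubits
print(max(loads))  # 8, the makespan of S_1
\end{verbatim}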
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm]{figures/Summation1.eps}
\end{center}
\caption{A circuit to compute the sum of the execution time of
jobs assigned to each machine. $J_1, J_2 \ldots J_N$ are the job
vectors prepared according to the scheme discussed in Section
\ref{InformationEncoding}, and $T_1, T_2, \ldots T_M$ represent
the execution time on machines
$\mathcal{M}_1,\mathcal{M}_2,...\mathcal{M}_{M}$ respectively.}
\label{Summation1}
\end{figure}
We want to construct a quantum circuit to sum the execution time
of jobs assigned to each machine $\mathcal{M}_{j}$. We use the
index qubits as control qubits to control the summation of each
index, or each machine; the index qubits are entangled with the
target qubits which give the running time on that machine. As the
input qubits $\mid \mathcal{S}\rangle$ are prepared in the equal
superposition of all possible schedules, this operation will
prepare the running time of machines under all possible schedules.
In Figure \ref{Summation1}, $T_1, T_2, \ldots T_M$ represent the
execution time on machines
$\mathcal{M}_1,\mathcal{M}_2,...\mathcal{M}_{M}$ respectively.
When the index qubits are not ``active'', the respective input for
the summation circuit will be zero. For example, since the index
of $e_2^2=\mid \underline{01}0001 \rangle$ is $\underline{01}$
rather than $\underline{00}$, the index qubits of $e_2^2$ are not
active for $\mid T_1 \rangle$, and $e_2^2$ contributes zero to the
sum for machine $\mathcal{M}_1$.\medskip
How to implement arithmetic on a quantum computer has been addressed
by many authors. Many detailed quantum circuits and discussions
can be found in \cite{Gossett98,Draper04,Meter05}. For the example
discussed in Section \ref{InformationEncoding}, after the
summation operation, the system will be in the state:
\medskip
$$
\mid \overbrace{\mathcal{S}\mathcal{T}}\rangle =
1/\sqrt{2}(\mid \mathcal{S}_1 \mathcal{T}^1 \rangle +
\mid \mathcal{S}_2 \mathcal{T}^2 \rangle)
$$
or
\hspace{0.3in}$\mid \overbrace{\mathcal{S}\mathcal{T}}\rangle =1/\sqrt{2}( \mid \underline{00}0001
\rangle\otimes \mid \underline{01}0001 \rangle\otimes \mid
\underline{00}0110 \rangle\otimes \mid \underline{11}0100
\rangle\otimes \mid \underline{10}0011 \rangle\otimes \mid
\underline{01}0111 \rangle$
\hspace{1.0in}$~\otimes \mid \underline{10}0011 \rangle\otimes
\mid \underline{00}0001 \rangle\otimes\mid
0001000\rangle\otimes\mid 0001000\rangle\otimes\mid
0000110\rangle\otimes\mid 0000100\rangle $
\hspace{1.0in}$~+ \mid \underline{01}0011 \rangle\otimes \mid
\underline{00}0010 \rangle\otimes \mid \underline{10}0101
\rangle\otimes \mid \underline{11}0100 \rangle\otimes \mid
\underline{10}0011 \rangle\otimes \mid \underline{01}0111 \rangle$
\hspace{1.0in}$~\otimes \mid \underline{00}0101 \rangle\otimes
\mid \underline{01}1010 \rangle\otimes\mid
0000111\rangle\otimes\mid 0010100\rangle\otimes\mid
0001000\rangle\otimes\mid 0000100\rangle)$
\medskip
\section{Determination of the Makespan}
\label{Makespan}
\medskip
Now we have the system in state $\mid
\overbrace{\mathcal{S}\mathcal{T}}\rangle$, of which the last
$M(n+q)$ qubits provide the running time of all $M$ machines under
all schedules. The makespan of a schedule is equal to the maximum
running time among the $M$ machines under that schedule. We want
to construct a {\tt Max} circuit, as shown in Figure \ref{Max}, to
compute the makespan of each schedule. The quantum circuit
computes the maximum over an input set, e.g., $\text{Max} (\mid
5\rangle, \mid 7\rangle, \mid 4\rangle)=\mid 7\rangle$. The inputs
to this circuit are $M$ registers of $n+q$ qubits each. The
output, $\mid \text{Max} \rangle$, also has $n+q$ qubits.
Implementing such arithmetic on a quantum computer has been
addressed by many researchers \cite{Gossett98,Draper04,Meter05}
and we omit the detailed circuit here.
\begin{figure}[h]
\begin{center}
\includegraphics[width=7cm]{figures/Max.eps}
\end{center}
\caption{A quantum circuit to compute the makespan, the maximum
running time among all machines. The output of this circuit is an
entangled state.} \label{Max}
\end{figure}
The output of a quantum circuit as in Figure \ref{Max} is an
entangled state, rather than the tensor product $\mid
T_1\rangle\otimes \mid T_2\rangle\otimes \cdots \otimes \mid
T_M\rangle\otimes\mid C_{max}\rangle$. Recall that $\mid
\mathcal{T}\rangle$ was prepared in an equal superposition of
running times of the machines under all possible schedules. Thus,
such a {\tt Max} operation prepares the $C_{max}$ register in an
equal superposition of the makespans of all possible schedules.
\medskip
Up to now, we discussed the implementation of the quantum circuits
presented in Figure \ref{makespan}. During these operations, we
successfully entangle the job vectors with the {\tt Sum} and the
{\tt Makespan} vectors. These vectors are prepared in the equal
superposition of all possible schedules, which can be written as:
$$
\mid \overbrace{\mathcal{S}\mathcal{T}C_{max}} \rangle =
\sum_{i=1}^{2^{m2^n}}\mid \mathcal{S}_i\rangle\mid
\mathcal{T}_i\rangle\mid C_{max~i}\rangle,
$$
where $i$, $1\leq i \leq 2^{m2^n}$, indexes the different
schedules and we ignore the normalization coefficients.
\medskip
For the simple example in Section \ref{InformationEncoding}, the
system will be in the state:
$\mid \overbrace{\mathcal{S}\mathcal{T}C_{max}} \rangle =
1/\sqrt{2}(\mid \mathcal{S}_1\rangle\mid \mathcal{T}_1\rangle\mid
C_{max1}\rangle+\mid \mathcal{S}_2\rangle\mid
\mathcal{T}_2\rangle\mid C_{max2}\rangle)$
\hspace{0.3in}$=1/\sqrt{2}( \mid \underline{00}0001 \rangle\otimes
\mid \underline{01}0001 \rangle\otimes \mid \underline{00}0110
\rangle\otimes \mid \underline{11}0100 \rangle\otimes \mid
\underline{10}0011 \rangle\otimes \mid \underline{01}0111
\rangle\otimes \mid \underline{10}0011 \rangle$
\hspace{1.0in}$\otimes \mid \underline{00}0001 \rangle\otimes\mid
0001000\rangle\otimes\mid 0001000\rangle\otimes\mid
0000110\rangle\otimes\mid 0000100\rangle\otimes\mid 0001000\rangle
$
\hspace{0.3in}$+ \mid \underline{01}0011 \rangle\otimes \mid
\underline{00}0010 \rangle\otimes \mid \underline{10}0101
\rangle\otimes \mid \underline{11}0100 \rangle\otimes \mid
\underline{10}0011 \rangle\otimes \mid \underline{01}0111
\rangle\otimes \mid \underline{00}0101 \rangle$
\hspace{1.0in}$\otimes \mid \underline{01}1010 \rangle\otimes\mid
0000111\rangle\otimes\mid 0010100\rangle\otimes\mid
0001000\rangle\otimes\mid 0000100\rangle\otimes\mid
0010100\rangle)$
\medskip
\section{Searching for a Schedule with a Given Makespan}
\label{GeneralizedGroverSearchAlgorithms}
\medskip
In our approach, a generalized version of Grover's search
algorithm allows us to find a schedule with a given makespan. The
basic ideas of Grover's quantum search algorithm are discussed
next.
\medskip
Consider a search space $\mathcal{T}_{search} = \{E_{x} \}$
consisting of $N=2^{n}$ elements. Each element $E_{x}$, $0 \le x
\le 2^{n}-1$, is uniquely identified by a binary $n$-tuple $x$,
called {\it the index} of the element. We assume that $K \le N$
elements satisfy the requirements of a query and we wish to
identify one of them ($K$ is used here to avoid confusion with the
number of machines, $M$).
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{figures/Grover2.eps}
\end{center}
\caption{A quantum circuit for Grover's iteration}
\label{Grover2}
\end{figure}
The classic approach is to repeatedly select an element $E_{j}$,
decide if the element is a solution to the query, and if so,
terminate the search. If there is a single solution ($K=1$) then
a classical exhaustive search algorithm requires
$\mathcal{O}(2^{n})$ iterations.
For Grover's quantum search algorithm, we apply a
Walsh-Hadamard transform to create an equal superposition state
which includes all elements of the search space:
$$
\mid \psi \rangle = { 1 \over \sqrt{N}} \sum_{x=0}^{N-1} \mid x
\rangle.
$$
Then we perform Grover's iterations. The circuit for this
algorithm is presented in Figure \ref{Grover2}.
An oracle examines an index/label, $x$, and decides if it matches
the search argument or not. To abstract this process we consider a
function $f(x)$ with $0 \le x \le 2^{n}-1$ such that
$$
f(x) =
\left\{
\begin{array}{ll}
0 & \text{if}~~ x~ \text{is~not~a~solution} \\
1 & \text{if}~~ x~ \text{is~a~solution}.
\end{array}
\right.
$$
An {\it oracle qubit}, $ \mid q \rangle$, initially set to $\mid 0
\rangle$ is reset to $\mid 1 \rangle$ when the oracle recognizes a
solution to the search problem we pose. The black box oracle $O$
performs the following transformation
$$
O\mid x \rangle \mid q \rangle = \mid x \rangle \mid q
\oplus f(x) \rangle.
$$
The oracle qubit\index{oracle qubit} can be initially in the state
$\mid q \rangle = (1 / \sqrt{2}) (\mid 0 \rangle - \mid 1
\rangle). $
Thus this transformation can be rewritten as
$$
O\mid x \rangle ~(\mid 0 \rangle - \mid 1 \rangle ) / \sqrt{2}
=
(-1)^{f(x)} \mid x \rangle ~(\mid 0 \rangle - \mid 1 \rangle ) /
\sqrt{2}.
$$
The state of the oracle qubit does not change and can be omitted
from the description of the quantum search algorithm
$$
\mid x \rangle ~~~~\mapsto~~~~ (-1)^{f(x)} \mid x \rangle.
$$
Let $U$ be the following transformation:
$$
U = 2 \mid 0\rangle \langle 0 \mid - I.
$$
Then a conditional phase shift in Figure \ref{Grover2} applied to
the system is:
$$
S_{p} = H^{\otimes n} U H^{\otimes n} = H^{\otimes n} (2 \mid 0
\rangle \langle 0 \mid - I) H^{\otimes n} = 2 \mid \psi \rangle
\langle \psi \mid - I.
$$
A Grover's iteration consists of $O$, the transformation performed
by the oracle followed by a conditional phase shift:
$$
G = S_{p} O = ( 2 \mid \psi \rangle \langle \psi \mid - I) O
$$
Thus, the quantum search algorithm could be written as:
$$
G^R{ 1 \over \sqrt{N}} \sum_{x=0}^{N-1} \mid x \rangle\mid
q\rangle = [(2\mid \psi \rangle \langle \psi \mid - I) O]^R
{1\over \sqrt{N}} \sum_{x=0}^{N-1}\mid x \rangle(\mid 0 \rangle -
\mid 1 \rangle ) / \sqrt{2} \approx \mid x_0\rangle(\mid 0 \rangle
- \mid 1 \rangle ) / \sqrt{2}
$$
After $R = \mathcal{O} (\sqrt{N \over K})$ iterations, we
measure the first $n$ qubits and obtain $x_0$, a solution to
the search problem.
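For intuition, Grover's iteration is easy to emulate with a
classical statevector. The sketch below (ours; it uses
\texttt{numpy} and, unlike a real oracle, is told the marked
indices) applies $G = S_{p}O$ the prescribed number of times and
shows the amplitude concentrating on the marked index.
\begin{verbatim}
import numpy as np

def grover(n, marked):
    N, K = 2 ** n, len(marked)
    psi = np.full(N, 1.0 / np.sqrt(N))     # W|0>: equal superposition
    sign = np.ones(N)
    sign[list(marked)] = -1.0              # oracle phases (-1)^f(x)
    R = int(np.pi / 4 * np.sqrt(N / K))    # number of iterations
    state = psi.copy()
    for _ in range(R):
        state = sign * state                     # O
        state = 2 * psi * (psi @ state) - state  # S_p = 2|psi><psi| - I
    return np.abs(state) ** 2

print(grover(8, {42})[42])   # ~0.99996 after R = 12 iterations
\end{verbatim}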
\bigskip
{\it Amplitude Amplification} represents a generalization of
Grover's quantum search idea \cite{Brassard00,Hoyer00a}. Let
$\mathcal{A}$ be a unitary operator in a Hilbert space,
$\mathcal{H}_{N}$ with an orthonormal basis $ \mid 0 \rangle, \mid
1 \rangle, \ldots \mid N-1 \rangle $; the only condition imposed
on $\mathcal{A}$ is to be invertible, thus $\mathcal{A}$ must not
involve any measurements.
If $\chi: \{ 0, 1, \dots N-1 \} \mapsto \{ 0,1 \} $ is a Boolean
function we say that the basis state $\mid x \rangle $ is a
``Good'' state if $\chi(x) = 1$ and $\mid x \rangle $ is a ``Bad''
state" if $\chi(x) = 0$. The central piece of the amplitude
amplification is an operator $\mathcal{Q}$ defined as:
$$
\mathcal{Q} = \mathcal{Q}( \mathcal{A}, \chi, \phi, \varphi) =
- \mathcal{A} S_{0}(\phi) \mathcal{A}^{-1} S_{\chi}(\varphi),
$$
where $\phi$ and $\varphi$ are two angles such that $0 \le \phi,
\varphi \le \pi$, and $S_{\chi}$ is an operator which conditionally
changes the amplitudes of ``Good'' states:
$$
\mid x \rangle \mapsto
\left\{
\begin{array} {r c l}
e^{i \varphi} \mid x \rangle & \text{if} & \chi(x) = 1 \\
\mid x \rangle & \text{if} & \chi(x) = 0.
\end{array}
\right.
$$
Similarly, $S_{0}(\phi)$ multiplies the amplitude by a phase
factor $e^{i \phi}$ if the state is not $\mid 0 \rangle$.
Let $a$ denote the probability of finding a ``Good'' element $x$;
amplitude amplification allows one to find a ``Good'' $x$ after an
expected number of applications of $\mathcal{A}$ and of its
inverse that is proportional to $1/\sqrt{a}$. We also define the
angle $\theta$ such that:
$$
\sin^{2} (\theta) = a.
$$
Grover's algorithm is a particular instance of amplitude
amplification when the oracle implements the Boolean function
$f=\chi$, and the transformation $\mathcal{A}$ is the
Walsh-Hadamard transform $W=H^{\otimes n}$ on $n$ qubits.
The iteration carried out by the transformation $Q$ can be
regarded as a \emph{rotation} in the two-dimensional space spanned
by the state consisting of a uniform superposition of
non-solutions and the state consisting of a uniform superposition
of solutions to the search problem. The initial state may be
expressed as:
$$
\mid \psi_0 \rangle=\sqrt{a}\mid Good\rangle+\sqrt{1-a}\mid
Bad\rangle.
$$
Figure \ref{amplitude} presents the effect of the transformation
$Q= -\mathcal{A} S_0 \mathcal{A}^{-1}S_{\chi}$ as:
\begin{itemize}
\item the oracle operation $S_{\chi}$ performs a reflection about the
vector $\mid Good \rangle$.
$$
S_{\chi}\mid x\rangle=\mid x\rangle ~~(\chi(x)=1)~~~~~~~~~~~~S_{\chi}\mid x\rangle=-\mid x\rangle ~~ (\chi(x)=0)
$$
\item $\mathcal{A}S_0\mathcal{A}^{-1}$ performs a reflection about the initial state
$\mid \psi_0\rangle$
$$S_0\mid 0\rangle=\mid 0\rangle ~~~~~~~~~~~~~~S_0\mid x\rangle=-\mid x\rangle ~~ (x\neq0)
$$
\item $Q= -\mathcal{A}S_0 \mathcal{A}^{-1}S_{\chi}$ performs a rotation toward the $\mid
Good\rangle$ vector by $2\theta$ radians, where $\sin^2 \theta=a$.
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=14cm]{figures/amplitude.eps}
\end{center}
\caption{The search operator $Q$ performs a rotation toward $\mid
Good\rangle$ states by $2\theta$ radians. $\theta$ is the angle
between the initial state $\mid \psi_0\rangle$ and the $\mid
Bad\rangle$ state in the two-dimensional space spanned by the $\mid
Good\rangle$ and $\mid Bad\rangle$ states and $\sin^{2} (\theta) =
a$ with $a$ the probability of finding a ``Good'' element $x$. (a)
The current state $\mid \psi\rangle$ and the initial state $\mid
\psi_0\rangle$. (b) The oracle operation $S_{\chi}$ performs a
reflection of the current state $\mid \psi\rangle$ about the
vector $\mid Good \rangle$. (c) $\mathcal{A}S_0\mathcal{A}^{-1}$
performs a reflection about the initial state $\mid \psi_0\rangle$.
} \label{amplitude}
\end{figure}
Each iteration of $Q$ rotates the state of the system by $2\theta$
radians toward the solutions of the search problem. Thus, after
$j$ iterations, a measurement of the final state
$Q^j\mid\psi_0\rangle$ will produce a ``Good'' state with
probability equal to $\sin^2((2j+1)\theta)$. The amplitude
amplification algorithm can find a ``Good'' solution in
$\mathcal{O}(1/\sqrt{a})$ iterations \cite{Brassard00,Hoyer00a}.
Each iteration involves one application of $\mathcal{A}$ and one
of $\mathcal{A}^{-1}$.
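As a numerical illustration (ours), the success probability
$\sin^2((2j+1)\theta)$ peaks near $j \approx \pi/(4\theta)$, which
for small $a$ reproduces the $\mathcal{O}(1/\sqrt{a})$ iteration
count:
\begin{verbatim}
import numpy as np

a = 2.0 / 65536.0            # e.g. two "Good" schedules out of 2^16
theta = np.arcsin(np.sqrt(a))
j_best = int(np.pi / (4 * theta))
print(j_best)                                  # 142 iterations
print(np.sin((2 * j_best + 1) * theta) ** 2)   # success probability ~1
\end{verbatim}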
\medskip
Now let us return to our scheduling problem and recall that:
\begin{itemize}
\item
We prepare each job vector $\mid J_i \rangle$ in a superposition
state which includes the running times on all machines. The first
$m$ qubits of a job vector are used for the index of the machine
and remaining $q$ qubits are used for the running time of that job
on the machine.
\item
We sum the execution times of all jobs according to their
machine indices, which produces all schedules and gives the
running time $T_j$ of each machine under these schedules. The
system is prepared in an entangled state:
$$\mid
\overbrace{\mathcal{S}\mathcal{T}}\rangle = {1 \over
\sqrt{\sigma}}\sum_{\text{schedules }
k}(\bigotimes_{i=1}^{N}\mid J_{ik}\rangle\bigotimes_{j=1}^{M}\mid
T_{jk}\rangle ), ~~~~~~~~\sigma= 2^{m 2^{n}},
$$
a superposition of job vectors and running time vectors of all
possible schedules.
\item
We obtain the maximum running time among all machines using the
{\tt Max} quantum circuit and prepare the system in state:
$$
\mid \overbrace{\mathcal{S}\mathcal{T}C_{max}} \rangle = {1 \over
\sqrt{\sigma}}\sum_{\text{schedules }
k}(\bigotimes_{i=1}^{N}\mid J_{ik}\rangle\bigotimes_{j=1}^{M}\mid
T_{jk}\rangle \mid C_{max~k}\rangle), ~~~~~~~~\sigma= 2^{m 2^{n}}.$$
\end{itemize}
\bigskip
As we can see, our algorithm successfully prepares the system in
an equal superposition of all $2^{m 2^{n}}$ possible schedules. We
define this whole preparation process as $\mathcal{Z}$. This
$\mathcal{Z}$ transformation does not carry out a measurement of
the system at any time. Therefore, there exists an inverse
transformation operation $\mathcal{Z}^{-1}$. We can use the
amplitude amplification algorithm to search for a schedule with
makespan $C_{max~k}= \mu$. If we find such a makespan, the job vectors
will be projected as well. These projections will give us the
actual mapping of jobs to machines corresponding to the schedule
with the given makespan.
The search process consists of the following steps:
\begin{itemize}
\item Apply the $\mathcal{Z}$ transformation on $\mid 0\rangle$ to
prepare the system in an equal superposition of all $2^{m 2^{n}}$
possible schedules, $\mid \psi\rangle$.
\item
Repeat the following step $\mathcal{O}(\sqrt{\sigma})$ times:
apply $Q$ to $\mid \psi\rangle$, where $Q =
-\mathcal{Z}S_0 \mathcal{Z}^{-1}S_{\chi}$.
\item
Measure the resulting state.
\item
Return the result; the job vectors give the detailed schedule (a
classical emulation of the oracle's test is sketched below).
\end{itemize}
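The oracle's predicate, $\chi(\mathcal{S}_k)=1$ if and only if
$C_{max}^{\mathcal{S}_k}=\mu$, can be emulated classically for the
example of Section \ref{InformationEncoding}; the sketch below
(ours) enumerates all $\sigma=M^N$ schedules, which is exactly the
exhaustive work the quantum search avoids.
\begin{verbatim}
import itertools

T = [[1, 3, 7, 15], [2, 1, 9, 3], [6, 2, 5, 8], [11, 13, 7, 4],
     [15, 12, 3, 10], [10, 7, 8, 14], [5, 2, 3, 9], [1, 10, 11, 13]]
N, M, mu = 8, 4, 8     # target makespan mu (schedule S_1 attains it)

good = 0
for s in itertools.product(range(M), repeat=N):  # all M**N schedules
    loads = [0] * M
    for job, machine in enumerate(s):
        loads[machine] += T[job][machine]
    good += (max(loads) == mu)                   # chi(s)
print(good, M ** N)  # ~ (pi/4)*sqrt(M**N / good) Grover iterations
\end{verbatim}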
\medskip
The oracle in our search algorithm differs somewhat from the
oracle in Grover's algorithm, which examines all qubits to flip
the phase of the solution(s) of the search problem. In our case,
the oracle only checks a subset of the qubits, the $C_{max}$
register. It is easy to
implement such an oracle using an idea similar to the one in
\cite{Nielsen2000}.
The algorithm presented in this paper can be optimized; to avoid
over-rotation we could use the fixed-point quantum search
\cite{Grover05}.
\medskip
\section{Scheduling Problems with a Quantum Counterpart}
\medskip
Many other scheduling problems can be reformulated to take
advantage of quantum search and exhibit a square root speedup
versus their classical counterparts. Such problems require that a
synthetic measure of performance, $\mu$, be within a given range,
$\mu_{min} \le \mu \le \mu_{max}$.
The generic quantum algorithm proceeds as follows:
\begin{enumerate}
\item
Devise an encoding scheme for the information required to compute
$\mu_{S_{i}}$ for a given schedule $S_{i}$.
\item
Design an algorithm to compute the synthetic measure of
performance, $\mu_{S_{i}}$.
\item
Construct $\mid \mathcal{Q} \rangle$ with
$\mathcal{Q}=(\mu_{S_{i}},S_{i})$ in a superposition state for all
possible schedules.
\item
Design the quantum circuits for the specific encoding scheme and
for the algorithms. Given a set of values $\{q_{1}, q_{2}, \ldots
q_{n}\}$ the circuit should be able to compute a function
$\mu_{S_{i}}=f(q_{1}, q_{2}, \ldots q_{n})$. For example, we may
wish to compute the maximum, the minimum, or an average value.
\item
Design an oracle to identify a specific value of the function
$\mu_{S_{i}}$.
\item Use the quantum search to find if there is a value
$\mu_{S_{i}}= \mu_{min}$. If so, determine the corresponding
schedule $S_{i}$. Continue this process until $\mu_{S_{i}}=
\mu_{max}$.
\item
If no schedule can be found then report failure, otherwise provide
the list of all schedules $S_{i}$ and the corresponding measures
of performance, $\mu_{S_{i}}$.
\end{enumerate}
Consider, for example, the $R||\sum_{i=1}^{N}C_i^S$ scheduling
problem, where the goal is to optimize the average completion time
in an unrelated parallel machine environment. It has an encoding
process similar to that of the $R||C_{max}$ problem. While
constructing all schedules, we also sum the completion times of
the different jobs using simple arithmetic circuits, and then
apply a Grover-type search. Other scheduling problems, such as
minimizing the average waiting time, could also take advantage of
quantum search.
Oftentimes, we have to search for schedules that optimize the
largest subset of a set of synthetic measures of performance,
$\mu,\nu, \pi, \rho, \theta \ldots$. For example, we could have
multiple synthetic performance indicators related to: timing,
resource utilization, cost, and quality of service. In this case
we would run repeatedly the scheduling algorithm for each
performance measure and search for a solution in the given range
for each measure. Once we have conducted all individual searches
we determine the intersection of all schedules that satisfy all
conditions; if the set is empty we remove individual conditions
one by one until we find a non-empty set.
Scheduling is also intimately related to planning, where we have a
complex goal and the sequence of actions to reach each goal has
to be determined. The scenario described above is repeated for
each plan; thus the square root speedup of algorithms based upon
quantum search becomes even more important.
\medskip
\section{Summary}
\medskip
When a deadline is imposed, or when we wish to find a schedule
within a given range of possible average completion times, we
discover that a full range of scheduling problems have quantum
counterparts which can take advantage of Grover's search.
Many scheduling problems, resource allocations, and path-finding
problems, share some common properties with the $R||C_{max}$
problem discussed in this paper: a well-defined initial state, a
well-defined desired state or a range of desired states, many
paths to reach the desired state, and well-defined attributes of
an optimal path.
The quantum algorithmic solution to such problems requires the
following steps:
\begin{itemize}
\item Prepare the initial state in an equal superposition of all
possible choices.
\item Use some reversible quantum arithmetic to compute the
specialized property (makespan in our case) needed.
\item Construct the necessary oracle circuit.
\item Use Grover-type algorithms to search for the desired solution.
\end{itemize}
The solution we propose based upon Grover's algorithm is not
universally applicable. Problems requiring specific optimization
criteria may require quantum circuits that cannot be easily
implemented with reversible quantum arithmetic. Moreover,
Grover-type algorithms lead to a square-root improvement over
exhaustive search, while good classical algorithms may have
better performance on some special problems.
\section{Acknowledgments}
The research reported in this paper was partially supported by
National Science Foundation grant CCF 0523603. The authors express
their thanks to Lov Grover, Pawel Wocjan, and an anonymous
reviewer for their comments and suggestions.
\section{Introduction}
\label{sec:intro}
High velocity clouds (HVCs) are clouds of gas whose speeds differ substantially from the local standard of rest (LSR). The minimum speed in the LSR reference frame is classically defined as $\lvert$v\subscript{LSR}$\rvert$~=~90~km~s\superscript{-1}, but it should be noted
that speeds as low as 70~km~s\superscript{-1} and as high as 100~km~s\superscript{-1} have been used to define a lower limit for HVCs (\citealt*{1997ARA&A..35..217W}, and references therein). Although the original use of the term `HVC' implied that the cloud is neutral, ionized
high velocity material has been observed (see \citealt{2009ApJ...703.1832H} and \citealt{2012ARA&A..50..491P}). In fact, moderately ionized and highly ionized fast moving material appears to cover larger fractions of the sky (81\%\ for Si~{\footnotesize III}, \citealt{2009ApJ...699..754S}; \citealt{2009ApJ...705..962C}
and 60\%\ for \oxysix, \citealt{2003ApJS..146..165S}) than does neutral material (37\%\ for H~{\footnotesize I} with column densities exceeding $\sim$7$\times$10\superscript{17}~cm\superscript{-2}, \citealt{1995ApJ...447..642M}; \citealt{2002ApJS..140..331L}). For the
purpose of this paper, we shall refer to both neutral and ionized high velocity gas as HVC gas.
Three major origin theories exist for HVCs: feedback (or fountain), satellite accretion, and accretion from the IGM. Feedback entails Galactic disk gas being ejected into the halo, like a fountain that is energized by
various processes such as stellar winds from young
stars and supernovae. This ejected gas cools in the halo, via radiative cooling, and accretes onto the disk. The second idea on our list, the loss of material from satellite galaxies, is exemplified by the Magellanic Stream (see Figure 1 of \citealt*{1997ARA&A..35..217W}
and \citet{2012ARA&A..50..491P})
that trails behind the LMC and SMC. As these dwarf galaxies pass through the Milky Way's extended halo, some of their gas is ram pressure stripped and tidally stripped off of them. Perhaps as the gas falls towards the disk of our Galaxy it may become shock heated and
fragment to the point of becoming indistinguishable from our Galaxy's halo gas or condense, due to radiative cooling, and continue to travel toward the disk of our Galaxy as HVCs. Lastly, \textcolor{black}{dark matter-dominated gas-bearing clouds are both expected
in simulations of the Local Group \citep{1999ApJ...522...82K} and observed \citep{2013ApJ...768...77A} while} IGM that flows along galactic filaments has been shown in simulations to condense and accrete along the filaments \citep{2012ApJ...759..137J}. Some of this gas could
continue to cool and begin falling towards our Galaxy's disk.
Each of these origins results from a different reservoir of gas, each of which should have a different metallicity. Thus observed metallicities provide clues to the origins of the clouds. For example, \citet{2003AJ....125.3122T}
and \citet{2007ApJ...657..271C} state that the low metallicities and differing metallicity measurements between different sight lines in Complex C imply that it originated from a more pristine reservoir of gas
than the disk of the Milky Way \textcolor{black}{and has mixed, and is mixing, with ambient material}.
A cloud's metallicity, however, is also affected by the cloud's interaction with its environment. In our simulations, the HVC gas mixes with ambient halo material. Specifically, during the mixing process, the HVC fragments and the resulting fragments entrain and accelerate
halo material. The metallicity of the mixed material is between that of the original cloud and the halo. As the cloud continues to move through the halo it will entrain more halo material, further modifying the observed metallicity.
It should be noted that, for the purpose of this paper, we do not use the technical definition of metallicity ($\log_{10}\left(\frac{n_{\beta}}{n_{H}} \right)-\log_{10}\left(\frac{n_{\beta}}{n_{H}} \right)_{\sun}$),
where $n_{\beta}$ is the number density of any chosen element, $\beta$, and $n_{H}$ is the number density of hydrogen. In casual discussion, the term metallicity is taken to mean the ratio of
the measured abundance over the solar abundance ($\frac{\left(\frac{n_{\beta}}{n_{H}} \right)}{\left (\frac{n_{\beta}}{n_{H}}\right)_{\sun}}$). We use this meaning of the term metallicity and use the
$\left(\frac{n_{\beta}}{n_{H}}\right)_{\sun}$ values in \citealt*{1989GeCoA..53..197A}; e.g., 9.77$\times 10^{-2}$, 3.63$\times 10^{-4}$, 1.21$\times 10^{-4}$, and 8.51$\times 10^{-4}$ heliums, carbons, nitrogens, and oxygens per hydrogen.
Our definition allows us to define solar metallicity to have a value of unity (\abundsolar=1.0). This shall be the convention used throughout the paper when discussing the effects of mixing upon the various metallicities we report,
unless otherwise stated.
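In code form, the relation between the logarithmic and linear conventions is a one-line conversion (a minimal sketch of ours):
\begin{verbatim}
# Linear metallicity (this paper's convention) from a logarithmic
# abundance [X/H] in dex: solar gas has [X/H] = 0 and metallicity 1.
def linear_metallicity(dex):
    return 10.0 ** dex

print(linear_metallicity(0.0), linear_metallicity(-1.0))  # 1.0 0.1
\end{verbatim}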
Not only is determining the extent of mixing important for determinations of a cloud's original metallicity, but it is also important for understanding the accretion of Milky Way gas into HVCs.
To examine the effect of mixing in detail, we run detailed 2 and 3 dimensional hydrodynamic simulations. We use the FLASH version 2.5 \citep{2000ApJS..131..273F} code.
Our methodology and initial conditions are described in Section~\ref{sec:code}. In Section~\ref{sec:results}, we present the simulational results. Specifically, Subsection~\ref{subsec:example} shows
how the metallicity of high velocity material can be augmented on small scales. Subsection~\ref{subsec:global} shows that the degree of mixing is correlated with the ionization level of the gas such that
weakly ionized high velocity gas is least mixed and highly ionized high velocity gas is well mixed. In Subsections~\ref{subsubsec:ahvg} and \ref{subsubsec:alvg}, respectively, we explore the effects of mixing on the high velocity and
decelerated gas. In Subsection~\ref{subsec:velocity} we further examine the relationship between velocity and metallicity.
We present our conclusions in Section~\ref{sec:conclusion}.
\section{Numerical Code and Initial Parameters}
\label{sec:code}
We use FLASH to calculate the hydrodynamic interaction between the cloud and Milky Way halo gas. We implement two general domain geometries, one of which uses a 2 dimensional, fixed, cylindrically symmetric
domain of height=20,800~pc
(spanning from $z$=-1,200~pc to 19,600~pc) and radius=1,200~pc (Model~A), while the other (Model~B) uses a 3 dimensional, fixed, Cartesian domain of height=10,800~pc in $z$
(spanning from $z$=-1,200~pc to 9,600~pc) and cross sectional dimensions $x$=1,200~pc and $y$=1,200~pc (spanning from $x$=0~pc to $x$=1,200~pc and $y$=0~pc to $y$=1,200~pc). In Model~B we assume a nearly symmetric structure across the $x$=0~pc and
$y$=0~pc planes and therefore simulate one fourth of the cloud to make better use of our computational resources. \textcolor{black}{In addition to the $\frac{1}{4}$~cloud results presented in this paper, preliminary simulations of a half-cloud were
made. There was little difference in visible morphology between the $\frac{1}{2}$~cloud and $\frac{1}{4}$~cloud simulations. However, additional half-cloud simulations were deemed unfeasible due to their longer wall clock times. We reserve
half-cloud simulations for future projects.} Although we use an adaptively refinable grid, we begin the simulations at full refinement such that
all zones span $\sim$3~pc along the z-direction and $\sim$3~pc along the radial direction for Model~A, and $\sim$9~pc in x, y, and z directions for Model~B. Aside from the domain and geometry, all other initial parameters
are the same between our two models.
A 150~pc radius spherical cloud is initially placed with its center at $z$=0~pc, $r$=0~pc in Model~A, and at $x$=0~pc, $y$=0~pc, and $z$=0~pc in Model~B. As in Model C of \citet{2011ApJ...739...30K}, all of our model clouds are warm
(cloud temperature, $T_{cl}$=10\superscript{3}~K), moderately dense
(cloud density of hydrogen, $n_{H,cl}$=0.1~cm\superscript{-3} and density of helium, $n_{He,cl}\cong$~0.1$\times n_{H,cl}$) and surrounded by hot ($T_{ISM}$=10\superscript{6}~K), low density (density of hydrogen, $n_{H,ISM}$=10\superscript{-4}
cm\superscript{-3} and density of helium, $n_{He,ISM}$= 0.1$\times n_{H,ISM}$) Milky Way halo gas. The halo number density falls within observational
constraints \citep[see][]{2011ApJ...739...30K}. \textcolor{black}{It is slightly less dense than the material around Complex~C (10\superscript{-3.3} to 10\superscript{-3.0}~cm\superscript{-3}, see \citealt{2011AJ....141...57H}) and similar to that around the tail of the Magellanic Stream
(10\superscript{-4.1} to 10\superscript{-3.7}~cm\superscript{-3}, see \citealt{2011AJ....141...57H}).} Our halo density is chosen to mimic the extended halo as calculated by \citet{1995ApJ...453..673W} and modeled by
\citet*{2009ApJ...698.1485H}. Our halo material remains at a constant density throughout the simulation. \textcolor{black}{This is not so dissimilar from reality, in that the observationally determined gradient is small, with a factor of
$\sim$6 decrease in density from the height of Complex~C to the height of the tail of the Magellanic Stream. An object travelling at an oblique angle to the Galactic disk would experience an even smaller gradient.} We choose this scenario so that the effects of mixing can be studied over a longer period of simulated time without
changes to variables, such as ambient density, that would effect the rates of ablation by damping or increasing hydrodynamic instabilities. \textcolor{black}{We did not model density inhomogeneities (i.e., clumpiness) as the size scale and density
contrast of such inhomogeneities are not well understood.} Future projects shall focus on the effects of density, speed, magnetic fields, and gravity, as they relate to the survival and mixing of HVCs.
Each of these variables can play important roles in the evolution of an HVC and may alter the mixing characteristics \citep[see][]{2009ApJ...699.1775K}.
Rather than model a sharp boundary between the cloud and ISM, we model a smooth transition in both density and temperature following the function
$$n_{H}\left (r \right ) = -0.5\left ( n_{\text{H, cl}}-n_{\text{H, ISM}} \right ) \tanh \left (\frac{r-150~\text{pc}}{20~\text{pc}}\right )+ 0.5 \left ( n_{\text{H, cl}}+n_{\text{H, ISM}} \right ),$$
which is similar to the density profiles used in simulations by \citet*{2009ApJ...698.1485H} and \citet{2011ApJ...739...30K} who based their hyperbolic tangent transition function on the observations of \citet{2001A&A...369..616B}.
For a graphical representation of the above equation see Figure~1 of \citet{2011ApJ...739...30K}. The density decreases with radius in a transition zone that extends from $r\cong$~90~pc to $r\cong$~210~pc, with the above quoted cloud
radius being the radius at which the density drops to half that of the cloud's center. Any material within the transition zone that initially has $n_{H} \geq$~5~$n_{H,ISM}$
(i.e., any material within $\sim$210~pc of the cloud's center) is assigned the velocity and metallicity of the cloud (see below); note that the majority of this material (over 80\%\ by mass) is within $r$=150~pc.
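For reference, the profile can be evaluated directly; a sketch of ours, with the cloud and halo hydrogen densities given above:
\begin{verbatim}
import numpy as np

n_cl, n_ism = 0.1, 1.0e-4          # hydrogen densities in cm^-3
def n_H(r):                        # r in pc
    return (-0.5 * (n_cl - n_ism) * np.tanh((r - 150.0) / 20.0)
            + 0.5 * (n_cl + n_ism))

print(n_H(0.0))     # ~0.1    : cloud value
print(n_H(150.0))   # 0.05005 : half the central density, at the cloud radius
print(n_H(300.0))   # ~1e-4   : ambient halo value
\end{verbatim}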
At the beginning of each simulation, the temperature in the transition zone increases with radius as the density decreases with radius such that the thermal pressure remains constant and is equal to that in the cloud and
ambient gas at the beginning of the simulation. It should be noted that the pressure used in the hydrodynamic calculations is determined using the approximation that the hydrogen and helium in the cloud are fully ionized, but
collisionally ionized gas at the model cloud temperature is not fully ionized. The ionization approximation affects only the pressure calculations done by FLASH. The effect of this approximation is that the cloud is less compressible than a fully neutral cloud.
However, observed clouds are found to be partially ionized. \citet{2009ApJ...703.1832H}, for example, determined the mass of the Smith Cloud to be 5.0$\times$10\superscript{6} solar masses ($M_{\sun}$) in neutral hydrogen and
$\sim$3$\times$10\superscript{6}~$M_{\sun}$ in ionized hydrogen. From this we can say that $\sim\frac{3}{8}$ of the hydrogen in the Smith Cloud is ionized.
The metallicities of observed HVCs are generally lower than those of the Milky Way and may have been even lower before the high velocity gas began to mix with the Milky Way gas. In order to track the permeation of Milky Way gas
into high velocity gas and vice versa, we give our model HVC and model halo gas different metallicities. We give the halo gas solar photospheric metallicities ($\mathcal{M}_{h}$=\abundsolar=1.0) and we give the HVC gas, also simply referred to as
cloud gas, extremely small metallicities, $\mathcal{M}_{cl}$=0.001. The low metallicity of the cloud extends
through most of the transition zone and ends where $n_{H}=$~5~$n_{H,ISM}$. Later in the simulation, when moderate metallicities of metals are found in high velocity gas, their magnitudes minus the initial 0.001 metallicity of the
cloud, can be attributed to mixing and thus directly provide a quantitative measure of the permeation of Milky Way gas into high velocity cloud gas. \textcolor{black}{It should be noted that our choices of metallicities for the initial cloud and
ambient material can be changed to any value.}
As will be shown in Section~\ref{sec:results}, the degree of permeation varies such that plasma in which the metals are highly ionized generally contains larger fractions of Milky Way gas than does plasma in which the metals are poorly ionized.
Examination of this trend requires accurate tracking of the ionization levels of the gas, for which we use FLASH's non-equilibrium ionization (NEI) module. The NEI module is used
to calculate the extent of the ionization and recombination that occurs during each time step, although the ionization levels at the beginning of each simulation
are calculated under the assumption of collisional ionization equilibrium. Thus initially the cloud is predominantly neutral, based upon its temperature.
As in Model C of \citet{2011ApJ...739...30K}, the halo gas and the HVC move at 150~km~s\superscript{-1} relative to each other. A velocity of this magnitude allows us to distinguish high velocity material from normal velocity material.
Our simulations are conducted in the initial rest frame of the cloud. That is, at the beginning of the simulation the cloud is stationary in the domain, and throughout the simulation hot halo gas flows upwards
(in the positive z-direction) at a speed of 150~km~s\superscript{-1}. This choice of rest frame allows us to model the mixed gas over a longer period of time in a moderately tall domain than could be done if the simulations had been
conducted in the halo's rest frame. However, for the convenience of the reader, and for easier comparison with observations, we report all velocities in the halo's rest frame, from the point of view of an imaginary observer situated below the bottom of the
domain. We accomplish the conversion by subtracting 150~km~s\superscript{-1}
from the simulated velocities in the z-direction. Henceforth, simulated material that moves upwards (away from the Galactic plane) in the halo's rest frame will be described as having a positive velocity in the z-direction while material that moves
downwards (toward the Galactic plane) will be described as having a negative velocity. The total duration of our simulations is 200~Myrs in simulational time which is longer than is typical in HVC simulations. This
duration and all initial parameters are listed in Table~\ref{tab:initcond}.
\section{Results}
\label{sec:results}
The HVC's behavior, including its deformation, shredding, and mixing with the surrounding gas, can be seen in Figure~\ref{fig:cloud} for Model~B (Model~A exhibits similar gross behavior).
The top row of panels in the figure shows the density in the form of number of hydrogens per cm\superscript{3}, where, as mentioned in Section~\ref{sec:code}, there are also 9.77$\times 10^{-2}$ heliums for every hydrogen.
The middle row shows the temperature. The bottom row shows the metallicity, where oxygen is used as an example element though we also simulate and track carbon.
Each variable is plotted as a function of location on the x-z plane along a slice through the domain at $y$=0. We assume
that the structure is approximately symmetric across the $x$=0~pc and $y$=0~pc planes; only positive x and y space is simulated, and only one slice at $y$=0~pc is shown. The displayed panels depict the structure at 0, 40, 80, 120, 160 and
200~Myrs of simulation time but data is collected via output files every 2~Myrs of simulation time and a large number of timesteps (typically 560 and 90 for Models A and B) occur between file outputs.
The semicircle-shaped object initially at $x$, $z$=0 in the leftmost of these panels is a slice of one quarter of the cloud at the beginning of the simulation.
As time progresses, the ISM sweeps past it, deforming its shape, creating instabilities, and pulling off material. In the region above the cloud, the two fluids (ablated cloud material and ISM) create a plume of intermediate density
material that is observable in the density profiles in the top row of Figure~\ref{fig:cloud}. The ablated cloud material and ambient materials mix, not only on the
large scale (such as the 150~pc radius of the cloud), but also on the small scale (in the simulations we see mixing on scales as small as a few cells). The mixing of ablated cloud gas with halo gas lowers the temperature of the plume
material relative to that of the halo, and radiative cooling lowers it
further. These processes, mixing and radiative cooling, cause the plume of ablated and mixed gas that trails the cloud to be cooler than its surroundings. See the temperature profiles in the center row of Figure~\ref{fig:cloud}. In
general the metallicity in the plume is greatest at the top and least at the bottom. While this is also the case for the temperature, temperature and metallicity are not tightly correlated. The metallicity of any given segment of gas is
affected only by mixing while the temperature of any given segment of gas is affected by mixing and cooling. The cooling rate is dependent upon the temperature of the mixture and is generally different from the weighted mean of the
temperature dependent cooling rates of the mixing gasses.
Not only are the metallicities in the mixed gas interesting for observational studies, but in our simulations they provide a quantitative measurement of the extent of mixing that has occurred. A mixture in which
the fraction (by mass) of cloud material is $f_{cl}$ and the fraction (by mass) of halo gas is $f_{h}$ will have an average metallicity of \abundscript=$f_{cl}\mathcal{M}_{cl}+f_{h}\mathcal{M}_{h}$ where
\abundscript\subscript{cl} and \abundscript\subscript{h} are the initial metallicities of the cloud and halo respectively. We choose our initial metallicities so that the preceding equation simplifies to
$\mathcal{M}\cong f_{h}$, allowing us to track both the mixing of cloud and halo gas and the evolution of the cloud's metallicity simultaneously.
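With $f_{cl}+f_{h}=1$ and our chosen values, this relation inverts to
\begin{equation*}
f_{h}\,=\,\frac{\mathcal{M}-\mathcal{M}_{cl}}{\mathcal{M}_{h}-\mathcal{M}_{cl}}\,=\,\frac{\mathcal{M}-0.001}{0.999}\,\cong\,\mathcal{M},
\end{equation*}
so, to within roughly 0.1\%\ in absolute terms, the metallicity of any parcel can be read directly as its halo mass fraction.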
\subsection{Example Fragment}
\label{subsec:example}
As a demonstration, we examine the mixing in a single intermediate temperature, intermediate density ablated fragment modeled in the FLASH simulation for Model~A.
We choose a somewhat dense fragment of material that was ablated from the cloud.
Figure~\ref{fig:denszoom}a identifies the chosen fragment at 80~Myrs when it is located at ($r,z$) = (350~pc, 1,000~pc); Figure~\ref{fig:denszoom}b shows a close-up view of the fragment
at this time. We track the fragment's motion as the simulation time progresses by an additional 10~Myrs. Figures~\ref{fig:denszoom}~c\&d show the fragment at 90~Myrs, after it has reached a position of
($r,z$)=(350~pc, 1,900~pc). In order to provide an example quantitative analysis, single parcels in the densest part of the fragment are chosen for examination at both epochs. These parcels are located in the
centers of the red boxes in the close-up images. We determine the extent of the mixing that has occurred in these parcels by examining their metallicities.
At 80~Myrs the center parcel has an oxygen metallicity of 5\% of the solar value, and using \abundscript$=f_{cl}\mathcal{M}_{cl}+f_{h}\mathcal{M}_{h}$ from the paragraph above, as well as \abundscript\subscript{cl}=0.001 and \abundscript\subscript{h}=1.0,
we deduce that the parcel is composed of 5\%\ halo gas and 95\%\ cloud gas.
In the span of 10~Myrs, the metallicity in the center of the fragment increases to 10\% of the solar value, indicating that the material in the fragment's center is now composed of 10\%\ halo material and 90\%\ cloud material.
Therefore we can state that some halo gas permeates even moderately dense tails of ablated gas, and that the timescale for permeation is relatively short. These points apply to all of our simulations.
\subsection{Global Analysis}
\label{subsec:global}
Logically, the extent of mixing should be related to the duration of exposure between the material that has been ablated from the cloud and the surrounding hot halo gas. There should also be a relationship between the degree
of mixing and the material's ionization level. This is because mixing transfers thermal energy from the halo material to the cloud material, bringing the temperature of the mixed gas to an intermediate value. As the
temperature begins to equilibrate, the atoms that were contributed by the cloud should begin to ionize while those that were contributed by the halo should begin to recombine. Subsequent radiative cooling only complicates
the situation. The hydrodynamical interaction slows the material that was contributed by the cloud while accelerating the entrained halo material, resulting in a relationship between the material's velocity and its degree of mixing
(and thus its metallicity). In this subsection, we calculate the progression of mixing for gas throughout the domain as a function of time, ionization level, and velocity. We characterize the mixing by the resulting metallicity of our
sample elements oxygen and carbon and present our results in a form that can be compared with observations.
A real observation of a high velocity cloud will sample material along a line of sight. Optimally, multiple lines of sight will be used in order to calculate the average metallicity. In both cases, the metallicities
of various parts of the structure are averaged together. Similarly, averages can be calculated from our simulation. Here, we average over all of the gas in the domain that has a low speed
relative to the halo ($v_{z}<$100~km~s\superscript{-1}), and we separately average over all of the gas in the domain that has a high speed relative to the halo ($v_{z}\geq$~100~km~s\superscript{-1}). These procedures are equivalent
to calculating the averages from hundreds of vertical sight lines through the domain. Thus, when we examine the extent of the mixing as sampled by particular ions of oxygen and carbon, we average
the metallicities of all such ions in the domain.
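Operationally, each velocity cut is just a weighted average over the cells selected by their velocity relative to the halo. The following is a minimal sketch of this post-processing step; the array names, the use of the magnitude of $v_{z}$, and the mass weighting are illustrative assumptions rather than the exact analysis pipeline used here.
\begin{verbatim}
import numpy as np

# Hypothetical per-cell arrays taken from a FLASH output file:
# mass [g], metallicity [solar units], vz [km/s, halo rest frame].
def velocity_cut_averages(mass, metallicity, vz, cut=100.0):
    """Mass-weighted mean metallicity above and below the v_z cut."""
    high = np.abs(vz) >= cut  # high velocity selection
    m_high = np.average(metallicity[high], weights=mass[high])
    m_low = np.average(metallicity[~high], weights=mass[~high])
    return m_high, m_low
\end{verbatim}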
We plot the average metallicities, \abundscriptbar\ (i.e., the metallicity of oxygen averaged over all parcels in the domain that have the appropriate velocity),
as functions of time for individual ionization levels of oxygen for each model in Figure~\ref{fig:mvt}. We show the metallicities of both high velocity, $\bar{\mathcal{M}}_{v>100}$, and low velocity, $\bar{\mathcal{M}}_{v<100}$,
material. The latter includes the `stationary' halo gas, which outweighs the cloud gas. Here, we treat \abundscriptbar\ as being equal to $f_{h}$, because, for our choices of cloud and halo metallicities, they are approximately equal.
The same can be stated for the carbon traces in both models. Additionally, certain ions of carbon follow the same trends as their oxygen counterparts, as we discuss in the following subsections. Therefore, for clarity,
our plot of the metallicity as traced by ions of carbon uses a single model, Model~B (see Figure~\ref{fig:mvt_C}).
\subsubsection{Analysis of High Velocity Gas}
\label{subsubsec:ahvg}
We first consider the high velocity \oxyone\ panel of Figure~\ref{fig:mvt}. At the start of the simulation, the only source of high velocity gas in the domain is the cloud,
including most of the transition zone between the cloud's interior and the halo. Initially the cloud has approximately primordial metallicities. For this reason, the initial
metallicities of the \oxyone\ to \oxynine\ ions in the high velocity gas are $\mathcal{M}_{cl}$=0.001.
As the simulation progresses, hot, highly ionized, solar metallicity, halo plasma mixes with material at the surface of the cloud and with material that has been shed from the cloud, rapidly augmenting the
average metallicities in the formerly-cloud gas. Early in the simulations, most of this material moves at $v_{z}>$100~km~s\superscript{-1}, in which case the entrained metals count toward $\bar{\mathcal{M}}_{v>100}$ rather than
$\bar{\mathcal{M}}_{v<100}$. Although the freshly entrained halo gas had been hot, it cools by sharing thermal energy with the formerly-cloud gas and by radiating photons.
As the metals in the entrained halo gas cool, they recombine, causing $\bar{\mathcal{M}}_{v>100}$ for \oxyone\ to rise approximately monotonically throughout the simulation. By the end of the simulation,
$\bar{\mathcal{M}}_{v>100}$ for \oxyone\ for Model~A has
reached a value slightly greater than 0.1 while in Model~B it has risen to over 0.25. Thus, even the neutral part of the HVC contains $\sim$10\%\ or $\sim$25\%\ halo gas in Models~A and B respectively.
The difference between $\bar{\mathcal{M}}_{v>100}$ for these two cases shows that the 3D simulations are more efficient than the 2D simulations at entraining and cooling halo material.
This is because the clumps are able to fragment along all three Cartesian dimensions in the 3D simulations but are able to fragment along only 2 dimensions (r,z) in Model~A. Model~B is the more realistic of the two models.
For \oxytwo, \oxythree, \oxyfour, \oxyfive, and \oxysix, $\bar{\mathcal{M}}_{v>100}$ experiences a rapid increase during the first few timesteps as halo material mixes with the outermost portions of the cloud. The lower density
at the interface between the cloud and halo material allows this gas to be ablated and mixed more efficiently. But soon the
behavior of $\bar{\mathcal{M}}_{v>100}$ for these ions becomes more complicated. Ionization of former cloud gas reduces $\bar{\mathcal{M}}_{v>100}$ while recombination
of entrained, initially highly ionized halo gas raises $\bar{\mathcal{M}}_{v>100}$ with the result that $\bar{\mathcal{M}}_{v>100}$
vacillates in time about a near-constant value when the cloud is stable and increases to higher values during ablation events. $\bar{\mathcal{M}}_{v>100}$ for \oxytwo, \oxythree, \oxyfour, \oxyfive, and \oxysix\ reaches values of $\sim$0.2 to
$\sim$0.6.
\oxyseven, \oxyeight, and \oxynine\ are the natural charge states of oxygen in the halo gas. The high velocity \oxyseven, \oxyeight, and \oxynine\ ions that appear from the second timestep onwards in our simulations are
due to halo material that has been accelerated by the cloud. In Model~A, $\bar{\mathcal{M}}_{v>100}$ for these ions asymptotically approaches about 0.9. In Model~B, $\bar{\mathcal{M}}_{v>100}$ for
these ions is more complicated. The values rise early on, fall, and later reach a maximum value around 0.7.
The above-noted details notwithstanding, the trends in metallicities are often fairly similar in Models A and B and all values agree to within an order of magnitude. However, of the two models, Model~B has the more realistic simulation
geometry and will be relied upon more heavily in subsequent subsections. The above comparisons also show that the shortness of Model~B's domain (which allows mixed material to flow out of the domain through the upper
boundary as early as t=30~Myrs) does not greatly affect the $\bar{\mathcal{M}}_{v>100}$ results. Most of the material that has left Model~B's domain had slowed significantly before doing so and is therefore not considered in our calculation of $\bar{\mathcal{M}}_{v>100}$.
We plot \abundscriptbar\ values for carbon in Figure~\ref{fig:mvt_C}. Unlike Figure~\ref{fig:mvt} we only plot Model~B. Ions
that mimic each other's trends are as follows: \oxyone\ and \carone, \oxytwo\ and \cartwo, \oxythree\ and \carthree, \oxyfour\ and \carfour. Note that \oxyfive\ and \oxysix\ match \carfive, with values varying by no more than 0.05 between \oxysix\ and \carfive.
All of the remaining ions, \oxyseven, \oxyeight, \oxynine, \carsix, and \carseven, follow the same trend with values increasing in the order \carsix, \oxyseven, \carseven, \oxyeight, \oxynine.
For the majority of the ions of both carbon and oxygen, the average metallicity in the high velocity gas is generally at least 300 times larger than the original metallicity of the cloud, which in our simulations is extremely low.
Initially Model~B's domain (because of the symmetry, this represents $\frac{1}{4}$ of the cloud and surrounding halo) contains almost 13,000~$M_{\odot}$ of material at HVC speeds. Of this HVC gas 97\%\ can be defined as
cool (T$\leq$10\superscript{4}~K). As hot halo gas sweeps past the cloud, material is ablated from the cloud and mixed, resulting in intermediate temperatures, speeds, and metallicities. By 100~Myrs we see that the
total mass of high velocity material in the domain has increased by $\sim$40 solar masses due to capture of halo material and condensation. All of \textcolor{black}{the gas at high velocities} is counted, because the only material that leaves the domain has decelerated
to velocities below 100~km~s\superscript{-1} and is no longer considered high velocity material. Of the high velocity gas 92\%\ is cool while 8\%\ is now warm or hot at 100~Myrs, showing that the
evolution of the cloud happens slowly at first. As of 200~Myrs, the domain contains only about 3400~$M_{\odot}$ of high velocity material, $\sim\frac{1}{4}$ of the initial high velocity content, with $\sim$2600~$M_{\odot}$
as cool high velocity gas and $\sim$800~$M_{\odot}$ as warm or hot high velocity gas. That is, in the span of 200~Myrs, $\frac{3}{4}$ of the initially high velocity gas has decelerated to speeds below 100~km~s\superscript{-1}.
\subsubsection{Analysis of Low Velocity Gas}
\label{subsubsec:alvg}
When we shift our attention to the low velocity gas, we see higher average metallicities in both carbon and oxygen. This effect occurs because the low velocity cut preferentially selects halo gas whose metallicities have been tempered
by the addition of gas that has been ablated from the cloud and decelerated to within 100~km~s\superscript{-1} of the halo's rest frame.
The contribution of the low metallicity cloud gas is most manifest in the \oxyone\ and \carone\ ions, because the original cloud was predominantly neutral while the original halo gas was nearly devoid of neutral material \textcolor{black}{(with the
exception of the cloud-halo interface, which includes a small amount of T$\leq$T\subscript{ISM} gas that has halo metallicity and halo velocity; this material is responsible for the solar metallicity low and intermediate ions in the domain
at t=0~Myrs seen in Figure~\ref{fig:mvt}).} For example,
the value of $\bar{\mathcal{M}}_{v<100}$ for \oxyone\ falls as low as 0.05 (before rising to higher values), which is the lowest $\bar{\mathcal{M}}_{v<100}$ for any of the ions in our simulations, while the value of $\bar{\mathcal{M}}_{v<100}$
for \carone\ falls to 0.1 before rising. The presence of low and moderate metallicity \oxyone\ and \carone-bearing gas at
$v_{z}<$100~km~s\superscript{-1} suggests that if low metallicity\textcolor{black}{, low velocity} gas is found in the halo, shredding of low metallicity HVCs is one possible source.
Mixing between the halo and cloud gas raises the temperature and ionizes formerly cloud gas by the time it has decelerated to a speed below 100~km~s\superscript{-1}. As the neutral material in the slowed gas experiences warmer
temperatures and collisions with faster electrons, it ionizes. This occurs as early as the first simulated timestep and results in significant changes to the metallicities of \oxytwo, \oxythree, and \oxyfour\, as well as
\cartwo, \carthree, and \carfour, by the end of the first timestep. Thus $\bar{\mathcal{M}}_{v<100}$ for \oxytwo\ and \oxythree\, as well as \cartwo\ and \carthree, drop to less than 0.1, and $\bar{\mathcal{M}}_{v<100}$ for \oxyfour\ drops to
less than 0.3 within a few million years of the simulation's start. As we look at more highly ionized material, the metallicities of \oxyfive\ and \carfour\ are lowered less drastically but still fall below the solar value.
After these initial drops, the $\bar{\mathcal{M}}_{v<100}$ for \oxytwo\ through \oxyfive, and \cartwo\ through \carfour, rise and fall with time, due to competition between ionization in the mixed, initially low metallicity cloud gas and
recombination in the initially high metallicity halo gas. Thus, after the first million years, $\bar{\mathcal{M}}_{v<100}$ vacillates between smaller ranges: 0.08 to 0.5, 0.1 to 0.5, 0.2 to 0.5, and 0.3 to 0.7 for \oxytwo, \oxythree, \oxyfour,
and \oxyfive\ respectively and similar values for \cartwo\ through \carfive.
As we consider higher ionization states, we see preferentially more of the influence of the halo gas. Nearly all of the low velocity \carfive, \carsix, \carseven, \oxyseven, \oxyeight, and \oxynine\ originates in halo gas,
which has solar metallicity. Only a small fraction of these ions
originate in ablated cloud gas that has merged with the halo gas. Thus, the gas contributed by the cloud only marginally lowers $\bar{\mathcal{M}}_{v<100}$ for these ions.
\subsection{Velocity Selections}
\label{subsec:velocity}
These trends raise the question of whether clear relationships between metallicity and velocity are predicted and could be seen throughout an HVC or HVC complex. In order to address the first of those issues, we
have subdivided our previous velocity regimes into a larger number of regimes and replotted the metallicity as a function of time for 3 sample ions (\oxyone, \oxyfive\ and \oxyeight) in Model~B. The results are plotted in Figure~\ref{fig:vel_cut}.
Note that the lowest velocity range excludes stationary halo gas. Also note that our highest velocity range (v\subscript{z}$\geq$150~km~s\superscript{-1}) samples the very small amount of gas in the domain that moves downwards faster than the
cloud. Such gas resides in the portions of eddies that move downward relative to the cloud's rest frame. Because of this gas's behavior, the v\subscript{z}$\geq$150~km~s\superscript{-1} curves are sporadic and unusual.
The other curves follow the trend that slower (from the point of view of an observer) gas is usually more metal-rich than faster gas. The degree of metal enhancement can change as a function of time; for the first 140~Myrs the trend
can be seen in all 3 sampled ions.
With the exception of the v\subscript{z}$\geq$150~km~s\superscript{-1} curve, the metallicity tends to increase with decreasing gas velocity (from the point of view of an observer) for the first 140~Myrs. \textcolor{black}{This shows the tight
link between mixing and deceleration as both happen simultaneously after gas is ablated from the head of the cloud.} After 140~Myrs, some of the
velocity curves overlap. In order to determine if this relationship between speed and metallicity is observed in real HVCs, more data for more ions and along more sight lines are required.
\section{Conclusion}
\label{sec:conclusion}
In this paper we present FLASH simulations of HVCs traveling through low density gas in the outer halo. Very early in each simulation, hydrodynamic interactions begin to ablate material from the cloud. The ablated material falls behind
the main body of the cloud, where it begins to create a tail. As additional material sheds from the cloud and decelerates, the tail grows, reaching a length of several kpc within the 200~Myrs of simulational time. Although a velocity
gradient develops from the main body of the cloud (the head in a ``head-tail'' structure) through the tail of shed gas, some of the tail gas still travels at speeds
$\geq$~100~km~s$^{-1}$ and thus is fast enough to meet the definition of high velocity gas. \textcolor{black}{Conversely, if the cloud has a higher metallicity than the halo, mixing would dilute the metal content of the cloud as it traveled
toward the disk or orbited the galaxy.}
Mixing between cloud gas and ambient gas occurs along the entire length of the head-tail structure. But, the tail gas, which has been exposed to the ambient gas for the greatest length of time, is most highly mixed while the head is
least mixed. Not only does mixing take the form of the shredding and deceleration of cloud gas, but it also involves the entrainment and acceleration of halo gas. Thus, mixing boosts the metallicity of HVCs whose original
metallicity was lower than that in the halo.
At any given time, the metallicity of the gas in any given cell in the domain is directly related to the fractions of material that have come from the halo and cloud, respectively. Thus, the metallicity of the gas can be seen as
both a function of mixing and a tracer of previous mixing. Using FLASH hydrodynamic simulations, we estimate and present the degree of mixing as a function of time. We do this separately for the high velocity material and the
low velocity material. We use oxygen and carbon as sample elements and present the mixing fractions and resulting metallicities as functions of the time-dependent ionization states of the oxygen and carbon atoms. Although in our simulations the
original metallicity of the cloud is very low and the metallicity of the halo is much higher, we present \textcolor{black}{an equation} that can be used to determine the metallicity and degree of mixing in cases where the cloud and halo metallicities
differ from those chosen for our particular simulations.
\textcolor{black}{In order to more accurately predict the chemical evolution of any HVC, simulations specifically tailored to that cloud would be required. However, our simulations can make rough estimates for observed clouds.} In our simulations, mixing raises
the metallicity in the least ionized high velocity material from 0.1$\%$ to $\sim25\%$ of solar while raising the metallicity of the most ionized high velocity material from 0.1\%\
to 70\%\ of solar for Model B, our 3-dimensional model, over the span of 200~Myrs. Observations of high velocity neutral and once ionized gas in Complex A, for example, show that this complex currently has subsolar
metallicity in \oxyone\ ranging between 5\%\ and 10\%\ of solar \citep[see][Subsection~4.1]{2001ApJS..136..463W}. Furthermore, Complex~C has also been shown to have subsolar metallicity using \oxyone\ and Si~{\footnotesize I} ranging between
10\%\ and 30\%\ of solar \citep{2011ApJ...739..105S}. \textcolor{black}{If these two HVCs had been interacting with a solar metallicity halo for 200~Myrs} all \oxyone\ in Complex~A and as much as 25\%\ of the \oxyone\ in Complex~C could be
due to halo material that has been mixed into the clouds as the clouds traveled through the Galaxy's halo. If an observer were to take the observationally determined metallicity (for example 30\%\ of solar) and subtract off the
contribution due to mixing with halo gas over a period of 200~Myrs, ($\sim$25\%\ of solar), then the difference (up to 5\%\ of solar) would be attributed to the cloud as it was before undergoing 200~Myrs of mixing. Such a small
metallicity supports the suggestion that some HVCs originated in nearly primordial gas outside of the Galaxy. \textcolor{black}{These are very rough estimates that would be improved upon by more pointed simulational studies whose purpose is to
model these particular clouds. The metallicity of the ambient material could also be adjusted. If the halo were given subsolar metallicity the rate of metal augmentation would decrease. This would in turn mean primordial gas entering our halo would require
more time traversing our halo to reach observed metallicities.}
\textcolor{black}{The simulational results also suggest that if an HVC is observed to have an extremely low metallicity then it must either be located far from a galaxy or have only recently entered that galaxy's metal rich halo. }
Over the 200~Myrs of simulated time, parts of the cloud are ablated, decelerated, and/or mixed with hot halo gas. By the end of the simulation only
about 21\%\ of the initially cool high velocity gas is still cool and traveling with a high velocity in the simulational domain, whereas
the rest has either been heated via mixing with ambient material, such that this gas would no longer be defined as cool, or has decelerated to the point that we would now define it as intermediate or low velocity gas rather than as HVC material.
Considering that the simulations show that some ablated material decelerates to non-HVC velocities, much like in the simulations of \citet*{2009ApJ...698.1485H} and \citet{2010MNRAS.404.1464M}, we suggest that observations of
low metallicities in intermediate and low velocity halo gas may indicate material that has been shed by an HVC.
We find that ablated and decelerated material tends to have undergone more mixing and to have higher metallicities than its faster counterparts\textcolor{black}{, which were more recently shed from the main body of the cloud}. This is most apparent during the first
140~Myrs of the simulation in all ionization states of oxygen. More observations along more sight lines with greater ranges of speed are required to determine if this relationship is observed in real HVCs.
We would like to thank the referee for his or her insightful ideas and suggestions for improvements to this manuscript. We would also like to thank Eric Suter (University of Georgia) and Kara Ponder (University of Georgia) for their
individual contributions and hard work on this project. The software used in this work was in part developed by the DOE-supported ASC/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. The simulations
were performed at the Georgia Advanced Computing Resource Center (GACRC) of the University of Georgia. This work was supported by NASA grants NNX00AD13G and NNX09AD13G, awarded through the Astrophysics Theory Program.
\begin{table}[h]\tiny
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\multicolumn{6}{c}{Simulation Parameters}\\
\hline
\hline
\multicolumn{6}{c}{Model}\\
\hline
\multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{A} & \multicolumn{2}{c|}{B} \\
\hline
\multicolumn{6}{c}{Domain}\\
\hline
\multicolumn{2}{|c|}{Coordinates} & \multicolumn{2}{|c|}{r,z} & \multicolumn{2}{|c|}{x,y,z}\\
\multicolumn{2}{|c|}{Geometry} & \multicolumn{2}{|c|}{Cylindrical} & \multicolumn{2}{|c|}{Cartesian}\\
\multicolumn{2}{|c|}{Symmetries} & \multicolumn{2}{|c|}{About z=0~pc} & \multicolumn{2}{|c|}{Across x=0~pc}\\
\multicolumn{2}{|c|}{}& \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{Across y=0~pc}\\
\cline{3-6}
\multicolumn{2}{|c|}{Physical Size} & \multicolumn{2}{|c|}{0~pc $\leq$ r $\leq$ 1200~pc} & \multicolumn{2}{|c|}{0~pc $\leq$ x $\leq$ 1200~pc}\\
\multicolumn{2}{|c|}{}& \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{0~pc $\leq$ y $\leq$ 1200~pc}\\
\multicolumn{2}{|c|}{}& \multicolumn{2}{|c|}{-1,200~pc~$\leq$ z $\leq$~19,600~pc} & \multicolumn{2}{|c|}{-1,200~pc~$\leq$ z $\leq$~9,600~pc}\\
\cline{3-6}
\multicolumn{2}{|c|}{Simulation Duration} & \multicolumn{4}{c|}{200~Myrs} \\
\hline
\multicolumn{6}{c}{Cloud}\\
\hline
\multicolumn{2}{|c|}{Initial Location} & \multicolumn{2}{|c|}{z=0~pc, r=0~pc} & \multicolumn{2}{|c|}{x=0~pc, y=0~pc, z=0~pc}\\
\hline
\multicolumn{2}{|c|}{Radius} & \multicolumn{4}{c|}{150~pc}\\
\multicolumn{2}{|c|}{Internal Density} & \multicolumn{4}{c|}{Hydrogen: n\subscript{H,cl}=0.1 cm\superscript{-3}~~~Helium: 0.1 n\subscript{H,cl}}\\
\multicolumn{2}{|c|}{Internal Temperature} & \multicolumn{4}{c|}{T\subscript{cl}=10\superscript{3} K}\\
\multicolumn{2}{|c|}{Internal Metallicity} & \multicolumn{4}{c|}{\abundscript\subscript{cl}=0.001} \\
\hline
\multicolumn{6}{c}{Milky Way Gas}\\
\hline
\multicolumn{2}{|c|}{Density} & \multicolumn{4}{c|}{Hydrogen: n\subscript{H,ISM}=10\superscript{-4} cm\superscript{-3}~~~Helium: 0.1 n\subscript{H,ISM}}\\
\multicolumn{2}{|c|}{Temperature} & \multicolumn{4}{c|}{T\subscript{ISM}=10\superscript{6} K}\\
\multicolumn{2}{|c|}{Metallicity} & \multicolumn{4}{c|}{\abundscript\subscript{h}=1.0} \\
\hline
\multicolumn{6}{c}{Motion}\\
\hline
\multicolumn{2}{|c|}{Speed} & \multicolumn{4}{c|}{150~km~s\superscript{-1}}\\
\multicolumn{2}{|c|}{Simulational Rest Frame} & \multicolumn{4}{c|}{Initial cloud's rest frame}\\
\multicolumn{2}{|c|}{Analysis Rest Frame} & \multicolumn{4}{c|}{Halo's rest frame} \\
\hline
\hline
\end{tabular}
\caption{Initial simulation parameters. From top to bottom: domain parameters, initial cloud parameters, initial halo material parameters, and simulated motion within the system.}
\label{tab:initcond}
\end{center}
\end{table}
\begin{figure}[h]
\begin{center}
\epsscale{0.45}
\plotone{densitycloud.eps}\\
\plotone{tempcloud.eps}\\
\plotone{abundcloud.eps}\\
\end{center}
\caption{Plots of log$_{10}$ hydrogen number density (in units of cm\superscript{-3}, top row), log$_{10}$ temperature (in units of K, middle row), and oxygen metallicity
(bottom row) are presented for Model~B. Each image is from a single 2-dimensional cut, along $y$=0. The model is shown at a series
of ages (0, 40, 80, 120, 160, and 200 Myrs) from left to right, thus revealing the evolution of the system over time. \textcolor{black}{A color version of this figure can be found in the on-line version of the manuscript.}}
\label{fig:cloud}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{dens_zoom.eps}
\end{center}
\caption{Plots of log$_{10}$ hydrogen number density for Model~A showing the fragment used in our quantitative example.
The left images, a\&b, show the simulation
at the age of 80~Myrs, while the right images c\&d show the simulation at the age of 90~Myrs. Images a\&c show the entire domain with the red square identifying
and centered on the chosen fragment. Images b\&d are close-up images of the fragment. The red squares in panels b and d are centered on cells chosen for our example quantitative analysis in Section~\ref{subsec:example}. \textcolor{black}{A color version of this figure can be found in the on-line version of the manuscript.}}
\label{fig:denszoom}
\end{figure}
\begin{figure}
\epsscale{0.30}
\plotone{mvt1.eps}\\
\epsscale{0.50}
\plottwo{mvt2.eps}{mvt3.eps}\\
\plottwo{mvt4.eps}{mvt5.eps}\\
\plottwo{mvt6.eps}{mvt7.eps}\\
\plottwo{mvt8.eps}{mvt9.eps}\\
\caption{Average metallicity (\abundscriptbar) and halo gas fractions ($f_{h}$) as functions of time for each ionization state of oxygen. Model~A is represented by the red dot-dash line while Model~B is represented by the solid black line. High velocity
material for each model is represented by the thicker lines and low velocity material by the thinner counterpart. \textcolor{black}{A color version of this figure can be found in the on-line version of the manuscript.}}
\label{fig:mvt}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=9.4cm]{mvt_C.eps}
\end{center}
\caption{Average metallicity (\abundscriptbar) and halo gas fraction ($f_{h}$) as functions of time for each ionization state of carbon. Only Model~B is plotted as all models mimic the trends of oxygen with values agreeing
within 0.050. HVC material is plotted using thicker lines while the low velocity material is plotted using the thinner counterparts. Individual ionization states are plotted using different shades from black to light gray (different colors in the online version)
and with varying line styles. \textcolor{black}{A color version of this figure can be found in the on-line version of the manuscript.}}
\label{fig:mvt_C}
\end{figure}
\begin{figure}
\epsscale{0.5}
\plotone{mvt_cut1.eps}\\
\epsscale{1.0}
\plottwo{mvt_cut5.eps}{mvt_cut8.eps}
\caption{Average metallicity (\abundscriptbar) and halo gas fractions ($f_{h}$) as functions of time for \oxyone, \oxyfive, and \oxyeight\ plotted using various velocity selections. For clarity we only show Model~B and separate the velocity ranges
with differing line styles. \textcolor{black}{A color version of this figure can be found in the on-line version of the manuscript.}}
\label{fig:vel_cut}
\end{figure}
\section{Introduction}
The LHCb experiment at the LHC has been taking pp collision data in 2011 and 2012 at $\sqrt{s}=$7 and 8 TeV respectively, integrating a luminosity in
excess of 3~fb$^{-1}$. The large (several hundred microbarns) production
cross sections and efficient trigger strategies make it possible to collect unprecedentedly large data sets for charm and beauty decays.
As an example, about 1.2 million $\overline{\mathrm{B}}^0 \rightarrow \mathrm{D}^{*+} \mu^- {\bar{\nu}}_{\mu}$ decays~\footnote{charge-conjugation
is always implied, unless specified otherwise} have been reconstructed
in 1 fb$^{-1}$. In the vast physics program which can be exploited with these large datasets, semileptonic decays play an important role.
\section{Semileptonic B Decays at LHCb}
Semileptonic B decays were first studied at LHCb in order to
determine the $b\overline{b}$ production cross section~\cite{bbxsec}, the performance of flavour tagging algorithms~\cite{flavtag}, and the
various b-hadron production fractions at the LHC~\cite{prodfrac}. More recently, semileptonic decays of $\mathrm{B}_s$ mesons were used to test
CP violation in neutral meson mixing, resulting in the most precise measurement of the semileptonic asymmetry in the $\mathrm{B}_s$ sector~\cite{asls}. Besides these studies,
LHCb can contribute to the understanding of currently open issues in semileptonic decays of B, $\mathrm{B}_s$ and $\Lambda_b$ hadrons,
including the composition of the inclusive semileptonic widths in terms of exclusive decays, measurements of form factors and determinations of
CKM parameters $|V_{ub}|$ and $|V_{cb}|$.
Semileptonic B decays give an experimentally clean signature, due to a charm hadron and a muon originating from a common vertex. Requiring
the impact parameter of the charmed hadron with respect to the primary vertex to be significantly different from zero suppresses the copious background
of charm produced promptly in the $pp$ collision. The b hadron species are determined from the reconstructed charm hadron, {\it i.e.} samples with
D$^0$, D$^{+}$, D$_s^+$, $\Lambda_c^+$ in the final states originate mainly from B$^-$, $\mathrm{B}^0$, $\mathrm{B}^0_s$ and $\Lambda_b^0$
decays, respectively. {\it Cross-feeds} due to {\it e.g.} $\mathrm{\overline{B}}^0_s \rightarrow (\mathrm{D}_s^{**} \rightarrow \mathrm{DK}) X \mu^- \nu$ or
$\mathrm{\overline{B}}^{0,+} \rightarrow (\mathrm{D}_s \mathrm{K}) X \mu^- \nu$ (and similar baryonic decays), can be estimated by reconstructing final
states with a D$^0$ and a charged kaon or proton, by using other available measurements and assuming {\it e.g.} isospin conservation.
The production fractions of B$^0_s$ and $\Lambda_b$, $f_s$ and $f_{\Lambda_b}$, are measured relative to the sum of the
production fractions of B$^0$ and B$^+$ mesons, $f_u+f_d$. Therefore, the most abundant
B$^+$--B$^0$ cross-feed, due to $\overline{\mathrm{B}}^0 \rightarrow \mathrm{D}^{*+} \mu^- {\bar{\nu}}_{\mu}$ decays followed by
D$^{*+} \rightarrow $D$^0 \pi^+$, is avoided.
Relative production fractions are determined in intervals of pseudorapidity ($\eta$) and transverse momentum
($p_T$) of the charm-muon pair, using a data sample of 3 pb$^{-1}$~\cite{prodfrac}.
Table~\ref{tab:fracyields} shows the obtained signal and background yields.
The results are:
\begin{eqnarray*}
\frac{f_s}{f_u+f_d} & = & 0.134 \pm 0.004 ^{+0.011}_{-0.010} \\
\frac{f_{\Lambda_b}}{f_u+f_d} (p_T) & = & (0.404 \pm 0.017 \pm 0.027 \pm 0.105) \times \\
& & \times [1-(0.031 \pm 0.004 \pm 0.003) p_T (\mathrm{GeV}) ] \\
\end{eqnarray*}
where units are given with $c=1$, the first and second uncertainties are respectively of statistical and systematic origin and the third one is due to the limited knowledge
of the $\Lambda_c\rightarrow pK\pi$
branching fraction. The latter dominates the measurement of the $\Lambda_b$ production fraction, whereas systematic uncertainties due to the modeling
of ${\mathrm{B}}_s$ semileptonic decays and $\mathrm{D}^+, \mathrm{D}_s$ branching fractions dominate the measurement of the $\mathrm{B}_s$ production fraction.
No dependence on $p_T$ is observed in the $\mathrm{B}_s$ production fraction, which is found to be in agreement with previous determinations at LEP and about
one standard deviation lower than the Tevatron measurement. For $\Lambda_b$, the production fraction is not flat over $p_T$, in agreement with similar results
from the Tevatron in the same $p_T$ region. A somewhat smaller fraction had been measured by the LEP experiments, on a harder $p_T$ spectrum. A detailed discussion
and interpretation of these results is given in Ref.~\cite{HFAG2012}.
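As a simple numerical illustration of this $p_T$ dependence, the central values above give $f_{\Lambda_b}/(f_u+f_d)\,\simeq\,0.404\times[1-0.031\times10]\,\simeq\,0.28$ at $p_T=10$~GeV, about 30\%\ below the $p_T\to0$ value.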
\begin{table}[b]
\begin{center}
\begin{tabular}{lcccc} \hline
Final state & Signal & \multicolumn{3}{c}{Background sources:} \\
& & Prompt charm & Combinatorial & $\Lambda_c$ reflections \\ \hline
D$^0 \mu \nu X$ & 27666$\pm$ 167 & 695$\pm$43 & 1492$\pm$30 & \\
D$^+ \mu \nu X$ & 9257$\pm$ 110 & 362$\pm$34 & 1150$\pm$22 & \\
D$_s \mu \nu X$ & 2192$\pm$ 64 & 63$\pm$16 & 985$\pm$145 & 387$\pm$132\\
$\Lambda_c \mu \nu X$ & 3028$\pm$ 112 & 43$\pm$17 & 589$\pm$27 & \\ \hline
\end{tabular}
\caption{Signal and background yields for the final states analyzed in the measurement of b-hadron production fractions.}
\label{tab:fracyields}
\end{center}
\end{table}
As a by-product of these measurements, the D$^0$K final state was used to search for P-wave D$_s$ mesons, yielding the first observation of the
semileptonic decay $\overline{\mathrm{B}}_s \rightarrow \mathrm{D}_{s2}^{*+} X \mu \overline{\nu}$ and the most precise measurement of the
$\overline{\mathrm{B}}_s \rightarrow \mathrm{D}_{s1}^{+} X \mu \overline{\nu}$ decay.
\section{Future Prospects}
Measurements of the CKM matrix elements $|V_{cb}|$ and $|V_{ub}|$ in exclusive semileptonic decays imply the reconstruction,
in the b hadron rest frame, of observables such as the squared invariant mass of the lepton pair ($q^2$). This difficult task at hadron colliders
can be accomplished by exploiting the separation between primary and secondary vertices at LHCb. Therefore, the B flight direction vector can be
determined and the neutrino momentum can be measured with a two-fold ambiguity. The resulting $q^2$ resolution (with a core of about
0.4 GeV$^2$)~\cite{urquijo} is similar to that observed at the B factories. The distributions of $q^2$ and $m(\mathrm{D}_s+\mu)$ in a sample of
$\overline{\mathrm{B}}_s \rightarrow \mathrm{D}_{s}^{+} X \mu \overline{\nu}$ decays are shown in Figure~\ref{fig:dsq2}, where the different contributions,
also shown, can be statistically separated~\cite{prodfrac}. Similar results have been obtained on a sample with $\Lambda_c$ baryons in the final state (see
Figure~\ref{fig:Lcq2}). The ultimate goals of these studies will be the determination of form factors, $|V_{cb}|$ and $|V_{ub}|$ in exclusive
B$_s$ and $\Lambda_b$ decays.
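To make the two-fold ambiguity explicit, consider the following schematic kinematics (in units with $c=1$). Let $p_\parallel$ and $p_\perp$ be the components of the visible (charm plus muon) momentum along and transverse to the measured B flight direction, and let $E_{\rm vis}$ and $m_{\rm vis}$ be the visible energy and mass. Since the B momentum lies along the flight direction, the neutrino must balance $p_\perp$, and imposing the B mass constraint yields a quadratic equation for the neutrino longitudinal momentum $q$:
\begin{equation*}
(E_{\rm vis}^2-p_\parallel^2)\,q^2-2\alpha\,p_\parallel\,q+(E_{\rm vis}^2\,p_\perp^2-\alpha^2)\,=\,0,
\qquad \alpha\,=\,\frac{m_{\rm B}^2-m_{\rm vis}^2-2p_\perp^2}{2},
\end{equation*}
whose two real solutions, when they exist, give the two neutrino (and hence B) momentum candidates mentioned above.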
\begin{figure}[htb]
\begin{center}
\epsfig{file=dsq2.pdf,width=0.99\textwidth}
\caption{Projections of a two-dimensional fit to the $q^2$ and $m(D_s+\mu)$ distributions
of semileptonic decays including a $D_s^+$ meson. The different components
are stacked: the background is represented by a black dot-dashed line, $D_s^+$ by a red dashed
line, $D^{*+}$ by a blue dash-double dotted line and P-wave D mesons by a green dash-dotted line.}
\label{fig:dsq2}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfig{file=Lcq2.pdf,width=0.99\textwidth}
\caption{Projections of a two-dimensional fit to the $q^2$ and $m(\Lambda_c+\mu)$ distributions of semileptonic decays including a $\Lambda_c$ baryon.
The different components are stacked: the dotted line represents the combinatoric background, the bigger dashed line (red) represents the
$\Lambda_c^+ \mu^- \bar{\nu}$ component, the smaller dashed line (blue) the $\Lambda_c(2595)^+$, and the solid line represents the $\Lambda_c(2625)^+$ component.}
\label{fig:Lcq2}
\end{center}
\end{figure}
LHCb has also good potential to measure $\overline{\mathrm{B}} \rightarrow D^{(*)} \tau^- \overline{\nu}$ decays, which recently received great
attention after BaBar measured branching fractions which exclude the Standard Model expectations at the 3.4 sigma level~\cite{bbrDtaunu}.
However, the neutrino reconstruction method
outlined above is much more difficult to apply in purely leptonic tau decays, due to the presence of two additional neutrinos. Therefore, three-prong decays such as
$\tau^{\pm} \rightarrow \pi^+ \pi^- \pi^{\pm} \nu_{\tau}$ have been exploited. In this case, the neutrino reconstruction procedure can be applied and
the tau momentum can be determined with a two-fold ambiguity, which becomes a four-fold ambiguity on the B meson momentum. Moreover, care must be taken in order
to avoid non-physical solutions due to the finite momentum and vertex resolutions. A recent measurement of
$\mathrm{B}_{(s)} \rightarrow \mathrm{D}_{(s)} \pi \pi \pi$ decays~\cite{BDpipipi} indicates that similar decays with high track multiplicity can be
reconstructed in LHCb, with low combinatorial background.
The long-standing puzzle of the composition of the inclusive semileptonic rate in terms of the exclusive decays can also be investigated, by
reconstructing decays into P-wave charm mesons as well as into radial excitations.
An investigation of three-body decays of these particles, already in progress in hadronic B decays
~\cite{BDpipipi}, will also help in normalizing their corresponding semileptonic B branching fractions,
which are typically computed by assuming that P-wave charm mesons decay into two-body final states only.
\section{Conclusions}
In conclusion, semileptonic B/B$_s$/$\Lambda_b$ decays are an important part of the LHCb physics program. They have been successfully used to measure the
$\mathrm{b\bar{b}}$ cross section and b hadron production fractions at the LHC, and to establish the most accurate limit on CP violation in the mixing of
B$_s$ mesons. Studies of B$_s$ and $\Lambda_b$ decays have been performed, which will eventually lead to novel determinations of the $|V_{cb}|$ and $|V_{ub}|$
CKM matrix elements, which can help to clarify the existing tension between measurements with inclusive and exclusive semileptonic B decays.
A measurement of $\overline{\mathrm{B}} \rightarrow D^{(*)} \tau^- \overline{\nu}$ decays also seems feasible at LHCb, although neutrino reconstruction will be challenging.
Precise measurements of semileptonic decays into higher D meson excitations will make it possible to reduce systematic errors on other measurements,
and possibly solve long-standing puzzles in semileptonic B decays.
\section{Introduction}
\subsection*{Motivation} This paper is a contribution to the new emerging field of convex semi-algebraic geometry,
and its purpose is threefold: First we show that the
moment approach for global polynomial optimization proposed in \cite{lasserre1},
and based on semidefinite programming (SDP), is consistent as it simplifies
and/or has better convergence properties when
solving convex problems.
In other words, the SDP moment approach somehow ``recognizes'' convexity,
a highly desirable feature for a general purpose
method because, in principle, convex problems should be easier to solve.
We next review some recent results (and provide a new one)
on the representation of convex basic semi-algebraic sets
by linear matrix inequalities which show how convexity makes it possible to derive
relatively simple and {\it explicit} semidefinite representations. In doing so
we also provide a {\it certificate} of convexity for $\mathbf{K}$ when its defining polynomials are not convex.
Finally, we consider the important Jensen's inequality in convex analysis.
When restricting its application to a class of convex polynomials,
we provide an extension to a class of linear functionals that are not
necessarily probability measures.
To do so, we use (and sometimes extend) some recent results of the author
\cite{lasserre-sdr, lasserre-convex} and Helton and Nie \cite{HN1}.
We hope to convince the reader that convex semi-algebraic geometry
is indeed a very specific subarea of real algebraic geometry which
should deserve more attention from both the optimization and real algebraic
geometry research communities.
\subsection*{Background} I. Relatively recent results in the theory of moments and its dual theory of
positive polynomials have been proved useful in polynomial optimization as they provide the
basis of a specific convergent numerical approximation scheme. Namely, one can define a hierarchy of
semidefinite relaxations (in short SDP-relaxations) of the original optimization problem whose associated monotone sequence of optimal values converges to the global optimum.
For a more detailed account
of this approach, the interested reader is referred to e.g. Lasserre \cite{lasserre1,lasserre2}, Parrilo \cite{parrilo}, Schweighofer \cite{markus}, and the many references therein.
Remarkably, practice seems to reveal that convergence is often fast and even finite.
However, the size of the SDP-relaxations grows rapidly with the rank in the hierarchy;
typically the $r$-th SDP-relaxation in the hierarchy has $O(n^{2r})$ variables and semidefinite matrices
of $O(n^{r})$ sizes (where $n$ is the number of variables in the original problem).
On the other hand, it is well-known that a large class of convex optimization problems can be solved efficiently; see e.g. Ben Tal and Nemirovski \cite{bental}.
Therefore, as the SDP-based moment approach is dedicated to solving difficult non convex (most of
the time NP-hard) problems, it should have the highly desirable feature
to somehow {\it recognize} ``easy'' problems like convex ones.
That is, when applied to such easy problems it should show some significant
improvement or a particular nice behavior not necessarily valid in the general case.
Notice that this is {\it not} the case of the LP-based moment-approach
described in \cite{lasserre2,lasserre3} for which only asymptotic (and {\it not} finite) convergence occurs in general
(and especially for convex problems), a rather annoying feature.
However, for SDP-relaxations, some results of \cite{lasserre-convex} already show that indeed convexity helps as one provides specialized
representation results for convex polynomials that are nonnegative on a basic semi-algebraic set.
II. Next, in view of the
potential of semidefinite programming techniques,
an important issue is the characterization of convex sets that are semidefinite representable
(in short called SDr sets). A SDr set $\mathbf{K}\subset\mathbb{R}^n$ is the projection of
a set defined by linear matrix inequalities (LMIs). That is,
\[\mathbf{K}:=\{x\in\mathbb{R}^n\::\:\exists\, y\in\mathbb{R}^s\:{\rm s.t.} \quad A_0+\sum_{i=1}^nx_i\,A_i+
\sum_{j=1}^sy_j\,B_j\succeq0\}\]
for some real symmetric matrices $(A_i,B_j)$ (and where $A\succeq0$ stands for $A$ is positive semidefinite).
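For instance, the closed unit disk is a SDr set with no lifting variables ($s=0$):
\[\{x\in\mathbb{R}^2\::\:x_1^2+x_2^2\leq1\}\,=\,\left\{x\in\mathbb{R}^2\::\:\begin{pmatrix}1+x_1&x_2\\x_2&1-x_1\end{pmatrix}\succeq0\right\},\]
because a $2\times2$ symmetric matrix is positive semidefinite if and only if its trace (here $2$) and its determinant (here $1-x_1^2-x_2^2$) are both nonnegative.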
For more details, the interested reader is referred to
Ben Tal and Nemirovski \cite{bental},
Lewis et al. \cite{lewis}, Parrilo \cite{parrilo2},
and more recently, Chua and Tuncel \cite{chua},
Helton and Nie \cite{HN1,HN2}, Henrion \cite{didier} and Lasserre \cite{lasserre-sdr}.
For compact basic semi-algebraic sets
\begin{equation}
\label{setk}
\mathbf{K}\,:=\{x\in\mathbb{R}^n\::\:g_j(x)\geq0,\quad j=1,\ldots,m\:\},
\end{equation}
recent results of Helton and Nie \cite{HN1,HN2} and the author \cite{lasserre-sdr} provide sufficient conditions
on the defining polynomials $(g_j)\subset\mathbb{R}[X]$ for the convex hull
${\rm co}\,(\mathbf{K})$ ($\equiv\mathbf{K}$ if $\mathbf{K}$ is convex) to be SDr.
Again, an interesting issue is to analyze whether
convexity of $\mathbf{K}$ (with or without concavity of the defining polynomials $(g_j)$)
provides some additional insights and/or simplifications.
Another interesting issue is how to detect whether a basic semi-algebraic set $\mathbf{K}$ is convex, or equivalently, how to obtain an algebraic {\it certificate} of convexity of $\mathbf{K}$ from its defining polynomials $(g_j)$.
By certificate we mean a mathematical statement that
obviously implies convexity of $\mathbf{K}$, can be checked
numerically and does not require infinitely many tests. So far, and to the best of our knowledge, such a certificate does not exist.
III. The celebrated Jensen's inequality is an important result in convex analysis which
states that $E_\mu(f(x))\geq f(E_\mu(x))$ for a convex function $f:\mathbb{R}^n\to\mathbb{R}$ and a probability measure $\mu$ with $E_\mu(x)<\infty$. A third goal of this paper
is to analyze whether, when restricted to a certain class of convex polynomials,
Jensen's inequality can be extended to a class of linear functionals larger than the class
of probability measures.
\subsection*{Contribution}
Concerning issue I: We first recall two previous results proved in \cite{lasserre-convex}:
(a) the cone of convex SOS polynomials is dense (for the $l_1$-norm of coefficients) in the cone of nonnegative convex polynomials,
and (b) a convex Positivstellensatz
for convex polynomials nonnegative on $\mathbf{K}$ (a specialization of Putinar's Positivstellensatz).
We then analyze the role of convexity
for the polynomial optimization problem
\begin{equation}
\label{pbp}
\mathbf{P}:\quad f^*\,=\,\displaystyle\min_x\: \{\:f(x)\::\:x\in\mathbf{K}\:\}
\end{equation}
with $\mathbf{K}$ as in (\ref{setk}), and show that indeed convexity helps and makes the
SDP-relaxations more efficient. In particular, when $\mathbf{K}$ is convex and Slater's condition\footnote{Slater's condition holds for $\mathbf{K}$ in (\ref{setk}) if
for some $x_0\in\mathbf{K}$, $g_j(x_0)>0$, $j=1,\ldots,m$.} holds,
by using some recent results of Helton and Nie \cite{HN1}, we show that
(i) If the polynomials $f,(-g_j)$ are all convex and
$\nabla^2f$ is positive definite (and so $f$ is strictly convex) on $\mathbf{K}$,
then the hierarchy of SDP-relaxations has {\it finite} convergence.
(ii) If $f$ and $(-g_j)$ are all
SOS-convex (i.e. their Hessian is a SOS matrix polynomial),
then $\mathbf{P}$ reduces to solving a {\it single} SDP whose index in
the hierarchy is readily available.\\
Concerning II: Under certain sufficient conditions on the $(g_j)$
(typically some second order positive curvature conditions)
Helton and Nie \cite{HN1,HN2} have proved that ${\rm co}\,(\mathbf{K})$ (or $\mathbf{K}$ if convex) has
a semidefinite representation that uses Schm\"udgen or Putinar
SOS representation of polynomials positive on $\mathbf{K}$; see \cite{HN1,lasserre-convex}.
Yet, in general its dimension depends on an unknown degree parameter in
Schm\"udgen (or Putinar) SOS representation.
Our contribution is to provide a new sufficient condition for existence of a SDr
when $\mathbf{K}$ is compact with nonempty interior and its boundary satisfies
some nondegeneracy assumption.
It translates the geometric property of convexity of $\mathbf{K}$ into a
SOS Putinar representation of some appropriate polynomial
obtained from each $g_j$. When satisfied, this representation
provides an algebraic certificate of convexity for $\mathbf{K}$ and it
is almost necessary in the sense that it always holds true
when relaxed by an arbitrary $\epsilon>0$.
It also contains as special cases Helton and Nie's \cite{HN1} sufficient conditions
of SOS-convexity or strict convexity on $\partial\mathbf{K}$ of the $-g_j$'s, and leads to an explicit semidefinite representation of $\mathbf{K}$. We also provide a more general algebraic certificate based on
Stengle's Positivstellensatz, which is however more complex and heavier to implement,
and so not very practical. In practice both certificates are
obtained by solving a semidefinite program. Therefore, because of unavoidable
numerical inaccuracies, the certificate is valid only up to machine precision.
Concerning III, we prove that when restricting its application to the subclass of
SOS-convex polynomials, Jensen's inequality can be extended
to all linear functionals $L_\mathbf{y}$ (with $L_\mathbf{y}(1)=1$) in the
dual cone of SOS polynomials, hence {\it not}
necessarily probability measures.
Some of the results already obtained in \cite{HN1,lasserre-sdr} and
in the present paper strongly suggest that the class of SOS-convex polynomials introduced in Helton and Nie \cite{HN1} is particularly nice and should deserve more attention.
\section{Notation, definitions and preliminary results}
Let $\mathbb{R}[X]$ be the ring of real polynomials in the variables $X=(X_1,\ldots,X_n)$, and let $\Sigma^2[X]\subset\mathbb{R}[X]$ be the subset of sums of squares (SOS) polynomials.
Denote by $\mathbb{R}[X]_d\subset\mathbb{R}[X]$ the set of
polynomials of degree at most $d$, which forms a vector space of dimension
$s(d)={n+d\choose d}$.
If $f\in\mathbb{R}[X]_d$, write
$f(X)=\sum_{\alpha\in\mathbb{N}^n}f_\alpha X^\alpha$ in
the usual canonical basis $(X^\alpha)$, and
denote by $\mathbf{f}=(f_\alpha)\in\mathbb{R}^{s(d)}$ its vector of coefficients. Also write
$\Vert f\Vert_1 \:(=\Vert\mathbf{f}\Vert_1:=\sum_\alpha \vert f_\alpha\vert$) the $l_1$-norm of
$f$. Finally, denote by $\Sigma^2[X]_d\subset\Sigma^2[X]$ the subset of
SOS polynomials of degree at most $2d$.
We use the notation
$X$ for the variable of a polynomial $X\mapsto f(X)$ and $x$ when $x$ is a point of $\mathbb{R}^n$,
as for instance in $\{x\in\mathbb{R}^n \::\: f(x)\geq0\}$.
\subsection*{Moment matrix} With $\mathbf{y}=(y_\alpha)$ being a sequence indexed in the canonical basis
$(X^\alpha)$ of $\mathbb{R}[X]$, let $L_\mathbf{y}:\mathbb{R}[X]\to\mathbb{R}$ be the linear functional
\[f\quad (=\sum_{\alpha}f_{\alpha}\,X^\alpha)\quad\mapsto\quad
L_\mathbf{y}(f)\,=\,\sum_{\alpha}f_{\alpha}\,y_{\alpha},\]
and let $M_d(\mathbf{y})$ be the symmetric matrix with rows and columns indexed in
the canonical basis $(X^\alpha)$, and defined by:
\[M_d(\mathbf{y})(\alpha,\beta)\,:=\,L_\mathbf{y}(X^{\alpha+\beta})\,=\,y_{\alpha+\beta},\quad\alpha,\beta\in\mathbb{N}^n_d\]
with $\mathbb{N}^n_d:=\{\alpha\in\mathbb{N}^n\::\:\vert \alpha\vert \:(=\sum_i\alpha_i)\leq d\}$.
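For instance, with $n=1$ and $d=2$, and rows and columns indexed by $(1,X,X^2)$,
\[M_2(\mathbf{y})\,=\,\begin{pmatrix}y_0&y_1&y_2\\y_1&y_2&y_3\\y_2&y_3&y_4\end{pmatrix},\]
a Hankel matrix; indeed, $M_d(\mathbf{y})\succeq0$ whenever $\mathbf{y}$ is the moment sequence of some measure $\mu$, since then $\mathbf{q}^TM_d(\mathbf{y})\mathbf{q}=L_\mathbf{y}(q^2)=\int q^2d\mu\geq0$ for every $q\in\mathbb{R}[X]_d$.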
\vspace{0.2cm}
\subsection*{Localizing matrix} Similarly, with $\mathbf{y}=(y_{\alpha})$
and $g\in\mathbb{R}[X]$ written
\[X\mapsto g(X)\,=\,\sum_{\gamma\in\mathbb{N}^n}g_{\gamma}\,X^\gamma,\]
let $M_d(g\,\mathbf{y})$ be the symmetric matrix with rows and columns indexed in
the canonical basis $(X^\alpha)$, and defined by:
\[M_d(g\,\mathbf{y})(\alpha,\beta)\,:=\,L_\mathbf{y}\left(g(X)\,X^{\alpha+\beta}\right)\,=\,\sum_{\gamma}g_{\gamma}\,
y_{\alpha+\beta+\gamma},\]
for every $\alpha,\beta\in\mathbb{N}^n_d$.
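For instance, with $n=1$, $X\mapsto g(X)=1-X^2$ and $d=1$,
\[M_1(g\,\mathbf{y})\,=\,\begin{pmatrix}y_0-y_2&y_1-y_3\\y_1-y_3&y_2-y_4\end{pmatrix},\]
and $M_d(g\,\mathbf{y})\succeq0$ whenever $\mathbf{y}$ is the moment sequence of a measure $\mu$ supported on $\{x\::\:g(x)\geq0\}$, since then $\mathbf{q}^TM_d(g\,\mathbf{y})\mathbf{q}=L_\mathbf{y}(g\,q^2)=\int g\,q^2d\mu\geq0$.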
\vspace{0.2cm}
\subsection*{Putinar Positivstellensatz}
Let $Q(g)\subset\mathbb{R}[X]$ be the quadratic module generated by the polynomials $(g_j)\subset\mathbb{R}[X]$, that is,
\begin{equation}
\label{qg}
Q(g)\,:=\,\left\{\sigma_0+\sum_{j=1}^m\sigma_j\,g_j\::
\quad(\sigma_j)\subset\Sigma^2[X]\:\right\}.\end{equation}
\begin{ass
\label{assput}
$\mathbf{K}\subset\mathbb{R}^n$ is a compact basic semi-algebraic set defined as in (\ref{setk}) and
the quadratic polynomial $X\mapsto M-\Vert X\Vert^2$ belongs to $Q(g)$.
\end{ass}
\vspace{0.2cm}
Assumption \ref{assput} is not very restrictive. For instance, it holds if
every $g_j$ is affine (i.e., $\mathbf{K}$ is a convex polytope) or if the level
set $\{x\::\:g_j(x)\geq0\}$ is compact for some $j\in\{1,\ldots,m\}$. In addition,
if $M-\Vert x\Vert\geq0$ for all $x\in\mathbf{K}$, then it suffices to add the redundant quadratic constraint $M^2-\Vert x\Vert^2\ge0$ to the definition (\ref{setk}) of $\mathbf{K}$ and Assumption \ref{assput} will hold true.
\begin{theorem}[Putinar's Positivstellensatz \cite{putinar}]
\label{thput}
Let Assumption \ref{assput} hold.
If $f\in\mathbb{R}[X]$ is (strictly) positive on $\mathbf{K}$, then $f\in Q(g)$. That is:
\begin{equation}
\label{putinarrep}
f\,=\,\sigma_0+\sum_{j=1}^m\sigma_j\, g_j,\end{equation}
for some SOS polynomials $(\sigma_j)\subset\Sigma^2[X]$.
\end{theorem}
\subsection{A hierarchy of semidefinite relaxations (SDP-relaxations)}
Let $\mathbf{P}$ be the optimization problem (\ref{pbp}) with $\mathbf{K}$ as in (\ref{setk})
and let $r_j=\lceil ({\rm deg}\,g_j)/2\rceil$, $j=1,\ldots,m$.
With $f\in\mathbb{R}[X]$ and $2r\geq \max[{\rm deg}\,f,\:\max_j2r_j]$, consider the hierarchy of semidefinite relaxations $(\mathbf{Q}_r)$ defined by:
\begin{equation}
\label{sdpprimal}
\mathbf{Q}_r:\:\quad\left\{\begin{array}{ll}
\displaystyle\inf_\mathbf{y}&L_\mathbf{y}(f)\\
\mbox{s.t.}&M_r(\mathbf{y})\,\succeq0\\
&M_{r-r_j}(g_j\,\mathbf{y})\,\succeq0,\qquad j=1,\ldots,m\\
&y_0\,=1
\end{array}\right.,
\end{equation}
with optimal value denoted by $\inf\mathbf{Q}_r$. One says that $\mathbf{Q}_r$ is solvable if it has an optimal solution
(in which case one writes $\inf\mathbf{Q}_r=\min\mathbf{Q}_r$).
The dual of $\mathbf{Q}_r$ reads
\begin{equation}
\label{sdpdual}
\mathbf{Q}^*_r:\:\quad\left\{\begin{array}{ll}
\displaystyle\sup&\lambda\\
\mbox{s.t.}&f-\lambda\,=\,\sigma_0+\displaystyle\sum_{j=1}^m\sigma_j\,g_j\\
&\sigma_j\in\Sigma^2[X],\quad j=0,1,\ldots,m\\
&{\rm deg}\,\sigma_0,\,{\rm deg}\,\sigma_j+{\rm deg}\,g_j\leq 2r,\quad j=1,\ldots,m
\end{array}\right.,
\end{equation}
with optimal value denoted by $\sup\mathbf{Q}^*_r$ (or $\max\mathbf{Q}^*_r$ if the $\sup$ is attained).\\
By weak duality $\sup\mathbf{Q}^*_r\leq\inf\mathbf{Q}_r$ for every $r\in\mathbb{N}$ and
under Assumption \ref{assput},
$\inf\mathbf{Q}_r\uparrow f^*$ as $r\to\infty$. For a more detailed account see e.g.
\cite{lasserre1}.
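To make the hierarchy concrete, the following minimal sketch (assuming the CVXPY library; the toy problem is chosen purely for illustration) solves $\mathbf{Q}_r$ in (\ref{sdpprimal}) with $r=2$ for the univariate problem $\min\{x^2-x\::\:1-x^2\geq0\}$, whose optimal value is $f^*=-1/4$ at $x^*=1/2$; the relaxation happens to be exact already at this order.
\begin{verbatim}
# Minimal sketch (CVXPY assumed) of Q_r for f = x^2 - x, g = 1 - x^2.
import cvxpy as cp

r = 2
y = cp.Variable(2 * r + 1)                   # moments y_0, ..., y_4
M = cp.Variable((r + 1, r + 1), PSD=True)    # encodes M_r(y)
Mg = cp.Variable((r, r), PSD=True)           # encodes M_{r-1}(g y)

cons = [y[0] == 1]
for a in range(r + 1):
    for b in range(r + 1):
        cons += [M[a, b] == y[a + b]]                  # Hankel structure
for a in range(r):
    for b in range(r):
        cons += [Mg[a, b] == y[a + b] - y[a + b + 2]]  # g(x) = 1 - x^2

prob = cp.Problem(cp.Minimize(y[2] - y[1]), cons)      # L_y(x^2 - x)
prob.solve()
print(prob.value, y.value[1])   # approximately -0.25 and 0.5
\end{verbatim}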
\subsection{Convexity and SOS-convexity}
We first briefly recall basic facts on a multivariate convex function.
If $C\subseteq\mathbb{R}^n$ is a nonempty convex set, a function $f:C\to\mathbb{R}$ is convex on $C$ if and only if
\[f(\lambda x+(1-\lambda)y)\leq\,\lambda f(x)+(1-\lambda)f(y),\qquad \forall\,\lambda\in (0,1),\:x,y\in C.\]
Similarly, $f$ is strictly convex on $C$ if and only if
the above inequality is strict for every $x,y\in C$, $x\neq y$, and all $\lambda\in (0,1)$.
If $C\subseteq\mathbb{R}^n$ is an open convex set and $f$ is twice differentiable on $C$, then $f$ is convex on $C$ if and only if
its Hessian $\nabla^2f$ is positive semidefinite on $C$ (denoted $\nabla^2f\succeq0$ on $C$).
Finally, if $\nabla^2f$ is positive definite on $C$ (denoted $\nabla^2f\succ0$ on $C$) then $f$ is strictly convex on $C$.
\subsection*{SOS-convexity}
Helton and Nie \cite{HN1} have introduced the following interesting subclass of
convex polynomials, called SOS-convex polynomials.\\
\begin{defn}[Helton and Nie \cite{HN1}]
\label{def1}
A polynomial $f\in\mathbb{R}[X]_{2d}$ is said to be SOS-convex if
$\nabla^2f$ is SOS, that is, $\nabla^2f=LL^T$ for some real matrix polynomial
$L\in\mathbb{R}[X]^{n\times s}$ (for some $s\in\mathbb{N}$).
\end{defn}
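\vspace{0.2cm}
For instance (a simple illustrative example, not taken from \cite{HN1}), $X\mapsto f(X)=X_1^4+X_2^4$ is SOS-convex since
\[\nabla^2f(X)\,=\,\left(\begin{array}{cc}12X_1^2&0\\0&12X_2^2\end{array}\right)\,=\,LL^T,
\qquad L\,=\,\left(\begin{array}{cc}2\sqrt{3}\,X_1&0\\0&2\sqrt{3}\,X_2\end{array}\right).\]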
\vspace{0.2cm}
As noted in \cite{HN1}, an important feature of SOS-convexity is that it can be checked numerically by solving an SDP. They have also proved the following
important property:\\
\begin{lemma}[Helton and Nie {\cite[Lemma 7]{HN1}}]
\label{prop}
If a symmetric matrix polynomial $P\in\mathbb{R}[X]^{r\times r}$ is SOS then for any $u\in\mathbb{R}^n$, the double integral
\[X\mapsto\quad F(X,u)\,:=\,\int_0^1\int_0^tP(u+s(X-u))\,ds\,dt\]
is also a symmetric SOS matrix polynomial in $\mathbb{R}[X]^{r\times r}$.
\end{lemma}
\vspace{0.2cm}
And also:
\begin{lemma}[Helton and Nie {\cite[Lemma 8]{HN1}}]
\label{prop2}
For a polynomial $f\in\mathbb{R}[X]$ and every $x,u\in\mathbb{R}^n$:
\begin{eqnarray*}
f(x)&=&f(u)+\nabla f(u)^T(x-u)\\
&&+\:(x-u)^T\underbrace{\int_0^1\int_0^t\nabla^2f(u+s(x-u))dsdt}_{F(x,u)}\,(x-u).\end{eqnarray*}
And so if $f$ is SOS-convex and $f(u)=0,\nabla f(u)=0$, then $f$ is an SOS polynomial.
\end{lemma}
\subsection{An extension of Jensen's inequality}
Recall that if $\mu$ is a probability measure on $\mathbb{R}^n$ with $E_\mu(x)<\infty$,
Jensen's inequality states that if $f\in L_1(\mu)$ and $f$ is convex, then
\[E_\mu(f(x))\,\geq\,f(E_\mu(x)),\]
a very useful property in many applications.
We now provide an extension of
Jensen's inequality when one restricts its application
to the class of SOS-convex polynomials.
Namely, we may consider the linear functionals $L_\mathbf{y}:\mathbb{R}[X]_{2d}\to\mathbb{R}$
in the dual cone of $\Sigma^2[X]_{d}$, that is, vectors
$\mathbf{y}=(y_\alpha)$ such that $M_d(\mathbf{y})\succeq0$ and $y_0=L_\mathbf{y}(1)=1$;
hence $\mathbf{y}$ is {\it not} necessarily the (truncated) moment sequence of some probability measure $\mu$.
Crucial in the proof is Lemma \ref{prop} of Helton and Nie.
\begin{theorem}
\label{th-jensen}
Let $f\in\mathbb{R}[X]_{2d}$ be SOS-convex, and let
$\mathbf{y}=(y_\alpha)_{\alpha\in\mathbb{N}^n_{2d}}$ satisfy $y_0=1$ and $M_d(\mathbf{y})\succeq0$.
Then:
\begin{equation}
\label{jensen}
L_\mathbf{y}(f(X))\,\geq\,f(L_\mathbf{y}(X)),
\end{equation}
where $L_\mathbf{y}(X)=(L_\mathbf{y}(X_1),\ldots,L_\mathbf{y}(X_n))$.
\end{theorem}
\begin{proof}
Let $z\in\mathbb{R}^n$ be fixed, arbitrary, and consider the polynomial $X\mapsto f(X)-f(z)$.
Then,
\begin{equation}
\label{jensen-1}
f(X)-f(z)\,=\,\langle \nabla f(z),X-z\rangle+\langle (X-z),F(X)(X-z)\rangle,\end{equation}
with $F:\mathbb{R}^n\to \mathbb{R}[X]^{n\times n}$ being the matrix polynomial
\[X\mapsto\quad F(X)\,:=\,\int_0^1\int_0^t\nabla^2f(z+s(X-z))\,ds\,dt.\]
As $f$ is SOS-convex, by Lemma \ref{prop}, $F$ is a SOS matrix polynomial and so the polynomial
$X\mapsto \Delta(X):=\langle (X-z),F(X)(X-z)\rangle$ is SOS, i.e.,
$\Delta\in\Sigma^2[X]$.
Then applying $L_\mathbf{y}$ to the polynomial $X\mapsto f(X)-f(z)$ and using (\ref{jensen-1}) yields
(recall that $y_0=1$)
\begin{eqnarray*}
L_\mathbf{y}(f(X))-f(z)&=&\langle\nabla f(z),L_\mathbf{y}(X)-z\rangle+L_\mathbf{y}(\Delta(X))\\
&\geq&\langle\nabla f(z),L_\mathbf{y}(X)-z\rangle\quad\mbox{[because $L_\mathbf{y}(\Delta(X))\geq0$].}
\end{eqnarray*}
As $z\in\mathbb{R}^n$ was arbitrary, taking $z:=L_\mathbf{y}(X)\,(=(L_\mathbf{y}(X_1),\ldots,L_\mathbf{y}(X_n)))$ yields the desired result.
\end{proof}
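\vspace{0.2cm}
As an elementary sanity check of (\ref{jensen}) (an illustrative instance with $n=1$ and $d=1$): for the SOS-convex polynomial $X\mapsto f(X)=X^2$, inequality (\ref{jensen}) reads $y_2\geq y_1^2$, i.e., ${\rm det}\,M_1(\mathbf{y})\geq0$ (recall $y_0=1$), which indeed follows from $M_1(\mathbf{y})\succeq0$.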
\vspace{0.2cm}
As a consequence we also get:
\begin{corollary}
Let $f$ be a convex univariate polynomial, $g\in\mathbb{R}[X]$ (and so
$f\circ g\in\mathbb{R}[X]$). Let $d:=\lceil ({\rm deg}\,f\circ g)/2\rceil$, and let
$\mathbf{y}=(y_\alpha)_{\alpha\in \mathbb{N}^n_{2d}}$ be such that $y_0=1$ and $M_d(\mathbf{y})\succeq0$. Then:
\begin{equation}
\label{cor-1}
L_\mathbf{y}[\,f(g(X))\,]\,\geq\,f(L_\mathbf{y}[\,g(X)\,]).
\end{equation}
\end{corollary}
\begin{proof}
Again let $z\in\mathbb{R}$ be fixed, arbitrary, and consider the univariate
polynomial $Y\mapsto f(Y)-f(z)$ so that
(\ref{jensen-1}) holds. That is,
\[f(Y)-f(z)\,=\,f'(z)\,(Y-z)+ F(Y)(Y-z)^2,\]
with $F:\mathbb{R}\to \mathbb{R}[Y]$ being the univariate polynomial
\[Y\mapsto\quad F(Y)\,:=\,\int_0^1\int_0^t f''(z+s(Y-z))\,ds\,dt.\]
As $f$ is convex, $f''\geq0$, and so the univariate polynomial
$Y\mapsto F(Y)(Y-z)^2$ is nonnegative, and being univariate, is SOS.
Therefore, with $Y:=g(X)$,
\[f(g(X))-f(z)\,=\,f'(z)\,(g(X)-z)+ F(g(X))(g(X)-z)^2,\]
and so
\begin{eqnarray*}
L_\mathbf{y}[\,f(g(X))]-f(z)&=&f'(z)\,(L_\mathbf{y}[\,g(X)\,]-z)+L_\mathbf{y}[\,F(g(X))\,(g(X)-z)^2\,]\\
&\geq&f'(z)( L_\mathbf{y}[\,g(X)\,]-z)\end{eqnarray*}
and taking $z:=L_\mathbf{y}[g(X)]$ yields the desired result.
\end{proof}
\vspace{0.2cm}
Hence the class of SOS-convex polynomials has the very interesting property of extending Jensen's inequality to some linear functionals that are not necessarily coming from a probability measure.
\section{Semidefinite relaxations in the convex case}
\subsection{A convex Positivstellensatz}
Let $\mathbf{K}$ be as in (\ref{setk}) and define $Q_c(g)\subset \mathbb{R}[X]$ to be the set:
\begin{equation}
\label{qc}
Q_c(g)\,:=\,\left\{\:\sigma_0+\sum_{j=1}^m\lambda_j\,g_j\::\quad
\lambda\in\mathbb{R}^m_+\,;\:\sigma_0\in\Sigma^2[X],\:\sigma_0\mbox{ convex}
\:\right\}\subset Q(g).
\end{equation}
The set $Q_c(g)$ is a specialization of
$Q(g)$ in (\ref{qg}) to the convex case, in that the weights associated with the $g_j$'s are nonnegative scalars, i.e., SOS polynomials of degree 0, and the SOS polynomial $\sigma_0$ is convex.
In particular, every $f\in Q_c(g)$ is nonnegative on $\mathbf{K}$.
Let $\mathcal{F}_\mathbf{K}\subset\mathbb{R}[X]$ be the convex cone
of convex polynomials nonnegative on $\mathbf{K}$.\\
\begin{theorem}[Lasserre \cite{lasserre-convex}]
\label{thmain2}
Let $\mathbf{K}$ be as in (\ref{setk}), Slater's condition hold
and $g_j$ be concave for every $j=1,\ldots,m$.
Then with $Q_c(g)$ as in (\ref{qc}), the set $Q_c(g)\cap \mathcal{F}_\mathbf{K}$ is dense in $\mathcal{F}_\mathbf{K}$ for the $l_1$-norm $\Vert \cdot\Vert_1$. In particular,
if $\mathbf{K}=\mathbb{R}^n$ (so that $\mathcal{F}_{\mathbb{R}^n}=:\mathcal{F}$ is now the set of nonnegative convex polynomials), then
$\Sigma^2[X]\cap \mathcal{F}$ is dense in $\mathcal{F}$.
\end{theorem}
\vspace{0.2cm}
Theorem \ref{thmain2} states that if $f$ is convex and nonnegative on $\mathbf{K}$
(including the case $\mathbf{K}\equiv \mathbb{R}^n$) then one may approximate $f$ by a sequence
$\{f_{\epsilon r}\}\subset Q_c(g)\cap \mathcal{F}_\mathbf{K}$
with $\Vert f-f_{\epsilon r}\Vert_1\to 0$ as $\epsilon\to 0$
(and $r\to\infty$). For instance, with $r_0:=\lfloor ({\rm deg}\,f)/2\rfloor +1$,
\begin{eqnarray}
\nonumber
X\mapsto f_{\epsilon r}(X)&:=&f+\epsilon (\theta_{r_0}(X)+\theta_{r}(X)),\quad\mbox{ with }\\
\label{2}
X\mapsto \theta_r(X)&:=&1+\sum_{k=1}^r \sum_{i=1}^n \frac{X_i^{2k}}{k!}
\qquad r\geq r_\epsilon,\end{eqnarray}
for some $r_\epsilon$; see Lasserre \cite{lasserre-convex} for details.
Observe that Theorem \ref{thmain2} provides $f$ with a {\it certificate}
of nonnegativity on $\mathbf{K}$.
Indeed, let $x\in\mathbf{K}$ be fixed arbitrary. Then as $f_{\epsilon r}\in Q_c(g)$
one has $f_{\epsilon r}(x)\geq0$. Letting $\epsilon\downarrow 0$ yields
$0\leq \lim_{\epsilon\to 0}f_{\epsilon r}(x)=f(x)$. And as $x\in\mathbf{K}$ was arbitrary,
$f\geq0$ on $\mathbf{K}$.
Theorem \ref{thmain2} is a convex (weak) version of Theorem \ref{thput} (Putinar's Positivstellensatz) where one replaces the quadratic module $Q(g)$ with its subset
$Q_c(g)$. We call it a {\it weak} version of Theorem \ref{thput} because
it invokes a density result (i.e. $f_{\epsilon r}\in Q_c(g)$ whereas $f$ might not be an element of $Q_c(g)$).
Notice that $f$ is allowed to be nonnegative (instead of strictly positive) on $\mathbf{K}$ and
$\mathbf{K}$ need {\it not} be compact; recall that extending
Theorem \ref{thput} to noncompact basic semi-algebraic sets $\mathbf{K}$ and to polynomials $f$ nonnegative on $\mathbf{K}$
is hopeless in general; see Scheiderer \cite{claus}.\\
\begin{corollary}
\label{special}
Let $\mathbf{K}$ be as in (\ref{setk}), $f\in\mathbb{R}[X]$ with $f^*:=\min_x \{f(x)\::\:x\in\mathbf{K}\}$ and let
$d:=\max[\lceil ({\rm deg}\,f)/2\rceil,\max_j\lceil ({\rm deg}\,g_j)/2\rceil\,]$.
Consider the simplified SDP-relaxation
\begin{equation}
\label{newsdpprimal}
\widehat{\mathbf{Q}}:\:\quad\left\{\begin{array}{ll}
\displaystyle\inf_\mathbf{y}&L_\mathbf{y}(f)\\
\mbox{s.t.}&M_d(\mathbf{y})\,\succeq0\\
&L_\mathbf{y}(g_j)\,\geq0,\qquad j=1,\ldots,m\\
&y_0\,=1
\end{array}\right.
\end{equation}
and its dual
\begin{equation}
\label{newsdpdual}
\widehat{\mathbf{Q}}^*:\:\quad\left
\{\begin{array}{ll}
\displaystyle\sup_{\gamma,\sigma_0,\lambda}&\gamma\\
\mbox{s.t.}&f-\gamma\,=\,\sigma_0+\displaystyle\sum_{j=1}^m\lambda_j\,g_j\\
&\sigma_0\in\Sigma^2[X]_d;\:\lambda_j\geq0,\quad j=1,\ldots,m
\end{array}\right.
\end{equation}
\indent
{\rm (a)} If $f-f^*\in Q_c(g)$
then the SDP-relaxation
$\widehat{\mathbf{Q}}$ and its dual $\widehat{\mathbf{Q}}^*$ are exact.\\
{\rm (b)} If $f,-g_j\in\mathbb{R}[X]$ are convex, $j=1,\ldots,m$, and if $\mathbf{y}$ is an optimal solution of $\widehat{\mathbf{Q}}$
which satisfies
\begin{equation}
\label{rank}
{\rm rank}\,M_d(\mathbf{y})\,=\, {\rm rank}\,M_{d-1}(\mathbf{y}),\end{equation}
then $\widehat{\mathbf{Q}}$ is exact and
$x^*:=(L_\mathbf{y}(X_i))\in\mathbf{K}$ is a (global) minimizer of $f$ on $\mathbf{K}$.
\end{corollary}
\begin{proof}
(a) If $f-f^*\in Q_c(g)$, i.e., if $f-f^*=\sigma_0+\sum_{j=1}^m\lambda_jg_j$,
with $\sigma_0\in\Sigma^2[X]_d$ and $\lambda\in\mathbb{R}^m_+$,
the triplet $(f^*,\sigma_0,\lambda)$ is a feasible solution of $\widehat{\mathbf{Q}}^*$
with value $f^*$. Therefore, as $\sup\widehat{\mathbf{Q}}^*\leq\inf\widehat{\mathbf{Q}}\leq f^*$, the SDP-relaxation $\widehat{\mathbf{Q}}$ and its dual $\widehat{\mathbf{Q}}^*$ are exact. In fact, $(f^*,\sigma_0,\lambda)$ is an optimal solution of $\widehat{\mathbf{Q}}^*$.
(b) If $\mathbf{y}$ satisfies the rank condition (\ref{rank}) then
by the {\it flat extension} theorem of Curto and Fialkow \cite{curto},
$\mathbf{y}$ is the (truncated) moment sequence of an atomic probability measure $\mu$ on $\mathbb{R}^n$,
say $\mu=\sum_{k=1}^s\lambda_k\delta_{x(k)}$ with
$s={\rm rank}\,M_d(\mathbf{y})$, $0<\lambda_k\leq 1$, $\sum_k\lambda_k=1$,
and $\delta_{x(k)}$ being the Dirac measure at $x(k)\in\mathbb{R}^n$, $k=1,\ldots,s$.
Let $x^*:=\sum_k\lambda_kx(k)=(L_\mathbf{y}(X_i))\in\mathbb{R}^n$.
Then $f^*\geq L_\mathbf{y}(f)$ and by convexity of $f$, $L_\mathbf{y}(f)=\sum_k\lambda_kf(x(k))\geq f(\sum_k\lambda_kx(k))=f(x^*)$.
Similarly, by convexity of $-g_j$,
$0\leq L_\mathbf{y}(g_j)=\sum_k\lambda_kg_j(x(k))\leq g_j(\sum_k\lambda_kx(k))=g_j(x^*)$, $j=1,\ldots,m$.
Therefore, $x^*\in\mathbf{K}$ and as $f(x^*)\leq f^*$, $x^*$ is a global minimizer of $f$ on $\mathbf{K}$.
\end{proof}
\vspace{0.2cm}
Notice that $\mathbf{K}$ in Corollary \ref{special} need not be compact. Also,
Corollary \ref{special}(b) has practical value because in general
one does not know whether $f-f^*\in Q_c(g)$
(despite that in the convex case, $f-f^*\in\mathcal{F}_\mathbf{K}$ and $Q_c(g)\cap\mathcal{F}_\mathbf{K}$
is dense in $\mathcal{F}_\mathbf{K}$). However, one may still solve $\widehat{\mathbf{Q}}$ and check
whether the rank condition (\ref{rank}) is satisfied.
If in solving $\widehat{\mathbf{Q}}$, the rank condition (\ref{rank}) is not satisfied, then
other sufficient conditions can be exploited as we next see.
\subsection{The SOS-convex case}
Part (a) of the following result is already contained in
Lasserre \cite[Cor. 2.5]{lasserre-convex}.\\
\begin{theorem}
\label{thm2}
Let $\mathbf{K}$ be as in (\ref{setk}) and Slater's condition hold.
Let $f\in\mathbb{R}[X]$ be such that
$f^*:=\inf_x\{f(x)\::\:x\in\mathbf{K}\}=f(x^*)$ for some $x^*\in\mathbf{K}$.
If $f$ is SOS-convex and $-g_j$ is SOS-convex
for every $j=1,\ldots,m$, then:
{\rm (a)} $f-f^*\in Q_c(g)$.
{\rm (b)} The simplified SDP-relaxation $\widehat{\mathbf{Q}}$
in (\ref{newsdpprimal}) and its dual (\ref{newsdpdual}) are exact and solvable.
If $\mathbf{y}$ is an optimal solution of $\widehat{\mathbf{Q}}$ then
$x^*:=(L_\mathbf{y}(X_i))\in\mathbf{K}$ is a global minimizer of $f$ on $\mathbf{K}$.
\end{theorem}
\begin{proof}
(a) is proved in \cite[Cor. 2.5]{lasserre-convex}.
(b) That $\widehat{\mathbf{Q}}$ is exact follows from (a) and Corollary \ref{special}(a).
Hence it is solvable (e.g. take $\mathbf{y}$ to be the moment sequence associated with
the Dirac measure at a global minimizer $x^*\in\mathbf{K}$). So let
$\mathbf{y}$ be an optimal solution of $\widehat{\mathbf{Q}}$, hence with $f^*=L_\mathbf{y}(f)$. As
$-g_j$ is SOS-convex for every $j$, then by Theorem \ref{th-jensen},
$0\leq L_\mathbf{y}(g_j)\leq g_j(x^*)$ with $x^*:=(L_\mathbf{y}(X_i))$ and so $x^*\in\mathbf{K}$.
Similarly, as $f$ is SOS-convex, we also have $f^*=L_\mathbf{y}(f)\geq f(x^*)$ which proves that
$f(x^*)=f^*$ and $x^*$ is a global minimizer of $f$ on $\mathbf{K}$.
Finally, as by (a) $f-f^*\in Q_c(g)$ then $\widehat{\mathbf{Q}}^*$ is exact and solvable.
\end{proof}
\vspace{0.2cm}
(Again notice that $\mathbf{K}$ in Theorem \ref{thm2} need not be compact.)
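To make Theorem \ref{thm2} concrete, the following minimal sketch (assuming the CVXPY library; the toy problem is chosen purely for illustration) solves $\widehat{\mathbf{Q}}$ in (\ref{newsdpprimal}) for the SOS-convex problem $\min\{(x_1-1)^2+(x_2-1)^2\::\:1-x_1-x_2\geq0\}$, with minimizer $x^*=(1/2,1/2)$ and $f^*=1/2$, and recovers $x^*=(L_\mathbf{y}(X_i))$ from the optimal moments.
\begin{verbatim}
# Minimal sketch (CVXPY assumed) of the simplified relaxation.
import cvxpy as cp

# M encodes M_1(y), indexed by the monomials 1, x1, x2;
# PSD=True makes it symmetric positive semidefinite by construction.
M = cp.Variable((3, 3), PSD=True)
y10, y01 = M[0, 1], M[0, 2]          # first-order moments
y20, y02 = M[1, 1], M[2, 2]          # second-order moments

obj = (y20 - 2 * y10 + 1) + (y02 - 2 * y01 + 1)   # L_y(f)
cons = [M[0, 0] == 1,                # y_0 = 1
        1 - y10 - y01 >= 0]          # L_y(g_1) >= 0
cp.Problem(cp.Minimize(obj), cons).solve()
print(obj.value)                     # approximately 0.5 = f*
print(y10.value, y01.value)          # approximately (0.5, 0.5) = x*
\end{verbatim}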
So the class of SOS-convex polynomials is particularly interesting.
Not only can Jensen's inequality be extended to some linear functionals that are
not coming from a probability measure, but one may also solve SOS-convex optimization problems
$\mathbf{P}$ in (\ref{pbp}) (i.e. with $f$ and $\mathbf{K}$ defined with SOS-convex polynomials) by
solving the single semidefinite program (\ref{newsdpprimal}).
Notice that a self-concordant\footnote{The self-concordance property
introduced in \cite{nesterov} is fundamental in the design and efficiency
of interior point methods for convex programming.} logarithmic barrier function exists for (\ref{newsdpprimal})
whereas
the logarithmic barrier function with barrier parameter $\mu$:
\begin{equation}
\label{log}
x\mapsto \phi_\mu(x)\,:=\,\mu\,f(x)-\sum_{j=1}^m\ln\, (-g_j(x)),\end{equation}
associated with $\mathbf{P}$, is not self-concordant in general.
Therefore, even though (\ref{newsdpprimal}) involves additional variables (a lifting), solving (\ref{newsdpprimal}) via an
interior point method might be more efficient than solving $\mathbf{P}$
by using the logarithmic barrier function (\ref{log}) with no lifting.
In addition, all SOS-convex polynomials that are nonnegative on $\mathbf{K}$ and attain
their minimum on $\mathbf{K}$ belong to $Q_c(g)$, a very specific
version of Putinar's Positivstellensatz (as $f$ is only nonnegative and
$\mathbf{K}$ need not be compact).
\subsection{The strictly convex case}
If $f$ or some of the $-g_j$'s is not SOS-convex
but $\nabla^2f\succ0$ (so that $f$ is strictly convex)
and $-g_j$ is convex for every $j=1,\ldots,m$, then inspired
by a nice argument from Helton and Nie \cite{HN1} for existence of a semidefinite representation of convex sets, one obtains the following result.\\
\begin{theorem}
Let $\mathbf{K}$ be as in (\ref{setk}) and let Assumption \ref{assput} and Slater's condition hold.
Assume that $f,-g_j\in\mathbb{R}[X]$ are convex, $j=1,\ldots,m$, with $\nabla^2f\succ0$ on $\mathbf{K}$.
Then the hierarchy of SDP-relaxations defined in
(\ref{sdpprimal}) has finite convergence. That is,
$f^*=\sup\mathbf{Q}^*_r\,=\,\inf\mathbf{Q}_r$ for some index $r$. In addition,
$\mathbf{Q}_r$ and $\mathbf{Q}^*_r$ are solvable so that $f^*=\max\mathbf{Q}^*=\min\mathbf{Q}_r$.
\end{theorem}
\begin{proof}
Let $x^*\in\mathbf{K}$ be a global minimizer (i.e. $f^*=f(x^*)$).
As Slater's condition holds, there exists a vector of Karush-Kuhn-Tucker (KKT) multipliers
$\lambda\in\mathbb{R}^m_+$ such that the (convex) Lagrangian $L_f\in\mathbb{R}[X]$ defined by
\begin{equation}
\label{lagrangian}
X\mapsto L_f(X)\,:=\,f(X)-f^*-\sum_{j=1}^m\lambda_j\,g_j(X)\end{equation}
has a global minimum at $x^*\in\mathbf{K}$, i.e., $\nabla L_f(x^*)=0$. In addition,
$\lambda_jg_j(x^*)=0$ for every $j=1,\ldots,m$ and $L_f(x^*)=0$. Then,
by Lemma \ref{prop2},
\[L_f(X)\,=\,\left\langle (X-x^*),F(X,x^*)(X-x^*)\right\rangle\]
with
\[F(X,x^*)\,:=\,\left(\int_0^1\int_0^t\nabla^2L_f(x^*+s(X-x^*))\,ds\,dt\right).\]
Next, let $I_n$ be the $n\times n$ identity matrix.
As $\nabla^2f\succ0$ on $\mathbf{K}$, continuity of the (strictly positive)
smallest eigenvalue of $\nabla^2f$
and compactness of $\mathbf{K}$ yield that $\nabla^2f \succeq\delta I_n$ on $\mathbf{K}$,
for some $\delta>0$. Next, as $-g_j$ is convex for every $j$, and in view of the definition (\ref{lagrangian})
of $L_f$, $\nabla^2 L_f\succeq\nabla^2f\succeq\delta I_n$ on $\mathbf{K}$.
Hence for every $\xi\in\mathbb{R}^n$, $\xi^TF(x,x^*)\xi\geq\delta \int_0^1\int_0^t\xi^T\xi dsdt=\frac{\delta}{2} \xi^T\xi$, and so
$F(x,x^*) \succeq\frac{\delta}{2}\,I_n$ for every $x\in\mathbf{K}$.
Therefore,
by the matrix polynomial version of Putinar's Positivstellensatz,
\[F(X,x^*)\,=\,F_0(X)+\sum_{j=1}^mF_j(X)\,g_j(X),\]
for some real SOS matrix polynomials
$X\mapsto F_j(X)=L_j(X)L_j(X)^T$ (for some appropriate
$L_j\in\mathbb{R}[X]^{n\times p_j}$), $j=0,\ldots,m$. See
Helton and Nie \cite{HN1}, Kojima and Muramatsu \cite{kojima}, Hol and Scherer \cite{hol}.
But then
\[X\mapsto \left\langle (X-x^*),F_j(X)(X-x^*)\right\rangle\,=\,\sigma_j(X)\in\Sigma^2[X],\qquad j=0,\ldots,m\]
and so
\begin{eqnarray*}
f(X)-f^*&=&L_f(X)+\sum_{j=1}^m\lambda_jg_j(X)\\
&=&\sigma_0(X)+\sum_{j=1}^m(\lambda_j+\sigma_j(X))\,g_j(X).
\end{eqnarray*}
Let $2s$ be the maximum degree of the SOS polynomials $(\sigma_j)$.
Then $(f^*,\{\sigma_j+\lambda_j\})$ is a feasible solution of the SDP-relaxation
$\mathbf{Q}^*_r$ in (\ref{sdpdual}) with $r:=s+\max_jr_j$. Therefore, as $\sup\mathbf{Q}^*_r\leq \inf\mathbf{Q}_r\leq f^*$,
the SDP-relaxations $\mathbf{Q}_r$ and $\mathbf{Q}^*_r$ are exact, finite convergence occurs and $\mathbf{Q}^*_r$ is solvable.
But this also implies that $\mathbf{Q}_r$ is solvable (take $\mathbf{y}$ to be the moment sequence of the
Dirac measure $\delta_{x^*}$ at any global minimizer $x^*\in\mathbf{K}$).
\end{proof}
\vspace{0.2cm}
When compared to Theorem \ref{thm2} for the SOS-convex case, in the strictly convex
case the simplified SDP-relaxation $\widehat{\mathbf{Q}}$ in (\ref{newsdpprimal}) is not guaranteed
to be exact. However, finite convergence still occurs for the SDP-relaxations
($\mathbf{Q}_r$) in (\ref{sdpprimal}).
\begin{remark}
{\rm It is worth emphasizing that in general, the hierarchy of LP-relaxations
(as opposed to SDP-relaxations) defined in
\cite{lasserre3} and based on Krivine's representation \cite{krivine,vasilescu} for polynomials positive on $\mathbf{K}$,
{\it cannot} have finite convergence, especially in the convex case! For more details, the interested reader is referred to
\cite{lasserre2,lasserre3}. Therefore, and although LP software packages can
solve LP problems of very large size, using LP-relaxations
does not seem a good idea even for solving a convex polynomial optimization problem.}
\end{remark}
\section{Convexity and semidefinite representation of convex sets}
We now consider the semidefinite representation of
convex sets. First recall the following result.
\begin{theorem}[Lasserre \cite{lasserre-sdr}]
\label{lagrangiansos}
Let $\mathbf{K}$ in (\ref{setk}) be compact with
$g_j$ concave, $j=1,\ldots,m$, and assume that Slater's condition holds.
If the Lagrangian polynomial
$L_f$ in (\ref{lagrangian}) associated with every {\it linear}
polynomial $f\in\mathbb{R}[X]$ is SOS, then with $d:=\max_j\lceil ({\rm deg}\,g_j)/2\rceil$, the set
\begin{equation}
\label{lfsos}
\Omega\,:=\,\left\{(x,\mathbf{y})\in\mathbb{R}^n\times \mathbb{R}^{s(2d)}\::\:\left\{\begin{array}{ll}
M_d(\mathbf{y})&\succeq0\\
L_\mathbf{y}(g_j)&\geq 0,\quad j=1,\ldots,m\\
L_\mathbf{y}(X_i)&=x_i,\quad i=1,\ldots,n\\
y_0&=1\end{array}\right.\right\}\end{equation}
is a semidefinite representation of $\mathbf{K}$.
\end{theorem}
\vspace{0.2cm}
Next, Helton and Nie \cite{HN1,HN2} have provided several interesting
second-order positive curvature (sufficient and necessary) conditions on the defining polynomials
$(g_j)$ for $\mathbf{K}$ (or its convex hull ${\rm co}\,(\mathbf{K})$) to have a SDr.
In particular (recall that $r_j=\lceil ({\rm deg}\,g_j)/2\rceil$ for every $j=1,\ldots,m$):\\
\begin{theorem}[Helton and Nie \cite{HN1}]
\label{suffhn1}
Let $\mathbf{K}$ in (\ref{setk}) be convex, Assumption \ref{assput} hold, and assume that Slater's condition holds and $g_j$ is concave on $\mathbf{K}$, $j=1,\ldots,m$.
{\rm (a)} If $-g_j$ is SOS-convex for every $j=1,\ldots,m$, then for every linear $f\in\mathbb{R}[X]$,
the associated Lagrangian $L_f$
(\ref{lagrangian}) is SOS and the set $\Omega$ in
(\ref{lfsos}) is a semidefinite representation of $\mathbf{K}$.
{\rm (b)} If every $-g_j$ is either SOS-convex or satisfies
$-\nabla^2g_j\succ0$ on $\mathbf{K}\cap\{x\::\:g_j(x)=0\}$, then there exists $r\in\mathbb{N}$
such that the set
\begin{equation}
\label{thmain-2}
\Omega\,:=\,\left\{(x,\mathbf{y})\in\mathbb{R}^n\times \mathbb{R}^{s(2r)}\::\:\left\{\begin{array}{ll}
M_r(\mathbf{y})&\succeq0\\
M_{r-r_j}(g_j\,\mathbf{y})&\succeq 0,\quad j=1,\ldots,m\\
L_\mathbf{y}(X_i)&=x_i,\quad i=1,\ldots,n\\
y_0&=1\end{array}\right.\right\}\end{equation}
is a semidefinite representation of $\mathbf{K}$.
\end{theorem}
\vspace{0.2cm}
See \cite[Theor. 6 and 9]{HN1}. This follows from the fact
that the Hessian $\nabla^2L_f$ associated with a linear $f\in\mathbb{R}[X]$
has a Putinar representation in terms of SOS matrix polynomials, and with degree
of the weights bounded uniformly in $f$. In principle, the degree parameter $r$ in Theorem
\ref{suffhn1}(b) may be computed by solving a hierarchy of semidefinite programs.
Some other (more technical) weaker second-order positive curvature sufficient conditions
(merely for existence of a SDr) are also
provided in \cite{HN1,HN2} but the semidefinite representation is not explicit
any more in terms of the defining polynomials $(g_j)$. Notice that if $\mathbf{K}$ is compact but Assumption \ref{assput} does not hold, then one still obtains a semidefinite representation for $\mathbf{K}$, albeit a more complicated one, as it is now based on
Schm\"udgen's representation \cite{schmudgen} instead of Putinar's representation; see \cite[Theor. 5]{HN1}.
We next provide a sufficient condition in the case where $\mathbf{K}$ is convex but its
defining polynomials $(-g_j)$ are {\it not} necessarily convex. Among its distinguishing features,
it is checkable numerically, contains Theorem \ref{suffhn1} as a special case and leads to
the explicit semidefinite representation (\ref{thmain-2}) of $\mathbf{K}$.
\subsection{Algebraic certificate of convexity}
We first present the following characterization of convexity when
$\mathbf{K}$ is closed, satisfies a nondegeneracy assumption on its boundary, and Slater's condition holds.\\
\begin{lemma}
\label{lemmaconvex}
Let $\mathbf{K}$ be as in (\ref{setk}) (hence closed), Slater's condition hold and assume that for every
$j=1,\ldots,m$, $\nabla g_j(y)\neq0$ if $y\in\mathbf{K}$ and $g_j(y)=0$.
Then $\mathbf{K}$ is convex if and only if for every $j=1,\ldots,m$,
\begin{equation}
\label{statconvex}
\langle\nabla g_j(y),x-y\rangle \geq0,\qquad \forall x\,\in\mathbf{K}\mbox{ and }\forall\,y\in\mathbf{K}\mbox{ with }g_j(y)\,=\,0.\end{equation}
\end{lemma}
\begin{proof}
The {\it only if part} is obvious. Indeed if
$\langle\nabla g_j(y),x-y\rangle <0$ for some $x\in\mathbf{K}$ and $y\in\mathbf{K}$ with
$g_j(y)=0$, then there is some $\overline{t}>0$ such that $g_j(y+t(x-y))<0$ for all $t\in (0,\overline{t})$ and so, for any such $t$, the point
$x':=y+t(x-y)$ does not belong to $\mathbf{K}$, which in turn implies that $\mathbf{K}$ is not convex.
For the {\it if part}, (\ref{statconvex}) implies that at every point of the boundary, there exists a supporting hyperplane for $\mathbf{K}$. As $\mathbf{K}$ is closed with nonempty interior,
the result follows from \cite[Theor. 1.3.3]{schneider}\footnote{The author is grateful to L. Tuncel for
providing the reference \cite{schneider}.}.
\end{proof}
\vspace{0.2cm}
The nondegeneracy assumption is crucial as demonstrated in the following simple example kindly provided by an anonymous referee:
\begin{ex}
{\small{\rm
\label{ex0}Consider the nonconvex set $\mathbf{K}\subset\mathbb{R}^2$ defined by:
\[\mathbf{K}\,:=\,\{\,x\in\mathbb{R}^2\::\:(1-x_1^2+x_2^2)^3\geq0,\:10-x_1^2-x_2^2\geq0\:\}\]
Then it is straightforward to see
that (\ref{statconvex}) is satisfied. This is because
$\nabla g_1$ vanishes on the piece of boundary determined by
$g_1(x)=0$.}}
\end{ex}
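\vspace{0.2cm}
The degeneracy in Example \ref{ex0} is easy to verify numerically; the following minimal sketch (assuming the NumPy library) evaluates $\nabla g_1$ at boundary points of $\{g_1=0\}$, i.e., on the hyperbola $x_1^2-x_2^2=1$, and always returns the zero vector, so that (\ref{statconvex}) holds trivially there although $\mathbf{K}$ is not convex.
\begin{verbatim}
# Minimal check (NumPy assumed): grad g_1 vanishes on {g_1 = 0}.
import numpy as np

def grad_g1(x):
    # g_1(x) = (1 - x1^2 + x2^2)^3, so grad g_1 = 3 h^2 (-2 x1, 2 x2)
    h = 1.0 - x[0]**2 + x[1]**2
    return 3.0 * h**2 * np.array([-2.0 * x[0], 2.0 * x[1]])

for x2 in np.linspace(-1.0, 1.0, 5):
    y = np.array([np.sqrt(1.0 + x2**2), x2])   # g_1(y) = 0 and y in K
    print(grad_g1(y))                          # the zero vector
\end{verbatim}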
\vspace{0.2cm}
Next, using the above characterization (\ref{statconvex}), we provide an algebraic certificate of convexity.\\
\begin{corollary}[Algebraic certificate of convexity]
\label{iff}
Let $\mathbf{K}$ be as in (\ref{setk}), Slater's condition hold and assume that for every
$j=1,\ldots,m$, $\nabla g_j(y)\neq0$ if $y\in\mathbf{K}$ and $g_j(y)=0$.
Then $\mathbf{K}$ is convex if and only if for every $j=1,\ldots,m$,
\begin{equation}
\label{certif-alg}
h_j(X,Y) \langle \nabla g_j(Y),X-Y\rangle\,=\,\langle\nabla g_j(Y),X-Y\rangle^{2l}+\theta_j(X,Y)+\varphi_j(X,Y) g_j(Y),
\end{equation}
for some integer $l\in\mathbb{N}$, some polynomial $\varphi_j\in\mathbb{R}[X,Y]$ and some
polynomials $h_j,\theta_j$ in the preordering\footnote{
The preordering of $\mathbb{R}[X]$ generated by a family $(g_1,\ldots,g_m)\subset\mathbb{R}[X]$ is the set of polynomials $\{p\::\:p=\sum_{J\subseteq\{1,\ldots,m\}}\sigma_J(\prod_{j\in J}g_j),\:\mbox{with }\sigma_J\in\Sigma^2[X]\}$.} of $\mathbb{R}[X,Y]$
generated by the family of polynomials $(g_k(X),g_p(Y))$, $k,p\in\{1,\ldots,m\}$, $p\neq j$.
\end{corollary}
\begin{proof}
By Lemma \ref{lemmaconvex}, $\mathbf{K}$ is convex if and only if for every $j=1,\ldots ,m$,
the polynomial $(X,Y)\mapsto \langle\nabla g_j(Y),X-Y\rangle$ is nonnegative on the set
$\Omega_j$ defined by:
\begin{equation}
\label{omegaj}
\Omega_j\,:=\,\{(x,y)\in\mathbf{K}\times\mathbf{K}\::\:g_j(y)=0\:\}.\end{equation}
Equivalently, $\mathbf{K}$ is convex if and only if for every $j=1,\ldots,m$:
\begin{eqnarray*}
\emptyset\,=\,\left\{(x,y)\in\mathbb{R}^n\times\mathbb{R}^n\::\: \right.&&(x,y)\in\mathbf{K}\times\mathbf{K}\,;\quad g_j(y)\,=\,0\,;\\
&&\left.\langle\nabla g_j(y),x-y\rangle\leq0\,;\:\langle\nabla g_j(y),x-y\rangle\neq0\right\}.\end{eqnarray*}
Then (\ref{certif-alg}) follows from Stengle's Positivstellensatz \cite[Theor. 4.4.2, p. 92]{roy}.
\end{proof}
\vspace{0.2cm}
Observe that Corollary \ref{iff} provides an algebraic certificate of convexity when $\mathbf{K}$
is closed with nonempty interior and a nondegeneracy assumption holds on its boundary.
If one fixes an a priori bound $s$ on $l\in\mathbb{N}$ and on the degree of $h_j,\theta_j$ and $\varphi_j$, then checking whether
(\ref{certif-alg}) holds reduces to solving a semidefinite program. If $\mathbf{K}$ is convex,
by increasing $s$, eventually one would obtain such a certificate if
one could solve semidefinite programs exactly. In practice, and because of unavoidable numerical inaccuracies, one only obtains
a numerical approximation of the optimal value and so, a certificate valid {\it up to machine precision} only.
However, implementing such a procedure is extremely costly because
one has potentially $2\times 2^{m}$ unknown SOS polynomials to define
$h_j$ and $\theta_j$ in (\ref{certif-alg})! Therefore,
it is highly desirable to provide a less costly certificate but with no guarantee to hold
for every $\mathbf{K}$ as in Corollary \ref{iff}.
In particular one only considers compact sets $\mathbf{K}$. Indeed,
if $\mathbf{K}$ is compact, one has the following result (recall that $g_0\equiv 1$).\\
\begin{lemma}
\label{lemmageom}
Let $\mathbf{K}$ be convex, Assumption \ref{assput} and Slater's condition hold.
Assume that for every $j=1,\ldots,m$,
$\nabla g_j(y)\neq0$ if $y\in\mathbf{K}$ and $g_j(y)=0$.
Then for every $\epsilon>0$ and every $j=1,\ldots,m$:
\begin{eqnarray}
\nonumber
\left\langle\, \nabla g_j(Y),X-Y\,\right\rangle+\epsilon &=&
\sum_{k=0}^m\sigma_{jk}(X,Y)\, g_k(X)
+\sum_{k=0,k\neq j}^m\psi_{jk}(X,Y) \,g_k(Y)\\
\label{everyj}
&&+\,\psi_j(X,Y)\,g_j(Y),
\end{eqnarray}
for some SOS polynomials $(\sigma_{jk})$ and $(\psi_{jk})_{k\neq j}\subset\Sigma^2[X,Y]$, and some polynomial $\psi_j\in\mathbb{R}[X,Y]$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemmaconvex}, for every $j=1,\ldots,m$,
and every $x,y\in\mathbf{K}$ such that $g_j(y)=0$, (\ref{statconvex}) holds
and therefore, for every $j=1,\ldots,m$,
\begin{equation}
\label{pos}
\langle \nabla g_j(y),x-y\rangle\,+\,\epsilon \,>\,0\qquad \forall (x,y)\in\Omega_j,
\end{equation}
where $\Omega_j$ has been defined in (\ref{omegaj}).
As $\mathbf{K}$ satisfies Assumption \ref{assput} then so does
$\Omega_j$ for every $j=1,\ldots,m$. Hence (\ref{everyj}) follows from
(\ref{pos}) and Theorem \ref{thput}.
\end{proof}
\vspace{0.2cm}
Therefore, inspired by Lemma \ref{lemmageom}, we introduce the following condition:
\begin{ass}[Certificate of convexity]
\label{geom}
For every $j=1,\ldots,m$, (\ref{everyj}) holds with $\epsilon=0$. Then
let $d_j\in\mathbb{N}$ be such that $2d_j$ is larger than the maximum degree of the polynomials
$\sigma_{jk}g_k,\psi_{jk}g_k,\psi_jg_j\in\mathbb{R}[X,Y]$ in (\ref{everyj}), $j=1,\ldots,m$.
\end{ass}
\vspace{0.2cm}
When $\mathbf{K}$ is closed (and not necessarily compact), Slater's condition holds and the nondegeneracy assumption
on the boundary holds (i.e., $\nabla g_j(y)\neq0$ if $y\in\mathbf{K}$ and $g_j(y)=0$),
Assumption \ref{geom} is indeed a certificate of convexity because
then (\ref{statconvex}) holds for every $x,y\in\mathbf{K}$ with $g_j(y)=0$, and by Lemma \ref{lemmaconvex},
$\mathbf{K}$ is convex. It translates the
geometric property of convexity of $\mathbf{K}$ into an algebraic SOS Putinar representation of
the polynomial $(X,Y)\mapsto \langle\nabla g_j(Y),X-Y\rangle$ nonnegative on $\Omega_j$, $j=1,\ldots,m$.
On the other hand, if $\mathbf{K}$ is convex and Assumption \ref{assput}, Slater's condition and the nondegeneracy assumption all hold, then Assumption \ref{geom} is almost necessary as, by Lemma \ref{lemmageom},
(\ref{everyj}) holds with $\epsilon>0$ arbitrary.
With $d_j$ fixed a priori, checking whether (\ref{everyj}) holds with
$\epsilon=0$ can be done numerically. (However, again it provides a certificate of convexity
valid {\it up to machine precision} only.)
For instance, for every $j=1,\ldots,m$, it suffices to solve the semidefinite program
(recall that $r_k=\lceil ({\rm deg}\,g_k)/2\rceil$, $k=1,\ldots,m$)
\begin{equation}
\label{test}
\left\{\begin{array}{ll}
\rho_j:=&\displaystyle\min_{\mathbf{z}}\:L_\mathbf{z}(\langle\nabla g_j(Y),X-Y\rangle)\\
\mbox{s.t.}&M_{d_j}(\mathbf{z})\succeq0\\
&M_{d_j-r_k}(g_k(X)\,\mathbf{z})\succeq0,\quad k=1,\ldots,m\\
&M_{d_j-r_k}(g_k(Y)\,\mathbf{z})\succeq0,\quad k=1,\ldots,m;\,k\neq j\\
&M_{d_j-r_j}(g_j(Y)\,\mathbf{z})\,=\,0\\
&z_0=1
\end{array}\right..\end{equation}
If $\rho_j=0$ for every $j=1,\ldots,m$, then
Assumption \ref{geom} holds.
This is in contrast to the PP-BDR
property in \cite{lasserre-convex} that cannot be checked numerically as it involves infinitely many linear polynomials $f$.
\begin{remark}
\label{numeric}
{\rm Observe that the usual rank condition (\ref{rank}) used as a
stopping criterion to detect whether (\ref{test}) is exact (i.e. $\rho_j=0$), cannot be satisfied
in solving (\ref{test}) with primal dual interior point methods (as in the SDP-solvers
used by GloptiPoly) because one tries to find an optimal solution $\mathbf{z}^*$ in the {\it relative interior}
of the feasible set of (\ref{test}) and this gives maximum rank to the moment matrix $M_{d_j}(\mathbf{z}^*)$.
Therefore, in the context of (\ref{test}), if indeed $\rho_j=0$ then $\mathbf{z}^*$ corresponds to the moment vector
of some probability measure $\mu$ supported on the set of points $(x,x)\in \mathbf{K}\times\mathbf{K}$ that satisfy
$g_j(x)=0$ (as indeed $L_{\mathbf{z}^*}(\langle\nabla g_j(Y),X-Y\rangle)=0=\rho_j$). Therefore $\rho_j=0$ as $d_j$ increases but the rank of $M_{d_j}(\mathbf{z}^*)$ does not stabilize because
$\mu$ is not finitely supported. In particular, a good candidate $\mathbf{z}^*$ for optimal solution is
the moment vector of the probability measure uniformly distributed on
the set $\{(x,x)\in\mathbf{K}\times\mathbf{K}\::\:g_j(x)=0\}$.
Alternatively, if $\rho_j\approx0$ and the dual of (\ref{test}) has an optimal solution
$(\sigma_{jk},\psi_{jk},\psi_j)$, then in some cases
one may check if (\ref{everyj}) holds exactly after appropriate rounding of coefficients of the solution.
But in general, obtaining an exact certificate (i.e.,
$\rho_j=0$ in the primal or (\ref{everyj}) with $\epsilon=0$ in the dual)
numerically is hopeless.
}
\end{remark}
\begin{ex}
\label{ex1}
{\rm {\small Consider the following simple illustrative example in $\mathbb{R}^2$:
\begin{equation}
\label{setexample}
\mathbf{K}\,:=\,\{\,x\in\mathbb{R}^2\::\: x_1x_2-1/4\geq0;\: 0.5-(x_1-0.5)^2-(x_2-0.5)^2\geq0\,\}\end{equation}
Obviously $\mathbf{K}$ is convex but its defining polynomial $x\mapsto g_1(x):=x_1x_2-1/4$ is not concave
whereas $x\mapsto g_2(x):=0.5-(x_1-0.5)^2-(x_2-0.5)^2$ is.
With $d_1=3$, solving (\ref{test}) using GloptiPoly 3\footnote{GloptiPoly 3 (a Matlab based public software)
is an extension of GloptiPoly \cite{acm} to solve the generalized problem of moments described in \cite{rio}.
For more details see {\tt www.laas.fr/$\sim$henrion/software/}.}
yields the optimal value
$\rho_1\approx-4.58\times10^{-11}$ which, in view of the machine precision for the SDP solvers used in GloptiPoly, could be considered to be zero, but of course with no guarantee. However, and according to Remark \ref{numeric},
we could check that
(again up to machine precision) for every $\alpha\in\mathbb{N}^n$ with $\vert\alpha\vert\leq 2d_j$,
$z^*_{\alpha,\alpha}=z^*_{2\alpha,0}$ and $z^*_{\alpha,0}=z^*_{0,\alpha}$.
In addition, because of symmetry, $z^*_{\alpha,\beta}=z^*_{\alpha',\beta'}$
whenever $\alpha'_1=\alpha_2$ and $\alpha'_2=\alpha_1$ (and similarly for $\beta$ and $\beta'$).
Indeed for moments of order $1$ we have
$z^*_{\alpha,\beta}=(0.5707,0.5707,0.5707,0.5707)$ and for moments of order $2$,
\[z^*_{\alpha,\beta}=(0.4090,0.25,0.4090,0.25,0.4090, 0.25, 0.4090, 0.4090, 0.25,0.4090).\]
For $j=2$ there is no test to perform because $-g_2$ being quadratic and convex yields
\begin{equation}
\label{newtest}
\langle\nabla g_2(Y),X-Y\rangle \,=\,g_2(X)-g_2(Y)+\underbrace{\frac{1}{2}(X-Y)^T(-\nabla^2g_2(Y))(X-Y)}_{SOS}
\end{equation}
which is in the form (\ref{everyj}) with $d_2=1$.
}}
\end{ex}
\vspace{0.2cm}
We next show the role of Assumption \ref{geom} in obtaining a semidefinite representation of $\mathbf{K}$.\\
\begin{theorem}
\label{thmain}
Let Assumption \ref{assput} and Slater's condition hold.
Moreover, assume that for every $j=1,\ldots,m$,
$\nabla g_j(y)\neq0$ whenever $y\in\mathbf{K}$ and $g_j(y)=0$.
If Assumption \ref{geom} holds then $\mathbf{K}$ is convex and $\Omega$ in (\ref{thmain-2}) with $d:=\max_j d_j$,
is a semidefinite representation of $\mathbf{K}$.
\end{theorem}
\begin{proof}
That $\mathbf{K}$ is convex follows from Lemma \ref{lemmaconvex}.
We next prove that the PP-BDR property defined in Lasserre \cite{lasserre-sdr} holds for $\mathbf{K}$.
Let $f\in\mathbb{R}[X]$ be a linear polynomial with coefficient vector $\mathbf{f}\in\mathbb{R}^n$ (i.e., $X\mapsto f(X)=\mathbf{f}^TX$)
and consider the optimization problem $\mathbf{P}:\: \min \:\{\mathbf{f}^Tx\::\:x\in\mathbf{K}\}$.
As $\mathbf{K}$ is compact, let $x^*\in\mathbf{K}$ be a global minimizer of $f$. The Fritz-John optimality conditions state
that there exists $0\neq\lambda\in\mathbb{R}^{m+1}_+$ such that
\begin{equation}
\label{fj}
\lambda_0 \,\mathbf{f}=\sum_{j=1}^m\lambda_j \,\nabla g_j(x^*);\quad\lambda_j\,g_j(x^*)=0\quad\forall j=1,\ldots,m.\end{equation}
(See e.g. \cite{john}.)
We first prove by contradiction that if Slater's condition and the nondegeneracy assumption hold then
$\lambda_0>0$.
Suppose that $\lambda_0=0$ and let $J:=\{j\in\{1,\ldots,m\}\::\:\lambda_j>0\}$; hence $J$ is nonempty as $\lambda\neq0$. With $x_0\in\mathbf{K}$ such that $g_j(x_0)>0$ for every $j=1,\ldots,m$ (as Slater's condition holds, one such $x_0$ exists),
let $B(x_0,\rho):=\{z \::\: \Vert z-x_0\Vert\leq\rho\}$. For $\rho$ sufficiently small,
$B(x_0,\rho)\subset\mathbf{K}$ and $g_j(z)>0$ for all $z\in B(x_0,\rho)$ and every $j=1,\ldots,m$. Then
by (\ref{fj}) and $\lambda_0=0$,
\[0=\sum_{j=1}^m\lambda_j \,\langle\nabla g_j(x^*),z-x^*\rangle,\qquad\forall z\in B(x_0,\rho),\]
which in turn implies (by nonnegativity of each term in the above sum)
\[\langle\nabla g_j(x^*),z-x^*\rangle =0,\qquad \forall z\in B(x_0,\rho),\: j\in J.\]
But this clearly implies $\nabla g_j(x^*)=0$ for every $j\in J$, in contradiction
with the nondegeneracy assumption. Hence $\lambda_0>0$ and by homogeneity,
we may and will take $\lambda_0=1$.
Therefore, letting $Y:=x^*$ in (\ref{everyj}), the polynomial $X\mapsto f(X)-f^*$ can be written
\begin{eqnarray*}
\mathbf{f}^TX-f^*&=&\sum_{j=1}^m\lambda_j\displaystyle\left[\:\langle \nabla g_j(x^*),X-x^*\rangle\:\right]\\
&=&\sum_{j=1}^m\lambda_j\left[\sum_{k=0}^m\sigma_{jk}(X,x^*)\, g_k(X)
+\sum_{k=0,k\neq j}^m\psi_{jk}(X,x^*) \,g_k(x^*)\right.\\
&&\left.+\,\psi_j(X,x^*)\,g_j(x^*)\right]
\end{eqnarray*}
where we have used (\ref{everyj}) with $Y=x^*$ and $\epsilon=0$.
Next, observe that:
\begin{eqnarray*}
X\mapsto \sigma_{jk}(X,x^*)&\in&\Sigma^2[X]\qquad\mbox{[as $\sigma_{jk}\in\Sigma^2[X,Y]$]}\\
X\mapsto \psi_{jk}(X,x^*) \,g_k(x^*)&\in&\Sigma^2[X]\qquad\mbox{[as $\psi_{jk}\in\Sigma^2[X,Y]$ and $g_k(x^*)\geq0$]}\\
\lambda_jg_j(x^*)&=&0\qquad j=1,\ldots,m.\end{eqnarray*}
And so, as $\lambda\in\mathbb{R}^m_+$,
\begin{equation}
\label{aux}
X\mapsto \mathbf{f}^TX-f^*\,=\,\Delta_0(X)+\sum_{j=1}^m\Delta_j(X)\,g_j(X),\end{equation}
for SOS polynomials $(\Delta_j)_{j=0}^m\subset\Sigma^2[X]$ defined by
\begin{eqnarray*}
X\mapsto\Delta_0(X)&=&\sum_{j=1}^m\lambda_j\left(\sigma_{j0}(X,x^*)+\sum_{k=0,k\neq j}^m\psi_{jk}(X,x^*) \,g_k(x^*)\right)\\
X\mapsto\Delta_j(X)&=&\sum_{l=1}^m\lambda_l\,\sigma_{lj}(X,x^*),\qquad j=1,\ldots,m.
\end{eqnarray*}
Write every affine polynomial $f\in\mathbb{R}[X]$ as $\mathbf{f}^TX +f_0$ for some $\mathbf{f}\in\mathbb{R}^n$
and $f_0=f(0)$.
If $f$ is nonnegative on $\mathbf{K}$ then from (\ref{aux}),
\begin{eqnarray*}
f(X)\,=\,\mathbf{f}^TX-f^*+f^*+f_0&=&f^*+f_0+\Delta_0(X)+\sum_{j=1}^m\Delta_j(X)\,g_j(X)\\
&=&\widehat{\Delta}_0(X)+\sum_{j=1}^m\Delta_j(X)\,g_j(X)\qquad\forall X,
\end{eqnarray*}
with $\widehat{\Delta}_0\in\Sigma^2[X]$ (because $f^*+f_0\geq0$)
and so, the PP-BDR property holds for $\mathbf{K}$ with order $d$.
By \cite[Theor. 2]{lasserre-sdr}, $\mathbf{K}$ is SDr with the semidefinite representation (\ref{thmain-2}).
\end{proof}
\vspace{0.2cm}
We next show that the two sufficient conditions of strict convexity
and SOS-convexity of Helton and Nie \cite{HN1}
in Theorem \ref{suffhn1}
both imply that Assumption \ref{geom} holds and so Theorem \ref{thmain} contains
Theorem \ref{suffhn1} as a special case.\\
\begin{corollary}
\label{finalsdr}
Let $\mathbf{K}$ in (\ref{setk}) be convex and both Assumption \ref{assput} and Slater's condition hold. Assume that either $-g_j$
is SOS-convex or $-g_j$ is convex on $\mathbf{K}$ and $-\nabla^2g_j\succ0$ on $\mathbf{K}\cap\{x\::\:g_j(x)=0\}$, for every $j=1,\ldots,m$. Then Assumption \ref{geom} holds
and so Theorem \ref{thmain} applies.
\end{corollary}
\begin{proof}
By Lemma \ref{prop2}, for every $j=1,\ldots,m$, write
\[(X,Y)\quad\mapsto \quad g_j(X)-g_j(Y)-\left\langle \nabla g_j(Y),X-Y\right\rangle\,=\,\]
\[\left\langle (X-Y),\underbrace{\left(\int_0^1\int_0^t\nabla^2
g_j(Y+s(X-Y))\,dsdt\right)}_{F_j(X,Y)}\,(X-Y)\right\rangle.\]
If $-\nabla^2g_j\succ0$ on $y\in\mathbf{K}$ with $g_j(y)=0$, then
from the proof of \cite[Lemma 19]{HN1}, $-F_j(x,y)\succ0$ for all
$x,y\in\mathbf{K}$ with $g_j(y)=0$. In other words,
$-F_j(x,y)\succeq\delta I_n$ on $\Omega_j$ (defined in (\ref{omegaj})) for some $\delta>0$.
Therefore, by the matrix polynomial version of Putinar's Positivstellensatz
in \cite[Theor. 29]{HN1},
\begin{equation}
\label{hnaux}
-F_j(X,Y)\,=\,\sum_{k=0}^m\widehat{\sigma}_{jk}(X,Y)g_k(X)+
\sum_{k=0,k\neq j}^m\widehat{\psi}_{jk}(X,Y)g_k(Y)+
\widehat{\psi}_{j}(X,Y)g_j(Y)\end{equation}
for some SOS matrix polynomials
$(\widehat{\sigma}_{jk}(X,Y))$, $(\widehat{\psi}_{jk}(X,Y))$ and some
matrix polynomial $\widehat{\psi}_{j}(X,Y)$.
On the other hand, if $-g_j$ is SOS-convex
then by Lemma \ref{prop},
$-F_j(X,Y)$ is SOS and therefore (\ref{hnaux}) also holds (take
$\widehat{\sigma}_{jk}\equiv 0$ for all $k\neq 0$,
$\widehat{\psi}_{jk}\equiv 0$ for all $k$ and $\widehat{\psi}_j\equiv 0$).
But then
\begin{eqnarray*}
g_j(X)-g_j(Y)-\left\langle \nabla g_j(Y),X-Y\right\rangle&=&
\left\langle (X-Y),F_j(X,Y)(X-Y)\right\rangle\\
&=&-\sum_{k=0}^m\left\langle (X-Y),\widehat{\sigma}_{jk}(X,Y)(X-Y)\right\rangle g_k(X)\\
&-&\sum_{k=0,k\neq j}^m\left\langle (X-Y),\widehat{\psi}_{jk}(X,Y)(X-Y)\right\rangle g_k(Y)\\
&-&\left\langle (X-Y),\widehat{\psi}_{j}(X,Y)(X-Y)\right\rangle g_j(Y)\\
&=&-\sum_{k=0}^m\sigma_{jk}(X,Y)\, g_k(X)-\\
&&\sum_{k=0,k\neq j}^m\psi_{jk}(X,Y) \,g_k(Y)
-\,\psi_j(X,Y)\,g_j(Y)
\end{eqnarray*}
for all $X,Y$ and for some SOS polynomials
$\sigma_{jk},\psi_{jk}\in\mathbb{R}[X,Y]$ and some polynomial $\psi_j\in\mathbb{R}[X,Y]$.
Equivalently,
\begin{eqnarray*}
\langle\nabla g_j(Y),X-Y\rangle&=&g_j(X)-g_j(Y)
+\sum_{k=0}^m\sigma_{jk}(X,Y)\, g_k(X)\\
&&+\sum_{k=0,k\neq j}^m\psi_{jk}(X,Y) \,g_k(Y)
+\psi_j(X,Y)\,g_j(Y)\\
&=&\sum_{k=0}^m\sigma'_{jk}(X,Y)\, g_k(X)
+\sum_{k=0,k\neq j}^m\psi_{jk}(X,Y) \,g_k(Y)\\
&&+\psi'_j(X,Y)\,g_j(Y)
\end{eqnarray*}
for some SOS polynomials $\sigma'_{jk},\psi_{jk}\in\Sigma^2[X,Y]$
and some polynomial $\psi'_{j}\in\mathbb{R}[X,Y]$.
In other words, Assumption \ref{geom} holds, which concludes the proof.
\end{proof}
\vspace{0.2cm}
Hence if each $-g_j$ is SOS-convex or convex on $\mathbf{K}$ with $-\nabla^2g_j\succ0 $ on $\mathbf{K}\cap\{x\::\:g_j(x)=0\}$,
one obtains a numerical scheme to obtain the parameter $d$ in Theorem
\ref{thmain} as well as the semidefinite representation
(\ref{thmain-2}) of $\mathbf{K}$: solve the semidefinite programs (\ref{test}) with degree
parameter $d_j$; eventually, as $d_j$ increases, $\rho_j=0$ for every $j=1,\ldots,m$.\\
\begin{ex}
{\small{\rm Consider the convex set $\mathbf{K}$ in (\ref{setexample}) of Example \ref{ex1}
for which the defining polynomial $g_1$ of $\mathbf{K}$ is not concave.
We have seen that
Assumption \ref{geom} holds (up to $\rho_1\approx 10^{-11}$, close to machine precision)
and $\max [d_1,d_2]=3$. By Theorem \ref{thmain}, if $\rho_1$ were exactly $0$,
the set
\begin{equation}
\label{derniere}
\Omega\,:=\,\left\{(x,\mathbf{y})\in\mathbb{R}^n\times \mathbb{R}^{s(6)}\::\:\left\{\begin{array}{ll}
M_{3}(\mathbf{y})&\succeq0\\
M_{2}(g_j\,\mathbf{y})&\succeq 0,\quad j=1,2\\
L_\mathbf{y}(X_i)&=x_i,\quad i=1,2\\
y_0&=1\end{array}\right.\right\}\end{equation}
would be a semidefinite representation of $\mathbf{K}$.
At least in practice, for every linear polynomial $f\in\mathbb{R}[X]$, minimizing
$L_\mathbf{y}(f)$ over $\Omega$
yields the desired optimal value $f^*:=\min_{x\in\mathbf{K}} f(x)$, up to
$\rho_1\approx -10^{-11}$.
Indeed, let $f\in\mathbb{R}[X]$ be $\mathbf{f}^TX$ for some vector $\mathbf{f}\in\mathbb{R}^n$.
In minimizing $f$ over $\mathbf{K}$, one has $\mathbf{f}=\lambda_1\nabla g_1(x^*)+\lambda_2\nabla g_2(x^*)$ for some
$\lambda\in\mathbb{R}^2_+$, some $x^*\in\mathbf{K}$ with $\lambda_ig_i(x^*)=0$, $i=1,2$, and
$f^*=\lambda_1 \langle\nabla g_1(x^*),x^*\rangle+\lambda_2 \langle\nabla g_2(x^*),x^*\rangle
=\min_{x\in\mathbf{K}}\mathbf{f}^Tx$.
Let $(x,\mathbf{y})\in\Omega$ in (\ref{derniere}) be arbitrary. Then
\[\mathbf{f}^Tx-f^*\,=\,L_\mathbf{y}(f(X)-f^*)\,=\,\sum_{i=1}^2
\lambda_iL_\mathbf{y}(\langle \nabla g_i(x^*),X-x^*\rangle).\]
If $\lambda_1>0$ so that $g_1(x^*)=0$, use (\ref{aux}) to obtain
\[L_\mathbf{y}(\langle \nabla g_1(x^*),X-x^*\rangle)=
L_\mathbf{y}(\rho_1+\Delta_0(X)+\sum_{j=1}^2\Delta_j(X)g_j(X))\geq \rho_1,\]
because $L_\mathbf{y}(\Delta_0)\geq0$ follows
from $M_3(\mathbf{y})\succeq0$, and $L_\mathbf{y}(\Delta_jg_j)\geq0$, $j=1,2$, follows from
$M_2(g_1\mathbf{y}),M_2(g_2\mathbf{y})\succeq0$. If $\lambda_2>0$ so that $g_2(x^*)=0$, then from (\ref{newtest})
\[L_\mathbf{y}(\langle \nabla g_2(x^*),X-x^*\rangle)\,=\,L_\mathbf{y}(g_2(X)-
\frac{1}{2}\langle (X-x^*),\nabla^2g_2(x^*)(X-x^*)\rangle)\geq0,\]
because $L_\mathbf{y}(g_2)\geq0$ follows from $M_2(g_2\,\mathbf{y})\succeq0$
whereas the second term is nonnegative as
$\frac{1}{2}\langle (X-x^*),-\nabla^2g_2(x^*)(X-x^*)\rangle$ is SOS and $M_3(\mathbf{y})\succeq0$.
Hence $\mathbf{f}^Tx-f^*\geq\rho_1$. On the other hand, from
$\mathbf{K}\subseteq\{x\::\:(x,y)\in\Omega\}$, one finally obtains the desired result
\[f^*+\rho_1\,\leq\min\:\{\mathbf{f}^Tx\::\,(x,y)\in\Omega\}\,\leq\, f^*.\]
}}
\end{ex}
\section{Conclusion}
As is well known, convexity is a highly desirable property in optimization.
We have shown that it also has important specific consequences
in polynomial optimization. For instance, for polynomial optimization problems with
SOS-convex or strictly convex polynomial data,
the basic SDP-relaxations of the moment approach \cite{lasserre1}
{\it recognize} convexity and finite convergence occurs.
Similarly, the set $\mathbf{K}$ has a semidefinite representation, explicit in terms of the defining
polynomials $(g_j)$.
The class of SOS-convex polynomials introduced in Helton and Nie \cite{HN1} is particularly interesting
because the semidefinite constraint to handle in the semidefinite relaxation only
involves the Hankel-like moment matrix
which does {\it not} depend on the problem data! Hence one might envision
a dedicated SDP solver that would take into account this peculiarity as Hankel-like or Toeplitz-like
matrices enjoy very specific properties.
Moreover, if restricted to this class of polynomials, Jensen's inequality can be
extended to linear functionals in the dual cone of SOS polynomials (hence not necessarily
probability measures).
Therefore, a topic of further research is to evaluate how {\it large} is the subclass of SOS-convex
polynomials in the class of convex polynomials, and if possible, to also provide simple sufficient conditions for SOS-convexity.
\section*{Acknowledgements}
The author wishes to thank L. Tuncel and Y. Nesterov for helpful discussions
on various characterizations of convex sets, and also two anonymous referees
for several corrections as well as suggestions and remarks
to improve a first version of this paper.
\section{Introduction}
\IEEEPARstart{O}{ptical flow}, which refers to the point correspondence across a pair of images, is induced by the spatial motion at any image position. Due to the well-known aperture problem, optical flow cannot be directly measured. The partial observability of optical flow is the major reason that makes it a challenging problem.
The optical flow problem has attracted much attention since the seminal works by Horn and Schunck~\cite{Horn81}, and Lucas and Kanade~\cite{Lucas81} about four decades ago. Most of the approaches estimate optical flow relying on an energy minimization method in a coarse-to-fine framework~\cite{Brox04,Papenberg06,Brox11}. Optical flow is refined iteratively using a numerical approach from the coarsest level towards the finest level by warping one of the images in the image pair towards the other using the flow estimate from the coarser level. The warping technique is theoretically justified to minimize the energy functional~\cite{Brox04, Papenberg06}. On the other hand, normal flow, which is directly measurable, is more readily usable for motion estimation~\cite{Hui13,Hui13a,Hui15}.
FlowNet~\cite{Dosovitskiy15} and FlowNet2~\cite{Ilg17} are pioneers in using convolutional neural networks (CNNs) for optical flow estimation. Their performance, especially that of the successor, approaches that of the state-of-the-art energy minimization approaches, while the speed is several orders of magnitude faster. To push the envelope of accuracy, FlowNet2 is designed as a cascade of variants of FlowNet, \emph{i.e. } FlowNetC and FlowNetS. Each network in the cascade refines the preceding flow field by contributing on the flow adjustment between the first image and the warped second image. The model, as a result, comprises over 160M parameters and has a slow runtime, which can be prohibitive in many applications. Another work, SPyNet~\cite{Ranjan17}, uses a spatial pyramid network with only 1.2M parameters by adopting image warping in each pyramid level. Nonetheless, its performance can only match that of FlowNet but not FlowNet2.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figure_arxiv2/overview.pdf}
\caption{Examples demonstrate the effectiveness of the proposed components in LiteFlowNet for i) feature warping, ii) cascaded flow inference, and iii) flow regularization. Enabled components are indicated with bold black fonts.}
\label{fig:overview}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{figure_arxiv2/LiteFlowNet.png}
\caption{The network structure of LiteFlowNet. For the ease of representation, only a design of 3-level pyramid is shown. Given an image pair ($I_{1}$ and $I_{2}$), NetC generates two pyramids of high-level features ($\mathcal F_{k}(I_{1})$ in pink and $\mathcal F_{k}(I_{2})$ in red, $k \in [1, 3]$).
NetE yields multi-scale flow fields such that each of them is generated by a cascaded flow inference module $M$:$S$ (in blue color, including a descriptor matching unit $M$ and a sub-pixel refinement unit $S$) and a regularization module $R$ (in green color).
Flow inference and regularization modules correspond to data fidelity and regularization terms in conventional energy minimization methods, respectively.}
\label{fig:network structure}
\end{figure*}
FlowNet2~\cite{Ilg17} and SPyNet~\cite{Ranjan17} showed the potential of solving the optical flow problem by using CNNs. Our earlier work, LiteFlowNet~\cite{Hui18}, is inspired by their successes, but we further drill down into some of the key elements of solving the flow problem by adapting \textit{data fidelity} and \textit{regularization} from classical variational methods to CNNs more closely. In this work, we provide more details on the correspondences between conventional methods and optical flow CNNs. We also present LiteFlowNet2, which attains better flow accuracy and a faster runtime by optimizing the network architecture and training protocols of LiteFlowNet.
In the following, we first discuss the motivations, namely i) data fidelity, ii) image warping, and iii) regularization, from classical variational methods on the design of LiteFlowNet. Then, we highlight the more specific differences between our design and the state-of-the-art optical flow CNNs.
\vspace{0.1cm}
\noindent
\textbf{Data Fidelity.}
Point correspondence across two images is generally constrained by the classical brightness constancy~\cite{Horn81}. Gradient~\cite{Brox04} and higher-order brightness constancy~\cite{Papenberg06} assumptions are also widely used in the literature. The above constancy assumptions are collectively known as data fidelity and are often combined to form a hybrid data term~\cite{Brox04,Xu12}. Although different matching quantities have proved useful in solving the optical flow problem, finding a correct proportion of their contributions in the hybrid data term is non-trivial and requires a highly engineered data fusion model~\cite{Kim13}. An improper mixture of the brightness and gradient terms can severely affect the performance~\cite{Xu12}.
To avoid the aforementioned difficulties, feature descriptors are not explicitly defined and are instead learned in a variational setting~\cite{Sun08}. We use a CNN to train a \textit{pyramidal feature descriptor} (\emph{i.e. } a feature encoder)~\cite{Dosovitskiy15,Ilg17} which resembles data fidelity in variational methods and is prepared for establishing robust point correspondence later. Specifically, a given image pair is transformed from the spatial domain to the learned feature space in the form of two pyramids of multi-scale high-dimensional feature maps.
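A minimal sketch of such a pyramidal feature encoder is given below (assuming PyTorch; the channel widths and depths are illustrative only and are not those of the actual NetC).
\begin{verbatim}
# Minimal sketch (PyTorch assumed) of a pyramidal feature encoder.
import torch
import torch.nn as nn

class PyramidEncoder(nn.Module):
    def __init__(self, channels=(32, 64, 96)):
        super().__init__()
        self.levels = nn.ModuleList()
        c_in = 3
        for c_out in channels:
            # each level halves the spatial resolution (stride 2)
            self.levels.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                nn.LeakyReLU(0.1)))
            c_in = c_out

    def forward(self, img):
        feats, x = [], img
        for level in self.levels:
            x = level(x)
            feats.append(x)          # one feature map F_k(I) per level
        return feats

# The two images share the same (siamese) encoder:
enc = PyramidEncoder()
I1, I2 = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
F1s, F2s = enc(I1), enc(I2)
\end{verbatim}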
\vspace{0.1cm}
\noindent
\textbf{Image Warping.}
It has been proved that image warping effectively minimizes an energy functional by using a numerical method in a coarse-to-fine framework~\cite{Brox04, Papenberg06}. Intuitively, at each iteration the numerical solver displaces every pixel value of the second image in the image pair according to the constraints imposed in the functional so that the warped image has a visual appearance close to the first image. Image warping is practiced in FlowNet2~\cite{Ilg17} and SPyNet~\cite{Ranjan17} between cascaded networks and across pyramid levels, respectively.
However, warping an image and then generating the feature maps of the warped image, as done in the above CNN-based methods, are two ordered steps. We find that the two steps can be reduced to a single one by directly warping the feature maps of the second image, which have been provided by the feature encoder. This one-step \textit{feature warping} (f-warp) process reduces the more discriminative feature-space distance instead of the RGB-space distance between the two images. This makes LiteFlowNet more powerful and efficient in addressing the optical flow problem. We use the spatial transformer~\cite{Jaderberg15} for the f-warp.
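The following minimal sketch (assuming PyTorch; the function and variable names are illustrative) shows how the f-warp can be realized with a differentiable sampler, \emph{i.e. } a spatial transformer.
\begin{verbatim}
# Minimal sketch (PyTorch assumed) of feature warping (f-warp).
import torch
import torch.nn.functional as F

def f_warp(feat2, flow):
    """feat2: (B, C, H, W) features of I2; flow: (B, 2, H, W) in pixels."""
    B, _, H, W = feat2.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W),
                            indexing='ij')
    grid = torch.stack((xs, ys), dim=0).float().to(feat2.device)
    tgt = grid.unsqueeze(0) + flow               # sample at x + w(x)
    # normalize sampling coordinates to [-1, 1] for grid_sample
    tx = 2.0 * tgt[:, 0] / max(W - 1, 1) - 1.0
    ty = 2.0 * tgt[:, 1] / max(H - 1, 1) - 1.0
    grid_n = torch.stack((tx, ty), dim=3)        # (B, H, W, 2)
    return F.grid_sample(feat2, grid_n, align_corners=True)
\end{verbatim}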
\vspace{0.1cm}
\noindent
\textbf{Regularization.}
Merely using data fidelity for flow estimation is an ill-posed problem~\cite{Horn81}. One example is the one-to-many point correspondences in homogeneous regions of an image pair. With the co-occurrence between motion boundaries and intensity edges, the flow estimate is often smoothed by an anisotropic image-driven regularization~\cite{Werlberger09, Xu12}. However, the image-driven strategies are prone to over-segmentation artifacts in the textured image regions since image edges do not necessarily correspond to flow edges. More advanced methods overcome the previous shortcomings through the use of an anisotropic image- and flow-driven regularization~\cite{Sun08} and a complementary regularizer~\cite{Zimmer11}.
With the motivation to establish robust point correspondence in the learned feature space, we generalize the use of regularization from the spatial space to the feature space. This allows the flow field to be regularized by a \textit{feature-driven local convolution} (f-lconv) at each pyramid level. The kernels of such a local convolution are adaptive to the pyramidal features from the encoder, the flow estimate, and the occlusion probability map. This makes the flow regularization both flow- and image-aware. We name it the feature-driven local convolution layer in order to distinguish it from the local convolution (lconv) layer of which the filter weights are locally fixed in conventional CNNs~\cite{Taigman14}. We use the feature-driven convolution~\cite{Brabandere16} in our framework to regularize flow fields.
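A minimal sketch of the f-lconv is given below (assuming PyTorch; the per-pixel kernels are supposed to be predicted from the encoder features, the flow estimate and the occlusion map by a small network, which is left as a placeholder here).
\begin{verbatim}
# Minimal sketch (PyTorch assumed) of feature-driven local convolution.
import torch
import torch.nn.functional as F

def f_lconv(flow, kernels, k=3):
    """flow: (B, 2, H, W); kernels: (B, k*k, H, W), one kernel per
    pixel, predicted by a small network (not shown)."""
    B, C, H, W = flow.shape
    w = torch.softmax(kernels, dim=1)             # normalize each kernel
    patches = F.unfold(flow, k, padding=k // 2)   # (B, C*k*k, H*W)
    patches = patches.view(B, C, k * k, H, W)
    return (patches * w.unsqueeze(1)).sum(dim=2)  # adaptive local average
\end{verbatim}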
\begin{table*}[ht]
\small
\centering
\caption{A comparison of the major components used in the state-of-the-art optical flow CNNs. (Notes: $^{1}$We use the convention that the flow field at level 1 has the same spatial resolution as the given image pair. $^{2}$Flow inference from levels 7 to 3 is performed in each of the stacking networks except the fusion network. Flow fields resulting from FlowNet2-CSS and FlowNet2-SD are upsampled by a factor of 4 (\emph{i.e. } from level 3 to level 1) and then used as the inputs to the fusion network. $^{3}$The authors excluded the use of residual connections in the publicly released model.)}\label{tab: flow cnn comparison}
\scalebox{0.85}{
\begin{tabular}{ccccccc}
\hline
\multicolumn{1}{|c|}{}
&\multicolumn{1}{c|}{FlowNetS\cite{Dosovitskiy15}}
&\multicolumn{1}{c|}{FlowNetC\cite{Dosovitskiy15}}
&\multicolumn{1}{c|}{FlowNet2\cite{Ilg17}}
&\multicolumn{1}{c|}{SPyNet\cite{Ranjan17}}
&\multicolumn{1}{c|}{PWC-Net\cite{Sun18}}
&\multicolumn{1}{c|}{LiteFlowNet\cite{Hui18}} \\
\hline
\multicolumn{1}{|l|}{Architecture}
&\multicolumn{1}{c|}{U-Net}
&\multicolumn{1}{c|}{U-Net}
&\multicolumn{1}{c|}{U-Net}
&\multicolumn{1}{c|}{spatial pyramid}
&\multicolumn{1}{c|}{spatial pyramid}
&\multicolumn{1}{c|}{spatial pyramid} \\
\multicolumn{1}{|l|}{Stacking Multiple Networks}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{5 networks}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark} \\
\multicolumn{1}{|l|}{Multi-Level Flow Fields$^{1}$}
&\multicolumn{1}{c|}{levels: 7 -- 3}
&\multicolumn{1}{c|}{levels: 7 -- 3}
&\multicolumn{1}{c|}{levels: 7 -- 1$^{2}$}
&\multicolumn{1}{c|}{levels: 6 or 5 -- 1}
&\multicolumn{1}{c|}{levels: 7 -- 3}
&\multicolumn{1}{c|}{levels: 6 -- 2} \\
\multicolumn{1}{|l|}{Cost Volume}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{single, long range}
&\multicolumn{1}{c|}{single, long range}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{multiple, short range}
&\multicolumn{1}{c|}{multiple, short range} \\
\multicolumn{1}{|l|}{Warping}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{image, per network}
&\multicolumn{1}{c|}{image, per level}
&\multicolumn{1}{c|}{feature, per level}
&\multicolumn{1}{c|}{feature, per level} \\
\multicolumn{1}{|l|}{Flow Inference (per level)}
&\multicolumn{1}{c|}{direct}
&\multicolumn{1}{c|}{direct}
&\multicolumn{1}{c|}{direct}
&\multicolumn{1}{c|}{residual}
&\multicolumn{1}{c|}{direct$^{3}$}
&\multicolumn{1}{c|}{cascaded, residual} \\
\multicolumn{1}{|l|}{Flow Regularization}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{per level} \\
\hline
\end{tabular}}
\end{table*}
\vspace{0.1cm}
\noindent
\textbf{Our Design.}
The proposed network, dubbed LiteFlowNet~\cite{Hui18}, consists of a multi-scale feature encoder and a multi-scale flow decoder, as shown in Figure~\ref{fig:network structure}. The encoder maps a given image pair into two pyramids of multi-scale high-dimensional features~\cite{Dosovitskiy15,Ilg17}. The decoder then estimates optical flow in a coarse-to-fine framework~\cite{Ranjan17}. Specifically, at each pyramid level the decoder infers a flow field by selecting and using the features of the same resolution from the encoder. This design leads to a lighter and more efficient network compared to FlowNet~\cite{Dosovitskiy15} and FlowNet2~\cite{Ilg17}, which adopt the U-Net architecture~\cite{Ronneberger15} for flow inference. SPyNet~\cite{Ranjan17} uses a spatial pyramid network to infer a flow field at each pyramid level from the corresponding image pair in the image pyramid. In contrast, our network separates the processes of feature extraction and flow estimation into the encoder and the decoder, respectively. This helps us to better pinpoint the bottlenecks of accuracy and model size. In particular, our decoder uses a pair of feature maps from the encoder for flow inference instead of a pair of images.
At each pyramid level, we introduce a novel cascaded flow inference. Each inference has an f-warp layer that displaces the feature maps of the second image towards the first image using the flow estimate from the previous level, rather than image warping as practiced in FlowNet2~\cite{Ilg17} and SPyNet~\cite{Ranjan17}. A flow residue is then computed to reduce the feature-space distance between the images.
This design is advantageous over the conventional design of using a single network for flow inference.
First, the cascade progressively improves flow accuracy, allowing an early correction of the estimate without passing larger errors to the next pyramid level.
Second, this design allows seamless integration with descriptor matching. We assign a matching network to the first inference. Consequently, a pixel-accuracy flow field is generated first and then refined to sub-pixel accuracy by the subsequent inference network.
Since the feature-space distance between the images has already been reduced by the f-warp at each pyramid level, a short searching range rather than a long one~\cite{Dosovitskiy15, Ilg17} suffices to establish a cost volume. Moreover, matching can be performed at sampled positions to aggregate a sparse cost volume. This effectively reduces the computational burden incurred by the explicit matching.
After the cascaded flow inference, the flow field is further regularized by an f-lconv layer.
The effectiveness of the aforementioned designs is illustrated in Figure~\ref{fig:overview}. In summary, the contributions of this work are fourfold:
\begin{enumerate}
\item We present a study to bridge the correspondences between the well-established principles in conventional methods for optical flow estimation and optical flow CNNs.
\item More details of our earlier work LiteFlowNet~\cite{Hui18} are presented.
\item LiteFlowNet2, another lightweight convolutional network, is evolved from LiteFlowNet~\cite{Hui18} to better address the problem of optical flow estimation by improving flow accuracy and computation time.
\item LiteFlowNet2 outperforms the state-of-the-art FlowNet2~\cite{Ilg17} on the Sintel and KITTI benchmarks, while being 25.3 times smaller in model size and 3.1 times faster in runtime. The optical flow processing frequency of LiteFlowNet2 reaches up to 25 flow fields per second for a Sintel image pair of size $1024 \times 436$ on an NVIDIA GTX 1080 GPU. Our network protocol and trained models are made publicly available on \url{https://github.com/twhui/LiteFlowNet2}.
\end{enumerate}
\section{Related Work}
The problem of optical flow estimation has been widely studied in the literature since the 1980s. A detailed review is beyond the scope of this work. Here, we briefly review some of the major approaches, namely variational, machine learning, and CNN-based methods.
\vspace{0.1cm}
\noindent \textbf{Variational Methods.} Since the pioneering work by Horn and Schunck~\cite{Horn81}, variational methods have dominated the literature. Brox \emph{et al. } address illumination changes by combining the brightness and gradient constancy assumptions~\cite{Brox04}. Brox \emph{et al. } integrate rich descriptors into a variational formulation~\cite{Brox11}. In DeepFlow~\cite{Weinzaepfel13}, Weinzaepfel \emph{et al. } propose to correlate multi-scale patches and incorporate this as the matching term in a functional. In PatchMatch Filter~\cite{Lu13}, Lu \emph{et al. } establish dense correspondence using the superpixel-based PatchMatch~\cite{Barnes09}. Revaud \emph{et al. } propose EpicFlow, which uses externally matched flows as the initialization and then performs interpolation~\cite{Revaud15}. Zimmer \emph{et al. } design a complementary regularizer that exploits directional information from the constraints imposed in the data term~\cite{Zimmer11}. Our network, which infers optical flow and performs flow regularization, is inspired by data fidelity and regularization in variational methods.
\vspace{0.1cm}
\noindent \textbf{Machine Learning Methods.} Black \emph{et al. } propose to represent complex image motion as a linear combination of learned basis vectors~\cite{Black97}. Roth \emph{et al. } formulate the prior probability of a flow field as a Field-of-Experts model~\cite{Roth05a} that captures higher-order spatial statistics~\cite{Roth05b}. Sun \emph{et al. } study the probabilistic model of brightness inconstancy in a high-order random field framework~\cite{Sun08}. Nir \emph{et al. } represent image motion using an over-parameterization model~\cite{Nir08}. Rosenbaum \emph{et al. } model the local statistics of optical flow using Gaussian mixtures~\cite{Rosenbaum13}. Given a set of sparse matches, Wulff \emph{et al. } propose to regress them to a dense flow field using a set of basis flow fields (PCA-Flow)~\cite{Wulff15}. It can be shown that the parameterized models~\cite{Black97, Nir08, Wulff15} are related to the flow inference in CNNs.
\vspace{0.1cm}
\noindent \textbf{CNN-Based Methods.} A comparison of the major components used in the state-of-the-art optical flow CNNs is summarized in Table~\ref{tab: flow cnn comparison}.
In FlowNet~\cite{Dosovitskiy15}, Dosovitskiy \emph{et al. } use an optional post-processing step that involves energy minimization to reduce the smoothing effect across flow boundaries. This process is not end-to-end trainable. In contrast, we present an end-to-end approach that performs in-network flow regularization using an f-lconv layer, which plays a similar role as the regularization term in variational methods.
In FlowNet2~\cite{Ilg17}, Ilg \emph{et al. } introduce a huge network cascade (over 160M parameters) that consists of variants of FlowNet (FlowNetS and FlowNetC). The cascade improves flow accuracy at the expense of model size and computational complexity.
A compact network termed SPyNet~\cite{Ranjan17} from Ranjan \emph{et al. } uses a spatial pyramid network. It warps the second image toward the first one using the estimated flow field from the previous level. But its accuracy is below that of FlowNet2 (KITTI 2012~\cite{Geiger12}: 4.1 vs 1.8 measured in AEE; KITTI 2015~\cite{Menze15}: 35.07\% vs 11.48\% measured in Fl-all). In contrast, LiteFlowNet infers a flow field at each pyramid level from the corresponding feature pair in the encoder and uses feature warping. LiteFlowNetX, a small-sized variant of our network, outperforms SPyNet while being 1.33 times smaller in model size. Zweig \emph{et al. } present a network to interpolate third-party sparse flows, but it requires an off-the-shelf edge detector~\cite{Zweig17}.
DeepFlow~\cite{Weinzaepfel13}, which involves convolution and pooling operations, is however not a CNN, since the ``filter weights'' are non-trainable image patches. It uses correlation according to the terminology of FlowNet.
A notable concurrent work to LiteFlowNet is PWC-Net~\cite{Sun18}, which is about \textbf{18 times smaller} than FlowNet2~\cite{Ilg17}. LiteFlowNet~\cite{Hui18}, a more lightweight CNN, is about \textbf{30 times smaller} than FlowNet2. Both works use coarse-to-fine flow inference, feature warping, and cost volumes for optical flow estimation, and both were presented in CVPR 2018. However, there are a number of distinctions between them. First, LiteFlowNet incorporates the cascaded flow inference to estimate residual flow at each pyramid level. Specifically, the pixel-level flow estimate that is generated by the cost-volume flow decoder is refined to the sub-pixel level. Second, flow fields resulting from the cascaded flow inference are further regularized by feature-driven local convolutions. Third, densely connected layers and feed-forwarding of feature maps from the previous level are not used in each pyramid level of the decoder. Fourth, LiteFlowNet also benefits from the use of stage-wise training (more details in Section~\ref{sec:experiments liteflownet}) to improve the optical flow accuracy and reduce the training time. These differences make LiteFlowNet more efficient in terms of the number of model parameters for solving the optical flow problem, and it therefore attains a smaller model size than PWC-Net.
An alternative approach for establishing dense correspondence is to match image patches. Zagoruyko \emph{et al. } are the first to use CNN-feature matching~\cite{Zagoruyko15}. G\"uney \emph{et al. } use feature representation and formulate optical flow estimation in an MRF~\cite{Guney16}. Bailer \emph{et al. }\cite{Bailer17} use multi-scale features and then perform feature matching as in Flow Fields~\cite{Bailer15}. Although pixel-wise matching can establish accurate point correspondence, the computational demand is too high for practical use (it takes several seconds even when a GPU is used). As a tradeoff, Dosovitskiy \emph{et al. }\cite{Dosovitskiy15} and Ilg \emph{et al. }\cite{Ilg17} perform feature matching only at a reduced spatial resolution. In contrast, we reduce the computational burden of feature matching by using short-range matching of warped CNN features and a sub-pixel refinement at every pyramid level. We further reduce the computation cost by constructing sparse cost volumes at high-resolution pyramid levels.
Jaderberg \emph{et al. } propose a spatial transformer that allows spatial manipulation of feature maps within the network~\cite{Jaderberg15}. We use the spatial transformer for the f-warp. Specifically, given a high-dimensional feature map as the input, each feature vector\footnote{We can also use the f-warp layer to displace each channel differently when multiple flow fields are supplied. The usage, however, is beyond the scope of this work.} is individually displaced to a new location by the f-warp layer in accordance with the displacement vector at the corresponding position in the computed flow field.
In comparison to FlowNet2~\cite{Ilg17} and SPyNet~\cite{Ranjan17}, in which the spatial transformation is limited to images, LiteFlowNet is a more generic warping network that warps high-level CNN features.
Brabandere \emph{et al. } propose a network to predict new frame(s) within a given video~\cite{Brabandere16}, in which the filters are generated dynamically conditioned on an input. Inspired by flow regularization in variational methods~\cite{Horn81, Sun08, Werlberger09, Xu12, Zimmer11}, we use the feature-driven convolution from Brabandere \emph{et al. } in our framework to regularize flow fields.
\section{LiteFlowNet}
\label{sec:liteflownet}
Two lightweight sub-networks that are specialized in \textit{pyramidal feature extraction} and \textit{optical flow estimation} constitute LiteFlowNet. Figure~\ref{fig:network structure} shows an overview of its network architecture. Since the spatial dimension of the feature maps contracts in the feature encoder and that of the flow fields expands in the flow decoder, we name the two sub-networks NetC and NetE, respectively. NetC transforms a given image pair into two pyramids of multi-scale high-dimensional features. NetE consists of cascaded flow inference and regularization modules. It estimates flow fields from low to high spatial resolutions.
\vspace{0.1cm}
\noindent \textbf{Pyramidal Feature Extraction.} As shown in Figure~\ref{fig:network structure}, NetC is a two-stream sub-network in which the filter weights are shared across the two streams. Each of them functions as a \textit{pyramidal feature descriptor} that transforms a given image $I$ to a pyramid of multi-scale high-dimensional features $\{\mathcal{F}_{k}(I)\}$ from the highest spatial resolution ($k = 1$) to the lowest spatial resolution ($k = L$). The pyramidal features are generated by stride-1 and stride-$s$ convolutions with the reduction of spatial resolution by a factor of $s$ down the inverted pyramid. In the following, we omit the subscript $k$ that indicates the level of pyramid for brevity. We use $\mathcal{F}_{i}$ to represent the extracted CNN features for $I_{i}$. When we discuss the operations in a pyramid level, the same operations are applicable to other levels.
We follow the design principle that high-resolution feature maps require a large receptive field for convolutional processing. For every decrement of two pyramid levels, we assign a smaller receptive field than at the previous level. For a 6-level feature encoder, the sizes of the receptive field are set to 7, 7, 5, 5, 3, and 3 for levels 6 to 1, respectively. Since receptive fields accumulate across stacked convolution layers, we improve the computational efficiency by replacing a large-kernel convolution layer with multiple small-kernel convolution layers. Except for the first convolution layer in NetC, which uses a $7\times7$ kernel, $3\times3$ kernels are used for the subsequent layers, and the numbers of convolution layers are set to 3, 2, 2, 1, and 1 for levels 5 to 1, respectively. More details about the network architecture can be found in \nameref{sec:appendix}.
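To make the kernel-stacking argument concrete, the following is a minimal Python sketch (illustrative only; the function name and the parameter-count comparison are ours, not part of the released model) of how the effective receptive field accumulates over stacked stride-1 convolutions:
\begin{verbatim}
# Minimal sketch: effective receptive field of stacked stride-1
# convolutions. Each extra k x k layer enlarges the field by (k - 1).
def receptive_field(kernel_sizes):
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# Three stacked 3x3 layers cover the same 7x7 area as a single 7x7
# layer, but with 3 * 3^2 = 27 instead of 7^2 = 49 weights per
# input-output channel pair, hence the efficiency gain.
assert receptive_field([3, 3, 3]) == receptive_field([7]) == 7
\end{verbatim}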
\vspace{0.1cm}
\noindent \textbf{Feature Warping.} We denote ${\bf x}$ as a point in the image domain $\Omega \subset \mathbb{R}^{2}$. At each pyramid level, a flow field ${\bf u}$, \emph{i.e. } a function ${\bf u}: \Omega \rightarrow \mathbb{R}^{2}$, is inferred from the features $\mathcal{F}_{1}$ and ${\mathcal F}_{2}$ of images $I_{1}$ and $I_{2}$. Flow inference becomes more challenging if $I_{1}$ and $I_{2}$ are captured far away from each other, because a correspondence needs to be searched over a large area. Motivated by the \textit{image warping} used in conventional methods~\cite{Brox04, Papenberg06} and recent CNNs~\cite{Ilg17, Ranjan17} for addressing large-displacement flow, we propose to reduce the feature-space distance between $\mathcal{F}_{1}$ and ${\mathcal F}_{2}$ by \textit{feature warping} (f-warp) prior to recovering the flow field. Specifically, ${\mathcal F}_{2}$ is warped towards ${\mathcal F}_{1}$ by f-warp via a flow estimate ${\bf u}$, \emph{i.e. } $\widetilde {\mathcal F}_{2}({\bf x}) \triangleq {\mathcal F}_{2}({\bf x}+{\bf u}) \sim {\mathcal F}_{1}({\bf x})$. This allows our network to infer the residual flow $\Delta {\bf u}$ between ${\mathcal F}_{1}$ and the warped ${\mathcal F}_{2}$ (\emph{i.e. } $\widetilde {\mathcal F}_{2}$), which has a smaller flow magnitude, rather than the complete flow field ${\bf u}$, which is more difficult to infer (more details in Section~\ref{sec:cascaded flow inference}).
Unlike conventional methods, f-warp is performed on high-level CNN features rather than on images. This makes our network more powerful and efficient in addressing the optical flow problem. To allow end-to-end training, ${\mathcal F}$ is interpolated to ${\widetilde {\mathcal F}}$ for any sub-pixel displacement ${\bf u}$ as follows:
\begin{equation}\label{bi interpolation}
\widetilde{\mathcal F}({\bf x}) = \sum_{{\bf x}_{s}^{i} \in {\mathcal N}({\bf x}_{s})}{\mathcal F}({\bf x}_{s}^{i})\left(1-\left| x_{s} - x_{s}^{i}\right|\right) \left(1-\left| y_{s} - y_{s}^{i}\right|\right),
\end{equation}
where ${\bf x}_{s} = {\bf x}+{\bf u} = (x_{s}, y_{s})^{\top}$ denotes the source coordinates in the input feature map ${\mathcal F}$ that defines the sample point, ${\bf x} = (x, y)^{\top}$ denotes the target coordinates of the regular grid in the interpolated feature map $\widetilde{\mathcal F}$, and ${\mathcal N}({\bf x}_{s})$ denotes the four pixel neighbors of ${\bf x}_{s}$. The above bilinear interpolation allows back-propagation during training as its gradients can be efficiently computed~\cite{Jaderberg15}.
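For concreteness, the following is a minimal PyTorch sketch of the f-warp in Eq.~\eqref{bi interpolation}. The released implementation is in Caffe; this re-implementation with \texttt{grid\_sample}, including the function name \texttt{f\_warp}, is an illustrative assumption rather than the production code:
\begin{verbatim}
import torch
import torch.nn.functional as F

def f_warp(feat, flow):
    """Warp feat (N, C, H, W) by flow (N, 2, H, W) given in pixels."""
    n, _, h, w = feat.shape
    # Regular target grid of x (column) and y (row) coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=feat.dtype, device=feat.device),
        torch.arange(w, dtype=feat.dtype, device=feat.device),
        indexing='ij')
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0)   # (1, 2, H, W)
    src = grid + flow                                  # x_s = x + u
    # Normalize source coordinates to [-1, 1] for grid_sample.
    src_x = 2.0 * src[:, 0] / max(w - 1, 1) - 1.0
    src_y = 2.0 * src[:, 1] / max(h - 1, 1) - 1.0
    src = torch.stack((src_x, src_y), dim=-1)          # (N, H, W, 2)
    # Bilinear sampling; gradients flow through the interpolation.
    return F.grid_sample(feat, src, mode='bilinear',
                         padding_mode='zeros', align_corners=True)
\end{verbatim}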
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figure_arxiv2/M-S.png}
\caption{A cascaded flow inference module $M$:$S$ in NetE. It consists of a descriptor matching unit $M$ and a sub-pixel refinement unit $S$. In $M$, f-warp transforms the high-level feature ${\mathcal F}_{2}$ to $\widetilde{\mathcal F}_{2}$ using the upscaled (by a factor of 2) flow estimate $2 {\bf u}^{ \uparrow 2}$ from the previous pyramid level. In $S$, $\mathcal F_{2}$ is warped by the flow estimate ${\bf u}_{m}$ resulting from $M$. Residual flow $\Delta {\bf u}_{m}$ is inferred from the cost volume $V$ and $\Delta {\bf u}_{s}$ is used to correct ${\bf u}_{m}$ due to the pixel-level cost aggregation. In comparison to the residual flow $\Delta {\bf u}_{m}$, more flow adjustment can be found on flow boundaries in $\Delta {\bf u}_{s}$.}
\label{fig:MS}
\end{figure}
\subsection{Cascaded Flow Inference}
\label{sec:cascaded flow inference}
At each pyramid level of NetE, flow field inference is performed in a two-step procedure. An overview of the working mechanism is illustrated in Figure~\ref{fig:MS}. First, the pixel-by-pixel matching of high-level feature vectors across a given image pair yields a coarse flow estimate. Second, a subsequent refinement on the coarse flow further improves it to sub-pixel accuracy. The use of such a cascaded flow inference is novel in the literature.
\vspace{0.1cm}
\noindent \textbf{First Flow Inference -- Descriptor Matching.} Point correspondence between $I_{1}$ and $I_{2}$ is established through computing the correlation (\emph{i.e. } dot product) of high-level feature vectors in individual pyramidal features ${\mathcal F}_{1}$ and ${\mathcal F}_{2}$ as follows~\cite{Dosovitskiy15}:
\begin{equation}\label{eq:matching cost}
c({\bf x},{\bf d}) = {\mathcal F}_{1}({\bf x}) \cdot {\mathcal F}_{2}({\bf x}+{\bf d}) / N,
\end{equation}
where $c$ is the matching cost between point ${\bf x}$ in ${\mathcal F}_{1}$ and point ${\bf x}+{\bf d}$ in ${\mathcal F}_{2}$, ${\bf d} \in {\mathbb Z^{2}}$ (a 2D integer set) is the displacement vector from ${\bf x}$, and $N$ is the length of the feature vector. The x- and y-components of ${\bf d}$ are bounded by $\pm D$, where $D \in {\mathbb Z}_{+}$ (a 1D positive integer set). A cost volume $V$ is built by aggregating all the matching costs $c({\bf x},{\bf d})$ into a 3D grid. At pyramid level $k$, the dimension of $V$ is $\frac{H}{2^{k-1}} \times \frac{W}{2^{k-1}} \times (2D+1)^{2}$ for an image pair of size $H \times W$.
Unlike the conventional construction of cost volumes~\cite{Dosovitskiy15, Ilg17}, we reduce the computational burden in three ways:
\begin{enumerate}
\item \textit{Multi-Level Short Searching Range}: Matching of feature vectors between ${\mathcal F}_{1}$ and ${\mathcal F}_{2}$ is performed within a short searching range at every pyramid level instead of using a long searching range only at a high-resolution pyramid level.
\item \textit{Feature Warping}: We reduce the feature-space distance between ${\mathcal F}_{1}$ and ${\mathcal F}_{2}$ prior to constructing the cost volume. ${\mathcal F}_{2}$ is warped towards ${\mathcal F}_{1}$ by a f-warp layer using the flow estimate from the previous level.
\item \textit{Sparse Cost Volume}: We perform feature matching only at sampled positions in the pyramid levels with high spatial resolution. The sparse cost volume is interpolated in the spatial dimension to fill in the missing matching costs at the unsampled positions.
\end{enumerate}
The first two techniques effectively reduce the required searching space, while the third technique reduces the frequency of matching per pyramid level. This in turn speeds up the construction of the cost volume. A minimal sketch of the short-range construction is given below.
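The following minimal PyTorch sketch illustrates the short-range construction of Eq.~\eqref{eq:matching cost}; the loop-based implementation and the zero padding outside the image domain are simplifying assumptions of this sketch, not the released Caffe code:
\begin{verbatim}
import torch
import torch.nn.functional as F

def cost_volume(f1, f2_warped, D=3):
    """f1, f2_warped: (N, C, H, W); returns (N, (2D+1)^2, H, W).
    Each output channel holds the matching cost c(x, d) for one d."""
    n, c, h, w = f1.shape
    f2_pad = F.pad(f2_warped, (D, D, D, D))  # zero-pad search window
    costs = []
    for dy in range(2 * D + 1):
        for dx in range(2 * D + 1):
            f2_shift = f2_pad[:, :, dy:dy + h, dx:dx + w]
            # Dot product of feature vectors, normalized by length c.
            costs.append((f1 * f2_shift).sum(dim=1) / c)
    return torch.stack(costs, dim=1)
\end{verbatim}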
In the descriptor matching unit $M$, the residual flow $\Delta{\bf u}_{m}$ between ${\mathcal F}_{1}$ and warped ${\mathcal F}_{2}$, \emph{i.e. } $\widetilde {\mathcal F}_{2}({\bf x}) = {\mathcal F}_{2}({\bf x} + s{\bf u}^{\uparrow s})$, is inferred from the constructed cost volume $V$ as illustrated in Figure~\ref{fig:MS}. A complete flow field ${\bf u}_{m}$ is computed as follows:
\begin{equation}
{\bf u}_{m} = \underbrace{M\big(V({\mathcal F}_{1}, \widetilde {\mathcal F}_{2}; D)\big)}_{\Delta {\bf u}_{m}} + s{\bf u}^{\uparrow s},
\end{equation}
where the flow field ${\bf u}$ from the preceding level is upsampled in spatial resolution (denoted by ``$\uparrow$$s$'') and magnitude (multiplied by a scalar $s$) to $s{\bf u}^{\uparrow s}$ in order to match the resolution of the pyramidal features at the current level. For consecutive levels, we use $s=2$.
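Putting the pieces together, a minimal sketch of the descriptor matching update follows, reusing the hypothetical \texttt{f\_warp} and \texttt{cost\_volume} sketches above; \texttt{M} stands for the matching decoder, whose layers are omitted:
\begin{verbatim}
import torch.nn.functional as F

def descriptor_matching(M, f1, f2, flow_prev, s=2, D=3):
    # Upsample the previous-level flow in resolution and magnitude.
    flow_up = s * F.interpolate(flow_prev, scale_factor=s,
                                mode='bilinear', align_corners=False)
    f2_warped = f_warp(f2, flow_up)       # reduce feature distance
    V = cost_volume(f1, f2_warped, D)     # short-range matching costs
    return M(V) + flow_up                 # u_m = residual + upsampled
\end{verbatim}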
\vspace{0.1cm}
\noindent
\textbf{Second Flow Inference -- Sub-Pixel Refinement.} Since the cost volume in the descriptor matching unit is aggregated by measuring pixel-by-pixel correlation, the flow estimate ${\bf u}_{m}$ resulting from the first inference is only accurate up to the pixel level. We introduce a second flow inference in the wake of descriptor matching, as shown in Figure~\ref{fig:MS}. It aims to refine the pixel-level flow field ${\bf u}_{m}$ resulting from the descriptor matching unit to sub-pixel accuracy. This prevents erroneous flows from being amplified by upsampling and passed to the next pyramid level. Specifically, ${\mathcal F}_{2}$ is warped to a new $\widetilde {\mathcal F}_{2}$ using the current flow estimate ${\bf u}_{m}$. To correct ${\bf u}_{m}$, the sub-pixel refinement unit $S$ yields a more accurate flow field ${\bf u}_{s}$ by minimizing the feature-space distance between ${\mathcal F}_{1}$ and $\widetilde {\mathcal F}_{2}$ through computing a residual flow $\Delta {\bf u}_{s}$ as follows:
\begin{equation}
{\bf u}_{s} = \underbrace{S\big({\mathcal F}_{1}, \widetilde {\mathcal F}_{2}, {\bf u}_{m}\big)}_{\Delta {\bf u}_{s}} + {\bf u}_{m}.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figure_arxiv2/fold.png}
\caption{Folding and packing of f-lconv filters $\{g\}$. The $(x,y)$-entry of a 3D tensor $\bar G_{c}$ (the cube on the right) with size $H \times W \times \omega^{2}$ is a 3D column vector of length $\omega^{2}$. It corresponds to the unfolded f-lconv filter $g_{x,y,c}$ (the plane on the left) with size $\omega \times \omega$ to be applied at position $(x,y)$ and channel $c$ of the vector-valued feature $\mathcal{F}$.}
\label{fig:fold}
\end{figure}
\subsection{Flow Regularization}
\label{sec:flow regularization}
Cascaded flow inference resembles the role of data fidelity in conventional minimization methods. However, using the data term alone, vague flow boundaries and other undesired artifacts can appear in flow fields~\cite{Werlberger09, Zimmer11}. To tackle this problem, a \textit{feature-driven local convolution} (f-lconv) is used to regularize each flow field resulting from the cascaded flow inference. The operation of f-lconv on a flow field is well-governed by the Laplacian formulation of the diffusion of pixel values~\cite{Tschumperle05} (see Section~\ref{sec:regularization term} for more details). In contrast to the \textit{local convolution} (lconv) used in conventional CNNs~\cite{Taigman14}, the f-lconv is more general: not only is a distinct filter used for each position of the flow field, but each filter is also adaptively constructed to regularize its flow vector by a weighted average of the flow vectors from nearby pixels.
Consider the general case of a vector-valued feature $\mathcal{F}$ to be regularized, with spatial dimension $H \times W$ and $C$ channels. Define $G = \{g\}$ as the set of filters used in an f-lconv layer. The operation of an f-lconv filter $g_{x,y,c}$ of size $\omega \times \omega$ on $\mathcal{F}$ at position $(x, y)$ and channel $c$ is formulated as follows:
\begin{equation}\label{eq:local convolution1}
{\mathcal F}_{r}(x,y,c) = \sum_{(x_{i}, y_{i}) \in {\mathcal N}(x,y)}g_{x,y,c}(x_{i},y_{i}) \mathcal{F}(x + x_{i},y + y_{i},c),
\end{equation}
where ${\mathcal F}_{r}(x,y,c)$ is the scalar output and ${\mathcal N}(x,y)$ denotes the neighborhood containing $\omega \times \omega$ pixels centered at position $(x,y)$.
To regularize a flow field, the f-lconv filters need to be specialized: a filter should behave as an averaging filter if the variation of the flow vectors over a patch is supposed to be smooth, but it should not over-smooth flow vectors across flow boundaries. To this end, we design a CNN unit $R_{\mathcal{D}}$ to generate a feature-driven variation metric $\mathcal{D}$ with dimension $H \times W \times \omega \times \omega \times C$\footnote{For the case of a flow field, the dimension of $\mathcal{D}$ is $H \times W \times \omega \times \omega \times 2$, as a flow field has 2 channels. For the purpose of a lightweight implementation, however, both channels of a flow field are regularized equally, \emph{i.e. } $C=1$.}. It predicts the local flow variation over a patch of size $\omega \times \omega$ at all positions in a flow field using the pyramidal feature $\mathcal{F}_{1}$, the flow field ${\bf u}_{s}$ from the cascaded flow inference, and the occlusion probability map\footnote{We use the $L_{2}$ brightness error $||I_{2}({\bf x}+{\bf u})-I_{1}({\bf x})||_{2}$ between the warped second image and the first image as the occlusion probability map.} $O$ as follows:
\begin{equation}\label{eq:distance metric D}
\mathcal{D} = R_{\mathcal{D}}(\mathcal{F}_{1}, {\bf u}_{s}, O).
\end{equation}
With the introduction of feature-driven variation metric $\mathcal{D}$, each filter $g$ of f-lconv is constructed as follows:
\begin{equation}
g_{x,y,c} = \frac{\text{exp}(-\mathcal{D}(x,y,0,0,c)^{2})}{\sum_{(m,n) \in {\mathcal N}(x,y)} \text{exp}(-\mathcal{D}(x,y,m,n,c)^{2})}.
\end{equation}
We use the negative tail of the exponential function to constrain the values of the f-lconv filters to $[0, 1]$, as the rapidly growing positive tail makes the training of the f-lconv more difficult.
Here, we provide a mechanism to perform f-lconv efficiently. For a $C$-channel input $\mathcal{F}$, we use $C$ tensors $\bar G_{1}, ..., \bar G_{C}$ to store the f-lconv filter set $G$. As illustrated in Figure~\ref{fig:fold}, each f-lconv filter $g_{x,y,c}$ is folded into a 3D column vector of length $\omega^{2}$ and then packed into the $(x,y)$-entry of a 3D tensor $\bar G_{c}$ with size $H \times W \times \omega^{2}$. The same folding and packing operations are also applied to each patch in each channel of $\mathcal{F}$. This results in $C$ tensors $\bar F_{1}, ..., \bar F_{C}$ for $\mathcal{F}$. In this way, Equation~\eqref{eq:local convolution1} can be reformulated as:
\begin{equation}\label{eq:local convolution2}
{\mathcal F}_{r}(c) = \bar G_{c} \odot \bar F_{c},
\end{equation}
where ``$\odot$'' denotes the element-wise dot product between the corresponding column vectors of the tensors. With a slight abuse of notation, ${\mathcal F}_{r}(c)$ denotes the $xy$-slice at channel $c$ in the regularized $C$-channel feature $\mathcal{F}_{r}$. Equation~\eqref{eq:local convolution2} reduces the dimension of the tensors from $H \times W \times \omega^{2}$ (right-hand side, prior to the dot product) to $H \times W$ (left-hand side).
To summarize, ${\bf u}_{s}$ resulting from the cascaded flow inference is adaptively regularized by the flow regularization module $R$ using a set of f-lconv filters $G$ as follows:
\begin{equation}
{\bf u}_{r} = R({\bf u}_{s}; G).
\end{equation}
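A minimal PyTorch sketch of the whole f-lconv operation of Eqs.~\eqref{eq:local convolution1}--\eqref{eq:local convolution2} follows. It assumes $C=1$ as in the lightweight setting above, and it takes the variation metric as a given input rather than producing it with the $R_{\mathcal{D}}$ unit:
\begin{verbatim}
import torch
import torch.nn.functional as F

def f_lconv(u, D_var, omega=3):
    """u: one flow channel (N, 1, H, W).
    D_var: variation metric (N, omega^2, H, W) from a CNN unit.
    Returns the regularized channel (N, 1, H, W)."""
    n, _, h, w = u.shape
    # Feature-driven filters: softmax of -D^2 over each omega x omega
    # window, so every position gets its own averaging kernel whose
    # weights lie in [0, 1] and sum to one.
    G = F.softmax(-D_var ** 2, dim=1)
    # Fold omega x omega patches of u into column vectors.
    U = F.unfold(u, kernel_size=omega, padding=omega // 2)
    U = U.view(n, omega * omega, h, w)
    # Element-wise dot product between corresponding columns.
    return (G * U).sum(dim=1, keepdim=True)
\end{verbatim}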
\section{Correspondences between Optical Flow CNNs and Variational Methods}
We first provide a brief review of optical flow estimation using variational methods. In the next two sub-sections, we bridge the correspondences between optical flow CNNs and classical variational methods.
Consider an image sequence $I({\bf x}, t): {\mathbb{R}}^{3} \rightarrow \mathbb{R}$ with ${\bf x} = (x, y)^{\top} \in \Omega$ over a rectangular spatial domain $\Omega \subset \mathbb{R}^{2}$ and a temporal dimension $t$. The optical flow field ${\bf u}: \Omega \rightarrow {\mathbb{R}}^{2}$ that is induced by the spatial motion of the scene and/or the camera itself corresponds to the displacement vector field between images $I_{1}$ (at $t = 1$) and $I_{2}$ (at $t = 2$). The flow field can be estimated by minimizing an energy functional $E$ of the general form~\cite{Zimmer11}:
\begin{equation}\label{eq:functional}
\begin{split}
E({\bf u}) &= E_{dat}({\bf u}) + \lambda E_{reg}(\nabla{\bf u}) \\
&= \int_{\Omega} \big(e_{dat}({\bf u}) + \lambda e_{reg}(\nabla{\bf u})\big) d{\bf x},
\end{split}
\end{equation}
where $e_{dat}$ and $e_{reg}$ represent the data and regularization costs respectively, and $\lambda > 0$ is the smoothness weight.
\subsection{Data Term}
Point correspondence across a pair of images is imposed in the data term of Eq.~\eqref{eq:functional} as a combination of several matching quantities $\{D_{i}\}$ as follows~\cite{Zimmer11, Kim13}:
\begin{equation}\label{eq:data term}
E_{dat}({\bf u}) = \int_{\Omega} \sum \gamma_{i} D_{i} (I_{1}, I_{2}) d{\bf x},
\end{equation}
where $\gamma_{i}$ is the weighting factor for $D_{i}$. Two popular matching quantities are image brightness constancy assumption $\Psi\big(\left| I_{2}({\bf x} + {\bf u} ) - I_{1}({\bf x}) \right|^{2} \big)$~\cite{Horn81} and gradient constancy assumption $\Psi\big(\left| \nabla I_{2}({\bf x} + {\bf u}) - \nabla I_{1}({\bf x}) \right|^{2} \big)$~\cite{Brox04}, where $\Psi$ is a robust penalty function. Other higher-order constancy data terms are also widely used~\cite{Papenberg06}.
The contributions of the different matching quantities need to be balanced by appropriate weighting factors~\cite{Xu12, Kim13}. It is also necessary to maintain the differentiability of both the data and regularization (Section~\ref{sec:regularization term}) terms, because Eq.~\eqref{eq:functional} needs to be solved using the Euler-Lagrange equation.
In comparison to conventional methods, state-of-the-art optical flow networks do not explicitly define the matching quantities $\{D_{i}\}$. Back2Basics~\cite{Yu16} uses a photometric loss that is computed as the difference between the first image and the warped second image. SPyNet~\cite{Ranjan17} uses a pair of images from the image pyramids to generate a flow field at the corresponding pyramid level. PWC-Net~\cite{Sun18} and LiteFlowNet~\cite{Hui18} use a learnable feature encoder instead. In more detail, we train NetC of LiteFlowNet as a CNN-based \textit{pyramidal feature descriptor} $\mathcal{F}(I): {\mathbb{R}}^{2}\rightarrow {\mathbb{R}}^{N}$ that transforms a given image pair $(I_{1}, I_{2})$ into two pyramids of multi-scale high-dimensional features. With the introduction of the feature descriptor, the \textit{cascaded flow inference} in NetE presented in Section~\ref{sec:cascaded flow inference} is trained to minimize the difference between the high-level features $\mathcal{F}_{2}$ of $I_{2}$ and $\mathcal{F}_{1}$ of $I_{1}$ by computing the dense correspondence between them. In other words, the feature encoders used in LiteFlowNet and other optical flow CNNs~\cite{Ilg17,Ranjan17,Sun18} resemble the role of the data term in variational methods.
\subsection{Regularization Term}
\label{sec:regularization term}
A flow field computed merely from data fidelity is fragile to outliers. The energy functional is therefore often augmented to enforce dependency between neighboring flow vectors~\cite{Horn81}. Regularization of a vector field can be viewed as a diffusion of pixel values~\cite{Tschumperle05}. By applying the Euler-Lagrange equation to Eq.~\eqref{eq:functional}, the regularization component is given by:
\begin{equation}\label{eq:divergence form}
\text{div}\left(\partial_{\nabla{\bf u}} E_{reg}(\nabla{\bf u}) \right) = \text{div}({\bf D} \nabla{\bf u}),
\end{equation}
where ${\bf D}$ is a $2 \times 2$ diffusion tensor.
The above divergence formulation can also be rewritten into an oriented Laplacian form as follows:
\begin{equation}\label{eq:Laplacian form}
\text{div}({\bf D} \nabla{\bf u}) = \text{trace}({\bf T}{\bf H}_{i}), i = 1, 2,
\end{equation}
where ${\bf H}_{i}$ is the Hessian matrix of the $i$-th vector component of the flow field and ${\bf T}$ is a $2 \times 2$ tensor. The solution of Eq.~\eqref{eq:Laplacian form} is given by:
\begin{equation}\label{eq:flow smoothing}
{\bf u} = K(\textbf{T}) \ast {\bf u}',
\end{equation}
where ``$\ast$'' denotes a convolution, $K$ is a 2D oriented Gaussian kernel (the exact structure of $K$ depends on the ${\bf D}$ used in $E_{reg}$), and ${\bf u}'$ is the intermediate flow field generated from the data term~\cite{Xiao06}. In other words, enforcing a smoothness constraint on the flow field is equivalent to applying a convolution with a 2D oriented Gaussian kernel to the intermediate flow field generated by the data term.
Unlike the smoothing kernel in Eq.~\eqref{eq:flow smoothing}, which requires an engineered regularizing structure, we use \textit{feature-driven local convolution} (f-lconv) filters $G= \{g\}$ to regularize each flow vector in the flow field differently, by adapting the f-lconv kernels to the pyramidal feature $\mathcal{F}_{1}$ resulting from the encoder, the intermediate flow field ${\bf u}'$ from the data term, and the occlusion probability map $O$. Our feature-driven flow regularization is defined as follows:
\begin{equation}\label{eq:feature-driven flow smoothing}
{\bf u} = g\left(\mathcal{F}_{1}, {\bf u}', O\right) \ast {\bf u}'.
\end{equation}
The flow regularization module $R$ in NetE that performs the above feature-driven flow smoothing has been presented in Section~\ref{sec:flow regularization}. By replacing the intermediate flow field ${\bf u}'$ with the flow field ${\bf u}_{s}$ generated from the cascaded flow inference, Eq.~\eqref{eq:feature-driven flow smoothing} corresponds to Eq.~\eqref{eq:local convolution1}. This shows that our feature-driven regularization resembles the role of the regularization term in variational methods. In Back2Basics~\cite{Yu16}, flow regularization is instead enforced by a piecewise smoothness function in the training loss.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figure_arxiv2/D2_L6_x.png} \\
\vspace{0.2cm}
\includegraphics[width=\linewidth]{figure_arxiv2/D2_res_L3_x.png}
\caption{A visualization of the learned filters with sizes $3\times3$ and $5\times5$ for the horizontal flow component at level 6 (top 2 rows) and level 3 (bottom 2 rows) in the sub-pixel refinement unit of LiteFlowNet2, respectively.}
\label{fig:flow bases}
\end{figure}
\section{Relationship between Optical Flow CNNs and Basis Representation}
The parameterized models of image motion~\cite{Black97,Nir08,Wulff15} use a linear combination of basis vectors $\{{\bf m}_{i} \in {\mathbb{R}}^{2hw}\}$ to approximate an image motion ${\bf u}$ within an image patch with size $h \times w$ as follows:
\begin{equation} \label{eq:flow basis equation}
{\bf u}_{vec} = \textstyle \sum_{i = 1}^{C}a_{i}{\bf m}_{i},
\end{equation}
where ${\bf u}_{vec} \in {\mathbb{R}}^{2hw}$ is the vectorized flow field of ${\bf u}$ by packing all the $x$- and $y$-components of ${\bf u}$ into a single vector and $\{a_{i}\}_{i=1, 2, ..., C}$ are the flow coefficients to be estimated.
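A minimal numerical sketch of Eq.~\eqref{eq:flow basis equation} follows; the basis matrix and the coefficients below are random stand-ins for illustration:
\begin{verbatim}
import torch

h, w, C = 8, 8, 4              # patch size and number of bases
M = torch.randn(2 * h * w, C)  # columns: vectorized flow bases m_i
a = torch.randn(C)             # flow coefficients a_i
u_vec = M @ a                  # u_vec = sum_i a_i * m_i
u = u_vec.view(2, h, w)        # unpack x- and y-components
\end{verbatim}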
The above basis representation is related to flow inferences in LiteFlowNet~\cite{Hui18} and other optical flow CNNs~\cite{Dosovitskiy15,Ilg17,Ranjan17,Sun18}. At a pyramid level, the penultimate and last layers of the descriptor matching and sub-pixel flow refinement units in LiteFlowNet can be represented by the following:
\begin{subequations}\label{eq:S equations}
\renewcommand{\theequation}{\theparentequation.\arabic{equation}}
\begin{align}
{\mathcal F}^{N-1} &= \sigma\left({\bf W}^{N-1}\ast {\mathcal F}^{N-2} + {\bf b}^{N-1} \right), \label{eq:S equation 1}\\
\Delta {\bf u} &= {\bf W}^{N} \ast {\mathcal F}^{N-1} + {\bf b}^{N}, \label{eq:S equation 2}
\end{align}
\end{subequations}
where ``$\ast$'' denotes a convolutional operator and $N$ is the total number of convolution layers used. Furthermore, ${\bf W}^{i}$ and ${\mathcal F}^{i}$ represent the convolution filters and feature maps that are used and generated at the $i$-th layer, respectively. A trainable bias $b^{i}$ is added to each feature map after the convolutional operation. We denote a set of bias scalars as ${\bf b}^{i}$. Each convolution layer is followed by an activation function $(\sigma)$ for non-linear mapping unless otherwise specified.
Supposing ${\mathcal F}^{N-1}$ in Eq.~\eqref{eq:S equation 2} is a $C$-channel vector-valued feature map, the equation can be rewritten in the expanded form as follows:
\begin{equation}
\Delta {\bf u} = \textstyle \sum_{i = 1}^{C}\big({\bf W}_{i}\ast {\mathcal F}(i) + b_{i}\big), \label{eq:expanded S equation}
\end{equation}
where ${\bf W} = \{{\bf W}_{i}\}$, ${\bf b} = \{b_{i}\}$, and ${\mathcal F}(i)$ is the $i$-th channel of ${\mathcal F}$ (superscripts $N$ and $N-1$ are removed for brevity).
\vspace{0.1cm}
\noindent
\textbf{Similarities.} Suppose the residual flow $\Delta {\bf u}$ in Eq.~\eqref{eq:expanded S equation} is the flow field that we need to estimate, even though it is not the full flow; then the vectorized ${\bf W}_{i}\ast {\mathcal F}(i) + b_{i}$ resembles $a_{i}{\bf m}_{i}$ in Eq.~\eqref{eq:flow basis equation}. The number of channels in ${\mathcal F}^{N-1}$ (Eq.~\eqref{eq:S equation 2}) corresponds to the number of basis vectors in Eq.~\eqref{eq:flow basis equation}.
In particular, the filters ${\bf W}$ and feature maps ${\mathcal F}$ in Eq.~\eqref{eq:S equation 2} correspond to the basis vectors $\{{\bf m}_{i}\}$ and flow coefficients $\{a_{i}\}$ in Eq.~\eqref{eq:flow basis equation}, respectively. The computation of feature maps (\emph{i.e. } flow coefficients in conventional basis representation) for CNN flow inference is governed by the $N-1$ convolution layers prior to Eq.~\eqref{eq:S equation 2}. Figure~\ref{fig:flow bases} provides an example of the visualization of the learned filters (\emph{i.e. } flow bases in conventional basis representation) at level 6 and level 3 in the sub-pixel refinement unit of LiteFlowNet2.
\vspace{0.1cm}
\noindent
\textbf{Differences.} The dimension of CNN filters is usually small (a few pixels wide), while the dimension of the basis fields (before vectorization) is the same as that of the image patches under consideration. The dimension of CNN feature maps is proportional to that of the given images (depending on the pyramid level under consideration), while the flow coefficients are scalars. Furthermore, a flow vector is constructed by a convolution between the CNN filters and the feature patch centered at the position corresponding to that flow vector in the feature maps, while each vectorized flow patch is a linear combination of basis vectors.
\section{Experiments}
\label{sec:experiments}
\subsection{LiteFlowNet}
\label{sec:experiments liteflownet}
\noindent
\textbf{Network Details.} In LiteFlowNet, NetC is a 6-level feature encoder and NetE is a flow decoder that generates flow fields from levels 6 to 2 in a coarse-to-fine manner. The flow field at level 2 is upsampled by bilinear interpolation to the same resolution as the given image pair (level~1).
We set the maximum searching radius for constructing cost volumes to 3 and 6 pixels for levels 6 to 4 and levels 3 to 2, respectively. Matching is performed at every position across the two pyramidal features to form a cost volume, except for levels 3 to 2, where it is performed on a regularly sampled grid (with a stride of 2) to form a sparse cost volume.
All convolution layers use $3\times3$ filters, except that the first layer in NetC uses $7\times7$ filters, and the last layer in each of descriptor matching $M$, sub-pixel refinement $S$, and flow regularization $R$ uses $5\times5$ filters for levels 4 to 3 and $7\times7$ filters for level 2.
Each convolution layer is followed by a leaky rectified linear unit layer, except for the f-lconv layer and the last layers in the $M$, $S$, and $R$ units.
More network details can be found in \nameref{sec:appendix}.
\vspace{0.1cm}
\noindent
\textbf{Training Details.} In conventional training methods~\cite{Ilg17,Sun18}, all parts of the network are trained for the same number of iterations. In contrast, we pre-train LiteFlowNet on the FlyingChairs dataset~\cite{Dosovitskiy15} using a \textbf{stage-wise training protocol} as follows: First, NetC and $M_{6}$:$S_{6}$ of NetE are trained for 300k iterations. Second, $R_{6}$ together with the network trained in step 1 are trained for 300k iterations. Third, for levels $k \in [5, 2]$, $M_{k}$:$S_{k}$ followed by $R_{k}$ is added to the trained network each time. The new network cascade is trained for 240k iterations, except that the last-level network is trained for 300k iterations. The new filter weights at level $k$ are initialized from those at the previously trained level. The advantages of stage-wise training over conventional training are:
\begin{enumerate}
\item \textit{Shorter training time}: The network stages are gradually added to the cascade. The stages that are added later are trained for fewer iterations than the stages added earlier. Furthermore, the runtime of a cascade consisting of fewer network stages is shorter than that of the more complete network. The overall network therefore requires less training time than with the conventional training method. The training of LiteFlowNet2 (including training and validation phases) requires 5.5 days instead of 8 days on an NVIDIA TITAN X.
\item \textit{Better performance}: Although the network stages that are added later are trained for fewer iterations, stage-wise training promotes lower training losses for the overall network. This is possible because the filter weights in a succeeding stage are well-initialized from the previously trained stage rather than randomly assigned. The average end-point error (AEE) of LiteFlowNet2 is improved from 4.66 to 4.11 on KITTI 2012~\cite{Geiger12} and is significantly improved from 12.42 to 11.31 on KITTI 2015~\cite{Menze15}. For benchmarking on FlyingChairs, the AEEs are 1.68 (vs 1.70) and 1.60 (vs 1.61) on the training and validation sets, respectively. The results are significantly improved on KITTI but are similar on FlyingChairs. This indicates that stage-wise training is effective in alleviating the over-fitting issue.
\end{enumerate}
Learning rates are initially set to 1e-4, 5e-5, and 4e-5 for levels 6 to 4, 3, and 2, respectively. We reduce them by a factor of 2 starting at 120k, 160k, 200k, and 240k iterations. We use the same batch size of 8, dataset resolution (randomly cropped to $448\times320$), loss weights (levels 6 to 2: 0.32, 0.08, 0.02, 0.01, 0.005), training loss ($L_{2}$ flow error), Adam optimization ($\gamma = 0.5$, weight decay = 4e-4), data augmentation (including noise injection), and scaled ground-truth flow (by a factor of $\frac{1}{20}$) as FlowNet2~\cite{Ilg17}. Furthermore, we apply a training loss to every inferred flow field.
After pre-training LiteFlowNet on FlyingChairs (Chairs)~\cite{Dosovitskiy15}, it is trained on a more challenging dataset, Things3D\footnote{We excluded a small amount of training data in Things3D undergoing extremely large flow displacement, as advised by the authors~(\url{https://github.com/lmb-freiburg/flownet2/issues}).}~\cite{Mayer16}, according to the training schedule (Chairs~$\rightarrow$~Things3D) of FlowNet2~\cite{Ilg17}. It is trained for 500k iterations. The batch size is reduced to 4 and the dataset resolution is increased to $768\times384$. The learning rate is set to 3e-6 and is halved starting at 200k iterations for every increment of 100k iterations. No stage-wise training is used for the subsequent fine-tunings. We denote \textbf{LiteFlowNet-pre} and \textbf{LiteFlowNet} as the networks pre-trained on Chairs and fine-tuned on Things3D, respectively.
After training on Things3D, we use the generalized Charbonnier function $\rho(x) = (x^{2} + \epsilon^{2})^{q}$ ($\epsilon^{2} = 0.01$ and $q = 0.2$) as the robust training loss for further fine-tuning on subsequent datasets. The flow accuracy of LiteFlowNet2 improves (KITTI 2012~\cite{Geiger12}: 3.73 vs 3.42 and KITTI 2015~\cite{Menze15}: 9.80 vs 8.97) when the robust loss with a higher learning rate of 1e-5 is used for fine-tuning on Things3D. However, the results on the testing sets of Sintel~\cite{Butler12} and KITTI are not much different from the case without the robust loss. Therefore, we choose to use the $L_{2}$ loss when fine-tuning on Things3D. The fine-tuning details on the respective training sets of Sintel and KITTI will be presented in Section~\ref{sec:results}.
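For reference, a minimal sketch of the multi-scale training loss follows. The loss weights and the Charbonnier parameters are taken from the text; the bilinear downsampling of the ground truth to each level, without magnitude rescaling, is a simplifying assumption of this sketch:
\begin{verbatim}
import torch
import torch.nn.functional as F

LOSS_WEIGHTS = [0.32, 0.08, 0.02, 0.01, 0.005]  # levels 6 to 2

def charbonnier(x, eps2=0.01, q=0.2):
    """Generalized Charbonnier penalty rho(x) = (x^2 + eps^2)^q."""
    return (x ** 2 + eps2) ** q

def multiscale_loss(flows, gt_flow, robust=False):
    """flows: per-level estimates, coarse to fine, each (N, 2, h, w).
    gt_flow: full-resolution ground truth, pre-scaled by 1/20."""
    total = 0.0
    for w_k, u_k in zip(LOSS_WEIGHTS, flows):
        # Downsample the ground truth to the current level resolution.
        gt_k = F.interpolate(gt_flow, size=u_k.shape[-2:],
                             mode='bilinear', align_corners=False)
        err = torch.norm(u_k - gt_k, p=2, dim=1)  # per-pixel L2 error
        total = total + w_k * (charbonnier(err).mean()
                               if robust else err.mean())
    return total
\end{verbatim}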
\subsection{LiteFlowNet2}
\label{sec:liteflownet2}
\begin{table}[t]
\small
\centering
\caption{AEE and runtime (for Sintel) measured at different components (NetC: a feature encoder, NetE: a multi-scale flow decoder) and pyramid levels of LiteFlowNet~\cite{Hui18} trained on Things3D. The percentage change is relative to the previous level.} \label{tab:AEE and runtime analysis}
\scalebox{0.85}{
\begin{tabular}{ccccccc}
\hline
\multicolumn{1}{|c|}{} &\multicolumn{1}{c|}{NetC}
&\multicolumn{5}{c|}{NetE} \\
\hline
\multicolumn{1}{|l|}{Level} &\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{6}
&\multicolumn{1}{c|}{5}
&\multicolumn{1}{c|}{4}
&\multicolumn{1}{c|}{3}
&\multicolumn{1}{c|}{2} \\
\hline
\multicolumn{1}{|l|}{Sintel Clean} &\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{~~~5.41~~~}
&\multicolumn{1}{c|}{3.85}
&\multicolumn{1}{c|}{3.03}
&\multicolumn{1}{c|}{2.65}
&\multicolumn{1}{c|}{2.48} \\
\multicolumn{1}{|l|}{} &\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{-28.8\%}
&\multicolumn{1}{c|}{-21.3\%}
&\multicolumn{1}{c|}{-12.5\%}
&\multicolumn{1}{c|}{-6.4\%} \\
\hline
\multicolumn{1}{|l|}{KITTI 2012} &\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{~~~8.58~~~}
&\multicolumn{1}{c|}{6.04}
&\multicolumn{1}{c|}{4.67}
&\multicolumn{1}{c|}{4.18}
&\multicolumn{1}{c|}{4.00} \\
\multicolumn{1}{|l|}{} &\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{-29.6\%}
&\multicolumn{1}{c|}{-22.7\%}
&\multicolumn{1}{c|}{-10.5\%}
&\multicolumn{1}{c|}{-4.4\%} \\
\hline
\multicolumn{1}{|l|}{Runtime (ms)} &\multicolumn{1}{c|}{14.36}
&\multicolumn{1}{c|}{1.69}
&\multicolumn{1}{c|}{2.03}
&\multicolumn{1}{c|}{4.88}
&\multicolumn{1}{c|}{13.06}
&\multicolumn{1}{c|}{52.51} \\
\multicolumn{1}{|l|}{} &\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{-}
&\multicolumn{1}{c|}{+20.1\%}
&\multicolumn{1}{c|}{+140\%}
&\multicolumn{1}{c|}{+168\%}
&\multicolumn{1}{c|}{+302\%} \\
\hline
\end{tabular}}
\vspace{-0.1cm}
\end{table}
\begin{figure*}[t]
\centering
\captionsetup[subfigure]{labelformat=empty, justification=centering}
\captionsetup[subfloat]{farskip=0pt,captionskip=0pt}
\begin{tabular}{ccccc}
\subfloat[~~~~Level 6: $32\times14$, \newline AEE: 2.28]{\includegraphics[width=3.55cm]{figure_arxiv2/LiteFlowNet_MPIClean20_L6.png}}\hfill
\subfloat[~~~~Level 5: $64\times28$, \newline AEE: 1.22]{\includegraphics[width=3.55cm]{figure_arxiv2/LiteFlowNet_MPIClean20_L5.png}}\hfill
\subfloat[~~~~Level 4: $128\times56$, \newline AEE: 0.76]{\includegraphics[width=3.55cm]{figure_arxiv2/LiteFlowNet_MPIClean20_L4.png}}\hfill
\subfloat[~~~~Level 3: $256\times112$, \newline AEE: 0.47]{\includegraphics[width=3.55cm]{figure_arxiv2/LiteFlowNet_MPIClean20_L3.png}}\hfill
\subfloat[~~~~Level 2: $512\times224$, \newline AEE: 0.33]{\includegraphics[width=3.55cm]{figure_arxiv2/LiteFlowNet_MPIClean20_L2.png}}\hfil
\end{tabular}
\vspace{-0.2cm}
\caption{An example of coarse-to-fine flow fields generated from LiteFlowNet~\cite{Hui18} trained on Chairs~$\rightarrow$~Things3D. Each of the flow fields is upsampled to the same resolution as the ground truth by bilinear interpolation prior to computing AEE.}
\label{fig:a pyramid of flows}
\end{figure*}
We analyze the flow accuracy in terms of AEE and the computation time at each pyramid level of LiteFlowNet~\cite{Hui18} trained on Chairs~$\rightarrow$~Things3D. The results are summarized in Table~\ref{tab:AEE and runtime analysis}. An example of multi-scale flow fields on the Sintel Clean training set is also provided in Figure~\ref{fig:a pyramid of flows}. With the following motivations, we optimize the network architecture of LiteFlowNet and evolve our earlier model into a faster and more accurate LiteFlowNet2.
\vspace{0.1cm}
\noindent
\textbf{Pyramid Level.} As summarized in Table~\ref{tab:AEE and runtime analysis}, the computation time increases rapidly with the resolution of the flow field. In particular, the improvement in flow accuracy from level 3 to level~2 is not significant, yet about 60\% of the total computation time of the flow decoder is spent at level~2. In LiteFlowNet2, we improve the computational efficiency by reducing the number of pyramid levels in NetE from five (levels 6 to 2) to four (levels 6 to 3).
\vspace{0.1cm}
\noindent
\textbf{Network Depth.} By limiting the pyramid levels of NetE to level 3 in LiteFlowNet~\cite{Hui18}, the flow accuracy is decreased, as revealed in Table~\ref{tab:AEE and runtime analysis}. To compensate for this loss, we add two convolution layers (with 128 and 96 output channels) between the 128- and 64-channel convolution layers in each flow decoder of the cascaded flow inference of NetE. We will show in Section~\ref{sec:results} that LiteFlowNet2 attains a higher flow accuracy than LiteFlowNet.
\vspace{0.1cm}
\noindent
\textbf{Pseudo Flow Inference and Regularization.}
We also address the inefficient computation at level 2 by introducing a simplified flow inference (without descriptor matching) and regularization at this level for the model fine-tuned on KITTI.
The pseudo network is constructed as follows: First, we remove all the layers before the last layer in the original flow inference and regularization, respectively. Then, we replace the removed layers by feed-forwarding the upsampled features respectively from the layer prior to the last layer in the flow inference and regularization at level~3.
Using the pseudo network, the runtime at level 2 is greatly reduced from 52.51ms to 8.95ms. We observed that the pseudo network can improve the flow accuracy on the KITTI testing set (an evaluation will be provided in Section~\ref{sec:results}), but there is no significant improvement on the Sintel testing set. Since the latter is a synthetic dataset, a flow CNN is more easily trained to fit the non-realistic scenes, whereas the variability in real-world data, such as lighting and object textures, is more challenging. Therefore, using one more flow inference and regularization is beneficial for refining the preceding flow estimate on KITTI.
\begin{table}[t]
\small
\centering
\caption{AEE of LiteFlowNet2 trained on Chairs using different training protocols against LiteFlowNet~\cite{Hui18}.} \label{tab:results for different training protocols}
\scalebox{0.85}{
\begin{tabular}{l|c|c|c|c|c}
\hline
\multicolumn{1}{|c|}{}
&\multicolumn{1}{c|}{Sintel Clean}
&\multicolumn{1}{c|}{Sintel Final}
&\multicolumn{1}{c|}{KITTI12}
&\multicolumn{1}{c|}{KITTI15} \\
\hline
\multicolumn{1}{|l|}{LiteFlowNet~\cite{Hui18}} &\multicolumn{1}{c|}{}
&\multicolumn{1}{c|}{}
&\multicolumn{1}{c|}{}
&\multicolumn{1}{c|}{} \\
\multicolumn{1}{|l|}{~learning rate: 5e-5}
&\multicolumn{1}{c|}{2.94}
&\multicolumn{1}{c|}{4.28}
&\multicolumn{1}{c|}{4.73}
&\multicolumn{1}{c|}{11.75} \\
\hline
\multicolumn{1}{|l|}{LiteFlowNet2}
&\multicolumn{1}{c|}{}
&\multicolumn{1}{c|}{}
&\multicolumn{1}{c|}{}
&\multicolumn{1}{c|}{} \\
\multicolumn{1}{|l|}{~learning rate: 5e-5}
&\multicolumn{1}{c|}{2.84}
&\multicolumn{1}{c|}{4.16}
&\multicolumn{1}{c|}{4.29}
&\multicolumn{1}{c|}{12.01} \\
\multicolumn{1}{|l|}{~learning rate: 6e-5}
&\multicolumn{1}{c|}{2.80}
&\multicolumn{1}{c|}{\textbf{4.14}}
&\multicolumn{1}{c|}{4.25}
&\multicolumn{1}{c|}{11.76} \\
\multicolumn{1}{|l|}{~+ extra training loss}
&\multicolumn{1}{c|}{\textbf{2.78}}
&\multicolumn{1}{c|}{\textbf{4.14}}
&\multicolumn{1}{c|}{\textbf{4.11}}
&\multicolumn{1}{c|}{\textbf{11.31}} \\
\hline
\end{tabular}}
\vspace{-0.1cm}
\end{table}
\vspace{0.1cm}
\noindent
\textbf{Training Details.}
We pre-train LiteFlowNet2 (\textbf{LiteFlowNet2-pre}) using the same stage-wise training protocol as LiteFlowNet~\cite{Hui18}, except for a few minor differences. The learning rate is set to 6e-5 instead of 5e-5. At the output of the last flow regularization in NetE, the flow field is further upsampled to the same resolution as the image pair, and we introduce an additional training loss with a loss weight of 6.25e-4. Table~\ref{tab:results for different training protocols} summarizes the results of LiteFlowNet and LiteFlowNet2 under different training protocols. Using the same training protocol as LiteFlowNet, LiteFlowNet2 outperforms LiteFlowNet on Sintel and KITTI 2012. If the learning rate of LiteFlowNet2 is increased to 6e-5, it is on par with LiteFlowNet on KITTI 2015. Using the learning rate of 6e-5 and the extra training loss, LiteFlowNet2 outperforms LiteFlowNet on both KITTI 2012 and 2015.
The fine-tuning protocol for the respective training set of Sintel and KITTI is the same as LiteFlowNet unless otherwise specified. The improvements due to the better fine-tuning protocol will be presented in Section~\ref{sec:results}.
\begin{table}[t]
\small
\centering
\caption{AEE on the Chairs testing set. Models are trained on the Chairs training set.} \label{tab:flyingchairs results}
\scalebox{0.85}{
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{1}{|c|}{FlowNetS}
&\multicolumn{1}{c|}{FlowNetC}
&\multicolumn{1}{c|}{SPyNet}
&\multicolumn{1}{c|}{LiteFlowNetX-pre}
&\multicolumn{1}{c|}{LiteFlowNet-pre} \\
\hline
\multicolumn{1}{|c|}{2.71}
&\multicolumn{1}{c|}{2.19}
&\multicolumn{1}{c|}{2.63}
&\multicolumn{1}{c|}{2.25}
&\multicolumn{1}{c|}{\textbf{1.57}} \\
\hline
\end{tabular}}
\vspace{-0.1cm}
\end{table}
\begin{table*}[t]
\small
\centering
\caption{A comparison of the performance of the state-of-the-art optical flow methods in terms of AEE. The values in parentheses are the results of the networks on the data they were trained on, and hence are not directly comparable to the others. Out-Noc: Percentage of erroneous pixels defined as end-point error (EPE) $>$3 pixels in non-occluded areas. Fl-all: Percentage of outliers averaged over all pixels. Inliers are defined as EPE $<$3 pixels or $<$5\%. The best number for each category is highlighted in bold and the second best is underlined. (Notes: $^{1}$The values are reported from~\cite{Ilg17}. $^{2,3,4}$The values are computed using the trained models provided by the authors. $^{3}$A large discrepancy exists as the authors mistakenly evaluated the results on the disparity dataset. $^{4}$The up-to-date dataset is used. $^{6}$Trained on Driving and Monkaa~\cite{Mayer16}. $^{7}$Results are reported from the arXiv paper~\cite{Hui18-arxiv}.)} \label{tab:results}
\scalebox{0.85}{
\begin{tabular}{|c|l|c c|c c|c c c|c c c|}
\hline
\multirow{1}{*}{}
&\multirow{1}{*}{Method}
&\multicolumn{2}{c|}{Sintel Clean}
&\multicolumn{2}{c|}{Sintel Final}
&\multicolumn{3}{c|}{KITTI 2012}
&\multicolumn{3}{c|}{KITTI 2015} \\
\multirow{1}{*}{}
&\multirow{1}{*}{}
&\multicolumn{1}{c}{train}&\multicolumn{1}{c|}{test}
&\multicolumn{1}{c}{train}&\multicolumn{1}{c|}{test}
&\multicolumn{1}{c}{train}&\multicolumn{1}{c}{test}&\multicolumn{1}{c|}{test (Out-Noc)}
&\multicolumn{1}{c}{train}&\multicolumn{1}{c}{train (Fl-all)}&\multicolumn{1}{c|}{test (Fl-all)} \\
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{Conventional}}
&\multirow{1}{*}{LDOF$^{1}$~\cite{Brox11}}
&4.64&\multicolumn{1}{c|}{7.56}
&5.96&\multicolumn{1}{c|}{9.12}
&10.94&\multicolumn{1}{c}{12.4}&\multicolumn{1}{c|}{-}
&18.19&\multicolumn{1}{c}{38.11\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{DeepFlow$^{1}$~\cite{Weinzaepfel13}}
&2.66&\multicolumn{1}{c|}{5.38}
&3.57&\multicolumn{1}{c|}{7.21}
&4.48&\multicolumn{1}{c}{5.8}&\multicolumn{1}{c|}{-}
&10.63&\multicolumn{1}{c}{26.52\%}&\multicolumn{1}{c|}{29.18\%}\\
\multirow{1}{*}{}
&\multirow{1}{*}{Classic+NLP~\cite{Sun14}}
&4.49&\multicolumn{1}{c|}{6.73}
&7.46&\multicolumn{1}{c|}{8.29}
&-&\multicolumn{1}{c}{7.2}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{PCA-Layers$^{1}$~\cite{Wulff15}}
&3.22&\multicolumn{1}{c|}{5.73}
&4.52&\multicolumn{1}{c|}{7.89}
&5.99&\multicolumn{1}{c}{5.2}&\multicolumn{1}{c|}{-}
&12.74&\multicolumn{1}{c}{27.26\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{EpicFlow$^{1}$~\cite{Revaud15}}
&2.27&\multicolumn{1}{c|}{4.12}
&3.56&\multicolumn{1}{c|}{6.29}
&\textbf{3.09}&\multicolumn{1}{c}{3.8}&\multicolumn{1}{c|}{-}
&9.27&\multicolumn{1}{c}{27.18\%}&\multicolumn{1}{c|}{\textbf{27.10\%}}\\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowFields$^{1}$~\cite{Bailer15}}
&\textbf{1.86}&\multicolumn{1}{c|}{\textbf{3.75}}
&\textbf{3.06}&\multicolumn{1}{c|}{\textbf{5.81}}
&3.33&\multicolumn{1}{c}{\textbf{3.5}}&\multicolumn{1}{c|}{-}
&\textbf{8.33}&\multicolumn{1}{c}{\textbf{24.43\%}}&\multicolumn{1}{c|}{-} \\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Hybrid}}
&\multirow{1}{*}{Deep DiscreteFlow~\cite{Guney16}}
&-&\multicolumn{1}{c|}{3.86}
&-&\multicolumn{1}{c|}{5.73}
&-&\multicolumn{1}{c}{3.4}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{21.17\%} \\
\multirow{1}{*}{}
&\multirow{1}{*}{Bailer \emph{et al. }~\cite{Bailer17}}
&-&\multicolumn{1}{c|}{\textbf{3.78}}
&-&\multicolumn{1}{c|}{5.36}
&-&\multicolumn{1}{c}{\textbf{3.0}}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{19.44\%} \\
\multirow{1}{*}{}
&\multirow{1}{*}{DC Flow~\cite{Xu17}}
&-&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c|}{\textbf{5.12}}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{\textbf{14.86\%}}\\
\hline
\multirow{9}{*}{\rotatebox[origin=c]{90}{Heavyweight CNN}}
&\multirow{1}{*}{FlowNetS~\cite{Dosovitskiy15}}
&4.50&\multicolumn{1}{c|}{7.42}
&5.45&\multicolumn{1}{c|}{8.43}
&8.26&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNetS-ft~\cite{Dosovitskiy15}}
&(3.66)&\multicolumn{1}{c|}{6.96}
&(4.44)&\multicolumn{1}{c|}{7.76}
&7.52&\multicolumn{1}{c}{9.1}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-} \\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNetC~\cite{Dosovitskiy15}}
&4.31&\multicolumn{1}{c|}{7.28}
&5.87&\multicolumn{1}{c|}{8.81}
&9.35&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNetC-ft~\cite{Dosovitskiy15}}
&(3.78)&\multicolumn{1}{c|}{6.85}
&(5.28)&\multicolumn{1}{c|}{8.51}
&8.79&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNet2-S$^{2}$~\cite{Ilg17}}
&3.79&\multicolumn{1}{c|}{-}
&4.99&\multicolumn{1}{c|}{-}
&7.26&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&14.28&\multicolumn{1}{c}{51.06\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNet2-C$^{2}$~\cite{Ilg17}}
&3.04&\multicolumn{1}{c|}{-}
&4.60&\multicolumn{1}{c|}{-}
&5.79&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&11.49&\multicolumn{1}{c}{44.09\%}&\multicolumn{1}{c|}{-} \\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNet2~\cite{Ilg17}}
&\textbf{2.02}&\multicolumn{1}{c|}{\textbf{3.96}}
&\textbf{3.54}$^{3}$&\multicolumn{1}{c|}{6.02}
&4.01$^{4}$&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&10.08$^{4}$&\multicolumn{1}{c}{29.99\%$^{4}$}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNet2-ft-sintel~\cite{Ilg17}}
&(1.45)&\multicolumn{1}{c|}{4.16}
&(2.19$^{3}$)&\multicolumn{1}{c|}{\textbf{5.74}}
&\textbf{3.54}$^{4}$&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&\textbf{9.94}$^{4}$&\multicolumn{1}{c}{\textbf{28.02\%}$^{4}$}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{FlowNet2-ft-kitti~\cite{Ilg17}}
&3.43&\multicolumn{1}{c|}{-}
&4.83$^{3}$&\multicolumn{1}{c|}{-}
&(1.43$^{4}$)&\multicolumn{1}{c}{\textbf{1.8}}&\multicolumn{1}{c|}{4.82\%}
&(2.36$^{4}$)&\multicolumn{1}{c}{(8.88\%$^{4}$)}&\multicolumn{1}{c|}{\textbf{11.48\%}}\\
\hline
\multirow{14}{*}{\rotatebox[origin=c]{90}{Lightweight CNN}}
&\multirow{1}{*}{SPyNet~\cite{Ranjan17}}
&4.12&\multicolumn{1}{c|}{6.69}
&5.57&\multicolumn{1}{c|}{8.43}
&9.12&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{SPyNet-ft~\cite{Ranjan17}}
&(3.17)&\multicolumn{1}{c|}{6.64}
&(4.32)&\multicolumn{1}{c|}{8.36}
&\textit{3.36}$^{6}$&\multicolumn{1}{c}{4.1}&\multicolumn{1}{c|}{12.31\%}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{35.07\%} \\
\multirow{1}{*}{}
&\multirow{1}{*}{PWC-Net~\cite{Sun18}}
&2.55&\multicolumn{1}{c|}{-}
&3.93&\multicolumn{1}{c|}{-}
&4.14&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&10.35&\multicolumn{1}{c}{33.67\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{PWC-Net-ft~\cite{Sun18}}
&(2.02)&\multicolumn{1}{c|}{4.39}
&(2.08)&\multicolumn{1}{c|}{5.04}
&(1.45)&\multicolumn{1}{c}{1.7}&\multicolumn{1}{c|}{4.22\%}
&(2.16)&\multicolumn{1}{c}{(9.80\%)}&\multicolumn{1}{c|}{9.60\%}\\
\multirow{1}{*}{}
&\multirow{1}{*}{PWC-Net\_ROB~\cite{Sun19}}
&(1.81)&\multicolumn{1}{c|}{3.90}
&(2.29)&\multicolumn{1}{c|}{4.90}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&-&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{11.63\%}\\
\multirow{1}{*}{}
&\multirow{1}{*}{PWC-Net-ft+~\cite{Sun19}}
&(1.71)&\multicolumn{1}{c|}{\textbf{3.45}}
&(2.34)&\multicolumn{1}{c|}{\textbf{4.60}}
&(0.99)&\multicolumn{1}{c}{\textbf{1.4}}&\multicolumn{1}{c|}{3.36\%}
&(1.47)&\multicolumn{1}{c}{(7.59\%)}&\multicolumn{1}{c|}{\underline{7.72\%}}\\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNetX-pre~\cite{Hui18}}
&3.70&\multicolumn{1}{c|}{-}
&4.82&\multicolumn{1}{c|}{-}
&6.81&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&16.64&\multicolumn{1}{c}{36.64\%}&\multicolumn{1}{c|}{-} \\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNetX~\cite{Hui18}}
&3.58&\multicolumn{1}{c|}{-}
&4.79&\multicolumn{1}{c|}{-}
&6.38&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&15.81&\multicolumn{1}{c}{34.90\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNet-pre~\cite{Hui18}}
&2.78&\multicolumn{1}{c|}{-}
&4.17&\multicolumn{1}{c|}{-}
&4.56&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&11.58&\multicolumn{1}{c}{32.59\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNet$^{7}$~\cite{Hui18}}
&2.48&\multicolumn{1}{c|}{-}
&4.04&\multicolumn{1}{c|}{-}
&4.00&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&10.39&\multicolumn{1}{c}{28.50\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNet-ft$^{7}$~\cite{Hui18}}
&(1.35)&\multicolumn{1}{c|}{4.54}
&(1.78)&\multicolumn{1}{c|}{5.38}
&(1.05)&\multicolumn{1}{c}{\underline{1.6}}&\multicolumn{1}{c|}{\underline{3.27\%}}
&(1.62)&\multicolumn{1}{c}{(5.58\%)}&\multicolumn{1}{c|}{9.38\%}\\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNet2-pre}
&2.78&\multicolumn{1}{c|}{-}
&4.14&\multicolumn{1}{c|}{-}
&4.11&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&11.31&\multicolumn{1}{c}{32.12\%}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNet2}
&\textbf{2.32}&\multicolumn{1}{c|}{-}
&\textbf{3.85}&\multicolumn{1}{c|}{-}
&\textbf{3.77}&\multicolumn{1}{c}{-}&\multicolumn{1}{c|}{-}
&\textbf{9.83}&\multicolumn{1}{c}{\textbf{28.45\%}}&\multicolumn{1}{c|}{-}\\
\multirow{1}{*}{}
&\multirow{1}{*}{LiteFlowNet2-ft}
&(1.41)&\multicolumn{1}{c|}{\underline{3.48}}
&(1.83)&\multicolumn{1}{c|}{\underline{4.69}}
&(0.95)&\multicolumn{1}{c}{\textbf{1.4}}&\multicolumn{1}{c|}{\textbf{2.63\%}}
&(1.33)&\multicolumn{1}{c}{(4.32\%)}&\multicolumn{1}{c|}{\textbf{7.62\%}}\\
\hline
\end{tabular}}
\end{table*}
\subsection{Results}
\label{sec:results}
We compare LiteFlowNet, LiteFlowNet2, and their variants to the state-of-the-art optical flow methods on the public benchmarks, including FlyingChairs (Chairs)~\cite{Dosovitskiy15}, Sintel Clean and Final passes~\cite{Butler12}, KITTI~2012~\cite{Geiger12}, and KITTI 2015~\cite{Menze15}. The average end-point error (AEE) and specialized percentage errors are reported.
\vspace{0.1cm}
\noindent
\textbf{FlyingChairs.} We first compare the intermediate results of several well-performing networks trained on Chairs alone in Table~\ref{tab:flyingchairs results}. LiteFlowNet-pre outperforms the compared networks. No intermediate result is available for FlowNet2~\cite{Ilg17} as each stacking network is trained on the Chairs~$\rightarrow$~Things3D schedule individually. Since FlowNetC, FlowNetS (variants of FlowNet~\cite{Dosovitskiy15}), and SPyNet~\cite{Ranjan17} have fewer parameters than FlowNet2, and the latter two models do not perform feature matching, we construct a small-size counterpart \textbf{LiteFlowNetX-pre} for a fair comparison by removing the matching part and shrinking the model sizes of NetC and NetE by about 4 and 5 times, respectively. Although LiteFlowNetX-pre is 43 and 1.33 times smaller than FlowNetC and SPyNet, respectively, it outperforms FlowNetS and SPyNet and is on par with FlowNetC, which uses explicit feature matching. As shown in Table~\ref{tab:results}, LiteFlowNet2-pre, which is also trained on Chairs, is on par with LiteFlowNet on Sintel and outperforms LiteFlowNet on KITTI.
\vspace{0.1cm}
\noindent
\textbf{MPI Sintel.} The results are summarized in Table~\ref{tab:results}. LiteFlowNetX-pre outperforms FlowNetS~\cite{Dosovitskiy15}, FlowNetC~\cite{Dosovitskiy15}, and SPyNet~\cite{Ranjan17}, all trained on Chairs, in all cases. LiteFlowNet, trained on the Chairs~$\rightarrow$~Things3D schedule, performs better than LiteFlowNet-pre, as expected. LiteFlowNet also outperforms SPyNet, FlowNet2-S~\cite{Ilg17}, and FlowNet2-C~\cite{Ilg17}. It is on par with PWC-Net~\cite{Sun18}.
With the improved architecture and training protocol, LiteFlowNet2 outperforms its predecessor LiteFlowNet and PWC-Net.
We also fine-tuned LiteFlowNet on a mixture of Sintel Clean and Final training data (\textbf{LiteFlowNet-ft}) using the generalized Charbonnier loss with the settings $\epsilon^{2} = 0.01$ and $q = 0.2$. We randomly crop $768\times384$ patches and use a batch size of 4. No noise augmentation is performed, but we introduce image mirroring~\cite{Sun18} to improve the diversity of the training set. The learning rate is set to 5e-5, and the training schedule is similar to the training on Things3D, except that the network is trained for 600k iterations and then re-trained with a reduced learning rate for fewer iterations. LiteFlowNet-ft outperforms FlowNet2-ft-sintel~\cite{Ilg17} and EpicFlow~\cite{Revaud15} on the Sintel Final testing set. It is on par with PWC-Net-ft~\cite{Sun18}. Although DC Flow~\cite{Xu17} (a hybrid method consisting of a CNN and post-processing) performs better than LiteFlowNet, its GPU runtime is several seconds, which makes it impractical for many applications.
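For reference, the generalized Charbonnier penalty with the quoted settings can be sketched as follows (averaging over pixels is our assumption here):
\begin{verbatim}
import torch

def gen_charbonnier_loss(flow_pred, flow_gt, eps2=0.01, q=0.2):
    # rho(x) = (x^2 + eps^2)^q applied to the squared end-point error.
    epe2 = ((flow_pred - flow_gt) ** 2).sum(dim=1)
    return ((epe2 + eps2) ** q).mean()
\end{verbatim}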
For fine-tuning LiteFlowNet2, we further improve the diversity of the training set by using a mixture of Sintel and KITTI data (\textbf{LiteFlowNet2-ft}) with a batch size of 4 containing two image pairs from each of the training sets. Unlike PWC-Net+~\cite{Sun19}, our mixture does not contain the HD1K dataset~\cite{Kondermann16}, as we have found no significant improvement after including it. LiteFlowNet2-ft outperforms all the compared methods and is on par with PWC-Net-ft+ on the Sintel Clean and Final testing sets. On the other hand, when LiteFlowNet2 is fine-tuned on the same training set as LiteFlowNet (\emph{i.e. } containing the Sintel training set only), AEE increases from 3.48 to 3.83 and from 4.69 to 5.06 on the Sintel Clean and Final testing sets, respectively. Nevertheless, it still outperforms LiteFlowNet.
We also train LiteFlowNet using the new fine-tuning protocol. AEE is decreased from 4.54 to 4.01 and 5.38 to 5.21 on the testing sets of Sintel Clean and Final, respectively.
Some examples of flow fields on the training and testing sets of Sintel are provided in Figure~\ref{fig:Sintel flows}. Since LiteFlowNet(-ft) and LiteFlowNet2(-ft) include flow regularization, sharper flow boundaries and fewer artifacts can be observed in the resulting flow fields.
\begin{figure*}[t]
\begin{center}
\captionsetup[subfigure]{labelformat=empty, justification=centering}
\captionsetup[subfloat]{farskip=0pt,captionskip=0pt}
\begin{tabular}{cccccc}
\includegraphics[width=3.0cm]{figure_arxiv2/Overlay_MPICleanTrain413.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/GT_MPICleanTrain413.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/FlowNet2_MPICleanTrain413_EPE_1-0707.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/PWC-Net_MPICleanTrain413_EPE_1-2080.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet_MPICleanTrain413_EPE_1-0534.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet2_MPICleanTrain413_EPE_0-9837.png}\\
\includegraphics[width=3.0cm]{figure_arxiv2/Overlay_MPICleanTrain637.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/GT_MPICleanTrain637.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/FlowNet2_MPICleanTrain637_EPE_3-0101.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/PWC-Net_MPICleanTrain637_EPE_3-6635.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet_MPICleanTrain637_EPE_2-7382.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet2_MPICleanTrain637_EPE_2-1623.png} \\
\includegraphics[width=3.0cm]{figure_arxiv2/Overlay_MPIFinalTrain172.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/GT_MPIFinalTrain172.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/FlowNet2_MPIFinalTrain172_EPE_3-0907.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/PWC-Net_MPIFinalTrain172_EPE_2-3458.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet_MPIFinalTrain172_EPE_2-2209.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet2_MPIFinalTrain172_EPE_2-1373.png}\\
\subfloat[Image overlay]{\includegraphics[width=3.0cm]{figure_arxiv2/Overlay_MPIFinalTrain1035.png}}\hfill
\subfloat[Ground truth]{\includegraphics[width=3.0cm]{figure_arxiv2/GT_MPIFinalTrain1035.png}}\hfill
\subfloat[FlowNet2~\cite{Ilg17}]{\includegraphics[width=3.0cm]{figure_arxiv2/FlowNet2_MPIFinalTrain1035_EPE_5-9815.png}}\hfill
\subfloat[PWC-Net$^{1}$~\cite{Sun18}]{\includegraphics[width=3.0cm]{figure_arxiv2/PWC-Net_MPIFinalTrain1035_EPE_4-4407.png}}\hfill
\subfloat[LiteFlowNet~\cite{Hui18}]{\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet_MPIFinalTrain1035_EPE_4-2038.png}}\hfill
\subfloat[LiteFlowNet2]{\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet2_MPIFinalTrain1035_EPE_3-9696.png}}\\
\includegraphics[width=3.0cm]{figure_arxiv2/Img1_MPICleanTest116.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/SPyNet-ft_MPICleanTest116.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/FlowNet2-ft_MPICleanTest116.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/PWC-Net+_MPICleanTest116.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet-ft_MPICleanTest116.png}\hfill
\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet2_MPICleanTest116.png}\\
\subfloat[First image]{\includegraphics[width=3.0cm]{figure_arxiv2/Img1_MPIFinalTest33.png}}\hfill
\subfloat[SPyNet-ft~\cite{Ranjan17}]{\includegraphics[width=3.0cm]{figure_arxiv2/SPyNet-ft_MPIFinalTest33.png}}\hfill
\subfloat[FlowNet2-ft~\cite{Ilg17}]{\includegraphics[width=3.0cm]{figure_arxiv2/FlowNet2-ft_MPIFinalTest33.png}}\hfill
\subfloat[PWC-Net+~\cite{Sun19}]{\includegraphics[width=3.0cm]{figure_arxiv2/PWC-Net+_MPIFinalTest33.png}}\hfill
\subfloat[LiteFlowNet-ft~\cite{Hui18}]{\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet-ft_MPIFinalTest33.png}}\hfill
\subfloat[LiteFlowNet2-ft]{\includegraphics[width=3.0cm]{figure_arxiv2/LiteFlowNet2-ft_MPIFinalTest33.png}}\\
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{Examples of flow fields from different methods on Sintel training sets (clean pass: first to second rows, final pass: third to fourth rows) and testing sets (clean pass: fifth row, final pass: last row). Fine details are well preserved and fewer artifacts can be observed in the flow fields of LiteFlowNet2 and LiteFlowNet2-ft. For the best visual comparison, it is recommended to enlarge the figure electronically. (Note: $^{1}$At the time of submission, the authors~\cite{Sun18} only release the trained model of PWC-Net that uses a larger feature encoder (overall footprint: 9.37M vs 8.75M) and has a slower runtime (41.12ms vs 39.63ms) trained on Chairs~$\rightarrow$~Things3D.)}
\label{fig:Sintel flows}
\end{figure*}
\vspace{0.1cm}
\noindent
\textbf{KITTI.} LiteFlowNet consistently performs better than LiteFlowNet-pre, especially on KITTI 2015, as shown in Table~\ref{tab:results}. It also outperforms SPyNet~\cite{Ranjan17}, FlowNet2-S~\cite{Ilg17}, and FlowNet2-C~\cite{Ilg17}. LiteFlowNet2, the successor of LiteFlowNet, outperforms FlowNet2~\cite{Ilg17}, LiteFlowNet, and PWC-Net~\cite{Sun18} as well.
We also fine-tuned LiteFlowNet and LiteFlowNet2 on a mixture of KITTI 2012 and KITTI 2015 training data (\textbf{LiteFlowNet-ft} and \textbf{LiteFlowNet2-ft}) using the same augmentation and training schedule as in the case of Sintel, except that we reduced the amount of augmentation for spatial motion~\cite{Sun18} to fit the driving scene. The height of each image in the KITTI dataset is about 100 pixels less than that of Sintel. We randomly crop 896$\times$320 patches to maintain a patch area similar to Sintel and use a batch size of 4. We have found that training on KITTI is more challenging than on Sintel, not only because the combined training sets of KITTI 2012 and KITTI 2015 contain fewer than 400 image pairs, but also because the flow labels are sparse. The insufficient number of per-pixel flow labels greatly affects the performance of the flow network. When fine-tuning LiteFlowNet2 on KITTI, we upsample the constructed flow fields by a factor of 2 in each pyramid level. This effectively increases the number of per-pixel flow labels available. Table~\ref{table:improvements on KITTI} summarizes the improvements in terms of AEE under different network and training configurations.
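This doubling of the ground-truth resolution per level can be sketched as follows (hypothetical tensor names; invalid pixels of the sparse KITTI ground truth are masked out):
\begin{verbatim}
import torch.nn.functional as F

def level_loss_2x(flow_l, flow_gt_2x, valid_2x):
    # Upsample the level-l flow by 2 so that the loss is evaluated on
    # a less-downsampled (denser) version of the sparse ground truth.
    up = 2.0 * F.interpolate(flow_l, scale_factor=2,
                             mode='bilinear', align_corners=False)
    epe = ((up - flow_gt_2x) ** 2).sum(dim=1).sqrt()
    return (epe * valid_2x).sum() / valid_2x.sum().clamp(min=1)
\end{verbatim}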
After fine-tuning, LiteFlowNet and LiteFlowNet2 generalize well to real-world data. LiteFlowNet-ft outperforms all the compared conventional and hybrid methods by a large extent. It also outperforms FlowNet2-ft-kitti~\cite{Ilg17} and PWC-Net-ft~\cite{Sun18}. With the improved architecture and training protocol, LiteFlowNet2-ft outperforms LiteFlowNet, PWC-Net-ft, and PWC-Net+~\cite{Sun19}.
Figure~\ref{fig:KITTI flows} shows some examples of flow fields on the training and testing sets of KITTI 2012 and KITTI 2015. As in the case of Sintel, LiteFlowNet(-ft) and LiteFlowNet2(-ft) perform the best among the compared methods. Even though LiteFlowNet and LiteFlowNet2 perform pyramidal descriptor matching in a limited searching range, they yield reliable large-displacement flow fields for real-world data owing to the introduced feature warping (f-warp) layer. An ablation study of the different components in LiteFlowNet will be presented in Section~\ref{sec:ablation study}.
\begin{figure*}[t]
\begin{center}
\captionsetup[subfigure]{labelformat=empty, justification=centering}
\captionsetup[subfloat]{farskip=0pt,captionskip=0pt}
\begin{tabular}{cccccc}
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/Overlay_KITTI12Train74.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/GT_KITTI12Train74.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/FlowNet2_KITTI12Train74_EPE_6-5200.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/PWC-Net_KITTI12Train74-EPE_6-3382.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet_KITTI12Train74_EPE_4-4110.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet2_KITTI12Train74_EPE_4-0877.png}\\
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/Overlay_KITTI12Train157.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/GT_KITTI12Train157.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/FlowNet2_KITTI12Train157_EPE_8-3683.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/PWC-Net_KITTI12Train157_EPE_7-3161.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet_KITTI12Train157_EPE_5-6203.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet2_KITTI12Train157_EPE-5-4547.png}\\
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/Overlay_KITTI15Train37.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/GT_KITTI15Train37.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/FlowNet2_KITTI15Train37_EPE_9-9877.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/PWC-Net_KITTI15Train37_EPE_8-4168.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet_KITTI15Train37_EPE_8-0986.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet2_KITTI15Train37_EPE_5-9066.png}\\
\subfloat[Image overlay]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/Overlay_KITTI15Train89.png}}\hfill
\subfloat[Ground truth]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/GT_KITTI15Train89.png}}\hfill
\subfloat[FlowNet2~\cite{Ilg17}]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/FlowNet2_KITTI15Train89_EPE_10-5143.png}}\hfill
\subfloat[PWC-Net$^{1}$~\cite{Sun18}]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/PWC-Net_KITTI15Train89_EPE_7-1930.png}}\hfill
\subfloat[LiteFlowNet~\cite{Hui18}]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet_KITTI15Train89_EPE_7-3993.png}}\hfill
\subfloat[LiteFlowNet2]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet2_KITTI15Train89_EPE_6-6827.png}}\\
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/Img1_KITTI12Test47.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/SPyNet-ft_KITTI12Test47.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/FlowNet2-ft_KITTI12Test47.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/PWC-Net+_KITTI12Test47.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet-ft_KITTI12Test47.png}\hfill
\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet2-ft_KITTI12Test47.png}\\
\subfloat[First image]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/Img1_KITTI15Test118.png}}\hfill
\subfloat[SPyNet-ft~\cite{Ranjan17}]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/SPyNet-ft_KITTI15Test118.png}}\hfill
\subfloat[FlowNet2-ft~\cite{Ilg17}]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/FlowNet2-ft_KITTI15Test118.png}}\hfill
\subfloat[PWC-Net+~\cite{Sun19}]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/PWC-Net+_KITTI15Test118.png}}\hfill
\subfloat[LiteFlowNet-ft~\cite{Hui18}]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet-ft_KITTI15Test118.png}}\hfill
\subfloat[LiteFlowNet2-ft]{\includegraphics[width=3.0cm,trim={2cm 0 2cm 0},clip]{figure_arxiv2/LiteFlowNet2-ft_KITTI15Test118.png}}\\
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{Examples of flow fields from different methods on the KITTI 2012 and 2015 training sets (2012: first to second rows, 2015: third to fourth rows) and testing sets (2012: fifth row, 2015: last row). For the best visual comparison, it is recommended to enlarge the figure electronically. (Note: $^{1}$At the time of submission, the authors~\cite{Sun18} only release the trained model of PWC-Net that uses a larger feature encoder (overall footprint: 9.37M vs 8.75M) and has a slower runtime (41.12ms vs 39.63ms) trained on Chairs~$\rightarrow$~Things3D.)}
\label{fig:KITTI flows}
\end{figure*}
\begin{table}[t]
\small
\centering
\caption{AEE of LiteFlowNet2 fine-tuned on KITTI under different configurations. Out-Noc (or Out-All): Percentage of erroneous pixels in non-occluded areas (or in total). Fl-bg (or Fl-fg): Percentage of optical flow outliers averaged only over background (or foreground) regions. (Note: $^{1}$Compared to LiteFlowNet~\cite{Hui18}, LiteFlowNet2 uses a simplified (pseudo) network structure for flow inference and regularization at level 2 on KITTI.)} \label{table:improvements on KITTI}
\scalebox{0.80}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{}
&\multicolumn{1}{c|}{\cite{Hui18}}
&\multicolumn{3}{c|}{LiteFlowNet2} \\
\hline
\multicolumn{1}{|l|}{Flow levels up to level 3}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark} \\
\multicolumn{1}{|l|}{Flow levels up to (pseudo)$^{1}$ level 2}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark} \\
\multicolumn{1}{|l|}{Double GT resolution at each level}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark} \\
\hline
\multicolumn{1}{|l|}{KITTI 2012: train}
&\multicolumn{1}{c|}{(1.05)}
&\multicolumn{1}{c|}{(1.07)}
&\multicolumn{1}{c|}{(1.00)}
&\multicolumn{1}{c|}{(\textbf{0.95})} \\
\multicolumn{1}{|l|}{~~~~~~~~~~~~~~~~~~~~test (Out-Noc)}
&\multicolumn{1}{c|}{3.27\%}
&\multicolumn{1}{c|}{3.07\%}
&\multicolumn{1}{c|}{2.72\%}
&\multicolumn{1}{c|}{\textbf{2.63\%}} \\
\multicolumn{1}{|l|}{~~~~~~~~~~~~~~~~~~~~test (Out-all)}
&\multicolumn{1}{c|}{7.27\%}
&\multicolumn{1}{c|}{6.92\%}
&\multicolumn{1}{c|}{6.30\%}
&\multicolumn{1}{c|}{\textbf{6.16\%}} \\
\multicolumn{1}{|l|}{~~~~~~~~~~~~~~~~~~~~test (Avg-all)}
&\multicolumn{1}{c|}{1.6}
&\multicolumn{1}{c|}{1.5}
&\multicolumn{1}{c|}{\textbf{1.4}}
&\multicolumn{1}{c|}{\textbf{1.4}} \\
\hline
\multicolumn{1}{|l|}{KITTI 2015: train}
&\multicolumn{1}{c|}{(1.62)}
&\multicolumn{1}{c|}{(1.61)}
&\multicolumn{1}{c|}{(1.47)}
&\multicolumn{1}{c|}{(\textbf{1.33})} \\
\multicolumn{1}{|l|}{~~~~~~~~~~~~~~~~~~~~train (Fl-all)}
&\multicolumn{1}{c|}{(5.58\%)}
&\multicolumn{1}{c|}{(5.57\%)}
&\multicolumn{1}{c|}{(4.80\%)}
&\multicolumn{1}{c|}{(\textbf{4.32\%})} \\
\multicolumn{1}{|l|}{~~~~~~~~~~~~~~~~~~~~test (Fl-bg)}
&\multicolumn{1}{c|}{9.66\%}
&\multicolumn{1}{c|}{8.72\%}
&\multicolumn{1}{c|}{7.85\%}
&\multicolumn{1}{c|}{\textbf{7.62\%}} \\
\multicolumn{1}{|l|}{~~~~~~~~~~~~~~~~~~~~test (Fl-fg)}
&\multicolumn{1}{c|}{7.99\%}
&\multicolumn{1}{c|}{8.20\%}
&\multicolumn{1}{c|}{\textbf{7.20\%}}
&\multicolumn{1}{c|}{7.64\%} \\
\multicolumn{1}{|l|}{~~~~~~~~~~~~~~~~~~~~test (Fl-all)}
&\multicolumn{1}{c|}{9.38\%}
&\multicolumn{1}{c|}{8.63\%}
&\multicolumn{1}{c|}{7.74\%}
&\multicolumn{1}{c|}{\textbf{7.62\%}} \\
\hline
\end{tabular}}
\vspace{-0.3cm}
\end{table}
\vspace{0.1cm}
\noindent
\textbf{LiteFlowNet-CVPR18~\cite{Hui18} vs LiteFlowNet-arXiv~\cite{Hui18-arxiv}.}
In the arXiv version of LiteFlowNet, we excluded a small amount of training data in Things3D undergoing extremely large flow displacements, as such displacements are rare in real-world data. On the respective training sets, AEE is improved from 2.52 to 2.48 on Sintel Clean, 4.05 to 4.04 on Sintel Final, 4.25 to 4.00 on KITTI 2012, and 10.46 to 10.39 (Fl-all: 29.30\% to 28.50\%) on KITTI 2015.
For fine-tuning on Sintel, we removed additive noise but introduced image mirroring during data augmentation as in~\cite{Sun18}. AEE on the testing set is improved from 4.86 to 4.54 for the Clean pass and from 6.09 to 5.38 for the Final pass.
For fine-tuning on KITTI, we further reduced the amount of augmentation for spatial motion as in~\cite{Sun18}. On the respective testing sets, AEE is improved from 1.7 to 1.6 on KITTI 2012, and Fl-all is improved from 10.24\% to 9.38\% on KITTI 2015.
\vspace{0.1cm}
\noindent
\textbf{An I/O requirement on LMDB generation.} We use the modified Caffe package~\cite{Dosovitskiy15} to train and test our optical flow networks. The LMDB script requires all image pairs and flow fields to have the same spatial dimensions. We were aware of this I/O requirement as early as our previous CVPR 2018 work (LiteFlowNet)~\cite{Hui18}. There are five types of spatial dimensions in the combined training sets of KITTI 2012 and KITTI 2015, namely $1224\times370$, $1226 \times370$, $1238\times374$, $1241\times376$, and $1242\times375$. In order to fulfill the requirement, all the images and flow fields are cropped to $1224\times370$ before generating the LMDB files. Recent work~\cite{Sun19} reports the I/O requirement using Caffe and regards the previous improper usage~\cite{Sun18} as an I/O bug.
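The preprocessing step can be sketched as follows (a hypothetical helper; the actual pipeline uses the modified Caffe tools):
\begin{verbatim}
def crop_to_common_size(arr, height=370, width=1224):
    # Crop an image or flow field (H x W x C array) so that every
    # sample in the combined KITTI 2012/2015 LMDB has the same size.
    return arr[:height, :width]
\end{verbatim}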
\begin{table*}[ht]
\small
\centering
\caption{Number of training parameters and runtime. The model for which the runtime is given in parentheses is measured using Torch, and hence is not directly comparable to the others, which use Caffe. (Note: $^{1}$The runtime is longer than the value provided by the authors~\cite{Sun19} because theirs was measured on a faster NVIDIA TITAN Xp GPU than ours.)} \label{tab:model size and runtime}
\scalebox{0.85}{
\begin{tabular}{cccccccc}
\hline
\multicolumn{1}{|c|}{} &\multicolumn{2}{c|}{Shallow}
&\multicolumn{5}{c|}{Deep} \\
\hline
\multicolumn{1}{|l|}{Model} &\multicolumn{1}{c|}{FlowNetC~\cite{Dosovitskiy15}}
&\multicolumn{1}{c|}{SPyNet~\cite{Ranjan17}}
&\multicolumn{1}{c|}{FlowNet2~\cite{Ilg17}}
&\multicolumn{1}{c|}{PWC-Net+~\cite{Sun19}}
&\multicolumn{1}{c|}{LiteFlowNetX~\cite{Hui18}}
&\multicolumn{1}{c|}{LiteFlowNet~\cite{Hui18}}
&\multicolumn{1}{c|}{LiteFlowNet2} \\
\hline
\multicolumn{1}{|l|}{No. of learnable layers} &\multicolumn{1}{c|}{26}
&\multicolumn{1}{c|}{35}
&\multicolumn{1}{c|}{115}
&\multicolumn{1}{c|}{59}
&\multicolumn{1}{c|}{69}
&\multicolumn{1}{c|}{94}
&\multicolumn{1}{c|}{91} \\
\multicolumn{1}{|l|}{No. of parameters (M)} &\multicolumn{1}{c|}{39.16}
&\multicolumn{1}{c|}{1.20}
&\multicolumn{1}{c|}{162.49}
&\multicolumn{1}{c|}{8.75}
&\multicolumn{1}{c|}{0.90}
&\multicolumn{1}{c|}{5.37}
&\multicolumn{1}{c|}{6.42}\\
\multicolumn{1}{|l|}{Runtime (ms)} &\multicolumn{1}{c|}{31.51}
&\multicolumn{1}{c|}{(129.83)}
&\multicolumn{1}{c|}{121.49}
&\multicolumn{1}{c|}{39.63$^{1}$}
&\multicolumn{1}{c|}{35.10}
&\multicolumn{1}{c|}{88.53}
&\multicolumn{1}{c|}{39.69} \\
\multicolumn{1}{|l|}{Frame/second (fps)} &\multicolumn{1}{c|}{31}
&\multicolumn{1}{c|}{(8)}
&\multicolumn{1}{c|}{8}
&\multicolumn{1}{c|}{25}
&\multicolumn{1}{c|}{28}
&\multicolumn{1}{c|}{12}
&\multicolumn{1}{c|}{25} \\
\hline
\end{tabular}}
\end{table*}
\subsection{Runtime and Number of Parameters}
\label{sec:runtime}
We measure the runtime of CNNs on a machine equipped with an Intel Xeon E5 2.2GHz and an NVIDIA GTX 1080. Timings are averaged over 100 runs for a Sintel image pair of size $1024\times436$. For a fair comparison, we also exclude the reading and writing time, as in PWC-Net(+)~\cite{Sun18, Sun19}.
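The measurement procedure can be sketched as follows (PyTorch-style pseudocode for illustration only; the reported numbers are measured with Caffe):
\begin{verbatim}
import time
import torch

def measure_runtime_ms(net, img1, img2, runs=100):
    with torch.no_grad():
        net(img1, img2)              # warm-up
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            net(img1, img2)          # forward pass only, no I/O
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1e3
\end{verbatim}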
As summarized in Table~\ref{tab:model size and runtime},
\begin{itemize}
\item LiteFlowNet requires \textbf{30.3x fewer} parameters than FlowNet2~\cite{Ilg17} and is \textbf{1.4x faster} in the runtime. It requires \textbf{1.6x fewer} parameters ($\downarrow$~3.4M) than PWC-Net+.
\item LiteFlowNetX, a small-model variant of LiteFlowNet, which has no descriptor matching requires \textbf{43.5x fewer} parameters than FlowNetC~\cite{Dosovitskiy15} and has a comparable runtime. It has \textbf{1.3x fewer} parameters than SPyNet~\cite{Ranjan17}.
\item LiteFlowNet2 requires \textbf{25.3x fewer} parameters than FlowNet2 while being \textbf{3.1x faster}. It is \textbf{2.2 times faster} than LiteFlowNet. In comparison to PWC-Net+, LiteFlowNet2 requires \textbf{1.4x fewer} parameters ($\downarrow$~2.3M). Its processing frequency can reach up to 25 flow fields per second and is similar to PWC-Net+.
\end{itemize}
\subsection{Ablation Study}
\label{sec:ablation study}
\begin{figure*}[ht]
\centering
\includegraphics[width=4.4cm]{figure_arxiv2/MPICleanTrain248.png}
\includegraphics[width=4.4cm]{figure_arxiv2/LiteFlowNet-pre-M-MPICleanTrain248.png}
\includegraphics[width=4.4cm]{figure_arxiv2/LiteFlowNet-pre-MS-MPICleanTrain248.png}
\includegraphics[width=4.4cm]{figure_arxiv2/LiteFlowNet-pre-WM-MPICleanTrain248.png}\\
\includegraphics[width=4.4cm]{figure_arxiv2/LiteFlowNet-pre-WSR-MPICleanTrain248.png}
\includegraphics[width=4.4cm]{figure_arxiv2/LiteFlowNet-pre-WMS-MPICleanTrain248.png}
\includegraphics[width=4.4cm]{figure_arxiv2/LiteFlowNet-pre-MPICleanTrain248.png}
\includegraphics[width=4.4cm]{figure_arxiv2/GT-MPICleanTrain248.png}
\caption{Examples of flow fields from different variants of LiteFlowNet trained on Chairs with some of the components disabled. LiteFlowNet is denoted as ``All''. W $=$ Feature \textbf{W}arping, M $=$ Descriptor \textbf{M}atching, S $=$ \textbf{S}ub-Pixel Refinement, R $=$ \textbf{R}egularization.}
\label{fig:ablation flows}
\end{figure*}
\begin{table}[t]
\small
\centering
\caption{AEE of different variants of LiteFlowNet trained on Chairs dataset with some of the components disabled.} \label{tab:ablation study}
\scalebox{0.85}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{Variants}
&\multicolumn{1}{c|}{M}
&\multicolumn{1}{c|}{MS}
&\multicolumn{1}{c|}{WM}
&\multicolumn{1}{c|}{WSR}
&\multicolumn{1}{c|}{WMS}
&\multicolumn{1}{c|}{ALL} \\
\hline
\multicolumn{1}{|l|}{Feature \textbf{W}arping}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark} \\
\multicolumn{1}{|l|}{Descriptor \textbf{M}atching}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark} \\
\multicolumn{1}{|l|}{\textbf{S}ub-pixel Refinement}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\cmark} \\
\multicolumn{1}{|l|}{\textbf{R}egularization}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark}
&\multicolumn{1}{c|}{\xmark}
&\multicolumn{1}{c|}{\cmark} \\
\hline
\multicolumn{1}{|l|}{FlyingChairs (train)}
&\multicolumn{1}{c|}{3.75}
&\multicolumn{1}{c|}{2.70}
&\multicolumn{1}{c|}{2.98}
&\multicolumn{1}{c|}{1.63}
&\multicolumn{1}{c|}{1.82}
&\multicolumn{1}{c|}{\textbf{1.57}} \\
\multicolumn{1}{|l|}{Sintel clean (train)}
&\multicolumn{1}{c|}{4.70}
&\multicolumn{1}{c|}{4.17}
&\multicolumn{1}{c|}{3.54}
&\multicolumn{1}{c|}{3.19}
&\multicolumn{1}{c|}{2.90}
&\multicolumn{1}{c|}{\textbf{2.78}} \\
\multicolumn{1}{|l|}{Sintel final (train)}
&\multicolumn{1}{c|}{5.69}
&\multicolumn{1}{c|}{5.30}
&\multicolumn{1}{c|}{4.81}
&\multicolumn{1}{c|}{4.63}
&\multicolumn{1}{c|}{4.45}
&\multicolumn{1}{c|}{\textbf{4.17}} \\
\multicolumn{1}{|l|}{KITTI 2012 (train)}
&\multicolumn{1}{c|}{9.22}
&\multicolumn{1}{c|}{8.01}
&\multicolumn{1}{c|}{6.17}
&\multicolumn{1}{c|}{5.03}
&\multicolumn{1}{c|}{4.83}
&\multicolumn{1}{c|}{\textbf{4.56}} \\
\multicolumn{1}{|l|}{KITTI 2015 (train)}
&\multicolumn{1}{c|}{18.24}
&\multicolumn{1}{c|}{16.19}
&\multicolumn{1}{c|}{14.52}
&\multicolumn{1}{c|}{13.20}
&\multicolumn{1}{c|}{12.32}
&\multicolumn{1}{c|}{\textbf{11.58}} \\
\hline
\end{tabular}}
\end{table}
We investigate the role of each component in LiteFlowNet trained on Chairs (\emph{i.e. } LiteFlowNet-pre) by evaluating the performance of different variants with some of the components disabled unless otherwise stated. The AEE results are summarized in Table~\ref{tab:ablation study} and examples of flow fields are illustrated in Figure~\ref{fig:ablation flows}.
\vspace{0.1cm}
\noindent
\textbf{Feature Warping.} We consider two variants of LiteFlowNet-pre (WM and WMS) and compare them to the counterparts with feature warping disabled (M and MS). The flow fields from M and MS are blurrier. A large degradation in AEE is observed, especially on KITTI 2012 ($\downarrow$~33$\%$) and KITTI 2015 ($\downarrow$~25$\%$). With feature warping (f-warp), the pyramidal features input to flow inference are closer to each other in appearance. This facilitates flow estimation in the subsequent levels through the computation of residual flows.
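The f-warp operation itself amounts to bilinear sampling of the second image's pyramidal features at flow-displaced coordinates; a minimal sketch (not our actual Caffe layer) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def f_warp(feat2, flow):
    # Warp the features of the second image towards the first one by
    # the current flow estimate (bilinear interpolation).
    b, _, h, w = feat2.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w),
                            indexing='ij')
    grid = torch.stack((xs, ys)).float().to(feat2.device)  # (2,H,W)
    coords = grid.unsqueeze(0) + flow       # displaced coordinates
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat2, torch.stack((gx, gy), dim=-1),
                         mode='bilinear', align_corners=True)
\end{verbatim}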
\vspace{0.1cm}
\noindent
\textbf{Descriptor Matching.} We evaluate WSR, a variant without descriptor matching, for which the flow inference part is made as deep as that in the unamended LiteFlowNet-pre (ALL). There is no noticeable difference between the flow fields from WSR and ALL. Since the maximum displacement of the example flow field is not very large (only 14.7 pixels), an accurate flow field can still be produced by WSR. For evaluations covering a wide range of flow displacements (especially the large-displacement KITTI benchmarks), a degradation in AEE is observed for WSR. This suggests that descriptor matching is useful in addressing large-displacement flow.
\vspace{0.1cm}
\noindent
\textbf{Sub-Pixel Refinement.} The flow field generated from WMS is crisper and contains more fine details than that generated from WM, which has sub-pixel refinement disabled. Fewer small-magnitude flow artifacts (represented by light colors on the background) are observed. Besides, WMS achieves a smaller AEE. Since descriptor matching establishes pixel-by-pixel correspondence, sub-pixel refinement is necessary to yield detail-preserving flow fields.
\vspace{0.1cm}
\noindent
\textbf{Regularization.} Comparing WMS, which has regularization disabled, to ALL, undesired artifacts exist in the homogeneous regions (represented by very dim colors on the background) of the flow field generated from WMS. Flow bleeding and vague flow boundaries are observed, and a degradation in AEE is also noticed. This suggests that the proposed feature-driven local convolution (f-lconv) plays a vital role in smoothing the flow field and maintaining crisp flow boundaries, analogous to the regularization term in conventional variational methods.
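A simplified sketch of f-lconv is given below (our simplified notation; the per-pixel filters are predicted from pyramidal features by a small CNN that is omitted here, and a softmax normalisation of the filter weights is assumed):
\begin{verbatim}
import torch
import torch.nn.functional as F

def f_lconv(flow, weights, k=3):
    # 'weights' (B, k*k, H, W) holds a feature-driven local filter for
    # every pixel; each flow component is smoothed by its own kernel.
    b, c, h, w = flow.shape
    patches = F.unfold(flow, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k * k, h, w)
    w_norm = torch.softmax(weights, dim=1).unsqueeze(1)
    return (patches * w_norm).sum(dim=2)
\end{verbatim}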
\begin{table}[t]
\small
\centering
\caption{AEE and runtime of LiteFlowNet2 trained on Chairs under different cost-volume settings. The value in parentheses represents the setting at level 3.}
\label{tab:cost volume study}
\scalebox{0.85}{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{Searching Range (pixels)}
&\multicolumn{1}{c|}{3}
&\multicolumn{1}{c|}{3 (6)}
&\multicolumn{1}{c|}{4}\\
\multicolumn{1}{|l|}{Stride}
&\multicolumn{1}{c|}{1}
&\multicolumn{1}{c|}{1 (2)}
&\multicolumn{1}{c|}{1}\\
\multicolumn{1}{|l|}{Levels}
&\multicolumn{1}{c|}{6 to 3}
&\multicolumn{1}{c|}{6 to 4 (3)}
&\multicolumn{1}{c|}{6 to 3}\\
\hline
\multicolumn{1}{|l|}{Sintel Clean (train)}
&\multicolumn{1}{c|}{2.73}
&\multicolumn{1}{c|}{2.78}
&\multicolumn{1}{c|}{\textbf{2.71}}\\
\multicolumn{1}{|l|}{Sintel Final (train)}
&\multicolumn{1}{c|}{\textbf{4.14}}
&\multicolumn{1}{c|}{\textbf{4.14}}
&\multicolumn{1}{c|}{\textbf{4.14}}\\
\multicolumn{1}{|l|}{KITTI 2012 (train)}
&\multicolumn{1}{c|}{4.26}
&\multicolumn{1}{c|}{\textbf{4.11}}
&\multicolumn{1}{c|}{4.20}\\
\multicolumn{1}{|l|}{KITTI 2015 (train)}
&\multicolumn{1}{c|}{11.72}
&\multicolumn{1}{c|}{11.31}
&\multicolumn{1}{c|}{\textbf{11.12}}\\
\hline
\multicolumn{1}{|l|}{Runtime (ms)}
&\multicolumn{1}{c|}{41.33}
&\multicolumn{1}{c|}{\textbf{39.69}}
&\multicolumn{1}{c|}{44.33}\\
\hline
\end{tabular}}
\vspace{-0.2cm}
\end{table}
\vspace{0.1cm}
\noindent
\textbf{Searching Range.} We compare three variants of LiteFlowNet2 trained on Chairs using different cost-volume settings, as shown in Table~\ref{tab:cost volume study}. On the whole, a larger searching range leads to a lower AEE. The improvement is more significant on the large-displacement KITTI benchmarks. Our design, which uses a larger searching range together with a sparse cost volume at a high-resolution pyramid level, not only improves the flow accuracy but also promotes more efficient computation. We choose the second cost-volume setting for our final models because it has the fastest runtime.
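For concreteness, the sparse cost-volume idea can be sketched as follows (boundary handling is simplified here: offsets wrap around, whereas a real implementation would zero-pad):
\begin{verbatim}
import torch

def sparse_cost_volume(feat1, feat2_warped, max_disp=6, stride=2):
    # Correlations over offsets in [-max_disp, max_disp], sampled with
    # the given stride; stride=1 reproduces a dense cost volume.
    costs = []
    for dy in range(-max_disp, max_disp + 1, stride):
        for dx in range(-max_disp, max_disp + 1, stride):
            shifted = torch.roll(feat2_warped, shifts=(dy, dx),
                                 dims=(2, 3))
            costs.append((feat1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)   # (B, n_offsets, H, W)
\end{verbatim}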
\section{Conclusion}
\label{sec:conclusions}
We have developed a lightweight and effective convolutional network for addressing the classical problem of optical flow estimation by adopting data fidelity and regularization from variational methods.
LiteFlowNet uses pyramidal feature extraction, feature warping, multi-level cascaded flow inference, and flow regularization to break the de facto rule that an accurate flow network requires a large model size. To address large-displacement and detail-preserving flows, it exploits short-range matching to generate a pixel-level flow field and further improves the estimate to sub-pixel accuracy in each cascaded flow inference. To produce crisp flow boundaries, each flow field is adaptively regularized through the feature-driven local convolution.
The evolution of LiteFlowNet results in LiteFlowNet2, which runs 2.2 times faster and attains a better flow accuracy. LiteFlowNet2 outperforms the state-of-the-art FlowNet2~\cite{Ilg17} on the Sintel and KITTI benchmarks while being 3.1 times faster in the runtime and 25.3 times smaller in the model size. It also outperforms PWC-Net+~\cite{Sun19} on KITTI 2012 and 2015, and is on par with PWC-Net+ on Sintel Clean and Final while being 1.4 times smaller in the model size.
With its lightweight, accurate, and fast flow computation, LiteFlowNet2 can be deployed to many applications such as video processing, motion segmentation, action recognition, SLAM, 3D reconstruction, and more.
The identification of dark matter (DM) is one of the most urgent questions
in astroparticle physics. For many decades, evidence for its sizeable
presence in the Universe and its important role in structure formation has
been accumulating, and the overall relic density $\Omega h^2$ ($h$ being the
present Hubble expansion rate in units of 100 km s$^{-1}$ Mpc$^{-1}$) has
been precisely measured \cite{Ade:2015xua}. The
lightest neutral supersymmetric (SUSY) partner of electroweak gauge
and Higgs bosons (neutralino $\tilde{\chi}^0_1$) continues to be a prime
candidate for WIMP (weakly interacting massive particle) DM, and
theoretical calculations of the DM relic density at next-to-leading
order (NLO) of QCD with DM@NLO, which include all coannihilation channels
(with the exception of $\tilde{t}_1\tilde{t}_1\to q\bar{q},gg$)
now match the experimental precision \cite{Harz:2016dql}. For an unambiguous
identification of DM, it must, however, be detected on Earth, e.g.\ with
large kryogenic detectors like XENON1T \cite{Aprile:2017iyp}. Comparisons
with theoretical cross section calculations and correlations with the relic
density or other observables (e.g.\ from indirect detection or the LHC) should
then allow for a precise extraction of the DM mass and couplings. For
neutralinos, this is now possible thanks to the calculation of NLO SUSY-QCD
corrections to the neutralino-nucleon cross section and the inclusion of this
second DM observable in DM@NLO \cite{Klasen:2016qyz}.
\section{Neutralino-nucleon cross section}
The differential rate for direct DM detection (in counts/kg/day/keV)
\begin{equation}
\frac{\mathrm{d}R}{\mathrm{d}E} = \sum_i c_i \frac{\sigma_i}{2m_{\tilde{\chi}^0_1}\mu_i^2}\rho_0 \eta_i
\end{equation}
is usually expressed in terms of the nuclear mass fractions $c_i$, reduced
masses $\mu_i$, local DM density $\rho_0=0.3$ GeV/cm$^3$, and velocity
integrals $\eta_i= \int_{v_{\min,i}}^{v_{\rm esc}}{\rm d}^3v \, f(\vec{v})/v$ with
$v_{\min,i}=\sqrt{m_iE/(2\mu_i^2)}$.
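As an illustration (not part of DM@NLO), the velocity integral can be evaluated numerically for a truncated Maxwell--Boltzmann halo; the halo parameters below are conventional choices, and the Earth's motion is neglected for simplicity:
\begin{verbatim}
import numpy as np

def eta(v_min, v0=220.0, v_esc=544.0, n=2000):
    # eta = int_{v_min}^{v_esc} f(v)/v d^3v for an isotropic,
    # truncated Maxwell-Boltzmann speed distribution (km/s units).
    vv = np.linspace(1e-6, v_esc, n)
    f = 4.0 * np.pi * vv**2 * np.exp(-(vv / v0) ** 2)
    norm = np.trapz(f, vv)
    mask = vv >= v_min
    return np.trapz(f[mask] / vv[mask], vv[mask]) / norm
\end{verbatim}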
Since the spin-{\em independent} cross sections for each isotope in the target
\begin{equation}
\sigma_i^{\mathrm{SI}} = \frac{\mu_i^2}{\pi}\left|Z_i g_p^{\mathrm{SI}} +(A_i-Z_i)g_n^{\mathrm{SI}}\right|^2|F_i^{\mathrm{SI}}(Q_i)|^2
\end{equation}
depend on the nuclear charges $Z_i$, masses $A_i$ and structure functions
$F_i^{\rm SI}$, they are often replaced by the one for a single nucleon
(assuming $g_p=g_n$) to enable a direct comparison of different experiments.
We use, however, the exact expressions
\begin{equation}
g_N^{\mathrm{SI}} = \sum_{q} \langle N |\bar{q}q| N\rangle \alpha_{q}^{\mathrm{SI}}
\end{equation}
for the spin-independent four-fermion couplings. The Wilson coefficients
$\alpha_{q}^{\mathrm{SI}}$ contain the wanted information on the electroweak
interaction of DM and quarks, while the nuclear matrix elements
$ \langle N |m_q \bar{q}q| N\rangle = f_{Tq}^N m_N$ are known to be subject
to considerable uncertainties from the non-perturbative regime of QCD
\cite{Gondolo:2004sc,Belanger:2007zz,Crivellin:2013ipa}. Beyond
tree level, the Wilson coefficients $\alpha_q^{\mathrm{SI}}$ are, however,
also affected by (perturbative) QCD uncertainties and become related to the
nuclear matrix elements through renormalisation group equations.\footnote{The
role of
effective gluon interactions has been discussed in Ref.\ \cite{Drees:1992rr}.}
Similarly, the spin-{\em dependent} cross section
\begin{equation}
\sigma_i^{\mathrm{SD}} = \frac{4\mu_i^2}{2J +1}\big(|g_p^{\mathrm{SD}}|^2S_{\mathrm{pp},i}(Q_i) + |g_n^{\mathrm{SD}}|^2S_{\mathrm{nn},i}(Q_i)
+ |g_p^{\mathrm{SD}}g_n^{\mathrm{SD}}|S_{\mathrm{pn},i}(Q_i)\big)
\end{equation}
depends on the spin structure functions $S_{NN,i}$ and spin-dependent
four-fermion couplings
\begin{equation}
g_N^{\mathrm{SD}} = \sum_{q= u,d,s} (\Delta q)_N \alpha_{q}^{\mathrm{SD}}.
\end{equation}
Here, the nuclear spin $J$ is assumed to be carried mostly by the three light
quark flavours, and the quark spin contributions are taken to be isospin symmetric.\footnote{This need not be the case
as discussed in Refs.\ \cite{deFlorian:2014yva,Li:2015wca}.}
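As a concrete reading of this formula (the $(\Delta q)_N$ values below are merely indicative numbers, not the ones fixed by our numerical setup):
\begin{verbatim}
def g_N_SD(alpha_SD, delta_q):
    # g_N^SD = sum over light quarks of (Delta q)_N * alpha_q^SD
    return sum(delta_q[q] * alpha_SD[q] for q in ('u', 'd', 's'))

# Indicative proton spin fractions (assumed values):
delta_p = {'u': 0.84, 'd': -0.43, 's': -0.09}
\end{verbatim}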
The tree-level diagrams for neutralino-quark scattering are shown in
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig01}
\caption{Full tree-level Feynman diagrams for neutralino-quark scattering.}
\label{fig:01}
\end{center}
\end{figure}
Fig.~\ref{fig:01}. After the calculation of all self-energy, vertex and box
corrections, we renormalise the ultraviolet (UV) divergences in a mixed
on-shell and
$\overline{\rm DR}$ scheme \cite{Klasen:2016qyz}. It has the advantages of
being perturbatively stable, in particular in the top sector, and of allowing
for meaningful correlations with our relic density calculations
\cite{Harz:2016dql} and tree-level comparisons with micrOMEGAs
\cite{Belanger:2007zz}, where the same on-shell squark masses are used that
are provided by the SUSY spectrum generator SPheno \cite{Porod:2011nf}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.22\textwidth]{fig02}
\caption{Effective tree-level diagram for neutralino-quark scattering.}
\label{fig:02}
\end{center}
\end{figure}
In the non-relativistic regime, our full calculation is then matched
to the spin-independent and spin-dependent operators $Q_{1,2}$ in the
effective Lagrangian
\begin{equation}
\mathcal{L}_\mathrm{eff} = c_1Q_1 + c_2Q_2 = c_1\bar{\chi}\chi\bar{q}q + c_2\bar{\chi}\gamma_\mu\gamma_5\chi\bar{q}\gamma^\mu\gamma_5q
\end{equation}
as shown symbolically in Fig.~\ref{fig:02}. As expected, the tree-level
coefficients, obtained after a Fierz transformation for the squark processes,
agree with those in DarkSUSY \cite{Gondolo:2004sc}. After the one-loop
corrections in the effective theory have also been computed, the matching
condition
\begin{eqnarray}
\mathcal{M}_\mathrm{full}^\mathrm{tree} + \mathcal{M}_\mathrm{full}^\mathrm{1loop} & \stackrel{!}{=} & (c_1^\mathrm{tree} + c_1^\mathrm{1loop})(Q_1^\mathrm{tree}+ Q_1^\mathrm{1loop}) + (c_2^\mathrm{tree} + c_2^\mathrm{1loop})(Q_2^\mathrm{tree} + Q_2^\mathrm{1loop})
\end{eqnarray}
leads to a refactorisation and UV-finite, but scale-dependent redefinitions of
Wilson coefficients and operators. In the spin-independent case, the quark
masses $m_q(\mu)$ are factorised in $c_1$ and run from the high SUSY-breaking
scale 1 TeV to the low scale 5 GeV, where the nuclear matrix elements are
defined. In the spin-dependent case, the running of $c_2$ is given by
\begin{equation}
\frac{c_2(\mu_\mathrm{low})}{c_2(\mu_\mathrm{high})} = \exp\left(\frac{2n_f(\alpha_s(\mu_\mathrm{high}) - \alpha_s(\mu_\mathrm{low}))}{\beta_0\pi}\right).
\end{equation}
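As a quick numerical illustration of this suppression (the $\alpha_s$ values below are indicative only):
\begin{verbatim}
import math

def c2_ratio(alpha_high, alpha_low, nf=5):
    # c2(mu_low)/c2(mu_high) from the expression above, with the
    # one-loop coefficient beta0 = 11 - 2*nf/3.
    beta0 = 11.0 - 2.0 * nf / 3.0
    return math.exp(2.0 * nf * (alpha_high - alpha_low)
                    / (beta0 * math.pi))

print(c2_ratio(0.088, 0.212))   # ~0.95 for 1 TeV -> 5 GeV
\end{verbatim}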
\section{Numerical results}
Phenomenological minimal SUSY Standard Model (pMSSM) scenarios with eleven
free parameters and bino-wino, bino-higgsino, or higgsino-bino DM, which satisfy
all current experimental constraints, have been presented in Ref.\
\cite{Harz:2016dql}. Scenario B, e.g., contains a bino-higgsino DM candidate
of about 267 GeV mass and up- and down-type squarks of mass 550 and 556 GeV,
respectively. Fig.\ \ref{fig:03} shows a scan in the bino mass parameter $M_1$
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.66\textwidth]{fig03}
\caption{Combined relic density and direct detection calculation in scenario B.}
\label{fig:03}
\end{center}
\end{figure}
around this point, indicated there by full vertical lines at tree-level
(black), with micrOMEGAs (orange), and in NLO (blue). In the shown mass
region, a second viable scenario with a lower DM mass of about 228 GeV is
found, indicated by a second set of vertical lines. One observes good
agreement at leading order, but a significant shift at NLO in the
spin-dependent direct detection cross section (left ordinate and full curves).
The corresponding numbers are listed in Tab.\ \ref{tab:01}. When correlated
with the relic density calculations at the same order (dashed curves), this
leads to shifts in the extracted bino mass parameter $M_1$ of several GeV.
In other SUSY scenarios, these effects can even be considerably larger.
\begin{table}
\caption{Resulting $M_1$ and spin-dependent neutralino-proton cross section when combining direct detection and relic density routines in scenario B.}
\centering\vspace*{2mm}
\begin{tabular}{|c|ccc|}
\hline
$\quad$ & $M_1$ [GeV] & $\sigma^{\mathrm{SD}}_p$ [$10^{-43}$cm$^2$]& Shift of $\sigma^{\mathrm{SD}}_p$\\
\hline
micrOMEGAs & 226 & $2.78$ & $+3\%$ \\
Tree level & 228 & $2.70$ & \\
Full NLO & 227 & $1.65$ & $-39\%$ \\
\hline
micrOMEGAs & 270 & $4.14$ & $+8\%$\\
Tree level & 267 & $3.84$ & \\
Full NLO & 269 & $2.47$ & $-36\%$ \\
\hline
\end{tabular}
\label{tab:01}
\end{table}
\section{Conclusion}
In conclusion, we have summarised our recent analytical calculation of
NLO SUSY-QCD corrections to spin-independent and spin-dependent
neutralino-nucleon cross sections, emphasising our choice of renormalisation
scheme, the matching of the full diagrammatic calculation to the effective
scalar and axial-vector operators, and the renormalisation group running of
the Wilson coefficients. More technical issues like our specific tensor
reduction method, which avoids vanishing Gram determinants at non-relativistic
velocities, were omitted from our discussion, but can be found in Ref.\
\cite{Klasen:2016qyz}.
Numerical results for the direct detection of SUSY DM can now be obtained
with DM@NLO for any neutralino decomposition (bino, wino, or higgsino).
For a specific bino-higgsino benchmark scenario we found sizeable NLO
corrections, which are in fact comparable to the nuclear uncertainties,
and we demonstrated that correlations of the relic density and direct
detection rates at NLO lead to more precise determinations of the underlying
SUSY model parameters.
\begin{figure}\label{fig:bumpSummary}
\centering
\includegraphics{bumpAttractorIntro}
\caption{(a) Bifurcation diagram of travelling waves in a continuous
integrate-and-fire model. (b) Bump attractor in a discrete integrate-and-fire
model with 5000 neurons (dots represent neuronal firing events). Model descriptions and
parameters will be
given later in \cref{sec:discreteModel,tab:parameters,fig:exampleBumps,fig:exampleWaves}).
The bifurcation diagram in (a) shows selected branches of stable (blue) and unstable (grey)
travelling waves, in the continuation parameter $\beta$, that is, the timescale
at which neuron process incoming currents. Waves are measured using their width
$\Delta$ and are indexed by the number of advected spikes. The profile of
\tw{11}, a representative wave with $11$ spikes, is shown. A large number of
waves (\tw{2}--\tw{160} in the picture, but many more unstable branches are
omitted) coexist with the trivial homogeneous
state, which is the only steady state in the model, and which is stable for all
values of $\beta$. Thin waves are stable, with a small basin of attraction.
Sufficiently large, localised disturbances of the homogenous state lead to the
formation of a bump with a characteristic width: the bump in (b) is marked as (2)
in (a). The region in parameter space where bumps are
observed is crowded with unstable travelling waves, with a large number of
spikes, and a width comparable to the one of the bump. Branches of waves are
detached from the homogeneous state; they originate at critical points called
grazing points (blue dots in (a)); waves that are born stable become unstable at
oscillatory bifurcations (grey dots in (a)).
}
\end{figure}
Understanding how networks of coupled, excitable units generate collective patterns is a
central question in the life sciences and, more generally, in applied mathematics.
In particular, the study of network models
is ingrained in neuroscience applications, as they provide a natural way to describe
the interaction of neurons within a population, or of neural populations within the
cortex. In the past decades, a large body of work in mathematical neuroscience has
addressed the development and analysis of neurobiological networks, with the view of studying the
origin of large-scale brain
activity~\cite{ermentrout2010mathematical,bressloff2014waves,coombes2014neural}, and
mapping single-cell and population parameters to experimental observations,
including in vivo and in vitro cortical
waves~\cite{Richardson:2005cs,Huang:2004kw,GonzalezRamirez:2015gk},
electroencephalogram recordings~\cite{SteynRoss:2003ep}, and patterns in the visual
cortex~\cite{Camperi:1998ji}.
This paper presents a novel mathematical characterisation of a prominent example of
spatiotemporal pattern in neuroscience applications, and draws an analogy inspired by
recent progress in the fluid-dynamics literature on transition to turbulence in a
pipe~\cite{barkley2016theoretical}. We focus on the so-called \textit{bump
attractor}\footnote{In the neuroscience
literature the term \textit{bump attractor} refers sometimes to a network producing a
localised pattern, as opposed to the pattern itself. Similarly, some authors use
\textit{ring attractor} for a network with ring topology, generating a localised
activity bump. Here, we use these terms to refer to patterns, following the standard
convention in the dynamical system literature.}, a localised pattern of
neural activity observable in experiments and numerical simulations of
spatially-extended, neurobiological networks
\cite{redish1996coupled,zhang1996representation}. Bump attractors have been
associated to \textit{working
memory}, the temporary storage of information in the brain, and experimental evidence
supporting their existence has been found in the navigational systems of
rats~\cite{knierim2012attractor} and flies
\cite{kim2017ring,turner2017angular}, and in oculomotor responses of
monkeys~\cite{wimmer2014bump}.
In a bump attractor, the neural activity is localised around a particular position in the
network (see \cref{fig:bumpSummary}(b)) which may encode, for instance, the animal's head
position. Bumps are elicited by transient localised stimuli, such as visual cues at
specific locations, but are sustained
autonomously by the network once the stimulus is removed (the network dynamics is
\emph{attracted} to the bump). These coherent structures display a characteristic
wandering motion, and may exhibit discontinuous jumps if the impinging stimulus
undergoes sudden spatial shifts \cite{kim2017ring}.
\subsection{Model descriptions} Mathematical neuroscience has a long-standing
fascination with localised bumps of
activity. Neural field models, which represent the cortex as a continuum, were
introduced in the 1970s, and spatially-localised solutions to these models appeared
already in seminal papers on the subject, by Wilson and
Cowan~\cite{wilson1973mathematical}, and by Amari~\cite{amari1977dynamicsa}. Since
then, many authors have studied localised solutions in neural fields, addressed the
derivation of neural field equations from first principles, their relevance to a
wide variety of neural phenomena, and their rigorous mathematical treatment. We refer
the reader to \cite{ermentrout2010mathematical,bressloff2014waves,coombes2014neural}
for exhaustive introductions on this topic.
Neural fields are integro-differential equations which model the cortex as an
excitable, spatially-extended medium. Mathematical mechanisms for pattern
formation in neural fields are similar to the ones found in other nonlinear media,
such as reaction-diffusion systems, albeit their analysis requires some
modifications because these models contain nonlocal operators. Stationary bumps form via
instabilities of the homogeneous steady state, and their profile depends strongly on the
coupling, which typically involves excitation on short spatial scales, and inhibition
on longer
scales~\cite{ermentrout2010mathematical,bressloff2014waves,coombes2014neural}. Neural
fields support
travelling bump solutions, as well as wandering bumps. The latter are obtained in
neural fields that incorporate stochastic terms deriving, for instance, from noisy
currents~\cite{kilpatrick2013wandering,maclaurin2019determination}.
Neural fields are heuristic, coarse-grained models, hence they bypass microscopic
details that are important in bump attractors. For instance, the
neural firing rate, which is an emergent neural property and an observable in the
bump attractor experiments, is a prescribed feature in neural fields, hardwired in the model
through an ad-hoc firing-rate function. On the other hand, numerical simulations
of large networks of Hodgkin--Huxley-type neurons with realistic biological
details can display emergent neural firing, but their mathematical treatment is
challenging, and still under development~\cite{baladron2012mean}.
\emph{Spiking neural networks} are intermediate, bottom-up models which couple
neurons with idealised dynamics. The
salient feature of spiking models is that the firing of a neuron is described as an
event, and no attempt is made to model the temporal evolution of the membrane
potential during and after the
spike~\cite{izhikevich2007dynamical,gerstner2014neuronal,bressloff2014waves}. Spiking
neural networks are specified by three main ingredients: (i) an ordinary differential
equation (ODE) for the membrane potential of each neuron; (ii) rules to define the
occurrence and effects of a spike; (iii) the network coupling.
Since the introduction of the first single-cell spiking model by
Lapicque~\cite{lapicque1907recherches}, the so-called \emph{leaky integrate-and-fire
model}, more realistic variants have been proposed, and spiking neural networks
have become a widely adopted tool in theoretical
neuroscience~\cite{tuckwell1988introduction,Burkitt:2006,gerstner2014neuronal}.
In specific spiking models, analytical progress has been made for single neurons
and spatially-independent networks using coordinate
transformations~\cite{ermentrout1986parabolic,Mirollo:2006ft}, dimension
reduction~\cite{luke2013complete,montbrio2015macroscopic}, and
probabilistic methods~\cite{delarue2015global} (see also the reviews
\cite{sacerdote2013stochastic,bick2019understanding}).
Exact mean-field reductions, amenable to standard pattern-formation
analysis, have been derived in selected spatially-extended
networks~\cite{laing2015exact,esnaola2017synchrony,byrne2019next,schmidt2020bumps},
but generally the study of bumps in spiking models has been possible only with
numerical simulations~\cite{Laing:2001fc,compte2000synaptic}.
The present paper investigates localised patterns supported in discrete and
continuous networks of globally coupled leaky integrate-and-fire neurons. In direct numerical
simulations, we use a well-known discrete model, proposed by Laing and
Chow~\cite{Laing:2001fc}, whose details will be
given later. For now it will suffice to consider a cursory formulation of
the model, simulated in \cref{fig:bumpSummary}(b). The network describes the idealised,
dimensionless voltage dynamics of $n$ all-to-all coupled neurons, evenly-spaced in a
cortex with ring geometry,
\begin{equation}\label{eq:toyNetwork}
\dot v_i = -v_i + I_i(t) + \sum_{j=1}^n S_{ij}(v_j,\beta),
\qquad i = 1,\ldots,n.
\end{equation}
The dynamics of the $i$th neuron's membrane voltage is specified in terms of an Ohmic
leakage current $-v_i$, an external current $I_i(t)$, and voltage-dependent currents,
received from other neurons via synaptic connections; the latter currents, indicated
by $S_{ij}$, have a characteristic time scale $\beta$, and are caused by $v_j$
crossing a fixed threshold (when the $j$th neuron \textit{fires}). After a firing event,
marked with a dot in \cref{fig:bumpSummary}(b), the neuron's voltage is instantly reset to a
default value, from which it can evolve again, following an ODE of
type~\cref{eq:toyNetwork}.
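To fix ideas, the sketch below steps a network of type~\cref{eq:toyNetwork} with a
crude forward-Euler scheme and an explicit threshold-and-reset rule; it is purely
illustrative (parameter values anticipate \cref{tab:parameters} and are assumptions
of the sketch), and the event-driven scheme actually used in this paper is described
in \cref{sec:discreteModel}.
\begin{verbatim}
import numpy as np

# Illustrative forward-Euler stepper for a threshold-and-reset ring network
# of the type above; values anticipate Table 1 and are assumptions of this
# sketch, not a reproduction of the paper's event-driven code.
n, L, dt, steps = 100, 3.0, 1e-3, 20000
beta, I0 = 4.5, 0.9
x = -L + 2*L*np.arange(1, n + 1)/n              # neuron positions
d = np.abs(x[:, None] - x[None, :])             # pairwise distances ...
d = np.minimum(d, 2*L - d)                      # ... measured around the ring
W = 11*np.exp(-5*d) - 7*np.exp(-3.5*d)          # synaptic weights

v, s, spikes = np.random.uniform(0, 1, n), np.zeros(n), []
for k in range(steps):
    v += dt*(I0 - v + s)                        # leak, input, synaptic current
    s += dt*(-beta*s)                           # current decay
    fired = np.flatnonzero(v >= 1.0)            # threshold crossings
    spikes += [(k*dt, x[i]) for i in fired]     # raster-plot data
    v[fired] = 0.0                              # instantaneous reset
    s += (2*L*beta/n)*W[:, fired].sum(axis=1)   # kick to all neurons
\end{verbatim}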
Discrete and continuous networks of this type are canonical
models of neural activity, widely adopted in the mathematical neuroscience
literature~\cite{%
vanVreeswijk:1994fb,
vanVreeswijk:1996jf,
ermentrout1998neural,
Ermentrout1998c,
Bressloff:2000uj,
Laing:2001fc,
Ermentrout:2002dl,
Osan:2002jq,
compte2003cellular,
Osan:2004ko,
Mirollo:2006ft,
Gerstner:2008be}.
It is now established that such networks support bump attractors and localised
waves, but an explanation of the mathematical origins of the former is still lacking.
This paper presents a new approach to the problem,
and uncovers a novel bifurcation structure for localised travelling waves of the
network, shedding light onto the nature of the bump attractor. Our findings
suggest an intriguing analogy between the bump attractor in the integrate-and-fire
network and the phenomenon of transition to turbulence in a pipe. The analogy between
the bifurcation scenarios of these two problems is notable, and we use it here to
summarise our results, highlighting similarities between the respective bifurcation
structures and dynamical regimes.
\subsection{Transition to turbulence in a pipe}
Stemming from the pioneering experiments of Reynolds~\cite{reynolds1883xxix}, a large
body of work in fluid dynamics has addressed how high-speed pipe flows transition from a
laminar state, whose analytical expression is known in closed form, to complex
spatio-temporal patterns, characteristic of the turbulent regime
(see~\cite{barkley2016theoretical} for a recent review).
In this context, the Navier-Stokes equations are studied as a deterministic dynamical
system, subject to changes in Reynolds number, the principal control parameter.
Experiments and computer simulations indicate that the laminar state is stable to
infinitesimal perturbations (linearly stable) up to
large values of the control parameter (up to at least Reynolds number $10^7$ in
numerical computations)~\cite{salwen1980linear,
darbyshire1995transition,
meseguer2003linearized,
van2009flow,
manneville2015transition}.
However, when a disturbance is applied at sufficiently large Reynolds
numbers, a transition to turbulence is observed, depending sensitively on
the applied stimulus \cite{darbyshire1995transition,hof2003scaling}. Current opinions
view the transition as being determined by \emph{travelling wave solutions} to the
Navier-Stokes equations
\cite{schmiegel1997fractal,echhardt2002turbulence,faisst2003traveling,
wedin2004exact,pringle2009highly,gibson2009equilibrium}.
These invariant states, whose spatial
profiles display
hallmarks of the turbulent transition, coexist with the laminar state at intermediate
Reynolds numbers, are linearly unstable, and provide an intricate blueprint for the
dynamics, in that orbits may visit transiently these repelling solutions in phase
space. Importantly, the waves lie on branches that are disconnected from the stable
laminar state, and emerge at saddle-node bifurcations
\cite{faisst2003traveling,wedin2004exact}: this turbulence mechanism is therefore
different from other paradigmatic routes to chaos, involving the destabilisation of
the laminar state, and the progressive appearance of more complicated structures via
a cascade of
instabilities~\cite{landau1944problem,hopf1948mathematical,ruelle1971nature}.
\subsection{Summary of results} In a series of recent papers addressing turbulence
from a dynamical-system viewpoint, Barkley proposed an analogy between pipe
flows and excitable media, using the propagation of an electrical pulse along the
axon of a neuron as a metaphor for localised turbulence puffs
\cite{barkley2011simplifying,barkley2012pipe,barkley2015rise,barkley2016theoretical}.
The present paper offers a mirror-image view, at a different scale: we are motivated by
studying a canonical, complex neurobiological network of coupled excitable neurons,
supporting localised spatio-temporal chaos, and we find a compelling similarity
between the bifurcation structure of waves in this system, and the one of waves in
the pipe turbulence.
With reference to \cref{fig:bumpSummary}, the principal control parameter of the
problem is $\beta$, the timescale of synaptic currents: a low $\beta$ gives small,
persisting currents, while $\beta \to \infty$ gives large instantaneous currents. A
homogeneous steady state
exists and is linearly stable for all values of $\beta$ (the $\Delta =0$
line in \cref{fig:bumpSummary}(a)), but transient localised stimuli trigger the bump
attractor~\cite{Laing:2001fc}. In the analogy, the homogeneous equilibrium plays
the role of a ``laminar state". We stress that the homogeneous steady state
is the only equilibrium of the model. Thus, the model cannot support branches of
stationary bump solutions. Instead, we demonstrate that travelling waves are key to
understanding the bump attractor.
We consider a spatially-continuous version of model~\cref{eq:toyNetwork}, which is known to support
waves advecting a low number of \emph{localised spikes}, or having a non-localised
profile~\cite{Ermentrout1998c,Bressloff:2000dq,Bressloff:2000uj,Ermentrout:2002dl,Osan:2002jq}.
The travelling waves of interest to us, however, have a localised profile, and advect a
\emph{large} number of spikes, such as the one presented in
\cref{fig:bumpSummary}(a). These structures are not accessible with the
current techniques, hence we develop here analytical and numerical tools to construct
them.
We define a particular type of solution, which retains a fixed number of spikes in
time; this class of solutions is sufficiently general to incorporate
travelling waves with an arbitrary, finite number of spikes, and small perturbations to
them. We introduce the \emph{voltage mapping}, a new operator which formalises an
idea previously used in the literature for
spiking~\cite{Ermentrout1998c,Bressloff:2000dq,Bressloff:2000uj,Ermentrout:2002dl,Osan:2002jq,Avitabile2017}
and non-spiking networks
\cite{amari1977dynamicsa,ermentrout2010mathematical,bressloff2014waves,coombes2014neural}.
The voltage mapping is based on level sets describing firing events,
and it allows efficient travelling wave constructions and stability computations.
Using the voltage mapping,
we construct numerically waves with more than $200$ concurrent spikes. These waves are
spatially localised, linearly unstable, and
coexist with the trivial (laminar) state (see \cref{fig:bumpSummary}(a)). As in the
turbulence analogy, the waves contain features of the bump attractor: they pack a
seemingly arbitrary number of
spikes within the width of a bump attractor, and they advect them at an
arbitrarily slow speed, depending on $\beta$, and on the number of carried spikes. As
in the fluid-dynamical analogy, waves are disconnected from the laminar state.
Owing to the intrinsic non-smoothness of the network, the waves emerge primarily at
grazing points (as opposed to the saddle-node bifurcations seen in the fluid-dynamical
analogy), although saddle-node bifurcations do also occur in
certain parameter regimes.
In addition, we present numerical evidence that the dynamics of the bump attractor
displays echoes of the unstable waves which, as in the fluid-dynamics analogy, form
building blocks for the localised structure. Also, the characteristic wandering of the bump
attractor, whose excursions become more prominent as $\beta$ increases, is supported
by this purely \emph{deterministic} system.
The paper is structured as follows: in \cref{sec:discreteModel} we introduce the
discrete model, characterise it as a non-smooth threshold network, and present
numerical simulations of bumps and waves; in \cref{sec:TWm} we introduce the
continuum model, the voltage mapping, and the construction of travelling waves; in
\cref{sec:TWstability} we discuss travelling wave stability, we present numerical
results in \cref{sec:bif-structure-TW}, and we conclude in \cref{sec:conclusions}.
\section{Coherent structures in the discrete model}\label{sec:discreteModel}
We begin by introducing the discrete model by Laing and Chow~\cite{Laing:2001fc}. We
characterise it as a piecewise-linear dynamical system, and we show numerical
simulations of coherent structures. An important difference from the work by Laing
and Chow is that we consider a \emph{deterministic}
model, which we call the Discrete Integrate-and-Fire Model (DIFM). We remark that the
neurons considered here, taken in isolation, are in an \textit{excitable regime},
that is, they exhibit an all-or-none response, based on the input they receive. This
is considerably different from the so-called \textit{oscillatory regime}, in which
neurons, when decoupled from the network, display oscillations~\cite{vanVreeswijk:1994fb,Bressloff:2000uj,Mirollo:2006ft,Gerstner:2008be}.
\subsection{Description of the DIFM} The DIFM is a spatially-extended system of
$n$ identical \emph{integrate-and-fire} neurons, posed on $\mathbb{S} = \mathbb{R}/2L\mathbb{Z}$,
that is, a ring of period $2L$. Neurons are indexed using the set $\mathbb{N}_n =
\{1,\ldots,n\}$ and occupy the discrete, evenly spaced nodes
$x_i = -L + 2iL/n \in \mathbb{S}$, for $i \in \mathbb{N}_n$.
Neurons are coupled via their synaptic connections, which are modelled by a
continuous, bounded, even and exponentially decaying function $w \colon \mathbb{S} \to
\mathbb{R}$:
the strength of the connections from the $k$th to the $i$th neuron depends solely on
the distance $|x_i-x_k|$, measured around the ring, hence we write it as $W_{ik}=
w(x_i - x_k)$, for all $i,k \in \mathbb{N}_n$.
To the $i$th neuron is associated a real-valued time-dependent voltage function
$v_i(t)$, and the coherent structures of
interest are generated when voltages $\{v_i\}$ attain a threshold value (when neurons
\emph{fire}). The DIFM is formally written as follows:
\begin{align}
\dot v_i(t) &= I_i(t)-v_i(t) +
\frac{2L}{n}\sum_{k \in \mathbb{N}_n} \sum_{j \in \mathbb{N}} W_{ik} \alpha(t-\tau_{k}^{j})
-\sum_{j \in \mathbb{N}} \delta (t-\tau^{j}_{i}),
& i \in \mathbb{N}_n,
\label{eq:vodes} \\
v_i(0) &= v_{0i}, & i \in \mathbb{N}_n.
\label{eq:inicond}
\end{align}
At time $\tau_i^j$, when the voltage $v_i$ reaches the value $1$ from below
for the $j$th time, a firing event occurs; a more precise definition of these
\emph{spiking times} will be given below. The formal evolution
equation~\cref{eq:vodes} expresses the modelling assumption that, when a neuron
fires, its voltage is instantaneously reset to $0$ (hence the Dirac delta), and a
so-called \emph{post-synaptic current} is received by all other neurons in the network,
with intensity proportional to the strength of the synaptic connections. The
time-evolution of this current is modelled via the \emph{post-synaptic function} $\alpha(t) = p(t) H(t)$, expressed as the product of a continuous potential
function $p$ and the Heaviside function $H$, hence the post-synaptic current is zero
before a spike.
In this paper we present concrete calculations for
\begin{equation}\label{eq:alphaAndW}
\alpha(t)=\beta \exp(-\beta t)H(t), \qquad
w(x)= a_1 \exp(-b_1 |x|) - a_2\exp(-b_2|x|),
\end{equation}
with $\beta, a_1, a_2, b_1, b_2 >0$, albeit the analytical and numerical framework
presented below is valid for more generic choices, subject to general assumptions
which will be made precise in \cref{subsec:TWCharacterisation}. The function $\alpha$ models exponentially-decaying currents with
rate $-\beta$ and initial value $\beta$, hence the limit $\beta \to \infty$ approximates
instantaneous currents. Currents with an exponential rise and decay are also
used in literature. The synaptic coupling function $w$ is chosen so that connections are
positive (\emph{excitatory}) on the lengthscale $1/b_1$, and negative
(\emph{inhibitory}) on the lengthscale $1/b_2$.
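For later reference, the choices above translate directly into code; the following
sketch fixes the notation used in the numerical illustrations below (parameter
values are representative ones from \cref{tab:parameters}, and the convention
$H(0)=1$ is an assumption of the sketch).
\begin{verbatim}
import numpy as np

# Post-synaptic function and connectivity kernel introduced above;
# representative values from Table 1.
beta, a1, a2, b1, b2 = 4.5, 11.0, 7.0, 5.0, 3.5

def alpha(t):
    # exponentially decaying current, zero before a spike (H(0) = 1 here)
    return beta*np.exp(-beta*t)*(t >= 0)

def w(x):
    # even, exponentially decaying kernel: short-range excitation,
    # longer-range inhibition
    return a1*np.exp(-b1*np.abs(x)) - a2*np.exp(-b2*np.abs(x))
\end{verbatim}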
In addition to the post-synaptic current, neurons are subject to an external stimulus
$I_i(t)$. In certain time simulations, coherent structures will be elicited with the
application of a transient, heterogeneous stimulus of the form
\begin{equation}\label{eq:stimulus}
I_i(t) = I + d_1 H(\tau_\textrm{ext}-t)/\cosh(d_2 x_i), \quad i \in \mathbb{N}_n.
\end{equation}
Our investigation, however, concerns asymptotic states of the autonomous homogeneous
case $I_i(t) \equiv I$, hence one should assume $d_1 =0$, unless stated otherwise. A
description of model parameters and their nominal values can
be found in ~\cref{tab:parameters}.
\begin{table}
\caption{Parameter descriptions and nominal values.}
\label{tab:parameters}
\centering
\begin{tabular}{lcc}
\toprule
Parameter & Symbol & Value(s) \\ \midrule[\heavyrulewidth]
Number of neurons & $n$ &
\{80,500,1000,5000\} \\
Domain half-width & $L$ & \{1,3,4\} \\
Synaptic efficacy and time scale & $\beta$ & [0,25] \\
Synaptic excitation coefficient & $a_1$ & 11 \\
Synaptic inhibition coefficient & $a_2$ & 7 \\
Synaptic excitation spatial scale & $b_1$ & 5 \\
Synaptic inhibition spatial scale & $b_2$ & 3.5 \\
Constant external input & $I $ & 0.9 \\
Time-dependent external input duration & $\tau_\textrm{ext} $ & 2 \\
Time-dependent external input strength & $d_1$ & \{0,2\} \\
Time-dependent external input spatial scale & $d_2$ & \{10,12\} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Event-driven DIFM}
Laing and Chow studied and simulated a stochastic version of the DIFM, using the Euler
method and a first-order interpolation scheme to obtain the firing
times~\cite{Laing:2001fc}. We use here a different approach: in preparation for
our analytical and numerical treatment of the problem, we write the formal model
\crefrange{eq:vodes}{eq:inicond} as a system of $2n$ piecewise-linear ODEs.
To this end we introduce the synaptic input variables
\begin{equation} \label{eq:xsub}
s_{i}(t)= \frac{2L}{n} \sum_{k \in \mathbb{N}_n} \sum_{j \in \mathbb{N}} W_{ik} \alpha(t-\tau_{k}^{j}),
\qquad i \in \mathbb{N}_n
\end{equation}
and combining~\cref{eq:alphaAndW} and~\cref{eq:vodes} we obtain formally
\[
\begin{aligned}
\dot v_i(t) &= I_i(t)-v_i(t) + s_i(t)
-\sum_{j \in \mathbb{N}} \delta (t-\tau^{j}_{i})
\\
\dot s_i(t) &= -\beta s_i(t) +
\frac{2L\beta}{n}\sum_{k \in \mathbb{N}_n} \sum_{j \in \mathbb{N}} W_{ik} \delta(t-\tau_{k}^{j})
\end{aligned} \qquad i \in \mathbb{N}_n.
\]
One way to define the associated non-smooth dynamical system is to express the model
as an impacting system, by partitioning the phase space $\mathbb{R}^{2n}$ via a switching
manifold, on which a reset map is prescribed (see \cite{diBernardo:2008cu} and
references therein for a discussion on non-smooth and impacting systems). Here, we
specify the dynamics so as to expose the firing times $\{\tau^j_k\}$, as opposed to
the switching manifold: this is natural in the mathematical neuroscience context, and
it prepares our analysis of the continuum model. Since $\{\tau^j_k\}$ are the times
at which
orbits in $\mathbb{R}^{2n}$
reach the switching manifold, a translation between the two formalisms is possible.
Following these considerations, we set $\tau^0_i = 0$ for all $i \in \mathbb{N}_n$,
introduce the notation $f(\blank^\pm) = \lim_{\mu \to 0^+} f(\blank\pm\mu)$, and
define firing times as follows\footnote{
Note that $\{\tau_i^0\}_i$ are not firing times, but auxiliary symbols for
the definition of firing times~\cref{eq:firingTimes}. Indeed, since the sums in
\cref{eq:vodes} run for $j \in \mathbb{N}$, the $\{\tau_i^0\}_i$ are immaterial for the
dynamics.}
\begin{equation}\label{eq:firingTimes}
\tau_i^j =
\inf
\big\{
t \in \mathbb{R} \colon t > \tau_i^{j-1}, \;
v_i(t^-) = 1, \;
\dot v_i(t^-) > 0
\big\},
\qquad
i \in \mathbb{N}_n, \quad j \in \mathbb{N}.
\end{equation}
We arrange firing times in a monotonic increasing sequence
$\{\tau_{i_k}^{j_k}\}_{k=1}^q$ such that
\begin{equation}\label{eq:timePartition}
(0,T] =
\bigcup_{k \in \mathbb{N}_{q+1}} \big(\tau^{j_{k-1}}_{i_{k-1}},\tau^{j_k}_{i_k} \big],
\qquad
0 = \tau^{j_0}_{i_0}< \tau^{j_1}_{i_1} \leq \ldots
\leq \tau_{i_{q}}^{j_{q}} < \tau_{i_{q+1}}^{j_{q+1}} = T,
\end{equation}
for some time horizon $T > 0$, and obtain the desired set of $2n$
piecewise-linear ODEs
\begin{equation}\label{eq:eventODE}
\dot v_i = I_i-v_i + s_i, \quad \dot s_i = -\beta s_i
\qquad i \in \mathbb{N}_n,
\qquad t \in
\bigcup_{k \in \mathbb{N}_{q+1}} \big(\tau^{j_{k-1}}_{i_{k-1}},\tau^{j_k}_{i_k} \big],
\end{equation}
with initial and reset conditions
\begin{align}
v_i(0) & = v_{0i},
& s_i(0) &= s_{0i},
& i \in \mathbb{N}_n, &
& &
\label{eq:eventICs}
\\
v_{i_k}(\tau^{j_k\,+}_{i_k}) &= 0,
& s_l(\tau^{j_k\,+}_{i_k}) & = s_l(\tau^{j_k \, -}_{i_k}) + \frac{2L\beta}{n}W_{li_k},
&l \in \mathbb{N}_n, &
& & k \in \mathbb{N}_q,
\label{eq:resets}
\end{align}
respectively. Henceforth, we refer to the non-smooth dynamical system
\crefrange{eq:firingTimes}{eq:resets} with connectivity function $w$ given
by~\cref{eq:alphaAndW} and stimulus~\cref{eq:stimulus} as the \emph{event-driven
DIFM} or simply \emph{DIFM}, that is, we view this model as a substitute for the formal
system~\crefrange{eq:vodes}{eq:inicond}.
Even though the firing-time notation may seem cumbersome at first, the evolution of
the DIFM is remarkably simple: \Cref{eq:eventODE} states that between two
consecutive firing times, neurons evolve independently, subject to a linear ODE; a solution in closed form can be
written in terms of exponential functions,
parametrised by the firing times. Constructing a solution amounts to determining
firing times (impacts with the switching manifold), as is customary in piecewise-linear
systems. This aspect will be a recurring theme in the sections analysing
travelling waves in the continuum model.
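For concreteness: with the exponential post-synaptic function
in~\cref{eq:alphaAndW}, constant input $I_i \equiv I$, and $\beta \neq 1$, a routine
variation-of-constants computation gives, on the interval following a firing time
$t_k$,
\[
s_i(t) = s_i(t_k^+)\, e^{-\beta(t-t_k)},
\qquad
v_i(t) = I + \big( v_i(t_k^+) - I \big)\, e^{-(t-t_k)}
+ s_i(t_k^+)\, \frac{e^{-\beta(t-t_k)} - e^{-(t-t_k)}}{1-\beta},
\]
where for $\beta = 1$ the last fraction is replaced by its limit
$(t-t_k)\,e^{-(t-t_k)}$.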
\begin{figure}
\centering
\includegraphics{exampleBumps_cb}
\caption{Bump attractors obtained via direct numerical simulation of the DIFM
\crefrange{eq:firingTimes}{eq:resets} with external input~\cref{eq:stimulus} and
connectivity function $w$ as in~\cref{eq:alphaAndW}.
We visualise the network voltage (centre) and synaptic current (right) as
functions of space and time and, in the inset (left), a raster plot of the
firing events. Parameters as in~\cref{tab:parameters} with $n=80$, $d_1=2$, $d_2=10$. The
network's synaptic time scale is $\beta = 1$ (a) and $\beta = 3.5$ (b),
respectively. A localised coherent structure is visible in (a), which wanders
when $\beta$ is increased. We remark that the system under consideration is
deterministic.}
\label{fig:exampleBumps}
\end{figure}
In simulations of the DIFM, we time step \Cref{eq:eventODE} rather than using its
analytic solution. We use an explicit adaptive 4-5th order
Runge-Kutta pair with continuous output, and detect events (compute firing times) by
root-finding \cite{Dormand1980,shampine1997matlab}. The simulation stops at each
firing event and is restarted after the reset conditions~\cref{eq:resets} are applied.
Simulating the event-driven DIFM instead of
\cref{eq:vodes} allows us to compute firing
times accurately, and to evolve the system without storing in memory or truncating
the synaptic input sums in~\cref{eq:vodes}.
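As a concrete illustration of this strategy, the sketch below re-implements the
event-driven loop with SciPy's \texttt{solve\_ivp}, whose default \texttt{RK45}
method is a Dormand--Prince 4(5) pair with continuous output and event detection by
root-finding; it is a minimal sketch under the choices~\cref{eq:alphaAndW}, not the
authors' production code.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Event-driven DIFM: integrate the linear ODEs between firing events with
# an adaptive RK 4(5) pair, stop at each upward threshold crossing, apply
# the resets, and restart. A minimal sketch, not the authors' code.
n, L, beta, I0, T_end = 80, 3.0, 4.5, 0.9, 50.0
x = -L + 2*L*np.arange(1, n + 1)/n
d = np.abs(x[:, None] - x[None, :]); d = np.minimum(d, 2*L - d)
W = 11*np.exp(-5*d) - 7*np.exp(-3.5*d)

def rhs(t, y):                        # y = (v_1..v_n, s_1..s_n)
    v, s = y[:n], y[n:]
    return np.concatenate([I0 - v + s, -beta*s])

def make_event(i):                    # v_i = 1, upward crossings only
    ev = lambda t, y: y[i] - 1.0
    ev.terminal, ev.direction = True, 1.0
    return ev

events = [make_event(i) for i in range(n)]
y = np.concatenate([np.random.uniform(0, 1, n), np.zeros(n)])
t, firing_times = 0.0, []
while t < T_end:
    sol = solve_ivp(rhs, (t, T_end), y, events=events,
                    rtol=1e-8, atol=1e-10)        # default method: RK45
    t, y = sol.t[-1], sol.y[:, -1]
    for i in np.flatnonzero([te.size for te in sol.t_events]):
        firing_times.append((t, x[i]))            # raster-plot data
        y[i] = 0.0                                # voltage reset
        y[n:] += (2*L*beta/n)*W[:, i]             # synaptic kick
\end{verbatim}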
\subsection{Coherent structures in the DIFM}
\begin{figure}
\centering
\includegraphics{exampleWaves_cb}
\caption{Stable coexistent waves obtained via direct numerical simulation of
DIFM \crefrange{eq:firingTimes}{eq:resets} with external input~\cref{eq:stimulus}
and connectivity function $w$ as in~\cref{eq:alphaAndW}. Parameters as
in~\cref{tab:parameters} with $n=80$, $\beta = 4.5$ for both (a) and (b), but different
initial stimuli: (a) $d_1=0.4$, $d_2=12$, (b) $d_1 = 2$, $d_2=10$. Depending on
the transient stimulus the model displays: (a) a wave
propagating with positive speed, in which pairs of neurons fire asynchronously,
but at short times from each other; (b) a similar structure
involving a quartet of neurons. Coexisting structures with variable number
of firing neurons have also been found (not shown). The spatial profiles
indicate that neurons reach threshold (dashed red line) one at a time within a
pair (a) or two at a time within a quartet (b).}
\label{fig:exampleWaves}
\end{figure}
The DIFM supports standing and travelling localised structures, as in the stochastic
setting~\cite{Laing:2001fc}. Bumps form robustly when we prescribe
homogeneous initial conditions\footnote{Typically we set $v_{0i} = u \in (0,1),
s_{0i} = 0$, for $i \in \mathbb{N}_n$, but the coherent structures discussed in the
paper can also be found with random, independent and identically distributed
initial voltages, for instance $v_{0i} \sim \mathcal{U}([0,1])$, where $\mathcal{U}$ is the uniform
distribution.}
with a short transient stimulus
(\Cref{eq:stimulus} with $\tau_{\textrm{ext}} = 2$). Since $I_i(t) \equiv I$ for
all $t>\tau_{\textrm{ext}}$, the structures observed over long-time intervals are
solutions to a homogeneous, autonomous problem.
As seen in \cref{fig:exampleBumps},
the bump wanders when $\beta$ is increased.
In passing, we note that this phenomenon is not due to stochastic effects, as studied
in other contexts
\cite{kilpatrick2013wandering,inglis2016general,Avitabile2017}, because the DIFM is
deterministic. For sufficiently
large $\beta$, the system exhibits stable travelling structures: in
\cref{fig:exampleWaves} we show two coexisting waves, found for $\beta = 4.5$ upon
varying slightly the intensity $d_1$ and spatial scale $d_2$ of the transient
stimulus. In each case we plot the voltage and synaptic profiles, and associated
raster plots. We notice different firing patterns in the waves, involving $2$ and $4$
firings, respectively: the wave with $2$ firings travels faster, and its voltage and
synaptic profiles are narrower. We found coexisting waves with a greater number of
firings and progressively lower speed, whose existence and bifurcation
structure will be at the core of the following sections.
\subsection{Remarks about coherent structures in the DIFM} The patterns presented so
far are found in the DIFM with a finite number of neurons $(n=80)$. At first sight,
the raster plots of the waves seem to indicate that neurons fire simultaneously in
pairs (\cref{fig:exampleWaves}(a)) or quartets (\cref{fig:exampleWaves}(b)) as the
structure travels across the network. A closer inspection of the instantaneous
profiles $v_i(t)$ reveals that this is not the case, as the threshold (red dashed
line) is attained by a single neuron in~\cref{fig:exampleWaves}(a), and by two
neurons in~\cref{fig:exampleWaves}(b): neurons in a raster pair fire alternately at
short times from each other, whereas a quartet displays a more complex firing pattern.
Hence, for finite $n$, the propagating structures displayed in~\cref{fig:exampleWaves} are
not strictly travelling waves, in the sense that the profile is not stationary in the
comoving frame; their dynamics is that of saltatory waves
\cite{Coombes:2003ec,Wolfrum:2015fu,Avitabile2017}. The saltatory nature of the
waves, however, is an effect of the network size: as we increase $n$, the amplitude
of temporal oscillations in the comoving frame scales as $O(n^{-1})$, and the
spatio-temporal profile converges to the one of a travelling wave as $n \to \infty$.
In addition, the structure in \cref{fig:exampleBumps}(a) is not a bump, in the sense
that it is not a spatially heterogeneous steady state of the DIFM, because the
pattern is sustained by firing events. Indeed, the only equilibrium supported by the
DIFM is the homogeneous state $v_i(t) \equiv I$, $s_i(t)\equiv0$,
$i \in \mathbb{N}_n$, which is linearly stable for all values of $\beta$, as can be
deduced by inspecting system~\cref{eq:eventODE}.
It will be shown below that we can gain insight into the structure shown in
\cref{fig:exampleBumps}(a) (and its wandering) by constructing travelling waves
and investigating their stability in a continuum version of the DIFM.
\section{Travelling waves in the continuum model}\label{sec:TWm}
As stated in \cref{sec:discreteModel},
the profiles $\{v_i(t)\}_i$ and $\{s_i(t) \}_i$ in \cref{fig:exampleWaves} behave
like travelling wave solutions as $n \to \infty$. Motivated by this observation, we
study travelling waves in a continuum, translation-invariant version of the DIFM: we
set $d_1=0$ in the stimulus~\cref{eq:stimulus}, consider a continuum spatial domain,
and pose the model on $\mathbb{R}$ as opposed to $\mathbb{S}$, obtaining
\begin{equation} \label{eq:contMod}
\begin{split} \partial_t v(x,t) = -
v(x,t) + I
& + \sum_{j \in \mathbb{N}} \int_{-\infty}^\infty w(x-y) \alpha\big( t - \tau_j(y) \big) \, dy \\
& - \sum_{j \in \mathbb{N}} \delta \big( t - \tau_j(x) \big), \qquad (x,t) \in \mathbb{R} \times
\mathbb{R}.
\end{split}
\end{equation}
The formal evolution equation presented above, which we henceforth call the
\emph{continuous integrate-and-fire model} (CIFM), has been proposed and studied by
several authors in the mathematical neuroscience literature
\cite{Ermentrout1998c,Golomb:1999cr,Bressloff:1999ik,Bressloff:2000dq,Osan:2002jq,Osan:2004ko}.
In the CIFM, the firing-time function $\tau_j(x)$ gives the time at which the neural patch at
position $x$ fires for the $j$th time; these functions replace the discrete model's firing
times $\tau_k^j$.\footnote{The index $j$ is used
as a superscript in the firing times, but for notational convenience we use it
as a subscript in the firing functions, so that $\tau_j(x_k) \approx \tau_k^j$.}
A graph of the firing functions replaces the raster plot in the discrete model, so
that a travelling wave in the CIFM corresponding to the $n \to \infty$ limit of the
structure in~\cref{fig:exampleWaves}(a), for instance, will involve $2$ linear firing
functions $\tau_1$, $\tau_2$, with $\tau_1(x)<\tau_2(x)$ for all $x \in \mathbb{R}$.
The existence of travelling wave solutions in \cref{eq:contMod} with a single spike
has been studied
by Ermentrout \cite{Ermentrout1998c} who presented various scalings of the
wavespeed as a function of control parameters. A general formalism for the
construction and linear stability analysis of \emph{wavetrains} (spatially-periodic
travelling solutions) was introduced and analysed by Bressloff
\cite{Bressloff:2000dq}, who derived results in terms of Fourier series expansions.
The construction of travelling waves with multiple spikes was later studied by
O\c{s}an and coworkers~\cite{Osan:2004ko}, albeit stability for these states was not
presented and computations were limited to a few spikes, for purely excitatory
connectivity kernels. The common thread in the past literature on this topic is the idea
that travelling wave
construction and stability analysis rely entirely on knowledge of the firing function
$\tau_j$ (as in the DIFM, with firing times). A similar
approach has been used effectively in Wilson-Cowan-Amari neural field equations,
where it is often called \emph{interfacial dynamics} (see~\cite{Amari1977a} for the first study
of this type, \cite{Coombes:2014aa} for a recent review, and
\cite{coombes2011pulsating,folias2004breathing}, amongst others, for examples of
spatio-temporal patterns analysis).
Here we present a new treatment of travelling wave solutions that draws from this
idea; we introduce an operator, that we call the \emph{voltage mapping}, with the
following aims: (i)
Expressing a mapping between firing functions and
solution profiles, with the view of replacing the formal evolution equation
\cref{eq:contMod} for travelling waves with $m$ spikes (where $m$ is arbitrary).
(ii) Finding conditions for the linear stability of these waves. (iii) Using
root-finding algorithms to compute travelling waves and study their linear stability.
We will relate
to existing literature in our discussion.
\subsection{Notation} Before analysing solutions to the CIFM, we discuss the notation
used in this section. For fixed $m \in \mathbb{N}$, we use $| \blank |_{\infty}$ to
denote, with a little abuse of notation, the $\infty$-norm on $\mathbb{C}^m$. We will use
$|\blank|$ for the standard modulus in
$\mathbb{C}$. We denote by $C(X,Y)$ the set of continuous functions from $X$ to $Y$, and
use $C(X)$ when $Y = \mathbb{R}$. We denote by $B(X)$ the set of real-valued
bounded functions defined on a set $X$, and by $BC(X)$ the set of real-valued, bounded
and continuous functions defined on $X$, respectively; both spaces are endowed with
the supremum norm $\Vert \blank
\Vert_\infty$. We denote by $L^1(\mathbb{R})$ the space of Lebesgue-integrable functions
defined on $\mathbb{R}$. Further, for a positive number $\eta$, we define the
exponentially weighted space
\[
L^1_\eta(\mathbb{R}) =
\Big\{
u \colon \mathbb{R} \to \mathbb{R} \colon
\Vert u \Vert_{L^1_\eta} = \int_\mathbb{R} e^{\eta x} |u(x)| \, dx < \infty
\Big\},
\]
which is a Banach space equipped with the norm $\Vert \blank \Vert_{L^1_\eta}$,
and the following space of exponentially growing functions
\[
C_\eta(\mathbb{R},\mathbb{C}^m) =
\Big\{
u \in C(\mathbb{R},\mathbb{C}^m) \colon
\Vert u \Vert_{C_{m,\eta}} = \sup_{x \in \mathbb{R}} e^{-\eta |x|} \, |u(x)|_{\infty}
< \infty
\Big\},
\]
which is a Banach space equipped with the norm $\Vert \blank \Vert_{C_{m,\eta}}$.
\subsection{Characterisation of solutions to the CIFM via the voltage mapping}
\label{subsec:TWCharacterisation}
We begin by discussing in what sense a voltage function $v$ satisfies the CIFM formal
evolution equation~\cref{eq:contMod}. While we eschew the definition of the CIFM as a
dynamical system on a Banach space (a characterisation that is currently unavailable
in the literature), we note that progress can be made for voltage profiles with a
constant and finite number of spikes for $t \in \mathbb{R}$. This class of
solutions is sufficiently large to treat travelling waves, and small perturbations to
them.
We make a few assumptions on the network coupling, and
we restrict the type of firing functions and solutions of interest, as follows:
\begin{hypothesis}[Coupling functions] \label{hyp:synapticFunctions}
The connectivity kernel $w$ is an even function in $C(\mathbb{R}) \cap L^1_\eta(\mathbb{R})$,
for some $\eta >0$. The post-synaptic function $\alpha \colon \mathbb{R} \to
\mathbb{R}_{\geq 0}$ can be written as $\alpha(t) = p(t)H(t)$, where $H$ is the
Heaviside function, and $p \colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ is a bounded and
everywhere differentiable Lipschitz function, hence $p,p' \in B(\mathbb{R}_{\geq 0})$.
\end{hypothesis}
\begin{definition}[$m$-spike CIFM solution]\label{def:vM}
Let $m \in \mathbb{N}$ and $I \in \mathbb{R}$. A function $v_m \colon
\mathbb{R}^2 \to \mathbb{R}$ is an $m$-spike CIFM solution if there exists $\tau
=(\tau_1,\ldots,\tau_m) \in C(\mathbb{R},\mathbb{R}^m)$ such that
$\tau_1 < \ldots < \tau_m$ on $\mathbb{R}$ and
\begin{align}
& \begin{aligned}
v_m(x,t) = I & + \sum_{j\in \mathbb{N}_m} \int_{-\infty}^t \int_{-\infty}^\infty\!\!\! \exp(z-t) w(x-y)
\alpha(z-\tau_j(y)) \, dy \, dz \\
& - \sum_{j\in \mathbb{N}_m} \exp(\tau_j(x)-t) H(t-\tau_j(x)),
\qquad (x,t) \in \mathbb{R}^2
\end{aligned} \label{eq:vProfile}\\
& v_m(x,t)=1, \qquad (x,t) \in \mathbb{F}_\tau,
\label{eq:vCrossings} \\
& v_m(x,t) < 1, \qquad (x,t) \in \mathbb{R}^2 \setminus \mathbb{F}_\tau,
\label{eq:vBounded}
\end{align}
where
\[
\mathbb{F}_\tau = \bigcup_{j \in \mathbb{N}_m} \{ (x,t) \in \mathbb{R}^2 \colon t = \tau_j(x)\}.
\]
We call $\tau$ and $\mathbb{F}_\tau$ the firing functions and the firing set of $v_m$,
respectively.
\end{definition}
The definition above specifies how we interpret solutions to~\cref{eq:contMod}, and
is composed of three ingredients: (i) \Cref{eq:vProfile}, which derives from
integrating~\cref{eq:contMod} on $(-\infty,t)$, and expresses a mapping between the set
of $m$ firing functions $\tau$ and the voltage profile; (ii) System
\cref{eq:vCrossings}, which couples the firing functions by imposing the threshold
crossings; (iii) A further condition on $v_m$, ensuring that the solution has exactly
$m$ spikes, attained at the firing set; this is necessary because, as we shall see
below, it is possible to find a set of $m$ functions $\tau$ satisfying
\Crefrange{eq:vProfile}{eq:vCrossings}, but exhibiting a number of threshold
crossings greater than $m$.
We now aim to characterise $m$-spike CIFM solutions by means of a \textit{voltage
mapping}, which can be conveniently linearised around a firing set, and
is a key tool to construct waves and analyse their stability. Inspecting
\cref{eq:vProfile} we note that the voltage profile features two contributions, one
from the (synaptic) coupling functions $w$ and $\alpha$, and one from reset
conditions. This observation leads to the following definitions:
\begin{definition}[Synaptic and reset operators]\label{def:SR}
Let $u: \mathbb{R} \to \mathbb{R}$. We define the synaptic
operator, $S$, and the reset operator, $R$, by
\begin{align}
(S u)(x,t) & = \int_{-\infty}^t \int_{-\infty}^\infty \exp(z-t) w(x-y)
\alpha(z-u(y)) \, dy \, dz, & (x,t) \in \mathbb{R}^2, \label{eq:S} \\
(R u)(x,t) & = -\exp(u(x)-t) H(t-u(x)), & (x,t) \in \mathbb{R}^2. \label{eq:R}
\end{align}
\end{definition}
These operators map univariate functions, such as a firing function, to bivariate
functions, contributing to the spatio-temporal voltage profile. The following
lemma shows that the synaptic contribution is a continuous function on the plane,
hence discontinuities in the voltage come through the reset operator, as
expected.
\begin{lemma}\label{prop:SRMappings}
If \cref{hyp:synapticFunctions} holds, then for the operators $S$, $R$ in \cref{def:SR}
we have $S \colon C(\mathbb{R}) \to BC(\mathbb{R}^2)$ and $R \colon C(\mathbb{R}) \to B(\mathbb{R}^2)$,
respectively.
\end{lemma}
\begin{proof}
The statement for $R$ is immediate, whereas proving the continuity of $Su$ on
$\mathbb{R}^2$ requires some estimates for the improper integral. A proof is given in
\cref{supp:proof:SRMappings}.
\end{proof}
We can now define the voltage mapping as follows, combining $S$ and $R$.
\begin{definition}[Voltage mapping] \label{def:voltageMapping}
Let $m \in \mathbb{N}$, $I \in \mathbb{R}$ and $\tau \in C(\mathbb{R},\mathbb{R}^m)$. The $m$-spike
voltage mapping, $V_m$, is the operator defined as
\begin{equation}\label{eq:voltageMapping}
V_m\tau = I + \sum_{j \in \mathbb{N}_m} ( S \tau_j + R \tau_j),
\end{equation}
where $S$ and $R$ are given in \cref{def:SR}.
\end{definition}
By construction, the voltage operator characterises $m$-spike CIFM solutions, as the
following proposition shows.
\begin{proposition}\label{prop:voltageMapping}
Let $m \in \mathbb{N}$, $I \in \mathbb{R}$. An $m$-spike CIFM solution exists if,
and only if, there exists $\tau \in C(\mathbb{R},\mathbb{R}^m)$ such that
\begin{align}
&V_m \tau = 1, && \text{in $\mathbb{F}_\tau$},
\label{eq:voltageMappingThresholds}\\
&V_m \tau < 1,&& \text{in $\mathbb{R}^2 \setminus \mathbb{F}_\tau$}
\end{align}
\end{proposition}
\begin{proof}
The statement follows by setting $v_m(x,t) = (V_m \tau)(x,t)$ and applying the
definition of the voltage mapping, \Cref{eq:voltageMapping}.
\end{proof}
In some cases it is useful to replace the threshold conditions
\cref{eq:vCrossings,eq:voltageMappingThresholds} by equivalent conditions
involving left limits of the voltage function and mapping, respectively, as specified
by the following result.
\begin{corollary}[Discontinuities of $v_m$]\label{cor:limits}
Under the hypotheses of
\cref{prop:SRMappings}, $v_m = 1$ in $\mathbb{F}_\tau$ if, and only if,
$
\lim_{\mu \to 0^+} v_m(x,\tau_i(x)-\mu) = v_m(x,\tau_i(x)^-) = 1
$
for all $(i,x) \in \mathbb{N}_m \times \mathbb{R}$.
\end{corollary}
\begin{proof} Using \cref{prop:SRMappings} one can show that $v_m(x,\tau_i(x)) =
v_m(x,\tau_i(x)^-)$, as we derive in \cref{supp:proof:cor:limits}.
\end{proof}
\Cref{prop:voltageMapping} implies that the voltage of an
$m$-spike solution can be computed for any $(x,t) \in \mathbb{R}^2$ once the
firing functions $\tau$ are known. The spatio-temporal profile of an $m$-spike
solution is determined entirely by its firing functions. This aspect, which
underlies the formal evolution equation \cref{eq:contMod} and the literature which
analyses it, is a key part of what follows and, as we shall see below, it also
suggests a natural way to compute travelling waves, and determine their linear
stability.
A first step in this direction is the definition of travelling waves via the
voltage mapping.
\subsection{Travelling waves with $m$ spikes (\tw{m})} Following \cref{prop:voltageMapping},
we can capture travelling waves with $m$ spikes (\tw{m}) using the voltage mapping,
and a set of parallel firing functions. Henceforth, we will assume without
loss of generality that the propagating speed of the wave is positive: for any wave
with $c>0$, there exists a wave with speed $-c$, and the wave profiles are related by the
transformation $x \to -x$.
\begin{definition}[\tw{m}] Let $m \in \mathbb{N}$, $c>0$, and let $T \in \mathbb{R}^m$ with
$T_1 < \cdots <T_m$. A travelling wave with
$m$ spikes (\tw{m}), speed $c$, and coarse variables $(c,T)$
is an $m$-spike CIFM solution with firing functions $\{ \tau_j(x) = x/c + T_j \}_{j \in
\mathbb{N}_m}$.
\end{definition}
To each travelling wave solution is associated a travelling wave profile which is
advected with propagation speed $c$. From \cref{prop:voltageMapping} we expect this
profile to be determined entirely by the firing functions, as confirmed in the
following result.
\begin{proposition}[\tw{m} profile]\label{prop:nu}
A \tw{m} with speed $c$ satisfies $(V_m\tau)(x,t)= \nu_m(ct-x;c,T)$,
and its $(c,T)$-dependent travelling wave profile $\nu_m$ is given by
\begin{equation} \label{eq:nuXi}
\begin{aligned}
\nu_m(\xi;c,T)
= I & - \sum_{j \in \mathbb{N}_m}
\exp\bigg( -\frac{\xi -cT_j}{c} \bigg) H\bigg(\frac{\xi- c T_j}{c}\bigg) \\
& + \frac{1}{c} \sum_{j \in \mathbb{N}_m}
\int_{-\infty}^\xi \exp\bigg( \frac{z-\xi}{c} \bigg) \int_0^\infty
w(y-z+cT_j) p(y/c) \, dy \,dz.
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof} The statement follows from evaluating the operators $S$ and $R$ at the
firing functions $\{ \tau_j(x) = x/c + T_j \}_{j \in \mathbb{N}_m}$, and operating a
change of variables. See \cref{supp:proof:prop:nu}.
\end{proof}
\Cref{prop:nu} shows that the travelling wave profile is completely determined by
the vector $(c,T) \in \mathbb{R}_{> 0} \times \mathbb{R}^m$, that is, $(c,T)$ is a vector of
coarse variables for the travelling wave.
In the discrete model we introduced an auxiliary spatially-extended variable for the
model, the synaptic input $\{s_i(t)\}_i$ defined in \cref{eq:xsub}. In the
continuum model, the corresponding variable is the function $s_m(x,t) = \sum_{j \in
\mathbb{N}_m}(S\tau_j)(x,t)$, which in a \tw{m} satisfies $s_m(x,t) = \sigma_m(ct-x;c,T)$, with
\begin{equation}\label{eq:sigXi}
\sigma_m(\xi; c,T) = \frac{1}{c} \sum_{j=1}^m \int_0^\infty w(y-\xi+cT_j)
p(y/c) \, dy.
\end{equation}
\subsection{Travelling wave construction}
\begin{figure}
\centering
\includegraphics{TW5-TW20}
\caption{Wave profiles for (a) \tw{5} and (b) \tw{20} obtained by
solving \cref{prob:TWm} for $m=5$ and $m=20$, respectively, and then subsitituting
$(c,T_{1},\dots,T_m)$ into the expression for voltage
profile \cref{eq:nuXi} and synaptic profile \cref{eq:sigXi}. The profile $\nu$ is
computable at any $\xi \in \mathbb{R}$, here we plot it using an arbitrary grid in the
intervals (a) $[-0.5,1]$ and (b) $[-1.5,2]$. Parameters as in~\cref{tab:parameters}
with (a) $\beta = 4.5$ and (b) $\beta = 7.7$.}
\label{fig:TW5-TW20}
\end{figure}
\Cref{prop:nu} suggests a simple way to compute a \tw{m}, by
determining its $m+1$ \emph{coarse variables} $(c,T)$, as a solution to the following
\emph{coarse problem}:
\begin{problem}[Computation of \tw{m}]\label{prob:TWm}
Find $(c,T) \in \mathbb{R}_{> 0} \times \mathbb{R}^{m}$ such that $T_1< \cdots < T_m$ and
\begin{align}
T_1 & = 0, \label{eq:phaseCond} \\
\nu_m(cT_i^-; c, T) & = 1, \qquad \text{for $i \in \mathbb{N}_m$},
\label{eq:nuThreshCross} \\
\nu_m(\xi; c,T) & < 1, \qquad
\text{on $\mathbb{R} \setminus \cup_{j \in \mathbb{N}_m}\{cT_j^-\}$}.
\label{eq:lessThanOne}
\end{align}
\end{problem}
\Cref{eq:nuThreshCross} of the coarse problem imposes that the travelling wave
profile crosses the threshold $1$
as $\xi \to cT^-_j$. As expected, if $\nu_m$ is a travelling wave profile, then so is
$\nu_m(\xi+\xi_0)$ for any $\xi_0 \in \mathbb{R}$; \Cref{eq:phaseCond} fixes the phase of
the travelling wave, by imposing that the profile crosses threshold as $\xi \to 0^-$.
If $m=1$, \Crefrange{eq:phaseCond}{eq:nuThreshCross} of the coarse problem
reduce to a compatibility condition for the speed $c$,
\[
c \int_{-\infty}^0 \int_0^\infty \exp(s) w\big(c(y-s)\big) p(y) \, dy \,ds =
I-1,
\]
which implicitly defines an existence curve for \tw{1} in the ($c$,$I$)-plane. This
result is in agreement with what was found in
\cite{Osan:2004ko,Ermentrout1998c}. Existence curves in other
parameters are also possible, and are at the core of the numerical bifurcation
analysis presented in detail in the sections below.
For $m>1$, the coarse problem must be solved numerically. A simple solution strategy
is to find a candidate solution using Newton's method for the system of $m+1$
transcendental equations \crefrange{eq:phaseCond}{eq:nuThreshCross}, with $\nu_m$
given by \cref{prop:nu}, and with initial guesses estimated from direct simulation of
the discrete model with large $n$, or from a previously computed coarse vector.
The candidate solution can then be evaluated at arbitrary $\xi \in
\mathbb{R}$, hence it is accepted if \cref{eq:lessThanOne} holds on a spatial grid covering
$[-L,L] \subset \mathbb{R}$, with $L \gg 1$. In passing, we note that this procedure is
considerably cheaper than a
standard travelling wave computation for PDEs, which requires the solution of a
boundary value problem, and hence a discretisation of differential operators on
$\mathbb{R}$. Depending on the particular choice of $\alpha$ and $w$, the profile
$\nu_m$ is either written in closed form, as is the case for the
choices~\cref{eq:alphaAndW}, or approximated using standard quadrature rules.
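To make the procedure concrete, the sketch below assembles the residual of
\cref{prob:TWm} for the choices~\cref{eq:alphaAndW} and solves it with
\texttt{scipy.optimize.fsolve} in place of a hand-rolled Newton iteration; the
synaptic term of \cref{eq:nuXi} is evaluated here by naive nested quadrature, so the
sketch is an unoptimised illustration rather than the procedure used in practice.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.optimize import fsolve

# Coarse problem for TW_m: unknowns (c, T_1, ..., T_m); residuals are the
# phase condition T_1 = 0 and nu_m(c T_i^-; c, T) - 1 = 0, i = 1, ..., m.
# Naive sketch; a positive iterate c is assumed throughout.
beta, a1, a2, b1, b2, I0 = 4.5, 11.0, 7.0, 5.0, 3.5, 0.9
w = lambda x: a1*np.exp(-b1*abs(x)) - a2*np.exp(-b2*abs(x))

def nu_left(i, c, T):
    # left limit of the profile at xi = c T_i: only resets with j < i act
    xi = c*T[i]
    reset = sum(np.exp(-(T[i] - T[j])) for j in range(i))
    syn = 0.0
    for Tj in T:                      # synaptic term, with u = xi - z
        val, _ = dblquad(lambda y, u: np.exp(-u/c)*beta*np.exp(-beta*y/c)
                         * w(y - xi + u + c*Tj), 0, np.inf, 0, np.inf)
        syn += val/c
    return I0 - reset + syn

def residual(z):
    c, T = z[0], z[1:]
    return [T[0]] + [nu_left(i, c, T) - 1.0 for i in range(len(T))]

m = 2                                 # initial guess, e.g. from a DIFM run
z = fsolve(residual, np.concatenate([[1.0], np.linspace(0.0, 0.3, m)]))
c, T = z[0], z[1:]
# accept only after checking nu_m < 1 on a grid away from the firing set
\end{verbatim}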
A concrete calculation is presented in \cref{fig:TW5-TW20}, where we show travelling
wave profiles and speeds of a \tw{5} and a \tw{20}. In passing, we note that the
synaptic profile of a \tw{m} at a given time is similar to a bump, but displays
modulations at the core (visible in \cref{fig:TW5-TW20}), as predicted by the
Heaviside switches in~\cref{eq:sigXi}. Travelling waves with a large number of
spikes, such as these ones, have not been accessible to date.
\begin{remark}
\cref{fig:TW5-TW20} shows that profiles with $\nu_m(cT_j^-)=1$ propagate with
\textit{positive} speed, and this does not contradict the numerical simulations in
\cref{fig:exampleWaves}, where solution profiles with $v_m(x,\tau_j(x)^-) = 1$
propagate with \textit{negative} speed. This is a consequence of choosing $\xi = ct
-x$ (as in~\cite{Osan:2004ko}), hence initial conditions for the time simulations
are obtained by reflecting $\nu_m$ about the $y$ axis, since $v_m(x,0) = \nu_m(-x)$.
\end{remark}
\section{Wave Stability}\label{sec:TWstability}
\begin{figure}
\centering
\includegraphics{TildeTau}
\caption{(a)-(b): Examples illustrating the destabilisation of a \tw{3} solution. A
time simulation of the DIFM is initialised using wave profiles obtained by solving
\cref{prob:TWm} for $m = 3$ at (a) $\beta = 17$ and (b) $\beta = 17.5$.
Parameters as in~\cref{tab:parameters}, domain half-width $L =4$ and network
size $n =1000$. The firing functions $\{\tau_j\}$ are plotted for reference.
Oscillatory perturbations to the firing functions do not decrease with time, hence
the wave is unstable. The dynamics leads to stable (a) \tw{2} and (b)
\tw{1} solutions. (c): Perturbations $\tau + \phi$ to the firing functions $\tau$ of a
\tw{m}. At $t=0$ each firing function $\tau_i$ is perturbed by an
amount $\phi_i(-cT_i)$. A \tw{m} is linearly stable if $\phi_i(-cT_i)$ being small
implies that $\phi_i(x)$ stays small for all $x \in (-cT_i,\infty)$ and $i \in
\mathbb{N}_m$ (see \cref{def:linearStability}).}
\label{fig:TildeTau}
\end{figure}
The time simulations in \cref{sec:discreteModel} demonstrate that, for sufficiently
large values of $\beta$, travelling waves with a variable number of spikes coexist and
are stable. It is natural to ask whether these waves destabilise as $\beta$, or any
other control parameter of the model, is varied.
An example of a prototypical wave instability is presented in \cref{fig:TildeTau} for
\tw{3}: a travelling wave is computed solving \cref{prob:TWm}, and this solution is
used as initial condition for a DIFM simulation with $n=1000$ neurons. For
sufficiently large $\beta$, the wave is unstable,
as exemplified by the raster plots in \cref{fig:TildeTau}(a)--(b),
in that the firing functions never return to the ones of a \tw{3}.
Inspecting \cref{fig:TildeTau} we observe that the firing set of the solution is
composed of 3 disjoint curves. Initially, these curves are close to the ones of a
\tw{3}, from which they depart progressively. Ultimately, some firing functions
terminate, and the dynamics displays an attracting \tw{2} or \tw{1}. Capturing the
transitions from a \tw{m} to
a travelling wave with fewer spikes is a nontrivial task. Firstly, classifying these
transitions implies studying the \emph{nonlinear
stability} of the wave. Secondly, the characterisation
we have provided of the CIFM solutions, \cref{def:vM}, assumes that the number of
spikes is constant at all times: accounting for changes in the number of firing
functions would require a redefinition of the model, starting from the
formal evolution equation \cref{eq:contMod}, and patching together solution segments
with variable numbers of spikes.
The voltage mapping, however, opens up the possibility of studying the \emph{linear
stability} of \tw{m}: the spatio-temporal voltage profile of an $m$-spike
solution is determined by its firing functions, $\tau$, via \cref{eq:voltageMapping};
small perturbations $\tau + \varepsilon \phi$ to $\tau$ induce small perturbations to
the spatio-temporal profile, and we expect that a suitable linearisation of the
voltage mapping carries information concerning the asymptotic behaviour of these
perturbations.
The main aim of this section is to formalise the concept of linear stability for the
problem under consideration, and to provide an algorithm for \tw{m} linear stability
computations.
We begin by showing that if two distinct $m$-spike solutions have firing functions
$\tau$ and $\tau+\phi$, respectively, then the perturbations $\phi$ satisfy a linear
equation to leading order. The following lemma also specifies admissible
perturbations, namely $\phi$ are in the Banach space $C_\eta(\mathbb{R},\mathbb{R}^m)$:
perturbations are allowed to grow exponentially as $|x| \to
\infty$, at a rate at most equal to $\eta$, which bounds the decay rate of the
connectivity kernel function $w \in L^1_\eta(\mathbb{R})$.
\begin{lemma}[Linearised voltage mapping operator]\label{lemma:linearVM}
Assume \cref{hyp:synapticFunctions}, and let $(c,T)$ be the coarse variables of a
\tw{m} with firing functions $\tau$. Further, let $L$ be the linear operator defined
by $L\phi = \big( (L\phi)_1,\ldots,(L\phi)_m \big)$, where
\[
(L\phi)_i = \sum_{j \in \mathbb{N}_m} \bigg\{ (\phi_i - \phi_j) 1_{j<i}
+ \int_{cT_{ji}}^\infty
e^{-y/c} w(y) \psi_{ij}(y) \big[ \phi_i - \phi_j(\blank - y) \big] \, dy \bigg\},
\quad i \in \mathbb{N}_m,
\]
with coefficients $T_{ij}$ and functions $\psi_{ij}$ given by
\[
T_{ij} = T_i - T_j, \qquad \psi_{ij} \colon
[cT_{ji},\infty) \to \mathbb{R}, \quad y \mapsto p(0) + \int_0^{y/c-T_{ji}}
e^s p'(s)\, ds, \qquad i,j \in \mathbb{N}_m,
\]
respectively. The following statements hold:
\begin{enumerate}
\item $L$ is a bounded operator from $C_\eta(\mathbb{R},\mathbb{C}^m)$ to itself.
\item Let $0 < \varepsilon \ll 1$ and $\phi \in C_\eta(\mathbb{R},\mathbb{R}^m)$. If $\tau + \varepsilon
\phi$ are firing functions of an $m$-spike CIFM solution (a perturbation of the \tw{m}), then
\begin{equation}\label{eq:LPhi}
0 = L \phi + O(\varepsilon) \qquad \text{in $\mathbb{R}$}
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
To prove that $L \colon C_\eta(\mathbb{R},\mathbb{C}^m) \to C_\eta(\mathbb{R},\mathbb{C}^m)$ is well
defined and bounded, one shows that for any $\phi \in C_\eta(\mathbb{R},\mathbb{C}^m)$ there
exists a positive constant $\kappa_{m,\eta}$ such that $ \Vert L\phi
\Vert_{C_{m,\eta}}\leq \kappa_{m,\eta}
\Vert \phi \Vert_{C_{m,\eta}}$. A proof is obtained by deriving preliminary
estimates on $\psi_{ij}$ and $|(L\phi)(x)|_\infty$, and relies on the decay
properties of $w$, stated in
\cref{hyp:synapticFunctions}. Part 2 is proved by setting $u = \tau + \varepsilon \phi \in
C_\eta(\mathbb{R},\mathbb{R}^m)$, linearising the mapping $u \mapsto V_m u - 1
\vert_{\mathbb{F}_u}$
around $\tau$, and exploiting properties of $m$-spike solutions
outlined in \cref{subsec:TWCharacterisation}. A proof is given in
\cref{supp:sec:linearVM}.
\end{proof}
\begin{remark}
Note that the operator $L$ depends on the coarse variables $(c,T)$, albeit we omit
this dependence for notational simplicity.
\end{remark}
We are now ready to define linear stability for a \tw{m}, which we
adapt from \cite{Bressloff:2000dq}. Intuitively, we compare the firing set
$\mathbb{F}_\tau$ of a \tw{m} with the firing set $\mathbb{F}_{\tau+\phi}$ of a perturbed
$m$-spike solution with $\Vert \phi \Vert_{C_{m,\eta}} \ll 1$, for which $\phi$
satisfy \cref{eq:LPhi} to leading order. If the sets $\mathbb{F}_\tau$ and $\mathbb{F}_{\tau+\phi}$
are close around $t=0$ and remain close for all positive times, we deem the wave
linearly stable. With reference to
\cref{fig:TildeTau}(c), we observe that,
when \tw{m} crosses the axis $t=0$, each one of its firing functions $\tau_i$ is
perturbed by an amount $\phi_i(-cT_i)$. Roughly speaking, a \tw{m} is linearly stable
if $\phi_i(-cT_i)$ being small implies that $\phi_i(x)$ stays small for all $x \in
(-cT_i,\infty)$ and $i \in \mathbb{N}_m$. If a wave is linearly stable and all $\phi_i$
decay to $0$ as $x \to \infty$ we say that the wave is asymptotically linearly
stable. More precisely:
\begin{definition}[Linear stability of \tw{m}]
\label{def:linearStability}
A \tw{m} with coarse variables $(c,T)$ is linearly stable to perturbations $\phi$
if $\phi \in \ker L$, and for each $\varepsilon >0$ there exists $\delta = \delta(\varepsilon)
>0$, such that if $|\phi_i(-cT_i)| < \delta$, then $|\phi_i(x)| < \varepsilon$
for all $(i,x) \in \mathbb{N}_m \times (-cT_i,\infty)$.
A \tw{m} is asymptotically linearly stable to perturbations $\phi$ if it is linearly
stable to perturbations $\phi$ and $| \phi (x) |_{\infty} \to 0$ as $x \to
\infty$.
\end{definition}
We have seen that a \tw{m} can be constructed by solving a nonlinear problem in the
unknowns $(c,T)$. The following lemma, which is the central result of this section,
establishes that linear stability of a \tw{m} with respect to exponential
perturbations of the firing functions can also be determined by finding roots
of a $(c,T)$-dependent, complex-valued function.
\begin{lemma}[\tw{m} stability]\label{lem:twStability}
Assume \cref{hyp:synapticFunctions}, let $(c,T)$ be coarse variables of a
\tw{m}, and let $\mathbb{D}_{a,b} = \{ z \in \mathbb{C} \colon a \leq \real z \leq b
\}$. Further, let $E$ be the complex-valued function
\begin{equation}\label{eq:EDefinition}
E \colon \mathbb{D}_{-\eta,\eta} \to \mathbb{C}, \quad z \mapsto \det[ D - M(z) ],
\end{equation}
where
$M \in \mathbb{C}^{m \times m}$, $D = \diag(D_1, \ldots,D_m) \in \mathbb{R}^{m \times m}$,
are the matrices with elements
\[
M_{ij}(z) = e^{T_{ji}}
\bigg[
1_{j<i} + \int_{cT_{ji}}^\infty e^{-(z+1/c)y} w(y) \psi_{ij}(y)\, dy
\bigg],
\qquad
D_i = \sum_{k \in \mathbb{N}_m} M_{ik}(0),
\]
respectively. The following statements hold:
\begin{enumerate}
\item If $\lambda$ is a root of $E$, then its complex conjugate $\lambda^*$ is
also a root of $E$, and there exists a nonzero $\Phi \in \ker[
D-M(\lambda)]$ such that $\Phi e^{\lambda x}, \Phi^* e^{\lambda^* x} \in
\ker{L}$, where $L$ is defined as in \cref{lemma:linearVM}.
\item $E$ has a root at $0$. \tw{m} is linearly stable (but not
asymptotically linearly stable) to perturbations
$\phi \colon x \mapsto \kappa v$, where $\kappa \in \mathbb{R} \setminus \{0\}$ and $v=(1,\ldots,1) \in
\mathbb{R}^m$.
\item If $\lambda$ is a root of $E$ in $\mathbb{D}_{-\eta,0} \setminus \mathrm{i}\mkern1mu \mathbb{R}$,
then $\tw{m}$ is asymptotically linearly stable to perturbations $\Phi
e^{\lambda x} + \Phi^* e^{\lambda^* x}$.
\end{enumerate}
\end{lemma}
\begin{proof}
A proof of the statement is given in \cref{supp:sec:twStability}.
\end{proof}
\Cref{lem:twStability} provides a link between exponential perturbations to the
firing times of a \tw{m} and zeroes of the function $E$ in the strip
$\mathbb{D}_{-\eta,\eta} \subset \mathbb{C}$. The function $E$ depends on $(c,T)$ via the
entries of the matrices $D, M$, and can be evaluated numerically at each point
$z \in \mathbb{D}_{-\eta,\eta}$.
In PDEs, linear stability of a travelling wave is determined by the spectrum of a
linear operator, which contains a $0$ eigenvalue corresponding to a translational
perturbation mode.
Part 2 of \Cref{lem:twStability} provides an analogous result
for a \tw{m}, which is linearly stable, but not asymptotically linearly stable
(therefore neutrally stable), to perturbations that shift the firing functions
homogeneously. Part 3 of \Cref{lem:twStability} suggests
that a \tw{m} is stable if all nonzero roots of $E$ have strictly negative real
parts. Initial guesses for the roots can be obtained by plotting $0$-level sets of
the function $E$, for fixed $(c,T)$.
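Such a scan is simple to prototype. The following minimal sketch (in Python) evaluates $E$ on a grid in the strip; the kernel $w$, the synaptic function $p$, and the coarse variables $(c,T)$ are hypothetical placeholders, not the values used elsewhere in the paper.
\begin{verbatim}
# Sketch: evaluate E(z) = det[D - M(z)] on the strip and contour the
# zero-level sets of Re(E) and Im(E); their crossings are initial guesses
# for a root solver. All model ingredients below are placeholders.
import numpy as np

c = 1.0                                    # hypothetical wave speed
T = np.array([0.0, 0.4, 1.1])              # hypothetical firing times, T_1 = 0
m, eta = len(T), 0.5

w = lambda y: 0.5 * np.exp(-np.abs(y))     # placeholder connectivity kernel
p0, dp = 1.0, (lambda s: -np.exp(-s))      # placeholder p(0) and p'

# Pre-tabulate the z-independent factor w(y) * psi_ij(y) of the integrand
# of M_ij(z) on a truncated grid [c*T_ji, Ymax]
grids, parts = {}, {}
for i in range(m):
    for j in range(m):
        Tji = T[j] - T[i]
        y = np.linspace(c * Tji, 40.0, 4000)
        s = y / c - Tji                    # upper limits of the inner integral
        ds = np.diff(s)
        inner = np.concatenate(([0.0], np.cumsum(   # cumulative trapezoid
            0.5 * ds * (np.exp(s[:-1]) * dp(s[:-1])
                        + np.exp(s[1:]) * dp(s[1:])))))
        grids[i, j] = y
        parts[i, j] = w(y) * (p0 + inner)  # w(y) * psi_ij(y)

def M(z):
    A = np.zeros((m, m), dtype=complex)
    for (i, j), y in grids.items():
        integral = np.trapz(np.exp(-(z + 1 / c) * y) * parts[i, j], y)
        A[i, j] = np.exp(T[j] - T[i]) * (float(j < i) + integral)
    return A

D = np.diag(M(0.0).sum(axis=1).real)
E = lambda z: np.linalg.det(D - M(z))

X, Y = np.meshgrid(np.linspace(-eta, eta, 41), np.linspace(-3.0, 3.0, 81))
Ev = np.vectorize(E)(X + 1j * Y)
# e.g. plt.contour(X, Y, Ev.real, [0]); plt.contour(X, Y, Ev.imag, [0])
\end{verbatim}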
\section{Bifurcation structure of travelling waves}\label{sec:bif-structure-TW}
\begin{figure}
\centering
\includegraphics{bif-diag-TW3}
\caption{(a) Branch of $\tw{3}$ solutions in the parameter $\beta$, using $c$ as
solution measure. The branch originates
at a grazing point $G$, illustrated by the profile in (b). As
$\beta$ increases, three pairs of complex conjugate roots of $E$ (see
\Cref{eq:EDefinition}) cross the imaginary axis at the oscillatory (Hopf)
bifurcation points $\hb1$, $\hb2$, $\hb3$. Panels (c)
and (d) show selected roots of $E$, before and after $\hb1$, at $\beta = 10$ and
$16$, respectively. (e)--(g) Raster plots for time simulations of the
DIFM with $n=500$ and domain half-width $L=3$, initialised from solutions to
\cref{prob:TWm} at $\beta = 2.17$, $10$, and $16$, respectively. The simulations
show the dynamics of the model for $\beta < \beta_\textrm{G}$ (where a \tw{3}
does not exist in the continuum limit), for $\beta \in
(\beta_\textrm{G},\beta_{\textrm{HB}_1})$ (where \tw{3} is stable according to
the analysis in (c)), and for $\beta > \beta_{\textrm{HB}_1}$ (where \tw{3} is unstable to
oscillatory perturbations, as predicted in (d)). Parameters as in
\cref{tab:parameters}, with $d_1=0$.\label{fig:bif-diag-TW3}}
\end{figure}
The pseudo-arclength continuation routines developed in
\cite{rankin2014continuation,avitabile2020zenodo}
have been used to compute solutions to \cref{prob:TWm}, continue waves in parameter
space, and investigate their stability. A \tw{m} is constructed by
solving \cref{prob:TWm} in the coarse variables $(c,T) \in \mathbb{R}_{>0} \times
\mathbb{R}^{m}$, which is sufficient to reconstruct the wave profile~\cref{eq:nuXi}, and
the corresponding synaptic profile~\cref{eq:sigXi};
in addition, starting from a solution to \cref{prob:TWm}, the linear asymptotic
stability of a \tw{m} is determined by finding roots of the $(c,T)$-dependent
nonlinear function $E$ defined in \cref{eq:EDefinition}.
\cref{fig:bif-diag-TW3} shows the bifurcation
structure of \tw{3}, which is common to most travelling waves found in the model.
The simulations in \Cref{sec:discreteModel} suggest taking the synaptic timescale
parameter $\beta$ as the principal continuation parameter. We use the wavespeed $c$
as solution measure. A branch of solutions originates from a grazing point (G, see
below for a more detailed explanation) and it is initially stable, before
destabilising at a sequence of oscillatory bifurcations
($\hb{1}$--$\hb{3}$), as seen in \cref{fig:bif-diag-TW3}(a). In passing, we note that
there exists a second, fully unstable, branch of \tw{3} solutions characterised by a
slower speed and a smaller width.
This branch, which we omit from the bifurcation diagrams for simplicity, also originates at a
grazing point.
\subsection{Grazing points} In a wide region of parameter space, branches of
\tw{m} solutions originate at a grazing point $\beta = \beta_\textrm{G}$, as
seen in \cref{fig:bif-diag-TW3}(a)--(b) for \tw{3}\footnote{Note that
$\beta_\textrm{G}$ depends on $m$, but we omit this dependence to simplify notation.
The same is true for other quantities in the paper such as $c$ and $T_\textrm{G}$,
for instance.}. At a
grazing point the
\tw{m} profile crosses threshold $m$ times, and attains the threshold
tangentially at a further spatial location, $cT_\textrm{G}$, as shown in
\cref{fig:bif-diag-TW3}(b). This tangency exists at the critical value $\beta =
\beta_\textrm{G}$, signalling a non-smooth transition and a branch termination.
For $\beta > \beta_\textrm{G}$ we observe profiles with exactly $m$ threshold
crossings (a branch of \tw{m} solutions). These profiles exhibit a further local
maximum, which is strictly less than $1$ by construction, at a point
$\xi_\textrm{max} > cT_m$. As $\beta
\to \beta^+_\textrm{G}$ we observe $\xi_\textrm{max} \to cT_\textrm{G}^+$ and
$\nu(\xi_\textrm{max}) \to 1^-$, until the threshold is reached at $\beta =
\beta_\textrm{G}$, where the tangency originates.
For $\beta < \beta_\textrm{G}$ we find solutions to the nonlinear problem
\crefrange{eq:phaseCond}{eq:nuThreshCross} for which $V_m \tau < 1$ in a bounded
interval of $\mathbb{R}$. Since these states violate the condition \cref{eq:lessThanOne},
they do not correspond to $\tw{m}$ solutions, and we disregard them (the branch
terminates at $\beta_\textrm{G}$). We note, however, that in a neighbourhood of
$\beta_\textrm{G}$ there exist branches of travelling wave solutions with different
number of threshold crossings (as it will be shown below).
We found grazing points for every $\tw m$ with $2 \leq m \leq 230$, for the parameters in
\cref{tab:parameters} with $d_1=0$. We observe that for $\beta < \beta_\textrm{G}$
the system evolves towards a DIFM bump attractor (see \cref{fig:bif-diag-TW3}(e)).
Understanding the origin of this transition is the subject of the following sections.
Grazing points are found generically as a secondary control parameter is varied, and
$2$-parameter continuations of grazing points can be obtained numerically, by freeing
one parameter and imposing tangency of the wave profile at one additional point (see
\cref{prob:G} in \cref{sec:supp:TwoParameterContinuation}).
\subsection{Oscillatory bifurcations}
Along the \tw{m} branch, we compute and monitor the roots of $E$ with the largest
real part. \cref{fig:bif-diag-TW3}(c)-(d) show examples
for $\tw{3}$ at $\beta =10$ and $\beta = 16$ respectively. At
$\beta = 10$ we observe a root at $0$, as expected, and other roots with small
negative real part: the wave is therefore linearly asymptotically stable to
firing-threshold perturbations $x \mapsto \Phi e^{\lambda x} + \Phi^* e^{\lambda^*
x}$, with $E(\lambda) = 0$ and $\Phi \in \ker[D - M(\lambda)]$, as confirmed
via simulation in \cref{fig:bif-diag-TW3}(f). In contrast, there exists a pair of
unstable complex conjugate roots for the solution at $\beta = 16$,
indicating an oscillatory (Hopf) instability, which is also confirmed by direct
simulation, in \cref{fig:bif-diag-TW3}(g): after the initial oscillatory instability,
the system destabilises to a \tw{2}. It should be noted that, in other
regions of parameter space and for simulations with different network sizes, we
observed a \tw{3} destabilise to a \tw{1} or the homogeneous steady state.
We expect that branches of periodically modulated \tw{m} solutions
(which are also supported by neural fields~\cite{Ermentrout:2014bw,Coombes:2014uy})
emerge from each of the Hopf bifurcations reported in \cref{fig:bif-diag-TW3}(a). We
note that we could not find stable structures of this type via direct simulations
near onset and, while it is possible to extend our framework to continue such
periodic states, we did not pursue this strategy here.
As shown in \cref{fig:bif-diag-TW3}(a), the \tw{3} branch undergoes a sequence of
Hopf bifurcations $\{ \hb{i} \}_i$: our stability analysis shows several
pairs of complex conjugate roots progressively crossing the imaginary axis as
$\beta$ increases: the computation in \cref{fig:bif-diag-TW3}(d), for instance, is for a
solution at $\beta \in (\beta_{\textrm{HB}_1}, \beta_{\textrm{HB}_2})$. We have
verified numerically (not shown) that the firing functions of spatio-temporal DIFM
solutions in this region of parameter space behave as predicted by the leading eigenvalues
in \cref{fig:bif-diag-TW3}(d), that is, they feature two dominant oscillatory modes:
one stable, and one unstable.
Similarly to grazing points,
Hopf bifurcations can be continued in a secondary parameter (see \cref{prob:Hopf}
in~\cref{sec:supp:TwoParameterContinuation}).
\subsection{Nested branches of travelling waves}
\begin{figure}
\centering
\includegraphics{c-beta-TW1-TW160}
\caption{Bifurcation structure of $\tw{m}$ branches for $m=1,\ldots,160$ in the
parameter $\beta$. (a): For $m \geq 3$, branches are similar to the one shown in
\cref{fig:bif-diag-TW3}(a). As $m$ increases, the waves become slower and their
stability region narrower. The shaded area in (a) is enlarged in (b): the
inset shows selected branches for $m=2,\ldots,160$; oscillatory instabilities occur
within the red segments, and the branches with $m \geq 57$
are fully unstable (solid grey lines). We used here the same data as in
\cref{fig:bumpSummary}, but we present it in terms of $c$, not $\Delta$. Parameters
as in \cref{tab:parameters}, with $d_1 = 0$.\label{fig:c-beta-TW1-TW160}}
\end{figure}
We computed branches of \tw{m} solutions for increasing values of $m$, as reported in
\cref{fig:c-beta-TW1-TW160}(a), using DIFM simulations as initial guesses. In
\cref{fig:bumpSummary} waves were represented by their width, whereas here we use the
propagation speed $c$.
In the
region of parameter space explored in the DIFM model, branches with $m\geq 2$ feature
a grazing point for low
$\beta$, and branches with $m \geq 3$ display sequences of Hopf bifurcations,
following the scenario already discussed in \cref{fig:bif-diag-TW3}(a).
In this region, the \tw{1} branch has a distinct behaviour, featuring a saddle node
point in place of a grazing point. For each \tw{m} branch terminating at a grazing point,
there is a corresponding slow unstable branch originating at a different
grazing point: in \cref{fig:c-beta-TW1-TW160}(a) this behaviour is exemplified
by plotting the fully unstable slow $\tw{5}$ branch (the branch with slowest waves in the
figure), but is omitted for all other branches. The two $\tw{5}$ branches should be
understood as a ``broken saddle-node".
The bifurcation structure of \cref{fig:c-beta-TW1-TW160}(a), valid for the CIFM,
supports numerical simulations of the DIFM, in which a $\tw{m}$ destabilises at
\hb{1}, and gives rise to a new travelling wave
state, \tw{m'} with $m'<m$ (see for instance \cref{fig:TildeTau,fig:bif-diag-TW3}).
These \emph{coexisting \tw{m} branches} are nested in a characteristic fashion, so
far unreported in the literature; the
higher $m$, the slower the wave, and the narrower the stable interval
between $G$ and \hb{1}. This structure is noteworthy: firstly, it is
known that the speed of \tw{1} typically changes \emph{as a secondary parameter is
varied}~\cite{Ermentrout1998c,Bressloff:1999a,Bressloff:2000dq}; however, in networks
with purely excitatory kernels, waves with multiple threshold crossings coexist, and
their speed does not depend strongly on $m$~\cite{Ermentrout1998c}, which has been
a principal reason for studying, approximately and analytically, the only tractable case,
$m=1$~\cite[Section 5.4]{bressloff2014waves} (this scenario is also confirmed by our
calculations, see~\cref{fig:sup:purelyexcitatory_bif_profiles}); secondly, it is
known that Hopf instabilities with purely excitatory connectivity kernel are possible
only if delays are present in the network~\cite{Bressloff:2000dq}.
The results in \cref{fig:c-beta-TW1-TW160} have been obtained using a methodology
that works for arbitrary $m$, and on generic connectivity kernels. They show that, when
inhibition is present: (i) coexisting nested branches of \tw{m} exist; (ii) the speed
of such waves depends
strongly on $m$, and in particular it is possible to construct waves with arbitrarily
small speed, by increasing the number of spikes; (iii) oscillatory instabilities are
present in models without delays, for sufficiently large $m$ and/or sufficiently large
$\beta$. As we shall see, the latter aspect plays a role in understanding the so
called \emph{bump attractor}.
\begin{figure}
\centering
\includegraphics{graze_analysis_large_m_01}
\caption{(a) The quantities $c$ and $T_m$, evaluated at the grazing points $\beta =
\beta_G$, are $O(m^{-1})$ and $O(m)$,
respectively. Since $T_1 = 0$ for all waves, the quantity $cT_m$ measures the
wave width, and we expect the sequence $\{cT_m\}_{m \in \mathbb{N}}$, the sequence of
wave widths, to converge to a fixed value as $m \to \infty$. (b) The solid grey
line is the gain function, proposed in \cite{Laing:2001fc} for
a stationary asynchronous bump, red dots mark the
instantaneous firing rate for \tw{230} at the grazing point, computed according
to the formula $(T_{i+1}-T_i)^{-1}$ at position $x=cT_i$.\label{fig:graze_m_figs}}
\end{figure}
\subsection{The bump attractor}
From the grazing point of \tw{m}, one can compute the grazing point of \tw{m+1}.
For instance, from the \tw{3} grazing profile in \cref{fig:bif-diag-TW3}(b), we
obtain $(c,T_1,T_2,T_3,T_G)$. A grazing point can then be computed by
solving~\cref{prob:G}, and its solution can be used to produce an initial
guess $(c,T_1,T_2,T_3, (T_3 + T_G) / 2,T_G)$ for a grazing point of \tw{4}.
Exploiting this iterative strategy, we computed grazing points and branches for
large values of $m$, obtaining the diagram in \cref{fig:c-beta-TW1-TW160}(b),
corresponding to the shaded area in \cref{fig:c-beta-TW1-TW160}(a).
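As an illustration, the bootstrap admits the following short sketch, where the numerical values are placeholders rather than computed grazing data.
\begin{verbatim}
# From a grazing point of a TW(m), stored as (c, [T_1, ..., T_m], T_G),
# build an initial guess for the grazing point of TW(m+1) by inserting a
# spike time halfway between T_m and T_G; the data below is hypothetical.
def next_grazing_guess(c, T, T_G):
    return c, T + [(T[-1] + T_G) / 2.0], T_G

c0, T0, TG0 = 0.8, [0.0, 0.4, 1.1], 1.5      # mock TW(3) grazing data
guess = next_grazing_guess(c0, T0, TG0)      # (0.8, [0.0, 0.4, 1.1, 1.3], 1.5)
\end{verbatim}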
The branches accumulate as $m$ increases, and for $m \geq 57$, they are fully unstable
for this parameter set. The diagrams provide evidence that there exist unstable
waves with arbitrarily many spikes and vanishingly small speed. It seems therefore
natural to postulate a relationship between these waves and the structures found by
Laing and Chow~\cite{Laing:2001fc} (see also
\cref{fig:exampleBumps,fig:bif-diag-TW3}(e)).
We initially explore this relationship by inspecting the travelling wave profiles: since
each \tw{m} branch with $m > 1$ terminates at a grazing point, we examine a few
features of the profile at these points. The leftmost spike of each wave occurs at
$\xi_1=0$ by construction (see \cref{prob:TWm}), while its rightmost spike is at
$\xi_m = cT_m$, which is therefore a proxy for the wave's width\footnote{Recall that
$c$ is also a function of $m$, but we omit this dependence for ease of notation.}.
\cref{fig:graze_m_figs}(a) shows $c$ and $T_m$, computed at the grazing points, as functions of
$m$: we find $c = O(m^{-1})$, $T_m = O(m)$, therefore we expect the sequence
$\lbrace\xi_m \rbrace_{m \in \mathbb{N}}$ to converge to a finite value $\xi_*$ as $m \to \infty$.
These data show that, as the wavespeed tends
to zero, the growing number of spikes is distributed in a fixed interval $[0,\xi_*]$,
resembling therefore a stationary bump of width $\xi_*$.
This conclusion is also supported by an inspection of the distribution of spikes
$\{\xi_{i,m}\}_i$ as $m \to \infty$. Instead of looking directly at this distribution, we
plot the inverse inter-spike time $1/(T_{i+1}-T_i)$, as a function of $\xi_i$, for
the grazing point of $\tw{230}$. Laing and Chow~\cite{Laing:2001fc} call this quantity the
\emph{gain function}, and propose it as an emerging firing rate function for the
asynchronous bumps of the DIFM~\cite{Laing:2001fc}. Importantly, Laing and Chow show
that the gain function for asynchronous bumps resembles the one of a neural network
rate equation, but it is an emergent property of the DIFM. \cref{fig:graze_m_figs}(b)
shows that the gain function of Laing and Chow is in excellent agreement with the one
obtained for \tw{230} (the slowest wave we computed in the CIFM), confirming that,
from a macroscopic viewpoint, asynchronous bumps could be understood in the limit
$m\to \infty$ of \tw{m}.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{bumpAttractor}
\caption{(a): Mean instantaneous speed ($\bar c$ in \cref{eq:statistics}, purple dots) and interval
estimators ($[\bar c - \sigma_c, \bar c + \sigma_c]$ and
$[c_\textrm{min},c_\textrm{max}]$, dark and light purple shades, respectively)
in direct simulations of the DIFM,
superimposed on \tw{m}
branches of the CIFM (an inset of \cref{fig:c-beta-TW1-TW160}(b), which has been
reflected about the $c=0$ axis to signpost waves with negative speed). The bump
attractor is characterised by $\bar c \approx 0$, and speed fluctuations which
grow with $\beta$, while echoing the oscillatory instabilities of the branch (red
segments). (b): exemplary solutions in (a) displaying an initial advection,
followed by a bump attractor (1,2) or a stable wave (3).
(c): Histograms of the instantaneous speed in selected time intervals, indicated
by the lateral coloured bars in (b). We observe transitions through weakly unstable
waves (orange interval in (3), blue intervals in (1, 2)), corresponding to sharp
peaks in the histograms.
\label{fig:bumpAttractor}}
\end{figure}
We have further investigated the bump attractor state, in relation to the \tw{m}: the
analysis of the continuum model, in the region of parameter space where the bump
attractor is observed, predicts the coexistence of the trivial
attracting solution $v(x,t) \equiv I$, with arbitrarily slow, unstable waves, whose
spatial profile approximates the one of a bump. We simulated the DIFM with $n=5000$,
initialising the model from an unstable travelling wave of the CIFM, \tw{105}, and
estimated the instantaneous speed $c(t)$ of the numerical DIFM solution at $q$
time points $\{t_k \colon k\in \mathbb{N}_q\}$, using a level set of the synaptic
profile and finite differences, as follows:
\[
z(t) = \max \{ x \in \mathbb{S} : s(x,t) = 0.1 \}, \qquad
c_k = (z(t_k) - z(t_{k-1}))/(t_k - t_{k-1}),
\qquad
k \in \mathbb{N}_q.
\]
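For concreteness, the estimator can be sketched as follows, assuming the synaptic profile has been sampled on space and time grids; the synthetic data at the bottom is a placeholder used only to exercise the code.
\begin{verbatim}
import numpy as np

def instantaneous_speed(x, t, s, level=0.1):
    # z(t_k) = max{ x : s(x, t_k) = level }: rightmost grid crossing of
    # the level, refined by linear interpolation, then finite differences
    z = np.empty(len(t))
    for k in range(len(t)):
        d = s[:, k] - level
        i = np.nonzero(d[:-1] * d[1:] <= 0)[0][-1]    # last sign change
        z[k] = x[i] + (x[i + 1] - x[i]) * d[i] / (d[i] - d[i + 1])
    return np.diff(z) / np.diff(t)                    # the speeds c_k

# Synthetic check: a profile translating rigidly at speed 0.3
x, t = np.linspace(-3, 3, 601), np.linspace(0.0, 4.0, 161)
s = np.exp(-(x[:, None] - 0.3 * t[None, :]) ** 2)
c_k = instantaneous_speed(x, t, s)                    # approximately 0.3
\end{verbatim}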
A CIFM travelling wave solution corresponds to a constant $c$: when the DIFM solution
displays a wave for large $n$,
the sequence $\{c_k\}_k$ converges to a constant value, if one disregards small
oscillations due to the finite $n$, which vanish as $n \to \infty$. On the other
hand we expect that no differentiable function $c(t)$ exists for a bump attractor,
although some information
may be contained in the mean, $\overline{c}$, standard deviation, $\sigma_c$, and
extrema, $c_\textrm{min}$, $c_\textrm{max}$, of the \emph{deterministic} scalars $c_k$
\begin{equation}\label{eq:statistics}
\bar c = \frac{1}{q} \sum_{k \in \mathbb{N}_q} c_k,
\quad
\sigma^2_c = \frac{1}{q-1} \sum_{k \in \mathbb{N}_q} (c_k-\bar c )^2,
\quad
c_\textrm{min} = \min_{k \in \mathbb{N}_q} c_k,
\quad
c_\textrm{max} = \max_{k \in \mathbb{N}_q} c_k,
\end{equation}
which we computed for various values of $\beta$, and superimposed on the bifurcation
diagram of the CIFM model, in \cref{fig:bumpAttractor}(a): we plot $\bar c$ (purple
dots) and two interval estimators, $[\bar c - \sigma_c, \bar c + \sigma_c]$ (dark
purple shade) and $[c_\textrm{min}, c_\textrm{max}]$ (light purple shade); we recall
that the CIFM admits branches of waves with positive and negative speed, both plotted
in the figure. The bump attractor is characterised by a zero-mean speed, and
interval estimators which grow with $\beta$, indicating that the instantaneous speed
undergoes larger fluctuations as the bump meanders. We observe that the interval
estimators echo the onset of oscillations (red segments) in the \tw{m} branches, as
$\beta$ increases.
This behaviour is robust, for low and medium values of $\beta$, although the fine
details of the dynamics depend on initial conditions. \Cref{fig:bumpAttractor}(b)
shows 3 examples whose estimated speeds appear also in \cref{fig:bumpAttractor}(a).
The space-time plots display an initial advection, followed by a bump attractor, or a
stable travelling wave. To highlight the transitions, we computed histograms of $c_k$ in selected time intervals,
indicated by blue, orange, yellow, and purple bars in \cref{fig:bumpAttractor}(b).
The histograms provide numerical evidence of transitions near weakly unstable waves
(orange intervals/histograms in example 3, blue intervals/histograms in examples 1
and 2), which manifest themselves via sharp peaks.
\subsection{Composite waves}\label{subsec:compositeWaves}
In addition to the waves studied thus far, we found by direct simulation waves whose
firing functions are split into well-separated groups, that is, firing functions in
the same group are closer to each another than they are to those in other
groups, see~\cref{fig:composite_formation_solution}. We call these structures
\textit{composite waves}, as they may be
formed via the interaction of travelling waves with various numbers of spikes. As in
other non-smooth dynamical systems~\cite{granados2017period}, we expect that
these solutions have discontinuities that are rearranged with respect to a \tw{m}.
For illustrative purposes, we denote a composite wave with
$k\in\mathbb{N}$ groups by \tw{m_1} + \dots +
\tw{m_k}, where $\{m_i\}_{i=1}^k$ is a sequence of positive
integers specifying the number of spikes in each group. There are constraints
for the groups, dictated by dynamical considerations: for instance a \tw{1} +
\tw{3} cannot exist, because a \tw{1}, taken in isolation, is faster than a \tw{3}.
The construction of asymptotic profiles and computation of linear stability for
composite waves follow in the same way as defined in Sect.~\ref{sec:TWm} and
Sect.~\ref{sec:TWstability}.
In \cref{fig:composite_formation_solution}(a), we show a selection of
composite waves near the \tw{3} branch. Roughly speaking, the wave profile along
each depicted branch comprises a \tw{3} as its leading group, followed by two
additional spike groups that collectively form a compound satisfying the
travelling wave conditions (e.g.~branch 1 combines a \tw{3}, a \tw{2} and a
\tw{1}). The branches of composite waves are separate from each other and from
the previously computed \tw{m} branches in \cref{fig:c-beta-TW1-TW160}, however,
all branches possess a bifurcation structure similar to the one of the \tw{m}
discussed in the previous section. Moreover, we see that the magnitude of the speed
of the composite wave is bounded above by the magnitude of the speed of the
group at the leading edge of the wave (the slowest wave, \tw{3} in this case).
\begin{figure}
\centering
\includegraphics{compositeWavesSimulationsNew}
\caption{(a) Bifurcation diagram of selected composite waves. The red curve is a \tw{3}
branch, as computed in \cref{fig:bif-diag-TW3}. The blue curves are branches of
composite waves, featuring an approximate \tw{3} at the front of the wave. The
composite waves are slightly slower than \tw{3}. The diagram shows selected
profiles at the first oscillatory bifurcation points.
(b) Examples of composite waves obtained via collisions of multi-spike waves.
(c) Collisions between $m$-spike propagating structures and wandering bumps
generate composite waves (left) or bump repulsion (right), depending on initial
conditions. Simulations in panels (b)--(c) have a lattice spacing of $\Delta x =
2L/n = 0.01$.\label{fig:composite_formation_solution}}
\end{figure}
Direct numerical simulation highlights that composite waves can be formed from the
interaction of multi-spike waves as shown in the left panel
of \cref{fig:composite_formation_solution}(b). Here we choose an initial
condition with well separated $\tw{1}$, $\tw{2}$ and $\tw{3}$ profiles.
Initially, these separated structures travel with different speeds (\tw{1} being
the fastest and \tw{3} the slowest, in line with what was found in
\cref{fig:c-beta-TW1-TW160}(a)). After a transient, the waves come closer and
form a compound (the composite wave), with a common intermediate speed. The
dynamics of composite waves depend greatly on the initial conditions: in the
right panel of \cref{fig:composite_formation_solution}(b), we see that an
initial condition in which a \tw{1} lies between another \tw{1} and a \tw{3}
leads to the extinction of the intermediate wave resulting in a composite wave
with a total of $4$ spikes.
Composite waves also result from the collision between waves and wandering bumps
(\cref{fig:composite_formation_solution}(c), left panel). Here, we see a
transition of two bump states into a composite wave that is compounded with a
pre-existing \tw{5}. The interaction with the \tw{5} causes the left-most bump
to visit the branches of travelling wave solutions whereupon the combined state
settles on a stable \tw{5} + \tw{9} branch. This process is repeated for the
right-most bump, giving rise to an overall \tw{5} + \tw{9} + \tw{12} wave.
In the right panel of the
\cref{fig:composite_formation_solution}(c), we see that the same kind of collision
can instead result in the wave packet transitioning to a wandering bump itself,
highlighting the dependence of the formation of composite waves on initial
conditions. In this scenario, the bump state does not visit a stable
travelling wave branch and so only transiently adopts a weakly unstable wave
profile before returning to a bump attractor state.
\section{Conclusions} \label{sec:conclusions}
We have provided evidence that the relationship between bump attractors and travelling
waves in a classical network of excitable, leaky integrate-and-fire neurons bears
strong similarities to the one between complex spatiotemporal patterns and waves at
the onset of pipe turbulence.
We made analytical and numerical progress in the
construction and stability analysis of travelling waves with a large number of
localised spikes, and gained access to their intricate bifurcation structure. This
step was essential, because such waves advect, at low speed, localised patterns that
resemble the bump attractor core. It should be noted that the waves we computed are
only a subset of the ones supported by the model.
As we completed the present paper, a recent publication~\cite{laing2020moving}
reported the existence of waves with vanishingly small speed, and discontinuous
profiles, in networks of theta neurons, which can be cast as spiking networks with a
polynomial ODE of quadratic type. A natural question
arises as to whether the fluid-dynamical analogy applies in that and other network
models. The level-set approach used in the present paper was particularly effective
because one can
define $m$-spike waves starting from mild solutions to the formal evolution equation
\cref{eq:contMod}, and derive a relatively simple expression for the wave
profile~\cref{eq:nuXi}. While this approach may be harder to carry out in more
detailed spiking models, the general idea of a relationship between localised waves
and bumps in spiking networks could be investigated, by direct simulations, in more
realistic networks (spiking or not).
An important open question concerns the definition of~\cref{eq:contMod} and, more
generally, of spatially-continuous spiking networks, as dynamical systems posed on
function spaces. This problem has been circumvented here by defining a suitable class
of solutions, introducing the voltage mapping, and then providing proofs of its
relevance to the construction and stability of multiple-spike waves. We believe that
a full dynamical-systems characterisation of similar models will be a key ingredient to
uncover further links between localised waves and bumps in complex,
spatially-extended threshold networks.
\section*{Acknowledgments}
We are grateful to Stephen Coombes,
Predrag Cvitanovi\'c,
Gregory Faye,
Joel Feinstein,
John Gibson,
Joost Hulshof, and Edgar Knobloch
for insightful discussions.
\section{Introduction}
Mesh patterns were introduced by Br\"and\'en and Claesson in \cite{branden claesson} to generalize many existing varieties of permutation patterns, including classical patterns (brought to prominence in \cite{simion schmidt}), vincular patterns (see \cite{babson steingrimsson, steingrimsson}), bivincular patterns (introduced in \cite{bousquet-melou claesson dukes kitaev}), Bruhat-restricted patterns (\cite{woo yong}), and certain cases of barred patterns (introduced in \cite{west-thesis}). There has also subsequently been an additional generalization by \'Ulfarsson, to marked mesh patterns \cite{ulfarsson}.
A mesh pattern (defined precisely in Section~\ref{section:flavors}) involves both a classical permutation and a region in the plane known as the ``mesh.'' In certain cases, the information contained in this mesh turns out to be unnecessary. For example, the mesh pattern
$$
\left(123,\{(1,1)\}\right) = \begin{minipage}{.86in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (1,1) rectangle (2,2);
\foreach \x in {1,2,3} {\draw (\x,0) -- (\x,4); \draw (0,\x) -- (4,\x); \fill[black] (\x,\x) circle (5pt);}
\end{tikzpicture}\end{minipage}
$$
is avoided (respectively, contained) by exactly the same permutations as those avoiding (respectively, containing) the classical pattern $123$.
In this paper we characterize all mesh patterns in which the mesh is superfluous in this way, answering a question of Kitaev. This result, Theorem~\ref{thm:main}, states that a mesh pattern has this property if and only if it has no configurations of the form depicted in Figure~\ref{fig:enclosed}.
The proof of the main result is given in Section~\ref{section:proof}, and the paper concludes with some related enumerative results for extremal cases of superfluous meshes.
\section{Classical and mesh patterns}\label{section:flavors}
We will use the word \emph{permutation} to refer to an automorphism of the set $[k] = \{1, \ldots, k\}$ for some positive integer $k$, and will say that two sequences of real numbers are \emph{order isomorphic} if they are in the same relative order, denoted ``$\approx$.'' We let $\mf{S}_k$ denote the set of all permutations of $[k]$.
\begin{ex}
$\sqrt{5} \ -\!1 \ 0 \ \approx \ 3\ 1\ 2$.
\end{ex}
A permutation $\pi \in \mf{S}_k$ can be written in one-line notation as the word $\pi(1) \cdots \pi(k)$. Classical pattern containment and avoidance are defined in terms of this notation.
\begin{defn}
Given permutations $\pi \in \mf{S}_k$ and $\sigma \in \mf{S}_n$, where $k \le n$, we say that $\sigma$ \emph{contains} a $\pi$-pattern if there exist indices $i_1 < \cdots < i_k$ so that
$$\sigma(i_1) \sigma(i_2) \cdots \sigma(i_k) \approx \pi.$$
If there are no such indices, then $\sigma$ \emph{avoids} the pattern $\pi$.
\end{defn}
\begin{ex}\
\begin{itemize}
\item The permutation $42135$ contains the pattern $213$ in five ways:
$$425 \approx 415 \approx 435 \approx 213 \approx 215.$$
\item The permutation $42315$ avoids the pattern $132$.
\end{itemize}
\end{ex}
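This definition can be rendered directly as a brute-force check; the sketch below (in Python) is illustrative, not an efficient algorithm.
\begin{verbatim}
# sigma contains pi if some length-k subsequence of sigma is order
# isomorphic to pi, both given in one-line notation
from itertools import combinations

def order_isomorphic(u, v):
    return all((u[a] < u[b]) == (v[a] < v[b])
               for a in range(len(u)) for b in range(len(u)))

def contains(sigma, pi):
    return any(order_isomorphic(sub, pi)
               for sub in combinations(sigma, len(pi)))

print(contains((4, 2, 1, 3, 5), (2, 1, 3)))   # True: e.g. 4 2 5 ~ 2 1 3
print(contains((4, 2, 3, 1, 5), (1, 3, 2)))   # False: 42315 avoids 132
\end{verbatim}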
We can also represent a permutation $\pi \in \mf{S}_k$ graphically, as
$$G(\pi) = \{(i,\pi(i)) : 1 \le i \le k\} \subseteq [1,k] \times [1,k].$$
This will be useful when discussing mesh patterns below.
\begin{ex}\label{ex:graph of 42135}
$$G(42135) =
\begin{minipage}{1.25in}\begin{tikzpicture}[scale=.5]
\foreach \x in {1,2,3,4,5} {\draw (0,\x) -- (6,\x); \draw (\x,0) -- (\x,6);}
\foreach \x in {(1,4),(2,2),(3,1),(4,3),(5,5)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\end{minipage}.$$
\end{ex}
To say that $\sigma$ contains a $\pi$-pattern means that $G(\sigma)$ contains at least one copy of the graph $G(\pi)$.
\begin{ex}\
\begin{itemize}
\item The graph $G(42135)$ depicted in Example~\ref{ex:graph of 42135} contains
$$
G(213) =
\begin{minipage}{.85in}\begin{tikzpicture}[scale=.5]
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,2),(2,1),(3,3)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\end{minipage}
$$
in five ways. The copy of $G(213)$ that corresponds to the occurrence $425$ is
$$
\begin{minipage}{1.25in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {1,2,3,4,5} {\draw (0,\x) -- (6,\x); \draw (\x,0) -- (\x,6);}
\foreach \x in {(1,4),(2,2),(3,1),(4,3),(5,5)} {\fill[black] \x circle (5pt);}
\foreach \x in {(1,4),(2,2),(5,5)} {\draw \x circle (10pt);}
\end{tikzpicture}\end{minipage}.$$
\item The graph depicted in Example~\ref{ex:graph of 42135} does not contain any copies of
$$
G(132) =
\begin{minipage}{.85in}\begin{tikzpicture}[scale=.5]
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,1),(2,3),(3,2)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\end{minipage}.
$$
\end{itemize}
\end{ex}
In \cite{branden claesson}, Br\"and\'en and Claesson introduced a new type of permutation patterns, called ``mesh patterns.'' These include, as special cases, all classical, (bi)vincular, and Bruhat-restricted patterns, as well as certain barred patterns.
\begin{defn}
A \emph{mesh pattern} is an ordered pair $(\pi,R)$, where $\pi \in \mf{S}_k$ is a permutation, and $R$ is a subset of the $(k+1)^2$ unit squares in $[0,k+1]\times[0,k+1]$, indexed by their lower-left corners. The set $R$ will be called the \emph{mesh} of the mesh pattern. Mesh patterns are depicted by drawing $G(\pi)$ and shading all squares in the mesh $R$.
\end{defn}
\begin{ex}\label{ex:mesh 213}
$$\big(213,\{(0,3),(1,2),(1,3),(3,0)\}\big) =
\begin{minipage}{.85in}\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,3),(1,2),(1,3),(3,0)} {\fill[black!20] \x rectangle ++ (1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,2),(2,1),(3,3)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\end{minipage}.
$$
\end{ex}
\begin{defn}
A permutation $\sigma$ \emph{contains} a mesh pattern $(\pi,R)$ if $G(\sigma)$ contains a copy of $G(\pi)$ (that is, there is an occurrence of $\pi$ in $\sigma$) such that the regions of $G(\pi)$ which get shaded in the mesh correspond to regions in the graph of $\sigma$ that contain no points of $G(\sigma)$.
\end{defn}
\begin{ex}
The permutation $42135$ contains the pattern $213$ in five ways. This same permutation contains the mesh pattern $(213,\{(0,3),(1,2),(1,3),(3,0)\})$ in only four ways, depicted below. We use thick lines in this example to clarify how the four elements in the mesh appear in each picture. For example, $(1,2) \in R$ describes the region whose horizontal coordinates are between the first and second symbols in the $213$-pattern, and whose vertical coordinates are between the first and third symbols in the $213$-pattern.
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,5),(1,4),(1,5),(5,0),(5,1)}{\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3,4,5} {\draw (0,\x) -- (6,\x); \draw (\x,0) -- (\x,6);}
\foreach \x in {(1,4),(2,2),(3,1),(4,3),(5,5)} {\fill[black] \x circle (5pt);}
\foreach \x in {(1,4),(2,2),(5,5)}{\draw \x circle (10pt);}
\foreach \x in {1,2} {\draw[ultra thick] (\x,4) -- (\x,6);}
\draw[ultra thick] (1,4) -- (2,4); \draw[ultra thick] (0,5) -- (2,5);
\draw[ultra thick] (5,0) -- (5,2) -- (6,2);
\end{tikzpicture}\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (1,4) rectangle (3,5);
\fill[black!20] (0,5) rectangle (3,6);
\fill[black!20] (5,0) rectangle (6,1);
\foreach \x in {1,2,3,4,5} {\draw (0,\x) -- (6,\x); \draw (\x,0) -- (\x,6);}
\foreach \x in {(1,4),(2,2),(3,1),(4,3),(5,5)} {\fill[black] \x circle (5pt);}
\foreach \x in {(1,4),(3,1),(5,5)}{\draw \x circle (10pt);}
\draw[ultra thick] (0,5) -- (1,5) -- (1,6);
\draw[ultra thick] (1,5) -- (3,5) -- (3,6);
\draw[ultra thick] (1,5) -- (1,4) -- (3,4) -- (3,5);
\draw[ultra thick] (5,0) -- (5,1) -- (6,1);
\end{tikzpicture}\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (1,4) rectangle (4,5);
\fill[black!20] (0,5) rectangle (4,6);
\fill[black!20] (5,0) rectangle (6,3);
\foreach \x in {1,2,3,4,5} {\draw (0,\x) -- (6,\x); \draw (\x,0) -- (\x,6);}
\foreach \x in {(1,4),(2,2),(3,1),(4,3),(5,5)} {\fill[black] \x circle (5pt);}
\foreach \x in {(1,4),(4,3),(5,5)}{\draw \x circle (10pt);}
\draw[ultra thick] (0,5) -- (1,5) -- (1,6);
\draw[ultra thick] (1,5) -- (4,5) -- (4,6);
\draw[ultra thick] (1,5) -- (1,4) -- (4,4) -- (4,5);
\draw[ultra thick] (5,0) -- (5,3) -- (6,3);
\end{tikzpicture}\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,5) rectangle (2,6);
\fill[black!20] (2,2) rectangle (3,6);
\fill[black!20] (5,0) rectangle (6,1);
\foreach \x in {1,2,3,4,5} {\draw (0,\x) -- (6,\x); \draw (\x,0) -- (\x,6);}
\foreach \x in {(1,4),(2,2),(3,1),(4,3),(5,5)} {\fill[black] \x circle (5pt);}
\foreach \x in {(2,2), (3,1), (5,5)}{\draw \x circle (10pt);}
\draw[ultra thick] (0,5) -- (2,5) -- (2,6);
\draw[ultra thick] (2,5) -- (3,5) -- (3,6);
\draw[ultra thick] (2,5) -- (2,2) -- (3,2) -- (3,5);
\draw[ultra thick] (5,0) -- (5,1) -- (6,1);
\end{tikzpicture}$$
The remaining occurrence of $213$ in $42135$ does not obey the restrictions of the mesh because the shaded region includes an element of $G(42135)$, as shown in the following diagram.
$$\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,3) rectangle (2,6);
\fill[black!20] (2,2) rectangle (3,6);
\fill[black!20] (4,0) rectangle (6,1);
\foreach \x in {1,2,3,4,5} {\draw (0,\x) -- (6,\x); \draw (\x,0) -- (\x,6);}
\foreach \x in {(1,4),(2,2),(3,1),(4,3),(5,5)} {\fill[black] \x circle (5pt);}
\foreach \x in {(2,2), (3,1), (4,3)}{\draw \x circle (10pt);}
\draw[ultra thick] (0,3) -- (2,3) -- (2,6);
\draw[ultra thick] (2,3) -- (3,3) -- (3,6);
\draw[ultra thick] (2,3) -- (2,2) -- (3,2) -- (3,3);
\draw[ultra thick] (4,0) -- (4,1) -- (6,1);
\end{tikzpicture}$$
\end{ex}
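The definition translates into the following brute-force sketch, which augments the classical check with the requirement that every shaded region be free of points of $G(\sigma)$.
\begin{verbatim}
# An occurrence of pi in sigma respects the mesh R if no point of
# G(sigma) falls in a region shaded by R; positions are 0-indexed here
from itertools import combinations

def contains_mesh(sigma, pi, R):
    n, k = len(sigma), len(pi)
    for idx in combinations(range(n), k):
        sub = [sigma[i] for i in idx]
        if not all((sub[a] < sub[b]) == (pi[a] < pi[b])
                   for a in range(k) for b in range(k)):
            continue                            # not an occurrence of pi
        p = [-1] + list(idx) + [n]              # column boundaries
        q = [0] + sorted(sub) + [n + 1]         # row (value) boundaries
        if all(not (p[a] < x < p[a + 1] and q[b] < sigma[x] < q[b + 1])
               for (a, b) in R for x in range(n)):
            return True                         # all shaded regions empty
    return False

R = {(0, 3), (1, 2), (1, 3), (3, 0)}
print(contains_mesh((4, 2, 1, 3, 5), (2, 1, 3), R))   # True (four ways)
\end{verbatim}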
In this paper, we are concerned with which permutations contain or avoid a given pattern. To this end, we define two sets.
\begin{defn}
For a (classical or mesh) pattern $p$, let $\textup{Av}(p)$ be the set of permutations that avoid $p$, and let $\textup{Cont}(p)$ be the set of permutations that contain $p$. Similarly, if $P$ is any collection of patterns, then
\begin{eqnarray*}
\textup{Av}(P) &=& \bigcap_{p \in P} \textup{Av}(p), \text{ and}\\
\textup{Cont}(P) &=& \bigcup_{p \in P} \textup{Cont}(p).
\end{eqnarray*}
\end{defn}
The following fact is obvious for any set $P$ of patterns:
\begin{equation}\label{eqn:av and cont}
\bigcup_n \mf{S}_n = \textup{Av}(P) \sqcup \textup{Cont}(P).
\end{equation}
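As a quick illustration of these sets, $\textup{Av}(p) \cap \mf{S}_n$ can be computed by brute force; it is well known that any single classical pattern of length three is avoided by a Catalan number of permutations.
\begin{verbatim}
# Count the permutations of S_4 avoiding 213; the answer, 14, is the
# Catalan number C_4
from itertools import combinations, permutations

def contains(sigma, pi):
    k = len(pi)
    return any(all((sub[a] < sub[b]) == (pi[a] < pi[b])
                   for a in range(k) for b in range(k))
               for sub in combinations(sigma, k))

print(sum(1 for s in permutations(range(1, 5))
          if not contains(s, (2, 1, 3))))      # 14
\end{verbatim}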
\section{Main results}
In a personal communication, Kitaev asked which mesh patterns could be equivalently described by classical patterns. That is, we want to know for which mesh patterns $(\pi,R)$ there exists a set $S(\pi,R)$ of classical patterns so that
$$\textup{Av}((\pi,R)) = \textup{Av}(S(\pi,R)).$$
By equation~\eqref{eqn:av and cont}, this property can be equivalently stated as
$$\textup{Cont}((\pi,R)) = \textup{Cont}(S(\pi,R)).$$
Throughout this paper, we will assume that the set $S(\pi,R)$, when it exists, is of minimal cardinality. For example, if $\sigma$ is contained in $\tau$, then $\textup{Av}(\{\sigma,\tau\}) = \textup{Av}(\{\sigma\})$. Thus it suffices to consider just $\{\sigma\}$ when discussing avoidance and containment of $\{\sigma,\tau\}$.
The possibility of different types of patterns characterizing the same sets of permutations has arisen before, and was the subject of a recent paper comparing barred and vincular avoidance \cite{coincidental}. We will expand upon the language of that paper here.
\begin{defn}
Suppose that $P$ and $P'$ are two sets of permutation patterns. If $\textup{Av}(P) = \textup{Av}(P')$, then $P$ and $P'$ are \emph{coincident}. If $P = \{p\}$, then we may say that $p$ and $P'$ are \emph{coincident}.
\end{defn}
Recall that $P$ and $P'$ are Wilf-equivalent if $|\textup{Av}(P) \cap \mf{S}_n| = |\textup{Av}(P') \cap \mf{S}_n|$ for all $n$. Thus coincidence is stronger than Wilf-equivalence because coincidence requires that the sets $\textup{Av}(P)$ and $\textup{Av}(P')$ themselves coincide.
Observe that the complement to Kitaev's question is entirely straightforward to answer.
\begin{lem}
Every classical permutation $\pi$ is coincident to the mesh pattern $(\pi,\emptyset)$.
\end{lem}
In order to answer Kitaev's question, we will need to introduce an additional piece of terminology.
\begin{defn}\label{defn:enclosed}
Let $(\pi,R)$ be a mesh pattern. An \emph{enclosed diagonal} in $(\pi,R)$ is a triple $((i,j),\varepsilon, h)$ where $\varepsilon \in \{-1,1\}$, $h \ge 1$, and
\begin{itemize}
\item $\{(i+x,j+x\varepsilon) : 1 \le x < h\} \subseteq G(\pi)$,
\item $(i+x,j+x\varepsilon) \not\in G(\pi)$ for $x \in \{0,h\}$, and
\item $\{(i+x,j+x\varepsilon) : 0 \le x < h\} \subseteq R$.
\end{itemize}
\end{defn}
In the graph of a mesh pattern, the enclosed diagonals have one of the forms depicted in Figure~\ref{fig:enclosed}. The terminology of Definition~\ref{defn:enclosed} refers to the fact that the diagonal of consecutive elements of $G(\pi)$ is entirely enclosed by elements of the mesh.
\begin{figure}[H]
$$
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,3,4} {\fill[black!20] (\x,\x) rectangle ++(1,1); \draw(\x,\x) rectangle ++(1,1);}
\foreach \x in {1,2,3,4} {\fill[black] (\x,\x) circle (5pt);}
\foreach \x in {2.25,2.5,2.75} {\fill[black] (\x,\x) circle (1.5pt);}
\end{tikzpicture}
\hspace{1in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,3,4} {\fill[black!20] (\x,-\x) rectangle ++(1,-1); \draw(\x,-\x) rectangle ++(1,-1);}
\foreach \x in {1,2,3,4} {\fill[black] (\x,-\x) circle (5pt);}
\foreach \x in {2.25,2.5,2.75} {\fill[black] (\x,-\x) circle (1.5pt);}
\end{tikzpicture}$$
\caption{Enclosed diagonals in a mesh pattern $(\pi,R)$. A point is in $G(\pi)$ if and only if it is marked $\bullet$.}\label{fig:enclosed}
\end{figure}
\begin{ex}\label{ex:enclosed diagonal}
The mesh pattern $(231, \{(1,1),(2,0),(3,1)\})$ depicted by
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {(1,1),(2,0),(3,1)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (\x,0) -- (\x,4); \draw (0,\x) -- (4,\x);}
\foreach \x in {(1,2),(2,3),(3,1)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}$$
has exactly one enclosed diagonal. It is $\big((2,0), 1, 2\big)$.
\end{ex}
Note that when $h=1$ in Definition~\ref{defn:enclosed}, the value of $\varepsilon$ is irrelevant. Also, in this case, the enclosed diagonal consists of a single square of the mesh. This means that there is an $(i,j) \in R$ with $\{\pi(i),\pi(i+1)\} \cap \{j,j+1\} = \emptyset$. In other words, the shaded square with lower-left corner $(i,j)$ must look like the square drawn below, where again a point is in $G(\pi)$ if and only if it is marked $\bullet$.
\begin{equation}\label{eqn:empty corners}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1,1);
\draw (0,0) rectangle (1,1);
\end{tikzpicture}
\end{equation}
To put this another way, for a square $(i,j) \in R$ to not, itself, be an enclosed diagonal, it must have one of the following forms.
$$
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1,1);
\foreach \x in {0,1} {\draw (0,\x) -- (1,\x); \draw (\x,0) -- (\x,1);}
\fill[black] (0,0) circle (5pt);
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1,1);
\foreach \x in {0,1} {\draw (0,\x) -- (1,\x); \draw (\x,0) -- (\x,1);}
\fill[black] (0,1) circle (5pt);
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1,1);
\foreach \x in {0,1} {\draw (0,\x) -- (1,\x); \draw (\x,0) -- (\x,1);}
\fill[black] (1,0) circle (5pt);
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1,1);
\foreach \x in {0,1} {\draw (0,\x) -- (1,\x); \draw (\x,0) -- (\x,1);}
\fill[black] (1,1) circle (5pt);
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1,1);
\foreach \x in {0,1} {\draw (0,\x) -- (1,\x); \draw (\x,0) -- (\x,1);}
\fill[black] (0,0) circle (5pt);
\fill[black] (1,1) circle (5pt);
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1,1);
\foreach \x in {0,1} {\draw (0,\x) -- (1,\x); \draw (\x,0) -- (\x,1);}
\fill[black] (0,1) circle (5pt);
\fill[black] (1,0) circle (5pt);
\end{tikzpicture}
$$
These are the only possibilities because $\pi$ is a permutation, and so $G(\pi)$ contains exactly one element along each column and along each row.
We are now able to state the main result of this paper.
\begin{thm}\label{thm:main}
A mesh pattern $(\pi,R)$ is coincident to a set $S(\pi,R)$ of classical patterns if and only if $(\pi,R)$ has no enclosed diagonals. Moreover, if $(\pi,R)$ satisfies this requirement, then in fact $S(\pi,R) = \{\pi\}$, and so $(\pi,R)$ is coincident to $\pi$.
\end{thm}
\begin{ex}\
\begin{itemize}
\item The mesh pattern $(231, \{(1,1)\})$ depicted by
$$\begin{tikzpicture}[scale=.5]
\fill[black!20] (1,1) rectangle (2,2);
\foreach \x in {1,2,3} {\draw (\x,0) -- (\x,4); \draw (0,\x) -- (4,\x);}
\foreach \x in {(1,2),(2,3),(3,1)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}$$
is coincident to the permutation $231$.
\item The mesh pattern $(231, \{(1,1),(3,2)\})$ depicted by
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {(1,1),(3,2)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (\x,0) -- (\x,4); \draw (0,\x) -- (4,\x);}
\foreach \x in {(1,2),(2,3),(3,1)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}$$
is not coincident to any set of classical patterns because $\big((3,2), 1, 1\big)$ is an enclosed diagonal.
\item The mesh pattern $(231, \{(1,1),(2,0),(3,1)\})$ of Example~\ref{ex:enclosed diagonal} is not coincident to any set of classical patterns because of its enclosed diagonal.
\end{itemize}
\end{ex}
In light of the last statement of Theorem~\ref{thm:main}, we introduce the following terminology.
\begin{defn}
If a mesh pattern $(\pi,R)$ is coincident to the classical pattern $\pi$, then the mesh $R$ is \emph{superfluous}.
\end{defn}
Thus Theorem~\ref{thm:main} could be rephrased as follows.
{\renewcommand{\thethm}{\ref{thm:main}$^{\boldsymbol{\prime}}$}
\begin{thm}
A mesh pattern has a superfluous mesh if and only if it has no enclosed diagonals.
\end{thm}
\addtocounter{thm}{-1}
}
Related to one direction of Theorem~\ref{thm:main} is the Shading Lemma of \cite{hilmarsson jonsdottir sigurdardottir vidarsdottir ulfarsson}, discovered independently.
One application of Theorem~\ref{thm:main} recovers a result about so-called ``boxed'' mesh patterns from \cite{avgustinovich kitaev valyuzhenich}.
\begin{cor}[{{\cite[Proposition 1]{avgustinovich kitaev valyuzhenich}}}]
The only permutations of $k$ letters for which $[1,k-1]\times[1,k-1]$ is a superfluous mesh are
$1$, $12$, $21$, $132$, $213$, $231$, and $312$.
\end{cor}
\begin{proof}
Suppose that $\pi \in \mf{S}_k$ is a permutation for which $[1,k-1]\times[1,k-1]$ is a superfluous mesh. Given \eqref{eqn:empty corners}, and the fact that $\pi$ is a bijection on $[k]$, we can conclude that $k \le 4$.
If $k=4$, then it is easy to see that there will always be at least one square in the mesh that looks like \eqref{eqn:empty corners}, such as the darkly shaded square indicated below, yielding an enclosed diagonal.
$$\begin{tikzpicture}[scale=.5]
\fill[black!20] (1,1) rectangle (4,4);
\fill[black!60] (3,2) rectangle (4,3);
\foreach \x in {1,2,3,4} {\draw (0,\x) -- (5,\x); \draw (\x,0) -- (\x,5);}
\foreach \x in {(1,3),(2,2),(3,4),(4,1)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}$$
The permutations with $k\le 3$ can each be checked by hand, and exactly two of these have an enclosed diagonal, marked below with darkly shaded squares.
$$\begin{tikzpicture}[scale=.5]
\fill[black!20] (1,1) rectangle (3,3);
\foreach \x in {(1,2),(2,1)} {\fill[black!60] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4); \fill[black] (\x,\x) circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (1,1) rectangle (3,3);
\foreach \x in {(1,1),(2,2)} {\fill[black!60] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4); \fill[black] (\x,4-\x) circle (5pt);}
\end{tikzpicture}$$
\end{proof}
\section{Proof of Theorem~\ref{thm:main}}\label{section:proof}
The main result of this paper can be proved in three steps, which we break into propositions below. The first of these will prove the final sentence in the statement of the theorem.
\begin{prop}
If a mesh pattern $(\pi,R)$ is coincident to a set $S(\pi,R)$ of classical patterns, then $S(\pi,R) = \{\pi\}$.
\end{prop}
\begin{proof}
Because $\pi$ is the unique element of $\textup{Cont}((\pi,R))$ using the fewest letters, we must have $\pi \in S(\pi,R)$.
Suppose there is some other element $\sigma \in S(\pi,R) \setminus \{\pi\}$. By the minimality of $|S(\pi,R)|$, we can assume that $\sigma$ does not contain a $\pi$-pattern. However, because $\sigma \in \textup{Cont}(S(\pi,R)) = \textup{Cont}((\pi,R))$, we must have that $\pi$ is contained in $\sigma$ after all, obtaining a contradiction.
\end{proof}
We now address the biconditional statement of Theorem~\ref{thm:main}. The first direction of this is below.
\begin{prop}
If a mesh pattern $(\pi,R)$ has an enclosed diagonal, then its mesh $R$ is not superfluous.
\end{prop}
\begin{proof}
Suppose that the mesh pattern $(\pi,R)$ has an enclosed diagonal $((i,j), \varepsilon, h)$. Without loss of generality, let $\varepsilon = 1$. Then
\begin{eqnarray*}
&\{(i+x,j+x) : 1 \le x < h\} \subseteq G(\pi),&\\
&(i+x,j+x) \not\in G(\pi) \text{ for } x \in \{0,h\}, \text{ and}&\\
&\{(i+x,j+x) : 0 \le x < h\} \subseteq R.&\\
\end{eqnarray*}
Consider the permutation $\sigma$ that is order isomorphic to
$$\pi(1) \ \cdots \ \pi(i) \ (j + \textstyle{\frac{1}{2}}) \ \pi(i+1) \ \pi(i+2) \ \cdots,$$
formed by inserting $j+\frac{1}{2}$ between the $i$th and $(i+1)$st symbols in $\pi$.
By construction, $\sigma \in \textup{Cont}(\pi)$. In fact, $\sigma$ contains $h$ occurrences of $\pi$ --- obtained by choosing which $h-1$ of $\{\sigma(i+1), \sigma(i+2),\ldots, \sigma(i+h)\}$ should play the roles of $\{\pi(i+1),\pi(i+2),\ldots,\pi(i+h-1)\}$ in the pattern. However, none of these occurrences of $\pi$ can be drawn so as to avoid all of the $h$ shaded boxes in the enclosed diagonal of $(\pi,R)$. Thus $\sigma \not\in \textup{Cont}((\pi,R))$.
Hence $\textup{Cont}((\pi,R)) \neq \textup{Cont}(\pi)$, and so $(\pi,R)$ and $\pi$ are not coincident. In other words, the mesh $R$ is not superfluous.
\end{proof}
There is now one piece remaining in the proof of Theorem~\ref{thm:main}.
\begin{prop}
If a mesh pattern $(\pi,R)$ does not have a superfluous mesh, then it has an enclosed diagonal.
\end{prop}
\begin{proof}
Suppose that the mesh pattern $(\pi,R)$ is not coincident to $\pi$. Certainly $\textup{Cont}((\pi,R))\subseteq \textup{Cont}(\pi)$, so consider some $\sigma \in \textup{Cont}(\pi) \setminus \textup{Cont}((\pi,R))$. By construction, this $\sigma$ has at least one occurrence of the pattern $\pi$, and no occurrence of $\pi$ in $\sigma$ avoids all of the shaded regions from the mesh $R$.
Given an occurrence $\langle \pi \rangle$ of $\pi$ in $\sigma$, let $(\langle \pi \rangle, R)_{\sigma}$ denote the number of elements of $G(\sigma)$ that land in the shaded regions from the mesh $R$ relative to this $\langle \pi \rangle$.
Now choose an occurrence $\langle \pi \rangle$ for which $(\langle \pi \rangle, R)_{\sigma}$ is minimal, and let $(i,j) \in R$ correspond to a shaded region that contains some $(z,\sigma(z)) \in G(\sigma)$. Note that because $(z,\sigma(z))$ lies in a shaded region of this occurrence of the mesh pattern, we necessarily have that $\sigma(z)$ itself is not part of $\langle \pi \rangle$.
If this $(i,j) \in R$ is itself in an enclosed diagonal in $(\pi,R)$, then we are done.
Otherwise, without loss of generality, there exists $h \ge 1$ such that
\begin{eqnarray}
\nonumber &\{(i+x,j+x) : 0 \le x < h\} \subseteq R,&\\
\nonumber &(i+h,j+h) \not\in R, \text{ and}&\\
\label{eqn:consecutive values} &\pi(i+x) = j+x \text{ for all } 1 \le x \le h.&
\end{eqnarray}
For all $1 \le x \le h$, let $\sigma(l_x)$ represent $\pi(i+x)$ in the occurrence $\langle \pi \rangle$, and note from equation~\eqref{eqn:consecutive values} that $\pi(i+1),\pi(i+2),\ldots,\pi(i+h)$ are consecutive increasing values in the pattern. This means that $\sigma(l_1),\sigma(l_2),\ldots,\sigma(l_h)$ are increasing values in $\sigma$, and that no values in the occurrence $\langle \pi \rangle$ are strictly between $\sigma(l_x)$ and $\sigma(l_{x+1})$ for any $1 \le x < h$.
Now define $\langle \pi \rangle'$ from $\langle \pi \rangle$ by replacing $\{\sigma(l_1),\ldots,\sigma(l_h)\}$ with $\{\sigma(z),\sigma(l_1),\ldots,\sigma(l_{h-1})\}$, and not changing any other elements of the pattern. By construction, this $\langle \pi \rangle'$ is another occurrence of $\pi$ in $\sigma$. Also, in the diagram $G(\sigma)$, the shaded regions from the mesh change as follows, where a point in $G(\sigma)$ is filled in if and only if it is part of the pattern occurrence, and a region is shaded if and only if it is shaded by the mesh $R$.
$$
\begin{minipage}{1.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (2,3);
\fill[black!20] (2,3) rectangle (4,5);
\fill[black!20] (4,5) rectangle (6,6);
\draw (6,6) rectangle (7,7);
\draw (4,5) rectangle (6,6);
\draw (2,3) rectangle (4,5);
\draw (0,0) rectangle (2,3);
\foreach \x in {(2,3),(4,5),(6,6)} {\fill[black] \x circle (5pt);}
\foreach \x in {(1.5,1)} {\fill[white] \x circle (5pt); \draw \x circle (5pt);}
\end{tikzpicture}
\end{minipage}
\Longrightarrow
\begin{minipage}{1.5in}
\begin{tikzpicture}[scale=.5]
\fill[black!20] (0,0) rectangle (1.5,1);
\fill[black!20] (1.5,1) rectangle (2,3);
\fill[black!20] (2,3) rectangle (4,5);
\draw (4,5) rectangle (7,7);
\draw (2,3) rectangle (4,5);
\draw (1.5,1) rectangle (2,3);
\draw (0,0) rectangle (1.5,1);
\foreach \x in {(1.5,1),(2,3),(4,5)} {\fill[black] \x circle (5pt);}
\draw[dotted] (1.5,0) -- (2,0) -- (2,1); \draw[dotted] (0,1) -- (0,3) -- (1.5,3);
\draw[dotted] (4,6) -- (7,6);\draw[dotted] (6,5) -- (6,7);
\foreach \x in {(6,6)} {\fill[white] \x circle (5pt); \draw \x circle (5pt);}
\end{tikzpicture}
\end{minipage}
$$
This yields
$$(\langle \pi \rangle', R)_{\sigma} < (\langle \pi \rangle, R)_{\sigma},$$
which contradicts the minimality assumption of $(\langle \pi \rangle, R)_{\sigma}$. Thus $(i,j)$ must have been part of an enclosed diagonal.
\end{proof}
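To make the statistic $(\langle \pi \rangle, R)_{\sigma}$ used in the proof above concrete, the following Python sketch (an added illustration, not part of the original argument) computes it directly; it assumes permutations are represented as lists of values and the mesh $R$ as a set of pairs $(i,j)$.
\begin{verbatim}
from itertools import combinations

def occurrences(perm, sigma):
    # Index tuples at which the classical pattern perm occurs in sigma
    # (both given as value sequences).
    k = len(perm)
    for idxs in combinations(range(len(sigma)), k):
        vals = [sigma[i] for i in idxs]
        if all((vals[a] < vals[b]) == (perm[a] < perm[b])
               for a in range(k) for b in range(a + 1, k)):
            yield idxs

def shaded_count(perm, R, sigma, idxs):
    # The statistic (<pi>, R)_sigma: the number of points of G(sigma)
    # lying in shaded squares of the mesh R relative to the occurrence.
    k = len(perm)
    cols = [-1] + list(idxs) + [len(sigma)]
    rows = [0] + sorted(sigma[i] for i in idxs) + [len(sigma) + 1]
    count = 0
    for z, v in enumerate(sigma):
        if z in idxs:
            continue
        i = max(a for a in range(k + 1) if cols[a] < z)
        j = max(b for b in range(k + 1) if rows[b] < v)
        count += (i, j) in R
    return count

# Example: for sigma = 132 and the mesh pattern (12, {(1,2)}), the
# occurrence at positions (0,1) avoids the shaded square, while the
# occurrence at positions (0,2) does not.
sigma, perm, R = [1, 3, 2], [1, 2], {(1, 2)}
for occ in occurrences(perm, sigma):
    print(occ, shaded_count(perm, R, sigma, occ))
\end{verbatim}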
\section{Extremal enumeration}
Given a permutation $\pi$, it is not difficult to enumerate the mesh patterns $(\pi,R)$ for which the mesh $R$ is superfluous. This is because, by Theorem~\ref{thm:main}, we must simply choose an $R$ that has no enclosed diagonals.
\begin{defn}
Given a permutation $\pi$, let $\textup{\textsf{sup-mesh}}(\pi)$ be the number of meshes that are superfluous for $\pi$.
\end{defn}
To calculate \textup{\textsf{sup-mesh}} for a given permutation, we can use the inclusion-exclusion principle.
From there, it is also straightforward to enumerate the meshes that are not superfluous for some $\pi \in \mf{S}_k$, because these are all the remaining subsets of $[0,k]\times[0,k]$: $2^{(k+1)^2} - \textup{\textsf{sup-mesh}}(\pi)$.
\begin{ex}\label{ex:123 superfluous}
Consider $123 \in \mf{S}_3$. Because any square of the form depicted in \eqref{eqn:empty corners} would itself constitute an enclosed diagonal, each $(i,j)$ in a superfluous mesh for $123$ must satisfy $|i-j| \le 1$. The squares satisfying this requirement are shaded below.
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,0), (0,1), (1,0), (1,1), (1,2), (2,1), (2,2), (2,3), (3,2), (3,3)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4); \fill[black] (\x,\x) circle (5pt);}
\end{tikzpicture}$$
With this restriction in place, the only other enclosed diagonals to worry about are the following four possibilities.
$$
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,2,3} {\fill[black!20] (\x,\x) rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4); \fill[black] (\x,\x) circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {(1,0),(0,1)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4); \fill[black] (\x,\x) circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {(2,1),(1,2)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4); \fill[black] (\x,\x) circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {(3,2),(2,3)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4); \fill[black] (\x,\x) circle (5pt);}
\end{tikzpicture}
$$
We now use inclusion-exclusion to count the superfluous meshes for $123$:
\begin{eqnarray*}
\textup{\textsf{sup-mesh}}(123) &=& 2^{10} - (2^6 + 2^8 + 2^8 + 2^8) + (2^4 + 2^4 + 2^4 + 2^6 + 2^6 + 2^6)\\
&& \phantom{2^{10}} - (2^2 + 2^2 + 2^2 + 2^4) + 2^0\\
&=& 405.
\end{eqnarray*}
\end{ex}
\begin{ex}\label{ex:132 superfluous}
Consider $132 \in \mf{S}_3$. As in the previous example, the only possible squares that can be shaded by a superfluous mesh for $132$ are indicated below.
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,0), (0,1), (1,0), (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,1), (2,3), (3,2)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}$$
There are five other enclosed diagonals that we must also avoid.
$$
\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,0), (1,1)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,1), (2,3), (3,2)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {(1,2), (2,3)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,1), (2,3), (3,2)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {(2,1), (3,2)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,1), (2,3), (3,2)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,1), (1,0)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,1), (2,3), (3,2)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
\hspace{.5in}
\begin{tikzpicture}[scale=.5]
\foreach \x in {(1,3), (2,2), (3,1)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3} {\draw (0,\x) -- (4,\x); \draw (\x,0) -- (\x,4);}
\foreach \x in {(1,1), (2,3), (3,2)} {\fill[black] \x circle (5pt);}
\end{tikzpicture}
$$
Thus
\begin{eqnarray*}
\textup{\textsf{sup-mesh}}(132) &=& 2^{11} - (2^9 + 2^9 + 2^9 + 2^9 + 2^8)\\
&& \phantom{2^{11}} + (2^7 + 2^7 + 2^7 + 2^6 + 2^7 + 2^7 + 2^6 + 2^7 + 2^6 + 2^6) \\
&& \phantom{2^{11}} - (2^5 + 2^5 + 2^4 + 2^5 + 2^4 + 2^4 + 2^5 + 2^4 + 2^4 + 2^4)\\
&& \phantom{2^{11}} + (2^3 + 2^2 + 2^2 + 2^2 + 2^2) - 2^0\\
&=& 567.
\end{eqnarray*}
\end{ex}
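As an independent cross-check of the two inclusion-exclusion counts above, a short Python script can reproduce $\textup{\textsf{sup-mesh}}(123)=405$ and $\textup{\textsf{sup-mesh}}(132)=567$. This sketch is an added illustration: the allowed-square counts and the enclosed diagonals are supplied by hand, as read off from the figures in the two examples.
\begin{verbatim}
from itertools import combinations

def sup_mesh(num_allowed, diagonals):
    # Inclusion-exclusion over the enclosed diagonals: count the meshes
    # drawn from the allowed squares that contain no diagonal completely.
    total = 0
    for r in range(len(diagonals) + 1):
        for subset in combinations(diagonals, r):
            covered = set().union(*subset) if subset else set()
            total += (-1) ** r * 2 ** (num_allowed - len(covered))
    return total

# 123: ten allowed squares, four enclosed diagonals.
diag_123 = [{(0,0),(1,1),(2,2),(3,3)}, {(1,0),(0,1)},
            {(2,1),(1,2)}, {(3,2),(2,3)}]
print(sup_mesh(10, diag_123))   # 405

# 132: eleven allowed squares, five enclosed diagonals.
diag_132 = [{(0,0),(1,1)}, {(1,2),(2,3)}, {(2,1),(3,2)},
            {(0,1),(1,0)}, {(1,3),(2,2),(3,1)}]
print(sup_mesh(11, diag_132))   # 567
\end{verbatim}
Both printed values agree with the inclusion-exclusion sums above.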
From these examples, it is clear how to find those permutations that have the fewest and the most superfluous meshes. These extremes can be identified by examining how two different configurations
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,2} {\draw (\x,0) -- (\x,2);\draw (0,\x) -- (2,\x);}
\fill[black] (1,1) circle (5pt);
\end{tikzpicture}$$
in $G(\pi)$ can overlap; in other words, by looking at when $|\pi(i) - \pi(i+1)| = 1$.
\begin{cor}\label{cor:extremal}
Suppose $\pi \in \mf{S}_k$.
\begin{enumerate}\renewcommand{\labelenumi}{(\alph{enumi})}
\item $\textup{\textsf{sup-mesh}}(\pi)$ is minimized if and only if $|\pi(i) - \pi(i+1)| = 1$ for all $1 \le i < k$; that is, if and only if $\pi = 123\cdots k$ or $\pi = k \cdots 321$.
\item $\textup{\textsf{sup-mesh}}(\pi)$ is maximized if and only if $|\pi(i) - \pi(i+1)| \neq 1$ for all $1 \le i < k$; that is, if and only if there are maximally many enclosed diagonals to avoid ($2k$ of them, covering $4k$ squares).
\end{enumerate}
\end{cor}
\begin{proof}
This result relies on the fact that if $|\pi(i) - \pi(i+1)|$ equals $1$, then there will be fewer superfluous meshes possible than if $|\pi(i) - \pi(i+1)|$ does not equal $1$.
To see why this is the case, suppose that $\pi(i)$ is the $I$th in a sequence of increasing consecutive values in $\pi$, and the $D$th in a sequence of decreasing consecutive values in $\pi$, where certainly $\min\{I,D\} = 1$. More precisely,
$$\pi(i-1) = \pi(i) - 1,\ \pi(i-2) = \pi(i) - 2,\ \ldots, \ \pi(i-I+1) = \pi(i) - I + 1,$$
and
$$\pi(i-1) = \pi(i) + 1,\ \pi(i-2) = \pi(i) + 2,\ \ldots, \ \pi(i-D+1) = \pi(i) + D-1.$$
Similarly, suppose that $\pi(i+1)$ is first in a sequence of $I'$ increasing consecutive values in $\pi$, and first in a sequence of $D'$ decreasing consecutive values in $\pi$, where again $\min\{I',D'\} = 1$.
If $\pi(i) - \pi(i+1) = 1$, then $\{(i,\pi(i)),(i+1,\pi(i+1))\} \subseteq G(\pi)$ could belong to three possible enclosed diagonals: one with $I+1$ shaded squares, one with $I'+1$ shaded squares, and one with $D+D'+1$ shaded squares. The case of $\pi(i+1) - \pi(i) = 1$ is analogous. On the other hand, if $|\pi(i) - \pi(i+1)| \neq 1$, then there are four enclosed diagonals to avoid: one with $I+1$ shaded squares, one with $D+1$ shaded squares, one with $I'+1$ shaded squares, and one with $D'+1$ shaded squares.
Now, for each case, count the superfluous meshes involving the squares along these diagonals. The difference between this number in the latter situation and this number in the former is
\begin{align*}
&\Big[2^{I+I'+D+D'+4} - (2^{I+I'+D+3} + 2^{I+I'+D'+3} + 2^{I+D+D'+3} + 2^{I'+D+D'+3})\\
& \hspace{.75in} + (2^{I+I'+2} + 2^{I+D+2} + 2^{I+D'+2} + 2^{I'+D+2} + 2^{I'+D'+2} + 2^{D+D'+2})\\
& \hspace{.75in} - (2^{I+1} + 2^{I'+1} + 2^{D+1} + 2^{D'+1}) + 2^0\Big]\\
& \hspace{.25in} - \Big[2^{I+I'+D+D'+3} - (2^{I+I'+2} + 2^{I+D+D'+2} + 2^{I'+D+D'+2}) + (2^{I+1} + 2^{I'+1} + 2^{D+D'+1}) - 2^0\Big]\\
& \hspace{.25in} = 2(2^{I+1} - 1)(2^{I'+1}-1)(2^{D}-1)(2^{D'}-1)\\
& \hspace{.25in} > 0.
\end{align*}
\end{proof}
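Since the diagonals in each case are pairwise disjoint, the two bracketed inclusion-exclusion sums collapse to the products $(2^{I+1}-1)(2^{D+1}-1)(2^{I'+1}-1)(2^{D'+1}-1)$ and $(2^{I+1}-1)(2^{I'+1}-1)(2^{D+D'+1}-1)$, and the stated factorization of their difference can then be verified symbolically. The following sketch (an added check, with $a,b,c,d$ standing in for $2^{I},2^{I'},2^{D},2^{D'}$; it is an illustration, not part of the original proof) confirms the identity.
\begin{verbatim}
from sympy import symbols, expand

# Stand-ins for 2**I, 2**I', 2**D, 2**D'; the identity is polynomial
# in these four quantities, so plain symbols suffice.
a, b, c, d = symbols("a b c d")

count4 = (2*a - 1)*(2*c - 1)*(2*b - 1)*(2*d - 1)  # four disjoint diagonals
count3 = (2*a - 1)*(2*b - 1)*(2*c*d - 1)          # three disjoint diagonals
claimed = 2*(2*a - 1)*(2*b - 1)*(c - 1)*(d - 1)

print(expand(count4 - count3 - claimed))          # prints 0
\end{verbatim}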
The permutations characterized in Corollary~\ref{cor:extremal}(b) can also be described as avoiding the so-called 1-box pattern, as discussed in \cite{kitaev remmel}. Additionally, these permutations are enumerated by sequence A002464 of \cite{oeis}.
Applying the same counting techniques as in Examples~\ref{ex:123 superfluous} and~\ref{ex:132 superfluous}, we can enumerate the superfluous meshes for each extremal case described in Corollary~\ref{cor:extremal}.
\begin{cor}
Fix a positive integer $k$. Then
$$\min_{\pi \in \mathfrak{S}_k} \ \textup{\textsf{sup-mesh}} (\pi) = \sum_{i=0}^{k+1} (-1)^i\left(\binom{k}{i-1} 2^{2k-2i+2} + \binom{k}{i} 2^{3k+1-2i}\right),$$
where $\binom{b}{a} = 0$ for $a<0$ or $b<a$, and
$$\max_{\pi \in \mathfrak{S}_k} \ \textup{\textsf{sup-mesh}} (\pi) = \sum_{i=0}^{2k} (-1)^i \binom{2k}{i} 2^{4k-2i}.$$
\end{cor}
\begin{proof}
In the first case, where the permutation is either $123\cdots k$ or $k \cdots 321$, there will be $3k+1$ potential elements of a superfluous mesh. These possible squares can be partitioned into $k+1$ enclosed diagonals that need to be avoided: one of length $k+1$, and $k$ of length $2$. The following picture gives an example of this partition, with the arrows indicating the (hazardous) enclosed diagonals.
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,0), (0,1), (1,0), (1,1), (1,2), (2,1), (2,2), (2,3), (3,2), (3,3), (3,4), (4,3), (4,4), (4,5), (5,4), (5,5),(5,6),(6,5),(6,6)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3,4,5,6} {\draw (0,\x) -- (7,\x); \draw (\x,0) -- (\x,7);\fill[black] (\x,\x) circle (5pt);}
\draw[<->] (.5,.5) -- (6.5,6.5);
\foreach \x in {.5,1.5,2.5,3.5,4.5,5.5} {\draw[<->] (\x,\x+1) -- ++(1,-1);}
\end{tikzpicture}$$
The result follows from a calculation as in Examples~\ref{ex:123 superfluous} and~\ref{ex:132 superfluous}.
The only difference in the second case is that now there are $4k$ potential elements of a superfluous mesh, and these can be partitioned into $2k$ enclosed diagonals of length $2$ that must be avoided. The following picture gives an example of this partition, and again the arrows indicate the (hazardous) enclosed diagonals.
$$\begin{tikzpicture}[scale=.5]
\foreach \x in {(0,4),(0,5),(1,4),(1,5),(1,2),(1,3),(2,2),(2,3),(2,0),(2,1),(3,0),(3,1),(3,5),(3,6),(4,5),(4,6),(4,3),(4,4),(5,3),(5,4),(5,1),(5,2),(6,1),(6,2)} {\fill[black!20] \x rectangle ++(1,1);}
\foreach \x in {1,2,3,4,5,6} {\draw (0,\x) -- (7,\x); \draw (\x,0) -- (\x,7);}
\foreach \x in {.5,1.5,2.5}{\fill[black] (\x+.5,6-\x-\x) circle (5pt); \draw[<->] (\x,6.5-\x - \x) -- ++(1,-1); \draw[<->] (\x+1,6.5-\x-\x) -- ++(-1,-1);}
\foreach \x in {3.5,4.5,5.5}{\fill[black] (\x+.5,13-\x-\x) circle (5pt); \draw[<->] (\x,13.5-\x-\x) -- ++(1,-1); \draw[<->] (\x+1,13.5-\x-\x) -- ++(-1,-1);}
\end{tikzpicture}$$
\end{proof}
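For completeness, the two closed forms can be checked numerically; the following sketch is an added illustration, and the comment records the elementary observation that the maximal count telescopes to $9^{k}$ by the binomial theorem.
\begin{verbatim}
from math import comb

def binom(b, a):
    return comb(b, a) if 0 <= a <= b else 0

def min_sup_mesh(k):
    return sum((-1)**i * (binom(k, i - 1) * 2**(2*k - 2*i + 2)
                          + binom(k, i) * 2**(3*k + 1 - 2*i))
               for i in range(k + 2))

def max_sup_mesh(k):
    # By the binomial theorem this sum equals (4 - 1)^(2k) = 9**k.
    return sum((-1)**i * binom(2*k, i) * 2**(4*k - 2*i)
               for i in range(2*k + 1))

print(min_sup_mesh(3))        # 405, matching sup-mesh(123)
print(max_sup_mesh(3), 9**3)  # 729 729
\end{verbatim}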
\section*{Acknowledgements}
I am grateful to Sergey Kitaev for thoughtful comments on a draft of this paper, as well as for suggesting the problem in the first place. I also appreciate the input from an anonymous referee, particularly regarding terminology.
\section{Introduction}
\label{se:Introduction}
A magnetic skyrmion is a nanometric magnetic domain-wall structure whose spin configuration swirls in the plane and, in the compactification of the planar space, wraps a unit three-dimensional spherical surface with spins pointing in all directions~\cite{Roszler_NATURE2006,Ezawa_QHE2008,Nagaosa_NNANO2013,Bergmann_JPCM2014,Liu_CPB2015,Wiesendanger_Review2016}. Recently, experiments have revealed the existence of both N\'eel-type and Bloch-type skyrmions in thin films as well as bulk materials~\cite{Muhlbauer_SCIENCE2009,Yu_NATURE2010,Heinze_NPHYS2011,Seki_SCIENCE2012,Du_NANOLETT2014,Kezsmarki_NMATER2015,Nahas_NCOMMS2015,Woo_NMATER2016,Moreau-Luchaire_NNANO2016,Boulle_NNANO2016}. The spin configuration of a skyrmion is topologically protected: it is stable as long as it does not overlap tilted spins on the sample edge or shrink to the lattice scale. Another merit of a skyrmion is that it can be displaced by a spin-polarized current in confined structures. Thus, the skyrmion is expected to be a key component of future device applications in the emerging field of skyrmionics~\cite{Iwasaki_NC2012,Fert_NNANO2013,Sampaio_NNANO2013,Iwasaki_NNANO2013,Sun_PRL2013,Iwasaki_NL2014,Koshibae_NCOMMS2014,Tomasello_SREP2014,Yan_NCOMMS2014,Yan_NCOMMS2015,Xichao_SREP2015B,Koshibae_JJAP2015,Fusheng_NANOLETT2015,Upadhyaya_PRB2015,Beg_SREP2015,Crum_NCOMMS2015,Xichao_NCOMMS2016,Schutte_PRB2014}. Recent experiments have demonstrated the creation, manipulation, and elimination of skyrmions in confined geometries using cutting-edge technologies~\cite{Romming_SCIENCE2013,Finazzi_PRL2013,Du_NCOMMS2015,Nii_NCOMMS2015,Buttner_NPHYS2015,Wanjun_SCIENCE2015,Woo_NMATER2016,Moreau-Luchaire_NNANO2016,Boulle_NNANO2016}. However, one significant obstacle to the reliable transmission of skyrmions in devices is the skyrmion Hall effect (SkHE)~\cite{Zang_PRL2011,Wanjun_ARXIV2016}, where a skyrmion gains a transverse motion perpendicular to the imposed driving direction. As a consequence, the skyrmion, which carries encoded information, might be destroyed when it passes through a narrow racetrack-type channel~\cite{Parkin_SCIENCE2008,Parkin_NNANO2015} and/or when it travels at a high speed, by touching the edge of the device~\cite{Xichao_SREP2015A,Purnama_SREP2015,Xichao_NCOMMS2016}.
Recently we have shown in Ref.~\onlinecite{Xichao_NCOMMS2016} that a bilayer skyrmion moves perfectly straight in the direction of the driving current in a bilayer synthetic antiferromagnetic (SAF) racetrack, where the SkHE is completely suppressed. The two skyrmions in the top and bottom ferromagnetic (FM) layers of the bilayer SAF racetrack have opposite skyrmion numbers, and thus the potential transverse shifts are in opposite directions and cancel~\cite{Xichao_NCOMMS2016}. We note that the antiferromagnetic (AFM) skyrmions in monolayer~\cite{Xichao_SREP2016} and bilayer~\cite{Tretiakov_PRL2016} AFM racetracks are also free from the SkHE, since the AFM skyrmions have a skyrmion number of zero.
Another challenging problem is the effect of thermal fluctuations, which is detrimental to skyrmion-based device applications. A skyrmion is largely deformed and might easily be destroyed by thermal fluctuations. Hence, the thermal effect is one of the important factors in realizing skyrmion-based device applications~\cite{Yu_NMATER2011,Yu_NCOMMS2012,Schulz_NPHYS2012,Oike_NPHYS2015}.
In this paper, we study the thermal fluctuation effect on the motion of skyrmions in multilayer SAF racetracks as well as in monolayer FM racetracks based on numerical simulations and a multilayer Thiele equation. The skyrmions in all FM layers of a multilayer SAF racetrack are bound into a single SAF multilayer skyrmion by the interlayer AFM exchange coupling. We call it a SAF $N$-layer skyrmion, where $N$ stands for the number of constituent FM layers.
First, we investigate the current-driven dynamics of a SAF $N$-layer skyrmion at zero temperature ($T=0$ K). We find the odd-even effect of the number $N$ of constituent FM layers on the SkHE, which is understood as the total skyrmion number of the skyrmions in each individual FM layer. Namely, the SAF $N$-layer skyrmion shows the SkHE only when $N$ is odd. Furthermore, we find that the velocity of the SAF $N$-layer skyrmion at a given driving current density is inversely proportional to $N$ when the driving current is applied only in the bottom FM layer. We also point out that when the driving current is applied in all FM layers, the velocity of the SAF $N$-layer skyrmion is basically equal to that of the FM monolayer skyrmion at a given driving current density.
\begin{figure*}[t]
\centerline{\includegraphics[width=1.00\textwidth]{FIG1}}
\caption{(Color online)
Schematics of the simulation models including the monolayer ($N=1$) FM, bilayer ($N=2$) SAF, trilayer ($N=3$) SAF and quadrilayer ($N=4$) SAF racetracks. The length along the $x$-axis, width along the $y$-axis, and thickness along the $z$-axis of each FM layer and nonmagnetic spacer are respectively equal to $500$ nm, $50$ nm and $1$ nm, where the FM layer L1 is placed on the heavy-metal substrate. In the SAF racetracks, the adjacent FM layers are antiferromagnetically exchange-coupled through their respective FM/spacer/FM interfaces. The initial magnetization configurations of FM layers L1 ($n=1$) and L3 ($n=3$) are pointing along the $+z$-direction, while those of FM layers L2 ($n=2$) and L4 ($n=4$) are pointing along the $-z$-direction. The arrows represent the initial magnetization directions.
}
\label{FIG1}
\end{figure*}
Second, we investigate the current-driven dynamics of a SAF $N$-layer skyrmion at finite temperature ($T>0$ K). It is shown that the FM monolayer skyrmion with typical material parameters is destroyed when $T=100$ K. On the other hand, the SAF bilayer skyrmion is stable and goes straight along the bilayer SAF racetrack even at room temperature ($T=300$ K). This is because the two skyrmions constituting the SAF bilayer skyrmion are tightly bound by the interlayer AFM exchange coupling, and the SkHEs acting on the two skyrmions are opposite and cancel out. In general, for a system with a relatively large damping coefficient, the thermal fluctuation effect decreases rapidly as the layer number $N$ increases at a certain temperature. Our results provide a promising route to realizing skyrmion-based device applications at room temperature.
\section{Modeling and Simulation}
\label{se:Modeling}
\subsection{Monolayer FM and multilayer SAF racetracks}
\label{se:Monolayer-and-multilayer}
Figure~\ref{FIG1} illustrates the monolayer FM racetrack as well as the multilayer SAF racetracks studied in this paper. The monolayer FM racetrack contains one FM layer and a heavy-metal substrate underneath the FM layer. The $N$-layer SAF racetrack ($N\geq2$) includes $N$ FM layers, which are separated by $N-1$ nonmagnetic spacer layers. The FM layers are denoted from bottom to top as L1, L2, L3, $\cdots$. In all models, the length along the $x$-axis, width along the $y$-axis, and thickness along the $z$-axis of each FM layer and nonmagnetic spacer are respectively equal to $500$ nm, $50$ nm, and $1$ nm, where the FM layer L1 is attached to the heavy-metal substrate.
In the $N$-layer SAF racetracks ($N\geq2$), the neighboring FM layers are antiferromagnetically exchange-coupled through their FM/spacer/FM interfaces. The magnetization in each FM layer is perpendicular to the racetrack plane due to the high perpendicular magnetic anisotropy (PMA), while the magnetizations in neighboring FM layers are antiparallel due to the interlayer AFM exchange coupling. The Dzyaloshinskii-Moriya interaction (DMI) in the FM layers leads to the tilting of the magnetization near the edges of the FM layers. It is worth mentioning that the DMI in FM layers can be induced by both the heavy-metal substrate and spacer layers in real experiments~\cite{Wiesendanger_Review2016}. Recent theoretical~\cite{Yang_PRL2015,Dupe_NCOMMS2016} and experimental~\cite{Chen_APL2015,Stebliy_JAP2015,Moreau-Luchaire_NNANO2016,Boulle_NNANO2016,Woo_NMATER2016} studies have suggested and developed methods to induce the DMI in multilayers. More promisingly, it has been recently shown that by constructing the spacer layer made of two different heavy-metal materials, additive DMI can be achieved in multilayers~\cite{Moreau-Luchaire_NNANO2016}.
In our numerical simulations we explicitly consider the SAF $N$-layer skyrmions with $N=1,2,3,4$. We assume that the relaxed magnetization distributions of the FM layers L1 and L3 are almost pointing along the $+z$-direction, while those of the FM layers L2 and L4 are almost pointing along the $-z$-direction. The background magnetization directions determine the skyrmion number of the skyrmion in each FM layer. The skyrmion number equals $1$ in the FM layers L1 and L3, while it equals $-1$ in the FM layers L2 and L4. The total skyrmion number $Q_{\text{tot}}$ of the SAF $N$-layer skyrmion is $N$ modulo $2$. Namely, $Q_{\text{tot}}=1,0,1,0$ for $N=1,2,3,4$, respectively (see Sec.~\ref{se:SkHE} for details).
In the initial state, the skyrmions are first created and relaxed at the position of $x=100$ nm, $y=25$ nm. With regard to the injection scheme of the driving current, we consider a confined current-perpendicular-to-plane (CPP) geometry. Namely, an electron current flows through the heavy-metal substrate in the $+x$-direction, which is converted into a spin current polarized along the $-y$-direction and is perpendicularly injected into the FM layer L1, due to the spin Hall effect (see Ref.~\onlinecite{Wanjun_SCIENCE2015} for a recent experimental example). The skyrmion in the FM layer L1 is driven by the vertical spin current, while the other skyrmions in the FM layers L2, L3, and L4 move accordingly due to the interlayer AFM exchange coupling between adjacent FM layers. It should be noted that, for comparison purposes, we also simulate the straightforward case of unconfined CPP geometry with the bilayer SAF racetrack, where the spin current is injected into all FM layers, neglecting the spin current absorption in the model.
\subsection{Hamiltonian}
\label{se:Hamiltonian}
We investigate the multilayer SAF racetrack composed of $N$ FM layers, where the neighboring FM layers are antiferromagnetically exchange-coupled by the interlayer AFM exchange interaction, as illustrated in Fig.~\ref{FIG1}. The total Hamiltonian $H$ is decomposed into the Hamiltonian for each FM layer $H_{n}$ and the interlayer AFM exchange coupling $H_{\text{inter}}$ between neighboring FM layers, that is,
\begin{equation}
H=\sum_{n=1}^{N}H_{n}+H_{\text{inter}}.
\label{eq:Hamil-total}
\end{equation}
The Hamiltonian for each FM layer reads
\begin{eqnarray}
H_{n}&=&
-A_{\text{intra}}\sum_{\langle i,j\rangle}\boldsymbol{m}_{i}^{n}\cdot\boldsymbol{m}_{j}^{n}
+D_{ij}\sum_{\langle i,j\rangle}(\boldsymbol{\nu}_{ij}\times\hat{z})\cdot(\boldsymbol{m}_{i}^{n}\times\boldsymbol{m}_{j}^{n}) \nonumber \\
&&+K\sum_{i}[1-(m_{i}^{n,z})^{2}]+H_{\text{DDI}},
\label{eq:Hamil-intralayer}
\end{eqnarray}
where $n$ is the FM layer index ($n=1,2,\cdots,N$), $\boldsymbol{m}_{i}^{n}$ represents the local magnetic moment orientation normalized as $|\boldsymbol{m}_{i}^{n}|=1$ at the site $i$, and $\left\langle i,j\right\rangle$ runs over all the nearest-neighbor sites in each FM layer. The first term represents the intralayer FM exchange interaction with the intralayer FM exchange stiffness $A_{\text{intra}}$. The second term represents the DMI with the DMI coupling energy $D_{ij}$, where $\boldsymbol{\nu}_{ij}$ is the unit vector between sites $i$ and $j$. The third term represents the PMA with the anisotropy constant $K$. $H_{\text{DDI}}$ represents the dipole-dipole interaction. When $N>1$, there exists an AFM exchange coupling between the nearest-neighbor FM layers
\begin{equation}
H_{\text{inter}}=-\sum_{n=1}^{N-1}A_{\text{inter}}\sum_{i}\boldsymbol{m}_{i}^{n}\cdot\boldsymbol{m}_{i}^{n+1}.
\label{eq:Hamil-interlayer}
\end{equation}
The sign of the interlayer exchange stiffness $A_{\text{inter}}$ is negative for the interlayer AFM exchange interaction. We take the initial magnetization direction in the FM layer L1 to be pointing upward (see Fig.~\ref{FIG1}).
\begin{figure}[t]
\centerline{\includegraphics[width=0.50\textwidth]{FIG2}}
\caption{(Color online)
The velocity-velocity correlation function Eq.~(\ref{eq:v-v-correlations}) as a function of $T$ for the SAF $N$-layer skyrmion at (a) $\alpha=\D$ and (b) $\alpha=\D/10$. The solid curve shows the case of the SAF $N$-layer skyrmion with an odd $N$, which has $Q_{\text{tot}}=1$. The dashed curve shows the case of the SAF $N$-layer skyrmion with an even $N$, which has $Q_{\text{tot}}=0$. Here we assume $2k_{\text{B}}a^{2}/\hbar=\mathcal{D}=1$ in Eq.~(\ref{eq:v-v-correlations}).
}
\label{FIG2}
\end{figure}
\subsection{LLG equation at finite temperature}
\label{se:LLG}
The dynamics of a skyrmion at a given finite temperature is modeled by introducing a Gaussian stochastic magnetic field $\boldsymbol{h}$ that describes the thermal agitation of the magnetization~\cite{Brown,Kubo,Duine,Mochizuki,Tronco} and satisfies
\begin{equation}
\langle h_{i}(\boldsymbol{x},t)h_{j}(\boldsymbol{x}^{\prime},t^{\prime})\rangle=\frac{2\alpha k_{\text{B}}T}{\hbar}a^{2}\delta(\boldsymbol{x}-\boldsymbol{x}^{\prime})\delta_{ij}\delta(t-t^{\prime}),
\label{eq:h}
\end{equation}
where $i,j=x,y$, and $a^{2}$ is the area of the lattice. In the CPP geometry, we numerically solve the Landau-Lifshitz-Gilbert (LLG) equation including the spin-transfer torque (STT) term extended into the following form
\begin{align}
\frac{d\boldsymbol{m}_{i}}{dt}=&-|\gamma|\boldsymbol{m}_{i}\times(\boldsymbol{H}_{i}^{\text{eff}}+\boldsymbol{h})+\alpha\boldsymbol{m}_{i}\times\frac{d\boldsymbol{m}_{i}}{dt} \notag \\
&+\left\vert\gamma\right\vert u(\boldsymbol{m}_{i}\times\boldsymbol{p}\times\boldsymbol{m}_{i}),
\label{eq:LLGS}
\end{align}
with the layer index $n$ suppressed. Here, $\boldsymbol{H}_{i}^{\text{eff}}=-\partial H_{\text{total}}/\partial\mathbf{m}_{i}$ is the effective magnetic field induced by the total Hamiltonian $H_{\text{total}}$, $\gamma$ is the gyromagnetic ratio, $\alpha$ is the Gilbert damping coefficient originating from the spin relaxation, $u$ is the STT coefficient, and $\boldsymbol{p}$ represents the unit spin polarization vector of the spin current. We have $u=|\frac{\hbar}{\mu_{0}e}|\frac{jP}{2dM_{\text{S}}}$ with $\mu_{0}$ the vacuum magnetic permeability, $d$ the thickness of the FM layer, $M_{\text{S}}$ the saturation magnetization, $j$ the applied current density, and $P$ the spin polarization rate. The STT exerted on the FM layer L1 is induced by the spin Hall effect. It should be noted that we have $j=0$ in the FM layers above the FM layer L1; thus there is no STT effect on the FM layers with layer index $n>1$.
\subsection{SkHE in multilayer SAF racetracks}
\label{se:SkHE}
\begin{figure*}[t]
\centerline{\includegraphics[width=1.00\textwidth]{FIG3}}
\caption{(Color online)
Typical trajectories of SAF $N$-layer skyrmions in $N$-layer SAF racetracks.
(a) The trajectory of a FM monolayer skyrmion in a monolayer FM racetrack ($N=1$) at $j=10$ MA cm$^{-2}$. The transverse shift of the monolayer skyrmion due to the SkHE is obvious. It reaches a stable velocity of $v_x\sim 70$ m s$^{-1}$.
(b) The trajectory of a SAF bilayer skyrmion in a bilayer SAF racetrack ($N=2$) at $j=20$ MA cm$^{-2}$. The SAF bilayer skyrmion moves along the central line ($y=25$ nm) of the racetrack, which reaches a stable velocity of $v_x\sim 70$ m s$^{-1}$.
(c) The trajectory of a SAF trilayer skyrmion in a trilayer SAF racetrack ($N=3$) at $j=30$ MA cm$^{-2}$. It reaches a stable velocity of $v_x\sim 70$ m s$^{-1}$.
(d) The trajectory of a SAF quadrilayer skyrmion in a quadrilayer SAF racetrack ($N=4$) at $j=40$ MA cm$^{-2}$. The SAF quadrilayer skyrmion moves along the central line ($y=25$ nm) of the racetrack, which reaches a stable velocity of $v_x\sim 70$ m s$^{-1}$. The dot denotes the center of the skyrmion. The total simulation time is $5000$ ps, which is indicated by the color scale.
}
\label{FIG3}
\end{figure*}
\begin{figure*}[t]
\centerline{\includegraphics[width=1.00\textwidth]{FIG4}}
\caption{(Color online)
(a) The skyrmion Hall angle $v_{y}/v_{x}$ as a function of $t$. It is almost a constant for $N=1$ and $N=3$. Here, we use a square sample of $200$ nm $\times$ $200$ nm $\times$ $1$ nm in order to reduce the impact of the edge effect. The skyrmion is located at the film center ($100$ nm, $100$ nm) at the initial time.
(b) The velocity $v_{x}$ as a function of $j$ for the motion of SAF $N$-layer skyrmions, where the driving current is injected into the bottom FM layer L1. When $j>10$ MA cm$^{-2}$ and $j>100$ MA cm$^{-2}$, the moving FM monolayer and SAF trilayer skyrmions are destroyed en-route caused by the SkHE, respectively. The open symbols stand for the numerical results. The solid lines represent the theoretical results given by Eq.~(\protect\ref{eq:Mean-Velocity}) with $\protect\alpha\mathcal{D}=0.57$. The dashed line with cross symbol indicates the velocity $v_x$ of a SAF bilayer skyrmion in a bilayer SAF racetrack, where the driving current is injected into both the bottom FM layer L1 and the top FM layer L2.
(c) The inverse velocity $v_{x}^{-1}$ as a function of the total FM layer number $N$ at small $j$, where no skyrmion is destroyed by the SkHE. The open symbols stand for the numerical results. The solid lines represent the theoretical results given by Eq.~(\protect\ref{eq:Mean-Velocity}).
}
\label{FIG4}
\end{figure*}
We employ the Thiele equation~\cite{Thiele_PRL1973,Tomasello_SREP2014} with the inclusion of the stochastic force~\cite{Tronco} in order to interpret the numerical results. We generalize it to the SAF $N$-layer skyrmion system in multilayer SAF racetracks driven by the spin current with the CPP geometry at finite temperature. In each FM layer it reads
\begin{equation}
\mathbf{G}_{n}\times\mathbf{v}^{n}-{\mathcal{D}}\alpha\mathbf{v}^{n}+\mathbf{j}_{\text{spin}}^{n}+\mathbf{I}_{\text{AFM}}^{n}=\mathbf{\eta}^{n},
\label{eq:ThieleEq-layer}
\end{equation}
with $n$ the layer index, where $\mathbf{v}^{n}$, $\mathbf{j}_{\text{spin}}^{n}$ and $\mathbf{I}_{\text{AFM}}^{n}$ represent the skyrmion velocity, the spin current, and the interlayer AFM exchange force, respectively. $\mathbf{G}_{n}=(0,0,4\pi Q_{n})$ is the gyromagnetic coupling constant representing the Magnus force with $Q_{n}$ the skyrmion number, which is defined as
\begin{equation}
Q_{n}=-\frac{1}{4\pi}\int\boldsymbol{m}^{n}(\boldsymbol{x})\cdot\left(\partial_{x}\boldsymbol{m}^{n}(\boldsymbol{x})\times\partial_{y}\boldsymbol{m}^{n}(\boldsymbol{x})\right)d^{2}\boldsymbol{x},
\label{eq:SkNum}
\end{equation}
and $\mathbf{\eta}^{n}$ is the Gaussian stochastic force acting on the skyrmion, representing the finite-temperature effect, which satisfies
\begin{equation}
\langle\eta_{i}^{n}(t)\eta_{j}^{n}(t^{\prime})\rangle=\frac{2\alpha k_{\text{B}}T}{\hbar}a^{2}\mathcal{D}\delta_{ij}\delta(t-t^{\prime}),
\label{eq:Eta}
\end{equation}
where $i,j=x_{n},y_{n}$. We have taken the same dissipation matrix ${\mathcal{D}}$ and the same damping coefficient $\alpha$ for all racetracks.
We now postulate that all skyrmions move together with the same velocity $\mathbf{v}$ since they are tightly bound. Summing all $N$ Thiele Eqs.~(\ref{eq:ThieleEq-layer}), we would phenomenologically obtain
\begin{equation}
\mathbf{G}_{\text{tot}}\times\mathbf{v}-N{\mathcal{D}}\alpha\mathbf{v}+\mathbf{j}_{\text{spin}}=\mathbf{\eta}_{\text{tot}},
\label{eq:ThieleEq-total}
\end{equation}
where the interlayer AFM forces are assumed to be canceled out, that is, $\sum\mathbf{I}_{\text{AFM}}^{n}=0$, and $\mathbf{G}_{\text{tot}}=(0,0,4\pi Q_{\text{tot}})$ with
\begin{equation}
Q_{\text{tot}}=\sum_{n=1}^{N}Q_{n},
\label{eq:SkNum-total}
\end{equation}
and $\mathbf{j}_{\text{spin}}=\sum_{n=1}^{N}\mathbf{j}_{\text{spin}}^{n}$, $\mathbf{\eta}_{\text{tot}}=\sum_{n=1}^{N}\mathbf{\eta}^{n}$. Actually, $\mathbf{j}_{\text{spin}}^{n}=\delta_{n1}\mathbf{j}_{\text{spin}}$, where $\mathbf{j}_{\text{spin}}$ is the spin current induced by the charge current in the heavy-metal substrate due to the spin Hall effect (see Sec.~\ref{se:Monolayer-and-multilayer}).
The first term on the left-hand side of Eq.~(\ref{eq:ThieleEq-total}) corresponds to the Magnus force. The total skyrmion number equals one, $Q_{\text{tot}}=1$, when the number $N$ of the FM layers is odd, while it equals zero, $Q_{\text{tot}}=0$, when the number $N$ of the FM layers is even, since $Q_{n}=-(-1)^{n}$. The variance of the sum of the Gaussian noise is given by
\begin{equation}
\langle\eta_{\text{tot}}^{i}(t)\eta_{\text{tot}}^{j}(t^{\prime})\rangle=\frac{2\alpha k_{\text{B}}T}{N\hbar}a^{2}\mathcal{D}\delta_{ij}\delta(t-t^{\prime}),
\label{eq:Eta-total}
\end{equation}
where $i,j=x,y$. As a result, the variance of the total noise becomes $1/N$ in the SAF $N$-layer skyrmion. Therefore, the SAF $N$-layer skyrmion is $N$ times more stable than the FM monolayer skyrmion at a certain temperature and a certain damping coefficient.
The velocity is given by explicitly solving the Thiele Eq.~(\ref{eq:ThieleEq-total}) as
\begin{widetext}
\begin{eqnarray}
v_{x}&=&\frac{\alpha N{\mathcal{D}}}{Q_{\text{tot}}^{2}
+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}}\left(j_{\text{spin}}-\eta_{\text{tot}}^{x}\right)
+\frac{Q_{\text{tot}}}{Q_{\text{tot}}^{2}+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}}\eta_{\text{tot}}^{y}, \quad \\
v_{y}&=&\frac{Q_{\text{tot}}}{Q_{\text{tot}}^{2}+{\alpha^{2}N}^{2}
{{\mathcal{D}}^{2}}}\left(j_{\text{spin}}-\eta_{\text{tot}}^{x}\right)
-\frac{\alpha N{\mathcal{D}}}{Q_{\text{tot}}^{2}+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}}\eta_{\text{tot}}^{y}.
\label{eq:vx-vy}
\end{eqnarray}
The mean velocity is thus given by
\begin{equation}
\left\langle v_{x}\right\rangle=\frac{\alpha N{\mathcal{D}}}{Q_{\text{tot}}^{2}
+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}}j_{\text{spin}},\quad
\left\langle v_{y}\right\rangle=\frac{Q_{\text{tot}}}{Q_{\text{tot}}^{2}
+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}}j_{\text{spin}}.
\label{eq:Mean-Velocity}
\end{equation}
\end{widetext}
When $Q_{\text{tot}}=1$, which is the case for $N$ being odd, a skyrmion undergoes a transverse motion, where
\begin{equation}
\frac{\left\langle v_{y}\right\rangle}{\left\langle v_{x}\right\rangle}=\frac{1}{\alpha N{\mathcal{D}}}.
\label{eq:HallAngle}
\end{equation}
When $Q_{\text{tot}}=0$, which is the case for $N$ being even, a skyrmion goes straight, where
\begin{equation}
\left\langle v_{x}\right\rangle=\frac{1}{\alpha N{\mathcal{D}}}j_{\text{spin}}, \quad
\left\langle v_{y}\right\rangle=0.
\label{eq:vx-vy-Q-0}
\end{equation}
Consequently the SAF $N$-layer skyrmion experiences the SkHE only when $N$ is odd. We call it the odd-even effect on the SkHE.
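As a quick numerical illustration of this odd-even effect (an added sketch, with $j_{\text{spin}}=1$ in arbitrary units and the value $\alpha\mathcal{D}=0.57$ that is fitted from the simulations in Sec.~\ref{se:motion-at-zero-temperature}), Eq.~(\ref{eq:Mean-Velocity}) gives:
\begin{verbatim}
alphaD = 0.57   # fitted value of alpha*D, cf. Fig. 4

def mean_velocity(N, j_spin=1.0):
    # Mean velocities <v_x>, <v_y> from the mean-velocity formula above,
    # with Q_tot = 1 for odd N and Q_tot = 0 for even N.
    Q_tot = N % 2
    denom = Q_tot**2 + (alphaD * N)**2
    return alphaD * N * j_spin / denom, Q_tot * j_spin / denom

for N in (1, 2, 3, 4):
    vx, vy = mean_velocity(N)
    print(N, round(vx, 3), round(vy, 3))
# Odd N:  vy/vx = 1/(alphaD*N), i.e., the SkHE is present.
# Even N: vy = 0, i.e., the skyrmion moves straight.
\end{verbatim}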
The velocity of the skyrmion decreases with increasing $N$ at a given driving current density. The velocity-velocity correlation functions are calculated as
\begin{widetext}
\begin{eqnarray}
\langle v_{x}(t)v_{x}(t^{\prime })\rangle&=&\frac{\left(\alpha N{\mathcal{D}}\right)^{2}\langle\eta_{\text{tot}}^{x}(t)\eta_{\text{tot}}^{x}(t^{\prime})\rangle+Q_{\text{tot}}^{2}\langle\eta_{\text{tot}}^{y}(t)\eta_{\text{tot}}^{y}(t^{\prime})\rangle}{\left(Q_{\text{tot}}^{2}+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}\right)^{2}}, \\
\langle v_{y}(t)v_{y}(t^{\prime})\rangle&=&\frac{Q_{\text{tot}}^{2}\langle\eta_{\text{tot}}^{x}(t)\eta_{\text{tot}}^{x}(t^{\prime})\rangle+\left(\alpha N{\mathcal{D}}\right)^{2}\langle\eta_{\text{tot}}^{y}(t)\eta_{\text{tot}}^{y}(t^{\prime})\rangle}{\left(Q_{\text{tot}}^{2}+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}\right)^{2}}.
\label{eq:vx-vy-correlations}
\end{eqnarray}
Substituting Eq.~(\ref{eq:Eta-total}), the correlation functions are obtained as
\begin{eqnarray}
\langle v_{x}(t)v_{x}(t^{\prime })\rangle&=&\langle v_{y}(t)v_{y}(t^{\prime })\rangle \nonumber \\
&=&\frac{1}{Q_{\text{tot}}^{2}+{\alpha^{2}N}^{2}{{\mathcal{D}}^{2}}}\frac{2\alpha k_{\text{B}}T}{N\hbar}a^{2}\mathcal{D}\delta_{II^{\prime}}\delta(t-t^{\prime}),
\label{eq:v-v-correlations}
\end{eqnarray}
\end{widetext}
which are functions of $T$ and $\alpha$ for a given SAF $N$-layer skyrmion.
Since the scope of this paper is focused on the temperature effect on the SAF $N$-layer skyrmion, we assume $2k_{\text{B}}a^{2}/\hbar=\mathcal{D}=1$ in Eq.~(\ref{eq:v-v-correlations}) and show the correlation functions as a function of $T$ for the SAF $N$-layer skyrmions under the assumptions of a large damping coefficient ($\alpha=\D$) and a small damping coefficient ($\alpha=\D/10$) in Figs.~\ref{FIG2}(a) and \ref{FIG2}(b), respectively.
It can be seen that, for the case of a large damping coefficient ($\alpha=\D$) [Fig.~\ref{FIG2}(a)], the correlation functions of the $N$-layer skyrmion, where $N=1,2,\cdots,6$, increase with increasing $T$. On the other hand, the correlation functions are inversely proportional to $N$. This indicates that the stability of the SAF $N$-layer skyrmion with a large damping coefficient increases with $N$ but decreases with $T$. In contrast, for the case of a small damping coefficient ($\alpha=\D/10$) [Fig.~\ref{FIG2}(b)], the correlation functions of the SAF $N$-layer skyrmion, where $N=1,2,\cdots,6$, also increase with increasing $T$. However, the correlation functions are nonmonotonic with respect to $N$. At a certain $T$, it can be seen that the correlation functions of the SAF $N$-layer skyrmions with $N=2,4$ are larger than that of the SAF $N$-layer skyrmion with $N=1$. This means that the stability of the SAF $N$-layer skyrmion with a small damping coefficient decreases with $T$; however, the FM monolayer skyrmion with $Q_{\text{tot}}=1$ is more stable than the SAF bilayer skyrmion with $Q_{\text{tot}}=0$ at a certain $T$. It is noteworthy that this result is in good agreement with a recent study on the AFM skyrmion (see Ref.~\onlinecite{Tretiakov_PRL2016}), where the fluctuation of the AFM skyrmion with $Q_{\text{tot}}=0$ is found to be inversely proportional to $\alpha$, and is more significant than that of the FM skyrmion with $Q_{\text{tot}}=1$ at a small damping coefficient $\alpha=0.01$. It is also worth mentioning that the fluctuation of the FM skyrmion with $Q_{\text{tot}}=1$ is proportional to $\alpha$ (see Refs.~\onlinecite{Tretiakov_PRL2016,Schutte_PRB2014}). Hence, in order to retain a monotonic temperature dependence with respect to $N$, we numerically study the SAF $N$-layer skyrmion under the large-damping-coefficient assumption in the following sections.
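The monotonic versus nonmonotonic behavior described above can be reproduced directly from the prefactor of Eq.~(\ref{eq:v-v-correlations}); the following sketch (an added illustration, with $2k_{\text{B}}a^{2}/\hbar=\mathcal{D}=1$ and $T$ in arbitrary units) prints the prefactor for $N=1,\ldots,6$:
\begin{verbatim}
D = 1.0

def corr_prefactor(N, T, alpha):
    # Prefactor of the velocity-velocity correlations, with the
    # normalization 2 k_B a^2 / hbar = D = 1.
    Q_tot = N % 2
    return alpha * T * D / (N * (Q_tot**2 + (alpha * N * D)**2))

for alpha in (D, D / 10):   # large vs small damping, as in Fig. 2
    row = [round(corr_prefactor(N, 1.0, alpha), 4) for N in range(1, 7)]
    print(alpha, row)
# alpha = 1.0: monotonically decreasing in N.
# alpha = 0.1: the values at N = 2 and N = 4 exceed the value at N = 1,
#              reproducing the nonmonotonic behavior described above.
\end{verbatim}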
\begin{figure*}[t]
\centerline{\includegraphics[width=1.00\textwidth]{FIG5}}
\caption{(Color online)
Top views of the motion of a FM monolayer skyrmion and a SAF bilayer skyrmion at different $T$ and selected $t$.
(a) The motion of a FM monolayer skyrmion ($Q_{\text{tot}}=1$) in a monolayer FM racetrack ($N=1$). A moderate driving current of $j=10$ MA cm$^{-2}$ is applied. The FM monolayer skyrmion driven by the spin current moves safely from the left to the right end of the racetrack at $T=0$ and $50$ K. However, when $T\geq 100$ K, it is destroyed by touching the upper edge.
(b) The motion of a SAF bilayer skyrmion ($Q_{\text{tot}}=0$) in a bilayer SAF racetrack ($N=2$). A moderate driving current of $j=20$ MA cm$^{-2}$ is applied. The SAF bilayer skyrmion, which is immune from the SkHE, reliably moves along the central line ($y=25$ nm) of the racetrack even at $T=300$ K. The seed of the random number generator used to generate the thermal fluctuation field is set to $100$ in all simulations with $T>0$ K. The average out-of-plane magnetization component $m_z$ is denoted by the green-white-orange color scale.
}
\label{FIG5}
\end{figure*}
\begin{figure*}[t]
\centerline{\includegraphics[width=1.00\textwidth]{FIG6}}
\caption{(Color online)
Typical trajectories of a FM monolayer skyrmion and a SAF bilayer skyrmion at different $T$. Trajectories of a FM monolayer skyrmion ($Q_{\text{tot}}=1$) in a monolayer FM racetrack at (a) $T=0$ K, (b) $T=50$ K, (c) $T=100$ K, (d) $T=150$ K, (e) $T=200$ K, (f) $T=250$ K, and (g) $T=300$ K. A moderate driving current of $j=10$ MA cm$^{-2}$ is applied. Trajectories of a SAF bilayer skyrmion ($Q_{\text{tot}}=0$) in a bilayer SAF racetrack at (h) $T=0$ K, (i) $T=50$ K, (j) $T=100$ K, (k) $T=150$ K, (l) $T=200$ K, (m) $T=250$ K, and (n) $T=300$ K. A moderate driving current of $j=20$ MA cm$^{-2}$ is applied. The seed of the random number generator used to generate the thermal fluctuation field is set to $100$ in all simulations with $T>0$ K. The total simulation time is $5000$ ps, which is represented by the color scale. The dot denotes the center of the skyrmion. The red cross indicates the destruction of the skyrmion by touching the upper edge caused by the SkHE.
}
\label{FIG6}
\end{figure*}
\begin{figure*}[t]
\centerline{\includegraphics[width=1.00\textwidth]{FIG7}}
\caption{(Color online)
Distributions of the $y$-position of SAF $N$-layer skyrmions at different $T$ fitted by the Gaussian distribution. (a) A FM monolayer skyrmion ($Q_{\text{tot}}=1$) driven by a current of $j=10$ MA cm$^{-2}$. At $T=0$ K, the $y$-position equals $\sim 29.3$ nm. In order to ensure a safe motion of $\sim 300$ nm, $T$ is only increased up to $90$ K. (b) A SAF bilayer skyrmion ($Q_{\text{tot}}=0$) driven by a current of $j=20$ MA cm$^{-2}$. At $T=0$ K, the $y$-position equals $\sim 25.0$ nm. (c) A SAF trilayer skyrmion ($Q_{\text{tot}}=1$) driven by a current of $j=30$ MA cm$^{-2}$. At $T=0$ K, the $y$-position equals $\sim 26.7$ nm. (d) A SAF quadrilayer skyrmion ($Q_{\text{tot}}=0$) driven by a current of $j=40$ MA cm$^{-2}$. At $T=0$ K, the $y$-position equals $\sim 25.0$ nm. (e) The mean $y_0$ of the distribution of the $y$-position as a function of $T$. (f) The standard deviation $\protect\sigma_y$ of the distribution of the $y$-position as a function of $T$. The $y$-position of the skyrmion at $T=0$ K is indicated by the vertical dashed line in (a-d). The seed of the random number generator used to generate the thermal fluctuation field is set to $100$ in all simulations with $T>0$ K.
}
\label{FIG7}
\end{figure*}
\begin{figure*}[t]
\centerline{\includegraphics[width=1.00\textwidth]{FIG8}}
\caption{(Color online)
Trajectories of a FM monolayer skyrmion and a SAF bilayer skyrmion at selected $T$ with several different random seed values. Trajectories of a FM monolayer skyrmion ($Q_{\text{tot}}=1$) (a) at $T=0,50$ K and (b) at $T=0,100$ K with a driving current of $j=10$ MA cm$^{-2}$. (c) Trajectories of a SAF bilayer skyrmion ($Q_{\text{tot}}=0$) at $T=0, 300$ K with a driving current of $j=20$ MA cm$^{-2}$. The seed of the random number generator used to generate the thermal fluctuation field is set to $100$, $101$, and $102$, respectively. The total simulation time is $5000$ ps. The symbol denotes the center of the skyrmion. The red cross indicates the destruction of the skyrmion by touching the upper edge caused by the SkHE.
}
\label{FIG8}
\end{figure*}
\subsection{Simulation methods}
\label{se:Simulation-methods}
The three-dimensional micromagnetic simulations are performed by using the 1.2 alpha 5 release of the Object Oriented MicroMagnetic Framework (OOMMF) software developed at the National Institute of Standards and Technology (NIST)~\cite{OOMMF}. The simulations are handled by the OOMMF extensible solver (OXS) objects of the standard OOMMF distribution with the OXS extension modules for including the interface-induced DMI~\cite{OOMMFDMI,Rohart_PRB2013} and the thermal fluctuation~\cite{OOMMFXF_ThermSpinXferEvolve}. The magnetization dynamics at zero temperature ($T=0$ K) is controlled by the LLG equation including the STT term~\cite{OOMMF,LLGSTT}, while a highly irregular fluctuating field representing the influence of temperature is added to this equation when the thermal effect is considered ($T>0$ K). The finite temperature simulations are performed with a fixed time step of $10$ fs, that is, $1\times 10^{-14}$ s, while the time step in the zero temperature simulations is adaptive ($\sim 1\times 10^{-13}$ s). Each simulation with a certain finite temperature is performed $10$ times individually with different random seed values. The models built in the micromagnetic simulations are discretized into regular cells with a constant cell size of $2$ nm $\times$ $2$ nm $\times$ $1$ nm, which allows for a trade-off between numerical accuracy and computational efficiency. In the CPP geometry, the spin current polarized along the $-y$-direction flows upward in the bottom FM layer L1, which is induced by the charge current flowing in the heavy-metal substrate due to the spin Hall effect. The current-induced Oersted field is neglected in all simulations for simplicity since it makes only a minor contribution to the overall magnetization dynamics. The typical material parameters used in the micromagnetic simulations are adopted from Refs.~\cite{Fert_NNANO2013,Sampaio_NNANO2013,Tomasello_SREP2014,Xichao_NCOMMS2016}: Gilbert damping coefficient $\alpha=0.3$, which is large enough to satisfy the large-damping-coefficient assumption; gyromagnetic ratio $\gamma=-2.211\times 10^{5}$ m A$^{-1}$ s$^{-1}$; saturation magnetization $M_{\text{S}}=580$ kA m$^{-1}$; intralayer FM exchange stiffness $A_{\text{intra}}=15$ pJ m$^{-1}$; interlayer AFM exchange stiffness $A_{\text{inter}}=-1$ pJ m$^{-1}$; interface AFM exchange coefficient $\sigma=-1$ mJ m$^{-2}$; DMI constant $D=3.5$ mJ m$^{-2}$; PMA constant $K=0.8$ MJ m$^{-3}$; and spin-polarization rate $P=0.4$.
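As a rough numerical cross-check (an added sketch; note that unit conventions for the STT coefficient vary in the literature), $u$ can be evaluated from the expression in Sec.~\ref{se:LLG} for the moderate driving current $j=10$ MA cm$^{-2}$ used below:
\begin{verbatim}
from scipy.constants import hbar, mu_0, e

# u = |hbar/(mu_0 e)| * j P / (2 d M_S), evaluated with the material
# parameters listed above for j = 10 MA/cm^2.
j   = 10e6 * 1e4   # A/m^2 (10 MA/cm^2)
P   = 0.4          # spin-polarization rate
d   = 1e-9         # FM layer thickness (m)
M_S = 580e3        # saturation magnetization (A/m)

u = abs(hbar / (mu_0 * e)) * j * P / (2 * d * M_S)
print(u)   # ~1.8e4 in SI units (A/m in this convention)
\end{verbatim}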
\section{Current-induced motion of skyrmions at zero temperature}
\label{se:motion-at-zero-temperature}
We start with a numerical investigation of the current-velocity relation of skyrmions in $N$-layer SAF racetracks, with $N=1,2,3,4$, at zero temperature ($T=0$ K) (see Fig.~\ref{FIG3}).
Let us recapitulate the current-induced motion of a FM monolayer skyrmion in a monolayer FM racetrack, which undergoes a transverse motion toward the upper edge of the racetrack because of the SkHE (see Ref.~\onlinecite{SI} for Movie 1). We show the trajectory at a moderate driving current of $j=10$ MA cm$^{-2}$ in Fig.~\ref{FIG3}(a). The moving skyrmion reaches a stable velocity of $v_x\sim 70$ m s$^{-1}$ and has a transverse shift of $\sim 5$ nm due to the SkHE. It does not touch the edge because of the repulsive force from the edge.
Nevertheless, when $j>10$ MA cm$^{-2}$, it is destroyed by touching the edge shortly after the driving current is applied (see Ref.~\onlinecite{SI} for Movie 1).
Let us also recapitulate the current-induced motion of a SAF bilayer skyrmion in a bilayer SAF racetrack (see Ref.~\onlinecite{SI} for Movie 2). It goes straight in the bilayer SAF racetrack as a result of the suppression of the SkHE ($Q_{\text{tot}}=0$). The trajectory at a moderate driving current of $j=20$ MA cm$^{-2}$ is shown in Fig.~\ref{FIG3}(b), where it reaches a stable speed of $\sim 70$ m s$^{-1}$. The SAF bilayer skyrmion strictly moves along the central line ($y=25$ nm) of the racetrack.
We go on to study the current-induced motion of a SAF trilayer skyrmion with $Q_{\text{tot}}=1$ [see Fig.~\ref{FIG3}(c)], and a SAF quadrilayer skyrmion with $Q_{\text{tot}}=0$ [see Fig.~\ref{FIG3}(d)]. The SAF trilayer skyrmion experiences the SkHE (see Ref.~\onlinecite{SI} for Movie 3) as in the case of the FM monolayer skyrmion. On the other hand, the SAF quadrilayer skyrmion moves reliably in the quadrilayer SAF racetrack even at a strong driving current (see Ref.~\onlinecite{SI} for Movie 4), demonstrating the suppression of the SkHE as in the case of a SAF bilayer skyrmion. The SAF quadrilayer skyrmion moves along the central line ($y=25$ nm) of the racetrack.
We compare the SkHEs at $N=1$ and $N=3$ quantitatively. We show the skyrmion Hall angle $v_{y}/v_{x}$ as a function of time $t$ in Fig.~\ref{FIG4}(a) for $N=1,3$. The skyrmion Hall angle is inversely proportional to $N$. Namely, the SkHE for $N=3$ is three times smaller than that for $N=1$. The theoretical expectation Eq.~(\ref{eq:HallAngle}) explains the numerical data remarkably well with the choice of $\alpha\mathcal{D}=0.57$.
In Fig.~\ref{FIG4}(b), we show the velocity $v_{x}$ as a function of the applied driving current density $j$ for the motion of SAF $N$-layer skyrmions, where $N=1,2,3,4$. We have fitted the data successfully by the theoretical expectation Eq.~(\ref{eq:Mean-Velocity}) with the use of $\alpha\mathcal{D}=0.57$. The velocity $v_{x}$ is almost inversely proportional to $N$ as shown in Fig.~\ref{FIG4}(c). This is because the driving current is only applied to the bottom FM layer L1.
In order to further improve the $j$-$v$ relation of the SAF $N$-layer skyrmions, here taking the SAF bilayer skyrmion as an example, we also investigate the $j$-$v$ curve when the driving current is applied in both constituent FM layers of the bilayer SAF racetrack, which is plotted in Fig.~\ref{FIG4}(b) as a dashed curve. It can be seen that, when both FM layers are driven by the current, the $j$-$v$ relation of the SAF bilayer skyrmion matches well with that of a FM monolayer skyrmion moving in a monolayer FM racetrack in the small driving current regime. When $j=10$ MA cm$^{-2}$, the velocity of the FM monolayer skyrmion is $v_x=70$ m s$^{-1}$, while the velocity of the SAF bilayer skyrmion is $v_x=72$ m s$^{-1}$.
\section{Current-induced motion of skyrmions at finite temperature}
\label{se:motion-at-finite-temperature}
We proceed to investigate the effect of random thermal perturbations on the motion of SAF $N$-layer skyrmions. Figure~\ref{FIG5} demonstrates the motion of a FM monolayer (SAF bilayer) skyrmion driven by a moderate current of $j=10$ ($20$) MA cm$^{-2}$ at temperatures ranging from $T=0$ K to $T=300$ K (see Ref.~\onlinecite{SI} for Movies 5 and 6).
A FM monolayer skyrmion moves safely from the left to the right terminal of the racetrack at $T=0$ and $50$ K. However, when $T$ is increased to $100$ K and above, it becomes unstable and is destroyed by touching the upper edge. Figures~\ref{FIG6}(a)-(g) show its trajectories at $T=0$, $50$, $100$, $150$, $200$, $250$, and $300$ K, respectively. At $T=0$ K, as we have already stated, the transverse shift is constant due to the balance between the SkHE and the edge effect. At a finite temperature, the motion is affected by thermal fluctuations, and the transverse shift fluctuates. The skyrmion is destroyed once it fluctuates into the edge. Indeed, it is destroyed after having moved $\sim 200$ nm at $T=100$ K. When $T$ is raised to room temperature, that is, $T=300$ K, the FM monolayer skyrmion is destroyed very shortly after the driving current is applied.
On the other hand, a SAF bilayer skyrmion moves safely from the left to the right terminal of the racetrack even at $T=300$ K. The corresponding trajectories of the SAF bilayer skyrmion at $T=0$, $50$, $100$, $150$, $200$, $250$, and $300$ K, are given in Figs.~\ref{FIG6}(h)-(n), respectively. Since the SAF bilayer skyrmion exhibits no transverse shift caused by the SkHE, it is not destroyed by touching the edge as the racetrack width is wide enough here.
In Figs.~\ref{FIG7}(a)-(d), we show the distributions of the $y$-position of a SAF $N$-layer skyrmion for $\sim 300$ nm of motion at different $T$, which are fitted by using the Gaussian distribution. In Figs.~\ref{FIG7}(e)-(f), we show the mean value $y_{0}$ and standard deviation $\sigma_{y}$ of the distribution of the $y$-position as functions of $T$ corresponding to Figs.~\ref{FIG7}(a)-(d). The mean $y_{0}$ is around one half of the sample width for $N=2$ and $N=4$ since the SAF $N$-layer skyrmion with an even $N$ goes straight, reflecting the absence of the SkHE. On the other hand, the mean $y_{0}$ deviates from one half of the sample width for $N=1$ and $N=3$ due to the SkHE. The deviation is larger for $N=1$ than $N=3$, which indicates that the SkHE exerted on a FM monolayer skyrmion is larger than that on a SAF trilayer skyrmion.
In Fig.~\ref{FIG6} we see that a FM monolayer skyrmion can go further at $T=150$ K than at $T=100$ K. This occurs accidentally due to the choice of the thermal random seed, which is a number employed by the random number generator to generate the thermal fluctuation field in thermal simulations~\cite{OOMMFXF_ThermSpinXferEvolve}. In order to evaluate and examine the effect of the thermal random seed on the simulation results, we perform each simulation at a given $T$ $10$ times individually with different thermal random seed values. The trajectories of a FM monolayer skyrmion in a monolayer FM racetrack at $T=50$ and $100$ K with three selected random seed values are shown in Figs.~\ref{FIG8}(a)-(b), respectively, where a moderate driving current of $j=10$ MA cm$^{-2}$ is applied. It can be seen that the trajectories at a certain $T$ with different random seed values are modestly influenced by thermal perturbations. At $T=50$ K, although the motion of the FM monolayer skyrmion is fluctuating, the FM monolayer skyrmion reaches the right terminal of the racetrack in all $10$ simulations with different random seed values. At $T=100$ K, the FM monolayer skyrmion is destroyed by touching the upper edge of the racetrack in all $10$ simulations with different random seed values. On the other hand, the SAF bilayer skyrmion is safely conveyed between the two terminals without touching the edge in all these simulations even at $T=300$ K, as shown in Fig.~\ref{FIG8}(c).
\section{Conclusions}
\label{se:Conclusions}
We have studied the motion of skyrmions in multilayer SAF racetracks in contrast to the motion of skyrmions in conventional monolayer FM racetracks. The thermal effect on the motion of skyrmions in monolayer FM racetracks as well as multilayer SAF racetracks has been investigated by including random thermal perturbations in the micromagnetic simulations. We have found that a moving SAF bilayer skyrmion is much more stable than a moving FM monolayer skyrmion, even when the temperature effect is taken into account, since the two skyrmions constituting the SAF bilayer skyrmion are tightly bound by the interlayer AFM exchange coupling, and thus the SAF bilayer skyrmion is immune from the SkHE. Besides, we have shown that the detrimental effect of the SkHE on the moving FM monolayer skyrmion is enhanced as temperature increases, while the SAF bilayer skyrmion can safely move along the racetrack even at room temperature. In addition, the odd-even effect of the constituent FM layer number on the SkHE in multilayer SAF racetracks has also been demonstrated. Due to the suppression of the SkHE, the skyrmions have no transverse motion in multilayer SAF racetracks with even constituent FM layers. In conclusion, we find that the bilayer SAF racetrack is a preferred host for skyrmion transmission in racetrack-type device applications since it realizes a minimal system that does not show the SkHE.
\begin{acknowledgments}
X.Z. was supported by the RONPAKU program of the Japan Society for the Promotion of Science. M.E. acknowledges the support by the Grants-in-Aid for Scientific Research from JSPS KAKENHI (Grants No. 25400317 and No. 15H05854). Y.Z. acknowledges the support by National Natural Science Foundation of China (Project No. 1157040329), the Seed Funding Program for Basic Research and Seed Funding Program for Applied Research from the HKU, ITF Tier 3 funding (ITS/203/14), the RGC-GRF under Grant HKU 17210014, and University Grants Committee of Hong Kong (Contract No. AoE/P-04/08). X.Z. thanks X.X. Liu for getting him involved in the JSPS RONPAKU program. M.E. is very much grateful to N. Nagaosa for many helpful discussions on the subject.
\end{acknowledgments}
\section{Invitation: Matter matters in quantum gravity}
\subsection{Motivation: Why matter matters}
There are two reasons why matter\footnote{By matter we mean all non-gravitational degrees of freedom, including the scalar, fermionic and gauge fields of the Standard Model as well as potential beyond-Standard-Model fields.} matters in quantum gravity: the first is theoretical and relates to the differences between theories of quantum gravity only versus theories of all fundamental interactions and matter; the second is phenomenological and relates to observational tests of quantum gravity. We expand on both reasons below.
The theoretical search for a quantum theory of gravity is often conducted in a setting without matter. The underlying rationale says that a viable quantum theory of pure gravity can be constructed first, and matter added later, in such a way that key features of the pure-gravity theory remain intact. The rationale would break down in two cases:\footnote{In principle, there is a third scenario, where the pure gravitational theory is not UV-complete, and matter degrees of freedom induce a UV-completion.}
\\
First, it would break down if key features of matter-gravity theories would be very different from those of pure-gravity theories. An analogy for this case is non-Abelian Yang-Mills theory with and without matter. Without matter, the quantum theory is asymptotically free. With matter, asymptotic freedom can be lost in the quantum theory, changing the very nature of the ultraviolet (UV) completion. \\
Second, it would break down if the coupling to quantum gravity did not render the matter sector UV complete, because the combined theory would then not be UV complete. In many quantum-gravity approaches, it is argued that UV completeness of the matter sector is not an issue, because there is a fundamental Planck-scale cutoff due to fundamental spacetime discreteness. This is actually insufficient for a properly UV complete theory, because such a theory should not just be free of divergences (in the sense of Landau poles, not in the sense of loop divergences removable by renormalization), but also predictive. An effective field theory of the matter sector which comes with a Planckian cutoff is only predictive at energies much lower than the cutoff. At energies close to the cutoff, an infinite number of interactions, each parameterized by its own coupling, can exist. Unless the combination with quantum gravity provides a predictivity principle, the combined matter-gravity theory is not a proper UV complete theory. Such a predictivity principle either sets all but finitely many couplings to zero at Planckian scales, or provides an infinite number of relations such that a finite number of free parameters remain.
Observational tests of quantum gravity typically rely on the gravitational effect on matter. For instance, potential
quantum-gravity effects in the very early universe are typically looked for in matter observables, such as the cosmic microwave background. Further, quantum-gravity effects in particle physics or even table-top experiments rely on the interplay of quantum gravity with matter. Finally, even tests relying on putative pure-gravity observables, such as gravitational waves, are ultimately only accessible to us in experimental setups that rely on the interplay of matter and spacetime, see \cite{Addazi:2021xuf} for an overview.\\
Additionally, many more observational tests become available at low energies, if one explores matter-gravity systems. For pure gravity, the only requirement is that at low energies, it reduces to General Relativity, with higher-order corrections to it being sufficiently small. Current observations constrain curvature-squared couplings to be smaller than $10^{60}$ \cite{Berry:2011pb}, which indicates that those are currently only very weakly constrained. In terms of free parameters, one is essentially left with the Newton coupling (and the cosmological constant) as potentially predictable quantities from a quantum theory of gravity.
For gravity-matter theories, there is an additional requirement, namely that the matter sector reduces to the Standard Model (SM), plus potential dark matter and other beyond-Standard-Model (BSM) fields. This provides many more and stronger constraints than the pure-gravity setting does. In terms of free parameters, one gains the additional 19 free parameters of the SM as potentially predictable quantities from a quantum theory of gravity with matter.
\subsection{Matter matters in asymptotically safe gravity}
Here we focus on the asymptotic-safety paradigm. Its starting point is the perturbative nonrenormalizability of gravity, which means that gravity loses predictivity at the Planck scale, because the couplings of all possible interactions are free parameters. Asymptotic safety restores predictivity because an additional symmetry, not easily seen in standard perturbation theory (see, however, \cite{Niedermaier:2009zz}), holds above the Planck scale: Quantum scale symmetry means that all couplings, made dimensionless by division through an appropriate power of a scale, are constant. This is referred to as a fixed point in the Renormalization Group flow, which describes how the theory changes with respect to an energy scale. As an analogy, one may view the Renormalization Group flow as the mathematical counterpart of a microscope, with which one can change the resolution scale at which a system is considered. At a fixed point, changes of the resolution scale do not result in changes of the system, i.e., scale symmetry -- a form of self-similarity -- is achieved.\\
Just like any symmetry in a QFT, quantum scale symmetry relates the values of couplings to each other. The special aspect of quantum scale symmetry is that relations continue to hold at lower energy scales/larger distance scales than the Planck scale, where quantum scale symmetry is no longer realized. The reason for these relations is that departure from quantum scale symmetry is only achievable in a QFT if particular interactions (so-called relevant ones) are present. These relations restore predictivity and make a fundamental QFT of gravity conceivable.
Examples of relevant interactions and the resulting relations will be shown in \autoref{sec:SMUVcompletion} below.\\
In the asymptotic-safety paradigm, evidence is starting to accumulate for the following scenario:
\begin{itemize}
\item An asymptotically safe pure-gravity fixed point can be step by step deformed to an asymptotically safe fixed point in theories which contain matter degrees of freedom, most importantly the SM. We discuss this in detail in \autoref{sec:matterongrav}.
\item Gravitational fluctuations induce new interactions in the matter sector. All induced interactions respect the symmetries of the kinetic terms, which includes global symmetries. Hence, the properties of the asymptotically safe fixed point may be in part determined by those global symmetries.\footnote{In the SM, the global symmetries of the kinetic terms are typically broken by the marginal interactions. The presence of global symmetries may therefore be more relevant for BSM physics, e.g., in a dark sector.} We discuss this in detail in \autoref{sec:GlobalSymms}.
\item Under the impact of gravitational fluctuations, the SM becomes UV complete. The Landau poles that the SM on its own contains are substituted by an asymptotically safe, quantum scale invariant regime.
\item In the infrared (IR), some of the free parameters of the SM, i.e., some of its perturbatively renormalizable couplings, become calculable quantities that can be predicted from first principles. The technical reason is that they are irrelevant couplings in the asymptotically safe regime.
This provides observational tests of asymptotically safe matter-gravity systems. We discuss this in detail in \autoref{sec:SMUVcompletion}.
\item Beyond the SM, not all theories are asymptotically safe. This provides predictions for ongoing and future searches for new physics, including the nature of dark matter. We discuss this in detail in \autoref{sec:DarkBSM}.
\end{itemize}
\subsection{Interplay of quantum gravity and matter and structure of this chapter}
This chapter is structured as follows: First, we introduce key concepts of asymptotically safe quantum gravity, as well as the most important methods to explore it. Then, we start by investigating asymptotic safety in a system with few interactions and step by step add interactions, and later also fields: First, we review how matter that is non-interacting impacts a gravitational fixed point. Then, we discuss the role of symmetries and how they determine which interactions of matter are unavoidably present. Finally, we add those interactions that need to be present in order to obtain a viable phenomenology, including the SM and some physics beyond the SM.\\
It is in fact a nontrivial consequence of the methodology used, namely the functional Renormalization Group, that the two sides of the interplay of quantum gravity and matter can be ``factorized'' at the level of calculations, at least approximately:\footnote{These approximations consist in neglecting the effect of non-minimal interactions, as well as the impact of the anomalous dimensions of matter fields on the gravitational couplings, and vice versa.} Within approximations, one can consider first the impact of matter fields on quantum gravity and second the effect of quantum gravity on matter, cf.~\autoref{fig:interplay}. These two independent studies are combined in a second step, where the fully coupled system is investigated.
We keep this introduction as non-technical and pedagogical as possible, and highlight the basic mechanisms behind the results. In several sections we provide \emph{Further reading} paragraphs, where we discuss some more technical details or the relation to other results.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\linewidth, clip=true, trim=2cm 7cm 10cm 6cm]{Figures/interplay.pdf}
\caption{\label{fig:interplay}The interplay of quantum gravity and matter can as a first step be approximately ``factorized'', and the effect of matter on the gravitational fixed point be considered separately from the effect of quantum gravity on matter. Therefore, we construct this chapter by considering the impact of matter on quantum gravity first, and the effect of quantum gravity on matter second, adding more and more realistic interactions and field content as we go along.}
\end{figure}
\subsection{Methods to investigate asymptotically safe gravity-matter systems}
\label{sec:methods}
Since asymptotic safety makes an interacting theory scale-invariant, it is characterized by an interacting fixed point of the Renormalization Group flow, where all scale-dependence is lost. A fixed point is called interacting, if interactions are present, i.e., (some of) the couplings are non-zero.
Methods to explore such a scale invariant regime for quantum gravity and matter include
\begin{enumerate}
\item[i)] perturbative methods,
\item[ii)] lattice methods,
\item[iii)] functional methods.
\end{enumerate}
Due to the interacting nature of the fixed point, couplings are generically nonzero. Thus, perturbative methods, which explore the theory in the vicinity of the non-interacting limit, can only be used to a limited extent, see \cite{Niedermaier:2009zz, Niedermaier:2010zz} for examples. \\
However, an interacting theory can often become perturbative, when a parameter in the theory is taken to a particular limit, e.g., a limit of many fields, or of a special spacetime dimensionality, see, e.g., \cite{Eichhorn:2018yfc} for examples. In gravity, $d=2$ is a special dimensionality, because it makes the Newton coupling dimensionless. In the language of statistical physics, we therefore call $d=2$ the critical dimension $d_{\rm c}$ for the Newton coupling.
It is a general feature that theories that are asymptotically free in their critical dimension $d_{\mathrm{c}}$ are asymptotically safe in $d=d_{\mathrm{c}}+\epsilon$. Because gravity is also topological in $d=2$, the situation is somewhat special. However, calculations in $d=2+\epsilon$ show a beta function with a negative quadratic term, i.e., the limit $\epsilon \rightarrow 0$ gives a beta function that exhibits asymptotic freedom. At $\epsilon>0$ it therefore features an asymptotically safe fixed point. Whether or not $\epsilon$ can be extended to $\epsilon=2$ is an open question \cite{Gastmans:1977ad, Christensen:1978sc, Kawai:1992np, Kawai:1993mb, Aida:1996zn}.\\
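To make the structure of this statement explicit, the beta function of the dimensionless Newton coupling near two dimensions takes the schematic one-loop form
\begin{equation}
\beta_{G_N} = \epsilon\, G_N - b\, G_N^2\,, \qquad b>0\,,
\end{equation}
where the precise value of $b$ depends on the parameterization and on the matter content, so we keep it generic here. At $\epsilon=0$, the only fixed point is the free one, $G_{N\, \ast}=0$, i.e., the theory is asymptotically free; at $\epsilon>0$, an interacting fixed point branches off at $G_{N\, \ast}=\epsilon/b$, i.e., the theory is asymptotically safe.\\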
Lattice methods are powerful tools to discover and characterize scale invariant regimes. On the lattice, an asymptotically safe fixed point generates a higher-order phase transition in the phase diagram (spanned by the couplings of the system). At the phase transition, the system loses any memory of scales due to diverging correlation lengths, which is characteristic for critical phenomena. \\
For gravity, one faces the challenge that the lattice itself has to become dynamical, because gravity is not a theory \emph{on} spacetime, but a theory \emph{of} spacetime, where the path integral sums over different configurations of the lattice.
In addition, one may face the challenge that a lattice can (but need not) break spacetime symmetries, e.g., a regular lattice breaks local Lorentz invariance.
In dynamical triangulations, the $d$-dimensional spacetime is discretized in terms of $d$-dimensional simplices, and the path integral for quantum gravity is expressed as a sum over all possible combinatorics of such triangulations. These are weighted by the exponential of their respective action. In the Euclidean setting of Dynamical Triangulations, one can run Monte-Carlo simulations of the path integral directly; in the Lorentzian setting of Causal Dynamical Triangulations, one builds configurations so that they can be Wick-rotated, thereby transforming the Lorentzian path integral to a statistical generating functional that can be explored with Monte-Carlo techniques. Evidence for the existence of a higher-order phase transition, which is necessary to take the continuum limit, has been collected in the case of Causal Dynamical Triangulations \cite{Loll:2019rdj}, see also \cite{Catterall:2018dns, Dai:2021fqb, Bassler:2021pzt, Asaduzzaman:2022kxz} for recent studies in Euclidean Dynamical Triangulations. For further discussions, we refer the reader to the section on Dynamical Triangulations of this handbook. \\
In addition, lattice simulations may be based on Regge calculus, which varies not the triangulation, but the edge lengths of the building blocks to sum over all spacetime configurations \cite{Hamber:2009mt}.
Further, a combinatorial approach based on random graphs may also feature a second-order phase transition \cite{Kelly:2018diy, Kelly:2019rpx}, as could another combinatorial approach based on tensor models \cite{Eichhorn:2019hsa}. \\
These phase transitions need not necessarily constitute the same universality class\footnote{The terminology ``universality class'' is taken from statistical physics/condensed matter, where interacting fixed points have long played an important role, because they characterize continuous phase transitions. A universality class is determined by the dimensionality, field content and symmetries, and quantitatively described by the set of critical exponents, which describe the scaling behavior of physical quantities in the vicinity of the phase transition.} that is commonly known as ``asymptotically safe gravity'', i.e., while they may be asymptotically safe in a technical sense, their emergent physics may differ from that encoded in the continuum functional approach we discuss below. The decision whether or not the physics agrees between such different approaches, which all search for a second-order phase transition, can be based on sufficiently precise calculations of the critical exponents, which uniquely characterize the universality class of a phase transition.\\
Finally, it has been proposed in \cite{Eichhorn:2017bwe, Eichhorn:2019xav} that causal sets, reviewed in another section of this book, may also shed light on asymptotic safety: it is usually assumed that the discreteness scale in causal sets is fixed. However, if one can take it to zero at a higher-order phase transition, one obtains a continuum limit in a genuinely Lorentzian setting.\\
In summary, while current explorations of asymptotically safe gravity are mainly based on functional methods reviewed below, there is certainly scope to extend the toolbox and achieve complementary insights based on other methods or by ``repurposing'' other approaches, such as the causal-set approach.
\\
The ideal tool to probe an asymptotically safe theory can do two things: first, it can probe the UV regime to search for scale symmetry. Second, it can connect a scale-symmetric regime in the UV to emergent phenomenology in the IR.\\
Functional methods are such a tool, because they allow one to extract the scale dependence of a system within and beyond perturbation theory. Since most research on asymptotically safe gravity-matter systems relies on functional methods, in particular on the functional renormalization group (FRG), we will briefly introduce the method and some notation in the following.\\
The key object in the FRG is the scale-dependent effective action $\Gamma_k$. It is a scale-dependent counterpart of the classical action, i.e., it gives rise to the equations of motion for the expectation value of the field.
As a function of the RG scale $k$, it interpolates between the microscopic action $\Gamma_{k\to\infty}$ when no quantum fluctuations are integrated out, and the full quantum effective action $\Gamma_{k\to0}$ when all quantum fluctuations are integrated out.\footnote{The microscopic action $\Gamma_{k\to\infty}$ is sometimes also referred to as the classical action. This is technically not completely accurate, see \cite{Manrique:2009tj, Morris:2015oca, Fraaije:2022uhg} for the relation between bare (or classical) action and $\Gamma_{k\to\infty}$. Further, the term ``classical action" can be conceptually confusing in the gravitational context: Observationally, we know that the action that describes gravity at low curvature scales is $S_{\rm EH}$, the Einstein-Hilbert action of GR and we usually refer to it as the classical action, given that GR does not contain quantum effects. However, in the context of asymptotic safety it is not correct that $S_{\rm EH}$ is the action that is ``quantized" in the sense of a path integral $Z= \int \mathcal{D}g_{\mu\nu}\, e^{i\, S_{\rm EH}}$. Instead, $S_{\rm EH}$ should be recovered as the leading approximation to $\Gamma_{k\rightarrow 0}$ in the limit of low curvature.} We are interested in the scale-derivative of $\Gamma_k$, i.e., in $k\, \partial_k \Gamma_k$, because it allows us to do the two things we are interested in: first, finding whether there is a scale-invariant regime, related to $k\, \partial_k \Gamma_k=0$, and second, integrating $k\, \partial_k \Gamma_k$ from the scale-invariant regime in the limit $k \rightarrow \infty$ to $k=0$ to investigate the phenomenology of asymptotic safety.\\
The FRG indeed provides a flow equation for $\Gamma_{k}$, which reads \cite{Wetterich:1992yh, Morris:1993qb, Ellwanger:1993mw, Reuter:1996cp}
\begin{equation}
\label{eq:floweq}
k\partial_k\,\Gamma_k=\frac{1}{2}\mathrm{sTr}\left[\left(k\partial_k\,\ensuremath{R_k}\right)\left(\Gamma_k^{(2)}+\ensuremath{R_k}\right)^{-1}\right]\,.
\end{equation}
Here, the right-hand-side integrates over quantum fluctuations, with those fluctuations with momenta of the order of $k$ contributing most to the change of $\Gamma_k$ at $k$. The technical ingredients of the right-hand side are:
the second functional derivative of $\Gamma_k$ with respect to all fields of the system, $ \Gamma_k^{(2)}$, the regulator functional $\ensuremath{R_k}$ and a supertrace $\mathrm{sTr}$ which sums/integrates over all discrete/continuous indices.
The combination $\left(\Gamma_k^{(2)}+\ensuremath{R_k}\right)^{-1}$ is the regularized propagator. In the propagator, $\ensuremath{R_k}$ acts akin to a scale-dependent mass-term, because it appears just like a standard mass term would, together with the momentum, in the schematic form $p^2+\ensuremath{R_k}$ or $p^2+m^2$, respectively. The difference to a standard mass term is that it is not constant, but only present for low-energy modes, $p^2<k^2$. Therefore, these are suppressed; and a $\sim 1/p^2$ divergence, i.e., an IR divergence, is avoided. Thus, technically speaking, the regulator ensures the IR-finiteness of the flow equation. In addition, the physical masses of modes also enter the propagator and ensure that a mode decouples dynamically, once $k$ falls below the mass scale. This is relevant even for massless fluctuations, which couple through a mass-like term: gravity decouples automatically, once $k \simeq M_{\rm Planck}$, see \autoref{fig:RGflowschematic} for a schematic illustration of the functional RG flow and the different regimes for gravity-matter theories.
In the numerator, the scale derivative $k\partial_k\,\ensuremath{R_k}$ suppresses quantum fluctuations of high momenta.
The two occurrences of $\ensuremath{R_k}$ therefore realize the Wilsonian idea of integrating out quantum fluctuations according to their momentum in a step-wise fashion.
To achieve this, the regulator has to satisfy several conditions, most importantly $\ensuremath{R_k}(p^2)>0$ for $p^2< k^2$ (where $p$ denotes a four-momentum) to suppress low-energy modes, and $\ensuremath{R_k}(p^2)=0$ for $p^2>k^2$; since then also $k\partial_k\,\ensuremath{R_k}$ vanishes for $p^2>k^2$, the high-energy modes drop out of the flow equation and only modes with $p^2 \approx k^2$ contribute. Since quantum fluctuations are integrated out according to their four-momentum squared, the FRG is best employed in Euclidean settings, where a cutoff on the four-momentum squared indeed distinguishes UV and IR. For steps towards a generalization to Lorentzian spacetimes in the context of quantum gravity, see \cite{Manrique:2011jc, Bonanno:2021squ, Fehre:2021eob}.
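To illustrate these conditions concretely, the following minimal Python sketch implements one popular choice, the so-called optimized or Litim regulator, $\ensuremath{R_k}(p^2)=(k^2-p^2)\,\theta(k^2-p^2)$, for which $k\partial_k\,\ensuremath{R_k}=2k^2\,\theta(k^2-p^2)$. The sketch is purely illustrative; the specific shape function is a choice, and physical results should be (approximately, within truncations) independent of it.
\begin{verbatim}
import numpy as np

def R_k(p2, k):
    """Litim (optimized) regulator: R_k(p^2) = (k^2 - p^2) theta(k^2 - p^2)."""
    return np.where(p2 < k**2, k**2 - p2, 0.0)

def dt_R_k(p2, k):
    """Scale derivative k dR_k/dk = 2 k^2 theta(k^2 - p^2)."""
    return np.where(p2 < k**2, 2.0 * k**2, 0.0)

k = 1.0
p2 = np.linspace(0.0, 4.0, 9)

# Regularized propagator of a massless mode: 1/(p^2 + R_k).
# For p^2 < k^2 it is constant (= 1/k^2), i.e., the IR divergence is removed;
# for p^2 > k^2 both R_k and its scale derivative vanish, so high-energy
# modes propagate freely but drop out of the flow equation.
Rv, dRv = R_k(p2, k), dt_R_k(p2, k)
Gv = 1.0 / (p2 + Rv)
for row in zip(p2, Rv, dRv, Gv):
    print("p^2={:4.1f}  R_k={:4.2f}  k dR_k/dk={:4.2f}  G_reg={:5.2f}".format(*row))
\end{verbatim}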
\begin{figure}[!t]
\includegraphics[width=\linewidth]{Figures/illustration_RG_flow.pdf}
\caption{\label{fig:RGflowschematic} We illustrate the functional RG flow of gravity coupled to the SM, which has three regimes as a function of $k$: in the UV, asymptotic safety is realized and discoverable through the flow equation by setting $\beta=0$ for all couplings. At the Planck scale, gravity decouples dynamically. Below the Planck scale, the SM couplings (here shown are the three gauge couplings $g_i$, the two largest Yukawa couplings $y_t$ and $y_b$ and the Higgs quartic self-interaction $\lambda$) exhibit a perturbative scale dependence and the flow equation easily reproduces one-loop perturbation theory. (At even lower scales, in the very deep IR, perturbation theory no longer describes QCD; the FRG can also be used in that regime successfully, see, e.g., \cite{Dupuis:2020fhh} for reviews.)}
\end{figure}
Structurally, the flow equation \eqref{eq:floweq} is of one-loop form, but in terms of the full and regularized propagator, such that it is valid beyond perturbation theory. In fact, $\Gamma_k^{(2)}$ is not just the perturbative expression $p^2+m^2$ (or the appropriate version for fields which are not scalar), but contains higher-order terms, i.e., it is the inverse propagator fully dressed by quantum fluctuations. Despite this difference, calculations share similarities with perturbative loop calculations, and, most importantly, are feasible for a wide range of theories, including gravitational ones.\\
The flow equation is successfully employed in various physical scenarios which are governed by an interacting fixed point, see, e.g., \cite{Dupuis:2020fhh} for an overview.\\
Quantum fluctuations generate all interactions compatible with the symmetries of a system. Thus,
the scale-dependent effective action $\Gamma_k$ contains all these interactions, and the scale dependence of the corresponding couplings can be extracted by projecting the left- and right-hand sides of the flow equation \eqref{eq:floweq} onto the corresponding interaction. In practice, this is done by taking functional derivatives with respect to the fields. In this way, one obtains the beta function for the coupling of an interaction term.\\
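To illustrate the projection step in a setting much simpler than gravity, consider a single $\mathbb{Z}_2$-symmetric scalar field in the local-potential approximation, for which the flow equation reduces, in $d=4$ and with the Litim regulator, to $k\partial_k V_k(\phi)= k^6/\left[32\pi^2\left(k^2+V_k''(\phi)\right)\right]$. The following sketch Taylor-expands the right-hand side at $\phi=0$ and reads off the beta functions of the dimensionful mass and quartic coupling; in the limit $m^2\to 0$, it reproduces the one-loop result $\beta_{\lambda}=3\lambda^2/(16\pi^2)$ for the convention $V\supset \lambda\phi^4/24$.
\begin{verbatim}
import sympy as sp

phi, k = sp.symbols('phi k', positive=True)
m2, lam = sp.symbols('m2 lam')     # dimensionful mass^2 and quartic coupling

# Truncation: V_k(phi) = m2/2 phi^2 + lam/24 phi^4 (Z_2-symmetric scalar).
V = m2/2 * phi**2 + lam/24 * phi**4
Vpp = sp.diff(V, phi, 2)

# LPA flow of the potential in d = 4 with the Litim regulator:
flow = k**6 / (32 * sp.pi**2 * (k**2 + Vpp))

# Project onto the operators phi^2/2 and phi^4/24 by expanding at phi = 0:
series = sp.series(flow, phi, 0, 6).removeO()
beta_m2  = sp.diff(series, phi, 2).subs(phi, 0)   # flow of m2
beta_lam = sp.diff(series, phi, 4).subs(phi, 0)   # flow of lam

print("beta_{m^2}    =", sp.simplify(beta_m2))
print("beta_{lambda} =", sp.simplify(beta_lam))
\end{verbatim}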
In practice, only a subset of interactions can be accounted for.
Practical computations therefore have to restrict the set of terms -- kinetic terms and interactions -- that enter $\Gamma_k$ to a typically finite subset. This constitutes a truncation in the space of all interactions and introduces a systematic uncertainty in the results obtained within the method. Extending the truncation by adding more and more interactions into the system decreases this systematic uncertainty, see \cite{Balog:2019rrg} for an example.\\
Crucially, completely random choices of interactions of course do not lead to robust results. Instead, reliable truncations are based on physical insight into the nature of the system, for example, regarding the degree of a system's non-perturbativeness. Systematic expansion schemes can be employed, e.g., a derivative expansion (including all orders in the field, but subsequent orders in derivatives), a vertex expansion (including all orders in derivatives, but subsequent orders in fields), or an expansion based on canonical dimension (including the most relevant interactions first). Calculations in gravity-matter systems are typically based on the last scheme, under the assumption that the fixed point is near-perturbative, i.e., that canonical dimension remains a useful ordering principle. We will get back to this point at the very end of this chapter, see \autoref{sec:ner-pert}, to review whether this assumption is self-consistent and supported by the results obtained by basing truncations on it.\\
In the discussion above, we have referred to the four-momentum of modes, which is of course a notion closely tied to flat spacetime. On a curved background, it can be generalized: just as $p^2$ are the eigenvalues of the flat-spacetime d'Alembertian, $\lambda_p$ are the eigenvalues of a suitable curved-spacetime d'Alembertian $\Box.$ However, if spacetime itself is fluctuating, the definition of a suitable generalization of momenta is non-trivial. Therefore,
when applying the FRG to gravity, one has to specify an auxiliary background metric, with respect to which the momenta of quantum fluctuations can be measured \cite{Reuter:1996cp}. This is best implemented via the background-field method, see \cite{Martini:2022sll} for details, because this method also allows one to preserve a background diffeomorphism invariance, which becomes full diffeomorphism invariance when the auxiliary background metric is removed.\\
Therefore, we split the metric into background metric $\bar{g}_{\mu\nu}$ and a fluctuation field $h_{\mu\nu}$, for example with a linear\footnote{Other choices are possible, most popular among them an exponential split.} split
\begin{equation}
\label{eq:linsplit}
g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}\,,
\end{equation}
where the metric fluctuation $h_{\mu\nu}$ is not restricted to be perturbatively small. \\
We emphasize that the background is merely a technical ingredient and does not even need to be specified in calculations (although in practice, many studies choose to specify it for reasons of technical simplicity). Given the background metric, the regulator can then be introduced as a mass-like term for $h_{\mu\nu}$, while $\bar{g}_{\mu\nu}$ is specified to measure the ``momentum'' of $h_{\mu\nu}$. Due to this regularization, $\Gamma_k$ depends on $\bar{g}_{\mu\nu}$ and $h_{\mu\nu}$ independently. The zeroth order in $h_{\mu\nu}$, i.e., $\Gamma_k[\bar{g};h=0]$, features the so-called \emph{background couplings}, which are the physical ones that enter observable quantities. However, due to the regularized propagator in the flow equation, their scale dependence is driven by those couplings of $\Gamma_k[\bar{g};h]$ appearing at higher orders in an expansion in $h_{\mu\nu}$, the so-called \emph{fluctuation couplings}. In a widespread approximation scheme, the \emph{background-field approximation}, this difference is neglected when computing the scale dependence of background couplings. We will provide further details in the \emph{further reading} part of this section, and refer the reader to the chapter on the vertex expansion.\\
Due to the formulation as a QFT of the metric, it is rather straightforward to couple matter degrees of freedom to gravity in asymptotic safety. When investigating asymptotically safe quantum gravity within the FRG framework, this is especially true, since the continuum formulation allows using standard formulations of scalars, gauge fields and fermions. This is in contrast with other approaches to quantum gravity, where the mere definition of matter fields can be more involved.\\
In summary, this provides a flow equation for gravity-matter systems, which allows one, first, to search for scale symmetry, and second, to start from a scale-symmetric regime in the UV and integrate out all quantum fluctuations to obtain the resulting effective dynamics $\Gamma_{k\rightarrow 0}$, which can be compared to observations.
\\
\FRT{Further reading:}\\
\FR{Background field approximation and fluctuation computations}\\
Due to the regulator, (and because one has to gauge-fix $h_{\mu\nu}$,) $\Gamma_k$ depends on $\bar{g}_{\mu\nu}$ and $h_{\mu\nu}$ independently, i.e., $\Gamma_k=\Gamma_k[\bar{g};h]$.
$\Gamma_k[\bar{g};h]$ can be expanded as a series in metric fluctuations $h_{\mu\nu}$, with different couplings at each order in $h_{\mu\nu}$. The zeroth order in this expansion, i.e., $\Gamma_k[\bar{g};h=0]$, contains the so-called background couplings, whose scale dependence is driven by the couplings appearing in higher-order terms of the expansion, the so-called fluctuation couplings. It is a widespread approximation to neglect the difference between background and fluctuation couplings when computing the scale dependence of $\Gamma_k[\bar{g};h=0]$. We call this the background-field approximation. Computations which go beyond the zeroth order in the expansion in metric fluctuations will be referred to as fluctuation computations, see also \cite{Pawlowski:2020qer}.\\
Non-trivial symmetry identities, the so-called Nielsen, or split-Ward identities, restore background independence, see, e.g., \cite{Manrique:2009uh, Pawlowski:2020qer}. These identities encode the difference between correlation functions of the background field and correlation functions of the fluctuation field.\\
In the presence of a fluctuation field $h_{\mu\nu}$, the scale-dependent effective action can be expanded in terms of a vertex expansion as
\begin{equation}
\label{eq:vertexp}
\Gamma_k[\bar{g},h]=\Gamma_k[\bar{g},0]+\sum_{n=1}^{\infty}\frac{1}{n!}\left(\frac{\delta^n \Gamma_k[\bar{g},h]}{\delta h_{\gamma_1\delta_1}\dots \delta h_{\gamma_n\delta_n}}\bigg|_{h=0}\right)h_{\gamma_1\delta_1}\dots\, h_{\gamma_n\delta_n}\,.
\end{equation}
The first term in this expansion depends only on the background metric, and we refer to the couplings in this term as background couplings. These couplings are the physical couplings that eventually enter observable quantities. The second term in Eq.~(\ref{eq:vertexp}) is the sum over $n$-point vertices, which generally have different scale dependences than the background couplings. We refer to the couplings appearing in these $n$-point vertices as fluctuation couplings. We can see from Eq.~(\ref{eq:vertexp}) and the flow equation Eq.~(\ref{eq:floweq}) that the scale dependence of the $n$-point vertex depends on the $n+1$ and the $n+2$ point vertex. In practical computations, the tower of scale-dependent couplings is therefore truncated by identifying the couplings appearing in the $n+1$ and $n+2$-point vertices with those of the $n$-point vertex.\\
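For $n=2$, this structure can be made explicit: taking two functional derivatives of Eq.~(\ref{eq:floweq}) and abbreviating the regularized propagator as $G_k=\left(\Gamma_k^{(2)}+\ensuremath{R_k}\right)^{-1}$ yields, schematically,
\begin{equation}
k\partial_k\,\Gamma_k^{(2)}=\mathrm{sTr}\left[\left(k\partial_k\,\ensuremath{R_k}\right)G_k\,\Gamma_k^{(3)}\,G_k\,\Gamma_k^{(3)}\,G_k\right]-\frac{1}{2}\mathrm{sTr}\left[\left(k\partial_k\,\ensuremath{R_k}\right)G_k\,\Gamma_k^{(4)}\,G_k\right]\,,
\end{equation}
so that closing the flow of the two-point function requires input for the three- and four-point vertices, which in practical computations is provided by the truncation.\\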
As an alternative to the expansion in background metric and fluctuation field, one can also work in a bimetric setting, with background metric and full metric. For gravity-matter systems, this has been implemented in \cite{Manrique:2010mq}, but a majority of works in this context uses the fluctuation field instead. A main reason for this choice is that the scale dependence of the fluctuation couplings can be extracted by choosing a flat background metric, which is technically advantageous.\\
While the physical couplings are only contained in $\Gamma_k[\bar{g},0]$, their scale dependence is driven by the fluctuation couplings. The \emph{background field approximation} only computes the scale dependence of these physical couplings and identifies all fluctuation couplings with the corresponding background coupling. In particular, the momentum-independent part of the two-point function is identified with the cosmological constant $\bar{\Lambda}$. This approximation scheme allows computing the scale dependence of high powers of curvature operators, see, e.g., \cite{Falls:2013bv, Falls:2017lst, Kluth:2020bdv}, and even of form factors, see, e.g., \cite{Knorr:2019atm, Knorr:2022dsx}. It also allows one to extract the scale dependence on non-trivial and even arbitrary backgrounds, see, e.g., \cite{Benedetti:2010nr, Falls:2020qhj, Sen:2021ffc}.
The background field approximation has the virtue of relying on background diffeomorphism invariance by extracting the scale dependence at vanishing fluctuation field $h$. However, it neglects the fact that the full effective action depends on the background metric $\bar{g}$ and the metric fluctuations $h$ individually, and therefore assumes a trivial realization of the so-called Nielsen identities.\\
Fluctuation computations focus on the computation of the scale dependences included in the second term of Eq.~(\ref{eq:vertexp}). The tower of $n$-point vertices is typically truncated at some order $m$, and the couplings appearing in the $m+1$ and $m+2$-point vertices are identified with those of the $m$-point vertex.
\subsection{Fundamentals}
There are a few notions that will be key to this whole chapter. We discuss these fundamentals here.\\
\emph{The direction of the RG flow}\\
Which direction of the RG flow should we think of, when talking about asymptotic safety? It is tempting to think about the RG flow from IR to UV, because in this way we extrapolate from known and measured physics into the unknown. Indeed, asymptotic safety is sometimes discussed in this way.
However, this direction is not physically meaningful, because in nature, microphysics determines macrophysics and not the other way around. Thus, a meaningful RG flow always starts at high energies and goes towards low energies, where the high-energy physics has low-energy consequences.
\\
\emph{Quantum scale symmetry -- what is constant?}\\
Asymptotic safety is an enhancement of the symmetry of the QFT. The added symmetry is quantum scale symmetry, which means that couplings are non-vanishing and constant when the energy scale is changed. In a classical field theory, requiring constant couplings would be a trivial requirement; in a quantum theory, it is not. In a quantum theory, quantum fluctuations screen or anti-screen couplings and thereby generate a scale dependence. Asymptotic safety means that this scale dependence vanishes \emph{in the dimensionless counterparts of couplings}. This is a key difference to what is often referred to as scale invariance in the literature. Because scale invariance is, loosely translated, the absence of distinct physical scales, scale invariance is often taken to mean that there cannot be dimensionful couplings in the theory. This statement is true in quantum scale symmetry, but in a more subtle way: when a dimensionful coupling vanishes in the UV, $\bar{g}=0$, then its dimensionless counterpart, $g = \bar{g}\, k^{-d_{\bar{g}}}$, need not vanish: the RG scale $k$ is taken to make $g$ dimensionless; and when, e.g., the dimension of the coupling, $d_{\bar{g}}$, is negative, then $\bar{g}$ vanishes in the limit $k \rightarrow \infty$, even if $g$ is nonzero in that limit. This is the sense in which quantum scale symmetry is a scale symmetry: the dimensionful quantities scale to zero (or infinity) if one takes the formal limit $k \rightarrow \infty$, so that no scales are present, cf.~\autoref{fig:Gdimfulldimless}. Nevertheless, the dimensionless counterparts of these dimensionful quantities remain non-vanishing and constant. In this sense, asymptotic safety is a generalization of asymptotic freedom, which is familiar from Quantum Chromodynamics. In asymptotic freedom the coupling vanishes in the $k \rightarrow \infty$ limit. In those systems classical scale invariance is restored, since the classical theory only contains dimensionless quantities, and quantum fluctuations vanish for $k \rightarrow \infty$.\\
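The following minimal numerical sketch mimics this behavior (cf.~\autoref{fig:Gdimfulldimless} below); the fixed-point value and the sharp transition at the Planck scale are illustrative simplifications of the smooth crossover obtained from the actual flow.
\begin{verbatim}
import numpy as np

M_Pl = 1.0e19       # Planck mass in GeV (illustrative)
G_star = 1.0        # dimensionless fixed-point value (illustrative, order one)

def newton_couplings(k):
    """Return (dimensionless G_N, dimensionful G in GeV^-2) at RG scale k."""
    if k >= M_Pl:
        G_N = G_star             # quantum scale symmetry: G_N is constant
        G = G_N / k**2           # the dimensionful coupling scales to zero
    else:
        G = G_star / M_Pl**2     # classical regime: G is frozen
        G_N = G * k**2           # the dimensionless counterpart runs off
    return G_N, G

for k in [1e16, 1e18, 1e19, 1e20, 1e22]:
    G_N, G = newton_couplings(k)
    print(f"k = {k:.0e} GeV:  G_N = {G_N:.1e},  G = {G:.1e} GeV^-2")
\end{verbatim}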
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.9\linewidth]{Figures/Gdimfulldimless.pdf}
\end{center}
\caption{\label{fig:Gdimfulldimless} We show the dimensionful Planck mass squared $M_{\rm Planck}^2$ and its dimensionless counterpart $m_{\rm Planck}^2 = M_{\rm Planck}^2/k^2$, as well as the dimensionful Newton coupling $G$ and its dimensionless counterpart $G_N= G\cdot k^2$. Beyond the Planck scale, at $10^{19}\, \rm GeV$, the dimensionless quantities are constant and the dimensionful ones show scaling behavior. In the limit $k \rightarrow \infty$, the theory therefore emerges from a regime without scales, because all masses scale to infinity.\\
Thinking about the theory from the IR to the UV, as is sometimes done, one expects a strong-coupling regime to set in at the Planck scale. Instead, a scaling regime sets in, which is characterized by the dimensionful coupling $G$ decreasing, while $G_N$ stays constant. Thus, in asymptotic safety, gravity is more weakly coupled than expected, which can be viewed as a reason why the continuum spacetime picture for the gravitational interaction continues to hold and does not need to be substituted by a description of radically different character. Similarly, the Planck mass, at which one expects gravity to become strongly coupled, ``runs away'', once $k=10^{19}\, \rm GeV$ is reached, i.e., the theory may dynamically protect itself from strong-coupling phenomena.}
\end{figure}
\emph{Predictions from asymptotic safety}\\
Because asymptotic safety is an enhancement of the symmetry of the QFT, one may expect that it relates different interactions to each other, because this is what happens in a QFT when an additional symmetry is imposed. However, the special property of quantum scale symmetry is that such relations between couplings can remain intact, even if the theory departs from quantum scale symmetry under the RG flow towards the infrared (IR). In fact, the simple observation that there are distinct scales in nature, e.g., those of the masses of elementary particles, means that the RG flow must leave the asymptotically safe fixed-point regime at some scale, i.e., quantum scale symmetry can only hold in the UV, not the IR, see, though, \cite{Wetterich:2020cxq} for an alternative. Nevertheless, the IR physics can still carry imprints of quantum scale symmetry in relations between couplings. We now exemplify this property. In short, it stems from the fact that the RG flow has ``sources'' and ``sinks'' -- in technical language, relevant and irrelevant directions. It can depart from the asymptotically safe fixed-point regime along a ``source-direction'', but not a ``sink-direction''. In ``sink-directions'', quantum fluctuations generate scale symmetry on the way to the IR, i.e., even if one chooses the value of a coupling slightly away from the fixed point, quantum fluctuations drive the coupling back to the fixed point. Therefore, along a ``sink-direction'', there is only one value that the coupling can take in the IR, namely its fixed-point value. This is true even if the RG flow has departed from the fixed-point value along a ``source-direction'', see left panel in \autoref{fig:predictions_schematic}.\\
In practice, the simple picture we just sketched out is slightly modified in terms of the quantitative aspects of the predictions: first, when a relevant coupling departs from its fixed-point value, it can pull an irrelevant coupling along with it. The special value that the irrelevant coupling is fixed to is then no longer the fixed-point value, but instead a value that depends on the relevant coupling. In technical language, this means that the critical hypersurface of the fixed point (spanned by all its ``source directions'') is curved. Second, at an asymptotically safe fixed point, it is often a superposition of couplings that corresponds to a ``source-'' or a ``sink-direction''.\\
Taken together, both aspects mean that one often obtains \emph{relations between couplings} that are predicted at low energies. In practice, these can often only be calculated numerically.
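The following toy sketch, with purely illustrative numbers, makes the mechanism explicit for a linearized flow of two couplings around a fixed point at $(g_{1\,\ast},g_{2\,\ast})=(1,1)$, with one relevant direction ($\theta_1=2$) and one irrelevant direction ($\theta_2=-1$): trajectories that leave the fixed-point regime with the same relevant deviation but different irrelevant deviations converge towards the IR, where $g_2$ is pinned to a function of $g_1$ (for this toy flow, $g_2=1-(g_1-1)/6$).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def beta(t, g):
    """Toy linearized beta functions, d g_i / d ln k, fixed point at (1, 1)."""
    g1, g2 = g
    return [-2.0 * (g1 - 1.0),                      # relevant ("source")
            +1.0 * (g2 - 1.0) + 0.5 * (g1 - 1.0)]   # irrelevant ("sink")

t_uv, t_ir = 0.0, -3.0   # t = ln(k/k_UV); the flow to the IR has decreasing t

# Same relevant deviation in the UV, two different irrelevant deviations:
for g2_uv in (1.0 + 1e-4, 1.0 - 1e-4):
    sol = solve_ivp(beta, [t_uv, t_ir], [1.0 + 1e-4, g2_uv],
                    rtol=1e-10, atol=1e-12)
    print(f"UV: g2 = {g2_uv:.6f} -> IR: g1 = {sol.y[0, -1]:.4f}, "
          f"g2 = {sol.y[1, -1]:.6f}")
\end{verbatim}
The two IR values of $g_2$ agree far better than the UV values did; flowing further towards the IR, the irrelevant deviation dies out completely and $g_2$ becomes a pure prediction, determined by the value of $g_1$.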
\begin{figure}[!t]
\includegraphics[width=0.45\linewidth]{Figures/predictivity_illustration1.pdf}\quad\includegraphics[width=0.45\linewidth]{Figures/predictivity_illustration2.pdf}
\caption{\label{fig:predictions_schematic} The schematic RG flow features three fixed points in both panels. The one at $(g_1=1, g_2=0)$ acts as a source in both directions and thus does not generate predictions. The one at $(g_1=0, g_2=1)$ acts as a sink in both directions. It therefore generates two predictions, but also requires scale symmetry at all scales, because the RG flow cannot depart from it. Finally, the point at $(g_1=1, g_2=1)$ (left panel) and $(g_1=1.2, g_2=0.4)$ (right panel) has one ``source-direction'' and one ``sink-direction''. In both cases, it generates a prediction of the value of $g_2$. In the simple case (left panel), the prediction is the fixed-point value. In the more realistic case (right panel), the prediction is a function of $g_1$.}
\end{figure}
In more technical language, predictions arise from irrelevant directions of the RG flow, which are encoded in negative critical exponents. The critical exponents measure the changes of the flow, i.e., they are related to first derivatives of the beta function. Thereby they encode whether a direction corresponds to a source or sink. The critical exponents are calculated as
\begin{equation}
\theta_I =- {\rm eig} \left( \frac{\partial \beta_{g_i}}{\partial g_j}\right)\Big|_{g_i = g_{i\, \ast}},
\end{equation}
where $g_{i\, \ast}$ is the fixed-point value of a coupling. Because of the extra negative sign, $\theta_I>0$ corresponds to a source (a relevant direction) and $\theta_I<0$ to a sink (an irrelevant direction).\\
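For any given set of beta functions, this prescription is straightforward to implement numerically, as in the following minimal sketch, applied to the toy flow discussed above (illustrative numbers only).
\begin{verbatim}
import numpy as np

def beta(g):
    """Toy beta functions with a fixed point at g_* = (1, 1)."""
    g1, g2 = g
    return np.array([-2.0 * (g1 - 1.0),
                     +1.0 * (g2 - 1.0) + 0.5 * (g1 - 1.0)])

def critical_exponents(beta, g_star, eps=1e-6):
    """theta_I = - eigenvalues of the stability matrix d beta_i / d g_j."""
    n = len(g_star)
    M = np.zeros((n, n))
    for j in range(n):
        dg = np.zeros(n)
        dg[j] = eps
        M[:, j] = (beta(g_star + dg) - beta(g_star - dg)) / (2.0 * eps)
    return -np.linalg.eigvals(M)

theta = critical_exponents(beta, np.array([1.0, 1.0]))
print(np.sort(theta.real))   # -> [-1.  2.]: one irrelevant, one relevant direction
\end{verbatim}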
\emph{Partial vs. full fixed points}\\
A fixed point is a point where the scale dependence of all couplings $g_i$ vanishes, i.e., $\beta_{g_i}=0$ for all couplings $g_i$ of the system. However, deciding whether a fixed point exists is a major challenge, because it requires knowledge of all beta functions. In practice, we will nevertheless refer to, e.g., the gravitational fixed point, because a large number of couplings have been included in studies and there are no indications that the remaining couplings would spoil the fixed point in gravity.\\
In the following, and in particular in \autoref{sec:GlobalSymms} and \autoref{sec:SMUVcompletion}, we will at times investigate the fixed-point structure of a subsystem, while treating other couplings as external parameters. This allows us to factorize the search for an asymptotically safe fixed point for gravity and matter into different subsystems. Strictly speaking, we then search for \emph{partial} fixed points within these different subsystems. The fixed points are partial, because some couplings are treated as external parameters, instead of also being evaluated at their respective fixed point. \\
The existence of a partial fixed point is a necessary, but not sufficient condition to find a fixed point of the full system. We will refer to partial fixed points as fixed points for coupling $g_i$, while full fixed points will be referred to as fixed points of the full system.\\
\emph{The robustness of results}\\
How sure are we about results in asymptotically safe gravity-matter systems? The answer differs depending on which result we have in mind. There is no result about the existence of an asymptotically safe fixed point in gravity-matter systems which has been proven in a strict mathematical sense. However, for some results, so much evidence has been accumulated that they can be assumed to hold beyond reasonable doubt. For others, the uncertainty is larger, because the effect of approximations and assumptions is less well understood. We try to highlight when this is the case, without making our text too cluttered by constantly repeating phrases like ``within an approximation'', ``under certain assumptions'', etc.\\
There is a way of testing the robustness of results that we will refer to: if an approximation is advanced enough, then unphysical choices (for instance, choices of gauge parameters) do not change the value of physical quantities at all or not much. Conversely, if an approximation is not advanced enough, the choice of unphysical parameters can start to matter.
\section{Global Symmetries persist and have phenomenological consequences}
\label{sec:GlobalSymms}
\emph{Synopsis: Global symmetries, such as shift symmetry for scalar fields or chiral symmetries for fermions, are left intact by gravitational fluctuations. In turn, they determine through which terms matter fields interact at an asymptotically safe fixed point. These interactions may lead to a bound on the strength of gravity, the weak-gravity bound. Further, these interactions find a way to circumvent a mechanism for chiral symmetry breaking which would leave fermions with Planck-scale masses and would rule out asymptotic safety.}
\subsection{The status of global symmetries in asymptotic safety}
\emph{Synopsis: There is a general argument suggesting that there cannot be global symmetries in quantum theories of gravity. We review the argument and point out its assumptions, which may not hold in asymptotically safe gravity. We then review explicit calculations that show that quantum fluctuations of gravity generate new interactions for matter fields. These interactions respect the maximum set of global symmetries of the corresponding matter fields. In contrast, interactions which violate the maximum set of global symmetries of the matter fields are not generated, i.e., can consistently be set to zero.}
In quantum field theory, symmetries play a central role by dictating the interactions of the fields in the theory.
Global symmetries play different roles -- and are understood to varying degrees -- at different scales. At condensed-matter scales, many different global symmetries occur, under which order-parameter fields transform. The corresponding theories describe phases and phase transitions through the spontaneous breaking of these global symmetries.
Moving towards smaller length scales, namely those of particle physics, there is only a single global symmetry in the SM, namely a global $U(1)_{\rm B-L}$, where $\rm B$ and $\rm L$ stand for baryon- and lepton-number. Additional global symmetries are present, if interactions are switched off. Beyond the SM, global symmetries play an important role, e.g., in dark-matter models, where they may ensure the stability of dark matter.
Finally, moving towards even smaller scales, in the quantum-gravity regime, what is the fate of global symmetries?\\
There is a general argument that states that any global symmetry should be broken in quantum gravity \cite{Banks:1988yz,Banks:2010zn}; however, like any such argument, it relies on assumptions which may or may not hold in a given setting.\footnote{In the context of string theory, this is known as the no-global-symmetries swampland conjecture, see, e.g., \cite{Daus:2020vtf} and references therein.} We will now present the argument and explain where assumptions are being made.\\
The argument relies on virtual black-hole configurations in the gravitational path-integral, together with the assumption that global charges are not preserved by black holes. It says that among the various spacetime configurations that the gravitational path integral sums over, there are configurations that correspond to black holes. In turn, if one extrapolates Hawking radiation all the way to zero mass (i.e., through the semi-classical and also the quantum regime), then black holes destroy information on global charges: for instance, one can imagine building a black hole from only protons (so it has a well-defined baryon number and lepton number), but it evaporates not just into baryons, but into various elementary particles, completely destroying any memory of the initial global charge. Therefore, the argument concludes, there is a contribution in the gravitational path integral that destroys global charges and thus global symmetries cannot be preserved in quantum gravity.\\
However, first, the actual contribution of virtual black-hole configurations to the gravitational path-integral is not known (and indeed depends on the microscopic dynamics -- so may be different in an asymptotically safe setting than, e.g., when using the Einstein action). One can imagine settings where the microscopic dynamics $S$ is such that the phase factor $e^{iS}$, when evaluated on black-hole configurations, leads to destructive interference, see, e.g., \cite{Borissova:2020knn}. Second, whether or not global charges are conserved also depends on whether it is true that there are no black-hole remnants, for which, indeed, there are counter-indications in asymptotic safety \cite{Bonanno:2006eu, Falls:2012nd}. If there are black-hole remnants, i.e., the Hawking evaporation process stops at a finite mass of the black hole, then the original information on global charges may be stored by the remnant.\\
In turn, there is a general argument against the existence of remnants \cite{Susskind:1995da}, which itself relies on assumptions about the behavior of those remnants in scattering processes.\\
In summary, one cannot in general conclude that global symmetries are broken in quantum gravity without knowing more about the specific properties of the theory.
\\
To settle the question whether or not global symmetries are conserved in asymptotically safe gravity, or whether indeed only local symmetries may exist, it is necessary to calculate the gravitational effect on matter systems with global symmetries. Indeed, many such calculations have been performed, which we review below. As an upshot, none of the calculations indicates that global symmetries are broken by asymptotically safe quantum-gravity effects. One reason may be that gravity-matter systems may be near-perturbative in asymptotic safety (see \autoref{sec:ner-pert}), and thus nonperturbative contributions in the path integral, which may break global symmetries, are negligible. An important caveat to this is that calculations are done in a Euclidean regime. There, analytic continuations of black-hole spacetimes exist, but are physically quite distinct from black holes in a Lorentzian regime, because Lorentzian signature is necessary for the existence of causal relations and thus horizons. Thus, while no indications for global symmetry breaking exist in asymptotic safety to date, the question is not fully settled yet and a different result may be found in Lorentzian signature.\footnote{ First studies of asymptotic safety in Lorentzian gravity exist, which yield a fixed point similar to the Euclidean one \cite{Manrique:2011jc,Fehre:2021eob}.}\\
There are two ways in which asymptotically safe gravity could reduce the global symmetries of a matter system: the first is by explicitly generating new interaction terms for matter with a lower degree of symmetry. The second is by preventing an asymptotically safe fixed point in the theory space with the maximum global symmetry. For clarity, we illustrate the two possibilities in \autoref{fig:globalsym}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.45\linewidth]{Figures/GS_fig1.pdf}
\newline
\includegraphics[width=0.45\linewidth]{Figures/GS_fig2.pdf}
\quad
\includegraphics[width=0.45\linewidth]{Figures/GS_fig3.pdf}
\end{center}
\caption{\label{fig:globalsym} We illustrate the three distinct possibilities for the status of global symmetries in asymptotically safe gravity. Here, $s$ is a coupling that respects the global symmetry, whereas $b$ breaks it. In the upper figure, the global symmetry is left intact, because quantum gravity does not generate the symmetry-breaking interaction: if $b=0$ is chosen, the RG flow remains at $b=0$ (red line); and because there are fixed points in the theory space with the global symmetry: they lie at $s\neq 0$ and $b=0$ (blue dots). In both lower panels, the global symmetry is broken: in the left panel, gravity fluctuations induce $b$, even if it is set to zero (red lines). In the right panel, gravity fluctuations do not generate the symmetry-breaking interactions. However, there is no fixed point in the theory space with the global symmetry; the only fixed points lie at nonzero $s$ and $b$.}
\end{figure}
The simplest setting in which the conservation of global symmetries can be investigated in asymptotic safety through explicit calculations looks as follows:
A non-interacting field is considered, i.e., only a kinetic term is specified. This minimal coupling to gravity suffices for quantum gravity fluctuations to generate interaction terms.
For a given field, the kinetic term has a maximum set of global symmetries -- e.g., for $\ensuremath{N_{\mathrm{S}}}$ real scalar fields, there is an O($\ensuremath{N_{\mathrm{S}}}$) symmetry, in addition to $\ensuremath{N_{\mathrm{S}}}$ separate shift symmetries (for each of the scalar fields). Under the impact of quantum gravity fluctuations, the field does not remain non-interacting, i.e., quantum gravity fluctuations generate interactions.
If the first possibility for symmetry-breaking is realized, then gravitational fluctuations generate interactions which break the global symmetries explicitly.
If the second possibility is realized, then no such symmetry-breaking interactions are generated, but there is also no fixed point among the symmetry-preserving interactions.
In contrast, if quantum gravity does not break global symmetries, then the interactions which are generated share the symmetries of the kinetic term and these interactions admit an asymptotically safe fixed point.
\\
In terms of the beta function for a coupling $b$ that breaks a global symmetry, the question can be investigated as follows:
The most general form of such a beta function is
\begin{equation}
\label{eq:symmbreak}
\beta_b = \beta_0\, G_N^{\alpha_1} + \left(\beta_1+ \beta_{1\, G_N} G_N^{\alpha_2} \right) b+ \mathcal{O}(b^2).
\end{equation}
If quantum gravity breaks the global symmetry through the generation of symmetry-breaking interactions, then $\beta_0\neq 0$ must hold, so that any (partial or full) fixed point must have the symmetry-breaking interaction present.\\
Conversely, if $\beta_0=0$, gravity does not generate the symmetry-breaking interaction. Then, $b=0$ is a zero of the beta function and the symmetry-breaking coupling can consistently be set to be zero, i.e., it is a partial fixed point for any value of $\ensuremath{G_{\mathrm{N}}}$. Of course, this is not sufficient to guarantee that asymptotically safe gravity respects the corresponding global symmetry, because there may not be a full fixed point in the theory space with the global symmetry.\\
To show that asymptotically safe gravity respects global symmetries, one therefore has to do two things: first, one has to show that symmetry-breaking interactions are not generated ($\beta_0=0$ in all corresponding beta functions). This means that the symmetry-breaking interactions feature a partial fixed point at vanishing coupling values.
Second, one has to show that the theory space with the maximum symmetry contains an asymptotically safe fixed point. This means that the partial fixed point extends to a full fixed point.
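To make this logic concrete at the level of the schematic beta function Eq.~\eqref{eq:symmbreak}, the following sketch, with $\alpha_1=\alpha_2=1$ and generic coefficients (all illustrative), verifies symbolically that $b=0$ is a partial fixed point for any value of $\ensuremath{G_{\mathrm{N}}}$ precisely if $\beta_0=0$.
\begin{verbatim}
import sympy as sp

b, GN = sp.symbols('b G_N', real=True)
beta0, beta1, beta1G = sp.symbols('beta_0 beta_1 beta_1G', real=True)

# Schematic beta function of a symmetry-breaking coupling b, truncated at
# linear order in b, with alpha_1 = alpha_2 = 1 for illustration:
beta_b = beta0 * GN + (beta1 + beta1G * GN) * b

# Case 1: gravity does not generate the symmetry-breaking interaction.
print(sp.solve(beta_b.subs(beta0, 0), b))   # [0]: b = 0 solves beta_b = 0
                                            # for any value of G_N

# Case 2: gravity does generate it: the zero of beta_b is shifted away
# from the symmetric point b = 0.
print(sp.solve(beta_b, b))                  # [-G_N*beta_0/(G_N*beta_1G + beta_1)]
\end{verbatim}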
\subsubsection{Step 1: No symmetry-breaking interactions are generated by gravitational fluctuations}\label{sec:nosymbreak}
\emph{Synopsis: In all examples with global continuous symmetries for matter fields which have been studied, these symmetries are preserved by quantum gravitational fluctuations, under the assumptions spelled out in the corresponding papers, which include Euclidean signature.}
Continuous global symmetries that have explicitly been investigated include:\\
\begin{itemize}
\item For scalar fields:
\begin{itemize}
\item O($\ensuremath{N_{\mathrm{S}}}$) symmetry, under which an $\ensuremath{N_{\mathrm{S}}}$-component scalar field transforms in the fundamental representation; $\phi^a \rightarrow O_{\ensuremath{N_{\mathrm{S}}}}^{ab}\phi^b$, where $O_{\ensuremath{N_{\mathrm{S}}}}^{ab}$ is an O($\ensuremath{N_{\mathrm{S}}}$) matrix in the fundamental representation. This symmetry would be broken, for example, if gravity generated distinct anomalous dimensions for some of the $\ensuremath{N_{\mathrm{S}}}$ scalars, or if gravity generated interaction terms with odd numbers of scalar fields, or if it generated different masses or interaction terms for some of the $\ensuremath{N_{\mathrm{S}}}$ scalars. None of these possibilities is realized in \cite{Labus:2015ska,deBrito:2021pyi}.
\item Shift symmetry, under which $\phi \rightarrow \phi + \rm const$. This symmetry would be broken by a scalar potential. It was found in \cite{Narain:2009fy,Percacci:2015wwa,Eichhorn:2017als} that a scalar potential is not generated at the asymptotically safe fixed point. In contrast, it was found in \cite{Eichhorn:2012va, Eichhorn:2013ug, Eichhorn:2017sok, deBrito:2021pyi, Laporte:2021kyp} that shift-symmetric interactions (which are proportional to derivatives of the scalar field) are induced at an asymptotically safe fixed point.
\item A complex scalar, which has a global U(1) symmetry, was studied in \cite{Ali:2020znq}, where quantum gravity does not generate terms that would break the global U(1) symmetry to a discrete $\mathbb{Z}_n$ symmetry.
\end{itemize}
Thus we find that quantum gravity generates interaction terms for scalar matter. These respect the maximum set of symmetries of the kinetic term, irrespective of the number of scalar fields. In addition, these interactions feature a fixed point, if the weak-gravity bound is respected, see \autoref{sec:WGB} below.\\
\item For fermion fields:
\begin{itemize}
\item $SU(\ensuremath{N_{\mathrm{F}}})_L \otimes SU(\ensuremath{N_{\mathrm{F}}})_R$ symmetry, under which the left- and the right-handed components of $\ensuremath{N_{\mathrm{F}}}$ Dirac fermions transform separately. This is called a chiral symmetry, because it refers to the chiral components (the left- and right-handed Weyl spinors) of a Dirac fermion.
This is a symmetry of the kinetic term, because a kinetic term for $\ensuremath{N_{\mathrm{F}}}$ Dirac fermions decomposes into a kinetic term for $\ensuremath{N_{\mathrm{F}}}$ right-handed and $\ensuremath{N_{\mathrm{F}}}$ left-handed Weyl spinors, $\bar{\psi}^i \slashed{\nabla}\psi^i =\bar{\psi}_L^i \slashed{\nabla}\psi_L^i+ \bar{\psi}_R^i \slashed{\nabla}\psi_R^i $ (where $i=1,...,\ensuremath{N_{\mathrm{F}}}$). This symmetry is broken by a mass term, $m \bar{\psi}^i\psi^i = m \bar{\psi}^i_R\psi^i_L + m \bar{\psi}^i_L\psi^i_R$, a non-minimal term of the form $R \bar{\psi}^i\psi^i = R \bar{\psi}^i_R\psi^i_L + R \bar{\psi}^i_L\psi^i_R$, a four-fermion interaction of the form $\bar{\psi}^i \psi^i\, \bar{\psi}^j\psi^j$ and others. None of these interactions is generated\footnote{A breaking of chiral symmetry can be introduced explicitly by choosing a regulator that breaks chiral symmetry \cite{Daas:2020dyo}; then such chiral-symmetry breaking interactions are generated.} in the studies in \cite{Eichhorn:2011pc, Eichhorn:2016vvy, Eichhorn:2017eht, deBrito:2020dta}. In contrast, it was found in \cite{Eichhorn:2011pc, Eichhorn:2016vvy, Eichhorn:2017eht, Eichhorn:2018nda} that chirally symmetric four-fermion and non-minimal interactions are generated and feature an asymptotically safe fixed point.
\end{itemize}
Thus we find that quantum gravity generates interaction terms for fermionic matter. These respect the maximum set of symmetries of the kinetic term, irrespective of the number of fermion fields. In addition, these interactions feature a fixed point under the inclusion of quantum gravity, without additional conditions, see \autoref{sec:lightfermions} below. The phenomenological consequences of this result entail that fermions can stay light in the presence of quantum gravity, as we will discuss below.\\
\item For gauge fields:
\begin{itemize}
\item O($\ensuremath{N_{\mathrm{V}}}$) symmetry, under which an $\ensuremath{N_{\mathrm{V}}}$-component gauge field transforms in the fundamental representation; $A_{\mu}^a \rightarrow O_{\ensuremath{N_{\mathrm{V}}}}^{ab}A_{\mu}^b$, where $O_{\ensuremath{N_{\mathrm{V}}}}^{ab}$ is an orthogonal matrix acting in the fundamental representation. Similar to the case of scalar fields, this symmetry would be broken if gravitational fluctuations induced distinct anomalous dimensions for some of the gauge fields, or interactions that only involve some of the $\ensuremath{N_{\mathrm{V}}}$ gauge fields. These possibilities are not realized \cite{Eichhorn:2021qet}, indicating that gravitational interactions do not break the global $O(\ensuremath{N_{\mathrm{V}}})$ symmetry.
\item Shift symmetry in a gauge field is nothing but the global part of the Abelian gauge symmetry, which is of course also preserved by gravitational fluctuations.
\end{itemize}
Thus we find that quantum gravity generates interaction terms for vector fields. These respect the maximum set of symmetries of the kinetic term, irrespective of the number of vector fields. In addition, these interactions feature a fixed point, if the weak-gravity bound is respected, see \autoref{sec:WGB} below.
\end{itemize}
\subsubsection{Step 2: The symmetry-preserving interactions which are generated by gravity feature an asymptotically safe fixed point}
\label{sec:WGB}
Here we proceed in two steps. We first explore which interactions are generated by gravity. Second, we explore under which conditions these interactions feature an asymptotically safe fixed point.
If these conditions are fulfilled by the gravitational fixed-point values, then, together with step 1, the result suggests that global symmetries do indeed remain intact in asymptotically safe gravity-matter systems (with the caveats discussed above).\\
\paragraph{\textbf{Step 2 a: Gravity generates new interactions for matter}}
\emph{Synopsis: There are interactions for matter which are necessarily generated by gravity, i.e., which cannot be set to zero consistently at an asymptotically safe fixed point. These interactions satisfy the symmetries of the kinetic term.}
At an asymptotically safe fixed point, gravity is interacting. This implies that matter must also have interactions: because gravity couples to any form of energy and matter, it couples to any two free fields, and generates an interaction between them -- already classically. At the quantum level, the same statement is true, i.e., gravity induces interactions also at the loop level. The only way to switch off these induced interactions is to turn off the gravitational coupling, $\ensuremath{G_{\mathrm{N}}}$. This is not possible when gravity is asymptotically safe; thus asymptotically safe gravity-matter systems necessarily contain interactions for matter. \\
To see this explicitly, let us start with a scalar field $\phi$ which is minimally coupled to gravity and does not have any interactions. The minimal coupling is encoded in the kinetic term of the scalar field
\begin{equation}
\Gamma_{k\, \rm scal}
= \frac{1}{2}\int\mathrm{d}^4x \sqrt{g}\, g^{\mu\nu}\partial_{\mu} \phi \partial_{\nu} \phi\,.
\end{equation}
At the classical level, the presence of the metric in this kinetic term gives rise to gravity-mediated scattering via a tree-level diagram. At the quantum level, loop diagrams generate interaction terms for matter.
In particular, the minimal coupling between the scalar field and gravity gives rise to scalar-gravity vertices, by expanding the kinetic term of the scalar in terms of metric fluctuations. Since we started from the kinetic term, the only coupling appearing in these vertices is the Newton coupling $\ensuremath{G_{\mathrm{N}}}$.
We use these vertices in one-loop diagrams with four external scalar fields and a loop of gravitational fluctuations as well as diagrams with gravitational fluctuations and scalars in the loop. Such diagrams generate a scalar self-interaction $g_1$
\begin{equation}
\label{eq:ScalInd}
S_{\mathrm{Scal, int.}}=\frac{g_1}{8 k^{4}}\,\int\mathrm{d}^4x \sqrt{g}\, g^{\mu\nu}g^{\rho\sigma}\partial_{\mu} \phi \partial_{\nu} \phi\partial_{\rho} \phi \partial_{\sigma} \phi\,.
\end{equation}
We show the corresponding diagrams in the left panel of \autoref{fig:scalpar}, stressing that all vertices are independent of $g_1$.
These diagrams contribute to the beta function of $g_1$, which is schematically given by
\begin{equation}
\label{eq:IndSchem}
\beta_{g_1}=C_0 + C_1\, g_1 +C_2\, g_1^2 +\mathcal{O}(g_1^3)\,,
\end{equation}
where $C_0$ and $C_1$ are functions of the gravitational couplings. The coefficient $C_0$ contains the contribution of diagrams that do not contain a vertex with four scalar fields and come with $\ensuremath{G_{\mathrm{N}}}^2$, i.e., those in \autoref{fig:scalpar}, such that $C_0\to 0$ when $\ensuremath{G_{\mathrm{N}}}\to0$. Therefore, for vanishing gravitational fluctuations, i.e., $\ensuremath{G_{\mathrm{N}}}=0$, $g_1=0$ is a fixed point, cf.~the green solid line in the right panel of \autoref{fig:scalpar}. However, when gravitational fluctuations are present, the coefficient $C_0$ is non-vanishing, such that a (partial) fixed point for $g_1$ is necessarily non-zero, i.e., $g_{1,\,*}(\ensuremath{G_{\mathrm{N}}} \neq 0)\neq0$. Hence, in the presence of gravitational fluctuations, the coupling $g_1$ is necessarily \emph{induced}, since it cannot consistently be set to zero\footnote{Note that this is not a unique feature of gravity: for a scalar field that is charged under an Abelian gauge field, a finite value of the gauge coupling also induces a four-scalar interaction similar to Eq.~(\ref{eq:ScalInd}). In the case of charged matter, however, these interactions are induced under the RG flow towards the IR, and not necessarily at the fixed point.}. Instead, the (partial) Gaussian fixed point is shifted and becomes an interacting (partial) fixed point, which we will call the \emph{shifted Gaussian fixed point} (sGFP) in the following, cf.~the blue dashed line in the right panel of \autoref{fig:scalpar}. Since the sGFP is continuously connected to the Gaussian fixed point, where $g_1$ is irrelevant, the induced coupling also corresponds to an irrelevant direction at the sGFP. Thus, gravitational fluctuations induce new interactions in the matter sector, but they do not necessarily introduce new relevant directions, and hence do not reduce the predictivity of the theory.
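To make the shift of the fixed point concrete, the following minimal sketch evaluates the zeros of the schematic beta function in Eq.~\eqref{eq:IndSchem}, truncated at quadratic order; the coefficient values are purely illustrative and not taken from any actual computation:
\begin{verbatim}
import numpy as np

def partial_fixed_points(C0, C1, C2):
    """Real zeros of beta_g1 = C0 + C1*g1 + C2*g1**2."""
    disc = C1**2 - 4.0 * C0 * C2
    if disc < 0:
        return []  # no real (partial) fixed point left
    return sorted([(-C1 - np.sqrt(disc)) / (2.0 * C2),
                   (-C1 + np.sqrt(disc)) / (2.0 * C2)])

# C0 grows with the strength of metric fluctuations (C0 -> 0 for
# G_N -> 0); C1 and C2 are held fixed for illustration.
C1, C2 = 1.0, 2.0
for C0 in [0.0, 0.05, 0.125, 0.2]:
    print(C0, partial_fixed_points(C0, C1, C2))
# C0 = 0:     g1 = 0 is a fixed point (Gaussian)
# C0 small:   shifted Gaussian fixed point at g1 != 0
# C0 = 0.125: the two fixed points collide (4*C0*C2 = C1**2)
# C0 larger:  no real fixed point remains
\end{verbatim}
The disappearance of the real zeros at large $C_0$ anticipates the weak-gravity bound discussed in Step 2b below.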
\begin{figure}[!t]
\centering
\includegraphics[width=0.35\linewidth,clip=true,trim=4.3cm -.5cm 4.3cm 0cm]{Figures/dg1_inducing_v2.pdf}
\quad
\includegraphics[width=0.6\linewidth]{Figures/betag.pdf}
\caption{\label{fig:scalpar} Left panel: We show the diagrams through which gravitational fluctuations induce a non-vanishing fixed-point value for $g_1$. The depicted diagrams contribute to the $g_1$-independent coefficient $C_0$ of $\beta_{g_1}$, see \eqref{eq:IndSchem}. The circle with a cross indicates the regulator insertion $k\partial_k\, \ensuremath{R_k}$ of the flow equation. Analogous diagrams with the regulator insertion on the other internal lines, which are not shown separately, also contribute to $C_0$. Right panel: We show the beta function for the induced coupling $g_1$ defined in \eqref{eq:ScalInd} as a function of $g_1$. For vanishing gravitational fluctuations (green solid line), $g_1=0$ is a fixed point, and $g_1$ can consistently be set to zero. For sufficiently small but non-zero values of the Newton coupling (blue dashed line), $g_1=0$ is not a (partial) fixed point anymore. Increasing the Newton coupling further, the two (partial) fixed points of $\beta_{g_1}$ might collide, such that beyond a critical value of the Newton coupling, $\beta_{g_1}$ might not feature any fixed point anymore (red dotted line), see \autoref{sec:WGB}. }
\end{figure}
Crucially, the interaction in Eq.~\eqref{eq:ScalInd} respects the symmetries of the kinetic term, because it essentially corresponds to the square of the kinetic term. Similarly, one can show that higher-order induced interactions, as well as induced non-minimal interactions, satisfy the same symmetry requirement. This completes step 2a: we have reviewed that gravity generates matter interactions, and that those satisfy the global symmetries of the kinetic term.
While we introduced the induced coupling in the scalar sector, the same mechanism is also at work in the fermionic \cite{Eichhorn:2011pc} and the gauge sector \cite{Christiansen:2017gtg}, as well as for terms with a larger number of fields. In \autoref{tab:inducedints} we provide a list of induced interactions that have been explicitly studied in the literature.
Generically, we expect all interactions that respect the symmetries of the kinetic terms, to be induced by gravitational fluctuations.\\
\begin{table}[!t]
\begin{tabular}{c|c|c|c|c|}
field & global symmetry & self-interaction & non-minimal int. & ref.\\\hline\hline
single scalar & shift & $\left(\partial_{\mu} \phi \partial^{\mu}\phi\right)^2$ &- & \cite{Eichhorn:2012va} \\\hline
single scalar & shift &- & $\partial_{\mu}\phi \partial_{\nu}\phi R^{\mu\nu}$& \cite{Eichhorn:2017sok} \\\hline
single scalar & shift & $\left(\partial_{\mu} \phi \partial^{\mu}\phi\right)^2$ & $\partial_{\mu}\phi \partial_{\nu}\phi R^{\mu\nu}$ &\\
& & & \& $\partial_{\mu} \phi \partial^{\mu}\phi R$ & \cite{Laporte:2021kyp, Knorr:2022ilz}\\\hline
$N_S$ scalars & $N_S$ shift symmetries & $\partial_{\mu}\phi^a \partial^{\mu}\phi^a\,\partial_{\nu}\phi^b \partial^{\nu}\phi^b$ &- & \\
& \& $O(N_S)$ symmetry & $\partial_{\mu}\phi^a \partial^{\mu}\phi^b\,\partial_{\nu}\phi^a \partial^{\nu}\phi^b$ & -&\cite{deBrito:2021pyi} \\ \hline\hline
single vector & shift & $\left(F_{\mu\nu}F^{\mu\nu}\right)^2$ & - & \cite{Christiansen:2017gtg, Eichhorn:2019yzm}\\ \hline
single vector & shift & $\left(F_{\mu\nu}F^{\mu\nu}\right)^2$ & - & \\
& & \& $\left(F_{\mu\nu}\tilde{F}^{\mu\nu}\right)^2$& - & \cite{Eichhorn:2021qet}\\ \hline
$N_V$ vectors & shift & $\left(F^a_{\mu\nu}F^{a\,\mu\nu}\right)^2$ & - & \\
& \& $O(N_V)$ symmetry & \& $\left(F^a_{\mu\nu}\tilde{F}^{b\,\mu\nu}\right)^2$ & - & \cite{Eichhorn:2021qet}\\ \hline
\hline
$\ensuremath{N_{\mathrm{F}}}$ fermions & chiral & $\left(\bar{\psi}^i\gamma_{\mu}\psi^i\right)\left(\bar{\psi}^j\gamma^{\mu}\psi^j\right)$ & - & \\
& & \&$ \left(\bar{\psi}^i\gamma_{\mu}\gamma_5\psi^i\right)\left(\bar{\psi}^j\gamma^{\mu}\gamma_5\psi^j\right)$ &-& \cite{Eichhorn:2011pc, Meibohm:2016mkp, deBrito:2020dta}\\\hline
$\ensuremath{N_{\mathrm{F}}}$ fermions & chiral & - & $R^{\mu\nu}\, \bar{\psi}^i\gamma_{\mu}\nabla_{\nu}\psi^i$ & \cite{Eichhorn:2018nda} \\ \hline \hline
single scalar & shift \& chiral & $\left(\bar{\psi}\gamma^{\mu}D_{\nu}\psi\right)\left(\partial_{\mu}\phi\partial^{\nu}\phi\right)$ & - & \\
\& single fermion & & \&$ \left(\bar{\psi}\slashed{D}\psi\right)\left(\partial_{\nu}\phi\partial^{\nu}\phi\right)$ &-& \cite{Eichhorn:2016esv, Eichhorn:2017eht}\\\hline
\end{tabular}
\caption{\label{tab:inducedints}We list the interactions (omitting the overall factor of $\sqrt{g}$) that were explicitly shown to be generated by quantum gravity, together with the corresponding references. They all satisfy continuous global symmetries, which are the maximal continuous global symmetries of their respective kinetic terms.
}
\end{table}
\paragraph{\textbf{Step 2 b: The weak-gravity bound as a condition under which a symmetry-preserving fixed point exists}}
\emph{Synopsis: It is non-trivial to satisfy the condition that all generated interactions from Step 2a feature an asymptotically safe fixed point. While some of them feature a partial fixed point for any value of the gravitational couplings, others only feature a partial fixed point if gravity is sufficiently weakly coupled, i.e., if $G_{\rm eff}$ and its generalizations are sufficiently small. Such a weak-gravity bound (WGB) has been discovered for scalars, vectors, and for scalars coupled to fermions, although not for fermions on their own.}
The WGB arises because the beta function in Eq.~\eqref{eq:IndSchem} only has real zeros under certain conditions on the coefficients $C_i$. If we neglect contributions of $\mathcal{O}(g_1^3)$, the sGFP is only real for
\begin{equation}
\label{eq:WGBCond}
4C_0\,C_2\leq C_1^2\,.
\end{equation}
For some systems, such as four-fermion couplings, this condition is satisfied automatically, see \autoref{sec:lightfermions}. For others, such as four-scalar couplings, this condition is only fulfilled up to critical values of the gravitational couplings. In fact, if gravity is too strongly coupled, i.e., simply put, $G_{\rm eff}$ (the strength of metric fluctuations) is too large, then the condition is violated. Therefore, the corresponding bound on the couplings is called the \emph{weak-gravity bound}.
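For orientation, the explicit roots of the truncated beta function follow from the quadratic formula,
\begin{equation}
g_{1,\,\ast}^{\pm}=\frac{-C_1\pm\sqrt{C_1^2-4\,C_0\,C_2}}{2\,C_2}\,,
\end{equation}
where the branch that is continuously connected to $g_1=0$ for $C_0\to 0$ is the sGFP. When Eq.~\eqref{eq:WGBCond} is saturated, the two roots merge, and when it is violated, they move off into the complex plane; this fixed-point collision is the origin of the WGB.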
When studying induced interactions and the WGB, one usually proceeds order by order in canonical dimension. All induced interactions are canonically irrelevant, because they are essentially the square, or higher powers, of the kinetic terms.
For a single shift-symmetric scalar field, the scalar self-interaction (\ref{eq:ScalInd})
is the only self-interaction at this order of the canonical mass dimension.
The coupling $g_1$ is indeed induced by gravitational fluctuations \cite{Eichhorn:2012va, deBrito:2021pyi, Laporte:2021kyp}, since the coefficient $C_0$ in Eq.~(\ref{eq:IndSchem}) is non-zero in general. This induced interaction gives rise to a WGB \cite{Eichhorn:2012va, deBrito:2021pyi}, which excludes a part of the plane spanned by the Newton coupling $\ensuremath{G_{\mathrm{N}}}$ and the cosmological constant $\Lambda$ from the viable gravitational parameter-space, see the left panel of \autoref{fig:WGBScalars}.
In terms of $\ensuremath{G_{\mathrm{N}}}$ and $\Lambda$ it is less straightforward to see where the strong-coupling regime is, because $\Lambda \rightarrow 1/2$ is also a strong-coupling limit, not just $\ensuremath{G_{\mathrm{N}}} \gg 1$. We therefore work in terms of the effective strength of metric fluctuations, $G_{\rm eff}$ and $G_{\rm eff}^{(2)}$, defined in Eq.~\eqref{eq:Geff} and Eq.~\eqref{eq:Geffn}, respectively.
Both $G_{\rm eff}$ and $G^{(2)}_{\rm eff}$ can be thought of as measures of the strength of metric fluctuations and dominate the scale dependence of induced matter interactions. As we can see in the right panel of \autoref{fig:WGBScalars}, the WGB is described by a rather constant value of $G^{(2)}_{\rm eff}$, which enters the diagrams in \autoref{fig:scalpar} at leading order, for some range of $\Lambda$. Deviations from the constant value appear due to dependencies on $G_{\rm eff}$.
When adding more scalar fields to the system, only those interactions that respect the $O(\ensuremath{N_{\mathrm{S}}})$ symmetry of the kinetic term are induced by gravitational fluctuations \cite{deBrito:2021pyi}. Increasing the number of scalar fields $\ensuremath{N_{\mathrm{S}}}$ makes the WGB stronger, such that more of the gravitational parameter space is excluded.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\linewidth]{Figures/WGBCompGLN.pdf}\quad
\includegraphics[width=0.45\linewidth]{Figures/WGBComp_Geff2LB1.pdf}
\caption{\label{fig:WGBScalars} We show the WGB for a shift-symmetric scalar, see \eqref{eq:ScalInd}, for an Abelian gauge field, see \eqref{eq:GaugeInd}, and for a scalar-fermion system, see \eqref{eq:ScalFermInd}. In the gray region, no partial fixed point for the respective induced coupling exists, such that asymptotic safety cannot be realized in the respective region of the gravitational parameter space. It is therefore excluded from the viable parameter space. In the left panel we show the WGB in the plane spanned by the gravitational couplings $\Lambda$ and $\ensuremath{G_{\mathrm{N}}}$, indicating that for any value of $\Lambda$, there is a critical value ${\ensuremath{G_{\mathrm{N}}}}_{,\mathrm{crit}}$ at which the asymptotically safe partial fixed point for the induced coupling vanishes. In the right panel we show the same WGB, but in the plane spanned by $\Lambda$ and the effective gravitational coupling $G_{\rm eff}^{(2)}$, see \eqref{eq:Geffn}.}
\end{figure}
For fermions, the leading-order generated interactions are special, because they contain information not just about global symmetries. Because the maximum symmetry for fermions is chiral, i.e., it distinguishes left- and right-handed fermions, the generated interactions also contain information about fermion masses. Thereby, they provide an important observational consistency test of asymptotic safety, because fermion masses in the SM have been measured. We therefore devote \autoref{sec:lightfermions} to the discussion of generated fermion interactions and their phenomenological consequences. For the present section, it is only relevant that no WGB has been discovered yet in purely fermionic systems.\\
The situation is different for systems with fermions and scalars. The leading-order induced interactions are
\begin{equation}
\begin{aligned}
\label{eq:ScalFermInd}
S_{\mathrm{Scal-Ferm,\,int.}}=\frac{i}{k^{4}}\,\int\mathrm{d}^4x&\left(\chi_1[\bar{\psi}\gamma^{\mu}D_{\nu}\psi-(D_{\nu}\bar{\psi})\gamma^{\mu}\psi]\partial_{\mu}\phi\partial^{\nu}\phi\right.\\
&+\left. \chi_2[\bar{\psi}\gamma^{\mu}D_{\mu}\psi-(D_{\mu}\bar{\psi})\gamma^{\mu}\psi]\partial_{\nu}\phi\partial^{\nu}\phi\right)\,,
\end{aligned}
\end{equation}
which indeed give rise to a WGB \cite{Eichhorn:2016esv, Eichhorn:2017eht}.
%
For a single gauge field and at lowest order in the canonical mass dimension there are two linearly independent induced four-vector interactions, namely
\begin{equation}\label{eq:GaugeInd}
S_{\mathrm{Vector,\,int.}}=\frac{1}{8 k^{4}}\,\int\mathrm{d}^4x\left(w_2 (F_{\mu\nu}F^{\mu\nu})^2+\kappa_2(F_{\mu\nu}\tilde{F}^{\mu\nu})^2\right)\,,
\end{equation}
with the field strength tensor $F$ and the dual field strength $\tilde{F}$. These interactions give rise to a WGB \cite{Christiansen:2017gtg, Eichhorn:2021qet}, which generalizes to the case of several gauge fields \cite{Eichhorn:2021qet}.
Comparing the WGBs from the different sectors -- scalar, scalar-fermion and gauge -- we find a nearly universal curve, see \autoref{fig:WGBScalars}. This indicates that the different sectors are equally sensitive to gravitational fluctuations.\\
\FRT{Further reading:} \\
\FR{Preservation of global symmetries: beyond truncations}\\
As in any computation relying on the FRG, practical computations require choosing a truncation, i.e., taking only a finite subset of interactions into account. All statements above on global symmetry breaking are therefore made within a truncation. However, from the structure of the flow equation, one can infer that the specific statements above, which pertain to the non-generation of symmetry-breaking interaction terms, generalize beyond truncations, see \cite{Eichhorn:2020mte} for a discussion. Further, \cite{Laporte:2021kyp} contains a proof of shift-symmetry preservation in scalar-gravity systems.\\
\FR{WGB: condition to prevent symmetry breaking and constraint on gravitational parameter-space}\\
There are two points of view on the WGB: the first is as a necessary condition to prevent the breaking of global symmetries; the second is as a condition on microscopic gravitational couplings that arises because matter-gravity theories should feature fixed points. This second view has mostly been discussed in the literature, see \cite{Eichhorn:2011pc, Eichhorn:2012va, Meibohm:2016mkp, Eichhorn:2016esv, Eichhorn:2017eht, Eichhorn:2017sok, Christiansen:2017gtg, Eichhorn:2018nda, Eichhorn:2019yzm, deBrito:2020dta, deBrito:2021pyi, Laporte:2021kyp, Eichhorn:2021qet, Knorr:2022ilz}.\\
\FR{The WGB and asymptotically safe gravity-matter systems}\\
Ultimately, we would like to know if scalar-gravity, or more generally, gravity-matter systems can be asymptotically safe with the maximum set of symmetries. This is the case if gravitational fixed-point values in the presence of matter satisfy the WGB. For systems with $\ensuremath{N_{\mathrm{V}}}$ Abelian gauge fields this is the case in all studies to date. For scalars, the answer changes, when nonminimal interactions are accounted for: For $\ensuremath{N_{\mathrm{S}}}$ minimally coupled scalars, there is no fixed point of the full scalar-gravity systems, i.e., the WGB is violated \cite{deBrito:2021pyi}. At nonminimal coupling, the WGB holds for a single scalar \cite{Laporte:2021kyp, Knorr:2022ilz}. It is an open question whether this result may change under the inclusion of further interactions and whether the WGB is violated at $\ensuremath{N_{\mathrm{S}}}>1$.\\
\FR{Robustness of the WGB}\\
As a test of robustness, the gauge-parameter dependence of the WGB has been studied in \cite{deBrito:2021pyi} and in \cite{Eichhorn:2021qet} and is weak in both cases. \cite{Eichhorn:2021qet} also demonstrates that it is important to include a full basis of generated interactions at leading order in canonical power counting: if not all interactions are included, the results depend strongly, even qualitatively, on the choice of gauge.\\
In \cite{deBrito2022WIP} the WGB for a single scalar field has been investigated by considering a minimal coupling of gravity to a full function of the kinetic term of the scalar field. By studying the fixed-point structure of the pure-matter system upon expansion of the full function, it was concluded that only the free fixed point is a viable fixed point of the matter system, see also \cite{Laporte:2022ziz}. Accordingly, the WGB, which in this system arises as a collision between the (shifted) GFP and an interacting ``fixed point'', is interpreted as a truncation artifact. Furthermore, the WGB only appears in specific expansions of the full function of the kinetic term. In summary, it is not established whether the WGB is a strict boundary for asymptotically safe theories, or whether it merely indicates the transition to a more strongly-coupled regime, in which significantly larger truncations of the system are required to produce robust results. In any case, a near-perturbative fixed point, which matches seamlessly onto the perturbative SM at the Planck scale, is very likely incompatible with gravitational couplings beyond the WGB found in small truncations.
\subsection{Gauge couplings in the Standard Model}\label{sec:Gaugesector}
\emph{Synopsis: The gauge sector of the SM has a problem and three riddles. The problem is the Landau pole (or triviality problem) in the Abelian gauge coupling. The riddles are what sets the values of the three gauge couplings at low energies; in particular, what sets the value of the finestructure constant $\alpha=1/137$?\\
There are compelling indications that asymptotically safe gravity could solve the problem. Whether or not it may also solve one of the three riddles and explain why $\alpha=1/137$ is less certain, because it depends on additional properties of asymptotically safe gravity.}
The gauge sector of the Standard Model is divided into an Abelian and a non-Abelian sector. The Abelian hypercharge group $U_Y(1)$ with coupling $\ensuremath{g_{\mathrm{y}}}(k)$ is, together with the non-Abelian $SU(2)$, spontaneously broken to the $U_{\rm em}(1)$ electromagnetic gauge group with the electromagnetic coupling $e(k)$, related to the finestructure constant $\alpha(k) = e(k)^2/(4\pi)$. This sector features a problem and a riddle: the problem is the Landau-pole problem; the riddle is why the finestructure constant takes the value $\alpha(k \rightarrow 0) = 1/137$. It is a compelling scenario that asymptotically safe gravity could solve the problem and the riddle at the same time. We review the evidence for this scenario below.
The Landau pole or triviality problem is present in the absence of gravity, i.e., with $f_{\ensuremath{g_{\mathrm{y}}}}=0$ in Eq.~\eqref{eq:matterbetaschem}. Quantum fluctuations of charged matter, i.e., the charged lepton fields and the quarks, turn the vacuum into a screening medium, such that for the Abelian hypercharge
$\beta_{\ensuremath{g_{\mathrm{y}}}, \, 1}>0$
in \eqref{eq:matterbetaschem} \cite{Gell-Mann:1954yli} (with $\beta_{\ensuremath{g_{\mathrm{y}}},\, 1}
=41/(6\cdot 16 \pi^2)$). Thus, the coupling decreases when flowing towards lower energies.
As a consequence, a divergent value of $\ensuremath{g_{\mathrm{y}}}(k = \Lambda_{\rm Landau})$ is mapped to the measured value of $\alpha$ at low energies. This can already be seen at one-loop order.
At this order, the solution of the flow equation for $\ensuremath{g_{\mathrm{y}}}$ reads
\begin{equation}
\ensuremath{g_{\mathrm{y}}}^2(k)=\frac{\ensuremath{g_{\mathrm{y}}}^2(k_0)}{1-2\, \beta_{\ensuremath{g_{\mathrm{y}}},\, 1}\,\ensuremath{g_{\mathrm{y}}}^2(k_0)\ln\left(\frac{k}{k_0}\right)}\,,
\end{equation}
where $k_0$ is a reference scale. Because ${\beta_{\ensuremath{g_{\mathrm{y}}},\, 1}}>0$, $\ensuremath{g_{\mathrm{y}}}^2(k)$ diverges at a finite scale $\Lambda_{\rm Landau}$
\begin{equation}
\Lambda_{\rm Landau} =\exp\left(\frac{1}{2{\beta_{\ensuremath{g_{\mathrm{y}}},\, 1}}\, \ensuremath{g_{\mathrm{y}}}^2(k_0)} \right) k_0.\label{eq:Landaupole}
\end{equation}
In theory, this divergence, the so-called Landau pole, can be avoided by setting $\ensuremath{g_{\mathrm{y}}}^2(k_0)=0$. However, then $\ensuremath{g_{\mathrm{y}}}^2(k)$ would be zero at all scales $k$, which is clearly in contradiction with the experimental observation of an interacting electromagnetic sector at low energies in our universe.
Hence, the presence of a Landau pole in the Abelian gauge sector signals the breakdown of the Standard Model. From Eq.~\eqref{eq:Landaupole}, with $e(k_0 = 511\, \rm keV)$ and $\beta_{e,\, 1}=1/(12 \pi^2)$ (i.e., looking at QED), one can infer that $\Lambda_{\rm Landau} \approx 10^{286}\, \rm eV$ (which is shifted to $\sim 10^{34}\,\rm GeV$ for the matter content of the SM and in a two-loop calculation), which is highly transplanckian.
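As a cross-check of the quoted numbers, the following short computation (a sketch based on the one-loop formula in Eq.~\eqref{eq:Landaupole}, with the measured finestructure constant as input at the electron mass scale) reproduces the QED estimate:
\begin{verbatim}
import numpy as np

alpha = 1 / 137.036            # finestructure constant at k0
e2 = 4 * np.pi * alpha         # e^2(k0)
beta_e1 = 1 / (12 * np.pi**2)  # one-loop coefficient of QED
k0_eV = 511e3                  # electron mass scale in eV

# log10 of Lambda_Landau = k0 * exp(1 / (2 * beta_e1 * e^2(k0))):
log10_Landau = np.log10(k0_eV) + 1 / (2 * beta_e1 * e2) / np.log(10)
print(log10_Landau)            # ~286, i.e., Lambda_Landau ~ 10^286 eV
\end{verbatim}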
The computation giving rise to the Landau pole was performed within a perturbative one-loop approximation. Clearly, this approximation is not expected to be accurate when $\ensuremath{g_{\mathrm{y}}}$ becomes large. However, non-perturbative studies using lattice \cite{Gockeler:1997dn, Gockeler:1997kt} and functional \cite{Gies:2004hy} methods find indications that the triviality problem in the Abelian gauge coupling persists beyond perturbation theory.\\
The triviality problem indicates that new physics must exist at very high energies. As the energy scale of the Landau pole is beyond the Planck scale, quantum gravity is one candidate for such new physics. In fact, it is a compelling candidate, because it is not really ``new physics'', given that we know for sure that the gravitational field exists.\\
Asymptotically safe gravitational fluctuations contribute through $f_{\ensuremath{g_{\mathrm{y}}}}$ to the scale dependence of $\ensuremath{g_{\mathrm{y}}}$, cf.~\eqref{eq:matterbetaschem}. $f_{\ensuremath{g_{\mathrm{y}}}}$ depends on the gravitational couplings; explicit forms have been calculated in \cite{Daum:2009dn, Folkerts:2011jz, Harst:2011zx, Christiansen:2017gtg, Eichhorn:2017lry, Christiansen:2017cxa, DeBrito:2019gdd, deBrito:2022vbr, Pastor-Gutierrez:2022nki}.\\
It is crucial that the gravitational contribution is linear in $\ensuremath{g_{\mathrm{y}}}$ itself. This may be understood in two ways: first, it is a consequence of counting powers of $\ensuremath{g_{\mathrm{y}}}$ in the loop diagrams that generate the gravity-contribution to $\beta_{\ensuremath{g_{\mathrm{y}}}}$. Second, no lower-order contribution $\sim \ensuremath{g_{\mathrm{y}}}^0$ exists, because it would break the global chiral symmetry of charged fermions, where left- and right-handed components transform under independent phase rotations. In \autoref{sec:GlobalSymms}, we explain why such global symmetries prevent the generation of such low-order terms in beta functions.
Therefore, for small values of the coupling, i.e., close to the Gaussian fixed point, the gravitational contribution dominates over the pure-matter contribution. \\
Explicit computations using the FRG indicate that $f_{\ensuremath{g_{\mathrm{y}}}}\geq0$ \cite{Daum:2009dn, Folkerts:2011jz, Harst:2011zx, Christiansen:2017gtg, Eichhorn:2017lry, Christiansen:2017cxa, deBrito:2022vbr, Pastor-Gutierrez:2022nki}, such that gravitational fluctuations have an anti-screening effect on the gauge coupling. One can also argue that the gravitational contribution to the beta function should be negative, i.e., antiscreening: gravity generates a self-coupling of the Abelian gauge field, such that, effectively, the Abelian gauge field behaves like a non-Abelian one. In a non-Abelian gauge theory, gauge-field fluctuations antiscreen the vacuum. Thus, one may argue, the combined effect of gravity and the Abelian gauge field should also be antiscreening, such that $f_{\ensuremath{g_{\mathrm{y}}}}>0$ is the expected result.
The gravitational contribution $f_{\ensuremath{g_{\mathrm{y}}}}$ is a function of the dimensionless Newton coupling $\ensuremath{G_{\mathrm{N}}}$, and, in the absence of other gravitational couplings, given by
\begin{equation}
f_{\ensuremath{g_{\mathrm{y}}}}= \frac{5\ensuremath{G_{\mathrm{N}}}}{18\pi}.
\end{equation}
Above the Planck scale, $\ensuremath{G_{\mathrm{N}}}$ assumes its fixed-point value, such that $f_{\ensuremath{g_{\mathrm{y}}}} = \rm const$.
Below the Planck scale, $\ensuremath{G_{\mathrm{N}}}$ scales like $k^2$, i.e., decreases towards the IR.
Thus, gravitational fluctuations decouple very quickly below the Planck scale, and are completely negligible at experimentally accessible scales -- just as one expects.
Therefore, $f_{\ensuremath{g_{\mathrm{y}}}}$ can be approximated as zero below the Planck scale, such that the scale dependence of the gauge coupling is only driven by SM fields.
In addition to $\ensuremath{G_{\mathrm{N}}}$, $f_{\ensuremath{g_{\mathrm{y}}}}$ depends on the other gravitational couplings, including the cosmological constant $\Lambda$ and higher-order couplings \cite{DeBrito:2019gdd}, such as the $R_{\mu\nu}R^{\mu\nu}$-coupling $b$. In terms of $\ensuremath{G_{\mathrm{N}}}$, $\Lambda$ and $b$, $f_{\ensuremath{g_{\mathrm{y}}}}$ reads
\begin{equation}
f_{\ensuremath{g_{\mathrm{y}}}} =\frac{\ensuremath{G_{\mathrm{N}}}}{36\pi} \frac{10+7b- 40 \Lambda}{\left(1+b-2 \Lambda\right)^2}\,.
\end{equation}
Note that for $b=0$ and $\Lambda=0$, this expression reduces to the one above. In the gravitational fixed-point regime, different scenarios are realized, depending on the sign of $f_{\ensuremath{g_{\mathrm{y}}}}$. If $f_{\ensuremath{g_{\mathrm{y}}}}<0$, then gravitational fluctuations are screening and the triviality problem persists.
If $f_{\ensuremath{g_{\mathrm{y}}}}>0$, then gravitational fluctuations are anti-screening. This is the scenario that appears to be realized when using fixed-point values obtained in the literature.
In this case, the anti-screening gravitational fluctuations compete with the screening fluctuations of charged matter fields. At small $\ensuremath{g_{\mathrm{y}}}$, the gravitational contribution dominates and the gauge coupling becomes asymptotically free. At large $\ensuremath{g_{\mathrm{y}}}$, the matter contribution dominates and the triviality problem persists. In between, at $\ensuremath{g_{\mathrm{y}}}=\ensuremath{g_{\mathrm{y},\,\ast}}$, with
\begin{equation}
\ensuremath{g_{\mathrm{y},\,\ast}} = 4\pi\, \sqrt{\frac{6\,f_{\ensuremath{g_{\mathrm{y}}}}}{41}}
\end{equation}
the screening and antiscreening effects cancel out exactly and generate an interacting fixed point.
\begin{figure}[t]
\includegraphics[width=\linewidth, clip=true, trim=1cm 15cm 34cm 7cm]{Figures/combinedAbelian.pdf}
\caption{\label{fig:gyschem} We show the beta function for $\ensuremath{g_{\mathrm{y}}}$ (right panel, rotated), such that the gravity-dominated regime is the lower, green-shaded range of coupling-values. The matter-dominated regime is the upper, red-shaded range of coupling values. Gravity and matter fluctuations balance out at an interacting fixed point (magenta). The resulting trajectories are shown in the left panel: in the matter-dominated regime, the triviality problem persists; in the gravity-dominated regime, trajectories emanate from an asymptotically free fixed point in the very far UV. A single trajectory is the asymptotically safe one, on which quantum scale symmetry holds at transplanckian scales. The fixed-point value is mapped to a unique value in the IR. For $f_{\ensuremath{g_{\mathrm{y}}}} \approx 9.7\cdot 10^{-3}$, the IR value corresponds to the measured IR value of the Abelian hypercharge coupling.}
\end{figure}
This fixed point has predictive power: because gravity fluctuations antiscreen the coupling at $\ensuremath{g_{\mathrm{y}}}<\ensuremath{g_{\mathrm{y},\,\ast}}$, they drive the coupling towards the fixed-point value from below. Conversely, because matter fluctuations screen the coupling at $\ensuremath{g_{\mathrm{y}}}>\ensuremath{g_{\mathrm{y},\,\ast}}$, they drive the coupling towards the fixed-point value from above. Thus, the coupling stays fixed at $\ensuremath{g_{\mathrm{y}}}=\ensuremath{g_{\mathrm{y},\,\ast}}$ all the way down to the Planck scale. Asymptotic safety thereby produces a unique value of the coupling at the Planck scale. Below the Planck scale, the SM RG flow maps this unique value to a unique value in the IR.\\
Rephrased in a more technical manner, at this fixed point, $\ensuremath{g_{\mathrm{y}}}$ is an irrelevant direction with $\theta = - \partial \beta_{\ensuremath{g_{\mathrm{y}}}}/\partial \ensuremath{g_{\mathrm{y}}} \vert_{\ensuremath{g_{\mathrm{y}}} = \ensuremath{g_{\mathrm{y},\,\ast}}}= -2 f_{\ensuremath{g_{\mathrm{y}}}}<0$, and therefore does not contribute a free parameter. Hence, the IR value of the Abelian hypercharge following from this fixed point is a prediction of the UV completion.
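These statements are straightforward to verify explicitly. The sketch below evaluates the schematic beta function $\beta_{\ensuremath{g_{\mathrm{y}}}}=\frac{41}{6\cdot 16\pi^2}\,\ensuremath{g_{\mathrm{y}}}^3-f_{\ensuremath{g_{\mathrm{y}}}}\,\ensuremath{g_{\mathrm{y}}}$, using the value $f_{\ensuremath{g_{\mathrm{y}}}}\approx 9.7\cdot 10^{-3}$ quoted in the caption of \autoref{fig:gyschem}:
\begin{verbatim}
import numpy as np

def beta_gy(gy, f_gy):
    """One-loop SM screening term plus the gravity term -f_gy*gy."""
    return 41 / (6 * 16 * np.pi**2) * gy**3 - f_gy * gy

f_gy = 9.7e-3
gy_star = 4 * np.pi * np.sqrt(6 * f_gy / 41)
print(gy_star, beta_gy(gy_star, f_gy))  # ~0.47, and beta vanishes

# critical exponent theta = -dbeta/dgy at the fixed point:
eps = 1e-7
theta = -(beta_gy(gy_star + eps, f_gy)
          - beta_gy(gy_star - eps, f_gy)) / (2 * eps)
print(theta, -2 * f_gy)                 # both ~ -0.019 < 0: irrelevant
\end{verbatim}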
At the same time, the interacting fixed point generates an upper bound, above which the triviality problem cannot be avoided: trajectories for which $\ensuremath{g_{\mathrm{y}}}(k=M_{\rm Planck})<\ensuremath{g_{\mathrm{y},\,\ast}}$ are those that come from the asymptotically free fixed point and are therefore UV complete. They reach IR values that lie below the value from the interacting fixed point. However, trajectories for which $\ensuremath{g_{\mathrm{y}}}(k=M_{\rm Planck})> \ensuremath{g_{\mathrm{y},\,\ast}}$ diverge if followed further into the UV. They reach IR values that lie above the value from the interacting fixed point. In summary, the IR value of the gauge coupling is bounded from above by the prediction from asymptotic safety, if one wants to avoid the triviality problem.
In consequence, asymptotic safety becomes testable: if the prediction for the coupling from the interacting fixed point (assuming this fixed point persists beyond the approximations it has been seen in to date) is below the measured value, the triviality problem persists, and a UV completion of the SM with gravity is ruled out.
Intriguingly, the predicted value of the Abelian hypercharge coupling comes out above the experimentally measured value, and, within estimates of the systematic uncertainties of FRG computations, might even be in agreement with experimental observations \cite{Harst:2011zx, Eichhorn:2017lry}. Therefore, asymptotically safe quantum gravity might not only provide a UV completion for the Abelian gauge sector, but this UV completion might even be predictive, solving the long-standing riddle of why the finestructure constant is $1/137$ in the IR, see also \cite{Eichhorn:2018whv, Eichhorn:2017muy}.
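This connection between the fixed-point value and the measured IR value can be illustrated at one loop. The following sketch holds $\ensuremath{g_{\mathrm{y}}}$ at its interacting fixed-point value down to the Planck scale (taken as $1.2\cdot 10^{19}\,$GeV) and then runs it down with the pure one-loop SM beta function, neglecting thresholds and higher loop orders:
\begin{verbatim}
import numpy as np

beta1 = 41 / (6 * 16 * np.pi**2)
f_gy = 9.7e-3
gy_Pl = 4 * np.pi * np.sqrt(6 * f_gy / 41)  # value at the Planck scale

# one-loop SM running from M_Planck to M_Z (gravity decoupled below):
M_Pl, M_Z = 1.2e19, 91.19                   # in GeV
gy2_MZ = gy_Pl**2 / (1 + 2 * beta1 * gy_Pl**2 * np.log(M_Pl / M_Z))
print(np.sqrt(gy2_MZ))  # ~0.36, close to the measured hypercharge coupling
\end{verbatim}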
The non-Abelian gauge sector of the SM, with and without gravity, has a simpler structure:
for non-Abelian gauge couplings, the matter contribution $\beta_{g_i, \, 1}$
is negative in the SM, giving rise to asymptotic freedom. The minimal coupling of gravity to matter is not sensitive to internal symmetries of the matter sector; hence, the gravitational contribution $f_{\ensuremath{g_{\mathrm{y}}}}$ is also the gravitational contribution to the scale dependence of the non-Abelian gauge couplings. Thus, asymptotic freedom for the strong coupling remains intact in the presence of asymptotically safe quantum gravity \cite{Folkerts:2011jz, Pastor-Gutierrez:2022nki}. The IR values of the two non-Abelian gauge couplings remain free parameters.\\
\FRT{Further reading:}\\
\FR{Chiral-symmetry breaking with the Abelian gauge coupling:}\\
Assuming the realization of the interacting fixed point for $\ensuremath{g_{\mathrm{y}}}$, a fixed-point collision in the induced chirally symmetric four-fermion interactions (see \autoref{sec:lightfermions}) may occur \cite{deBrito:2020dta}. In this scenario, the non-vanishing fixed-point value for $\ensuremath{g_{\mathrm{y}}}$ triggers a fixed-point collision in the four-fermion interactions. Such a fixed-point collision is well studied in QCD, where it is associated with the spontaneous breaking of chiral symmetry. If this effect were to occur in the asymptotically safe system, it would prevent the existence of light fermions, i.e., fermions with masses below the Planck scale.
To prevent this, the value of the interacting fixed point has to be small enough to allow for a UV-complete and chirally symmetric theory. This is only possible if the number of fermions in the system exceeds a critical number, see \cite{deBrito:2020dta}. Hence, the interplay of quantum gravity and matter might put lower bounds on the number of fermions, in addition to the upper bounds discussed in \autoref{sec:lightfermions}.\\
\FR{Subtleties in $f_{\ensuremath{g_{\mathrm{y}}}}$:}\\
The interpretation of $f_{\ensuremath{g_{\mathrm{y}}}}$ has some subtleties which we did not discuss above. For this discussion, it is useful to isolate the $\ensuremath{G_{\mathrm{N}}}$-dependence of $f_{\ensuremath{g_{\mathrm{y}}}}$ and write $f_{\ensuremath{g_{\mathrm{y}}}} = \ensuremath{G_{\mathrm{N}}}\, \tilde{f}_{\ensuremath{g_{\mathrm{y}}}}$.
Individual terms in beta functions are not necessarily physical, and may therefore depend on the scheme.
Indeed, in perturbative studies, it depends on the scheme whether $\tilde{f}_{\ensuremath{g_{\mathrm{y}}}}$ vanishes or not, see, e.g., \cite{Robinson:2005fj,Pietrykowski:2006xy, Toms:2007sk, Ebert:2007gf, Tang:2008ah,Toms:2010vy, Anber:2010uj}. From this, it was concluded that there is no gravitational contribution to the scale dependence of the Abelian gauge coupling.
However, there is an important hidden assumption in these perturbative studies: they treat the gravitational coupling $\ensuremath{G_{\mathrm{N}}}$ as a constant which is finite. The gravitational contribution $f_{\ensuremath{g_{\mathrm{y}}}} = \ensuremath{G_{\mathrm{N}}}\, \tilde{f}_{\ensuremath{g_{\mathrm{y}}}}$ vanishes when $\tilde{f}_{\ensuremath{g_{\mathrm{y}}}}$ vanishes and $\ensuremath{G_{\mathrm{N}}}$ is finite. However, when $\ensuremath{G_{\mathrm{N}}}$ diverges, one must be careful when evaluating $f_{\ensuremath{g_{\mathrm{y}}}}$. In \cite{deBrito:2022vbr}, it was shown that there are FRG regulators \cite{Baldazzi:2020vxk, Baldazzi:2021guw} for which $\tilde{f}_{\ensuremath{g_{\mathrm{y}}}}$ also vanishes. In contrast to perturbative studies using dimensional regularization, the FRG regulator features a smooth limit in which $\tilde{f}_{\ensuremath{g_{\mathrm{y}}}}$ goes to zero as a function of a control parameter. As a function of the same control parameter, the fixed-point value $\ensuremath{G_{\mathrm{N},\,\ast}}$ diverges. We therefore find that $f_{\ensuremath{g_{\mathrm{y}}}}$ may indeed vanish in some schemes when $\ensuremath{G_{\mathrm{N}}}$ is held fixed. However, in at least one of those schemes, $\ensuremath{G_{\mathrm{N},\,\ast}}$ diverges such that $f_{\ensuremath{g_{\mathrm{y}}}}$ remains finite while $\tilde{f}_{\ensuremath{g_{\mathrm{y}}}}$ goes to zero.\\
Further, $f_{\ensuremath{g_{\mathrm{y}}}}$ evaluated at $\ensuremath{G_{\mathrm{N}}} = \ensuremath{G_{\mathrm{N},\,\ast}}$ is a universal quantity, because it is a critical exponent at $\ensuremath{g_{\mathrm{y},\,\ast}}=0$. As a universal quantity, it should not depend on the scheme (in practice, in approximations, it still does). It is therefore reassuring that even in schemes in which $f_{\ensuremath{g_{\mathrm{y}}}} (\ensuremath{G_{\mathrm{N}}}) \rightarrow 0$, for the universal quantity it holds that $f_{\ensuremath{g_{\mathrm{y}}}} (\ensuremath{G_{\mathrm{N},\,\ast}}) \neq 0$.\\
We therefore conclude that in perturbation theory, when gravity does not assume a fixed point, it may be the case that gravitational contributions to beta functions vanish in some schemes. However, when gravity assumes a fixed point, and $f_{\ensuremath{g_{\mathrm{y}}}}$ is evaluated with the appropriate care, there is a nonzero gravitational contribution and $f_{\ensuremath{g_{\mathrm{y}}}}$ is a universal quantity.
\subsection{Yukawa couplings in the Standard Model}\label{sec:Yukawas}
\emph{Synopsis: Gravity can either screen or antiscreen a Yukawa coupling, depending on the gravitational fixed-point values. In the antiscreening case, the Yukawa coupling can become asymptotically free or safe, with an upper bound on its IR value. In the screening case, the Yukawa coupling is not UV complete.\\
Based on this result, there is a mechanism that ties the quark masses to their charges: If gravity is antiscreening, and the Abelian gauge coupling is at its interacting fixed point, there is an interacting fixed point in the Yukawa sector, for which the up-type quarks and down-type quarks have different fixed-point values, because they are charged differently under the Abelian hypercharge.\\
For the third generation, this mechanism gives rise to a SM-like IR phenomenology, with the bottom quark mass and the top quark mass predicted at, or in the vicinity of, their measured values.\\
For the full quark sector of the SM, mixing between flavors becomes important at highly transplanckian scales, and fixed points with nonzero Yukawa couplings no longer produce SM-like IR phenomenology. Instead, a fixed point which is made asymptotically free under the impact of gravity is available for all quark Yukawa couplings and CKM matrix elements, rendering the SM quark Yukawa sector UV complete.}
In the SM, quark masses are generated by two mechanisms: first, by electroweak symmetry breaking in the Higgs-Yukawa sector -- called the current mass, and second by chiral symmetry breaking in the strongly-interacting phase of QCD -- called the constituent mass.
Lepton masses are generated only through electroweak symmetry breaking. The ratio of lepton masses and current quark masses to the Higgs vacuum expectation value is determined by Yukawa couplings -- one for each quark flavor and lepton species.
Schematically, this is the same as for a simple Yukawa system built out of a Dirac fermion and a real scalar, although the SM is based on Weyl fermions and a complex Higgs scalar that is an SU(2) doublet. The Yukawa coupling $y\,\phi\bar{\psi}\psi$ between the Dirac fermion $\psi$, the corresponding antifermion $\bar{\psi}$ and the real scalar $\phi$ gives rise to a mass term when the scalar develops a vacuum expectation value $\langle \phi \rangle = v$ in the symmetry-broken phase. There, one can express the scalar field in terms of excitations $\varphi$ around its expectation value, leading to $y\,\phi\bar{\psi}\psi \rightarrow m\, \bar{\psi}\psi + y\varphi \bar{\psi}\psi$, where $m = y\, v$. In the SM, the IR values of the Yukawa couplings are therefore known, because the masses of all fermions have been measured. In addition, both ATLAS and CMS have measured the Yukawa couplings of the heaviest quarks \cite{CMS:2018uxb,ATLAS:2018mme,CMS:2018nsn,ATLAS:2018kot} and the heaviest lepton \cite{ATLAS:2015xst,CMS:2017zyp}. This motivates a study of the Yukawa sector coupled to asymptotically safe gravity, to find out whether (i) the Yukawa sector is UV complete when gravity is present and (ii) the measured IR values can either be accommodated or even ``retrodicted''.
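As a quick numerical illustration (using the SM convention $m_f = y_f\, v/\sqrt{2}$ with $v\approx 246\,\mathrm{GeV}$, which differs by a factor of $\sqrt{2}$ from the simple real-scalar model above), the measured top quark mass translates into a Yukawa coupling of order one,
\begin{equation}
y_t=\frac{\sqrt{2}\, m_t}{v}\approx\frac{\sqrt{2}\cdot 172.8\,\mathrm{GeV}}{246\,\mathrm{GeV}}\approx 0.99\,.
\end{equation}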
\subsubsection{Simple Yukawa system}
The structure of the gravity-Yukawa system of the SM follows from the basic structure of a single Yukawa coupling and the gravitational effect on it, which is the same as for gauge couplings: out of a competition between a screening matter contribution and an antiscreening gravity contribution, an asymptotically free fixed point arises. Trajectories that start from it reach a range of values in the IR, which is bounded from above by the prediction from an asymptotically safe fixed point. The difference between the gauge and the Yukawa sector is that this mechanism is only at work in a part of the gravitational parameter space, cf.~\autoref{fig:mattermattersinterplay}.
For a simple Yukawa system as introduced above, the matter contribution in Eq.~\eqref{eq:matterbetaschem} is positive, $\beta_{y,\, 1}>0$, such that a simple Yukawa system features a Landau pole.
The sign of the gravitational contribution $f_y$ depends on the fixed-point value of the cosmological constant \cite{Oda:2015sma, Hamada:2017rvn, Eichhorn:2016esv, Eichhorn:2017eht},
see also \autoref{fig:mattermattersinterplay}: for fixed-point values below a critical value $\Lambda_{\mathrm{crit}}$, $f_y>0$ holds, such that the Gaussian fixed point $y_*=0$ is IR-repulsive.
Starting from this fixed point, finite values for the Yukawa couplings in the IR can be reached, and the IR value is a free parameter. Just like in the Abelian gauge sector, $f_y>0$ gives rise to a second, interacting fixed point, where the Yukawa coupling corresponds to an irrelevant, i.e., IR-attractive direction. This fixed point is hence connected to a single predictive trajectory, on which the IR value of the coupling is a prediction. This predictive trajectory provides an upper bound on the IR values of the Yukawa coupling that can be reached from the Gaussian fixed point $y_*=0$. \\
For $\Lambda>\Lambda_{\mathrm{crit}}$, the gravitational contribution is screening, i.e., $f_y<0$ \cite{Oda:2015sma, Hamada:2017rvn, Eichhorn:2016esv, Eichhorn:2017eht}. This makes the Landau-pole problem worse. Accordingly, if $\Lambda_*>\Lambda_{\mathrm{crit}}$, the only possibility to achieve a UV-complete Yukawa sector is to set $y=0$ at the Planck scale. Once $y$ is set to zero, there is an additional global symmetry, namely a chiral rotation for the fermion, $\psi \rightarrow e^{i\gamma_5\, \alpha}\psi$. This symmetry protects the Yukawa coupling, such that it cannot be regenerated below the Planck scale. Accordingly, the case $f_y<0$ results in a prediction, namely of a vanishing Yukawa coupling.
Thus, fixed-point values $\Lambda_{\ast}>\Lambda_{\mathrm{crit}}$ are excluded from the viable parameter space for the gravitational fixed-point values, because vanishing Yukawa couplings are in contradiction with observations.\\
Calculations of the gravitational fixed-point values in the presence of a single Dirac fermion and real scalar (i.e., the fields that make up a simple Yukawa system) yield the result $\Lambda_{\ast}> \Lambda_{\rm crit}$ \cite{Dona:2013qba, Meibohm:2015twa, Eichhorn:2016vvy, Eichhorn:2018ydy}. At larger number of fields, in particular, in the presence of all SM fields, different studies find differing results; however, e.g., \cite{Dona:2013qba, Eichhorn:2016vvy} (which rely on the background field approximation\footnote{In \cite{Pastor-Gutierrez:2022nki} $\Lambda_{\ast}< \Lambda_{\rm crit}$ is achieved in fluctuation computations by integrating out gravitational and matter fluctuations at slightly different scales.}) find that, once a third generation of SM fermions is present, the gravitational fixed-point value has moved to $\Lambda_{\ast}< \Lambda_{\rm crit}$. \\
\FRT{Further reading:}\\
\FR{Retrodicting the top mass}\\
In \cite{Eichhorn:2017ylw}, the gravity-generated interacting fixed point for the top Yukawa coupling makes it possible to calculate the top quark mass from first principles, yielding a value of about 171 GeV, which is, within the systematic uncertainties of the calculation, very well compatible with the experimental value of 172.8 GeV.
\subsubsection{Top-bottom-system}\label{sec:tby}
In the SM, the Yukawa couplings couple the right-handed $SU(2)$ singlets to the left-handed $SU(2)$ doublets and the Higgs field. Nevertheless, the gravitational contribution is the same as for a real scalar coupling to a Dirac fermion and its antifermion. The underlying reason is that gravity is ``blind'' to internal symmetries (in this case, the $SU(2)$). Therefore, the gravitational contribution to the top-quark Yukawa coupling $y_t$ and the bottom-quark Yukawa coupling $y_b$ is the same. Thus, one can, if $\Lambda_{\ast}< \Lambda_{\rm crit}$, achieve asymptotic freedom for the two Yukawa couplings. Asymptotic safety for both Yukawa couplings is ruled out, because the fixed-point values for the top and the bottom Yukawa coupling would be equal to each other. Their IR values would then also be close to each other (they are not exactly equal, because the gauge-field contributions to the two scale dependences differ), and this contradicts experiment: the top quark is about forty times as heavy as the bottom quark.
\begin{figure}[!t]
\includegraphics[width=0.45\linewidth]{Figures/Yukawa_stream1.pdf}\quad \includegraphics[width=0.45\linewidth]{Figures/Yukawa_stream2.pdf}
\caption{\label{fig:yukawastreams} We show the RG flow towards the IR in the plane spanned by $y_t$ and $y_b$. In the absence of the Abelian gauge field (left panel) there is complete symmetry between $y_t$ and $y_b$. In particular, the fully interacting fixed point, which attracts all trajectories, lies at $y_{t\, \ast} = y_{b\, \ast}$, resulting in a prediction of $y_t(M_{\rm Planck})= y_b(M_{\rm Planck})$, which cannot result in viable IR phenomenology. In the presence of an Abelian gauge coupling (right panel) the symmetry is broken. For purposes of illustration, we have chosen a large $f_y$. For $f_y = 1.188 \cdot 10^{-4}$, as in \cite{Eichhorn:2018whv}, the fully interacting fixed point lies at $y_{b\, \ast}\ll y_{t\, \ast}$, very close to the fixed point at $y_{t\, \ast}\neq 0, y_{b\, \ast}=0$.}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.8\linewidth]{Figures/tbyflow.pdf}
\end{center}
\caption{\label{fig:tbyflow} We show the scale-dependence of the top-bottom-gauge-system, with all three gauge couplings and the two Yukawa couplings, cf.~\cite{Eichhorn:2018whv}. Above the Planck scale, $y_t$, $y_b$ and $g_Y$ start out at an interacting fixed point that satisfies relation Eq.~\eqref{eq:FPrelationtby}. $f_g=9.7\cdot 10^{-3}$ and $f_y = 1.188\cdot 10^{-4}$ are chosen such that the bottom quark Yukawa and Abelian gauge coupling are predicted to agree with their measured value. The resulting top quark Yukawa coupling is somewhat higher than in the SM.}
\end{figure}
However, besides the gravitational contribution, there can also be a contribution from the Abelian gauge field. If $\ensuremath{g_{\mathrm{y}}}$ starts out at the asymptotically safe fixed point of \autoref{sec:Gaugesector}, then $y_{t\, \ast}\neq y_{b\, \ast}$ follows. This is because the top quark and the bottom quark do not have the same electric charge, and thus also not the same $U(1)$ hypercharge. This can be seen from their beta functions which, together with the beta function for the Abelian hypercharge coupling\footnote{ The non-Abelian gauge couplings have vanishing fixed-point values and are therefore not included in the beta function when one searches for fixed points. They are included when one follows the RG flow from the fixed points down to the IR, as in \autoref{fig:tbyflow}.}, in a truncation without higher-order couplings and to leading order in the couplings, read:
\begin{eqnarray}
\beta_{y_t}&=&\frac{y_t}{16\pi^2}\left( \frac{9}{2}y_t^2 + \frac{3}{2}y_b^2 -\frac{17}{12} g_Y^2\right)-f_y\, y_t,\label{eq:betayt}\\
\beta_{y_b}&=&\frac{y_b}{16\pi^2}\left( \frac{9}{2}y_b^2 + \frac{3}{2}y_t^2 -\frac{5}{12} g_Y^2\right)-f_y\, y_b,\label{eq:betayb}\\
\beta_{g_Y}&=&\frac{g_{Y}^3}{16\pi^2}\frac{41}{6}-f_{g}\, g_Y \label{eq:betagy}.
\end{eqnarray}
The system features an interacting fixed point, for which the fixed-point relation
\begin{equation}
y_{t\, \ast}^2 - y_{b\, \ast}^2 = \frac{1}{3}g_{Y\, \ast}^2\label{eq:FPrelationtby}
\end{equation}
holds. This fixed-point relation distinguishes the fixed-point values for top and bottom Yukawa, as soon as a non-zero fixed-point value for the Abelian gauge coupling is realized. Then, the top quark Yukawa coupling is also automatically much larger than the bottom-quark Yukawa coupling. The resulting beta functions for top and bottom are illustrated in \autoref{fig:yukawastreams}.
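The relation can be verified directly: at a fixed point with $y_{t\,\ast}\neq 0$ and $y_{b\,\ast}\neq 0$, the brackets in Eqs.~\eqref{eq:betayt} and \eqref{eq:betayb} must both equal $16\pi^2 f_y$. Subtracting the two conditions yields
\begin{equation}
\left(\frac{9}{2}-\frac{3}{2}\right)\left(y_{t\,\ast}^2-y_{b\,\ast}^2\right)=\left(\frac{17}{12}-\frac{5}{12}\right)g_{Y\,\ast}^2=g_{Y\,\ast}^2\,,
\end{equation}
which is Eq.~\eqref{eq:FPrelationtby}.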
Taking into account the flow of all three gauge couplings of the SM, and choosing $f_g$ and $f_y$ appropriately, produces an IR phenomenology that is rather close to that of the SM, cf.~\autoref{fig:tbyflow}.\\
This mechanism is remarkable, because it links the charge ratio of the two quarks to their mass ratio. In fact, other charge ratios are incompatible with the measured masses of the top and bottom quark, even if arbitrary values of $f_y$ and $f_g$ are considered.
In calculations based on Eq.~\eqref{eq:betayt}-\eqref{eq:betagy}, full agreement with the measured values cannot be reached and the top quark is $\sim 5-10$ GeV too heavy (depending on which approximation is used), cf.~\cite{Eichhorn:2018whv,Alkofer:2020vtb}. However, these calculations come with significant systematic uncertainties, e.g., by neglecting further, higher-order interactions.
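The fully interacting fixed point can also be located explicitly: for $y_t,\,y_b,\,g_Y\neq0$, the fixed-point conditions of Eqs.~\eqref{eq:betayt}--\eqref{eq:betagy} are linear in $y_t^2$, $y_b^2$ and $g_Y^2$. The sketch below solves them for the values of $f_g$ and $f_y$ quoted in the caption of \autoref{fig:tbyflow}:
\begin{verbatim}
import numpy as np

f_g, f_y = 9.7e-3, 1.188e-4
c = 16 * np.pi**2

gY2 = c * 6 * f_g / 41                 # from beta_gY = 0 with gY != 0
# beta_yt = beta_yb = 0 with yt, yb != 0, linear in yt^2 and yb^2:
A = np.array([[9 / 2, 3 / 2],
              [3 / 2, 9 / 2]])
b = np.array([17 / 12 * gY2 + c * f_y,
              5 / 12 * gY2 + c * f_y])
yt2, yb2 = np.linalg.solve(A, b)
print(np.sqrt(gY2), np.sqrt(yt2), np.sqrt(yb2))
# ~0.47, ~0.27, ~0.004: y_b* << y_t*, consistent with the figure
# captions, and yt2 - yb2 = gY2/3 reproduces Eq. (eq:FPrelationtby).
\end{verbatim}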
Further, the Yukawa couplings of the other generations cannot be neglected at very high scales, contrary to what one may first think. This is because, although the other Yukawa couplings themselves are very small compared to $y_t$, the three-generation quark system features CKM-mixing. At very high energies, the CKM matrix elements are scale dependent, triggering a deviation of the fixed-point structure from Eq.~\eqref{eq:FPrelationtby}. Therefore, we consider the full quark sector of the SM next.
\subsubsection{Quark sector of the SM}
In the SM, the quark Yukawa sector contains ten beta functions: six for the Yukawa couplings and four for the CKM matrix elements. The CKM matrix describes mixing in the quark sector, i.e., the electroweak interaction can change the flavors.
Because it is unitary, the CKM matrix contains four physical parameters, with gravity-independent beta functions.\footnote{The independence of the CKM matrix from gravity contributions can be shown from the flavor universality of gravity, i.e., the ``blindness'' of gravity to internal symmetries.}\\
In \cite{Alkofer:2020vtb}, see also \cite{Kowalska:2022ypk}, the resulting complexity of the analysis was dealt with by making an assumption about the fixed-point structure, namely that the CKM-matrix elements assume fixed-point values which are independent of the Yukawa fixed-point values. The fixed-point conditions then factorize, and a fixed-point search for the CKM matrix elements can be conducted first.\\
Thereby, one obtains several simple fixed-point configurations for the CKM matrix, which have zeros or ones as the only entries. Two of those fixed-point solutions are phenomenologically important:
It was already observed in \cite{Pendleton:1980as} that a diagonal CKM matrix (which is close to the actual measured values) is an IR fixed point, because it has three IR attractive directions. This fixed point is approached in the IR, starting from another fixed point, namely an off-diagonal CKM matrix with four IR repulsive directions. The corresponding flow is very slow (even on logarithmic scales, the CKM matrix elements are essentially constant); therefore the transition from the off-diagonal to the near-diagonal configuration occurs at highly transplanckian scales.\\
In a second step, one can analyze the consequences for the Yukawa system. Because the CKM-matrix elements enter the beta functions for the Yukawa couplings, those beta functions change, when the CKM matrix changes from an off-diagonal to a near-diagonal configuration.
Therefore, in the very far UV, where the CKM matrix is off-diagonal, the fixed-point values for the Yukawa couplings are modified compared to an analysis without flavor mixing, as in \cite{Eichhorn:2018whv}.
It turns out that among the many fixed points that the beta functions have, only the asymptotically free one is phenomenologically relevant. It can be achieved if $f_y>-2.2\cdot 10^{-4}$. Interacting fixed points, most importantly one at which $y_{t\ast} \neq 0$, remain important, because, starting from the asymptotically free fixed point in the very far UV, the system approaches such an interacting fixed point at intermediate scales, see the schematic illustration in \autoref{fig:CKMschematic}.
\begin{figure}
\includegraphics[width=\linewidth]{Figures/CKM_schematic.pdf}
\caption{\label{fig:CKMschematic} Under the impact of asymptotically safe gravity, the quark Yukawa couplings start out at an asymptotically free fixed point in the very deep UV, at which the CKM matrix is off-diagonal. They are attracted towards an interacting fixed point, due to its irrelevant directions. When the CKM matrix elements transition towards a near-diagonal configuration, the fixed-point values at the interacting fixed point for the Yukawa couplings change. Over a large range of scales, that fixed point determines the properties of the Yukawa system. At the Planck scale, gravity decouples dynamically, and the flow of the quark Yukawa sector is exactly that of the SM, cf.~\cite{Alkofer:2020vtb} for more details.}
\end{figure}
In summary, asymptotically safe gravity may UV complete the quark Yukawa sector of the SM. It may even have predictive power: if one assumes that asymptotic safety holds above the Planck scale, but not to arbitrarily high scales (e.g., because of a more fundamental UV completion), the mechanism discussed in \autoref{sec:tby} may link quark masses to their charges. Only if one insists that the RG flow should continue to make sense at scales as high as $k\approx 10^{1000}\, \rm GeV$, does the flow of the CKM matrix matter, triggering an approach towards the asymptotically free fixed point (if one follows the RG flow in the reverse direction, i.e., towards the UV).
Several open questions currently remain, including (i) a study of the full lepton sector of the SM, (ii) a search for fixed points for which the factorization hypothesis between CKM matrix elements and Yukawa couplings is given up, and (iii) the inclusion of higher-order effects in the beta functions, coming from additional interactions beyond the truncations considered to date.\\
\FRT{Further reading:}\\
\FR{Learning about dark sectors from predictions for the SM:}\\
All matter gravitates. This is also true for dark matter, such that the scale-dependence of the gravitational couplings is affected by all visible and dark matter. Hence, dark matter influences the fixed-point values of the Newton coupling and the cosmological constant. These in turn enter the gravitational contribution to the scale-dependence of SM couplings, and hence determine the interacting fixed-point value for the Abelian gauge coupling and the Yukawa couplings. On the one hand, this changes the prediction for the low-energy values arising from the interacting fixed point. On the other hand, the presence of dark matter lowers the upper bound on the low-energy couplings that can be reached from the free fixed points. Hence, if too many dark-matter fields are present, this upper bound might drop below the experimentally observed value, thereby ruling such dark-matter models out. Thus, the predicted low-energy value of SM couplings might put constraints on the number of dark-matter fields. In \cite{Eichhorn:2017ylw}, a proof-of-principle for this idea was given: it was shown that the upper bound on the top quark mass depends on the presence of additional matter fields. According to that analysis, if three right-handed neutrinos and an axion are added to the SM, the upper bound increases, which results in viable IR phenomenology (and implies an asymptotically free fixed point for the top Yukawa).
\subsection{Higgs quartic coupling in the Standard Model}
\label{sec:Higgs}
\emph{Synopsis: Gravity screens the Higgs quartic interaction, such that the ratio of the Higgs mass to the Higgs vacuum expectation value is a prediction from asymptotically safe quantum gravity, dating back to before the measurement of the Higgs mass at the LHC.\\
The value for this prediction depends on the gauge and Yukawa couplings in the SM, in particular the Abelian gauge coupling and the top Yukawa coupling. If both are asymptotically free, the Higgs mass is predicted to be only a few GeV above the experimental value (at least if the central value for the top quark mass is assumed).
If both are asymptotically safe, the Higgs mass comes out larger than that, but can be lowered by a BSM Higgs portal coupling to a dark sector.}
The beta function for the Higgs quartic coupling $\lambda_H$ is given by
\begin{eqnarray}
\label{eqn:beta_lambda}
\beta_{\lambda_\text{H}}= - f_s \lambda_\text{H}
&+& \frac{1}{16\pi^2} \left( - 6 y_t^4 + \frac{3}{8}\left(2 g_2^4 + (g_2^2 + \frac{5}{3} g_Y^2)^2\right) \right) \nonumber \\
&+& \frac{1}{16\pi^2} \lambda_{\rm H} \left(12 y_t^2-9 g_2^2-5 g_Y^2 \right) + \frac{3}{2\pi^2}\lambda_{\rm H}^2\,
,\end{eqnarray}
with the top Yukawa coupling $y_t$, the Abelian hypercharge coupling $g_Y$, the ${\rm SU}(2)_L$ gauge coupling $g_2$ and the gravitational contribution $f_s$. All other Yukawa couplings also contribute in principle, but are in practice negligible, because all other fermions are much lighter than the top quark. In \cite{Shaposhnikov:2009pv}, the following idea was developed: if the gauge couplings and the top Yukawa coupling are asymptotically free under the impact of quantum gravity and $f_s<0$, then Eq.~\eqref{eqn:beta_lambda} has the fixed-point solution $\lambda_{H\, \ast}=0$. This fixed point is IR-attractive, i.e., quantum-gravity fluctuations ensure that $\lambda_H(k=M_{\rm Planck})=0$. It is one of the astonishing results of the LHC that $\lambda_H(k=M_{\rm Planck})\approx0$ is what needs to be realized to obtain the measured Higgs mass. Explaining this special value is indeed one of the key challenges for particle physics at the moment.
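To make the IR attractivity explicit, consider a minimal sketch in which the SM contributions in Eq.~\eqref{eqn:beta_lambda} are neglected (at an asymptotically free fixed point for the gauge and top Yukawa couplings, they vanish). Linearizing about $\lambda_{H\, \ast}=0$ then gives
\begin{equation}
\beta_{\lambda_H}\approx - f_s\, \lambda_H \quad \Rightarrow \quad \lambda_H(k) = \lambda_H(k_0)\left(\frac{k}{k_0}\right)^{-f_s},
\end{equation}
so that for $f_s<0$ any small deviation from $\lambda_H=0$ dies out towards the IR; the critical exponent is $\theta = f_s<0$, i.e., the quartic coupling corresponds to an irrelevant direction.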
Below the Planck scale, the Higgs quartic interaction is regenerated by gauge and top quark fluctuations, which enter through the terms $y_t^4$ and $g_i^4$ in Eq.~\eqref{eqn:beta_lambda}. The vanishing Planck-scale value of $\lambda_\text{H}$ is thereby mapped onto a unique value at the electroweak scale, where it determines the ratio of the Higgs mass $M_{\rm Higgs}$ and the Higgs vacuum expectation value $v_{\rm Higgs} = 246\, \rm GeV$:
\begin{equation}
\lambda_H(k_{\rm IR})=\frac{1}{2} \left(\frac{M_{\rm Higgs}}{v_{\rm Higgs}}\right)^2.
\end{equation}
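For orientation, inserting the measured value $M_{\rm Higgs} = 126 \, \rm GeV$ quoted below yields
\begin{equation}
\lambda_H(k_{\rm IR})=\frac{1}{2} \left(\frac{126\, \rm GeV}{246\, \rm GeV}\right)^2 \approx 0.13,
\end{equation}
which is the electroweak-scale value that the Planck-scale boundary condition $\lambda_H(k=M_{\rm Planck})=0$ has to reproduce.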
The map depends sensitively on the top Yukawa coupling \cite{Bezrukov:2014ina}, which is only known with a significant systematic uncertainty. Thereby, a Higgs mass of $M_{\rm Higgs} = 129 \, \rm GeV$ comes out for a top mass of $M_t = 172.9\, \rm GeV$, but a top mass of $M_t= 170.9\, \rm GeV$ is not ruled out, which means that the measured value $M_{\rm Higgs} = 126 \, \rm GeV$ could be compatible with $\lambda_H(k=M_{\rm Planck})=0$.\\
Thus, the idea in \cite{Shaposhnikov:2009pv} led to a successful prediction of the Higgs mass in the vicinity of the measured value, several years prior to the experimental discovery of the Higgs particle at the LHC \cite{ATLAS:2012yve, CMS:2012qbp}. This distinguishes the Higgs sector from the other sectors of the SM, where asymptotic safety may also allow one to calculate masses and couplings from first principles, however, only after they have been measured, i.e., as ``postdictions", not genuine predictions.\\
There is a second difference compared to the prediction in the gauge and Yukawa sector: there, the IR values of the couplings depend sensitively on the values of $f_g$ and $f_y$. In contrast, the prediction for the Higgs mass only depends on the sign of $f_s$. This is because the fixed point for $\lambda_H$ is always the non-interacting one, $\lambda_{H\, \ast}=0$, independent of the value of $f_s$. As long as $f_s<0$, this fixed point is IR attractive and a prediction follows.\\
Following \cite{Percacci:2015wwa, Labus:2015ska, Oda:2015sma, Hamada:2017rvn, Eichhorn:2017als,Eichhorn:2017ylw, Pawlowski:2018ixd, Wetterich:2019rsn, Eichhorn:2020sbo} and even predating \cite{Narain:2009fy} the idea for the prediction of the Higgs mass \cite{Shaposhnikov:2009pv}, evidence for $f_s<0$ has accumulated (with many papers working in the opposite sign convention with $\beta_{\lambda_H}|_{\mathrm{gravity}} = f_s \, \lambda_H$).\\
As a second possibility for a UV-complete Higgs sector, the Abelian hypercharge and top Yukawa coupling may assume an interacting fixed point, in turn inducing an interacting fixed point for the Higgs quartic coupling
\begin{equation}
\label{eq:lambda_interacting_fp}
\lambda_{\text{H}\, \ast}= \frac{5}{48}\ensuremath{g_{\mathrm{y},\,\ast}}^2-\frac{1}{4} y_{t \ast}^2 + \frac{\pi^2}{3} f_s + \frac{1}{48}\sqrt{\left(12 y_{t\ast}^2-5\ensuremath{g_{\mathrm{y},\,\ast}}^2 - 16 \pi^2 f_s \right)^2+576 y_{t \ast}^4 - 100 \ensuremath{g_{\mathrm{y},\,\ast}}^4}.
\end{equation}
The fixed-point value now depends on the value of $f_s$, not just its sign, and also depends on the gauge and Yukawa couplings. If they assume fixed-point values which result in the measured values in the IR, the Higgs mass comes out larger than $129 \, \rm GeV$. This rules out such an interacting fixed point in the SM with gravity. New physics is thus required in this scenario, see below.\\
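As a simple consistency check of Eq.~\eqref{eq:lambda_interacting_fp}, setting $y_{t\, \ast}=0=\ensuremath{g_{\mathrm{y},\,\ast}}$ with $f_s<0$ gives
\begin{equation}
\lambda_{\text{H}\, \ast} = \frac{\pi^2}{3} f_s + \frac{1}{48}\sqrt{\left(16\pi^2 f_s\right)^2} = \frac{\pi^2}{3}\left(f_s + |f_s|\right) = 0,
\end{equation}
recovering the free fixed point of the first scenario.\\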
\FRT{Further reading:}\\
\FR{Higgs mass prediction with dark sector}\\
In \cite{Eichhorn:2021tsx}, it was proposed that adding a Higgs portal, i.e., a coupling between the Higgs field and a dark scalar field, to an interacting dark sector with scalars and fermions could simultaneously solve two problems: first, it could provide a particle candidate to explain the observed dark-matter relic density. Second, it could shift the predicted Higgs mass towards lower values, starting from the fixed point at which the top Yukawa and Abelian gauge coupling are nonzero. This would place a truly asymptotically safe UV completion of the SM with gravity and dark matter, with a high predictive power, within reach. The study in \cite{Eichhorn:2021tsx} is done within a toy model of the full SM (containing a real scalar as the Higgs and a Dirac fermion as the top quark, but no gauge fields), with the extension to the full SM being an obvious and important open question.
Similarly, in \cite{Kwapisz:2019wrl} it was found that a new massive $Z'$ boson, corresponding to a gauged $U(1)_{B-L}$ symmetry, lowers the predicted value of the Higgs mass, while a sterile quark axion has only little impact on the prediction.
\FR{Resurgence mechanism and scalar mass parameter}\\
In \cite{Shaposhnikov:2009pv}, the ratio of Higgs mass to electroweak scale is predicted, but not the electroweak scale itself. This is because the Higgs mass parameter, i.e., the quadratic term in an expansion of the potential about vanishing field value, is a free parameter, that is, assumed to be RG relevant. In \cite{Wetterich:2016uxm}, it was suggested that this could change, consistent with results in \cite{}, which confirm that quantum gravity contributes negatively to the corresponding critical exponent. Thus, if gravitational fluctuations are large enough, the Higgs mass parameter becomes irrelevant. If it does so at vanishing fixed-point value, the resulting low-energy prediction is a vanishing electroweak scale. However, if the asymptotically safe gravity-matter fixed point has nonvanishing gauge and/or Yukawa couplings, these shift this fixed-point value away from zero. Whether or not this may lead to a scenario in which the electroweak scale is predicted at the right value is currently an open question. It should be stressed, though, that the required strength of gravitational fluctuations is large and not compatible with weak-gravity bounds, see \autoref{sec:WGB}.\\
\FR{Higgs inflation}\\
Higgs inflation \cite{Bezrukov:2007ep} is based on a nonminimal coupling $\xi$ between the Higgs scalar and the curvature scalar. It could explain inflation without the need for extra fields beyond the SM. A change to the Einstein frame, i.e., a conformal transformation of the metric that removes the nonminimal coupling, produces a potential that is appropriate for inflation for suitable values of the couplings. This model is attractive due to its predictive power, because it does not require any BSM fields and contains only one free parameter, namely the nonminimal coupling. In \cite{Eichhorn:2020kca}, it was found that the nonminimal coupling is predicted in asymptotic safety, at least for those values of $G$ and $\Lambda$ for which Yukawa couplings can be nonzero. It turns out that the predicted ratio $\lambda_4/\xi^2$ of the Higgs quartic coupling to the squared nonminimal coupling is much too large to be compatible with CMB data. This result, if confirmed in extended truncations, rules out Higgs inflation in asymptotically safe gravity. The same conclusion was reached already in \cite{Wetterich:2019rsn}, based on a study of the Higgs potential at large field values.
\subsection{Higgs portal to dark sectors}
\emph{Synopsis: The portal coupling between the SM Higgs and a dark scalar, which is a popular, but increasingly tightly constrained coupling between the SM and a WIMP, is predicted to vanish in asymptotic safety. In contrast, a portal coupling to a dark sector with additional fields beyond the dark scalar may be generated either above or below the Planck scale. In contrast to phenomenological models of dark matter, such asymptotically safe portal models have a high predictive power.}
A (massive) dark scalar $d$ may couple to the Higgs scalar $H$ of the SM through a portal coupling
\begin{equation}
S = \lambda_{p}\, \int d^4x\, H^{\dagger} H\, d^2.
\end{equation}
If the coupling is of similar size to SM couplings, the dark scalar is in thermal equilibrium and thus produced through a standard freeze-out mechanism in the early universe. Whether or not a single scalar is a viable dark-matter candidate therefore depends on the size of the coupling.
The coupling is also key to observational constraints, e.g., through the non-observation of scattering off SM particles in dedicated dark-matter experiments and non-observation of production at the LHC.\\
Similar to the quartic Higgs self-interaction, the gravitational contribution pushes $\lambda_p$ towards irrelevance at the free fixed point, which therefore is IR-attractive, and no other fixed point is generated:
\begin{equation}
\beta_{\lambda_p}\vert_{\rm grav} = - f_{s} \lambda_p.
\end{equation}
Furthermore, since gauge and Yukawa couplings do not contribute to the beta function of $\lambda_p$, the portal coupling is not regenerated below the Planck scale.
Thus, $\lambda_p=0$ is a prediction from asymptotic safety that holds at all scales \cite{Eichhorn:2017als}.\\
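A schematic way to see this is the following sketch (assuming a single $\mathbb{Z}_2$-symmetric dark scalar whose only connection to the SM is the portal term, with generic loop coefficients $c_i$ and the dark quartic coupling $\lambda_d$):
\begin{equation}
\beta_{\lambda_p} = - f_s\, \lambda_p + \frac{\lambda_p}{16\pi^2}\left(c_1\, \lambda_{\rm H} + c_2\, \lambda_d + c_3\, \lambda_p + c_4\, y_t^2 + \ldots\right),
\end{equation}
in which every term carries at least one power of $\lambda_p$. Hence $\lambda_p=0$ is a fixed point of the flow both above and below the Planck scale.\\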
Intriguingly, experiments continuously improve the strength of bounds on $\lambda_p$, but have not led to a detection, making the prediction from asymptotic safety compatible with the current experimental situation.\\
This result may be circumvented in a more complex dark sector which contains more than one field. For instance, the portal coupling is regenerated below the Planck scale, if a new gauge field couples to the Higgs and the dark scalar. Gauge field fluctuations generate the portal coupling below the Planck scale, leading to a prediction for the portal coupling as a function of the new gauge coupling \cite{Reichert:2019car,Hamada:2020vnf}. As a second example, an additional dark fermion with a Yukawa coupling to the dark scalar can also generate the Higgs portal coupling \cite{Reichert:2019car, Eichhorn:2020kca}. Dark fermions may even generate a portal coupling at transplanckian scales, i.e., in the fixed-point regime: in a two-step mechanism, a fixed point with a finite dark and visible Yukawa coupling necessarily features a dark and visible non-minimal coupling. Together, the two non-minimal couplings generate a portal coupling. This model, although a toy model with an incomplete SM sector, is a striking example of the predictive power asymptotic safety may have: when viewed as an effective, phenomenological model, it has 9 free parameters (two scalar masses, two quartic scalar couplings, one portal coupling, two non-minimal couplings, two Yukawa couplings). At an asymptotically safe fixed point, only the two mass parameters remain as free parameters. This reduces the parameter space of the model dramatically, see \cite{Eichhorn:2020kca,Eichhorn:2020sbo}. \\
Similarly, \cite{Grabowski:2018fjj} derives specific predictions for the masses of dark-matter particles, if the \emph{conformal Standard Model} \cite{Meissner:2006zh} becomes asymptotically safe. A similar predictive power was observed in a study \cite{Kowalska:2020zve} of BSM models which provide a dark-matter candidate while simultaneously explaining the measured value of the muon $g-2$. There, many phenomenological models were ruled out when the coupling to gravity was included in a parameterized way, by including appropriate terms $\sim f_g, \, f_y$ into the beta functions, see also \autoref{sec:SMUVcompletion}. \\
\FRT{Further Reading}\\
\FR{Higgs portal couplings and gauged $B-L$ symmetry}\\
A Higgs portal to a dark sector could also become relevant for models involving a gauged $B-L$ symmetry and thus a new gauge boson beyond the SM. In this case, the dark scalar field spontaneously breaks the $B-L$ symmetry. Imposing that these models become asymptotically safe under the inclusion of quantum gravity restricts the parameter space of new physics significantly. In particular, the kinetic mixing between the new gauge boson and the SM gauge bosons is fixed, and in some cases the branching fraction of the new gauge boson to SM particles can be predicted, see \cite{Boos:2022jvc}. This model can also be extended to accommodate fermionic dark matter, which is consistent with the observed relic density \cite{Boos:2022pyq}. Imposing asymptotic safety allows constraining the mass of the dark-matter particles as a function of the mass of the new gauge boson.
\subsection{Axion-like particles in the asymptotically safe landscape}
\emph{Synopsis: ALPs couple to the electromagnetic field through a dimension-five operator, which allows a conversion between photons and ALPs that enables experimental searches for ALPs. Within asymptotically safe gravity, there are indications that the ALP-photon coupling is driven to zero, unless gravity is strongly coupled. At strong coupling, there may be a tension with the weak-gravity bound (if it indeed exists), such that there may be a prediction from asymptotic safety, that the ALP-photon coupling vanishes.}\\
The axion is a conjectured BSM particle which solves the strong CP problem. That ``problem" consists in the observation that the coupling of the term $F_{\mu\nu}\tilde{F}^{\mu\nu}$ is very small or potentially zero in QCD.\footnote{$F_{\mu\nu}\tilde{F}^{\mu\nu}$ is a total derivative both in the Abelian and in the non-Abelian case; in the non-Abelian case, however, topologically nontrivial field configurations exist, so that the term can nevertheless have physical effects.} \footnote{Here, we have put quotation marks to highlight that the CP problem is not actually a consistency problem, but a finetuning problem. Such finetuning problems start from the assumption that it is ``natural" for dimensionless numbers to be close to one. Therefore, a small number is said to require an explanation. We disagree with the expectation that a small number requires an explanation more than a number of order one does. Ultimately, a theory in which all free parameters become calculable from first principles would be most satisfying. In the absence of such a theory, any value can be chosen for a free parameter and a particular deviation from 1 does not require less explanation than a particular deviation from 0.} It can be solved by introducing an additional degree of freedom, namely the axion \cite{Peccei:1977hh}, which takes the place of the coupling of that term. A dynamical mechanism \cite{Peccei:1977ur} drives the expectation value of the axion, i.e., the coupling of that term, to zero.\footnote{It remains an intriguing open question whether the work in \cite{Peccei:1977ur} can be extended to a gravitational setting, where gravitational contributions to the anomalous dimension of the gauge field generate a flow for the coupling. This may solve the strong CP ``problem" without the need for new degrees of freedom.} At the same time, a coupling between the axion field $a$ and the electromagnetic field strength is generated, which takes the form
\begin{equation}
S_{\rm axion-photon} = \int d^4x\, \bar{g}_a\, a\,F_{\mu\nu}\tilde{F}^{\mu\nu}.
\end{equation}
ALPs are pseudoscalars, like the axion, which have the same coupling to photons. Axions and ALPs are very weakly coupled dark-matter candidates which can be generated out of equilibrium in the early universe, see \cite{Ferreira:2020fam} for a review.\\
In string theory, the axion and ALPs are expected to exist \cite{Ringwald:2012cu}. If, therefore, they do not exist (or do not couple to photons) in asymptotic safety, that would be a discriminator between the two candidates for quantum gravity. In view of numerous searches for axion-photon and ALP-photon couplings, this discriminator is highly relevant.
In fact, the gravitational contribution to the flow of the ALP-photon coupling $g_a = \bar{g}_a \, k$ is towards relevance at the fixed point $g_{a\, \ast}=0$. However, the gravitational contribution needs to overwhelm the canonical scaling dimension. Otherwise, the fixed point at $g_{a\, \ast}=0$ cannot be connected to a nonzero ALP-photon coupling in the IR. The gravitational contribution can overwhelm the canonical scaling dimension, if gravitational fluctuations are strong enough. This is in conflict with the weak-gravity bound, if the latter indeed exists. In \cite{deBrito:2021akp}, it is therefore concluded that the ALP-photon coupling may be predicted to vanish in asymptotically safe gravity.\\
Should the weak-gravity bound not persist, the ALP-photon coupling satisfies an upper bound in asymptotic safety, because, schematically, the beta function is of the form
\begin{equation}
\beta_{g_a} = g_a +\beta_{1}\, g_a\, G_N + \beta_2 g_a^3,
\end{equation}
where $\beta_1$ is a function of the cosmological constant that is typically negative and $\beta_2$ is positive. The associated fixed-point structure is therefore the same as for the Abelian gauge coupling or some of the Yukawa couplings. The fixed point at $g_{a\, \ast}>0$ hence imposes an upper bound on IR values of the ALP-photon coupling. In contrast to the Abelian gauge coupling and some of the Yukawa couplings, this fixed point only becomes available if $\beta_1\, G_N <-1$, because of the canonical dimension of $g_a$.
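Explicitly, the nontrivial zero of the schematic beta function above lies at
\begin{equation}
g_{a\, \ast}^2 = -\frac{1+\beta_1\, G_N}{\beta_2},
\end{equation}
which, for $\beta_2>0$, is positive only if $\beta_1\, G_N<-1$.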
In summary, if the weak-gravity bound persists, ALP-photon couplings are likely driven to zero in asymptotic safety, implying that experiments will continue to place tighter constraints without a discovery. If the weak-gravity bound turns out to be a truncation artefact, an upper bound on the ALP-photon coupling exists. Both scenarios are experimentally testable and are in contrast to string theory.
\subsection{Grand unified theories}
\emph{Synopsis: If the matter content of a grand unified theory is compatible with an asymptotically safe fixed point in gravity, then asymptotically safe GUTs are much more predictive than GUTs without gravity. The scalar potential for the many scalars that are required to spontaneously break the large gauge group to the SM gauge groups is completely fixed, except for a single parameter for each scalar field. Thereby, many breaking chains that are considered in gravity-free GUTs are no longer available in asymptotically safe GUTs.}
GUTs are attractive, because they explain charge quantization and, in particular, why the charges of the proton and the electron are exactly equal in absolute value. Further, they can, upon spontaneous symmetry breaking to the SM gauge group, automatically give rise to a right-handed neutrino with the required SM charges. Besides this motivation, the near-crossing of the values of SM gauge couplings at energy scales of about $10^{16}\, \rm GeV$ may be interpreted as an indication for a unification of the SM gauge groups to one larger group.
However, GUTs are unattractive, because they come with a plethora of free parameters. These are linked to the scalar potential: in order to break the GUT gauge group to the SM gauge groups, several scalars are needed, which typically introduce numerous quartic couplings. There are multiple quartic invariants, because there are typically several scalar fields transforming in different representations of the gauge group. Depending on the representation, several different quartic invariants exist. In addition, quartic interactions can be built from quadratic interactions of two different scalars.
In a typical GUT setting, these are free parameters. In turn, starting from a grand unified symmetry, many different chains of spontaneous symmetry breaking are available, depending on the values one chooses for these free parameters. It is therefore unexplained, why the SM, instead of a theory with a different symmetry, should come out as the low-energy limit of the GUT.
In \cite{Eichhorn:2019dhg}, it was proposed that, if a GUT can become asymptotically safe under the impact of quantum gravity\footnote{It is not conclusively established whether this is the case; the numerous matter fields of a GUT may even destroy the fixed point in the gravitational sector. Different studies come to different conclusions on this point, see, e.g., \cite{Dona:2013qba, Wetterich:2019zdo}. Because these studies differ mainly in their choice of regulator function, the differing results may be interpreted as indicating the need to include further interactions in the studies.}, then scalar potentials may largely be fixed. Specifically, by the same mechanism as in \autoref{sec:Higgs} for the Higgs sector of the SM, all quartic couplings of the scalar potential may be predicted. The quadratic couplings, linked to the mass parameters, are expected to remain free parameters, but these add only a single free parameter for each scalar. In addition, the unified gauge coupling may also be predicted, by the same mechanism as the Abelian gauge coupling in the SM, if the matter content of the GUT is such that the gauge coupling is no longer asymptotically free \cite{Eichhorn:2017muy}.\\
In turn, fixing the values of the quartic couplings removes much of the freedom in choosing scalar potentials to accommodate different breaking chains. As a consequence, one may expect that multiple breaking chains may be excluded and the asymptotically safe GUT setting may combine the attractive attributes of a GUT with the high predictive power of asymptotic safety, achieving explanations of many of the properties of the SM.
This general idea was put to the test in \cite{Held:2022hnw}, where it was indeed shown that for SO(10) GUTs with a 16- and 45-dimensional scalar representation, particular breaking chains can be excluded.
Such a result motivates further studies to determine whether (i) asymptotic safety can be achieved in gravity-GUT-theories and (ii) whether breaking chains to the SM are available or not.
\subsection{Neutrino masses}
\emph{Synopsis: Neutrino masses may be generated by adding a right-handed Weyl fermion to each generation in the SM, together with a Yukawa coupling to the Higgs field. At an asymptotically safe fixed point, such Yukawa couplings are automatically driven to zero. A cross-over trajectory may therefore exist, which spends many scales close to such a fixed point and thereby drives the neutrino Yukawa coupling to tiny values, thus providing an explanation for the smallness of neutrino masses.\\
Alternatively, the Seesaw mechanism may provide naturally small neutrino masses through heavy right-handed neutrinos. Asymptotic safety may constrain models based on the seesaw mechanism.}
There are several possible ways to generate neutrino masses, two of which we will discuss, namely the inclusion of right-handed Weyl neutrinos, and the addition of Majorana masses for right-handed neutrinos.\\
First, by adding a right-handed Weyl fermion to each generation of the SM, one can introduce a Yukawa coupling $y_{\nu}$ for neutrinos. To generate neutrino masses in the meV range, this Yukawa coupling has to be as small as $y_{\nu}\sim 10^{-13}$. In
\cite{Held:2019vmi,Kowalska:2022ypk,Eichhorn:2022vgp}, it was shown that asymptotic safety may provide an explanation for such a small value: Extending the studies of the Yukawa sector of the SM in \cite{Eichhorn:2017ylw,Eichhorn:2018whv}, see \autoref{sec:Yukawas}, by the neutrino Yukawa coupling, one finds two possible fixed points: one, at which all Yukawa couplings are asymptotically free, and a second one, at which top and bottom Yukawa couplings are nonzero. The first fixed point can accommodate the tiny IR-value of the neutrino Yukawa coupling by choosing a suitable trajectory, but cannot explain it. The second fixed point, where top and bottom Yukawa couplings are non-vanishing in the UV, predicts a vanishing neutrino Yukawa coupling, and is therefore phenomenologically not viable.
In combination, however, these two fixed points dynamically generate a tiny neutrino Yukawa coupling: starting from the asymptotically free fixed point, all Yukawa couplings grow, until the top and bottom Yukawa coupling reach the vicinity of the interacting fixed point. There, the critical exponent of the neutrino Yukawa coupling switches sign from positive (relevant) to negative (irrelevant), driving the neutrino Yukawa coupling back down to tiny values.
Hence, the neutrino generically ends up much lighter than the other fermions.
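A minimal caricature of this mechanism (a sketch, not the full system of \cite{Held:2019vmi}; $c>0$ denotes a generic loop coefficient) is
\begin{equation}
\beta_{y_\nu} = y_\nu \left(- f_y + c\, y_t^2\right).
\end{equation}
At the free fixed point (for $f_y>0$), $y_\nu$ is relevant and grows towards the IR; near the interacting fixed point with $c\, y_{t\, \ast}^2>f_y$, it becomes irrelevant and is driven back down, $y_\nu(k)\sim k^{\, c\, y_{t\, \ast}^2-f_y}$. Spending $N$ e-folds of RG time in the vicinity of the interacting fixed point thus suppresses $y_\nu$ by $e^{-\left(c\, y_{t\, \ast}^2-f_y\right)N}$; a value $y_\nu \sim 10^{-13}$ corresponds to $N\approx 30/\left(c\, y_{t\, \ast}^2-f_y\right)$.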
Without an asymptotically safe fixed point of the above type, no mechanism appears to exist to explain a tiny neutrino Yukawa coupling; this form of neutrino mass generation was therefore long considered unappealing, because it is not ``natural".\footnote{There is clearly some arbitrariness in the notion of naturalness: the ratio between electron mass and top quark mass is already $10^{-6}$, but is usually not viewed as a motivation to think about alternative mass generation mechanisms for the electron. Instead, the line between ``natural" and ``unnatural" is in this case drawn somewhere below $10^{-6}$, so that the ratio of about $10^{-9}$ between an meV-neutrino-mass-scale and the electron mass scale is considered ``unnatural". This arbitrariness already suggests that naturalness may at best be a slight motivation to search for alternative explanations, but not a strong and unequivocal reason to rule an ``unnatural" setting out.}
As an alternative, one can introduce heavy right-handed neutrinos, with masses around the GUT scale, with a Majorana mass term. The neutrino mass matrix, upon diagonalization, produces neutrino masses which are inversely proportional to the heavy Majorana mass scale, making neutrinos ``naturally" light.\\
Majorana masses were investigated in the context of asymptotically safe systems in \cite{DeBrito:2019rrh}, where they were found to remain relevant under the impact of quantum gravity fluctuations. Accordingly, the corresponding mass scale can be chosen freely, providing a basis for the seesaw mechanism with heavy right-handed neutrinos. In \cite{Domenech:2020yjf}, the seesaw mechanism for a specific choice of heavy fields was investigated and constrained. In particular, the additional fields have an impact on the prediction of the Higgs mass and top quark mass, because both depend on physics at the new, heavy scale. \cite{Domenech:2020yjf} therefore also constitutes an example of how an embedding into an asymptotically safe UV completion constrains not just the deep IR (around the electroweak scale), but also physics at intermediate scales which are beyond the reach of current experiments.
\subsection{$g-2$ and flavor anomalies}
\emph{Synopsis: There might be a possibility for new physics at the electroweak scale, in order to resolve tensions between SM predictions and experimental data on the muon magnetic moment and on lepton-flavor non-universality in rare B meson decays. Among the phenomenological models that have been proposed, asymptotic safety could act as a discriminator, because its predictive power may rule out values of couplings which are required to resolve the tensions.}
Current experimental data on parameters of the SM indicate several anomalies, i.e., discrepancies between measurement and theoretical prediction. Since these discrepancies are below a statistical significance of $5\sigma$, they are not discoveries, but merely anomalies. Future experiments and updated theoretical methods will either resolve the tension, or increase the significance, possibly beyond $5\sigma$. The most commonly discussed anomalies concern the anomalous magnetic moment of the muon, $(g-2)_{\mu}$, and flavor anomalies in the $b\to s$ and the $b\to c$ transitions. While these anomalies do not provide sufficient evidence for a significant deviation from SM predictions (yet), many models involving particle physics beyond the SM were developed to explain the anomalies.\\
Some of these models have been investigated in the context of asymptotically safe quantum gravity, see \cite{Kowalska:2020gie, Kowalska:2020zve, Chikkaballi:2022urc}. In particular, it was investigated whether quantum gravity might turn some parameters of the extensions into irrelevant directions. In this case, the predictive power of asymptotically safe quantum gravity would extend to physics beyond the SM and predict, for example, the mass of dark-matter particles that are required to resolve the anomaly. Confronting these predictions with existing bounds from searches for physics beyond the SM can either rule out such solutions or provide strong constraints on the parameter space. These constraints might guide experimental searches, and allow insights into the most promising next-generation particle colliders.
For instance, in \cite{Kowalska:2020gie}, the leptoquark solution to flavor anomalies was investigated and it was found that asymptotic safety limits the mass range of the leptoquark to $4-7\, \rm TeV$, where it is within reach of future colliders.\\
On the technical level, these studies proceed within a \emph{parameterized} framework, first introduced in \cite{Eichhorn:2018whv}, in which the gravity contributions are parameterized by $f_{c}$'s and resulting fixed points in matter beta functions are investigated. These studies therefore provide experimentally testable consequences of asymptotic safety under the assumption that asymptotic safety is realized in the full system. Checking this assumption would require (i) to account for the impact of the new matter fields on the gravitational fixed point to check whether it exists, (ii) to calculate the resulting values of $f_{c}$ to compare with those required on a phenomenological level and (iii) to check that higher-order interactions in the matter sector as well as non-minimal interactions between matter and gravity are subleading and do not change the conclusions much.
\section{Gravity-matter systems in $d\neq4$ dimensions}
\label{sec:dgreater4}
\emph{Synopsis: Experiments support the hypothesis that we live in a $3+1$-dimensional spacetime. We do not, however, know why this is the case, or whether it could be different. Here, we review evidence that asymptotic safety of gravity with Standard Model matter may not be achievable in dimensions much beyond four; i.e., evidence that the predictive power of asymptotic safety may extend to free parameters of the geometry of spacetime.}\\
Current observations indicate that our universe is four dimensional (or rather, 3+1 dimensional), at least down to length scales corresponding to an energy of $\sim 10$ TeV. Accordingly, our universe might be of higher dimension in the deep UV, if the additional dimensions are compact and therefore inaccessible at low energies.\footnote{Here, we refer to the topological dimension. There are in fact indications that other notions of dimensionality, most importantly the spectral dimension, instead exhibit a dynamical reduction in the UV, \cite{Lauscher:2005qz,Reuter:2011ah,Rechenberger:2012pm,Calcagni:2013vsa}. Such different notions of dimensionality are not related to each other and may thus exhibit differences in the UV.} Indeed, in string theory, such extra dimensions are necessary for the internal consistency of the theory. It is therefore interesting to understand what the status of extra dimensions is in other approaches to quantum gravity.\\
The compatibility of extra dimensions with the asymptotic-safety paradigm for quantum gravity and matter has been tested by studying (i) the impact of matter fields on the gravitational fixed point, (ii) mechanisms for a UV complete matter sector, and (iii) gravitational contributions to LHC scattering amplitudes to connect to observational constraints on extra dimensions. We will briefly discuss these results in the following.\\
\\
First, the coefficients $b_{\mathrm{grav}}$ and $a_{i}$ in \eqref{eq:betaGN} are dimension-dependent. All studies so far indicate that gravitational contributions remain anti-screening in $d>4$, i.e., $b_{\mathrm{grav}}>0$ in \eqref{eq:betaGN} \cite{Litim:2003vp, Fischer:2006fz, Ohta:2013uca}, such that pure gravity can become asymptotically safe in a larger number of dimensions. Since the gravitational contribution and the matter contributions scale differently with the dimensionality, see \cite{Dona:2013qba} for an explicit example, bounds on the number of matter fields may arise in $d>4$. It may thus be the case that asymptotic safety of gravity with Standard-Model matter is not achievable in $d>4$, see \cite{Dona:2013qba} for a first study of this question. A systematic investigation of this question, which lifts the approximations made in \cite{Dona:2013qba}, has not been completed yet. \\
\\
Second, if we assume that the higher-dimensional theory features a fundamental Abelian gauge coupling, demanding a UV completion constrains the number of dimensions.
This is because the triviality problem in the Abelian gauge sector becomes more severe in larger dimensions, where the Abelian gauge coupling has a negative canonical mass dimension. This acts akin to a screening contribution to the scale dependence of the gauge coupling. Hence, to induce asymptotic freedom in $d>4$ in the gauge sector, the anti-screening gravitational contribution has to overcome this screening dimensional contribution, see also \autoref{sec:Gaugesector}. This requires that $f_g$ (cf.~Eq.~\eqref{eq:matterbetaschem}) increases with increasing dimensionality.
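Schematically, in the conventions of Eq.~\eqref{eq:matterbetaschem} and neglecting matter self-interactions, the dimensionless gauge coupling $g=\bar{g}\, k^{(d-4)/2}$ obeys
\begin{equation}
\beta_g = \frac{d-4}{2}\, g - f_g\, g + \mathcal{O}(g^3),
\end{equation}
so that the free fixed point is UV-attractive only for $f_g>(d-4)/2$, a condition that becomes harder to satisfy as $d$ grows.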
Explicit studies indicate that this is only possible if gravitational fluctuations become stronger when the dimensionality is increased \cite{Eichhorn:2019yzm, Schiffer:2021gwl, Eichhorn:2021qet}. Hence, a UV-completion of the Abelian gauge sector is only possible for a more strongly-coupled gravity theory. However, such a strongly coupled regime might be excluded due to the WGB in the Abelian gauge sector, see \autoref{sec:WGB}. Indeed, according to the studies in \cite{Eichhorn:2019yzm, Schiffer:2021gwl, Eichhorn:2021qet}, it is not possible to reconcile the strong coupling required to solve the triviality problem with the weak coupling required to satisfy the weak-gravity bound, if $d\geq6$.\\
\\
Third, a more phenomenological approach to extra dimensions was taken, prior to the start of the LHC, with the hope of constraining asymptotic safety in large extra dimensions by experiment.
If the extra dimensions are large enough, the fundamental Planck scale is close to the TeV scale, see \cite{Arkani-Hamed:1998jmv}. Accordingly, scattering processes at the TeV scale would be sensitive to the production or exchange of virtual Kaluza-Klein-gravitons. These would leave an imprint on scattering amplitudes of SM particles, for example by missing energy signatures. In \cite{Litim:2007iu, Litim:2007ee, Gerwick:2011jw} this scenario was investigated within asymptotically safe quantum gravity\footnote{We caution here that the scattering cross sections in \cite{Litim:2007iu, Litim:2007ee, Gerwick:2011jw} were computed within an approximation where the RG scale $k$ is identified with the physical momentum scale of a scattering process. A more careful investigation of gravity-mediated scattering amplitudes requires to encode the full momentum dependence of the vertices, e.g., by means of form factors, see \cite{Knorr:2019atm, Draper:2020bop}. Recent studies of gravity-mediated scattering processes indicate that scattering amplitudes can indeed be finite, despite the presence of trans-Planckian modes \cite{Draper:2020bop}. We refer the reader to the chapter on form factors and scattering amplitudes \cite{Knorr:2022dsx} for details.}. Specifically, it was found that the gravitational di-lepton production at LHC energies would be well above SM backgrounds, if the fundamental Planck scale was at the TeV scale. Similarly, \cite{Dobrich:2012nv} found that gravity-mediated photon-photon-scattering can rise above the SM background (which in this case is a pure loop effect, with no tree-level contribution).
To date, no such signatures have been discovered, constraining the radii of large extra dimensions.\\
At a more formal level, it is also of interest to explore the gravitational dressing of matter theories with asymptotic safety in $d=3$ or $d=2$. Such matter theories encode universal critical behavior at continuous phase transitions, with the Wilson-Fisher fixed point as a paradigmatic example. For such systems in statistical physics, gravitational fluctuations are not expected to be relevant, because the Planck length is much smaller than the atomic scale in those systems, where the quantum-field-theoretic description breaks down. Nevertheless, it is of interest to understand whether such universality classes persist and are gravitationally dressed. This sheds light on how two distinct, asymptotically safe theories can be brought together. It may even, within an AdS/CFT-type of correspondence, become of interest for quantum gravity in a more indirect way.\\
Explicit studies of gravitationally dressed universality classes have started from the Wilson-Fisher fixed point \cite{Percacci:2015wwa} and considered universality classes in three-dimensional $O(N)$ models in \cite{Labus:2015ska}, finding evidence that these universality classes can be dressed gravitationally.
\section{Gravitational fixed point under the impact of (minimally coupled) matter}
\label{sec:matterongrav}
There is compelling evidence for an asymptotically safe fixed point in pure gravity, see, e.g., other sections in this handbook. This fixed point is the starting point for our discussion. We will explore whether and how this fixed point continues to exist, when more and more matter fields of different spins are added. In this section, the self-interactions of matter are ignored, because they impact the gravitational fixed point only indirectly, although they may or may not feature a fixed point themselves.
\subsection{Screening and anti-screening effects of matter on the Newton coupling}\label{sec:mattereffectsonGN}
\emph{Synopsis:
Matter fields of different spins have different effects on the fixed point in the Newton coupling: Scalars and fermions disfavor it; gauge fields favor it. This can be understood in terms of screening and anti-screening contributions, i.e., weakening and strengthening of gravity.
}
Asymptotic safety arises when there is an overall anti-screening contribution of quantum fluctuations of all fields -- matter and gravitational. An anti-screening contribution is one with a negative sign in the beta function. Such a contribution is necessary to achieve quantum scale symmetry, because the canonical dimension of the Newton coupling generates a contribution with positive sign. An anti-screening contribution can compensate this, such that asymptotic safety is present.\footnote{There is a way of understanding why an anti-screening contribution is necessary which uses continuation of the theory across dimensions: In $d=2$, the Newton coupling is dimensionless, which is similar to the gauge coupling in $d=4$: just like screening effects mean that the Abelian gauge coupling is screened to zero in four-dimensional QED, screening effects would mean a non-gravitating gravity theory. In contrast, just like anti-screening effects mean that the non-Abelian gauge coupling is anti-screened to a nonzero value in four-dimensional QCD, anti-screening effects yield a gravity theory with nonvanishing gravitational interaction. In the UV, this theory is asymptotically free, i.e., the coupling starts out at zero in the UV and is anti-screened to nonzero in the IR.
Going from $d=2$ to $d>2$, the anti-screening contribution can remain, but must compete with a contribution from a positive sign from the canonical dimension of the coupling, such that asymptotic freedom is no longer available, but asymptotic safety is.}
\\
When gravity is coupled
to $\ensuremath{N_{\mathrm{S}}}$ scalar fields, $\ensuremath{N_{\mathrm{F}}}$ Dirac fermions and $\ensuremath{N_{\mathrm{V}}}$ vector fields, the scale dependence of the Newton coupling can be schematically written as
\begin{equation}
\beta_{\ensuremath{G_{\mathrm{N}}}}=2\ensuremath{G_{\mathrm{N}}}-\ensuremath{G_{\mathrm{N}}}^2 \left( b_{\mathrm{grav}}+ a_{\mathrm{S}}\, \ensuremath{N_{\mathrm{S}}}+ a_{\mathrm{F}}\, \ensuremath{N_{\mathrm{F}}}+ a_{\mathrm{V}}\, \ensuremath{N_{\mathrm{V}}}\right)+\mathcal{O}(\ensuremath{G_{\mathrm{N}}}^3)\,,\label{eq:betaGN}
\end{equation}
where the first term encodes the scale-dependence due to the canonical mass dimension of the Newton coupling which leads to perturbative nonrenormalizability. Gravitational fluctuations and Faddeev-Popov ghosts generate $b_{\mathrm{grav}} >0$, see, e.g., \cite{Reuter:2001ag, Lauscher:2001ya, Lauscher:2002sq, Litim:2003vp, Niedermaier:2006wt, Codello:2008vh, Manrique:2009uh, Benedetti:2009rx, Benedetti:2009iq, Manrique:2010am, Groh:2011vn, Rechenberger:2012pm, Donkin:2012ud, Christiansen:2012rx, Benedetti:2012dx, Dietz:2012ic, Falls:2013bv, Christiansen:2014raa, Becker:2014qya, Falls:2014tra, Gies:2015tca, Christiansen:2015rva, Demmel:2015oqa, Ohta:2015fcu, Gies:2016con, Denz:2016qks, Christiansen:2017bsy, Knorr:2017fus, Gonzalez-Martin:2017gza, Falls:2018ylp, DeBrito:2018hur, Kluth:2020bdv, Falls:2020qhj, Knorr:2021slg}, as well as \cite{Bonanno:2020bil} and references therein. The coefficients $a_{\mathrm{S}}$ ($a_{\mathrm{F}}$, $a_{\mathrm{V}}$) encode whether scalar (fermionic, vector) fields screen ($a_i<0$) or anti-screen ($a_i>0$) the gravitational coupling\footnote{More precisely, matter fields also impact all other gravitational couplings, including, e.g., the cosmological constant, see \autoref{sec:matterbounds}.}. For the case of minimally coupled matter fields, the $a_i$ are numerical factors \cite{Narain:2009fy, Dona:2012am, Dona:2013qba, Percacci:2015wwa, Meibohm:2015twa, Labus:2015ska, Dona:2015tnf, Meibohm:2016mkp, Biemans:2017zca, Christiansen:2017cxa, Alkofer:2018fxj, Eichhorn:2018akn, Eichhorn:2018ydy, Eichhorn:2018nda, Burger:2019upn, Daas:2020dyo, Daas:2021abx}; going beyond minimal coupling, the $a_i$ become functions of the couplings, see, e.g., \cite{Oda:2015sma, Eichhorn:2016vvy, Hamada:2017rvn, Eichhorn:2017sok, Eichhorn:2018nda, Laporte:2021kyp, Knorr:2022ilz} and have to be evaluated at the corresponding fixed-point values.
The resulting gravitational fixed-point value is
\begin{equation}
\ensuremath{G_{\mathrm{N},\,\ast}} = \frac{2}{b_{\mathrm{grav}}+ a_{\mathrm{S}}\, \ensuremath{N_{\mathrm{S}}}+ a_{\mathrm{F}}\, \ensuremath{N_{\mathrm{F}}}+ a_{\mathrm{V}}\, \ensuremath{N_{\mathrm{V}}}}.\label{eq:gravityFP}
\end{equation}
Eq.~\eqref{eq:gravityFP} shows that a screening contribution ($a_i<0$) increases the fixed-point value of the Newton coupling $\ensuremath{G_{\mathrm{N},\,\ast}}$, while an anti-screening contribution ($a_i>0$) decreases it. If the screening contributions dominate over the anti-screening contributions, then $\ensuremath{G_{\mathrm{N},\,\ast}} \rightarrow \infty$ (and subsequently $\ensuremath{G_{\mathrm{N},\,\ast}}<0$) and the theory is not asymptotically safe.
Thus, screening contributions destabilize the asymptotically safe gravity system, because they can overcome the gravitational contribution $b_{\mathrm{grav}}$ and thereby remove the interacting fixed point. Conversely, an anti-screening contribution $a_i>0$ stabilizes the system, and drives the system to $\ensuremath{G_{\mathrm{N},\,\ast}} \rightarrow 0$, when increasing the corresponding $N_{i}$.\footnote{This does not necessarily indicate that a perturbative fixed point is approached, because fixed-point values of couplings can always be made arbitrarily small by an appropriate rescaling. Instead, the critical exponents are a meaningful measure of perturbativity: if they approach the canonical values, the fixed point becomes perturbative. In the present case, the critical exponent of the Newton coupling in the approximation (\ref{eq:betaGN}) is always 2, independent of the fixed-point value.}
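The footnote's statement that the critical exponent is always 2 in this approximation follows directly from Eqs.~\eqref{eq:betaGN} and \eqref{eq:gravityFP}:
\begin{equation}
\theta = -\left.\frac{\partial \beta_{\ensuremath{G_{\mathrm{N}}}}}{\partial \ensuremath{G_{\mathrm{N}}}}\right|_{\ensuremath{G_{\mathrm{N}}}=\ensuremath{G_{\mathrm{N},\,\ast}}} = -2+2\, \ensuremath{G_{\mathrm{N},\,\ast}} \left(b_{\mathrm{grav}}+ a_{\mathrm{S}}\, \ensuremath{N_{\mathrm{S}}}+ a_{\mathrm{F}}\, \ensuremath{N_{\mathrm{F}}}+ a_{\mathrm{V}}\, \ensuremath{N_{\mathrm{V}}}\right) = -2+4 = 2,
\end{equation}
independently of the matter content.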
In principle, there are different ways of coupling matter to gravity -- minimally and non-minimally. For minimally coupled fields, the interaction with gravity lies in the kinetic term of the matter fields. For non-minimally coupled fields, explicit couplings with curvature terms are present. In classical or phenomenological studies, one can usually choose which coupling to include. In asymptotically safe gravity-matter systems, there is no such choice to make: all interactions compatible with the symmetries are generically present, and this includes some non-minimal interactions. Nevertheless, their effect does not need to be significant, because the nonminimal couplings may be small at a fixed point. Indeed, in studies to date, the minimal coupling determines whether matter fields screen or anti-screen the gravitational fixed point.
Minimally coupled scalars screen the gravitational coupling, $a_{\mathrm{S}}<0$, see \cite{Narain:2009fy, Dona:2013qba, Percacci:2015wwa, Labus:2015ska, Meibohm:2015twa, Dona:2015tnf, Biemans:2017zca, Alkofer:2018fxj, Eichhorn:2018akn,Wetterich:2019zdo, Laporte:2021kyp, Sen:2021ffc}. These studies cover a range of different approximations, and technical choices, e.g., regulator function and gauge parameters. The sign of $a_{\rm S}$ can thus be regarded as settled.
A screening contribution was also found using other methods, namely perturbative heat-kernel methods \cite{Kabat:1995eq, Larsen:1995ax} and an $\epsilon$ expansion around $2+\epsilon$ dimensions \cite{Christensen:1978sc}.
Minimally coupled fermions also screen the gravitational coupling, $a_{\mathrm{F}}<0$. This was found in various studies which cover different approximations, regulator functions and gauge parameters \cite{Dona:2012am, Dona:2013qba, Meibohm:2015twa, Meibohm:2016mkp, Alkofer:2018fxj, Eichhorn:2018ydy, Eichhorn:2018nda, Wetterich:2019zdo, Daas:2020dyo, Daas:2021abx, Sen:2021ffc}, and also in perturbative studies \cite{Kabat:1995eq, Larsen:1995ax}.
Minimally coupled gauge fields anti-screen the gravitational coupling, $a_{\mathrm{V}}>0$, as was found in various studies \cite{Dona:2013qba, Biemans:2017zca, Christiansen:2017cxa, Alkofer:2018fxj, Wetterich:2019zdo, Sen:2021ffc}, in agreement with perturbative studies \cite{Kabat:1995eq, Larsen:1995ax}.
Because gauge fields anti-screen the Newton coupling, one may expect that the gravitational fixed point becomes the free fixed point for $\ensuremath{N_{\mathrm{V}}} \rightarrow \infty$. Indeed,
the fixed-point value $\ensuremath{G_{\mathrm{N},\,\ast}}$ approaches zero for increasing $\ensuremath{N_{\mathrm{V}}}$. However, the fixed-point value for the cosmological constant remains non-zero \cite{Dona:2013qba, Christiansen:2017cxa}; therefore the limit $\ensuremath{N_{\mathrm{V}}} \rightarrow \infty$ does not lead to an asymptotically free fixed point.\\
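Within the single-coupling approximation, this limit is explicit in Eq.~\eqref{eq:gravityFP}:
\begin{equation}
\ensuremath{G_{\mathrm{N},\,\ast}} \approx \frac{2}{a_{\mathrm{V}}\, \ensuremath{N_{\mathrm{V}}}} \rightarrow 0 \quad \text{for} \quad \ensuremath{N_{\mathrm{V}}}\rightarrow \infty;
\end{equation}
the obstruction to asymptotic freedom only becomes visible once the cosmological constant, whose fixed-point value does not vanish in this limit, is included.\\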
\FRT{Further reading:}\\
\FR{On the impact of non-minimal couplings}\\
Some studies have gone beyond minimal coupling, and included explicit couplings between matter fields and curvature terms, see Tab.~\ref{tab:mattermatterstruncations} for an overview and references.
A non-minimal coupling between scalars and gravity of the form $R^{\mu\nu}\,D_{\mu}\phi D_{\nu}\phi$ slightly increases the amount of screening \cite{Laporte:2021kyp}, while a non-minimal coupling of the form $R\,D_{\mu}\phi D^{\mu}\phi$ slightly reduces the amount of screening \cite{Laporte:2021kyp}. The non-minimal coupling $\phi^2 R$, which is canonically marginal, breaks shift symmetry; therefore it vanishes at the gravity-matter fixed point \cite{Narain:2009fy, Narain:2009gb, Percacci:2015wwa, Oda:2015sma, Labus:2015ska}, see also \autoref{sec:GlobalSymms}, unless shift-symmetry is broken through the presence of, e.g., Yukawa interactions, see \cite{Eichhorn:2020sbo}.
A non-minimal coupling between fermions and gravity of the form $R\,\bar{\psi}\psi$ slightly increases the amount of screening \cite{Eichhorn:2016vvy}. On the other hand, a non-minimal coupling of the form $R^{\mu\nu} \bar{\psi}\gamma_{\mu}\nabla_{\nu}\psi$ reduces the amount of screening and therefore stabilizes the system \cite{Eichhorn:2018nda}. (We motivate the importance of such a non-minimal coupling in \autoref{sec:GlobalSymms}.)
However, even in the non-minimally coupled fermion-gravity systems, fermions screen the Newton coupling.
For vectors, nonminimal interactions have not yet been investigated.\\
\begin{table}[!t]
\begin{tabular}{c|c|c|c|c|c|c|}
gravitational int.'s & non-minimal int.'s & non-minimal int.'s & non-minimal int.'s & $N_{\rm S}$ & $N_{\rm F}$ & $N_{\rm V}$ \\
& of scalars & of fermions & of gauge fields& & & \\
\hline
$\sqrt{g}$,\, $\sqrt{g}\, R$ & -& - & - & arb. & arb. & arb.\\
\cite{Dona:2013qba, Biemans:2017zca, Wetterich:2019zdo}& & & & & & \\ \hline
$\sqrt{g}$,\, $\sqrt{g}\, f(R)$ \cite{Alkofer:2018fxj}& - &-&- & arb. & arb. & arb.\\ \hline
$\sqrt{g}$,\, $\sqrt{g}\, R$,\,$\sqrt{g}R^2$ & - &-& - & arb.&arb.&arb.\\
$\sqrt{g} C_{\mu\nu\kappa\lambda} C^{\mu\nu\kappa\lambda}$\,\,\cite{Sen:2021ffc}& & & & & & \\ \hline\hline
$\sqrt{g}$,\, $\sqrt{g}\, R$ & $\sqrt{g}\,R\, \phi^2$& - & - & 1 & arb. & 0\\
$\sqrt{g}\, R^2$,\, $\sqrt{g}\, R_{\mu\nu}R^{\mu\nu}$ \cite{Hamada:2017rvn} & & & & & &\\
\hline
$\sqrt{g}$,\, $\sqrt{g}\, R$ \cite{Oda:2015sma} & $\sqrt{g}\,R\, \phi^2$& - & - & 1 & arb. & 0\\ \hline
$\sqrt{g}$,\, $\sqrt{g}\, R$ \cite{Eichhorn:2017sok} &$\ast$ $\sqrt{g}\,R^{\mu\nu}\, \partial_{\mu}\phi \partial_{\nu}\phi$& - & - & 1 & 0 & 0\\\hline
$\sqrt{g}$,\, $\sqrt{g}\, R$ \cite{Laporte:2021kyp} &$\ast$ $\sqrt{g}\,R^{\mu\nu}\, \partial_{\mu}\phi \partial_{\nu}\phi$& - & - & 1 & 0 & 0\\
& $\ast$ $\sqrt{g}\, R g^{\mu\nu} \partial_{\mu}\phi \partial_{\nu}\phi$ & - & - & & & \\ \hline\hline
$\sqrt{g}$,\, $\sqrt{g}\, R$ \cite{Eichhorn:2016vvy} &-& $\sqrt{g}\,R\, \bar{\psi}\psi$ & - & 0 & arb. & 0\\ \hline
$\sqrt{g}$,\, $\sqrt{g}\, R$ \cite{Eichhorn:2018nda} &-& $\ast$ $\sqrt{g}\,R^{\mu\nu}\, \bar{\psi}\gamma_{\mu}\nabla_{\nu}\psi$ & - & 0 & arb. & 0\\ \hline
\end{tabular}
\caption{\label{tab:mattermatterstruncations}We list the interactions that were included in the studies in the corresponding references. We only list the most comprehensive studies for each set of interactions, i.e., for instance for the Einstein-Hilbert truncation, on which the impact of all three species of matter fields was studied, we do not separately list studies which only take into account one or two species of matter fields.\\
Those non-minimal couplings marked by an $\ast$ cannot be set to zero at an interacting gravitational fixed point, see \autoref{sec:GlobalSymms}. In contrast, those non-minimal couplings not marked by an $\ast$ can be set to zero for reasons of symmetry; thus it is consistent to neglect them at a minimally coupled fixed point.
We also indicate the numbers of matter fields that were studied, where ``arb." stands for an arbitrary number of fields of the corresponding species, but does not necessarily imply that an asymptotically safe fixed point exists for arbitrarily high numbers of the corresponding field. We omit matter self-interactions in this table; those are discussed in \autoref{sec:GlobalSymms}. }
\end{table}
\FR{Fermions in the background field approximation}\\
For fermions, there is a technical subtlety: if one chooses a regulator function which does not regularize the modes of the Dirac operator, and works in the background field approximation, the sign of $a_{\rm F}$ can be flipped to $a_{\mathrm{F}}>0$ \cite{Dona:2012am,Biemans:2017zca, Alkofer:2018fxj, Daas:2020dyo}. %
\subsection{Impact of Standard Model fields}
\emph{Synopsis: The four scalar components of the Higgs field and 45 Weyl fermions of the SM only partially counteract the anti-screening effect of gravitational modes and the 12 gauge fields; therefore, an asymptotically safe fixed point for the Newton coupling persists under the impact of SM matter. The fixed point also exists when further gravitational couplings are included.}
It is an important observational consistency test for any model of quantum gravity, whether the observed matter fields of the SM can exist within the model.
It is evidently not a given that asymptotically safe gravity passes this consistency test, as the results of the previous \autoref{sec:mattereffectsonGN} show. First, passing the test requires that the screening effect of the scalars and fermions in the SM does not overwhelm the anti-screening effect of gravitational and gauge fields on the Newton coupling. Only then can an asymptotically safe fixed point for the Newton coupling exist.
Second, passing the test requires that the fixed point continues to exist when further couplings beyond the Newton coupling are considered on the gravitational and the matter side. Matter couplings are discussed in \autoref{sec:GlobalSymms}.
All computations so far agree on the following important result: the asymptotically safe fixed point of pure gravity continues to exist when the matter content of the SM is accounted for. The fixed-point values and critical exponents depend on the number of matter fields, but, if the numbers of matter fields of the three species (scalars, fermions, vectors) are treated as continuous parameters, the fixed-point values and critical exponents at $N_{\rm S}= 0, \, N_{\rm F} =0, \, N_{\rm V}=0$ can continuously be connected to those at $N_{\rm S}= 4,\, N_{\rm F} = 22.5,\, N_{\rm V}= 12$ \cite{Dona:2013qba, Biemans:2017zca, Alkofer:2018fxj, Wetterich:2019zdo, Sen:2021ffc, Pastor-Gutierrez:2022nki}.\footnote{
If a fixed point can be deformed continuously through increasing a continuous parameter from an initial to a final value, one can think of the final fixed point as a deformation of the original universality class. In contrast, if the fixed point at the final value of the parameter is not obtained through a deformation of the fixed point at the initial value, one cannot understand it as a deformation of the original universality class, but instead has to think of these as two different universality classes. For this second case, the existence of the first universality class (at the initial value of the parameter) does not matter for the existence of the second universality class (at the final value of the parameter). Translated to gravity-matter systems this second possibility would mean that a second asymptotically safe universality class is unrelated to the pure-gravity universality class. While this is a logical possibility, this does not appear to be the case.
}
Therefore asymptotically safe quantum gravity passes a crucial observational consistency test.\\ Some extensions of the SM may also admit a fixed point. The most important extension is probably the addition of three right-handed neutrinos, which are required to explain neutrino oscillations (unless the neutrinos are Majorana). Further extensions may be necessary to accommodate dark matter, e.g., in the form of an axion or axion-like particle, or a gauge-singlet scalar. Such extensions by 3 Weyl fermions (3/2 Dirac fermions) and one or two scalars are indeed possible in all studies to date.\\
As a first step towards a supersymmetric setting, the inclusion of a gravitino, i.e., a spin 3/2 field, has been studied in \cite{Dona:2014pla}.\\
\begin{figure}[!t]
\includegraphics[width=0.45\linewidth,clip=true,trim=7cm 2.5cm 13cm 4cm]{Figures/mattermatters1gen.pdf}\quad\includegraphics[width=0.45\linewidth,clip=true,trim=7cm 2cm 13cm 3.8cm]{Figures/mattermatters3gen.pdf}
\caption{\label{fig:mattermattersinterplay} We show the plane spanned by the dimensionless Newton coupling $\ensuremath{G_{\mathrm{N}}}$ and the dimensionless cosmological constant $\Lambda$. We also indicate the boundary between the region with non-vanishing Yukawa couplings (green) and the region with vanishing Yukawa couplings (orange). Additionally, the weak-gravity bound, see \autoref{sec:WGB}, is indicated. The fixed-point value (red dot) and the RG flow towards the IR in the background approximation are shown for one generation of Standard-Model fermions (left panel) and three generations (right panel). In both panels, 12 vectors and 4 scalars are included. See \cite{Eichhorn:2017ylw} for the corresponding reference. }
\end{figure}
\emph{Preview on phenomenological consequences:}\\
There are indications for a mechanism that renders the SM plus gravity not only asymptotically safe, but also more predictive than the SM on its own. This mechanism relies on matter fields changing gravitational fixed-point values. Here, we preview this mechanism and get back to it in more detail in \autoref{sec:SMUVcompletion}.
Yukawa interactions which the SM needs for its fermions to be massive, vanish, unless conditions on the gravitational fixed-point values are fulfilled \cite{Oda:2015sma, Eichhorn:2016esv, Eichhorn:2017ylw}, see also \autoref{sec:SMUVcompletion}. These conditions are not fulfilled by the gravitational fixed-point values, as they come out in studies with no or few matter fields \cite{Eichhorn:2016esv,Eichhorn:2017eht,Eichhorn:2017ylw}. However, once three generations of SM fermions are added, the gravitational fixed-point values satisfy the conditions according to the study in \cite{Dona:2013qba}, see \autoref{fig:mattermattersinterplay}.
Thus, fermions push the gravitational fixed point into a region of values, in which fermion mass generation through Yukawa couplings to the Higgs field becomes possible.\\
We caution that one needs to know the gravitational fixed-point values with relatively high accuracy to determine whether or not this mechanism is indeed at work. Current studies have not yet achieved the accuracy to comprehensively confirm the scenario in \cite{Eichhorn:2017ylw}.\\
That the Yukawa couplings of the SM cannot be accommodated automatically is an important result in quantum gravity phenomenology, because it means that the asymptotically safe model is testable \emph{at SM scales}: if the fixed-point value falls outside the green region in \autoref{fig:mattermattersinterplay}, the model is ruled out by an experimental result, namely the measurement of nonvanishing Yukawa couplings at the LHC \cite{CMS:2018uxb,ATLAS:2018mme,CMS:2018nsn,ATLAS:2018kot,ATLAS:2015xst,CMS:2017zyp}.
\subsection{
The effective strength of gravity under the impact of matter
}
\label{sec:matterbounds}
\emph{Synopsis:
The effective strength of gravity can be encoded in a combination of couplings, in which these appear whenever gravitational fluctuations contribute to a system. This effective strength depends on the number of matter fields. Generically, scalar fields drive the effective strength up and fermions and vectors lower it.
}
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\linewidth]{Figures/GeffMatter.pdf}
\caption{\label{fig:GeffMatterplot}
We show the fixed-point values of the effective gravitational coupling $G_{\rm eff}$ \eqref{eq:Geff}, when increasing the number of scalars, fermions or vector fields for minimally coupled matter. The left column refers to fixed-point values obtained within fluctuation computations, taken from \cite{Meibohm:2015twa}, and the right column refers to fixed-point values obtained within the background field approximation, taken from \cite{Dona:2013qba}. We see that the qualitative behavior of $G_{\rm eff}$ agrees for all matter fields between the two different methods.}
\end{figure}
We introduce an effective strength of gravity, first discussed in \cite{Eichhorn:2017eht}:
\begin{equation}
G_{\rm eff}= \frac{\ensuremath{G_{\mathrm{N}}}}{1-2 \Lambda}\,. \label{eq:Geff}
\end{equation}
Herein, $\Lambda$ is the cosmological constant. $G_{\rm eff}$ is the combination of $\ensuremath{G_{\mathrm{N}}}$ and $\Lambda$ in which the two couplings enter beta functions; therefore, it is this combination that should have an asymptotically safe fixed point.
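For orientation, consider a purely illustrative evaluation; the numbers below are invented for the example and are not fixed-point results from the literature:
\begin{equation}
G_{\mathrm{N},\,\ast}=0.7\,,\quad \Lambda_\ast=-3 \quad\Rightarrow\quad G_{\rm eff,\,\ast}=\frac{0.7}{1-2\cdot(-3)}=\frac{0.7}{7}=0.1\,,
\end{equation}
illustrating that a sufficiently negative cosmological constant substantially weakens the effective gravitational interaction, even at a sizable value of $\ensuremath{G_{\mathrm{N}}}$.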
$G_{\rm eff}$ increases when additional scalar fields are added to the system, see \autoref{fig:GeffMatterplot}. Therefore, we expect that the system becomes increasingly non-perturbative when scalars are added.
Beyond a critical value of $\ensuremath{N_{\mathrm{S}}}$, the asymptotically safe fixed point ceases to exist. We caution that the approximation in which studies are performed may break down at lower $\ensuremath{N_{\mathrm{S}}}$ than the critical value, see \cite{Meibohm:2015twa, Eichhorn:2018akn, Burger:2019upn}. Nevertheless, a fixed point at small values of $G_{\rm eff}$, where calculations are more easily controlled, may only exist at relatively small values of $\ensuremath{N_{\mathrm{S}}}$.
For fermions, the situation differs, because $G_{\rm eff}$ decreases when additional species are added \cite{Eichhorn:2018nda}, see \autoref{fig:GeffMatterplot}. In fact, the system may become increasingly perturbative when fermions are added.\footnote{To make a robust statement on this possibility, other couplings need to be analyzed as well.} This may be important to connect asymptotically safe quantum gravity to the SM: it is known that the SM is perturbative at the Planck scale; thus, a UV completion with gravity has to be able to reproduce this perturbative regime. This is most likely achievable if the UV completion itself is (near-) perturbative in nature, cf.~\autoref{sec:ner-pert}. The SM has 22.5 Dirac fermions, which is sufficient to drive the effective gravitational strength to significantly lower values than for the pure-gravity case.
$G_{\rm eff}$ decreases when additional vector fields are added to the system, see \autoref{fig:GeffMatterplot}. Theories that add additional gauge fields to the SM may therefore be compatible with asymptotic safety. For Grand Unified Theories (GUT), the situation is not so clear, because they need additional scalars to spontaneously break the large gauge group. Different studies disagree on whether the matter content of popular GUT models is compatible with asymptotic safety, e.g., \cite{Dona:2013qba, Wetterich:2019zdo}, indicating that further studies are necessary.\\
\FRT{Further reading:}\\
\FR{Integrating out matter fields}\\
In \cite{Christiansen:2017cxa}, the authors argue that truncations with minimally coupled matter fields are insufficient to infer whether or not bounds on the number of matter fields exist: They show that bounds disappear, if quantum fluctuations of matter fields are integrated out first, and of gravity last, instead of both simultaneously. If a truncation is large enough, the order in which fields are integrated out should be irrelevant.\\
\FR{Additional effective gravitational couplings}\\
The definition in Eq.~\eqref{eq:Geff} can be generalized to
\begin{equation}
\label{eq:Geffn}
G_{\rm eff}^{(n)} = \frac{\ensuremath{G_{\mathrm{N}}}}{(1-2\Lambda)^n}\,.
\end{equation}
For $n>1$, these couplings also make an appearance in beta functions. For fermions, they have been compared in \cite{Eichhorn:2018nda}.\\
\FR{Comparing background approximation and fluctuation computations}\\
For fermions, the decrease of $G_{\rm eff}$ comes about in different ways depending on the choice of approximation: in \cite{Dona:2013qba}, employing the background field approximation, $\Lambda$ becomes large and negative, which decreases $G_{\rm eff}$; in \cite{Eichhorn:2018nda}, which constitutes a fluctuation computation, $\Lambda$ stays approximately constant, while $\ensuremath{G_{\mathrm{N}}}$ decreases. Such differences between approximations do not matter at the level of $G_{\rm eff}$, which is a more useful quantity to consider, both for its higher degree of robustness, and because $G_{\rm eff}$, not $\ensuremath{G_{\mathrm{N}}}$, enters beta functions and thus determines the strength of gravity fluctuations. More generally speaking, $G_{\rm eff}$ behaves qualitatively similarly in computations employing the background field approximation and in fluctuation computations, for all three types of matter, i.e., scalars, fermions and gauge fields, see \autoref{fig:GeffMatterplot}.\\
\FR{Background field approximation and fluctuation input}\\
In the background field approximation, the difference of background and fluctuation couplings is neglected, see \autoref{sec:methods}. A first step to lift this approximation can be achieved by taking into account the input of fluctuation couplings in the scale-dependence of the background couplings. In this way, the fixed-point values for the fluctuation coupling enter the beta-functions of the background couplings. In \autoref{fig:GMatterplot} we compare these two setups with each other. While the inclusion of fluctuation couplings changes the fixed-point values for $\bar{G}_{\mathrm{N}}$ on a quantitative level, the qualitative features remain rather robust.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\linewidth]{Figures/GeffMatterv3.pdf}
\caption{\label{fig:GMatterplot}
We show the fixed-point values of the background Newton coupling $\bar{G}_{\mathrm{N}}$ as a function of different matter fields. Full markers indicate the background field approximation. Empty markers indicate an approximation, where the fluctuation couplings enter the right-hand side of the flow equation, and therefore the beta-functions of the background couplings.}
\end{figure}
\section{On the near-perturbative nature of gravity-matter systems}
\label{sec:ner-pert}
\emph{Synopsis: An asymptotically safe fixed point can be near-perturbative, i.e., be close to canonical power counting. For such a fixed point, calculations are easier and systematic uncertainties are simpler to control. The SM coupled to gravity may have such a near-perturbative fixed point which is easy to connect to the perturbative RG flow of the SM at and below the Planck scale. More generally, phenomenological studies of asymptotically safe gravity-matter systems typically rely on the assumption that the system is near-perturbative.}
In the previous sections, we relied on an implicit assumption about the nature of the systems we investigated: this assumption is implicit in the beta functions we used, which are all limited to leading order terms (i.e., low orders in the couplings) and to the canonically least irrelevant interactions of the system.\\
This assumption is that the matter-gravity systems we investigated are sufficiently weakly coupled to be near-perturbative, despite being asymptotically safe. Technically, this makes robust calculations possible. Physically, this goes hand in hand with the idea that to control the (trans)planckian regime in a quantum field theory of gravity, a mechanism of dynamical weakening has to apply to quantum gravity.
To provide the technical and conceptual underpinning of the discussion in the previous sections, we therefore define in more detail what we mean by near-perturbative, and discuss the evidence for the near-perturbative nature of gravity-matter systems.
In general, interacting systems can be strongly coupled and governed by non-perturbative effects, or can be weakly coupled and governed by perturbative effects, or anything inbetween. This has phenomenological implications -- for instance, in strongly-coupled systems, the fundamental degrees of freedom can bind together and form new, stable or unstable states. It also has formal implications -- most importantly for the set of tools that is best to analyze the system and also for the type of approximations that can be made.
For gravity in the UV, one may first expect that it is strongly coupled and non-perturbative. This expectation arises, because the dimensionless Newton coupling, $\ensuremath{G_{\mathrm{N}}}$, grows when one goes from low to high momenta. If one simply extrapolates from the classical regime, where $\ensuremath{G_{\mathrm{N}}}(k) \sim k^2$, $\ensuremath{G_{\mathrm{N}}}$ becomes of order one at the Planck scale. This is typically interpreted as a sign of strong coupling. We rush to caution that values of couplings are not a good measure of perturbativity, because, e.g., the fixed-point value of a coupling can be changed arbitrarily by rescalings of the coupling.
However,
several sets of results indicate that asymptotically safe gravity-matter systems are near-perturbative at high energies. With near-perturbative we refer to a situation where the theory is interacting, i.e., not strictly perturbative, but at the same time lacks non-perturbative phenomena such as the formation of stable bound states.
Such near-perturbative behavior of asymptotically safe gravity matter systems is indicated by i) the critical exponents of higher-order interactions which remain near-canonical, ii) the contributions of quantum gravity to beta functions in the matter sector, which are small, and iii) symmetry identities between gravity-matter interactions which are near-trivial, as they are in perturbative settings. We will discuss each of these indications in the following.\\
\emph{Canonically irrelevant couplings remain irrelevant}\\
At a non-interacting fixed point, the critical exponents of all couplings correspond to their canonical mass dimension. At an interacting fixed point, the critical exponents still contain a dimensional contribution, but also an additional contribution $\delta_i$ which is induced by quantum fluctuations, i.e.,
\begin{equation}
\Theta_i=d_{\bar{g}_i}+\delta_i\,.
\end{equation}
If the quantum contributions $\delta_i$ grow very large, they can turn canonically irrelevant couplings into relevant directions at an interacting fixed point. This would decrease the predictivity of the system, compared to the non-interacting fixed point, since more relevant directions indicate more free parameters that need to be fixed by experiments.
More importantly, this would indicate that the system is very non-perturbative, since quantum fluctuations drastically change qualitative features of the system. Conversely, an interacting fixed point where all canonically irrelevant couplings remain irrelevant, is near-perturbative and quantum fluctuations only change quantitative features, such as the value of couplings at low energies.\\
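As a simple, purely symbolic illustration: the four-fermion couplings $\lambda_\pm$ discussed in \autoref{sec:lightfermions} below have beta functions starting as $\beta_{\lambda_\pm}=2\lambda_\pm+\dots$, i.e., canonical critical exponent $-2$, so that at an interacting fixed point
\begin{equation}
\Theta_{\lambda_\pm}=-2+\delta_{\lambda_\pm}\,,
\end{equation}
and near-perturbativity corresponds to $|\delta_{\lambda_\pm}|$ remaining well below $2$, such that $\lambda_\pm$ stay irrelevant.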
In all studies so far, the critical exponents of interactions involving matter fields follow their canonical mass dimension.
In particular, canonically irrelevant matter interactions \cite{Eichhorn:2011pc, Eichhorn:2012va, Eichhorn:2016esv, Christiansen:2017gtg, Eichhorn:2017eht, Eichhorn:2017als, Eichhorn:2020sbo} and non-minimal gravity-matter interactions \cite{Eichhorn:2016vvy, Eichhorn:2017sok, Eichhorn:2018nda, Eichhorn:2020sbo, Daas:2020dyo, Daas:2021abx} remain irrelevant at the asymptotically safe fixed point. This justifies truncations based on canonical power counting a posteriori.
\\
\emph{The impact of gravity on the matter sector}\\
As mentioned previously, the strength with which gravitational fluctuations impact the matter sector is encoded in effective gravitational couplings
\begin{equation}
G_{\rm eff}^{(n)} = \frac{\ensuremath{G_{\mathrm{N}}}}{(1-2\Lambda)^n}\,.
\end{equation}
Accordingly, the impact of gravity on the matter sector becomes weaker, the more negative the fixed-point value of the cosmological constant, and the smaller the fixed-point value of the Newton coupling gets. Independent of the details of the setup, fermionic matter was found to decrease the effective gravitational coupling \cite{Eichhorn:2018nda}. Similarly, gauge fields decrease $G_{\rm eff}^{(n)}$, since the fixed-point value for \ensuremath{G_{\mathrm{N}}}{} approaches zero for $\ensuremath{N_{\mathrm{V}}}\to\infty$. While scalar fields might increase $G_{\rm eff}^{(n)}$, the effective gravitational couplings for the field content of the SM remain small.\\
As a consequence, the gravitational contribution to the scale dependence of matter couplings is subleading compared to matter contributions, such that the matter sector might remain near-perturbative at the UV fixed point. This can also be seen by explicitly evaluating $f_g$ or $f_y$, e.g., at the fixed point found in \cite{Dona:2013qba}, as was done in \cite{Eichhorn:2017lry}, finding $f_g=0.048$. In particular, the observation of non-vanishing fermion masses requires a small-enough impact of gravity on the matter sector, as discussed in the context of \autoref{fig:mattermattersinterplay}, see also \cite{Pastor-Gutierrez:2022nki}.\\
These results imply that the asymptotically safe fixed point might provide a straightforward UV-completion of the SM, which is perturbative at the Planck scale. Conversely, one may interpret the fact that the SM couplings are perturbative at the Planck scale as an indication, that a UV completion with gravity must be near-perturbative.
\\
\emph{Non-trivial symmetry identities}\\
Just like in Abelian and non-Abelian gauge theories, one breaks the gauge symmetry of gravity when computing gravitational fluctuations with the FRG. As a consequence, the scale dependence of different gravity-matter vertices differs from one another. In other words: the scale dependence of the Newton coupling, when read-off from different vertices, differs. This however does not mean that diffeomorphism invariance is manifestly broken. Instead, non-trivial symmetry identities, the Slavnov-Taylor identities, encode how diffeomorphism invariance is restored. If these identities are solved together with the scale-dependence of the effective action, diffeomorphism invariance is retained along the RG-trajectory.\\
Naively these identities are trivial in the perturbative regime, where a single gauge coupling can be defined. In Yang-Mills theories, the one-loop universality of beta-functions ensures exactly this feature: the scale dependence for the gauge coupling is identical, when extracted from pure gauge, or gauge-ghost vertices. In the non-perturbative regime however, the scale-dependence of different vertices disagrees, and can even feature opposite signs, see \cite{Cyrol:2016tym}.\\
In asymptotically safe gravity, different gravity-matter vertices are found to agree on a semi-quantitative level at the fixed point \cite{Christiansen:2017cxa, Eichhorn:2018akn, Eichhorn:2018ydy, Eichhorn:2018nda}, but can disagree significantly away from the fixed point.
The semi-quantitative agreement is defined in \cite{Eichhorn:2018akn, Eichhorn:2018ydy} by comparing the scale dependences of different gravity-matter vertices, and setting all versions of the Newton coupling equal to one another. If these differences between scale dependences are zero, one unique Newton coupling can be defined. If these differences are large, all different vertices have to be treated independently to fully capture the UV-behavior of asymptotically safe gravity-matter systems. In analogy to QCD, a semi-quantitative agreement between different vertices at the fixed point indicates that the theory might be near-perturbative. In particular, it might imply that the underlying Slavnov-Taylor identities are trivial at the fixed point.\\
This provides another piece of evidence that asymptotically safe quantum gravity might be near-perturbative: it is non-perturbative enough to induce scale invariance at high energies, but remains as perturbative as possible.
\subsection{Light fermions}
\label{sec:lightfermions}
\emph{Synopsis: All fermions in the SM are light compared to the Planck scale. This is a consequence of chiral symmetry, which prevents the generation of fermion masses, and which is only broken spontaneously at the electroweak scale by the Higgs mechanism and below by QCD. In asymptotically safe quantum gravity, there are several conceivable mechanisms to break chiral symmetry, which would lead to inconsistencies with the observation of light fermions. Avoidance of some of these mechanisms puts lower and upper bounds on the number of light fermions. For the number of fermions in the SM, chiral symmetry is not broken in asymptotically safe quantum gravity.} \\
The masses of the fermions in the SM range from several hundred $\mathrm{keV}$ for the electron to about $173\,\mathrm{GeV}$ for the top quark, hence they are very light compared to the Planck scale. The reason for this is that chiral symmetry in the SM is only broken spontaneously at the electroweak scale.
A chiral symmetry allows to rotate left and right-handed fermions $\psi_L$ and $\psi_R$\footnote{The left- and right-handed component of a Dirac fermion can be extracted through the projection operators $P_{R/L} = \frac{1}{2}\left(1\pm \gamma_5 \right)$ as $\psi_{R/L}= P_{R/L}\psi$.} independently. Depending on the number of fermions and their other symmetries, different symmetries can be chiral. For instance, at vanishing Yukawa couplings the quark sector of the Standard Model features an $SU(\ensuremath{N_{\mathrm{F}}})_L \times SU(\ensuremath{N_{\mathrm{F}}})_R$ symmetry, which rotates the $\ensuremath{N_{\mathrm{F}}}$ quark flavors into each other separately for left- and right-handed components.
A mass term $m_{\psi}\,\bar{\psi}\psi= m_{\psi} \left(\bar{\psi}_R \psi_L + \bar{\psi}_L \psi_R \right)$ breaks this symmetry explicitly, see also \autoref{sec:GlobalSymms}. If chiral symmetry is broken -- explicitly or spontaneously -- at an energy scale $k_{\chi\rm{SB}}$, quantum fluctuations automatically generate a mass term for fermions, since it is allowed by the symmetries of the theory, see the discussion in \autoref{sec:GlobalSymms}. Assuming that there is no fine-tuning, the generated, dimensionless mass is of order one at $k_{\chi \rm SB}$. Therefore, the fermions decouple from the RG flow\footnote{Within the functional RG, this decoupling happens automatically due to the built-in threshold effects. Within perturbative RG schemes, the corresponding decoupling and matching at this scale has to be done by hand.} at $k_{\chi\rm SB}$ and their dimensionful masses are $\bar{m} = m\cdot k_{\chi\rm SB}$, where $m$ is of order one.
Therefore, light fermions in asymptotically safe quantum gravity can only be accommodated if gravity does not break chiral symmetry.
This breaking of chiral symmetry could in principle occur on different levels\footnote{ Not all of which are relevant for SM fermions, because some of them would be in conflict with SU(2) gauge symmetry.}: first, interactions that break chiral symmetry explicitly (such as a mass term) could be induced by quantum fluctuations; second, the chirally symmetric subspace could feature no fixed point, necessitating that an asymptotically safe theory contains chiral-symmetry-breaking couplings; third, chiral symmetry might be broken spontaneously during the flow towards the IR; and fourth, the background spacetime might break chiral symmetry. In this last possibility, thermal fluctuations (as they are relevant, e.g., in the very early universe), could also play a role.
We will now discuss these four possibilities.
First, we consider the explicit breaking of chiral symmetry by induced interactions. In asymptotically safe gravity, there are no indications that this occurs. Technically, this is because in all studies so far, the coefficient $\beta_0$ in \eqref{eq:symmbreak} vanishes for interactions that break chiral symmetry explicitly \cite{Eichhorn:2016vvy, Daas:2020dyo}, unless a regulator is chosen that explicitly breaks chiral symmetry.
\\
Second, we focus on chirally symmetric, induced interactions, and ask whether they feature a fixed point. In all studies to date, the answer is positive \cite{Eichhorn:2011pc, Meibohm:2016mkp, Eichhorn:2017eht, Eichhorn:2018nda, deBrito:2020dta}.
Just like for other matter fields, gravity induces self-interactions for fermions. Fermions are special with respect to the induced self-interactions, since the induced four-fermion interactions with the lowest mass dimension are of dimension six. This is a lower dimension than for induced interactions in the other sectors. One could therefore expect that gravity can more easily make the corresponding couplings relevant.
These interactions are of the form\footnote{While both $\lambda_{\pm}$ are induced by gravitational fluctuations, there is a different basis, where only one interaction is induced \cite{Eichhorn:2011pc, Eichhorn:2017eht}, while the second linearly independent four-fermion interaction can be set to zero consistently. Hence, this four-fermion interaction is the only example discovered so far, where gravitational fluctuations do not induce an interaction that satisfies the symmetries of the kinetic term.}
\begin{equation}
S_{\mathrm{Ferm, int.}}=\frac{1}{2 k^{2}}\,\int\mathrm{d}^4x \sqrt{g}\,\left(\lambda_{-}(V-A)+\lambda_{+}(V+A)\right)\,,
\end{equation}
with
\begin{equation}
V=\left(\bar{\psi}^i\gamma_{\mu}\psi^i\right)\left(\bar{\psi}^j\gamma^{\mu}\psi^j\right)\,,\qquad A=-\left(\bar{\psi}^i\gamma_{\mu}\gamma_5\psi^i\right)\left(\bar{\psi}^j\gamma^{\mu}\gamma_5\psi^j\right)\,.
\end{equation}
Explicit studies of the above chirally symmetric four-fermion interactions confirm that gravitational fluctuations induce those \cite{Eichhorn:2011pc}, see also \cite{Meibohm:2016mkp, Eichhorn:2017eht}. However, induced four-fermion interactions do not feature an excluded strong gravity regime, where the chirally symmetric subsector would be UV-incomplete. This is independent of the strength of gravitational fluctuations. Thus, also the second possibility for chiral-symmetry-breaking is ruled out in the studies to date. This can be seen by inspecting the two beta functions $\beta_{\lambda_{\pm}}$, which feature four fixed points for any positive value of $G$ and any value of $\Lambda$:
\begin{equation}
\begin{aligned}
\label{eq: betaplusminus}
\beta_{\lambda_{\pm}}=&\,\,2\lambda_{\pm}+M_{\pm}
-\frac{5 \lambda _{\pm} G}{8 \pi (1-2 \Lambda )^2}\pm\frac{5 G^2}{8 (1-2 \Lambda )^3}\\
&+\frac{3 \lambda _{\pm} G}{4 \pi (3-4 \Lambda )}+\frac{15 \lambda _{\pm} G}{8 \pi (3-4 \Lambda )^2}\,,
\end{aligned}
\end{equation}
where the matter contributions $M_{\pm}$ are given by
\begin{equation}
M_+=\frac{8 \lambda_+ \lambda_- \left(N_F+1\right)}{32 \pi^2}\,,\qquad
M_-=\frac{4 \lambda_-^2 \left(N_F-1\right)+4 \lambda_+^2 N_F}{32 \pi^2}\,.
\end{equation}
For all values of $G$ and $\Lambda$, these beta functions admit four partial fixed points, one of which is the shifted Gaussian fixed point (sGFP) of interest.
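To illustrate the structure of the sGFP, one can solve \eqref{eq: betaplusminus} to leading order in $G$: neglecting the terms quadratic in $\lambda_\pm$ (contained in $M_\pm$) as well as the mixed terms of order $\lambda_\pm G$, the fixed-point condition $\beta_{\lambda_\pm}=0$ yields
\begin{equation}
\lambda_{\pm,\,\ast}\approx \mp\,\frac{5\,G^2}{16\,(1-2\Lambda)^3}\,,
\end{equation}
which is nonzero for any $G>0$ (the interactions are unavoidably induced), but remains parametrically small for near-perturbative gravitational couplings.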
Next, we consider the spontaneous breaking of chiral symmetry by gravitational fluctuations. The spontaneous breaking of chiral symmetry is linked to the formation of bound states, as we will explain below. Because gravity is an attractive force, one may expect that it favors bound-state formation. However, explicit calculations show that this intuition, based on the classical nature of gravity, fails to correctly predict the effect of quantum gravitational fluctuations.\\
To understand the relation between the spontaneous breaking of chiral symmetry, the associated massless Goldstone boson and the induced four-fermion interactions $\lambda_{\pm}$, we first perform a Fierz transformation into a scalar-pseudoscalar basis. Focusing on $\lambda_+$, the transformation reads \cite{Gies:2001nw, Braun:2011pp}
\begin{equation}
\left(V+A\right)=-\frac{1}{2}\left[\left(\bar{\psi}^i\psi^j\right)^2-\left(\bar{\psi}^i\gamma_5\psi^j\right)^2\right]\,.
\end{equation}
In this basis, the four-fermion interactions can be rewritten in terms of auxiliary fields using a Hubbard-Stratonovich transformation, i.e., a change of fields in the path integral, see, e.g., \cite{Braun:2011pp}. Focusing on the case of a single flavor for illustration, the scalar part of the four-fermion interaction can be rewritten as
\begin{equation}
\label{eq:hubstrat}
-\frac{\lambda_{\psi}}{4}\left(\bar{\psi}\psi\right)^2=\left[h(\bar{\psi}\psi)\phi+m^2_{\phi}\phi^2\right]_{\mathrm{EoM}(\phi)}\,,
\end{equation}
where
\begin{equation}
\label{eq:ffrel}
m^2_{\phi}=\frac{h^2}{\lambda_{\psi}}\,,
\end{equation}
and where $h$ is an arbitrary real constant; the equality holds on the equation of motion for the scalar field $\phi$, i.e., when the auxiliary field $\phi$ is integrated out in the path integral.
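As a quick consistency check of \eqref{eq:hubstrat} and \eqref{eq:ffrel}: varying the right-hand side of \eqref{eq:hubstrat} with respect to $\phi$ gives the equation of motion $\phi=-h(\bar{\psi}\psi)/(2m_\phi^2)$, and inserting it back yields
\begin{equation}
\left[h(\bar{\psi}\psi)\phi+m^2_{\phi}\phi^2\right]_{\mathrm{EoM}(\phi)}=-\frac{h^2}{4m_\phi^2}\left(\bar{\psi}\psi\right)^2=-\frac{\lambda_{\psi}}{4}\left(\bar{\psi}\psi\right)^2\,,
\end{equation}
where the last equality uses \eqref{eq:ffrel}.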
\begin{figure}[!t]
\includegraphics[width=\linewidth,clip=true, trim=3cm 16cm 8cm 2cm]{Figures/chSB.pdf}
\caption{\label{fig:chSB} We show the schematics of spontaneous chiral symmetry breaking: The effective potential for the bound-state field $\phi$ has a nontrivial minimum -- breaking chiral symmetry, because a vev for $\phi$ corresponds to a vev for the fermion bilinear $\bar{\psi}\psi$ -- after going through the point $m_{\phi}^2=0$ (purple dashed curve). In turn, the mass is inversely proportional to $\lambda_{\psi}$. Thus, the beta function in red corresponds to a chirally broken regime, because $\lambda_{\psi}$ diverges. For the other beta function, the initial conditions can be chosen in the blue dashed region, corresponding to finite $\lambda_{\psi}$ and thus unbroken chiral symmetry. The left fixed point corresponds to the sGFP, while the other one is interacting, even in the absence of additional fields.}
\end{figure}
In terms of the scalar field $\phi$, chiral symmetry is spontaneously broken, when the mass term $m^2_{\phi}$ becomes negative. Since the mass term is related to the four-fermion interaction, see \eqref{eq:ffrel}, the onset of chiral-symmetry breaking is indicated by a divergence of the four-fermion interaction. This argument, which we exemplified on a single channel and for a single flavor, generalizes to the full system of $\ensuremath{N_{\mathrm{F}}}$ flavors, see also \cite{Braun:2011pp} for a review. Hence, if the four-fermion interaction diverges, quantum gravity has broken chiral symmetry. This could happen (and does happen, e.g., in QCD \cite{Braun:2011pp}), when the beta function for the four-fermion interaction has no fixed points and is negative. Then, under the RG flow towards the infrared, the four-fermion interaction diverges.
However, since gravitational fluctuations do not induce a fixed-point collision of the sGFP, the four-fermion interaction cannot grow without bound. Hence, chiral symmetry is not broken spontaneously by gravitational fluctuations.\\
This is very different from fluctuations of gauge fields, which induce the same four-fermion interactions and indeed give rise to a fixed-point collision during the flow towards the IR, which can be related to spontaneous chiral-symmetry-breaking, see, e.g., \cite{Gies:2001nw, Braun:2005uj, Braun:2006wu, Braun:2011pp}.\\
Combining the result from the gravitational and the gauge sector, one can obtain a lower bound on the number of fermions:
If one assumes that an interacting fixed point for the Abelian gauge coupling is realized, see \autoref{sec:Gaugesector} for details, this can indeed give rise to broken chiral symmetry: gravitational fluctuations set a fixed-point value $\ensuremath{g_{\mathrm{y},\,\ast}}$ for the gauge coupling, which, in analogy to the situation in QCD, can induce a fixed-point collision in $\lambda_{\pm}$. This can be seen from the additional contributions proportional to the gauge coupling that arise in Eq.~\eqref{eq: betaplusminus}, where a contribution $\sim \ensuremath{g_{\mathrm{y}}}^4$ arises in $\beta_{\lambda_+}$ and a contribution $\sim -\ensuremath{g_{\mathrm{y}}}^4$ in $\beta_{\lambda_-}$.\\
Since $\ensuremath{g_{\mathrm{y},\,\ast}}$ decreases when increasing the number of fermions in the system, this mechanism gives a lower bound on the number of fermions such that chiral symmetry is unbroken, see \cite{deBrito:2020dta}.
Similarly, if the presence of topology-changing gravitational instantons is assumed, chiral symmetry can also become anomalous and be broken spontaneously \cite{Hamada:2020mug}. \\
The fourth possibility through which quantum gravity might break chiral symmetry is via the background geometry. This is a mechanism which is already active for fermions on classical spacetimes: an anti-de Sitter background adds an ``effective'' negative mass term for fermions and thus breaks chiral symmetry through gravitational catalysis. One can think of this mechanism as follows: the effective potential for the fermion bilinear (or the corresponding scalar field) depends on the background geometry. An anti-de Sitter geometry of sufficiently negative curvature favors a non-trivial ground state, and thus chiral symmetry breaking, over the trivial ground state \cite{Inagaki:1997kz, Ebert:2008pc}. The pertinent quadratic term of the effective potential for the scalar $\phi$ is given by \cite{Gies:2018jnv}
\begin{equation}
U(\phi)= -N_F \phi^2 \left(\# \frac{|R|^{3/2}}{k_{\rm IR}} + \xi\, |R| \right),
\end{equation}
with $R<0$ and where $\#$ is a positive number that depends on the details of the regularization. This gives rise to a curvature bound that parametrically depends on the nonminimal coupling $\xi$.\\
Of course, our universe is not an anti-de Sitter spacetime. Nevertheless, gravitational catalysis can become relevant, because the small-scale geometry of quantum spacetime could have both negative- and positive-curvature regions, and anti-de Sitter space is an effective description of simple negative-curvature regions.\\
Using this reasoning, in \cite{Gies:2018jnv, Gies:2021upb}, the curvature bounds were used to constrain asymptotically safe gravity: because an increasing number of fermions shifts the (background) cosmological constant to negative values, and thus shifts the background curvature to large negative values, gravitational catalysis may become active at large enough fermion numbers.
Thus,
this mechanism gives rise to an upper bound on the number of fermions which can be light \cite{Gies:2018jnv, Gies:2021upb}. The exact value of this upper bound depends on the assumed structure of the space-time geometry, and on the presence and strength of thermal fluctuations \cite{Gies:2018jnv, Gies:2021upb}.
\section*{Keywords:}Asymptotic safety, quantum gravity and matter, Standard Model, scale invariance, Beyond Standard Model
\tableofcontents
\input{Input/Motivation}
\input{Input/GravityFP}
\input{Input/GlobalSymm}
\input{Input/Lightfermions}
\input{Input/dgreater4}
\section{
Towards a UV completion of the Standard Model
}
\input{Input/SMUVcompletion}
\section{Physics beyond the Standard Model}
\input{Input/DarkMatter}
\input{Input/nondarkBSM}
\input{Input/nearperturbative}
\section{Summary, outlook, and open questions}
\input{Input/summary}
\section*{Acknowledgments}
We thank Johanna Borissova for comments on the manuscript.
A.~E.~is supported by a research grant (29405)
from VILLUM FONDEN. M.~S.~acknowledges support by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities.
\bibliographystyle{spbasic}
\section{Method}\label{sec:method}
\vspace{-0.1cm}
\begin{figure*}
\vspace{0.2cm}
\centering
\includegraphics[page=2,width=0.9\textwidth]{srcs/images/dc_overview.pdf}
\caption{{\it ACDC-Net}~processes the initial depth map, guide image and rasterised 3D landmarks using three instances of the same backbone with shared weights, except for the batch normalization (BN) layers. It uses the `channel exchange' mechanism~\cite{Wang20arxiv}, replacing feature maps with low importance by the most important ones from the other network instances, estimating importance as the BN scaling magnitude. During training we use the output depth map to compute photometric losses $L_{stereo}^{on}$,$L_{stereo,\{t-1, t+1\}}^{off}$, $L_{temp,\{L,R\}}^{off}$,
consistency losses for the initial depth ($L_{sd}$) and sparse landmark projections ($L_s$), a smoothness loss ($L_{smooth}$), and a sparsity-encouraging loss $L_{\gamma}$ for the BN scaling factors (not shown).
}\vspace{-0.3cm}
\label{fig:losses}
\end{figure*}
The main goal of our method is to complete and refine depth maps produced by an active stereo system, see Fig.~\ref{fig:overview}. To do that efficiently, we leverage a feature-based visual inertial SLAM system to produce motion estimates and accurate (but sparse) 3D landmarks for supervision of the depth completion network. In particular, we propose an end-to-end pipeline that takes the following inputs: i. a reference IR frame captured by the active stereo sensor; ii. a semi-dense depth map produced by the active stereo sensor using a classical stereo-method and iii. an image where pixels are either zero or equal to the depth of the 3D landmarks tracked by the SLAM system and projected to the reference IR frame. Our method outputs a depth map aligned with the reference IR frame and works in real-time on a modern GPU. In our system, we run stereo visual-inertial SLAM on IR images without active illumination, obtained from the same sensor in the so-called {\em interleaved} mode, but our solution can be used alongside other types of SLAM and VO (\textit{e}.\textit{g}. monocular RGB-D SLAM).
\vspace{-0.1cm}
\subsection{ACDC-Net}
\vspace{-0.1cm}
Our network takes an initial depth map, a guidance image and sparse 3D landmarks projected into the current frame as inputs, and outputs a refined depth map, see Fig.~\ref{fig:losses}. Previous depth completion methods have combined multi-modal input with either aggregation-based methods such as concatenation and addition~\cite{Chen19arxiv,Eldesokey20pami} or alignment-based methods~\cite{zhang18cvpr}. However, these methods are inadequate to balance the trade-off between inter-modal fusion and intra-modal processing. To address this, we adapt the channel exchange framework~\cite{Wang20arxiv}, which performs dynamic, parameter-free multi-modal fusion by exchanging feature channels between $M$ identical sub-networks~$f_m$. The channel exchange is self-guided by individual channel importance, which is measured by the magnitude of the Batch-Normalisation (BN) scaling factor $\gamma_{m, l}$ during training. The final output is a learned linear combination of the sub-networks. Since the sub-networks share all weights, the ensemble can be computed efficiently in parallel on a GPU.
\iffalse
The final output $\hat{D}$ is a linear combination of the sub-networks with a decision score $\alpha_m$ learned with an associated softmax.
\begin{equation}
\hat{D} = \sum_{m=1}^{M} \alpha_{m} f_{m}\left(\boldsymbol{x}_{m}^{(i)}\right),
\end{equation}
where the input $\boldsymbol{x}^i$ is the initial depth, guidance image or sparse 3D landmarks. We note that since $f_m$ share all weights (except BN), the ensemble can be computed very efficiently in parallel on a GPU.
\fi
Each sub-network is equipped with BN layers containing the scaling factors $\gamma_{m,l}$ for the $l^{th}$ layer. A channel is replaced if the magnitude of the sub-network-specific BN scaling factor $\gamma_{m,l}$ is below a fixed threshold.
\iffalse
\begin{equation}
\resizebox{0.9\linewidth}{!}{
\cramped{
\boldsymbol{x}_{m, l, c}^{\prime}=
\begin{dcases}
\gamma_{m, l, c} \frac{\boldsymbol{x}_{m, l, c}-\mu_{m, l, c}}{\sqrt{\sigma_{m, l, c}^{2}+\epsilon}}+\beta_{m, l, c}& \text{if } \gamma_{m, l, c}>\theta\\
\max_{m^{\prime} \neq m} \gamma_{m^{\prime}, l, c} \frac{\boldsymbol{x}_{m^{\prime}, l, c}-\mu_{m^{\prime}, l, c}}{\sqrt{\sigma_{m^{\prime}, l, c}^{2}+\epsilon}}+\beta_{m^{\prime}, l, c} & \text{else}
\end{dcases}
}
}
\end{equation}
where $\mu_{m,l,c}$, $\sigma_{m,l,c}^2$, $\gamma_{m,l,c}$ and $\beta_{m,l,c}$ are the mean and variance for the batch, the learnable scaling and bias parameters for the $c^{th}$ channel in the $l^{th}$ layer in the $m^{th}$ sub-networks.
\fi
As opposed to \cite{Wang20arxiv}, which replaces a weak channel with the average channel signal across the sub-networks, we propose to replace it with the channel carrying the strongest signal, \textit{i}.\textit{e}., the $\max$ over the sub-networks.
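The following PyTorch-style sketch illustrates the proposed $\max$ exchange rule; the class name, the threshold value and the simplification of taking the $\max$ over all branches (rather than strictly over $m'\neq m$) are illustrative choices, not our exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class MaxChannelExchange(nn.Module):
    # One BN layer per sub-network; channels whose own |gamma| falls
    # below the threshold are replaced by the same channel of the
    # sub-network with the largest |gamma|.
    def __init__(self, num_branches, channels, threshold=1e-2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(channels) for _ in range(num_branches))
        self.threshold = threshold

    def forward(self, xs):  # xs: list of (N, C, H, W) tensors
        ys = torch.stack([bn(x) for bn, x in zip(self.bns, xs)])
        gammas = torch.stack([bn.weight.abs() for bn in self.bns])
        donor = gammas.argmax(dim=0)  # strongest branch per channel
        c_idx = torch.arange(ys.shape[2], device=ys.device)
        strongest = ys[donor, :, c_idx].permute(1, 0, 2, 3)
        out = []
        for m in range(len(xs)):
            weak = (gammas[m] < self.threshold).view(1, -1, 1, 1)
            out.append(torch.where(weak, strongest, ys[m]))
        return out
\end{verbatim}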
\iffalse
\begin{equation}
\boldsymbol{x}_{m, l, c}^{\prime}=\left\{\begin{array}{l}
\gamma_{m, l, c} \frac{\boldsymbol{x}_{m, l, c}-\mu_{m, l, c}}{\sqrt{\sigma_{m, l, c}^{2}+\epsilon}}+\beta_{m, l, c} \text { if } \gamma_{m, l, c}>\theta \\
\frac{1}{M-1} \sum_{m^{\prime} \neq m}^{M} \gamma_{m^{\prime}, l, c} \frac{\boldsymbol{x}_{m^{\prime}, l, c}-\mu_{m^{\prime}, l, c}}{\sqrt{\sigma_{m^{\prime}, l, c}^{2}+\epsilon}}+\beta_{m^{\prime}, l, c}, \text { else }
\end{array}\right.
\end{equation}
\fi
\vspace{-0.1cm}
\subsection{Visual Inertial SLAM}\label{sec:vslam}
\vspace{-0.1cm}
Our visual inertial SLAM system is inspired by the approach presented in~\cite{Leutenegger15ijrr}. We combine reprojection and IMU errors and formulate the multi-sensor fusion SLAM problem as a factor graph, where the goal is to estimate a set of navigation states (camera poses, speed and biases) plus a collection of 3D landmarks given a set of measurements that includes visual and IMU measurements. For simplicity, we assume that the intrinsics and extrinsics calibration of the sensor rig are given. Note that the visual inertial SLAM operates only on the stereo frames without active illumination.
For a given stereo frame with active illumination, we query from the SLAM system the tracked landmarks from the previous frame without active illumination and project those into the active stereo frame, using the estimates of the 3D landmarks, the sensor pose and the known intrinsics and extrinsics. In this way we generate a depth image where pixels are either zero or equal to the depth of the projected 3D landmarks.
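A minimal sketch of this projection step; the function and variable names are ours for illustration, assuming a $3\times 3$ pinhole intrinsics matrix $K$ and a $4\times 4$ world-to-camera pose.
\begin{verbatim}
import numpy as np

def landmarks_to_sparse_depth(points_w, T_cw, K, height, width):
    # points_w: (N, 3) landmarks in world frame; T_cw: world->camera.
    pts_c = T_cw[:3, :3] @ points_w.T + T_cw[:3, 3:4]  # (3, N)
    z = pts_c[2]
    uv = (K @ (pts_c / z))[:2]                         # pixel coordinates
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    depth = np.zeros((height, width), dtype=np.float32)
    ok = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[ok], u[ok]] = z[ok]  # zero where no landmark projects
    return depth
\end{verbatim}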
We can get an estimate of the sensor pose for frames with active illumination by integrating IMU measurements between two consecutive states~\cite{Forster17tro}. \update{Even though our SLAM system operates in real-time, the poses and 3D landmarks used for supervision during training come from a batch bundle adjustment optimisation step that uses the passive frames, IMU measurements and loop closures during the sequence}.
Our SLAM system uses the A-KAZE~\cite{Alcantarilla13bmvc} feature extractor, although the method is general enough to use other types of handcrafted or deep learning-based feature extractors.
\vspace{-0.1cm}
\subsection{Training}
\vspace{-0.1cm}
Fig.~\ref{fig:losses} depicts an overview of the losses used in our method.
\noindent\textbf{Photometric Loss Review.} Using an obtained pose $T_{T \rightarrow S}$ between a source view $I_{S}$ and a target view $I_{T}$, we seek to predict the dense depth map $\hat{D}_T$ that minimises the photometric reprojection error $L_p$~\cite{Godard19iccv}:
\begin{align}
L_{p}(I_T, I_{S \rightarrow T}) &= pe( I_T, I_{S \rightarrow T} ) \\
\text{and } \quad I_{S \rightarrow T} &= I_S \langle proj(\hat{D}_T, T_{T \rightarrow S}, K) \rangle \label{eq:sample},
\end{align}
where $pe()$ is a functional that operates over the whole image,
$proj()$ maps $\hat{D}_T$ into 2D coordinates in frame $S$, given $T_{T \rightarrow S}$ and the camera intrinsics $K$, and $\langle \rangle$ is the sampling operator. We use bilinear sampling to query from the source images~\cite{Godard19iccv,Zhou17cvpr}. We follow common practice~\cite{Godard19iccv} and define $pe()$ as a weighted average of SSIM and $l_1$ norm with weights~$0.85$~and~$0.15$. \update{The photometric loss assumes a static scene. To obey this assumption, we use a small temporal baseline and recordings in mostly static environments.}
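A sketch of $pe()$; the $3\times 3$ average-pooled SSIM and its constants follow common practice~\cite{Godard19iccv} and may differ in detail from our implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pe(target, warped, alpha=0.85):
    # target, warped: (N, C, H, W); returns per-pixel error (N, 1, H, W).
    l1 = (target - warped).abs().mean(1, keepdim=True)
    mu_x = F.avg_pool2d(target, 3, 1, 1)
    mu_y = F.avg_pool2d(warped, 3, 1, 1)
    sx = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_x ** 2
    sy = F.avg_pool2d(warped ** 2, 3, 1, 1) - mu_y ** 2
    sxy = F.avg_pool2d(target * warped, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sxy + c2)
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (sx + sy + c2)))
    dssim = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)
    return alpha * dssim + (1 - alpha) * l1
\end{verbatim}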
\iffalse
\begin{equation}
pe(I_{S \rightarrow T}, I_T) = \alpha ( 1 - \text{SSIM}(I_{S \rightarrow T}, I_T)) + (1 - \alpha) \| I_{S \rightarrow T} - I_T \|_1,
\end{equation}
where $\alpha = 0.85$.
\fi
\noindent\textbf{Photometric Loss for Interleaved Mode.} The loss must satisfy the photometric consistency constraint, which assumes a static Lambertian scene with constant illumination between source and target frame. This constraint holds between active stereo images, but not between active and passive frames or even between consecutive active frames ($I_{t}^{on}$ and $I_{t+2}^{on}$) as the projector moves with the camera ($t$ indexes over time).
Similar to {\it ActiveStereoNet}~we compute the photometric reprojection error between the stereo frames with the projector on $L_{stereo}^{on}$.
\iffalse
\begin{equation}
L_{stereo}^{on} = L_p(I_{t, L}^{on}, I_{t, R\rightarrow L}^{on}).
\end{equation}
\fi
However, we take one step further by enforcing temporal consistency. We reproject both the previous and next passive images into the current view at time $t$. These passive images satisfy the photometric consistency constraint, as neither has the active pattern. We compute the temporal losses, $L_{temp, R}^{off}$ and $L_{temp, L}^{off}$, for both the left and right stereo frames \update{in the current left view. Thus, all photometric losses are computed in the current left view using the depth of this frame for reprojection.}
\iffalse
as:
\begin{align}
L_{temp, R}^{off} &= L{p}(I_{t-1 \rightarrow t, R \rightarrow L}^{off}, I_{t+1 \rightarrow t, R \rightarrow L}^{off}), \\
L_{temp, L}^{off} &= L{p}(I_{t-1 \rightarrow t, L}^{off}, I_{t+1 \rightarrow t, L }^{off}),
\end{align}
note that both left and right frames are projected into the current view which is in the left frame.
\fi
Lastly,
we compute the passive stereo losses, $L_{stereo, t-1}^{off}$ and $L_{stereo, t+1}^{off}$, for the next and previous frames.
\iffalse
\begin{align}
L_{stereo, t-1}^{off} &= L{p}(I_{t-1 \rightarrow t, L}^{off}, I_{t-1 \rightarrow t, R\rightarrow L}^{off}), \\
L_{stereo, t+1}^{off} &= L{p}(I_{t+1 \rightarrow t, L}^{off}, I_{t+1 \rightarrow t, R\rightarrow L}^{off}).
\end{align}
\fi
\noindent\textbf{min-Operator for Interleaved Mode.}
\update{When computing the photometric error from multiple views, occluded pixels that are only visible in the target image can cause problems. If the network predicts the correct depth, the corresponding color in an occluded source image will likely not match the target, which leads to a high photometric error. Godard {\it et al. }\cite{Godard19iccv} introduced a $\min$-operator between two temporal reconstruction losses to remove occluded pixels from the loss; however, applying this solution across all losses removes all signal from the active pattern, as seen in Fig.~\ref{fig:passive_active_loss}.} We propose to split the $\min$ operation into an active and a passive part. \update{Fig.~\ref{fig:passive_active_loss} shows that the proposed split} preserves signal in the texture-less areas. The occlusion artefacts mainly stem from the temporal losses, as the baseline between temporal frames is larger than between the stereo frames. Therefore, it is enough to perform the $\min$ operation over the temporal frames.
\begin{align}\label{eq:photo_temporal}
L_{photo} &= L_{stereo}^{on}+ \beta \min(L_{photo}^{off}),\\
L_{photo}^{off} &= \left\{ L_{temp, R}^{off}, L_{temp,L}^{off}, L_{stereo, t-1}^{off}, L_{stereo,t+1}^{off} \right\},\nonumber
\end{align}
where we set $\beta = 1$.
To filter out stationary pixels, we apply auto-masking~\cite{Godard19iccv}.
We obtain the auto-mask by comparing the photometric projection error from source to target image with the identity reprojection.
We apply this auto-masking for all passive losses.
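A sketch of how \eqref{eq:photo_temporal} combines with the auto-mask; the per-pixel loss maps are assumed to be precomputed with $pe()$, and the names are illustrative.
\begin{verbatim}
import torch

def photo_loss(L_stereo_on, passive_losses, identity_losses, beta=1.0):
    # passive_losses: the four per-pixel maps in L_photo^off, (N,1,H,W).
    # identity_losses: pe() between target and *unwarped* passive frames.
    passive = torch.cat(passive_losses, 1).min(1, keepdim=True).values
    identity = torch.cat(identity_losses, 1).min(1, keepdim=True).values
    mask = (passive < identity).float()  # drop stationary pixels
    return (L_stereo_on + beta * mask * passive).mean()
\end{verbatim}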
\iffalse
\begin{equation}
\mu = [ min_s pe(I_t, I_{s \rightarrow t} < min pe(I_t, I_s)].
\end{equation}
where $[]$ is the Iverson bracket.
\fi
\begin{figure}
\vspace{0.2cm}
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/loss_visualization/ann_171_ir0.png}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/loss_visualization/magma_171_passive.png}
\end{subfigure} \\ \vspace{0.01cm}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/loss_visualization/magma_171_all.png}
\end{subfigure}
\begin{subfigure}[b]{0.45 \linewidth}
\centering
\includegraphics[width=\linewidth]{srcs/images/loss_visualization/magma_171_our.png}
\end{subfigure}
\caption{Applying the minimum technique~\cite{Godard19iccv} over passive and active losses removes all signal from the active loss. Therefore, we propose to take the $\min$ only over the temporal, passive losses. This results in a strong signal on textureless surfaces. \update{Lighter color means higher signal.}}
\label{fig:passive_active_loss}
\vspace{-0.6cm}
\end{figure}
\noindent\textbf{Input-output losses.} For large parts of the scene, the active stereo sensor computes reliable semi-dense depth estimates $D_{sd}$. We use the $l_1$-norm to penalise the difference between~$D_{sd}$ and our depth prediction $\hat{D}$ over $\Omega_{sd}$, where semi-dense depth is available.
\begin{equation}
L_{sd} = \sum_{i \in \Omega_{sd}} w(i) \| D_{sd}(i) - \hat{D}(i) \|_1,
\end{equation}
\update{where $w(i) = 1/D_{sd}(i)^2$ such that distant regions are penalized less by the input-output loss.} Also, we use the depth of the sparse points from the SLAM system as supervision:
\begin{equation}\label{eq:sparse_loss}
L_{s} = \sum_{i \in \Omega_s} \| D_{s}(i) - \hat{D}(i) \|_1.
\end{equation}
These features are sparser than the semi-dense depth estimates, but more reliable, as they are tracked and refined over several frames. The sparse features are located at boundaries and at distant objects, where the semi-dense depth maps often miss depth measurements due to occlusions or the longer range. Therefore, we find that these two losses complement each other and speed up training significantly.
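Both input-output losses reduce to a masked $l_1$ penalty (inverse-square-weighted for $L_{sd}$); a sketch, with names of our choosing:
\begin{verbatim}
import torch

def masked_l1(pred, ref, inv_sq_weight=False):
    # ref is zero where no reference depth is available.
    valid = ref > 0
    diff = (pred - ref).abs()
    if inv_sq_weight:  # w(i) = 1 / D_sd(i)^2 for L_sd
        diff = diff / ref.clamp(min=1e-3) ** 2
    return diff[valid].mean()

# L_sd = masked_l1(pred, D_sd, inv_sq_weight=True)
# L_s  = masked_l1(pred, D_s)
\end{verbatim}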
\noindent\textbf{Smoothness loss.} This loss is necessary to improve convergence and avoid the aperture problem. As our guidance image has the active pattern, we cannot directly apply the standard edge-guided smoothness loss \cite{Godard19iccv}. Instead, we use a $9 \times 9$ median filter on the infrared frame to remove the projected pattern followed by an edge-guided smoothness loss:
\begin{equation}
L_{smooth}=\left|\partial_{x} d_{t}^{*}\right| e^{-\left|\partial_{x} I_{t}\right|}+\left|\partial_{y} d_{t}^{*}\right| e^{-\left|\partial_{y} I_{t}\right|}.
\end{equation}
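A sketch of the median-filtered, edge-guided smoothness term; the unfold-based median and the mean normalisation of the disparity (cf.\ $d_t^{*}$) follow common practice and are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def smoothness(disp, ir, ksize=9):
    # disp, ir: (N, 1, H, W); median-filter the IR guide to suppress
    # the projected dot pattern before edge-aware weighting.
    pad = ksize // 2
    patches = F.unfold(F.pad(ir, [pad] * 4, mode='reflect'), ksize)
    ir_med = patches.median(dim=1).values.view_as(ir)
    d = disp / (disp.mean([2, 3], keepdim=True) + 1e-7)
    dx = (d[..., :, 1:] - d[..., :, :-1]).abs()
    dy = (d[..., 1:, :] - d[..., :-1, :]).abs()
    wx = torch.exp(-(ir_med[..., :, 1:] - ir_med[..., :, :-1]).abs())
    wy = torch.exp(-(ir_med[..., 1:, :] - ir_med[..., :-1, :]).abs())
    return (dx * wx).mean() + (dy * wy).mean()
\end{verbatim}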
\noindent\textbf{Sparsity regularisation.} Following \cite{Wang20arxiv}, we add an $l_1$ regularisation to the BN scaling factors $\gamma_{m,l}$ to encourage channel exchange between the different sub-networks:
\begin{equation}\label{eq:gamma_regularization}
L_{\gamma} = \sum_m \sum_l \| \gamma_{m, l} \|_1,
\end{equation}
where $m$ indexes over sub-networks and $l$ over BN layers.
\noindent\textbf{Final loss.} Combining these losses gives us the final loss:
\begin{equation}
L = \sum_l \left( \frac{w_1}{l^2} L^l_{photo} + \frac{w_2}{l^2} L^l_{sd} + \frac{w_3}{l^2} L^l_{s} + \frac{w_4}{l^2} L^l_{sm} \right) + w_5 L_{\gamma},
\end{equation}
where $l$ indexes over several scales, and we set $w_1 = 1,~w_2 = 0.01,~w_3 = 1,~w_4 = 10^{-5},~w_5 = 2 \times 10^{-6}$. We compute the losses at four scales. We summarise these losses in Fig. \ref{fig:losses}. Note that all losses are averaged over valid pixels.
\iffalse
\textbf{Uncertainty estimation.} Additionally, we also train a model with uncertainty estimation, similar to \textit{Self-Teaching} scheme presented in \cite{PoggiATM20}. We first train a model in a self-supervised manner, with the losses mentioned in this section, obtaining a model $T$, producing a noisy distribution $d_T$. Then, we train a second instance of the same model, $S$, that uses $d_T$ as pseudo-ground truth. $S$ weights are initialised with the weights of $T$. During training of $S$, we use all the losses introduced in this section except the photo-metric loss \cf Eq. \ref{eq:photo_temporal}, which we substitute with:
\begin{equation}
L_{self} = \frac{\mu(d_S) - d_T}{\sigma(d_S)} + \log(\sigma(d_S)).
\end{equation}
The network $S$ predicts $\log(\sigma(d_S))$ for numerical stability, within a range from -6 to 6:
\begin{equation}
\log(\sigma(d_S)) = -6 + 12*sigmoid(out_{uncertainty}),
\end{equation}
where $out_{uncertainty}$ is the output from the network.
\fi
\section{Appendix}
The appendix contains more details about the channel exchanging network and the proposed $\max$ channel exchange mechanism. We further present explicit expressions for all the photometric losses used for training. We provide additional qualitative results and show how the different inputs contribute to the final loss. Lastly, we provide more details on the proposed datasets.
\subsection{Channel Exchange Network}\label{sec:model_appendix}
To make the paper self-contained, we provide additional details on the channel exchanging network (CEN). We further highlight that taking the maximum across channels during the exchange, rather than the average as proposed in \cite{Wang20arxiv}, gives superior results. As mentioned in the main paper, we have three identical sub-networks, $f_m$, with shared weights (except for the Batch-Normalisation (BN) layers).
The final output $\hat{D}$ is a linear combination of the sub-networks with a decision score $\alpha_m$ learned with an associated softmax.
\begin{equation}
\hat{D} = \sum_{m=1}^{M} \alpha_{m} f_{m}\left(\boldsymbol{x}_{m}^{(i)}\right),
\end{equation}
where the input $\boldsymbol{x}^i$ is the initial depth, guidance image or sparse 3D landmarks. Channel exchange is achieved via the BN layers, where individual channel importance is measured by the magnitude of the BN scaling factor $\gamma_{m, l}$ during training. \cite{Wang20arxiv} propose that a channel is replaced with the average across the other sub-networks if the magnitude of the sub-network-specific BN scaling factor $\gamma_{m,l}$ is below a fixed threshold:
\begin{equation}
\resizebox{0.9\linewidth}{!}{$
\cramped{
\boldsymbol{x}_{m, l, c}^{\prime}=
\begin{dcases}
\gamma_{m, l, c} \frac{\boldsymbol{x}_{m, l, c}-\mu_{m, l, c}}{\sqrt{\sigma_{m, l, c}^{2}+\epsilon}}+\beta_{m, l, c}& \text{if } \gamma_{m, l, c}>\theta\\
\frac{1}{M-1}\sum^M_{m^{\prime} \neq m} \gamma_{m^{\prime}, l, c} \frac{\boldsymbol{x}_{m^{\prime}, l, c}-\mu_{m^{\prime}, l, c}}{\sqrt{\sigma_{m^{\prime}, l, c}^{2}+\epsilon}}+\beta_{m^{\prime}, l, c} & \text{else}
\end{dcases}
}
$}
\end{equation}
In contrast to \cite{Wang20arxiv}, which replaces a weak channel with the average signal across the other sub-networks, we propose to replace it with the channel carrying the strongest signal, \textit{i}.\textit{e}., the $\max$ over the sub-networks:
\begin{equation}
\resizebox{0.9\linewidth}{!}{$
\cramped{
\boldsymbol{x}_{m, l, c}^{\prime}=
\begin{dcases}
\gamma_{m, l, c} \frac{\boldsymbol{x}_{m, l, c}-\mu_{m, l, c}}{\sqrt{\sigma_{m, l, c}^{2}+\epsilon}}+\beta_{m, l, c}& \text{if } \gamma_{m, l, c}>\theta\\
\max_{m^{\prime} \neq m} \gamma_{m^{\prime}, l, c} \frac{\boldsymbol{x}_{m^{\prime}, l, c}-\mu_{m^{\prime}, l, c}}{\sqrt{\sigma_{m^{\prime}, l, c}^{2}+\epsilon}}+\beta_{m^{\prime}, l, c} & \text{else}
\end{dcases}
}
$}
\end{equation}
where $\mu_{m,l,c}$, $\sigma_{m,l,c}^2$, $\gamma_{m,l,c}$ and $\beta_{m,l,c}$ are the batch mean and variance and the learnable scaling and bias parameters for the $c^{th}$ channel in the $l^{th}$ layer of the $m^{th}$ sub-network.
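A minimal PyTorch-style sketch of the proposed $\max$ exchange is given below. The threshold value and the use of \texttt{nn.BatchNorm2d} modules are assumptions for illustration; the exchange itself follows the equation above.
\begin{verbatim}
import torch

def max_channel_exchange(xs, bns, theta=2e-2):
    # xs:  list of M feature maps (B,C,H,W), one per sub-network
    # bns: list of M per-branch nn.BatchNorm2d layers
    normed = [bn(x) for bn, x in zip(bns, xs)]   # gamma * x_hat + beta
    out = []
    for m, bn in enumerate(bns):
        weak = bn.weight.abs() <= theta           # (C,) channels to replace
        others = torch.stack([normed[k]
                              for k in range(len(xs)) if k != m])
        strongest = others.max(dim=0).values      # pixel-wise max over branches
        out.append(torch.where(weak.view(1, -1, 1, 1),
                               strongest, normed[m]))
    return out
\end{verbatim}
Taking the per-pixel maximum (rather than the mean) is what enables the pixel-wise routing discussed next: at every location the replaced channel can come from a different branch.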
In Fig. \ref{fig:exchange_max}, we show the channel exchange for a frame of the D435i test set. We observe that our method learns to route features from the sparse branch to the disparity and infrared branches.
Our proposal produces pixel-wise routing (see Fig. \ref{fig:exchange_max}), while the channel exchange proposed by \cite{Wang20arxiv} cannot route features from a specific branch, because it replaces a weak channel with the average of the other branches.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_exchange_max/992987150000_ir0_csv.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_exchange_max/992987150000_disp_csv.png}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_exchange_max/layer0_channel0_feature0.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_exchange_max/layer0_channel0_feature19.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_exchange_max/layer0_channel1_feature0.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_exchange_max/layer0_channel1_feature19.png}
\caption{}
\end{subfigure}
\caption{\textit{Top row: Input infrared and predicted disparity, input sparse points are projected to both. Bottom row: Exchanged features for first ResNet block: (a) Disparity branch, feature \#0, (b) Disparity branch, feature \#19, (c) Infrared branch, feature \#0, (d) Infrared branch, feature \#19. Black, gray, red: features copied from sparse, disparity and infrared respectively.} Features of the sparse branch are copied to both disparity and infrared branches. The proposed 'maximum exchange' allows the network to route features from sparse directly, instead of an average of other branches.}
\label{fig:exchange_max}
\vspace{-0.4cm}
\end{figure}
\subsection{Explicit Expression for Photometric Losses}\label{sec:losses_supplementary}
We use a combination of passive and active photometric losses, operating on the active stereo images ($I_{t,L}^{on}$ and $I_{t,R}^{on}$) and the passive stereo images at the next ($I_{t+1,L}^{off}$ and $I_{t+1,R}^{off}$) and previous ($I_{t-1,L}^{off}$ and $I_{t-1,R}^{off}$) time steps. We compute the photometric reprojection error between the stereo frames with the projector on, $L_{stereo}^{on}$:
\begin{equation}
L_{stereo}^{on} = L_p(I_{t, L}^{on}, I_{t, R\rightarrow L}^{on}).
\end{equation}
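The photometric error $L_p$ is not expanded here; a common instantiation, which we sketch below for reference, is the Monodepth-style weighted mix of SSIM and L1 over a $3\times3$ window. Whether the paper uses exactly this form and the value $\alpha=0.85$ are assumptions.
\begin{verbatim}
import torch.nn.functional as F

def L_p(a, b, alpha=0.85):
    # a, b: (B,C,H,W) images; returns the mean photometric error
    l1 = (a - b).abs().mean(1, keepdim=True)
    mu_a, mu_b = F.avg_pool2d(a, 3, 1, 1), F.avg_pool2d(b, 3, 1, 1)
    var_a = F.avg_pool2d(a * a, 3, 1, 1) - mu_a ** 2
    var_b = F.avg_pool2d(b * b, 3, 1, 1) - mu_b ** 2
    cov = F.avg_pool2d(a * b, 3, 1, 1) - mu_a * mu_b
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / (
        (mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))
    dssim = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)
    return (alpha * dssim + (1 - alpha) * l1).mean()
\end{verbatim}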
We incorporate temporal consistency and a wider baseline by computing the photometric losses between the previous and next passive frames. This is achieved by reprojecting both the previous and next passive images into the current view at time $t$. These passive images satisfy the photometric consistency constraint as neither has the active pattern. We compute the temporal losses, $L_{temp, R}^{off}$ and $L_{temp, L}^{off}$, for both the left and right stereo frames as:
\begin{align}
L_{temp, R}^{off} &= L_{p}(I_{t-1 \rightarrow t, R \rightarrow L}^{off}, I_{t+1 \rightarrow t, R \rightarrow L}^{off}), \\
L_{temp, L}^{off} &= L_{p}(I_{t-1 \rightarrow t, L}^{off}, I_{t+1 \rightarrow t, L}^{off}),
\end{align}
note that both the left and right frames are projected into the current view, which is the left frame at time $t$.
Lastly, since we have reprojected the previous and next left and right frames into the current frame, we can, at minimal additional cost, compute the passive stereo losses, $L_{stereo, t-1}^{off}$ and $L_{stereo, t+1}^{off}$, for the previous and next frames:
\begin{align}
L_{stereo, t-1}^{off} &= L_{p}(I_{t-1 \rightarrow t, L}^{off}, I_{t-1 \rightarrow t, R\rightarrow L}^{off}), \\
L_{stereo, t+1}^{off} &= L_{p}(I_{t+1 \rightarrow t, L}^{off}, I_{t+1 \rightarrow t, R\rightarrow L}^{off}).
\end{align}
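The $I_{t\pm1 \rightarrow t}$ terms above denote inverse warping into the current view. For completeness, a sketch of this warp is shown below, assuming a pinhole model with intrinsics $K$, a SLAM relative pose $T$ mapping points from the current frame into the source frame, and PyTorch's \texttt{grid\_sample}; variable names and the handling of invalid pixels are our choices, not the paper's.
\begin{verbatim}
import torch
import torch.nn.functional as F

def warp_to_current(src_img, depth_t, T, K, K_inv):
    # src_img: (B,C,H,W) source image (e.g. I_{t-1}); depth_t: (B,1,H,W)
    B, _, H, W = depth_t.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(1, 3, -1)
    cam = (K_inv @ pix) * depth_t.view(B, 1, -1)        # back-project to 3D
    cam = torch.cat([cam, torch.ones(B, 1, H * W)], 1)  # homogeneous coords
    proj = K @ (T @ cam)[:, :3]                         # into source camera
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    u = 2 * uv[:, 0] / (W - 1) - 1                      # normalise to [-1,1]
    v = 2 * uv[:, 1] / (H - 1) - 1                      # for grid_sample
    grid = torch.stack([u, v], -1).view(B, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)
\end{verbatim}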
\begin{figure*}[hpt!]
\centering
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/496190030000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/496190030000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/496190030000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/496190030000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/496190030000_r50sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/529016270000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/529016270000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/529016270000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/529016270000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/529016270000_r50sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/563043470000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/563043470000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/563043470000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/563043470000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/563043470000_r50sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/578989550000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/578989550000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/578989550000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/578989550000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/578989550000_r50sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/983179310000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/983179310000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/983179310000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/983179310000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/983179310000_r50sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/998324750000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/998324750000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/998324750000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/998324750000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/998324750000_r50sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/1031084270000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/1031084270000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/1031084270000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/1031084270000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/1031084270000_r50sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/486334783000_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/486334783000_sgbm.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/486334783000_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/486334783000_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_comparison_realsense/486334783000_r50sparse.png}
\end{subfigure}
\caption{\textit{From left to right: Input infrared with projected sparse points, input disparity, {\it ActiveStereoNet}~\cite{Zhang18eccv}, {\it ACDC-Net}-R18 and {\it ACDC-Net}-R50.} Qualitative comparison shows that our model produces accurate depth for challenging scenarios such as thin structures (rows 1, 5 and 6), distant objects (row 4), many small objects (row 2) and specularities (row 7). We also show two failure cases (rows 3 and 8).}
\label{fig:results_realsense_supp}
\end{figure*}
\subsection{Additional qualitative results}
\label{sec:qualitative_results}
In Fig. \ref{fig:results_realsense_supp}, we provide additional qualitative results in the form of images from the D435i test set. For each test sample, we show (a) the infrared image with sparse points projected, (b) the input disparity image, (c) {\it ActiveStereoNet}~\cite{Zhang18eccv}, (d) {\it ACDC-Net}-R18 and (e) {\it ACDC-Net}-R50. We observe that our prediction is more complete than the input and best at preserving edges.
Fig. \ref{fig:results_input_comparison} compares the impact of different inputs for depth completion. We observe that while the incomplete depth input helps provide detail (purple regions), the guide IR encodes context to better complete the depth (orange regions). This is especially observed in cases of reflective or transparent surfaces. Finally, the sparse input helps anchor the depth completions in far-away regions (red regions), where the input depth is often missing or very noisy.
\begin{figure*}[hpt!]
\centering
\includegraphics[width=\textwidth]{srcs/images/fig_input_knock/results_all.pdf}
\caption{
Here we compare the impact of different inputs by training multiple networks with different input combinations. Purple boxes show missed details due to unavailable input depth, orange boxes show missed context due to an unavailable input IR guide image, and red boxes show incorrect far-away predictions due to unavailable sparse points.}
\label{fig:results_input_comparison}
\end{figure*}
\subsection{Dataset}\label{sec:dataset_supplementary}
As pointed out in the main paper, we did not find any publicly available dataset sufficient for our needs. Therefore, we presented two new datasets: Active TartanAir and D435i Sequences. In this section, we present a short review of existing datasets and give more details on the curation of Active TartanAir.
\subsubsection{Related work}
\begin{table*}[htp!]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
Name & Synthetic & \begin{tabular}{@{}c@{}}(Pseudo) \\ GT depth\end{tabular} & RGB or IR &
\begin{tabular}{@{}c@{}}Initial\\ depth maps\end{tabular} & GT poses & Stereo & IMU &
\begin{tabular}{@{}c@{}}Active \\ texture\end{tabular} &
\begin{tabular}{@{}c@{}}Interleave \\ mode\end{tabular} \\ \hline \hline
NYU-Depth V2~\cite{Silberman:ECCV12}&&\checkmark&\checkmark&\checkmark&&&\checkmark&&\\
ScanNet~\cite{dai2017scannet}&&\checkmark&\checkmark&\checkmark&\checkmark& &\checkmark& & \\
Matterport3D~\cite{Matterport3D}&&\checkmark&\checkmark&\checkmark&\checkmark&&&& \\
SceneNN~\cite{scenenn-3dv16}&&\checkmark&\checkmark&\checkmark&\checkmark&&&& \\
Stanford
2D-3D-S~\cite{armeni2017joint}&&\checkmark&\checkmark&\checkmark&\checkmark&&&& \\
\begin{tabular}{@{}c@{}}
KITTI-\\
Depth Compl.~\cite{Uhrig17ic3dv}
\end{tabular}&&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&&\\
TUM RGB-D~\cite{sturm12iros}&&&\checkmark&\checkmark&\checkmark&&\checkmark&&\\
EuRoC~\cite{Burri25012016}&&\checkmark&\checkmark&&\checkmark&\checkmark&\checkmark&&\\
OpenLORIS~\cite{shi2019openlorisscene}&&&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&&\\
VOID~\cite{Wong20icra}&&&\checkmark&\checkmark&\checkmark&&\checkmark&&\\
ICL-NUIM~\cite{handaetalICRA2014}&\checkmark&\checkmark&\checkmark& &\checkmark&&&&\\
SceneNet RGB-D~\cite{McCormacetalICCV2017}&\checkmark&\checkmark&\checkmark& &\checkmark&&&&\\
Replica~\cite{replica19arxiv}&\checkmark&\checkmark$^*$&\checkmark$^*$& &\checkmark$^*$&\checkmark$^*$&&&\\
Hypersim~\cite{roberts2020}&\checkmark&\checkmark&\checkmark& &\checkmark&&&&\\
TartanAir~\cite{Wang20iros}&\checkmark&\checkmark&\checkmark& &\checkmark&\checkmark&&&\\
\hline
\textbf{D435i Seq.} & &\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark \\
\textbf{ActiveTartanAir}&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark\\ \hline
\end{tabular}
\caption{A comparative table for some of the currently available datasets with RGB-D or stereo sequences. The $^*$ mark denotes that the data is not provided, but can be generated with the provided SDK. Neither the synthetic nor the real datasets contain active textures, and none can be used in interleave mode. Existing real scene-understanding datasets often miss some part of the sensor data required to run visual-inertial SLAM, while real SLAM datasets often lack structure ground truth. Synthetic datasets, on the other hand, lack some sensors and initial noisy or incomplete depth maps. Our two new datasets (Active TartanAir and D435i Sequences) are the first datasets designed for depth prediction and completion for active stereo sensors.}
\label{tab:related_datasets}
\end{table*}
Table \ref{tab:related_datasets} summarises existing datasets often used for depth completion. Most of these datasets are not recorded with active stereo sensors and, therefore, do not have images with an active pattern projected into the scene. OpenLoris \cite{shi2019openlorisscene} is a dataset recorded with an active stereo sensor; however, it is not designed for depth completion and does not contain pseudo GT depth estimates. Due to the non-existence of active depth completion and prediction datasets for active stereo sensors, we present D435i Sequences and Active TartanAir (based on TartanAir~\cite{Wang20iros}), which contain GT depth and a projected active texture and are recorded in interleave mode. The datasets will be made available upon acceptance.
\subsubsection{Active TartanAir}
We simulate an active stereo sensor by rendering a textured pattern into the scene using the ground truth depth. The pattern was obtained by analysing the real pattern projection on a white wall from the D435i sensor. This way we could detect blob positions by using a Difference of Gaussians (DoG) filter together with Non-Maximum Suppression. Recording in interleaved mode was also used to increase contrast by computing the difference between projector on and off. Additionally, we model the variation of relative blob intensity with respect to distance experimentally by changing the distance of the sensor with respect to the wall. When doing the overlay on the synthetic images, we assume for simplicity that the projector is aligned with the left camera; we believe this also prevents the network from learning predictions based on the position of the pattern in the guidance image, which may change from sensor to sensor. Occlusions in the right image are taken into account by considering the synthetic depth of both left and right images and requiring them to match.
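A sketch of the blob-detection step is shown below, using OpenCV; the filter scales, NMS radius and threshold are assumed values for illustration, not those used to build the dataset.
\begin{verbatim}
import cv2
import numpy as np

def detect_blobs(ir_on, ir_off, sigma=1.0, k=1.6, rad=3, thr=10.0):
    # interleaved frames: projector on / off, single-channel uint8
    diff = cv2.absdiff(ir_on, ir_off).astype(np.float32)
    dog = (cv2.GaussianBlur(diff, (0, 0), sigma)
           - cv2.GaussianBlur(diff, (0, 0), k * sigma))
    # non-maximum suppression: keep local maxima in a (2*rad+1)^2 window
    dil = cv2.dilate(dog, np.ones((2 * rad + 1, 2 * rad + 1), np.uint8))
    ys, xs = np.where((dog == dil) & (dog > thr))
    return np.stack([xs, ys], axis=1)  # blob pixel coordinates (u, v)
\end{verbatim}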
\subsection{Datasets}\label{sec:dataset}
\vspace{-0.1cm}
\noindent \textbf{Active TartanAir} extends the TartanAir dataset~\cite{Wang20iros} for active stereo systems with realistic semi-dense depth maps computed using Semi-global Matching (SGM)~\cite{Hirschmuller08pami} and simulated IMU measurements.
We simulate an active stereo sensor by rendering a textured pattern into the scene using the ground truth depth. The pattern was obtained by analysing the pattern projection on a white wall from the D435i sensor. This way we could detect blob positions by using a Difference of Gaussians filter together with Non-Maximum Suppression. Recording in interleaved mode was also used to increase contrast by computing the difference between projector on and off.
We use the SGM implementation from OpenCV to create initial depth maps similar to the ones produced by the D435i camera. Fig.~\ref{fig:tartanair_dataset} shows the active texture projected into the left frame, the initial depth map without and with the active pattern and the ground truth depth.
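A sketch of creating such initial maps with OpenCV is given below; the SGBM parameters, file names and rig geometry are placeholders, since the exact settings used to mimic the D435i are not specified here.
\begin{verbatim}
import cv2
import numpy as np

left = cv2.imread("left_ir.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
right = cv2.imread("right_ir.png", cv2.IMREAD_GRAYSCALE)  # file names
baseline, fx = 0.05, 320.0                                 # assumed rig geometry

sgbm = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128, blockSize=5,
    P1=8 * 5 * 5, P2=32 * 5 * 5,
    uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point
depth = baseline * fx / np.where(disp > 0, disp, np.inf)    # invalid pixels -> 0
\end{verbatim}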
As this paper focuses on indoor depth completion, we select the four indoor environments and one environment that is half indoor and half outdoor. We use two environments for testing and the remaining three for training. We uniformly sample $4979$ frames from the hospital and Japanese alley test environments, while keeping all $33485$ frames in the training set.
\begin{figure}[t]
\vspace{0.2cm}
\centering
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{srcs/images/tartanair/cam0_on_10433333229.png}
\end{subfigure}%
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{srcs/images/tartanair/magma_depth0_10433333229.png}
\end{subfigure}\\ \vspace{0.01cm}
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{srcs/images/tartanair/magma_depth0_off_10433333229.png}
\end{subfigure}%
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{srcs/images/tartanair/magma_depth0_on_10433333229.png}
\end{subfigure}%
\caption{From left to right, top to bottom: the left stereo image overlaid with the projected texture, the ground truth depth map, and the depth maps from SGM~\cite{Hirschmuller08pami} without and with the active pattern.}
\label{fig:tartanair_dataset}
\vspace{-0.6cm}
\end{figure}
\noindent \textbf{D435i active sequences} is a new dataset for active stereo prediction and completion. The dataset is recorded with multiple D435i sensors across several office and apartment environments. We split the dataset into $15446$ training, $2762$ validation and $4568$ testing images, with no environment overlap between the training and testing sequences. The D435i records data in an interleaved mode, such that the active pattern is projected onto every second frame. We use the frames without the projected pattern to estimate pseudo ground truth depth maps using the general-purpose SfM library COLMAP~\cite{Schoenberger16cvpr,Schoenberger16eccv}, which has successfully been used to create depth prediction datasets, \textit{e}.\textit{g}.~\cite{Li18cvpr}. The estimated pseudo ground truth depth maps are semi-dense but accurate. The depth maps are projected onto the frames with the active pattern, such that we can evaluate performance on this type of frame.
On average we tracked $240$ 3D landmarks per IR frame on Active TartanAir sequences and $270$ on D435i sequences.
\section{Experimental Results}\label{sec:results}
\vspace{-0.1cm}
As there do not exist any publicly available depth completion or prediction datasets for active stereo sensors, we curate and present two new datasets: a synthetic dataset and a real-world dataset. In this section, we describe these datasets.
\input{./srcs/dataset.tex}
\subsection{Experimental Setup}
\vspace{-0.1cm}
\noindent \textbf{Implementation details.}
We use an encoder-decoder architecture with skip connections similar to \cite{Wang20arxiv} (using RefineNet \cite{LinLMSR20}), where the encoder is either a ResNet18 (R18) or a ResNet50 (R50). The final layer is a sigmoid that predicts normalised disparity $\hat{D}$, which we rescale to absolute depth $1 / (D_{min} + (D_{max} -D_{min})\hat{D})$, where $D_{min}$ and $D_{max}$ are hyperparameters for the minimum and maximum disparities. For both datasets we set $D_{min} = 1/20$ and $D_{max} = 1/0.3$. We use the Adam optimizer with $\beta_1 = 0.1$, $\beta_2 = 0.999$, learning rate (lr) $0.00001$, and mini-batch size $12$. We use a lr scheduler that reduces the lr by a factor of $10$ every $15$ epochs. The models are trained on a single Nvidia GeForce RTX 3090. Training typically converges after $30$ epochs.
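For clarity, the output rescaling amounts to the following one-liner, with the disparity bounds stated above:
\begin{verbatim}
D_MIN, D_MAX = 1.0 / 20.0, 1.0 / 0.3   # min/max disparity

def to_depth(d_hat):
    # d_hat: sigmoid output in [0, 1] (normalised disparity)
    return 1.0 / (D_MIN + (D_MAX - D_MIN) * d_hat)
\end{verbatim}
With these bounds the representable depth range is roughly $0.3$ to $20$ metres.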
\noindent\textbf{Augmentations.} We perform the following training augmentations, each with a 50\% chance: random brightness (ranging from 0.8 to 1.2), horizontal flips and random rotation (ranging from $-5$ to $5$ degrees). Augmentations are only applied to the images which are fed to the networks, not to those used to compute losses, where only valid pixels are used. We also use Dropblock \cite{GhiasiLL18} after the first and second ResNet blocks, with a drop probability of $0.1$ (disabled during the first training epoch).
\noindent \textbf{Benchmarks.} We benchmark our approach against competing methods for classical stereo~\cite{Geiger10accv}, edge-aware smoothing~\cite{Barron16CVPR}, supervised depth completion~\cite{Senushkin20arxiv}, self-supervised active depth-prediction~\cite{Zhang18eccv} and self-supervised depth-completion~\cite{Wong20icra}. For~\cite{Senushkin20arxiv,Wong20icra} we use the publicly available code with the same implementation details (learning-rate, $\#$ of epochs, optimiser parameters) as proposed in their work.
As~\cite{Zhang18eccv} do not provide their source code, we implement their method closely following the authors' instructions. We noticed that the convergence of {\it ActiveStereoNet}~suffered when using only the weighted-LCN loss and instead use a combination of photometric and weighted-LCN losses. According to direct conversation with the authors, this helps generalisation.
\noindent \textbf{Evaluation Metrics.} Following existing depth completion methods on indoor scenes~\cite{Ma17icra,Senushkin20arxiv,Sinha20eccv}, we evaluate each method using the following metrics:
root mean squared error (RMSE), mean absolute relative error (Rel.) and $\delta_i$, the percentage of predicted pixels with relative error below the threshold $1.25^i$ ($i \in\{1,2,3\}$).
We provide the percentage of valid pixels ($\%val$) predicted by each method.
We split our results into regions with and without initial depth estimates to better understand how these methods perform in regions where there was no depth information.
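The metrics, evaluated over valid ground-truth pixels only, can be computed as in the sketch below (our implementation of the standard definitions):
\begin{verbatim}
import numpy as np

def depth_metrics(pred, gt):
    valid = gt > 0                      # pixels with ground truth
    p, g = pred[valid], gt[valid]
    rmse = np.sqrt(np.mean((p - g) ** 2))
    rel = np.mean(np.abs(p - g) / g)
    ratio = np.maximum(p / g, g / p)    # symmetric relative error
    d1, d2, d3 = (np.mean(ratio < 1.25 ** i) for i in (1, 2, 3))
    return rmse, rel, d1, d2, d3
\end{verbatim}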
\begin{figure*}[htp!]
\vspace{0.2cm}
\centering
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_537556430000_depth_ir0}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_537556430000_depth_input.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_537556430000_depth_voiced.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_537556430000_depth_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_537556430000_depth_r18.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_537556430000_depth_r50.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_537556430000_depth_colmap.png}
\end{subfigure}\\ \vspace{0.1cm}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_580724270000_depth_ir0}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_580724270000_depth_input.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_580724270000_depth_voiced.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_580724270000_depth_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_580724270000_depth_r18.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_580724270000_depth_r50.png}
\end{subfigure}
\begin{subfigure}[b]{0.13\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/d435i_qual/chang_sequence002_580724270000_depth_colmap.png}
\end{subfigure}
\caption{\textit{From left to right: Input IR0, input depth, VOICED, {\it ActiveStereoNet}, {\it ACDC-Net}-R18, {\it ACDC-Net}-R50 and Ground Truth.} Qualitative comparison shows that our model produces better depth than other self-supervised methods.}
\vspace{-0.3cm}
\label{fig:results_d435i}
\end{figure*}
\begin{table*}[htp!]
\vspace{0.2cm}
\begin{center}
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c||c|c|c|c|c|c||c|c||c|c||c|}
\hline
\multirow{2}{*}{\textbf{Method}} &
\multirow{2}{*}{\textbf{Sup.}} &
\multicolumn{6}{c||}{\textbf{Result on whole image}} &
\multicolumn{2}{c||}{\textbf{With initial depth}} &
\multicolumn{2}{c||}{\textbf{W/O initial depth}} &
\textbf{time} $\big\downarrow$\\
\cline{3-12}
&
& Rel. $\downarrow$& RMSE $\downarrow$& $\delta_1$ $\uparrow$& $\delta_2$ $\uparrow$& $\delta_3$ $\uparrow$& $\%val$ $\uparrow$
& Rel. $\downarrow$& RMSE $\downarrow
& Rel. $\downarrow$& RMSE $\downarrow
& (ms) \\
\hline
\hline
SGM ~\cite{Hirschmuller08pami} & \xmark
& 0.176 & 1.549 & 0.773 & 0.789 & 0.808 & 77.7
& \textbf{0.023} & 0.505
& - & -
& - \\
ELAS (Robotics)~\cite{Geiger10accv} & \xmark
& 0.120 & 1.483 & 0.861 & 0.875 & 0.885 & 77.7
& 0.070 & 1.086
& 0.337 & 2.759
& - \\
Bilateral Solver~\cite{Barron16CVPR} & \xmark
& 0.190 & \textbf{0.568} & 0.905 & 0.952 & 0.969 & \textbf{100}
& 0.062 & \textbf{0.393}
& 0.326 & \textbf{0.880}
& - \\
{\it ActiveStereoNet}~\cite{Zhang18eccv} & \xmark
& 0.158 & 1.377 & 0.810 & 0.853 & 0.879 & 86.4
& 0.110 & 1.182
& 0.296 & 2.093
& \update{31} \\
{\it ACDC-Net}-R18 (\textbf{ours}) & \xmark
& 0.130 & 1.049 & 0.875 & 0.954 & 0.977 & \textbf{100}
& 0.096 & 0.875
& 0.215 & 1.616
& \update{\textbf{29}} \\
\update{{\it ACDC-Net}-R50 (\textbf{ours})} & \xmark
& \update{\textbf{0.087}} & \update{0.805} & \update{\textbf{0.932}} & \update{\textbf{0.964}} & \update{\textbf{0.979}} & \update{\textbf{100}}
& \update{0.037} & \update{0.558}
& \update{\textbf{0.174}} & \update{1.416}
& \update{135} \\
\hline
\hline
DMNet~\cite{Senushkin20arxiv} & \cmark
& 0.110 & 1.217 & 0.846 & 0.933 & 0.968 & 100
& 0.103 & 1.170
& 0.144 & 1.532
& \update{38} \\
\hline
\end{tabular}}
\end{center}
\caption{We compare {\it ACDC-Net}~with state-of-the-art methods on regions both with and without initial depth estimates in \textbf{Active TartanAir} sequences. \textbf{Note} that DMNet is the only supervised method (\textbf{Sup.}) in this table and is hence shown separately.
We observe that despite being self-supervised, our method closes the gap between supervised and self-supervised methods while being computationally efficient. Inference times are for an image resolution of $832\times480$ on an RTX 3090. \textbf{Bold} values are the best results among the self-supervised methods. $\downarrow$ marks metrics that are better with lower values and $\uparrow$ vice versa.}\label{tab:tartanair}
\vspace{-0.3cm}
\end{table*}
\subsection{Results}
\vspace{-0.1cm}
\noindent\textbf{Active TartanAir.} In Table~\ref{tab:tartanair} we compare our work against several baseline methods on the Active TartanAir dataset. While SGM does well where it predicts depth, it fails to produce estimates in texture-less regions without the active pattern. Moreover, due to the wide baseline in this dataset, large occluded regions are present in the stereo images, leading to low completeness scores for both {\it ActiveStereoNet}~and SGM. {\it ACDC-Net}~outperforms competing methods, particularly in regions without any initial depth prediction. In contrast to SGM, {\it ACDC-Net}~exploits the combination of guide images and sparse 3D landmarks to predict more accurate depth estimates. Additionally, our method is computationally efficient, running at over $25$ FPS. Fig.~\ref{fig:results_tartanair} shows qualitatively that our network produces crisper results compared to other methods.
\begin{table*}[htp!]
\small
\begin{center}
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c||c|c|c|c|c|c||c|c||c|c|}
\hline
\multirow{2}{*}{\textbf{Method}} &
\multirow{2}{*}{\textbf{S}} & \multirow{2}{*}{\textbf{C}} & \multirow{2}{*}{\textbf{A}} &
\multicolumn{6}{c||}{\textbf{Result on whole image}} &
\multicolumn{2}{c||}{\textbf{With initial depth}} &
\multicolumn{2}{c|}{\textbf{W/O initial depth}} \\
\cline{5-14}
& & &
& Rel. $\downarrow$& RMSE $\downarrow$ & $\delta_1$ $\uparrow$& $\delta_2$ $\uparrow$& $\delta_3$ $\uparrow$ & $\%val$ $\uparrow$
& Rel. $\downarrow$& RMSE $\downarrow
& Rel. $\downarrow$& RMSE $\downarrow
\\
\hline
\hline
SGM ~\cite{Hirschmuller08pami} & \cmark & &
& 0.236 & 0.957 & 0.719 & 0.736 & 0.748 & 57.8
& 0.051 & 0.297
& - & -
\\
ELAS (Robotics)~\cite{Geiger10accv} & \cmark & &
& 0.078 & 0.402 & 0.931 & 0.945 & 0.952 & 84.1
& \textbf{0.045} & 0.227
& 0.193 & 0.715
\\
{\it ActiveStereoNet}~\cite{Zhang18eccv} & \cmark & & \cmark
& 0.123 & 0.538 & 0.903 & 0.957 & 0.973 & 96.4
& 0.066 & 0.261
& 0.308 & 0.997
\\
\hline
Bilateral Solver~\cite{Barron16CVPR} & & \cmark &
& 0.090 & 0.307 & 0.931 & 0.974 & 0.984 & 97.7
& 0.070 & 0.226
& 0.160 & 0.479
\\
\update{S2D-R34~\cite{Ma17icra}} & & \cmark & \cmark
& \update{0.383} & \update{1.008} & \update{0.326} & \update{0.507} & \update{0.667} & \update{\textbf{100}}
& \update{0.360} & \update{0.982}
& \update{0.407} & \update{1.074} \\
\update{Concat inputs (R50)} & & \cmark & \cmark
& \update{0.126} & \update{0.468} & \update{0.860} & \update{0.945} & \update{0.970} & \update{\textbf{100}}
& \update{0.090} & \update{0.323}
& \update{0.194} & \update{0.701}
\\
VOICED~\cite{Wong20icra} & & \cmark &
& 0.194 & 0.569 & 0.737 & 0.874 & 0.934 & 98.7
& 0.179 & 0.482
& 0.239 & 0.761
\\
\update{{\it ACDC-Net}-R50 (\textbf{Mean})} & & \cmark & \cmark
& \update{0.128} & \update{0.374} & \update{0.911} & \update{0.957} & \update{0.973} & \update{\textbf{100}}
& \update{0.101} & \update{0.268}
& \update{0.164} & \update{0.556}
\\
{\it ACDC-Net}-R18 (\textbf{ours}) & & \cmark & \cmark
& 0.095 & 0.361 & 0.909 & 0.974 & 0.986 & \textbf{100}
& 0.075 & 0.253
& 0.148 & 0.550
\\
{\it ACDC-Net}-R50 (\textbf{ours}) & & \cmark & \cmark
& \textbf{0.075} &\textbf{0.290} & \textbf{0.945} & \textbf{0.981} & \textbf{0.988} & \textbf{100}
& 0.055 & \textbf{0.184}
& \textbf{0.117} & \textbf{0.460}
\\
\hline
\end{tabular}}
\end{center}
\caption{
With a combination of inputs and complementary losses {\it ACDC-Net}-\{R18,R50\}~outperforms competing methods on the \textbf{D435i} sequences. We benchmark against \textbf{Stereo} (S) and \textbf{Completion} (C) methods w./w.o. the \textbf{Active} pattern (A).}
\vspace{-0.3cm}
\label{tab:realsense}
\end{table*}
\begin{figure}
\vspace{0.2cm}
\centering
\begin{subfigure}{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6/13433333199_rgb.png}
\end{subfigure}
\begin{subfigure}{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_13433333199_asn.png}
\end{subfigure}
\begin{subfigure}{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_13433333199_r18sparse.png}
\end{subfigure}
\begin{subfigure}{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_depth_japanesealley_P004_13433333199_0.png}
\end{subfigure}
\begin{subfigure}{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_13433333199_gt.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6/8233333251_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_8233333251_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_8233333251_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_depth_japanesealley_P003_8233333251_0.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_8233333251_gt.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6/5699999943_rgb.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_5699999943_asn.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_5699999943_r18sparse.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_depth_japanesealley_P002_5699999943_0.png}
\end{subfigure}
\begin{subfigure}[b]{0.18\linewidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/fig6_magma/magma_5699999943_gt.png}
\end{subfigure}
\caption{\textit{From left to right: Input RGB, {\it ActiveStereoNet}, {\it ACDC-Net}-R18, \update{{\it ACDC-Net}-R50} and Ground Truth.} Qualitative comparison shows that our model produces accurate depth for challenging scenarios such as specularities ($1^{st}$ row), thin structures ($2^{nd}$ row) and rain drops ($3^{rd}$ row).}
\vspace{-0.5cm}
\label{fig:results_tartanair}
\end{figure}
\noindent\textbf{D435i.}
Table~\ref{tab:realsense} shows that {\it ACDC-Net}~outperforms the self-supervised baselines (VOICED, {\it ActiveStereoNet}, \update{S2D}). Similar to the results on the Active TartanAir dataset, we see that SGM performs well where it predicts depth. However, {\it ACDC-Net}~exploits the sparse landmarks and, together with the temporal losses, learns better priors even in regions without initial depth predictions. \update{To emphasise the need for the proposed $\max$ operator in the channel exchange framework~\cite{Wang20arxiv}, we compare with two baseline versions. In the first we concatenate all the inputs and feed them into a single R50 backbone (Concat inputs), and in the other we use the vanilla mean-operator-based channel exchange network. We observe that only when we use the $\max$ operator are we able to convincingly outperform competing methods, producing state-of-the-art active depth completion results.} Fig.~\ref{fig:results_d435i}
shows the qualitative performance of {\it ACDC-Net}-R18 compared with VOICED~\cite{Wong20icra} and {\it ActiveStereoNet}~\cite{Zhang18eccv} on the D435i sequences. Note that VOICED is not able to recover from the initial scaffolding.
\begin{figure}
\vspace{0.2cm}
\centering
\begin{subfigure}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/suplementary/1019541710000.png}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/suplementary/bilateral_solver.png}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/suplementary/resnet50.png}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/suplementary/503548543000.png}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/suplementary/bilateral_solver_2.png}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\centering
\includegraphics[width=\textwidth]{srcs/images/suplementary/resnet50_2.png}
\end{subfigure}
\caption{\update{\textit{From left to right:} \textit{Input IR, Bilateral solver, {\it ACDC-Net}-R50.} While the bilateral solver produces sharp edges, it follows the guide image, resulting in false depth discontinuities. {\it ACDC-Net}~extracts scene understanding to interpolate depth.}}
\label{fig:bilateral_solver}
\vspace{-0.3cm}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=\columnwidth]{srcs/images/qual2.png}
\caption{\textit{From left to right:} \textit{Input IR, input depth map, {\it ACDC-Net}-R18, {\it ACDC-Net}-R50.} Point-clouds showing the spatial coherency of our method and its ability to fill missing crucial information of the environment, such as walls.}
\label{fig:results_realsense}
\vspace{-0.3cm}
\end{figure}
\update{In Fig.~\ref{fig:bilateral_solver} we observe that while the bilateral solver produces sharper edges compared to {\it ACDC-Net}, it closely follows the guide image, resulting in false depth discontinuities from shadows, glare and changes in intensity. In comparison, learning-based methods like {\it ACDC-Net}~exploit high-level scene understanding to avoid these artefacts.} We also present the output as point clouds (Fig.~\ref{fig:results_realsense}) to highlight the spatial coherence of our prediction and its potential for downstream tasks, such as robot navigation and scene understanding. Our model is able to complete the initial depth maps with guidance from the IR frame and sparse points even when there are large invalid areas in the input. {\it ACDC-Net}~correctly fills in missing depth, such as the walls next to the stairs in the first example.
\noindent\textbf{Ablation Studies.} To investigate the impact of the different input signals (sparse, depth and guide-IR) for depth completion, we conduct ablation studies on the D435i dataset. Specifically, we train multiple networks with different combinations of input signals and observe their performance. In Table~\ref{tab:ablation_input} we observe that each input contributes to the model performance. The sparse depth especially helps in distant regions (low RMSE).
\begin{table}[htp!]
\vspace{0.2cm}
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\emph{IR} & \emph{Dep.} & \emph{Spa.}
& Rel. $\downarrow$& RMSE $\downarrow$& $\delta_1$ $\uparrow$& $\delta_2$ $\uparrow$& $\delta_3$$\uparrow$
\\
\hline
\hline
& \cmark & \cmark
& 0.150 & 0.710 & 0.826 & 0.931 & 0.958
\\
\cmark & & \cmark
& 0.165 & 0.574 & 0.749 & 0.925 & 0.968
\\
\cmark & \cmark &
& 0.417 & 0.883 & 0.761 & 0.911 & 0.949
\\
\cmark & \cmark & \cmark
& \textbf{0.095} & \textbf{0.361} & \textbf{0.909} & \textbf{0.974} & \textbf{0.986}
\\
\hline
\end{tabular}
\end{center}
\caption{Impact of different inputs. \emph{IR} is the reference Guide-IR image, \emph{Dep.} refers to the initial depth and \emph{Spa.} to the projected 3D landmarks.}
\vspace{-0.3cm}
\label{tab:ablation_input}
\end{table}
\begin{table}[htp!]
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\emph{Temp.} & \emph{Sparse}
& Rel. $\downarrow$& RMSE $\downarrow$& $\delta_1$ $\uparrow$& $\delta_2$ $\uparrow$& $\delta_3$$\uparrow$
\\
\hline
\hline
\cmark &
& 0.138 & 0.662 & 0.851 & 0.928 & 0.96
\\
& \cmark
& 0.345 & 1.144 & 0.728 & 0.851 & 0.903
\\
\cmark & \cmark
& \update{\textbf{0.095}}& \update{\textbf{0.361}}& \update{\textbf{0.909}}& \update{\textbf{0.974}}& \update{\textbf{0.986}}
\\
\hline
\end{tabular}
\end{center}
\caption{Impact of different training losses. Here \emph{Temp.} refers to $L_{photo}^{off}$ (Eq.~\ref{eq:photo_temporal}) and \emph{Sparse} refers to $L_s$ (Eq.~\ref{eq:sparse_loss}).}
\vspace{-0.3cm}
\label{tab:ablation_losses}
\end{table}
\begin{table}[htp!]
\begin{center}
\resizebox{\linewidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\emph{Sparsepoints (\%)}
& Rel. $\downarrow$& RMSE $\downarrow$& $\delta_1$ $\uparrow$& $\delta_2$ $\uparrow$& $\delta_3$$\uparrow$
\\
\hline
\hline
100 \% & \textbf{0.095} & \textbf{0.361} & \textbf{0.909} & \textbf{0.974} & \textbf{0.986} \\
75 \% & 0.113 & 0.521 & 0.866 & 0.950 & 0.973 \\
50 \% & 0.115 & 0.526 & 0.862 & 0.950 & 0.972 \\
25 \% & 0.119 & 0.537 & 0.852 & 0.947 & 0.972 \\
0 \% & 0.136 & 0.581 & 0.803 & 0.932 & 0.969 \\
\hline
\end{tabular}
}
\end{center}
\caption{$\%$ of sparse points used as input for inference.}
\vspace{-0.5cm}
\label{tab:ablation_sparsepoints}
\end{table}
Similarly, to understand the impact of different training losses we train multiple networks with different combinations of training losses and report results in Table~\ref{tab:ablation_losses}. Removing the sparse loss affects the performance most indicating its importance in providing supervision for far away regions where we do not have depth estimates. The temporal loss provides supervision on occluded regions together with temporal continuity and in combination, the two losses complement each other giving the best results.
\noindent\textbf{Number of sparse points.} Table \ref{tab:ablation_sparsepoints} shows that the performance only drops slightly when the number of sparse points is reduced. This means that {\it ACDC-Net}~can obtain accurate dense depth with as few as $60$ tracked landmarks. However, removing the sparse points completely leads to a larger performance drop.
\noindent\textbf{3D Mapping for Robotics.} Finally, we show how accurate depth completion can boost the performance of a downstream task such as 3D mapping by recording a dataset using a wheeled robot
with a setup similar to existing robotic vacuum cleaners. The reduced ground clearance of the camera results in significant portions of the floor missing from the depth map. This leads to a floor-less reconstruction, as seen in Fig.~\ref{fig:results_reconstruction}. {\it ACDC-Net}-R50~is able to complete the floor and other regions, significantly improving the map.
\begin{figure}[htp!]
\centering
\vspace{0.2cm}
\includegraphics[width=\linewidth]{srcs/images/floor/willow_comparisson.png}
\caption{\textit{From top to bottom: 3D reconstructions from raw D435i depth and from completed {\it ACDC-Net}-R50 depth.}}
\vspace{-0.5cm}
\label{fig:results_reconstruction}
\end{figure}
\section{Introduction}\label{sec:intro}
\IEEEPARstart{D}{epth} sensors are revolutionising computer vision applications that require 3D information about the scene such as non-rigid 3D reconstruction, robotics and augmented reality~\cite{Lenz15ijrr,Newcombe15cvpr}. Active stereo~\cite{Nishikara84spie} in particular is an interesting depth sensor technology
for robotics due to its compact design, low power consumption and low cost. It extends passive stereo by projecting a textured pattern while internally employing hardware-accelerated classical stereo algorithms~\cite{Keselman17cvprw}. However, active stereo systems suffer from stereo artefacts such as edge flattening and occlusions, and do not produce depth estimates for distant, reflective or dark surfaces (see Fig. \ref{fig:teaser}).
{\it ActiveStereoNet}~\cite{Zhang18eccv} is a recent approach developed for active stereo systems. They propose a learnable stereo pipeline for active stereo systems using rectified IR images with the visible active pattern. Their method predicts a dense depth map but suffers in well-lit rooms where the intensity of the active pattern is low.
While {\it ActiveStereoNet}~uses photometric consistency between the stereo rectified IR images, we go one step further and leverage temporal consistency between estimated depth maps. It is important to note that this is a non-trivial extension since the projector moves alongside the camera. For this purpose, we work with the active stereo system in an interleaved fashion (one stereo frame with active illumination followed by one without) and use accurate 6-\textit{degrees of freedom} (DoF) trajectories from a feature-based visual-inertial Simultaneous Localisation and Mapping (SLAM) system inspired by~\cite{Leutenegger15ijrr}. Fig.~\ref{fig:overview} depicts an overview of our method: the visual-inertial SLAM operates on the stereo IR frames without active illumination and on IMU measurements (gyroscope and accelerometer), while our network completes the semi-dense depth maps obtained from the frames with the active illumination. As an extra benefit of the closely integrated SLAM system, we show that using the sparse 3D landmarks that are tracked and refined over multiple views as an extra input and source of supervision improves the final output. We benchmark our ACtive Depth Completion-Net~({\it ACDC-Net}) on diverse synthetic and real data for indoor scenes, showing improved performance compared to state-of-the-art self-supervised depth completion, depth-from-stereo and traditional dense stereo matching methods.
We also show that our method improves the downstream task of 3D mapping for robotics applications.
Our main contributions can be summarised as:
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{srcs/images/pipeline/ann_teaser_no_cite.png}
\caption{We complete depth maps from an active stereo sensor with guidance from aligned IR frames and sparse 3D landmarks. Our proposed method {\it ACDC-Net}~produces high quality depth maps compared to the state of the art~\cite{Zhang18eccv}.}
\vspace{-0.3cm}
\label{fig:teaser}
\end{figure}
\begin{figure*}
\centering
\includegraphics[page=1,width=0.9\textwidth]{srcs/images/dc_overview.pdf}
\caption{
We propose a depth completion method to improve the accuracy of active stereo depth maps with the help of a visual-inertial SLAM system. The active stereo sensor has an IR stereo pair and a laser projector. We use the left IR image, the depth map from the sensor and the projections of the sparse 3D landmarks tracked by the visual-inertial SLAM system as input to the proposed {\it ACDC-Net}. The network outputs a completed and refined depth map in real-time.}
\vspace{-0.5cm}
\label{fig:overview}
\end{figure*}
\begin{itemize}
\itemsep0em
\item We propose the first end-to-end self-supervised active depth completion method. Rather than predicting a depth map from scratch, we complete the built-in depth prediction with the guidance of an aligned IR image.
\item Our novel, self-supervised loss relies on a combination of warped active and passive frames, producing stronger signal in texture-less areas compared to the photometric loss on passive images only.
\item We show how sparse 3D landmarks from a SLAM system can be used as an extra input and supervision to improve depth completion with a channel exchange network~\cite{Wang20arxiv}.
\item Due to the non-existence of active stereo datasets, we release (upon paper acceptance) synthetic and real world datasets for active stereo depth completion and prediction.
\end{itemize}
\section{Conclusions}\label{sec:conclusions}
In this work we investigate self-supervised depth completion for active stereo systems. We closely integrate a visual-inertial SLAM system into our training and inference pipelines, providing reliable pose estimates during training as well as accurate 3D landmarks. We propose {\it ACDC-Net}~that leverages these 3D landmarks as input and as weak supervision, resulting in more reliable depth estimates for distant areas. We further propose a novel reconstruction loss that relies on both passive and active frames, giving a valuable supervision signal in texture-less areas. We show through several ablation studies that all these losses and inputs contribute to the final result.
In addition, we contribute two datasets (synthetic and real) for active stereo depth completion and prediction. We believe this to be an important contribution due to the non-existence of active stereo datasets with ground truth in the community, and we hope that this will attract further research on this topic.
\section{Related Work}\label{sec:related}
\vspace{-0.1cm}
\textbf{Learning-based active stereo} has received limited research attention in recent years. Prior to the deep learning era, frameworks for learning embeddings in which matching can be performed more efficiently were explored~\cite{Fanello17iccv}, together with direct mappings from pixel intensities to depth~\cite{Fanello14tog}. These methods fail in general textureless scenes due to shallow architectures and local optimisation schemes. More recently, {\it ActiveStereoNet}~\cite{Zhang18eccv} proposed a self-supervised stereo method that estimates depth directly from the infrared (IR) images with active illumination. Their main contribution lies in the new weighted local contrast normalisation loss, which extends the photometric loss to handle images with active illumination. However, we find that this loss experiences convergence issues if the active pattern is not strong enough, which is often the case in well-lit rooms and outdoor environments. Similarly, \cite{Riegler19cvpr} proposes a learnable pipeline for monocular depth prediction with active illumination, which uses geometric and disparity losses in addition to photometric consistency \update{between an image and a reference pattern}. In contrast to these two methods, we hypothesise that depth completion can be faster and more reliable than depth estimation.
Learning-based depth completion methods can be divided into \textbf{supervised}~\cite{Ma17icra,Senushkin20arxiv} and \textbf{self-supervised} methods~\cite{Godard19iccv,Tiwari20eccv,Wong20icra}. In this work, we focus on self-supervised methods. Self-supervised depth completion and depth prediction methods rely on the \textit{photometric} loss~\cite{Zhou17cvpr}, which measures the difference in image intensities between a reference image and an image warped into this reference frame. However, this loss is problematic in poorly textured areas due to the lack of consistent geometry, and it does not work well for spatially distant frames due to large changes in appearance. To address these shortcomings, some works such as~\cite{Zhan19icra} introduce multi-view photometric and depth-normal consistency during training for depth prediction problems. Some recent batch approaches like~\cite{Tiwari20eccv} make use of Structure from Motion~(SfM) or visual SLAM frameworks to produce dense and geometrically consistent depth estimates for monocular videos. Sinha {\it et al. }~\cite{Sinha20eccv} proposed a supervised approach for depth prediction that first predicts a sparse set of 3D points from multiple views, which are later densified by an encoder-decoder architecture that fuses depth and image features. In this work, we leverage a feature-based visual-inertial SLAM system to produce motion estimates and accurate (but sparse) 3D landmarks to self-supervise a convolutional neural network for depth completion for active stereo systems.
\textbf{Learning-based depth completion} has been shown to attain a higher level of robustness and accuracy than monocular depth prediction, which is inherently ambiguous and unreliable~\cite{Ma19icra}. Most work has focused on completing sparse LiDAR images, either considering guidance from RGB images~\cite{Eldesokey20pami,Ma19icra,Ma17icra} or without any guidance~\cite{Eldesokey20cvpr,Uhrig17ic3dv}. Uhrig {\it et al. }~\cite{Uhrig17ic3dv} found that sparse convolutions that explicitly consider the sparse nature of LiDAR data perform better than standard convolutions. Eldesokey {\it et al. } expand on these findings and show that normalised convolutions can propagate binary~\cite{Eldesokey20pami} or continuous learned~\cite{Eldesokey20cvpr} confidence through the network. Another line of depth completion methods densifies the sparse depth prior to feeding it as input to a refinement network. \cite{Wong20icra} use scaffolding to densify sparse depth for a visual odometry system, whereas \cite{chen18icvpr} use nearest-neighbour upsampling and a heatmap of the locations of the sparse landmarks. These methods are well-suited for sparse LiDAR data that is rather uniformly distributed across the image, but add little value to the semi-dense depth predictions of active stereo systems, which are dense in some regions but have large holes in others.
\iffalse
\begin{itemize}
\item Supervised passive Depth Completion
\item Sparse and Noisy {LiDAR} Completion with {RGB} Guidance and Uncertainty \cite{VanGansbeke19imva}
\item Sparsity invariant CNNs \cite{uhrig2017ic3dv} suggest sparse convolutions that explicitly consider the sparse nature of LiDAR data. The semi-dense depth produced by RealSense is denser, but has larger holes than the LiDAR data, which makes this line of research less relevant for our application.
\item \cite{ma2017icra} find that sparse points attain higher level of robustness and accuracy as monocular images alone is inherently ambiguous and unreliable.
\item \cite{xiang20arxiv} project into 3D and densify point cloud. They then project back to 2D where they refine with RGB guidance.
\item deep depth completion of a single RGB-D Image \cite{zhang18cvpr} estimates surface normals and occlusion boundaries as an intermediate step before global optimization.
\item guided convolutions \cite{tang19corr}
\item normalized convolutions \cite{eldesokey18bmvc} for sparse depth regression, normalized convolutions for guided depth completion \cite{eldesokey2018pami}
\item \cite{eldesokey2020cvpr} normalized convolution with learned confidence
\item NLSPN \cite{park2020eccv}
\item Self-supervised Passive Depth Completion
\item We take inspiration from the losses presented in Monodepth2~\cite{Godard19iccv}, namely the min loss and the multiscale predictions.
\item Sparse features as input/supervision
\item DELTAS \cite{Sinha20eccv} densification network
\item Self supervised Sparse to Dense \cite{ma19icra} photometric loss, smoothness loss and loss on sparse features (input - prediction)
\item Pseudo RGB-D for Self-improving Monocular SLAM and Depth Prediction\cite{Tiwari20eccv} uses a combination of the photometric loss, which gives good results for small baselines where the photometric consistency holds, and SLAM points, which give reliable information for larger baselines. We take inspiration from the use of SLAM points in the loss, but extend it to include these sparse points in the input as well.
\item Sensor fusion
\item SelectFusion \cite{chen19corr} general architecture to fuse different sensor modalities. Suggest different learned weightings that can choose which sensor to trust in a given scenario.
\end{itemize}
\cite{Popovic20arxiv},\cite{Luo20siggraph},
\fi
\section{Introduction}
Convolutional neural networks (CNNs) have been widely used in various computer vision tasks, such as image classification~\cite{resnet}, object detection~\cite{Ren2015Faster}, and visual segmentation~\cite{long2015fully}. These networks are often of heavy design, with massive numbers of parameters and high computational costs, and cannot be directly deployed on portable devices without model compression techniques, \eg pruning~\cite{deepcompression}, knowledge distillation~\cite{distill}, compact model design~\cite{mobilenet,wang2018learning}, and quantization~\cite{xnor,dorefa}.
Among these, 1-bit quantization has recently received great attention; it represents the weights and activations of the network using only two values, \eg $-1$ and $+1$. Binarized networks can thus be efficiently applied in a series of real-world applications (\eg cameras and mobile phones). Nevertheless, the performance of binary neural networks (BNNs) is still far worse than that of their original models. Figure~\ref{bnn} summarizes the performance of state-of-the-art binarization methods~\cite{bnn,abc,xnor,dorefa,bi-real,PCNN} on the ImageNet benchmark~\cite{imagenet}, including XNOR-Net~\cite{xnor}, Bi-Real Net~\cite{bi-real}, PCNN~\cite{PCNN}, \etc. Although these works have made tremendous efforts to enhance the performance of BNNs, the highest top-1 accuracy, obtained by PCNN~\cite{PCNN}, is still about $12.0\%$ lower than that of the baseline ResNet-18~\cite{resnet}.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./fig/binary.pdf}
\vspace{-0.5em}
\caption{Performance of state-of-the-art methods for binarizing ResNet-18 on the ImageNet dataset.}
\label{bnn}
\vspace{-1.5em}
\end{figure}
The severe accuracy drop shown in Figure~\ref{bnn} greatly limits the practicality of BNNs, considering that a number of computer vision tasks have very high precision requirements, such as face recognition~\cite{deepid} and person re-identification~\cite{han2019attribute}. The main reason is that the discrimination ability of binary features cannot match that of full-precision features of the same dimensionality. Therefore, it is necessary to find a trade-off that establishes compact binary networks of acceptable model size by increasing the number of channels in each convolutional layer. Motivated by the recent progress in neural architecture search (NAS~\cite{nas,evonas,wang2017towards}), we propose to appropriately modify the channel numbers of binarized networks and search for a new architecture with different channel numbers but high precision. In practice, the expansion ratios of all layers in the desired binary network are encoded to form the search space, and an evolutionary algorithm is utilized to effectively find the lower bound of BNNs that achieves the same performance as their full-precision versions.
We conduct experiments on the CIFAR and ImageNet datasets using the VGGNet~\cite{vggnet} and ResNet~\cite{resnet} architectures. Results on these benchmarks show that the proposed approach is able to find excellent binary neural architectures that obtain high precision at as low a computational cost as possible.
\section{Approach}
\paragraph{Binarization Method.}
Following the widely used DoReFa-Net~\cite{dorefa}, in a binary layer the floating-point weights $\mathbf w$ are approximated by binary weights $\mathbf w_{b}$ and a floating-point scalar, while the floating-point activations $\mathbf x$ are represented by binary values $\mathbf x_{b}$. The feed-forward pass in DoReFa-Net is defined as:
\begin{equation}
\begin{aligned}
\mathbf w_{b} &= \text{sign}(\mathbf w) \times E(|\mathbf w|), \\
\mathbf x_{b} &= \text{round}(\text{clip}(\mathbf x,0,1)),
\end{aligned}
\end{equation}
where $E(|\cdot|)$ denotes the mean of the absolute values. In the back-propagation process, we adopt the ``Straight-Through Estimator''~\cite{ste} to estimate the corresponding gradients. During quantization, we restrict the weights and activations of all convolutional layers and fully-connected layers to 1-bit, except for the first and last layers, following existing works~\cite{dorefa,bi-real}.
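A minimal PyTorch-style sketch of this forward/backward behavior is given below; it is our illustrative reading of Eq.~(1) and the straight-through estimator, not the authors' released code.
\begin{verbatim}
import torch

class BinarizeWeights(torch.autograd.Function):
    # w_b = sign(w) * E(|w|); gradients are passed straight through
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w) * w.abs().mean()

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

class BinarizeActivations(torch.autograd.Function):
    # x_b = round(clip(x, 0, 1)); the STE zeroes gradients outside [0, 1]
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.clamp(x, 0, 1).round()

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return grad_out * ((x >= 0) & (x <= 1)).float()
\end{verbatim}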
Such extreme binary quantization brings enormous computational acceleration and memory reduction. However, most state-of-the-art binary networks cannot match the accuracy of their full-precision counterparts. To pursue this goal, the uniform width expansion proposed in WRPN~\cite{wrpn} widens all layers of multi-bit quantization networks using a single hyper-parameter.
Although widened binary networks can obtain acceptable performance, such a uniform expansion strategy obviously increases the required memory and computational complexity; \eg a binary network expanded $4\times$ is $16\times$ larger than the original one. In fact, deep neural architectures often contain strong redundancy, so we do not need to expand all layers to achieve the desired performance. Thus, we formulate a binary neural architecture search problem and utilize an evolutionary algorithm to search for optimal architectures.
\paragraph{Search Space.}
In the search space, we only consider the network width, \ie the number of channels in each layer. For a given network architecture with $n$ layers, we define $\mathbf a \in \mathbf R^{n}$ to encode the expansion-ratio hyper-parameter of each layer. Our goal is to search for an $\mathbf a$ that yields higher accuracy with fewer FLOPs. All other hyper-parameters and network settings, such as stride, kernel size, and layer order, remain the same as in the original full-precision models.
In the uniform width expansion experiments shown in Table~\ref{resnet18}, we observe that by expanding the channels by only $4$ times, binary neural networks obtain performance comparable to that of their full-precision counterparts on the ImageNet classification task. We therefore take $4$ as the empirical upper bound of the expansion ratio needed to reach full-precision accuracy, set it as the largest expansion ratio, and use smaller ratios to expand or even reduce the channels. In practice, each entry of $\mathbf a$ takes one of $6$ candidate expansion ratios, defined as follows:
\begin{equation}
\mathbf a = [a_1,...,a_n],\quad\forall\;\; a_i \in\{0.25,0.5,1,2,3,4\}.
\label{ratiocode}
\end{equation}
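As an illustration, a candidate architecture can be materialized from a ratio code as in the sketch below; the base channel counts and the rounding policy are our assumptions, not prescribed by the method.
\begin{verbatim}
import random

RATIOS = [0.25, 0.5, 1, 2, 3, 4]  # candidates in Eq. (2)

def random_code(n_layers):
    # sample a random expansion-ratio code a = [a_1, ..., a_n]
    return [random.choice(RATIOS) for _ in range(n_layers)]

def widen(base_channels, code):
    # scale the per-layer channel counts of the base network
    return [max(1, round(c * a)) for c, a in zip(base_channels, code)]

# e.g. four 64-channel layers of a ResNet-18 stage (illustrative)
print(widen([64, 64, 64, 64], random_code(4)))
\end{verbatim}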
\paragraph{Search Algorithm.}
As discussed above, we aim to search for an optimal architecture with expansion-ratio set $\mathbf a^{\ast}$ that makes the accuracy of the binarized network similar to that of its full-precision model with as few parameters and floating-point operations (FLOPs) as possible. The overall optimization can therefore be described as:
\begin{equation}
\begin{aligned}
\max_{\mathbf{a}}\quad& f(\mathbf w^{\ast}(\mathbf a),\mathbf a),\\
\text{s.t.} \quad & \mathbf w^{\ast}=\arg\min_{\mathbf w}\mathcal{L}_{train}(\mathbf w, \mathbf a),
\end{aligned}
\end{equation}
where $f(\cdot)$ is the \emph{fitness} function of the evolutionary algorithm, $\mathcal{L}_{train}$ is the loss on the training set, and $\mathbf w^{\ast}(\mathbf a)$ are the trained weights corresponding to the expansion-ratio set $\mathbf a$. We first find an optimal $\mathbf a^{\ast}$ through the evolutionary algorithm on a training subset, and then train the corresponding binary network on the full training set to obtain the final model.
Specifically, in every generation during evolution we maintain a population of $K$ individuals, \ie $\{\mathbf a_1,...,\mathbf a_K\}$, each of which encodes a binary neural architecture according to an expansion-ratio code satisfying Eq.~\ref{ratiocode}. These individuals are continuously updated with pre-designed operations (\eg crossover and mutation) to achieve greater fitness. Here we have two objectives: high performance on the specific task, \eg classification accuracy, and low computational cost, \eg FLOPs. Thus, the fitness $f(\mathbf a_k)$ of an individual $\mathbf a_k$ is defined as:
\begin{equation}
\begin{aligned}
f(\mathbf a_k) &= \max(\text{Acc} - \lambda \times \text{FLOPs},\, 0),
\end{aligned}
\label{fitness}
\end{equation}
where $\text{Acc}$ and $\text{FLOPs}$ are the Top-1 validation accuracy and FLOPs of the corresponding widened networks of the individual $\mathbf a_k$, $\lambda$ is the trade-off parameter.
Compared with full-precision layers, the FLOPs of binary layers are divided by $64$, as suggested in Bi-Real Net~\cite{bi-real}. When computing the fitness in Eq.~\ref{fitness}, we divide the FLOPs of each candidate model by the FLOPs of the original binary network so that the accuracy and FLOPs terms have the same order of magnitude. After defining the search space and fitness function, the evolutionary algorithm can effectively select individuals with higher fitness during the evolution process until convergence.
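A schematic version of the resulting search loop is sketched below. The crossover and mutation operators shown here are generic choices, and \texttt{train\_and\_evaluate} and \texttt{count\_flops} are hypothetical helpers standing for the brief candidate training and the FLOPs accounting described above.
\begin{verbatim}
import random

RATIOS = [0.25, 0.5, 1, 2, 3, 4]
LAMBDA = 4  # trade-off parameter in Eq. (4)

def fitness(acc, flops, flops_base):
    # Eq. (4), with FLOPs normalized by the original binary network
    return max(acc - LAMBDA * flops / flops_base, 0)

def evolve(population, flops_base, generations=50):
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda a: fitness(train_and_evaluate(a),
                                              count_flops(a), flops_base),
                        reverse=True)
        parents = scored[:len(scored) // 2]  # keep the fitter half
        children = []
        while len(parents) + len(children) < len(population):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(p1))   # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = random.randrange(len(child))     # single-gene mutation
            child[i] = random.choice(RATIOS)
            children.append(child)
        population = parents + children
    return population[0]  # fittest individual of the last scoring
\end{verbatim}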
\section{Experiments}
In this section, we conduct experiments to explore the empirical width lower bound of each layer in binary neural networks on several benchmark datasets, \ie CIFAR-10~\cite{cifar} and ImageNet~\cite{imagenet}. We use two widely used network structures as baselines: VGG-small~\cite{hwgq} and ResNet-18~\cite{resnet}.
\subsection{Experimental Settings}
For the evolutionary search, we search for 50 generations with 32 individuals per generation. We train each candidate model for 10 epochs on the training set and use its validation accuracy as the accuracy term in Eq.~\ref{fitness}. We set the trade-off parameter $\lambda$ to $4$ to keep the accuracy and FLOPs terms comparable.
\paragraph{CIFAR-10}
On the CIFAR-10 dataset, the search takes about 12 hours on 8 V100 GPUs. We then train for 200 epochs on the full CIFAR-10 training set. The learning rate starts at 0.1 and is multiplied by 0.1 at epochs 60, 120, and 180. We follow the same hyper-parameter setup as in~\cite{hwgq}.
\paragraph{ImageNet}
As the ImageNet ILSVRC2012 dataset is very large, we do not use the whole training set in the evolution process. Instead, we randomly sample a subset of 50,000 images from the original full training set, with 50 images for each of the 1000 classes; the search takes about 180 hours on 8 V100 GPUs. We then train for 150 epochs to check whether the searched models reach full-precision accuracy. The learning rate starts at 0.1 and decays by 0.1 at epochs 50, 100, and 135. We follow the same hyper-parameter setup as in~\cite{resnet}.
\paragraph{Initialization}
When evaluating each candidate, we train it for only 10 epochs on a small subset of the ImageNet dataset, so the accuracy of the candidate models is especially low, which makes it difficult to distinguish better models from worse ones. Therefore, we train the model uniformly widened by $4\times$ on the subset for 150 epochs and use it to initialize all candidate models by simply taking the first channels of the corresponding weight tensors.
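The sketch below shows our reading of this initialization for one convolutional layer: the candidate simply inherits the first output and input channels of the $4\times$-widened reference weights. The tensor shapes are illustrative.
\begin{verbatim}
import torch

def slice_conv_weight(super_w, out_c, in_c):
    # inherit the first out_c output and in_c input channels of the
    # 4x-widened reference model's (O, I, kH, kW) weight tensor
    return super_w[:out_c, :in_c].clone()

super_w = torch.randn(256, 256, 3, 3)        # a 4x-widened 3x3 layer
cand_w = slice_conv_weight(super_w, 96, 64)  # 96-filter candidate layer
print(cand_w.shape)                          # torch.Size([96, 64, 3, 3])
\end{verbatim}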
\begin{table}[h]
\centering
\small
\caption{Comparison of widened binary networks of VGG-small architecture on CIFAR-10.}
\renewcommand\arraystretch{1.0}
\begin{tabular}{|c||c|c|c|c|}
\hline
Models&FLOPs&Speedup&Memory&Top-1(\%)\\
\hline\hline
Full-Precision&608M&-&149M&\textbf{93.48}\\
\hline
Uniform-1$\times$&13.2M&46.1$\times$&7.3M&90.24\\
Uniform-2$\times$&45.3M&13.4$\times$&23.7M&91.65\\
Uniform-3$\times$&96.2M&6.3$\times$&49.3M& 91.87\\
Uniform-4$\times$&166M&3.7$\times$&84.1M&92.56\\
\hline
VGG-Auto-A&11.3M&53.6$\times$&5.1M&92.17\\
VGG-Auto-B&59.3M&10.3$\times$&23.4M&\textbf{93.06}\\
\hline
\end{tabular}
\label{vgg-small}
\end{table}
\subsection{Results and Analysis}
\paragraph{VGG-small on CIFAR-10}
VGG-small~\cite{hwgq} is a variant of the original VGG-Net~\cite{vggnet} designed for CIFAR-10. We compare the searched models, \ie VGG-Auto-A and VGG-Auto-B, with uniformly widened models in Table~\ref{vgg-small}. The standard binarized VGG-small decreases the accuracy by only about $3\%$. As we uniformly increase the width, the accuracy increases accordingly. However, even when widened $4\times$, the binarized network still does not reach the accuracy of the full-precision network. Our VGG-Auto-B model achieves higher accuracy than Uniform-4$\times$ with about 1/4 of the FLOPs and memory, and has the smallest accuracy gap to the full-precision model. Although our VGG-Auto-A model has even fewer channels than the original Uniform-1$\times$ model, it achieves about 2\% higher accuracy. This phenomenon confirms our original intention in designing the search space: some layers need to be expanded while others need to be narrowed.
\begin{table}[h]
\centering
\small
\caption{Comparison of widened binary networks and other binarization methods of ResNet-18 architecture on ImageNet dataset.}
\renewcommand\arraystretch{1.0}
\begin{tabular}{|c||c|c|c|c|}
\hline
Models&FLOPs&Speedup&Top-1(\%)&Top-5(\%)\\
\hline\hline
Full-Precision&1820M&-&\textbf{69.6}&89.2\\
\hline
PCNN&169M&10.8$\times$&57.3&80.0\\
ABC$\{5/3\}$&520M&3.5$\times$&62.5&84.2\\
ABC$\{5/3\}$&785M&2.3$\times$&65.0&85.9\\
\hline
Uniform-1$\times$&149M&12.2$\times$&52.77&76.85\\
Uniform-2$\times$&352M&5.2$\times$&64.0&85.45\\
Uniform-3$\times$&607M&3.0$\times$&68.51& 88.25\\
Uniform-4$\times$&915M&2.0$\times$&70.35&89.27\\
\hline
Res18-Auto-A&495M&3.7$\times$&68.64&88.46\\
Res18-Auto-B&660M&2.8$\times$&\textbf{69.65}&89.08\\
\hline
\end{tabular}
\label{resnet18}
\end{table}
\paragraph{ResNet-18 on ImageNet}
We also conduct experiments on the large-scale ImageNet dataset. In the uniform expansion experiments, the top-1 accuracy gradually approaches that of the original full-precision model as the width increases. From the results in Table~\ref{resnet18}, our Res18-Auto-B binarized model obtains the same performance as the full-precision model with less than 1/3 of the computational cost. With similar FLOPs, Res18-Auto-B outperforms Uniform-3$\times$ by 1.1\% in top-1 accuracy and 0.8\% in top-5 accuracy. Our evolutionary search thus finds more accurate widened models with as few FLOPs as possible.
We also compare our models with state-of-the-art binarization methods in Table~\ref{resnet18}. PCNN~\cite{PCNN} does not quantize the downsample layers and adds additional shortcut connections, which inevitably increases end-to-end inference time. Compared with ABC-Net with multiple bases, where $5/3$ denotes 5 binary bases for the weights and 3 bases for the activations, our Uniform and Automatic models consistently perform better by a large margin.
\begin{figure}[htb]
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=1\linewidth]{./fig/arch.pdf}
\end{tabular}
\caption{Number of channels in each layer of widened ResNet-18.}
\label{resnet18-vis}
\end{figure}
\paragraph{Searched Architecture}
To further analyze the searched network architecture, we show the number of output channels in each layer of two binary networks with similar accuracy, \ie Res18-Auto-A and Uniform-3$\times$ in Table~\ref{resnet18}. From Fig.~\ref{resnet18-vis}, we observe that, compared with Uniform-3$\times$, the searched architecture Res18-Auto-A has fewer output channels in the first, second, and last stages. In addition, Res18-Auto-A needs more channels for the middle feature maps inside each block. These observations may inspire the design of blocks or architectures for more efficient convolutional neural networks.
\section{Conclusion}
To establish binary neural networks with higher precision and lower computational cost, this paper studies the binary neural architecture search problem. Based on an empirical study of uniform width expansion, we define a novel search space and utilize an evolutionary algorithm to adjust the number of channels in each convolutional layer after binarization. Experiments on benchmark datasets and neural architectures show that the proposed method produces binary networks with an acceptable parameter increase and the same performance as the full-precision original network.
{\small
\nocite{han2018autoencoder}
\bibliographystyle{ieee}
\section{Introduction}\label{sec.I}
Chaotic billiards are fundamental paradigms in statistical physics and
nonlinear dynamics. By connecting dynamics with geometry, billiards serve
as models to address numerous questions ranging from the foundations of the ergodic
hypothesis~\cite{sinai2,stadium2} and the description of shell effects~\cite{brack} to the design of microcavity lasers \cite{hui}
and microwave resonators~\cite{ima,stoeckmann}, among other applications~\cite{stoeckmann}.
A salient feature of billiard systems is that simple geometries, such as those in Fig.~\ref{fig.compa}, suffice to give rise to a
rich variety of dynamical behavior observed in typical Hamiltonian systems. But as previously observed for specific chaotic
billiards, simple geometries may also lead to the existence of the so-called {\em bouncing-ball orbits}: one-parameter
families of periodic orbits exhibiting perpendicular motion between parallel
walls.
Theoretical and experimental work on the Sinai
[Fig.~\ref{fig.compa}(c)] and Bunimovich stadium [Fig.~\ref{fig.compa}(d)] billiards
have shown that such orbits have a major influence on transport properties,
decay of correlations, and spectral properties~\cite{stoeckmann,parallel,stadium,transport}.
This is so because, contrary to the other orbits embedded in the chaotic
component of the phase space,
bouncing-ball orbits are only marginally unstable (i.e., perturbations grow only linearly in time).
In general, marginally unstable periodic orbits (MUPOs) can be regarded as a
source of
regular behavior that masks strong chaotic properties. However, MUPOs are not structurally stable and may be destroyed by small changes in the parameters of the system. Therefore, MUPOs are considered to be non-generic and it has long been assumed that they could exist only for very special systems, like billiards with parallel walls.
Contrary to this expectation, in this paper we show that MUPOs are prevalent in a large
class of billiard systems. The starting point of our analysis is the observation that
many of the most widely studied chaotic billiards consist of {\it local} perturbations of an integrable
billiard. For concrete examples, consider the chaotic billiards shown in the right part of
Fig.~\ref{fig.compa}. All these billiards can be obtained by re-defining the
dynamics in the gray region of the integrable billiards in
Figs.~\ref{fig.compa}(a)-(c), e.g., by introducing a scatterer. It can
be shown that any orbit ({\it i}) lying inside the chaotic component and ({\it ii})
not interacting
with the introduced scatterers will be a MUPO.
Although bouncing-ball orbits evidently satisfy these conditions
in the billiards of Figs.~\ref{fig.compa}(d)-(h),
the existence of such orbits is far from clear in general.
Here we use geometric and analytical arguments to demonstrate the widespread occurrence of MUPOs.
Specifically, using circular-like billiards as model
systems --- such as those in Figs.~\ref{fig.compa}(f)-(g) --- we show that
{\em infinitely} many families of MUPOs exist for almost all parameter choices of
the system. We discuss the impact of these structures on the dynamics of chaotic orbits
as well as the experimental observation of MUPOs in the quantum spectrum of
microwave annular billiards.
\begin{figure}[!ht]
\includegraphics[width=1\columnwidth]{fig1.eps}
\caption{ Adding {\em local} perturbations to integrable billiards, as
those shown in (a)-(c), one obtains frequently studied chaotic billiards, such as
those shown in (d)-(h). The gray regions of the (a) rectangular, (b)
circular, and (c) elliptical billiards are defined in such a way
that chaotic motion is possible ($a<R$ in (b) is the radius of the
smallest circle that circumscribes all scatterers).
MUPOs are shown here to exist in billiards such as (d) Sinai~\cite{sinai2}, (e)
stadium~\cite{stadium2}, (f) annular~\cite{saito}, (g) mushroom~\cite{bunimovich}, and (h) elliptical with scatterers~\cite{note2}.}\label{fig.compa}
\end{figure}
The {\it local} perturbations described above are typical for
billiard systems and differ fundamentally from the {\em global} perturbations
considered in smooth Hamiltonian systems.
In the latter, the KAM theory shows that most quasi-periodic orbits of the
integrable system survive the
perturbation, while all periodic orbits with marginal stability disappear.
Quite the opposite happens in the former case: a large set of quasi-periodic orbits disappears
but there are families of periodic orbits with marginal stability that survive the perturbation
by ``avoiding'' interaction with the localized scatterers.
These orbits give rise to families of MUPOs detached from regular
regions, which were previously observed in billiards with parallel
walls~\cite{parallel}, and for specific parameters of the mushroom
billiard~\cite{nossos,nossos2}. Here we consider {\em generic} control parameters of a wide
class of systems where we characterize the MUPOs both theoretically and
experimentally.
The paper is organized as follows. In Sec.~\ref{sec.II} we perform a detailed
analysis of the existence of MUPOs in the annular billiard, a representative
example of the class of billiards we are interested in. In Sec.~\ref{sec.III} we
show the existence of an infinite number of different families of MUPOs in annular
and other circular-like billiards. Our experimental results on microwave
cavities appear in Sec.~\ref{sec.IV}. Finally, our conclusions are summarized
in Sec.~\ref{sec.V}.
\section{Annular billiard}\label{sec.II}
Annular billiards are defined by two eccentric circles, as shown in
Fig.~\ref{fig.configuration}(a). For a fixed radius $R=1$ of the external circle,
the control parameters are the radius $r$ and displacement $\delta<1-r$ of the internal
circle, which serves as a scatterer.
The phase space shown in
Fig.~\ref{fig.configuration}(b) is obtained by plotting the position $\phi\in[0,2\pi]$ of the collision of
the particle with the external circumference and the sine of the angle~$\theta\in[-\pi/2,\pi/2]$ with the normal
direction right after the collision.
In this system, periodic orbits of period~$q$ and
rotation number~$\eta$ that collide only with
the external circumference define {\em star polygons} of type~$(q,\eta)$, where the
integers $q$ and $\eta$
are coprime and $\eta \leq q/2$.
A star polygon of type~$(5,2)$ is shown in Fig.~\ref{fig.compa}(b) and star polygons of
types~$(2,1)$ and~$(5,1)$ are shown in Fig.~\ref{fig.configuration}.
Each star polygon belongs to a family of orbits of the same type, which is
parameterized by~$\phi$ and has a
fixed collision angle
$\sin(\theta_{sp})=\cos(\pi\eta/q)$.
For this system, the conditions ({\it i}) and ({\it ii}) for the existence of
MUPOs mentioned in Sec.~\ref{sec.I} translate into crossing the
circle of radius $a=r+\delta$
without colliding with the scatterer.
Under these conditions, the orbits are embedded into the chaotic sea but are only marginally unstable (both eigenvalues
of the Jacobian matrix equal $1$). The two orbits shown in Fig.~\ref{fig.configuration} satisfy these conditions
and hence are MUPOs. We use MUPOs~$(q,\eta)$ to denote the entire
one-parameter {\em family}
of orbits corresponding
to star polygons~$(q,\eta)$ that satisfy conditions ({\it i}) and ({\it ii}). Note that these orbits are necessarily
periodic because non-periodic orbits will either collide with the scatterer or
form a regular region for $|\sin(\theta)|>a$, called
whispering gallery. A collision with the scatterer happens
whenever~\cite{saito}
\begin{equation}\label{eq.hitting}
|\sin(\theta) -\delta\sin(\theta-\phi)| < r.
\end{equation}
In Fig.~\ref{fig.configuration}b this condition is satisfied between the
dashed lines. In the following we calculate the geometrical conditions for the
existence of MUPOs and we demonstrate that typically an infinite number of
families~$q,\eta$ satisfy these conditions.
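These conditions are easy to check numerically. The following sketch (in Python, with $R=1$) sweeps the launch angle $\phi_0$ of a counter-clockwise star polygon $(q,\eta)$ and applies the collision condition~(\ref{eq.hitting}) at each of its $q$ vertices, demanding that every chord keeps a distance of at least $r$ from the scatterer center; the surviving fraction is the measure $w$ introduced below. For the parameters of Fig.~\ref{fig.configuration} it confirms that $(2,1)$ and $(5,1)$ are MUPOs while $(3,1)$ is not.
\begin{verbatim}
import numpy as np

def mupo_measure(q, eta, r, delta, n=200000):
    # fraction of phi0 for which the (q, eta) star polygon crosses the
    # circle of radius a = r + delta but none of its chords hits the
    # scatterer; the clockwise family is mirror symmetric
    theta = np.arcsin(np.cos(np.pi * eta / q))
    if np.cos(np.pi * eta / q) >= r + delta:
        return 0.0  # whispering-gallery orbit, condition (i) fails
    phi0 = np.linspace(0, 2 * np.pi, n, endpoint=False)
    phis = phi0[:, None] + 2 * np.pi * eta * np.arange(q)[None, :] / q
    dist = np.abs(np.sin(theta) - delta * np.sin(theta - phis))
    return np.all(dist >= r, axis=1).mean()

for q, eta in [(2, 1), (5, 1), (3, 1)]:
    print((q, eta), mupo_measure(q, eta, r=0.35, delta=0.5))
\end{verbatim}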
\begin{figure}[!ht]
\includegraphics[width=0.9\columnwidth]{fig2a.eps}\\
\includegraphics[width=1\columnwidth]{fig2b.eps}
\caption{(Color online) Annular billiard for parameters $r=0.35$ and $\delta=0.5$:
(a) configuration space and (b) phase space. MUPOs correspond to the periodic
orbits that cross the circle of radius $a$ [dotted line in (a)] but that do
not hit the scatterer [region between the dashed lines in (b) in which
relation~(\ref{eq.hitting}) is satisfied].
The symbols {\tiny $\blacksquare$} and {\small $\blacktriangle$} indicate,
respectively, individual orbits belonging to MUPOs~$(2,1)$ and~$(5,1)$.}\label{fig.configuration}
\end{figure}
Consider MUPOs that encircle the scatterer from outside, like the
pentagon-MUPO~$(5,1)$ in Fig.~\ref{fig.configuration}. Conditions for the
existence of such {\em outer} MUPOs are obtained by
noting that every star polygon $(q,\eta)$ draws an inner regular $q$-sided
polygon, like the pentagon in Fig.~\ref{fig.compa}(b).
The radii $(d,D)$ of the inscribed and circumscribed circles of this inner
polygon are given by
$d= \cos(\pi \eta/q)$ and $D= d/\cos(\pi/q)$.
It follows that an orbit
of type~$(q,\eta)$ is an outer MUPO~$(q,\eta)$ if and only if
\begin{equation}\label{eq.ineq2}
\cos(\pi \frac{\eta}{q})< \cos( \pi \lambda) \; <\;
\frac{\cos(\pi\eta/q)}{\cos(\pi/q)}+r(1-\frac{1}{\cos(\pi/q)}),
\end{equation}
where $\cos(\pi \lambda) \equiv a=r+\delta$. A similar expression is obtained
for mushroom billiards~\cite{nossos,thesis}.
{\em Inner} MUPOs, like the diameter-MUPO~$(2,1)$ in Fig.~\ref{fig.configuration}, exist when
$$
\delta >
\frac{r}{\cos(\pi (1-\eta)/q)}+\cos(\pi \frac{\eta}{q})+\sin(\pi \frac{\eta}{q})
\tan(\pi \frac{1-\eta}{q}).
$$
{\em Mixed} inner-outer MUPOs may also exist for~$\eta\geq2$. Families of inner, outer, and
mixed MUPOs~$(5,2)$ are illustrated in Fig.~\ref{fig.new}.
\begin{figure}[!ht]
\includegraphics[width=1\columnwidth]{fig3.eps}
\caption{(Color Online) Size~$w$ of the families of MUPOs~$(5,2)$ in the annular billiard with~$r=0.12$: (a)
outer MUPOs for~$\delta=0.2$, (b) mixed inner-outer MUPO for~$\delta=0.5$,
and (c) inner MUPOs for~$\delta=0.8$. All three kinds of MUPOs
may coexist for a fixed~$\delta$. The size~$w$ of the families of
MUPOs in Eq.~(\ref{eq.w}) is given by the length of the external arcs.
MUPOs~$(5,2)$ outside of these regions do not exist since they collide with the inner scatterer.}\label{fig.new}
\end{figure}
Unlike the regular regions around
stable periodic orbits, MUPOs have zero Lebesgue measure in the phase
space. The relevant measure is therefore the size of the families of MUPOs, given by the length~$w$
of the set of angles~$\phi$ (normalized by 2$\pi$) for which an orbit with
a given $(q,\eta)$ exists. In Fig.~\ref{fig.configuration}, $w$ is proportional to the length of
the external arcs of circumference in (a) and to the horizontal lines in (b). For orbits inside the whispering
gallery one would have~$w=1$. For a given family of MUPOs~$(q,\eta)$ we calculate~$w$ as
\begin{equation}\label{eq.w}
w=w_{outer}+w_{inner}+w_{mixed} < 1,
\end{equation}
where $w_{outer}=1-q \beta^-/\pi$ and $w_{inner}= q \beta^+/\pi$
with $\cos(\beta^\pm)=[\cos(\pi \eta/q)\pm r]/\delta$. For the
MUPOs~$(q,\eta)$ investigated in Sec.~\ref{sec.IV} below, $w_{mixed}=0$. Figure~\ref{fig.new}
shows the geometrical representation of the terms in Eq.~(\ref{eq.w}).
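For the cases with $w_{mixed}=0$, Eq.~(\ref{eq.w}) can be evaluated directly, as in the following sketch; the guards on $\cos(\beta^\pm)$ also return $w=0$ for whispering-gallery orbits and for families whose chords always hit the scatterer. The values reproduce the numerical sweep of Sec.~\ref{sec.II} for the parameters of Fig.~\ref{fig.configuration}.
\begin{verbatim}
import numpy as np

def w_family(q, eta, r, delta):
    # Eq. (3) with w_mixed = 0; exact when the excluded arcs of the
    # individual vertices do not overlap partially
    d, w = np.cos(np.pi * eta / q), 0.0
    if -1 <= (d - r) / delta <= 1:   # outer contribution
        w += max(1 - q * np.arccos((d - r) / delta) / np.pi, 0)
    if -1 <= (d + r) / delta <= 1:   # inner contribution
        w += q * np.arccos((d + r) / delta) / np.pi
    return w

print(w_family(2, 1, r=0.35, delta=0.5))  # ~0.506 (diameter)
print(w_family(5, 1, r=0.35, delta=0.5))  # ~0.353 (pentagon)
\end{verbatim}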
\section{Infinite families of MUPOs}\label{sec.III}
We now determine the number and values of the {\em different} families of
MUPOs $(q,\eta)$'s that exist in a given annular billiard~$(r,\delta)$. For
inner and mixed MUPOs, only a finite number of $(q,\eta)$'s
exist~\cite{note1}, which can be obtained
by inspection.
On the other hand, we show next that an infinite number of outer MUPOs~$(q,\eta)$ typically accumulate close to the whispering gallery.
Let~$\eta(q)$ denote the integer~$\eta$ for which
$\eta/q-\lambda$ is minimal and non-negative.
In the limit $q \rightarrow \infty$, where $(\frac{\eta(q)}{q} - \lambda) \rightarrow 0_+$, both inequalities~(\ref{eq.ineq2}) are satisfied if
\begin{equation}\label{eq.varepsilon}
\frac{\eta(q)}{q} - \lambda < \frac{a \pi}{2 \sqrt{1-r^2}} \frac{1}{q^2}.
\end{equation}
Essentially the same expression is obtained for mushroom
billiards~\cite{thesis} and the same scaling on~$q$ is expected in the case of
other circular-like billiards~\cite{note2}.
Optimal rational approximants
of~$\lambda=\arccos(a)/\pi$ for a fixed~$q$ are obtained by truncating the continued
fraction representation $\lambda =
\frac{1}{\alpha_1+\frac{1}{\alpha_2+...}}=[\alpha_1,\alpha_2,...]$, leading to
the convergent~$\eta'/q'$.
The irrational numbers $\lambda^*$ for which there exists one
integer~$\alpha_{\mathrm{max}}$ such that
$\alpha_i<\alpha_{\mathrm{max}}$, for all $i$,
are called {\em numbers of constant type}.
Numbers of constant type are
difficult to approximate by rational numbers and
there exist constants~$C_1,C_2$ such that~\cite{khinchin}
\begin{equation}\label{eq.c1c2}
\frac{C_1}{q^2} < \left| \frac{\eta'}{q'} - \lambda^* \right| < \frac{C_2}{q^2},
\end{equation}
for all convergents~$\eta'/q'$. Comparing the
inequalities~(\ref{eq.varepsilon}) and~(\ref{eq.c1c2}) we note the
same~$q^{-2}$ dependence.
Since the
convergents are the {\em best} approximants, the lower bound
in~(\ref{eq.c1c2}) is valid for all rational numbers. Therefore,
provided that $\lambda$ is a number of constant type, there are regions of the
control parameters [$a\pi/(2\sqrt{1-r^2})<C_1$ for
annular billiards] for which there exist only a finite number of
families of MUPOs.
The numbers of constant type are uncountable and dense in
the set of real numbers. They have zero Lebesgue measure, however, meaning
that with full probability~$\lambda$ belongs to the complementary
set of irrational numbers for which $C_2\rightarrow0$ in Eq.~(\ref{eq.c1c2}). Therefore, an
infinite number of MUPOs exist for almost all~$\lambda$ and hence for almost all
parameters~$(r,\delta)$.
The demonstration above can be used in circular-like
billiards with arbitrary inner scatterers~\cite{note2} to verify
whether the convergents $\eta'/q'$ of~$\lambda=\arccos(a)/\pi$ are MUPOs~$(q',\eta')$
[e.g., satisfy condition~(\ref{eq.ineq2}) in the case of annular billiards or
Eq.~(6) of Ref.~\cite{nosso.mushroom} in the case of mushrooms].
Typically, an infinite number of different families~$(q,\eta)$ can be found among the
convergents.
For the annular
billiard illustrated in Fig.~\ref{fig.configuration}, for instance, all odd convergents tested are MUPOs:
$(5,1),(11,2),(436,77),(1342,237),...$, while the MUPO~$(4,1)$ is not a
convergent.
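This test is straightforward to automate: compute the continued-fraction convergents of $\lambda=\arccos(a)/\pi$ and keep those lying above $\lambda$ within the bound of Eq.~(\ref{eq.varepsilon}), as in the sketch below (double precision limits the expansion to roughly the first dozen partial quotients).
\begin{verbatim}
from math import acos, pi
from fractions import Fraction

def convergents(x, n_terms=12):
    # continued-fraction convergents eta'/q' of 0 < x < 1
    cf, y = [], x
    for _ in range(n_terms):
        a0 = int(1 / y)
        cf.append(a0)
        y = 1 / y - a0
        h = Fraction(0)
        for term in reversed(cf):
            h = 1 / (term + h)
        yield h

r, delta = 0.35, 0.5
lam = acos(r + delta) / pi
for c in convergents(lam):
    q, eta = c.denominator, c.numerator
    # sufficient condition (4) for the outer-MUPO inequalities (2)
    if 0 <= eta / q - lam < (r + delta) * pi / (2 * (1 - r**2)**0.5 * q**2):
        print((q, eta))  # starts with (5, 1), (11, 2), ... (cf. the text)
\end{verbatim}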
\section{Experimental results}\label{sec.IV}
Having shown that MUPOs are abundant, we now study the impact of these structures
in quantum experiments. We use the equivalence between Schr\"{o}dinger's and Helmholtz's equations for flat
microwave cavities \cite{ima,stoeckmann} to investigate the effect of MUPOs
in quantum annular billiards. We show that MUPOs are detectable and play a
prominent role among the periodic orbits.
A microwave cavity with a radius of $12.5\,$cm, a height of $5\,$mm, and $4$
coupling antennas was used in the experiments. The inner scatterer
had a radius of $1.5\,$cm, leading to~$r=0.12$. The resonance
spectra were obtained using a vectorial network analyzer,
which measures the complex amplitude ratio of the input and output
microwave signals of the cavity. For each value of $\delta=0,\,0.08,
\cdots, 0.88$ we measured $10$ spectra up to $10\,$GHz with a resolution of
$100\,$kHz. Different antennas and antenna combinations were used to find as
many resonances as possible.
Close lying levels (e.g., split doublets) were detected as
one resonance only due to their finite width. However, since the
position of those doublets can be approximately calculated, a
second (\emph{not detected}) eigenvalue could be attributed to the
corresponding frequencies. We justify this procedure by using the
high precision data obtained with superconducting cavities in the
experiments described in Ref.~\cite{exp}: there the doublets could
be resolved, and we found that the length spectrum is stable under
small random shifts of one doublet partner, which allows us to
assume doublets to be degenerate. Finally, by comparing the number
of detected levels $N(f)$ below the frequency $f$ to the expected
number $N_\mathrm{Weyl}$ given by Weyl's formula \cite{brack}, we
checked that almost all eigenvalues in the considered part of the
spectrum have been found ($N \approx 150$ for each
$\delta$).
\begin{figure}
\includegraphics[width=1\columnwidth]{fig4.eps}
\caption{\label{fig_lsp} Experimental length spectrum of the annular billiard
for $\delta=0.48$. Peaks associated to four periodic orbits are indicated: the
shortest one is unstable, the diameter and triangle are MUPOs, and the square
is inside the regular region.}
\end{figure}
Performing a Fourier transform ($\mathcal{FT}$) of the level density
$\rho(f)=\frac{dN(f)}{df}$ we have computed the length spectrum
\begin{equation}
|\tilde{\rho}_{fluc}(x)|=|\mathcal{FT}\{\rho(k)-\rho_{\mathrm{Weyl}}(k)\}|\,,
\label{eq_lsp}
\end{equation}
where $k=2\pi f/c$. The classical periodic orbits manifest themselves as peaks located at
the corresponding orbit lengths. The length of the periodic orbit~$(q,\eta)$ is
given by~$x_{(q,\eta)}=2Rq\sin(\pi \eta/q)$. In particular, for
all~$\delta$'s we consider the peak heights~$y$ (strengths) of the
diameter ($x_{(2,1)}=0.5\,$m), triangular ($x_{(3,1)}=0.63\,$m),
and square ($x_{(4,1)}=0.71\,$m) orbits. The length spectrum
for $\delta=0.48$ is shown in Fig.~\ref{fig_lsp}, where we indicate
additionally the peak at $x=0.34\,$m related to an {\em unstable}
periodic orbit. Notice that this peak is much smaller than the
peaks associated with the MUPOs.
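The analysis leading to Fig.~\ref{fig_lsp} can be sketched as follows. The function below is a schematic reading of Eq.~(\ref{eq_lsp}): the level density is represented as a sum of delta peaks at the measured $k_n$, and the smooth Weyl part $\rho_{\mathrm{Weyl}}(k)=Ak/(2\pi)-L/(4\pi)$ is subtracted by numerical integration over the measured window; windowing and normalization details of the actual data analysis are not reproduced here.
\begin{verbatim}
import numpy as np

C = 299792458.0  # speed of light in m/s

def length_spectrum(freqs_hz, area, perim, x_grid):
    # |rho_fluc(x)| of Eq. (5): the FT of sum_n delta(k - k_n) gives
    # sum_n exp(i k_n x); the smooth Weyl density is subtracted
    k = 2 * np.pi * np.asarray(freqs_hz) / C
    kk = np.linspace(k.min(), k.max(), 4000)
    rho_w = area * kk / (2 * np.pi) - perim / (4 * np.pi)
    out = []
    for x in x_grid:
        peaks = np.sum(np.exp(1j * k * x))
        f = rho_w * np.exp(1j * kk * x)
        smooth = np.sum((f[1:] + f[:-1]) * np.diff(kk)) / 2.0
        out.append(abs(peaks - smooth))
    return np.array(out)

R = 0.125  # cavity radius in m; ideal orbit lengths 2 R q sin(pi eta/q)
for q, eta in [(2, 1), (3, 1), (4, 1)]:
    print((q, eta), 2 * R * q * np.sin(np.pi * eta / q))
\end{verbatim}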
\begin{figure}
\includegraphics[width=1\columnwidth]{fig5.eps}
\caption{\label{fig_comparison} (Color online) Semiclassical strengths
$S_{sc}$ (lines) compared to the
experimental strengths $y$ (symbols). $S_{sc}$ is given by
Eq.~(\ref{eq.ssc}) and expected from periodic orbit theory, while $y$ is
extracted from the length spectra, as shown in Fig.~\ref{fig_lsp}. Three orbits are considered: diameter
(dotted line and circles), triangle (solid line and triangles), and
square (dot-dashed line and squares). The horizontal lines (top) indicate the
values of~$\delta$ for which the corresponding MUPOs exist.}
\end{figure}
In periodic orbit theory, the strength of an orbit in a quantum
mechanical length spectrum is given by the amplitude of the
corresponding oscillatory term in a semiclassical periodic orbit summation.
We use the trace formula for integrable systems to obtain the orbit-dependent amplitudes
$\mathcal{A}=\nu\,\frac{\sin^{3/2}{(\pi\eta/q)}}{\sqrt{q}}\,,$
where $\nu=1$ for the diameter orbit and $\nu=2$ for all other
orbits~\cite{brack}. The expected strength in the case
of the MUPOs is
\begin{equation}\label{eq.ssc}
S_{sc}(q,\eta)=w\mathcal{A},
\end{equation}
where $w$ is the measure of the
entire family, given in Eq.~(\ref{eq.w}) and illustrated in Fig.~\ref{fig.new}.
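Combining Eqs.~(\ref{eq.w}) and~(\ref{eq.ssc}) gives the curves of Fig.~\ref{fig_comparison}, as in the sketch below, which restates the evaluation of $w$ from the sketch in Sec.~\ref{sec.II} so that the snippet is self-contained.
\begin{verbatim}
import numpy as np

def w_family(q, eta, r, delta):
    # Eq. (3) with w_mixed = 0, as in the earlier sketch
    d, w = np.cos(np.pi * eta / q), 0.0
    if -1 <= (d - r) / delta <= 1:
        w += max(1 - q * np.arccos((d - r) / delta) / np.pi, 0)
    if -1 <= (d + r) / delta <= 1:
        w += q * np.arccos((d + r) / delta) / np.pi
    return w

def strength(q, eta, r, delta):
    # S_sc = w * A with A = nu sin^(3/2)(pi eta/q) / sqrt(q), Eq. (6)
    nu = 1 if (q, eta) == (2, 1) else 2
    amp = nu * np.sin(np.pi * eta / q) ** 1.5 / np.sqrt(q)
    return w_family(q, eta, r, delta) * amp

# diameter, triangle, and square strengths vs delta for r = 0.12
for delta in np.arange(0.08, 0.89, 0.08):
    print(round(delta, 2),
          [round(strength(q, 1, 0.12, delta), 3) for q in (2, 3, 4)])
\end{verbatim}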
In Fig.~\ref{fig_comparison} we compare $S_{sc}$ (lines)
with the experimental strengths~$y$ (symbols, rescaled by a common factor) for
different values of $\delta$.
The dependence of $S_{sc}$ on~$\delta$ is due to the factor~$w$. Overall,
the orbit strengths~$y$ approximately follow
the semiclassical prediction~$S_{sc}$ for $\delta>0.3$.
The deviations can be understood qualitatively as departures from the
semiclassical limit: (1) the finite
wavelengths imply a spatial uncertainty of
the order of the typical width of the peaks in the length
spectrum; (2) the Fourier transform of
a finite spectral range generates fluctuations in the
length spectra (of the
order of $10\,$\% of the diameter peak height, as seen for $x<0.2\,$m in
Fig.~\ref{fig_lsp}).
Nevertheless, we find that the quantum behavior resembles the
classical behavior in the sense that the data support the use of the weighting factors $w$ in the semiclassical
strengths in Eq.~(\ref{eq.ssc}).
\section{Conclusions}\label{sec.V}
We have demonstrated that MUPOs are prevalent and that
they must be accounted for in billiard experiments, which is a new
paradigm that advances previous conclusions drawn for specific
systems \cite{stoeckmann,nossos,stadium}. In particular, MUPOs
have not been previously observed in annular billiards, despite
many theoretical~\cite{saito,hentschel,theoretic} and
experimental~\cite{exp} studies, including detailed catalogs of
periodic orbits~\cite{gouesbet}. We have shown that annular and general
circular-like
billiards typically have an infinite number of different families
of MUPOs in the chaotic component close to the border of the
whispering gallery. This should be contrasted with the case of billiards with
parallel walls such as Stadium and Sinai billiards, where only a finite number
of families of MUPOs exists.
The above mentioned results can be immediately extended to
other chaotic billiards defined by local perturbations of
integrable systems and are expected to find applications in both
classical and quantum studies. Classically, general arguments on marginal
instability can be used to show that the resulting stickiness of
chaotic trajectories to MUPOs generates a universal power law
$p(t)\sim t^{-2}$ for
the survival probability of nearby particles~\cite{nossos2}. This scaling
is expected to hold for long times, while fluctuations
(nonperiodic echoes) occur for short times~\cite{nossos,thomas}.
Studies in the quantum regime have shown that orbits with marginal stability
are robust to small perturbations~\cite{robust} and give rise to different
transport phenomena~\cite{transport}. Recent
theoretical studies and microwave experiments on chaos assisted
tunneling in the annular billiard have demonstrated a pronounced effect on the
tunneling of the so called ``beach region'' between the whispering gallery and
the chaotic region~\cite{theoretic,exp}.
Different mechanisms of dynamical tunneling are currently under
investigation~\cite{schlagheck}, and special attention is being devoted to
mushroom billiards~\cite{barnett,baecker}. Our results have fully characterized the
dynamics in the ``beach region'' of annular and mushroom billiards in terms
of marginal unstable orbits. The
formalization of their contribution to dynamical tunneling and a
comparison with the existing numerical and experimental results are
interesting open questions.
\acknowledgements
We thank M. Brack, B. Dietz, and T. H. Seligman for stimulating discussions and B. Lindner for the careful reading of the manuscript.
This work was supported by DFG (SFB 634), Studienstiftung des Deutschen Volkes, and CAPES (Brazil).